diff --git a/camel-activemq.md b/camel-activemq.md
new file mode 100644
index 0000000000000000000000000000000000000000..f8616ad7e32307af8c412a32824028196ca51f51
--- /dev/null
+++ b/camel-activemq.md
@@ -0,0 +1,276 @@
+# ActiveMQ
+
+**Since Camel 1.0**
+
+**Both producer and consumer are supported**
+
+The ActiveMQ component is an extension to the JMS component and has been
+pre-configured for using Apache ActiveMQ 5.x (not Artemis). Users of
+Apache ActiveMQ Artemis should use the JMS component.
+
+The camel-activemq component is intended for ActiveMQ 5.x classic
+brokers. If you use ActiveMQ 6.x brokers, use the camel-activemq6
+component instead.
+
+**More documentation**
+
+See the JMS component for more documentation and examples.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-activemq</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    activemq:[queue:|topic:]destinationName[?options]
+
+Where `destinationName` is a JMS queue or topic name. By default, the
+`destinationName` is interpreted as a queue name. For example, to
+connect to the queue `foo`, use:
+
+    activemq:foo
+
+# Examples
+
+You need to provide a connectionFactory to the ActiveMQ component for
+the following examples to work.
+
+## Producer Example
+
+    from("timer:mytimer?period=5000")
+        .setBody(constant("HELLO from Camel!"))
+        .to("activemq:queue:HELLO.WORLD");
+
+## Consumer Example
+
+    from("activemq:queue:HELLO.WORLD")
+        .log("Received a message - ${body}");
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|brokerURL|Sets the broker URL to use to connect to ActiveMQ. If none is configured, then localhost:61616 is used by default (this can, however, be overridden by configuration from environment variables).||string|
+|clientId|Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance.
It is typically only required for durable topic subscriptions with JMS 1.1.||string|
+|connectionFactory|The connection factory to use. A connection factory must be configured either on the component or endpoint.||object|
+|disableReplyTo|Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route messages from one system to another.|false|boolean|
+|durableSubscriptionName|The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well.||string|
+|embedded|Use an embedded in-memory (non-persistent) ActiveMQ broker for development and testing purposes. You must have the activemq-broker JAR on the classpath.|false|boolean|
+|jmsMessageType|Allows you to force the use of a specific jakarta.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel determines which JMS message type to use from the In body type. This option allows you to specify it.||object|
+|replyTo|Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer).||string|
+|testConnectionOnStartup|Specifies whether to test the connection on startup. This ensures that when Camel starts, all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted, then Camel throws an exception on startup. This ensures that Camel is not started with failed connections.
The JMS producers are tested as well.|false|boolean|
+|usePooledConnection|Enables or disables whether a PooledConnectionFactory will be used, so that when messages are sent to ActiveMQ from outside a message consuming thread, pooling will be used rather than the default behaviour of the Spring JmsTemplate, which creates a new connection, session, and producer for each message and then closes them all down again. The default value is true.|true|boolean|
+|useSingleConnection|Enables or disables whether a Spring SingleConnectionFactory will be used, so that when messages are sent to ActiveMQ from outside a message consuming thread, a single connection will be reused rather than the default behaviour of the Spring JmsTemplate, which creates a new connection, session, and producer for each message and then closes them all down again. The default value is false, and a pooled connection is used by default.|false|boolean|
+|acknowledgementModeName|The JMS acknowledgement name, which is one of: SESSION\_TRANSACTED, CLIENT\_ACKNOWLEDGE, AUTO\_ACKNOWLEDGE, DUPS\_OK\_ACKNOWLEDGE|AUTO\_ACKNOWLEDGE|string|
+|artemisConsumerPriority|Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only go to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance, because it does not meet the criteria of any selectors associated with the consumer).||integer|
+|asyncConsumer|Whether the JmsConsumer processes the Exchange asynchronously.
If enabled, then the JmsConsumer may pick up the next message from the JMS queue while the previous message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may not be processed 100% strictly in order. If disabled (the default), then the Exchange is fully processed before the JmsConsumer picks up the next message from the JMS queue. Note that if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as the transaction must be executed synchronously (Camel 3.0 may support async transactions).|false|boolean|
+|autoStartup|Specifies whether the consumer container should auto-startup.|true|boolean|
+|cacheLevel|Sets the cache level by ID for the underlying JMS resources. See the cacheLevelName option for more details.||integer|
+|cacheLevelName|Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE\_AUTO, CACHE\_CONNECTION, CACHE\_CONSUMER, CACHE\_NONE, and CACHE\_SESSION. The default setting is CACHE\_AUTO. See the Spring documentation and Transactions Cache Levels for more information.|CACHE\_AUTO|string|
+|concurrentConsumers|Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS, the option replyToConcurrentConsumers is used to control the number of concurrent consumers on the reply message listener.|1|integer|
+|maxConcurrentConsumers|Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads.
When doing request/reply over JMS, the option replyToMaxConcurrentConsumers is used to control the number of concurrent consumers on the reply message listener.||integer|
+|replyToDeliveryPersistent|Specifies whether to use persistent delivery by default for replies.|true|boolean|
+|selector|Sets the JMS selector to use.||string|
+|subscriptionDurable|Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as a subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well.|false|boolean|
+|subscriptionName|Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0).||string|
+|subscriptionShared|Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as a subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well.
Requires a JMS 2.0 compatible message broker.|false|boolean|
+|acceptMessagesWhileStopping|Specifies whether the consumer accepts messages while it is stopping. You may consider enabling this option if you start and stop JMS routes at runtime while there are still messages enqueued on the queue. If this option is false and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved to a dead letter queue on the JMS broker. To avoid this, it is recommended to enable this option.|false|boolean|
+|allowReplyManagerQuickStop|Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allows the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers, but to enable it for reply managers you must enable this flag.|false|boolean|
+|consumerType|The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use.|Default|object|
+|defaultTaskExecutorType|Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints.
Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached thread-pool-like). If not set, it defaults to the previous behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread churn in elastic configurations with dynamically increasing and decreasing concurrent consumers.||object|
+|eagerLoadingOfProperties|Enables eager loading of JMS properties and payload as soon as a message is loaded, which is generally inefficient as the JMS properties may not be required, but can sometimes catch issues early with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody.|false|boolean|
+|eagerPoisonBody|If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison is already stored as an exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties.|Poison JMS message due to ${exception.message}|string|
+|exposeListenerSession|Specifies whether the listener session should be exposed when consuming messages.|false|boolean|
+|replyToConsumerType|The consumer type of the reply consumer (when doing request/reply), which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer.
When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use.|Default|object|
+|replyToSameDestinationAllowed|Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop of consuming and sending back the same message to itself.|false|boolean|
+|taskExecutor|Allows you to specify a custom task executor for consuming messages.||object|
+|deliveryDelay|Sets the delivery delay to use for send calls for JMS. This option requires a JMS 2.0 compliant broker.|-1|integer|
+|deliveryMode|Specifies the delivery mode to be used. Possible values are those defined by jakarta.jms.DeliveryMode. NON\_PERSISTENT = 1 and PERSISTENT = 2.||integer|
+|deliveryPersistent|Specifies whether persistent delivery is used by default.|true|boolean|
+|explicitQosEnabled|Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers.|false|boolean|
+|formatDateHeadersToIso8601|Sets whether JMS date properties should be formatted according to the ISO 8601 standard.|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazily (on the first message). Starting lazily allows CamelContext and routes to start up in situations where a producer may otherwise fail during startup and cause the route to fail to start. By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time.|false|boolean|
+|preserveMessageQos|Set to true if you want to send messages using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered: JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to using the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header.|false|boolean|
+|priority|Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect.|4|integer|
+|replyToConcurrentConsumers|Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads.|1|integer|
+|replyToMaxConcurrentConsumers|Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads.||integer|
+|replyToOnTimeoutMaxConcurrentConsumers|Specifies the maximum number of concurrent consumers for continuing routing when a timeout occurred when using request/reply over JMS.|1|integer|
+|replyToOverride|Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo.
It is useful if you want to forward the message to a remote queue and receive the reply message from the ReplyTo destination.||string|
+|replyToType|Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However, if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See the Camel JMS documentation for more details, especially the notes about the implications of running in a clustered environment, and the fact that Shared reply queues have lower performance than the alternatives Temporary and Exclusive.||object|
+|requestTimeout|The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per-message individual timeout values. See also the requestTimeoutCheckerInterval option.|20000|duration|
+|timeToLive|When sending messages, specifies the time-to-live of the message (in milliseconds).|-1|integer|
+|allowAdditionalHeaders|This option is used to allow additional headers which may have values that are invalid according to the JMS specification. For example, some message systems, such as WMQ, do this with header names using the prefix JMS\_IBM\_MQMD\_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use \* as suffix for wildcard matching.||string|
+|allowNullBody|Whether to allow sending messages with no body. If this option is false and the message body is null, then a JMSException is thrown.|true|boolean|
+|alwaysCopyMessage|If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending.
Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true if a replyToDestinationSelectorName is set).|false|boolean|
+|correlationProperty|When using the InOut exchange pattern, use this JMS property instead of the JMSCorrelationID JMS property to correlate messages. If set, messages will be correlated solely on the value of this property; the JMSCorrelationID property will be ignored and not set by Camel.||string|
+|disableTimeToLive|Use this option to force disabling time to live. For example, when you do request/reply over JMS, Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to achieve. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details.|false|boolean|
+|forceSendOriginalMessage|When using mapJmsMessage=false, Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received.|false|boolean|
+|includeSentJMSMessageID|Only applicable when sending to a JMS destination using InOnly (e.g., fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination.|false|boolean|
+|replyToCacheLevelName|Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE\_CONSUMER for exclusive or shared w/ replyToSelectorName.
And CACHE\_SESSION for shared without replyToSelectorName. Some JMS brokers, such as IBM WebSphere, may require setting replyToCacheLevelName=CACHE\_NONE to work. Note: If using temporary queues, then CACHE\_NONE is not allowed, and you must use a higher value such as CACHE\_CONSUMER or CACHE\_SESSION.||string|
+|replyToDestinationSelectorName|Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue).||string|
+|streamMessageTypeEnabled|Sets whether the StreamMessage type is enabled or not. Message payloads of streaming kind, such as files, InputStream, etc., will either be sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default, BytesMessage is used, which forces the entire message payload to be read into memory. By enabling this option, the message payload is read into memory in chunks, and each chunk is then written to the StreamMessage until there is no more data.|false|boolean|
+|allowAutoWiredConnectionFactory|Whether to auto-discover ConnectionFactory from the registry, if no connection factory has been configured. If only one instance of ConnectionFactory is found, then it will be used. This is enabled by default.|true|boolean|
+|allowAutoWiredDestinationResolver|Whether to auto-discover DestinationResolver from the registry, if no destination resolver has been configured. If only one instance of DestinationResolver is found, then it will be used. This is enabled by default.|true|boolean|
+|allowSerializedHeaders|Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log them at WARN level.|false|boolean|
+|artemisStreamingEnabled|Whether to optimize for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types.
This option must only be enabled if Apache Artemis is being used.|false|boolean|
+|asyncStartListener|Whether to start the JmsConsumer message listener asynchronously when starting a route. For example, if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failing over. This will cause Camel to block while starting routes. By setting this option to true, you will let routes start up, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; you can then restart the route to retry.|false|boolean|
+|asyncStopListener|Whether to stop the JmsConsumer message listener asynchronously when stopping a route.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS clients, etc.|true|boolean|
+|configuration|To use a shared JMS configuration.||object|
+|destinationResolver|A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to look up the real destination in a JNDI registry).||object|
+|errorHandler|Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default, these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure the logging level and whether stack traces should be logged using the errorHandlerLoggingLevel and errorHandlerLogStackTrace options.
This makes it much easier to configure than having to code a custom errorHandler.||object|
+|exceptionListener|Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions.||object|
+|idleConsumerLimit|Specify the limit for the number of consumers that are allowed to be idle at any given time.|1|integer|
+|idleTaskExecutionLimit|Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional documentation available from Spring.|1|integer|
+|includeAllJMSXProperties|Whether to include all JMSX prefixed properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, JMSXUserID, etc. Note: If you are using a custom headerFilterStrategy, then this option does not apply.|false|boolean|
+|includeCorrelationIDAsBytes|Whether the JMS consumer should include JMSCorrelationIDAsBytes as a header on the Camel Message.|true|boolean|
+|jmsKeyFormatStrategy|Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation.||object|
+|mapJmsMessage|Specifies whether Camel should auto map the received JMS message to a suitable payload type, such as jakarta.jms.TextMessage to a String, etc.|true|boolean|
+|maxMessagesPerTask|The number of messages per task. -1 is unlimited.
If you use a range for concurrent consumers (e.g. min < max), then this option can be used to set a value (e.g. 100) to control how fast the consumers will shrink when less work is required.|-1|integer|
+|messageConverter|To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control of how to map to/from a jakarta.jms.Message.||object|
+|messageCreatedStrategy|To use the given MessageCreatedStrategy which is invoked when Camel creates new instances of jakarta.jms.Message objects when Camel is sending a JMS message.||object|
+|messageIdEnabled|When sending, specifies whether message IDs should be added. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value.|true|boolean|
+|messageListenerContainerFactory|Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom.||object|
+|messageTimestampEnabled|Specifies whether timestamps should be enabled by default on sending messages. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint, the timestamp must be set to its normal value.|true|boolean|
+|pubSubNoLocal|Specifies whether to inhibit the delivery of messages published by its own connection.|false|boolean|
+|queueBrowseStrategy|To use a custom QueueBrowseStrategy when browsing queues.||object|
+|receiveTimeout|The timeout for receiving messages (in milliseconds).|1000|duration|
+|recoveryInterval|Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds.
The default is 5000 ms, that is, 5 seconds.|5000|duration|
+|requestTimeoutCheckerInterval|Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default, Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval to check more frequently. The timeout is determined by the option requestTimeout.|1000|duration|
+|serviceLocationEnabled|Whether to detect the network address location of the JMS broker on startup. This information is gathered via reflection on the ConnectionFactory, and is vendor specific. This option can be used to turn this off.|true|boolean|
+|synchronous|Sets whether synchronous processing should be strictly used.|false|boolean|
+|temporaryQueueResolver|A pluggable TemporaryQueueResolver that allows you to use your own resolver for creating temporary queues (some messaging systems have special requirements for creating temporary queues).||object|
+|transferException|If enabled and you are using Request Reply messaging (InOut), and an Exchange failed on the consumer side, then the caused Exception will be sent back in the response as a jakarta.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer.
Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers!|false|boolean|
+|transferExchange|You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log them at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payload is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions!|false|boolean|
+|trustAllPackages|Define if all Java packages are trusted or not (for Java object JMS message types). Notice that it is not recommended practice to send Java serialized objects over the network.
Setting this to true can expose security risks, so use this with care.|false|boolean|
+|useMessageIDAsCorrelationID|Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages.|false|boolean|
+|waitForProvisionCorrelationToBeUpdatedCounter|Number of times to wait for the provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled.|50|integer|
+|waitForProvisionCorrelationToBeUpdatedThreadSleepingTime|Interval in millis to sleep each time while waiting for the provisional correlation id to be updated.|100|duration|
+|waitForTemporaryReplyToToBeUpdatedCounter|Number of times to wait for the temporary replyTo queue to be created and ready when doing request/reply over JMS.|200|integer|
+|waitForTemporaryReplyToToBeUpdatedThreadSleepingTime|Interval in millis to sleep each time while waiting for the temporary replyTo queue to be ready.|100|duration|
+|headerFilterStrategy|To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter headers to and from the Camel message.||object|
+|errorHandlerLoggingLevel|Allows configuring the default errorHandler logging level for logging uncaught exceptions.|WARN|object|
+|errorHandlerLogStackTrace|Allows controlling whether stack traces should be logged or not, by the default errorHandler.|true|boolean|
+|password|Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory.||string|
+|username|Username to use with the ConnectionFactory.
You can also configure username/password directly on the ConnectionFactory.||string| +|transacted|Specifies whether to use transacted mode.|false|boolean| +|transactedInOut|Specifies whether InOut operations (request reply) default to using transacted mode. If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction.|false|boolean| +|lazyCreateTransactionManager|If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true.|true|boolean| +|transactionManager|The Spring transaction manager to use.||object| +|transactionName|The name of the transaction to use.||string| +|transactionTimeout|The timeout value of the transaction (in seconds), if using transacted mode.|-1|integer| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|destinationType|The kind of destination to use|queue|string| +|destinationName|Name of the queue or topic to use as destination||string| +|clientId|Sets the JMS client ID to use. 
Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions with JMS 1.1.||string| +|connectionFactory|The connection factory to be used. A connection factory must be configured either on the component or endpoint.||object| +|disableReplyTo|Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route messages from one system to another.|false|boolean| +|durableSubscriptionName|The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well.||string| +|jmsMessageType|Allows you to force the use of a specific jakarta.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it.||object| +|replyTo|Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer).||string| +|testConnectionOnStartup|Specifies whether to test the connection on startup. This ensures that, when Camel starts, all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. 
The JMS producers are tested as well.|false|boolean| +|acknowledgementModeName|The JMS acknowledgement name, which is one of: SESSION\_TRANSACTED, CLIENT\_ACKNOWLEDGE, AUTO\_ACKNOWLEDGE, DUPS\_OK\_ACKNOWLEDGE|AUTO\_ACKNOWLEDGE|string| +|artemisConsumerPriority|Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only go to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer).||integer| +|asyncConsumer|Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pick up the next message from the JMS queue, while the previous message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pick up the next message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as the transaction must be executed synchronously (Camel 3.0 may support async transactions).|false|boolean| +|autoStartup|Specifies whether the consumer container should auto-startup.|true|boolean| +|cacheLevel|Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details.||integer| +|cacheLevelName|Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE\_AUTO, CACHE\_CONNECTION, CACHE\_CONSUMER, CACHE\_NONE, and CACHE\_SESSION. 
The default setting is CACHE\_AUTO. See the Spring documentation and Transactions Cache Levels for more information.|CACHE\_AUTO|string| +|concurrentConsumers|Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener.|1|integer| +|maxConcurrentConsumers|Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener.||integer| +|replyToDeliveryPersistent|Specifies whether to use persistent delivery by default for replies.|true|boolean| +|selector|Sets the JMS selector to use||string| +|subscriptionDurable|Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well.|false|boolean| +|subscriptionName|Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id. Default is the class name of the specified message listener. 
Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0).||string| +|subscriptionShared|Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker.|false|boolean| +|acceptMessagesWhileStopping|Specifies whether the consumer accepts messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved to a dead letter queue on the JMS broker. To avoid this, it is recommended to enable this option.|false|boolean| +|allowReplyManagerQuickStop|Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allows the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. 
This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag.|false|boolean| +|consumerType|The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use.|Default|object| +|defaultTaskExecutorType|Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached thread-pool-like). If not set, it defaults to the previous behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers.||object| +|destinationOptions|Destination Options are a way to provide extended configuration options to a JMS consumer without having to extend the JMS API. The options are encoded using URL query syntax in the destination name that the consumer is created on. See more details at https://activemq.apache.org/destination-options.||object| +|eagerLoadingOfProperties|Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. 
See also the option eagerPoisonBody.|false|boolean| +|eagerPoisonBody|If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison is already stored as an exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties.|Poison JMS message due to ${exception.message}|string| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|exposeListenerSession|Specifies whether the listener session should be exposed when consuming messages.|false|boolean| +|replyToConsumerType|The consumer type of the reply consumer (when doing request/reply), which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use.|Default|object| +|replyToSameDestinationAllowed|Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. 
This prevents an endless loop by consuming and sending back the same message to itself.|false|boolean| +|taskExecutor|Allows you to specify a custom task executor for consuming messages.||object| +|deliveryDelay|Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker.|-1|integer| +|deliveryMode|Specifies the delivery mode to be used. Possible values are those defined by jakarta.jms.DeliveryMode. NON\_PERSISTENT = 1 and PERSISTENT = 2.||integer| +|deliveryPersistent|Specifies whether persistent delivery is used by default.|true|boolean| +|explicitQosEnabled|Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers.|false|boolean| +|formatDateHeadersToIso8601|Sets whether JMS date properties should be formatted according to the ISO 8601 standard.|false|boolean| +|preserveMessageQos|Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header.|false|boolean| +|priority|Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). 
The explicitQosEnabled option must also be enabled in order for this option to have any effect.|4|integer| +|replyToConcurrentConsumers|Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads.|1|integer| +|replyToMaxConcurrentConsumers|Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads.||integer| +|replyToOnTimeoutMaxConcurrentConsumers|Specifies the maximum number of concurrent consumers to continue routing when a timeout occurs when using request/reply over JMS.|1|integer| +|replyToOverride|Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination.||string| +|replyToType|Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues have lower performance than the alternatives Temporary and Exclusive.||object| +|requestTimeout|The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. 
See also the requestTimeoutCheckerInterval option.|20000|duration| +|timeToLive|When sending messages, specifies the time-to-live of the message (in milliseconds).|-1|integer| +|allowAdditionalHeaders|This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example, some message systems, such as WMQ, do this with header names using prefix JMS\_IBM\_MQMD\_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use \* as a suffix for wildcard matching.||string| +|allowNullBody|Whether to allow sending messages with no body. If this option is false and the message body is null, then a JMSException is thrown.|true|boolean| +|alwaysCopyMessage|If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set).|false|boolean| +|correlationProperty|When using InOut exchange pattern use this JMS property instead of JMSCorrelationID JMS property to correlate messages. If set, messages will be correlated solely on the value of this property; the JMSCorrelationID property will be ignored and not set by Camel.||string| +|disableTimeToLive|Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to achieve. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. 
See below in section About time to live for more details.|false|boolean| +|forceSendOriginalMessage|When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received.|false|boolean| +|includeSentJMSMessageID|Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|replyToCacheLevelName|Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE\_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE\_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE\_NONE to work. 
Note: If using temporary queues then CACHE\_NONE is not allowed, and you must use a higher value such as CACHE\_CONSUMER or CACHE\_SESSION.||string| +|replyToDestinationSelectorName|Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue).||string| +|streamMessageTypeEnabled|Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either be sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until there is no more data.|false|boolean| +|allowSerializedHeaders|Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level.|false|boolean| +|artemisStreamingEnabled|Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used.|false|boolean| +|asyncStartListener|Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or fail-over. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. 
If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry.|false|boolean| +|asyncStopListener|Whether to stop the JmsConsumer message listener asynchronously, when stopping a route.|false|boolean| +|destinationResolver|A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry).||object| +|errorHandler|Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler.||object| +|exceptionListener|Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions.||object| +|headerFilterStrategy|To use a custom HeaderFilterStrategy to filter header to and from Camel message.||object| +|idleConsumerLimit|Specify the limit for the number of consumers that are allowed to be idle at any given time.|1|integer| +|idleTaskExecutionLimit|Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring.|1|integer| +|includeAllJMSXProperties|Whether to include all JMSX prefixed properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. 
Note: If you are using a custom headerFilterStrategy then this option does not apply.|false|boolean| +|jmsKeyFormatStrategy|Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation.||object| +|mapJmsMessage|Specifies whether Camel should auto map the received JMS message to a suited payload type, such as jakarta.jms.TextMessage to a String etc.|true|boolean| +|maxMessagesPerTask|The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required.|-1|integer| +|messageConverter|To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control of how to map to/from a jakarta.jms.Message.||object| +|messageCreatedStrategy|To use the given MessageCreatedStrategy which is invoked when Camel creates new instances of jakarta.jms.Message objects when Camel is sending a JMS message.||object| +|messageIdEnabled|When sending, specifies whether message IDs should be added. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value.|true|boolean| +|messageListenerContainerFactory|Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. 
Setting this will automatically set consumerType to Custom.||object| +|messageTimestampEnabled|Specifies whether timestamps should be enabled by default on sending messages. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint, the timestamp must be set to its normal value.|true|boolean| +|pubSubNoLocal|Specifies whether to inhibit the delivery of messages published by its own connection.|false|boolean| +|receiveTimeout|The timeout for receiving messages (in milliseconds).|1000|duration| +|recoveryInterval|Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds.|5000|duration| +|requestTimeoutCheckerInterval|Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout.|1000|duration| +|synchronous|Sets whether synchronous processing should be strictly used.|false|boolean| +|temporaryQueueResolver|A pluggable TemporaryQueueResolver that allows you to use your own resolver for creating temporary queues (some messaging systems have special requirements for creating temporary queues).||object| +|transferException|If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be sent back in response as a jakarta.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. 
The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers!|false|boolean| +|transferExchange|You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payload is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions!|false|boolean| +|useMessageIDAsCorrelationID|Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages.|false|boolean| +|waitForProvisionCorrelationToBeUpdatedCounter|Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled.|50|integer| +|waitForProvisionCorrelationToBeUpdatedThreadSleepingTime|Interval in millis to sleep each time while waiting for provisional correlation id to be updated.|100|duration| +|waitForTemporaryReplyToToBeUpdatedCounter|Number of times to wait for temporary replyTo queue to be created and ready when doing request/reply over JMS.|200|integer| +|waitForTemporaryReplyToToBeUpdatedThreadSleepingTime|Interval in millis to sleep each time while 
waiting for temporary replyTo queue to be ready.|100|duration| +|errorHandlerLoggingLevel|Allows to configure the default errorHandler logging level for logging uncaught exceptions.|WARN|object| +|errorHandlerLogStackTrace|Allows to control whether stack-traces should be logged or not, by the default errorHandler.|true|boolean| +|password|Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory.||string| +|username|Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory.||string| +|transacted|Specifies whether to use transacted mode.|false|boolean| +|transactedInOut|Specifies whether InOut operations (request reply) default to using transacted mode. If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. 
This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction.|false|boolean| +|lazyCreateTransactionManager|If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true.|true|boolean| +|transactionManager|The Spring transaction manager to use.||object| +|transactionName|The name of the transaction to use.||string| +|transactionTimeout|The timeout value of the transaction (in seconds), if using transacted mode.|-1|integer| diff --git a/camel-activemq6.md b/camel-activemq6.md new file mode 100644 index 0000000000000000000000000000000000000000..d1b75e0b2eda292a21796b499921c643da43d635 --- /dev/null +++ b/camel-activemq6.md @@ -0,0 +1,276 @@ +# Activemq6 + +**Since Camel 4.7** + +**Both producer and consumer are supported** + +The ActiveMQ component is an extension to the JMS component and has been +pre-configured for using Apache ActiveMQ 6.x (not Artemis). Users of +Apache ActiveMQ Artemis should use the JMS component. + +The camel-activemq6 component is best intended for ActiveMQ 6.x brokers. +If you use ActiveMQ 5.x brokers, then use the camel-activemq +component instead. + +**More documentation** + +See the JMS component for more documentation and examples. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + <dependency> + <groupId>org.apache.camel</groupId> + <artifactId>camel-activemq6</artifactId> + <version>x.x.x</version> + </dependency> + +# URI format + + activemq:[queue:|topic:]destinationName[?options] + +Where `destinationName` is a JMS queue or topic name. By default, the +`destinationName` is interpreted as a queue name. For example, to +connect to the queue `foo`, use: + + activemq:foo + +# Examples + +You’ll need to provide a connectionFactory to the ActiveMQ component +for the following examples to work. 
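A minimal sketch of wiring one in programmatically is shown below. It assumes camel-activemq (or camel-activemq6) and the ActiveMQ client JAR are on the classpath; the broker URL is an example value, and the class names follow the camel-activemq component (adjust the package for camel-activemq6):

```java
// Sketch only: registering the ActiveMQ component with a connection
// factory so that "activemq:..." endpoints in the routes below resolve.
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.camel.CamelContext;
import org.apache.camel.component.activemq.ActiveMQComponent;
import org.apache.camel.impl.DefaultCamelContext;

public class ActiveMQSetup {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        ActiveMQComponent activemq = new ActiveMQComponent();
        // broker URL is an example; point this at your own broker
        activemq.setConnectionFactory(
                new ActiveMQConnectionFactory("tcp://localhost:61616"));
        context.addComponent("activemq", activemq);
        // routes using activemq: endpoints can now be added and started
    }
}
```

Alternatively, register the connection factory as a bean in your registry (for example, in Spring) and the component will pick it up.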
+ +## Producer Example + + from("timer:mytimer?period=5000") + .setBody(constant("HELLO from Camel!")) + .to("activemq:queue:HELLO.WORLD"); + +## Consumer Example + + from("activemq:queue:HELLO.WORLD") + .log("Received a message - ${body}"); + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|brokerURL|Sets the broker URL to use to connect to ActiveMQ. If none configured then localhost:61616 is used by default (however can be overridden by configuration from environment variables)||string| +|clientId|Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions with JMS 1.1.||string| +|connectionFactory|The connection factory to be use. A connection factory must be configured either on the component or endpoint.||object| +|disableReplyTo|Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another.|false|boolean| +|durableSubscriptionName|The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well.||string| +|embedded|Use an embedded in-memory (non-persistent) ActiveMQ broker for development and testing purposes. You must have activemq-broker JAR on the classpath.|false|boolean| +|jmsMessageType|Allows you to force the use of a specific jakarta.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. 
By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it.||object| +|replyTo|Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer).||string| +|testConnectionOnStartup|Specifies whether to test the connection on startup. This ensures that when Camel starts that all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers is tested as well.|false|boolean| +|usePooledConnection|Enables or disables whether a PooledConnectionFactory will be used so that when messages are sent to ActiveMQ from outside a message consuming thread, pooling will be used rather than the default with the Spring JmsTemplate which will create a new connection, session, producer for each message then close them all down again. The default value is true.|true|boolean| +|useSingleConnection|Enables or disables whether a Spring SingleConnectionFactory will be used so that when messages are sent to ActiveMQ from outside a message consuming thread, pooling will be used rather than the default with the Spring JmsTemplate which will create a new connection, session, producer for each message then close them all down again. The default value is false and a pooled connection is used by default.|false|boolean| +|acknowledgementModeName|The JMS acknowledgement name, which is one of: SESSION\_TRANSACTED, CLIENT\_ACKNOWLEDGE, AUTO\_ACKNOWLEDGE, DUPS\_OK\_ACKNOWLEDGE|AUTO\_ACKNOWLEDGE|string| +|artemisConsumerPriority|Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. 
When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only going to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer).||integer| +|asyncConsumer|Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the next message from the JMS queue, while the previous message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pickup the next message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions).|false|boolean| +|autoStartup|Specifies whether the consumer container should auto-startup.|true|boolean| +|cacheLevel|Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details.||integer| +|cacheLevelName|Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE\_AUTO, CACHE\_CONNECTION, CACHE\_CONSUMER, CACHE\_NONE, and CACHE\_SESSION. The default setting is CACHE\_AUTO. See the Spring documentation and Transactions Cache Levels for more information.|CACHE\_AUTO|string| +|concurrentConsumers|Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 
When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener.|1|integer| +|maxConcurrentConsumers|Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener.||integer| +|replyToDeliveryPersistent|Specifies whether to use persistent delivery by default for replies.|true|boolean| +|selector|Sets the JMS selector to use||string| +|subscriptionDurable|Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well.|false|boolean| +|subscriptionName|Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0).||string| +|subscriptionShared|Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. 
Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker.|false|boolean| +|acceptMessagesWhileStopping|Specifies whether the consumer accept messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved at a dead letter queue on the JMS broker. To avoid this its recommended to enable this option.|false|boolean| +|allowReplyManagerQuickStop|Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag.|false|boolean| +|consumerType|The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. 
When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use.|Default|object| +|defaultTaskExecutorType|Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached thread-pool-like). If not set, it defaults to the previous behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers.||object| +|eagerLoadingOfProperties|Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody.|false|boolean| +|eagerPoisonBody|If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison are already stored as exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties.|Poison JMS message due to ${exception.message}|string| +|exposeListenerSession|Specifies whether the listener session should be exposed when consuming messages.|false|boolean| +|replyToConsumerType|The consumer type of the reply consumer (when doing request/reply), which can be one of: Simple, Default, or Custom. 
The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use.|Default|object| +|replyToSameDestinationAllowed|Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself.|false|boolean| +|taskExecutor|Allows you to specify a custom task executor for consuming messages.||object| +|deliveryDelay|Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker.|-1|integer| +|deliveryMode|Specifies the delivery mode to be used. Possible values are those defined by jakarta.jms.DeliveryMode. NON\_PERSISTENT = 1 and PERSISTENT = 2.||integer| +|deliveryPersistent|Specifies whether persistent delivery is used by default.|true|boolean| +|explicitQosEnabled|Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers.|false|boolean| +|formatDateHeadersToIso8601|Sets whether JMS date properties should be formatted according to the ISO 8601 standard.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|preserveMessageQos|Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header.|false|boolean| +|priority|Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect.|4|integer| +|replyToConcurrentConsumers|Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads.|1|integer| +|replyToMaxConcurrentConsumers|Specifies the maximum number of concurrent consumers when using request/reply over JMS. 
See also the maxMessagesPerTask option to control dynamic scaling up/down of threads.||integer| +|replyToOnTimeoutMaxConcurrentConsumers|Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS.|1|integer| +|replyToOverride|Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination.||string| +|replyToType|Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues has lower performance than its alternatives Temporary and Exclusive.||object| +|requestTimeout|The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option.|20000|duration| +|timeToLive|When sending messages, specifies the time-to-live of the message (in milliseconds).|-1|integer| +|allowAdditionalHeaders|This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example, some message systems, such as WMQ, do this with header names using prefix JMS\_IBM\_MQMD\_ containing values with byte array or other invalid types. 
You can specify multiple header names separated by comma, and use as suffix for wildcard matching.||string| +|allowNullBody|Whether to allow sending messages with no body. If this option is false and the message body is null, then an JMSException is thrown.|true|boolean| +|alwaysCopyMessage|If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set)|false|boolean| +|correlationProperty|When using InOut exchange pattern use this JMS property instead of JMSCorrelationID JMS property to correlate messages. If set messages will be correlated solely on the value of this property JMSCorrelationID property will be ignored and not set by Camel.||string| +|disableTimeToLive|Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to archive. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details.|false|boolean| +|forceSendOriginalMessage|When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received.|false|boolean| +|includeSentJMSMessageID|Only applicable when sending to JMS destination using InOnly (eg fire and forget). 
Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination.|false|boolean| +|replyToCacheLevelName|Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE\_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE\_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE\_NONE to work. Note: If using temporary queues then CACHE\_NONE is not allowed, and you must use a higher value such as CACHE\_CONSUMER or CACHE\_SESSION.||string| +|replyToDestinationSelectorName|Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue).||string| +|streamMessageTypeEnabled|Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either by sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until no more data.|false|boolean| +|allowAutoWiredConnectionFactory|Whether to auto-discover ConnectionFactory from the registry, if no connection factory has been configured. If only one instance of ConnectionFactory is found then it will be used. This is enabled by default.|true|boolean| +|allowAutoWiredDestinationResolver|Whether to auto-discover DestinationResolver from the registry, if no destination resolver has been configured. If only one instance of DestinationResolver is found then it will be used. 
This is enabled by default.|true|boolean| +|allowSerializedHeaders|Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level.|false|boolean| +|artemisStreamingEnabled|Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used.|false|boolean| +|asyncStartListener|Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or fail-over. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry.|false|boolean| +|asyncStopListener|Whether to stop the JmsConsumer message listener asynchronously, when stopping a route.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|configuration|To use a shared JMS configuration||object| +|destinationResolver|A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry).||object| +|errorHandler|Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler.||object| +|exceptionListener|Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions.||object| +|idleConsumerLimit|Specify the limit for the number of consumers that are allowed to be idle at any given time.|1|integer| +|idleTaskExecutionLimit|Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring.|1|integer| +|includeAllJMSXProperties|Whether to include all JMSX prefixed properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. 
Note: If you are using a custom headerFilterStrategy then this option does not apply.|false|boolean| +|includeCorrelationIDAsBytes|Whether the JMS consumer should include JMSCorrelationIDAsBytes as a header on the Camel Message.|true|boolean| +|jmsKeyFormatStrategy|Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation.||object| +|mapJmsMessage|Specifies whether Camel should auto map the received JMS message to a suited payload type, such as jakarta.jms.TextMessage to a String etc.|true|boolean| +|maxMessagesPerTask|The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required.|-1|integer| +|messageConverter|To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a jakarta.jms.Message.||object| +|messageCreatedStrategy|To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of jakarta.jms.Message objects when Camel is sending a JMS message.||object| +|messageIdEnabled|When sending, specifies whether message IDs should be added. This is just an hint to the JMS broker. 
If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value.|true|boolean| +|messageListenerContainerFactory|Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom.||object| +|messageTimestampEnabled|Specifies whether timestamps should be enabled by default on sending messages. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint the timestamp must be set to its normal value.|true|boolean| +|pubSubNoLocal|Specifies whether to inhibit the delivery of messages published by its own connection.|false|boolean| +|queueBrowseStrategy|To use a custom QueueBrowseStrategy when browsing queues||object| +|receiveTimeout|The timeout for receiving messages (in milliseconds).|1000|duration| +|recoveryInterval|Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds.|5000|duration| +|requestTimeoutCheckerInterval|Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout.|1000|duration| +|serviceLocationEnabled|Whether to detect the network address location of the JMS broker on startup. This information is gathered via reflection on the ConnectionFactory, and is vendor specific. 
This option can be used to turn this off.|true|boolean| +|synchronous|Sets whether synchronous processing should be strictly used|false|boolean| +|temporaryQueueResolver|A pluggable TemporaryQueueResolver that allows you to use your own resolver for creating temporary queues (some messaging systems has special requirements for creating temporary queues).||object| +|transferException|If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be send back in response as a jakarta.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the received to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumer!|false|boolean| +|transferExchange|You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payloads is an Exchange and not a regular payload. 
Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions!|false|boolean| +|trustAllPackages|Define if all Java packages are trusted or not (for Java object JMS message types). Notice its not recommended practice to send Java serialized objects over network. Setting this to true can expose security risks, so use this with care.|false|boolean| +|useMessageIDAsCorrelationID|Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages.|false|boolean| +|waitForProvisionCorrelationToBeUpdatedCounter|Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled.|50|integer| +|waitForProvisionCorrelationToBeUpdatedThreadSleepingTime|Interval in millis to sleep each time while waiting for provisional correlation id to be updated.|100|duration| +|waitForTemporaryReplyToToBeUpdatedCounter|Number of times to wait for temporary replyTo queue to be created and ready when doing request/reply over JMS.|200|integer| +|waitForTemporaryReplyToToBeUpdatedThreadSleepingTime|Interval in millis to sleep each time while waiting for temporary replyTo queue to be ready.|100|duration| +|headerFilterStrategy|To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message.||object| +|errorHandlerLoggingLevel|Allows to configure the default errorHandler logging level for logging uncaught exceptions.|WARN|object| +|errorHandlerLogStackTrace|Allows to control whether stack-traces should be logged or not, by the default errorHandler.|true|boolean| +|password|Password to use with the ConnectionFactory. 
You can also configure username/password directly on the ConnectionFactory.||string| +|username|Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory.||string| +|transacted|Specifies whether to use transacted mode|false|boolean| +|transactedInOut|Specifies whether InOut operations (request reply) default to using transacted mode If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: that within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. 
This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction.|false|boolean|
+|lazyCreateTransactionManager|If true, Camel will create a JmsTransactionManager if no transactionManager has been injected when the option transacted=true.|true|boolean|
+|transactionManager|The Spring transaction manager to use.||object|
+|transactionName|The name of the transaction to use.||string|
+|transactionTimeout|The timeout value of the transaction (in seconds), if using transacted mode.|-1|integer|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|destinationType|The kind of destination to use.|queue|string|
+|destinationName|Name of the queue or topic to use as the destination.||string|
+|clientId|Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions with JMS 1.1.||string|
+|connectionFactory|The connection factory to be used. A connection factory must be configured either on the component or endpoint.||object|
+|disableReplyTo|Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route messages from one system to another.|false|boolean|
+|durableSubscriptionName|The durable subscriber name for specifying durable topic subscriptions. 
The clientId option must be configured as well.||string|
+|jmsMessageType|Allows you to force the use of a specific jakarta.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it.||object|
+|replyTo|Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer).||string|
+|testConnectionOnStartup|Specifies whether to test the connection on startup. This ensures that, when Camel starts, all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers are tested as well.|false|boolean|
+|acknowledgementModeName|The JMS acknowledgement name, which is one of: SESSION\_TRANSACTED, CLIENT\_ACKNOWLEDGE, AUTO\_ACKNOWLEDGE, DUPS\_OK\_ACKNOWLEDGE.|AUTO\_ACKNOWLEDGE|string|
+|artemisConsumerPriority|Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only go to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer).||integer|
+|asyncConsumer|Whether the JmsConsumer processes the Exchange asynchronously. 
If enabled then the JmsConsumer may pick up the next message from the JMS queue, while the previous message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (the default) then the Exchange is fully processed before the JmsConsumer will pick up the next message from the JMS queue. Note that if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as the transaction must be executed synchronously (Camel 3.0 may support async transactions).|false|boolean|
+|autoStartup|Specifies whether the consumer container should auto-startup.|true|boolean|
+|cacheLevel|Sets the cache level by ID for the underlying JMS resources. See the cacheLevelName option for more details.||integer|
+|cacheLevelName|Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE\_AUTO, CACHE\_CONNECTION, CACHE\_CONSUMER, CACHE\_NONE, and CACHE\_SESSION. The default setting is CACHE\_AUTO. See the Spring documentation and Transactions Cache Levels for more information.|CACHE\_AUTO|string|
+|concurrentConsumers|Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control the number of concurrent consumers on the reply message listener.|1|integer|
+|maxConcurrentConsumers|Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 
When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control the number of concurrent consumers on the reply message listener.||integer|
+|replyToDeliveryPersistent|Specifies whether to use persistent delivery by default for replies.|true|boolean|
+|selector|Sets the JMS selector to use.||string|
+|subscriptionDurable|Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well.|false|boolean|
+|subscriptionName|Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0).||string|
+|subscriptionShared|Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. 
Requires a JMS 2.0 compatible message broker.|false|boolean|
+|acceptMessagesWhileStopping|Specifies whether the consumer accepts messages while it is stopping. You may consider enabling this option if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved to a dead letter queue on the JMS broker. To avoid this, it is recommended to enable this option.|false|boolean|
+|allowReplyManagerQuickStop|Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allows the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers, but to enable it for reply managers you must enable this flag.|false|boolean|
+|consumerType|The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use.|Default|object|
+|defaultTaskExecutorType|Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. 
Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached thread-pool-like). If not set, it defaults to the previous behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread churn in elastic configurations with dynamically increasing and decreasing concurrent consumers.||object|
+|destinationOptions|Destination Options are a way to provide extended configuration options to a JMS consumer without having to extend the JMS API. The options are encoded using URL query syntax in the destination name that the consumer is created on. See more details at https://activemq.apache.org/destination-options.||object|
+|eagerLoadingOfProperties|Enables eager loading of JMS properties and payload as soon as a message is loaded, which generally is inefficient as the JMS properties may not be required, but sometimes this can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody.|false|boolean|
+|eagerPoisonBody|If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison is already stored as an exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties.|Poison JMS message due to ${exception.message}|string|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Note that if the option bridgeErrorHandler is enabled then this option is not in use. 
By default, the consumer will deal with exceptions that will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|exposeListenerSession|Specifies whether the listener session should be exposed when consuming messages.|false|boolean|
+|replyToConsumerType|The consumer type of the reply consumer (when doing request/reply), which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use.|Default|object|
+|replyToSameDestinationAllowed|Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself.|false|boolean|
+|taskExecutor|Allows you to specify a custom task executor for consuming messages.||object|
+|deliveryDelay|Sets the delivery delay to use for send calls for JMS. This option requires a JMS 2.0 compliant broker.|-1|integer|
+|deliveryMode|Specifies the delivery mode to be used. Possible values are those defined by jakarta.jms.DeliveryMode. NON\_PERSISTENT = 1 and PERSISTENT = 2.||integer|
+|deliveryPersistent|Specifies whether persistent delivery is used by default.|true|boolean|
+|explicitQosEnabled|Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. 
This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers.|false|boolean|
+|formatDateHeadersToIso8601|Sets whether JMS date properties should be formatted according to the ISO 8601 standard.|false|boolean|
+|preserveMessageQos|Set to true if you want to send messages using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered: JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to using the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header.|false|boolean|
+|priority|Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect.|4|integer|
+|replyToConcurrentConsumers|Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads.|1|integer|
+|replyToMaxConcurrentConsumers|Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads.||integer|
+|replyToOnTimeoutMaxConcurrentConsumers|Specifies the maximum number of concurrent consumers for continuing routing when a timeout occurred when using request/reply over JMS.|1|integer|
+|replyToOverride|Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. 
It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination.||string|
+|replyToType|Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However, if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See the Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues have lower performance than the alternatives Temporary and Exclusive.||object|
+|requestTimeout|The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option.|20000|duration|
+|timeToLive|When sending messages, specifies the time-to-live of the message (in milliseconds).|-1|integer|
+|allowAdditionalHeaders|This option is used to allow additional headers which may have values that are invalid according to the JMS specification. For example, some message systems, such as WMQ, do this with header names using the prefix JMS\_IBM\_MQMD\_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use \* as suffix for wildcard matching.||string|
+|allowNullBody|Whether to allow sending messages with no body. If this option is false and the message body is null, then a JMSException is thrown.|true|boolean|
+|alwaysCopyMessage|If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. 
Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set).|false|boolean|
+|correlationProperty|When using the InOut exchange pattern, use this JMS property instead of the JMSCorrelationID JMS property to correlate messages. If set, messages will be correlated solely on the value of this property; the JMSCorrelationID property will be ignored and not set by Camel.||string|
+|disableTimeToLive|Use this option to force disabling time to live. For example, when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to achieve. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in the section About time to live for more details.|false|boolean|
+|forceSendOriginalMessage|When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received.|false|boolean|
+|includeSentJMSMessageID|Only applicable when sending to a JMS destination using InOnly (e.g., fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination.|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazily (on the first message). By starting lazily you can use this to allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time.|false|boolean|
+|replyToCacheLevelName|Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE\_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE\_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require setting the replyToCacheLevelName=CACHE\_NONE to work. Note: If using temporary queues then CACHE\_NONE is not allowed, and you must use a higher value such as CACHE\_CONSUMER or CACHE\_SESSION.||string|
+|replyToDestinationSelectorName|Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue).||string|
+|streamMessageTypeEnabled|Sets whether the StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc. will either be sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default, BytesMessage is used, which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks, and each chunk is then written to the StreamMessage until no more data remains.|false|boolean|
+|allowSerializedHeaders|Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level.|false|boolean|
+|artemisStreamingEnabled|Whether optimizing for Apache Artemis streaming mode. 
This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used.|false|boolean|
+|asyncStartListener|Whether to start the JmsConsumer message listener asynchronously, when starting a route. For example, if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or fail-over. This will cause Camel to block while starting routes. By setting this option to true, you will let routes start up, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; you can then restart the route to retry.|false|boolean|
+|asyncStopListener|Whether to stop the JmsConsumer message listener asynchronously, when stopping a route.|false|boolean|
+|destinationResolver|A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to look up the real destination in a JNDI registry).||object|
+|errorHandler|Specifies an org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default, these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure the logging level and whether stack traces should be logged using the errorHandlerLoggingLevel and errorHandlerLogStackTrace options. 
This makes it much easier to configure than having to code a custom errorHandler.||object|
+|exceptionListener|Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions.||object|
+|headerFilterStrategy|To use a custom HeaderFilterStrategy to filter headers to and from the Camel message.||object|
+|idleConsumerLimit|Specify the limit for the number of consumers that are allowed to be idle at any given time.|1|integer|
+|idleTaskExecutionLimit|Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring.|1|integer|
+|includeAllJMSXProperties|Whether to include all JMSX prefixed properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID and JMSXUserID. Note: If you are using a custom headerFilterStrategy then this option does not apply.|false|boolean|
+|jmsKeyFormatStrategy|Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation.||object|
+|mapJmsMessage|Specifies whether Camel should auto map the received JMS message to a suitable payload type, such as jakarta.jms.TextMessage to a String.|true|boolean|
+|maxMessagesPerTask|The number of messages per task. -1 is unlimited. 
If you use a range for concurrent consumers (e.g., min/max), then this option can be used to set a value of e.g. 100 to control how fast the consumers will shrink when less work is required.|-1|integer|
+|messageConverter|To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control of how to map to/from a jakarta.jms.Message.||object|
+|messageCreatedStrategy|To use the given MessageCreatedStrategy, which is invoked when Camel creates new instances of jakarta.jms.Message objects when Camel is sending a JMS message.||object|
+|messageIdEnabled|When sending, specifies whether message IDs should be added. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value.|true|boolean|
+|messageListenerContainerFactory|Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom.||object|
+|messageTimestampEnabled|Specifies whether timestamps should be enabled by default on sending messages. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint, the timestamp must be set to its normal value.|true|boolean|
+|pubSubNoLocal|Specifies whether to inhibit the delivery of messages published by its own connection.|false|boolean|
+|receiveTimeout|The timeout for receiving messages (in milliseconds).|1000|duration|
+|recoveryInterval|Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. 
The default is 5000 ms, that is, 5 seconds.|5000|duration|
+|requestTimeoutCheckerInterval|Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval to check more frequently. The timeout is determined by the option requestTimeout.|1000|duration|
+|synchronous|Sets whether synchronous processing should be strictly used.|false|boolean|
+|temporaryQueueResolver|A pluggable TemporaryQueueResolver that allows you to use your own resolver for creating temporary queues (some messaging systems have special requirements for creating temporary queues).||object|
+|transferException|If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be sent back in the response as a jakarta.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers!|false|boolean|
+|transferExchange|You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. 
Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payload is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions!|false|boolean|
+|useMessageIDAsCorrelationID|Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages.|false|boolean|
+|waitForProvisionCorrelationToBeUpdatedCounter|Number of times to wait for the provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled.|50|integer|
+|waitForProvisionCorrelationToBeUpdatedThreadSleepingTime|Interval in millis to sleep each time while waiting for the provisional correlation id to be updated.|100|duration|
+|waitForTemporaryReplyToToBeUpdatedCounter|Number of times to wait for the temporary replyTo queue to be created and ready when doing request/reply over JMS.|200|integer|
+|waitForTemporaryReplyToToBeUpdatedThreadSleepingTime|Interval in millis to sleep each time while waiting for the temporary replyTo queue to be ready.|100|duration|
+|errorHandlerLoggingLevel|Allows configuring the default errorHandler logging level for logging uncaught exceptions.|WARN|object|
+|errorHandlerLogStackTrace|Allows controlling whether stack traces are logged by the default errorHandler.|true|boolean|
+|password|Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory.||string|
+|username|Username to use with the ConnectionFactory. 
You can also configure username/password directly on the ConnectionFactory.||string|
+|transacted|Specifies whether to use transacted mode.|false|boolean|
+|transactedInOut|Specifies whether InOut operations (request reply) default to using transacted mode. If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. 
This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction.|false|boolean|
+|lazyCreateTransactionManager|If true, Camel will create a JmsTransactionManager if no transactionManager has been injected when the option transacted=true.|true|boolean|
+|transactionManager|The Spring transaction manager to use.||object|
+|transactionName|The name of the transaction to use.||string|
+|transactionTimeout|The timeout value of the transaction (in seconds), if using transacted mode.|-1|integer|
diff --git a/camel-amqp.md b/camel-amqp.md
new file mode 100644
index 0000000000000000000000000000000000000000..b1018fefbc2c8a36e8936acf2cf9675af79c7550
--- /dev/null
+++ b/camel-amqp.md
@@ -0,0 +1,337 @@
+# Amqp
+
+**Since Camel 1.2**
+
+**Both producer and consumer are supported**
+
+The AMQP component supports the [AMQP 1.0
+protocol](http://www.amqp.org/) using the JMS Client API of the
+[Qpid](http://qpid.apache.org/) project.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-amqp</artifactId>
+        <version>${camel.version}</version>
+    </dependency>
+
+# URI format
+
+    amqp:[queue:|topic:]destinationName[?options]
+
+# Usage
+
+As the AMQP component inherits from the JMS component, its usage is
+almost identical to that of the JMS component:
+
+**Using AMQP component**
+
+    // Consuming from AMQP queue
+    from("amqp:queue:incoming").
+      to(...);
+
+    // Sending messages to the AMQP topic
+    from(...). 
        to("amqp:topic:notify");

# Configuring AMQP component

**Creating AMQP 1.0 component**

    AMQPComponent amqp = AMQPComponent.amqpComponent("amqp://localhost:5672");

    AMQPComponent authorizedAmqp = AMQPComponent.amqpComponent("amqp://localhost:5672", "user", "password");

You can also add an instance of
`org.apache.camel.component.amqp.AMQPConnectionDetails` to the registry
to automatically configure the AMQP component. For example, for Spring
Boot, you have to define the bean:

**AMQP connection details auto-configuration**

    @Bean
    AMQPConnectionDetails amqpConnection() {
        return new AMQPConnectionDetails("amqp://localhost:5672");
    }

    @Bean
    AMQPConnectionDetails securedAmqpConnection() {
        return new AMQPConnectionDetails("amqp://localhost:5672", "username", "password");
    }

Likewise, you can also use CDI producer methods when using Camel CDI:

**AMQP connection details auto-configuration for CDI**

    @Produces
    AMQPConnectionDetails amqpConnection() {
        return new AMQPConnectionDetails("amqp://localhost:5672");
    }

You can also rely on the [Camel properties](#properties-component.adoc)
to read the AMQP connection details. The factory method
`AMQPConnectionDetails.discoverAMQP()` attempts to read Camel properties
in a Kubernetes-like convention, as demonstrated in the snippet
below:

**AMQP connection details auto-configuration**

    export AMQP_SERVICE_HOST="mybroker.com"
    export AMQP_SERVICE_PORT="6666"
    export AMQP_SERVICE_USERNAME="username"
    export AMQP_SERVICE_PASSWORD="password"

    ...
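    // Editorial assumption (not stated in the original docs): discoverAMQP()
    // is expected to resolve the AMQP_SERVICE_HOST/PORT/USERNAME/PASSWORD
    // values shown above via Camel properties, so the bean below needs no
    // hard-coded broker URL.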

    @Bean
    AMQPConnectionDetails amqpConnection() {
        return AMQPConnectionDetails.discoverAMQP();
    }

**Enabling AMQP specific options**

If you, for example, need to enable `amqp.traceFrames`, you can do that
by appending the option to your URI, as in the following example:

    AMQPComponent amqp = AMQPComponent.amqpComponent("amqp://localhost:5672?amqp.traceFrames=true");

For reference, take a look at the [QPID JMS client
configuration](https://qpid.apache.org/releases/qpid-jms-1.7.0/docs/index.html).

# Using topics

To get topics working with `camel-amqp`, you need to configure the
component to use `topic://` as the topic prefix.

Keep in mind that both `AMQPComponent#amqpComponent()` methods and
`AMQPConnectionDetails` pre-configure the component with the topic
prefix, so you don’t have to configure it explicitly.

## Component Configurations

|Name|Description|Default|Type|
|---|---|---|---|
|clientId|Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions with JMS 1.1.||string|
|connectionFactory|The connection factory to be used. A connection factory must be configured either on the component or endpoint.||object|
|disableReplyTo|Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route messages from one system to another.|false|boolean|
|durableSubscriptionName|The durable subscriber name for specifying durable topic subscriptions.
The clientId option must be configured as well.||string|
|includeAmqpAnnotations|Whether to include AMQP annotations when mapping from AMQP to Camel Message. Setting this to true maps AMQP message annotations that contain a JMS\_AMQP\_MA\_ prefix to message headers. Due to limitations in the Apache Qpid JMS API, delivery annotations are currently ignored.|false|boolean|
|jmsMessageType|Allows you to force the use of a specific jakarta.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it.||object|
|replyTo|Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer).||string|
|testConnectionOnStartup|Specifies whether to test the connection on startup. This ensures that, when Camel starts, all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted, then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producer is tested as well.|false|boolean|
|acknowledgementModeName|The JMS acknowledgement name, which is one of: SESSION\_TRANSACTED, CLIENT\_ACKNOWLEDGE, AUTO\_ACKNOWLEDGE, DUPS\_OK\_ACKNOWLEDGE|AUTO\_ACKNOWLEDGE|string|
|artemisConsumerPriority|Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority.
Messages will only go to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer).||integer|
|asyncConsumer|Whether the JmsConsumer processes the Exchange asynchronously. If enabled, then the JmsConsumer may pick up the next message from the JMS queue while the previous message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default), then the Exchange is fully processed before the JmsConsumer will pick up the next message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transactions must be executed synchronously (Camel 3.0 may support async transactions).|false|boolean|
|autoStartup|Specifies whether the consumer container should auto-startup.|true|boolean|
|cacheLevel|Sets the cache level by ID for the underlying JMS resources. See the cacheLevelName option for more details.||integer|
|cacheLevelName|Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE\_AUTO, CACHE\_CONNECTION, CACHE\_CONSUMER, CACHE\_NONE, and CACHE\_SESSION. The default setting is CACHE\_AUTO. See the Spring documentation and Transactions Cache Levels for more information.|CACHE\_AUTO|string|
|concurrentConsumers|Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads.
When doing request/reply over JMS, the option replyToConcurrentConsumers is used to control the number of concurrent consumers on the reply message listener.|1|integer|
|maxConcurrentConsumers|Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS, the option replyToMaxConcurrentConsumers is used to control the number of concurrent consumers on the reply message listener.||integer|
|replyToDeliveryPersistent|Specifies whether to use persistent delivery by default for replies.|true|boolean|
|selector|Sets the JMS selector to use.||string|
|subscriptionDurable|Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as a subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well.|false|boolean|
|subscriptionName|Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0).||string|
|subscriptionShared|Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false.
Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as a subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker.|false|boolean|
|acceptMessagesWhileStopping|Specifies whether the consumer accepts messages while it is stopping. You may consider enabling this option if you start and stop JMS routes at runtime while there are still messages enqueued on the queue. If this option is false and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved to a dead letter queue on the JMS broker. To avoid this, it is recommended to enable this option.|false|boolean|
|allowReplyManagerQuickStop|Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allows the DefaultMessageListenerContainer.runningAllowed flag to quickly stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers, but to enable it for reply managers you must enable this flag.|false|boolean|
|consumerType|The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer.
When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use.|Default|object|
|defaultTaskExecutorType|Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached thread-pool-like). If not set, it defaults to the previous behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers.||object|
|eagerLoadingOfProperties|Enables eager loading of JMS properties and payload as soon as a message is loaded, which generally is inefficient as the JMS properties may not be required, but can sometimes catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody.|false|boolean|
|eagerPoisonBody|If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison is already stored as an exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties.|Poison JMS message due to ${exception.message}|string|
|exposeListenerSession|Specifies whether the listener session should be exposed when consuming messages.|false|boolean|
|replyToConsumerType|The consumer type of the reply consumer (when doing request/reply), which can be one of: Simple, Default, or Custom.
The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use.|Default|object|
|replyToSameDestinationAllowed|Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself.|false|boolean|
|taskExecutor|Allows you to specify a custom task executor for consuming messages.||object|
|deliveryDelay|Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker.|-1|integer|
|deliveryMode|Specifies the delivery mode to be used. Possible values are those defined by jakarta.jms.DeliveryMode. NON\_PERSISTENT = 1 and PERSISTENT = 2.||integer|
|deliveryPersistent|Specifies whether persistent delivery is used by default.|true|boolean|
|explicitQosEnabled|Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers.|false|boolean|
|formatDateHeadersToIso8601|Sets whether JMS date properties should be formatted according to the ISO 8601 standard.|false|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message).
By starting lazy, you can use this to allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|preserveMessageQos|Set to true if you want to send messages using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered: JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header.|false|boolean|
|priority|Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect.|4|integer|
|replyToConcurrentConsumers|Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads.|1|integer|
|replyToMaxConcurrentConsumers|Specifies the maximum number of concurrent consumers when using request/reply over JMS.
See also the maxMessagesPerTask option to control dynamic scaling up/down of threads.||integer|
|replyToOnTimeoutMaxConcurrentConsumers|Specifies the maximum number of concurrent consumers to continue routing when a timeout occurs when using request/reply over JMS.|1|integer|
|replyToOverride|Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination.||string|
|replyToType|Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default, Camel will use temporary queues. However, if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See the Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues have lower performance than the alternatives Temporary and Exclusive.||object|
|requestTimeout|The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option.|20000|duration|
|timeToLive|When sending messages, specifies the time-to-live of the message (in milliseconds).|-1|integer|
|allowAdditionalHeaders|This option is used to allow additional headers which may have values that are invalid according to the JMS specification. For example, some message systems, such as WMQ, do this with header names using the prefix JMS\_IBM\_MQMD\_ containing values with byte array or other invalid types.
You can specify multiple header names separated by comma, and use * as suffix for wildcard matching.||string|
|allowNullBody|Whether to allow sending messages with no body. If this option is false and the message body is null, then a JMSException is thrown.|true|boolean|
|alwaysCopyMessage|If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true if a replyToDestinationSelectorName is set).|false|boolean|
|correlationProperty|When using the InOut exchange pattern, use this JMS property instead of the JMSCorrelationID JMS property to correlate messages. If set, messages will be correlated solely on the value of this property; the JMSCorrelationID property will be ignored and not set by Camel.||string|
|disableTimeToLive|Use this option to force disabling time to live. For example, when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to achieve. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details.|false|boolean|
|forceSendOriginalMessage|When using mapJmsMessage=false, Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received.|false|boolean|
|includeSentJMSMessageID|Only applicable when sending to a JMS destination using InOnly (e.g. fire and forget).
Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination.|false|boolean|
|replyToCacheLevelName|Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE\_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE\_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require setting the replyToCacheLevelName=CACHE\_NONE to work. Note: If using temporary queues, then CACHE\_NONE is not allowed, and you must use a higher value such as CACHE\_CONSUMER or CACHE\_SESSION.||string|
|replyToDestinationSelectorName|Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue).||string|
|streamMessageTypeEnabled|Sets whether the StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc. will either be sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default, BytesMessage is used, which enforces the entire message payload to be read into memory. By enabling this option, the message payload is read into memory in chunks, and each chunk is then written to the StreamMessage until there is no more data.|false|boolean|
|allowAutoWiredConnectionFactory|Whether to auto-discover the ConnectionFactory from the registry, if no connection factory has been configured. If only one instance of ConnectionFactory is found, then it will be used. This is enabled by default.|true|boolean|
|allowAutoWiredDestinationResolver|Whether to auto-discover the DestinationResolver from the registry, if no destination resolver has been configured. If only one instance of DestinationResolver is found, then it will be used.
This is enabled by default.|true|boolean|
|allowSerializedHeaders|Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level.|false|boolean|
|artemisStreamingEnabled|Whether to optimize for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used.|false|boolean|
|asyncStartListener|Whether to start up the JmsConsumer message listener asynchronously, when starting a route. For example, if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or fail over. This will cause Camel to block while starting routes. By setting this option to true, you will let routes start up, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; you can then restart the route to retry.|false|boolean|
|asyncStopListener|Whether to stop the JmsConsumer message listener asynchronously, when stopping a route.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component.
This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
|configuration|To use a shared JMS configuration.||object|
|destinationResolver|A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry).||object|
|errorHandler|Specifies an org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default, these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure the logging level and whether stack traces should be logged using the errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure than having to code a custom errorHandler.||object|
|exceptionListener|Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions.||object|
|idleConsumerLimit|Specify the limit for the number of consumers that are allowed to be idle at any given time.|1|integer|
|idleTaskExecutionLimit|Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring.|1|integer|
|includeAllJMSXProperties|Whether to include all JMSX prefixed properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, JMSXUserID, etc.
Note: If you are using a custom headerFilterStrategy, then this option does not apply.|false|boolean|
|includeCorrelationIDAsBytes|Whether the JMS consumer should include JMSCorrelationIDAsBytes as a header on the Camel Message.|true|boolean|
|jmsKeyFormatStrategy|Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation.||object|
|mapJmsMessage|Specifies whether Camel should auto map the received JMS message to a suited payload type, such as jakarta.jms.TextMessage to a String, etc.|true|boolean|
|maxMessagesPerTask|The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (e.g. min max), then this option can be used to set a value to e.g. 100 to control how fast the consumers will shrink when less work is required.|-1|integer|
|messageConverter|To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control of how to map to/from a jakarta.jms.Message.||object|
|messageCreatedStrategy|To use the given MessageCreatedStrategy which is invoked when Camel creates new instances of jakarta.jms.Message objects when Camel is sending a JMS message.||object|
|messageIdEnabled|When sending, specifies whether message IDs should be added. This is just a hint to the JMS broker.
If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value.|true|boolean|
|messageListenerContainerFactory|Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom.||object|
|messageTimestampEnabled|Specifies whether timestamps should be enabled by default on sending messages. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint, the timestamp must be set to its normal value.|true|boolean|
|pubSubNoLocal|Specifies whether to inhibit the delivery of messages published by its own connection.|false|boolean|
|queueBrowseStrategy|To use a custom QueueBrowseStrategy when browsing queues.||object|
|receiveTimeout|The timeout for receiving messages (in milliseconds).|1000|duration|
|recoveryInterval|Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds.|5000|duration|
|requestTimeoutCheckerInterval|Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default, Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval to check more frequently. The timeout is determined by the option requestTimeout.|1000|duration|
|serviceLocationEnabled|Whether to detect the network address location of the JMS broker on startup. This information is gathered via reflection on the ConnectionFactory, and is vendor specific.
This option can be used to turn this off.|true|boolean|
|synchronous|Sets whether synchronous processing should be strictly used.|false|boolean|
|temporaryQueueResolver|A pluggable TemporaryQueueResolver that allows you to use your own resolver for creating temporary queues (some messaging systems have special requirements for creating temporary queues).||object|
|transferException|If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be sent back in the response as a jakarta.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producer and consumer!|false|boolean|
|transferExchange|You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payload is an Exchange and not a regular payload.
Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions!|false|boolean|
|useMessageIDAsCorrelationID|Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages.|false|boolean|
|waitForProvisionCorrelationToBeUpdatedCounter|Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled.|50|integer|
|waitForProvisionCorrelationToBeUpdatedThreadSleepingTime|Interval in millis to sleep each time while waiting for provisional correlation id to be updated.|100|duration|
|waitForTemporaryReplyToToBeUpdatedCounter|Number of times to wait for temporary replyTo queue to be created and ready when doing request/reply over JMS.|200|integer|
|waitForTemporaryReplyToToBeUpdatedThreadSleepingTime|Interval in millis to sleep each time while waiting for temporary replyTo queue to be ready.|100|duration|
|headerFilterStrategy|To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter headers to and from the Camel message.||object|
|errorHandlerLoggingLevel|Allows to configure the default errorHandler logging level for logging uncaught exceptions.|WARN|object|
|errorHandlerLogStackTrace|Allows to control whether stack-traces should be logged or not, by the default errorHandler.|true|boolean|
|password|Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory.||string|
|username|Username to use with the ConnectionFactory.
You can also configure username/password directly on the ConnectionFactory.||string| +|transacted|Specifies whether to use transacted mode|false|boolean| +|transactedInOut|Specifies whether InOut operations (request reply) default to using transacted mode If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: that within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction.|false|boolean| +|lazyCreateTransactionManager|If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true.|true|boolean| +|transactionManager|The Spring transaction manager to use.||object| +|transactionName|The name of the transaction to use.||string| +|transactionTimeout|The timeout value of the transaction (in seconds), if using transacted mode.|-1|integer| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|destinationType|The kind of destination to use|queue|string| +|destinationName|Name of the queue or topic to use as destination||string| +|clientId|Sets the JMS client ID to use. 
Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions with JMS 1.1.||string| +|connectionFactory|The connection factory to be used. A connection factory must be configured either on the component or endpoint.||object| +|disableReplyTo|Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route messages from one system to another.|false|boolean| +|durableSubscriptionName|The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well.||string| +|jmsMessageType|Allows you to force the use of a specific jakarta.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it.||object| +|replyTo|Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer).||string| +|testConnectionOnStartup|Specifies whether to test the connection on startup. This ensures that when Camel starts that all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections.
The JMS producers are tested as well.|false|boolean| +|acknowledgementModeName|The JMS acknowledgement name, which is one of: SESSION\_TRANSACTED, CLIENT\_ACKNOWLEDGE, AUTO\_ACKNOWLEDGE, DUPS\_OK\_ACKNOWLEDGE|AUTO\_ACKNOWLEDGE|string| +|artemisConsumerPriority|Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only go to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer).||integer| +|asyncConsumer|Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pick up the next message from the JMS queue, while the previous message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pick up the next message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as the transaction must be executed synchronously (Camel 3.0 may support async transactions).|false|boolean| +|autoStartup|Specifies whether the consumer container should auto-startup.|true|boolean| +|cacheLevel|Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details.||integer| +|cacheLevelName|Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE\_AUTO, CACHE\_CONNECTION, CACHE\_CONSUMER, CACHE\_NONE, and CACHE\_SESSION.
The default setting is CACHE\_AUTO. See the Spring documentation and Transactions Cache Levels for more information.|CACHE\_AUTO|string| +|concurrentConsumers|Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener.|1|integer| +|maxConcurrentConsumers|Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener.||integer| +|replyToDeliveryPersistent|Specifies whether to use persistent delivery by default for replies.|true|boolean| +|selector|Sets the JMS selector to use||string| +|subscriptionDurable|Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well.|false|boolean| +|subscriptionName|Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id. Default is the class name of the specified message listener. 
Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0).||string| +|subscriptionShared|Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker.|false|boolean| +|acceptMessagesWhileStopping|Specifies whether the consumer accepts messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved to a dead letter queue on the JMS broker. To avoid this, it's recommended to enable this option.|false|boolean| +|allowReplyManagerQuickStop|Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allows the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped.
This quick stop ability is enabled by default in the regular JMS consumers but to enable it for reply managers you must enable this flag.|false|boolean| +|consumerType|The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use.|Default|object| +|defaultTaskExecutorType|Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached thread-pool-like). If not set, it defaults to the previous behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers.||object| +|eagerLoadingOfProperties|Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody.|false|boolean| +|eagerPoisonBody|If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison is already stored as an exception on the Exchange).
This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties.|Poison JMS message due to ${exception.message}|string| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|exposeListenerSession|Specifies whether the listener session should be exposed when consuming messages.|false|boolean| +|replyToConsumerType|The consumer type of the reply consumer (when doing request/reply), which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use.|Default|object| +|replyToSameDestinationAllowed|Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself.|false|boolean| +|taskExecutor|Allows you to specify a custom task executor for consuming messages.||object| +|deliveryDelay|Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker.|-1|integer| +|deliveryMode|Specifies the delivery mode to be used. Possible values are those defined by jakarta.jms.DeliveryMode. 
NON\_PERSISTENT = 1 and PERSISTENT = 2.||integer| +|deliveryPersistent|Specifies whether persistent delivery is used by default.|true|boolean| +|explicitQosEnabled|Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers.|false|boolean| +|formatDateHeadersToIso8601|Sets whether JMS date properties should be formatted according to the ISO 8601 standard.|false|boolean| +|preserveMessageQos|Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header.|false|boolean| +|priority|Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect.|4|integer| +|replyToConcurrentConsumers|Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads.|1|integer| +|replyToMaxConcurrentConsumers|Specifies the maximum number of concurrent consumers when using request/reply over JMS. 
See also the maxMessagesPerTask option to control dynamic scaling up/down of threads.||integer| +|replyToOnTimeoutMaxConcurrentConsumers|Specifies the maximum number of concurrent consumers to continue routing when a timeout occurs when using request/reply over JMS.|1|integer| +|replyToOverride|Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination.||string| +|replyToType|Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues have lower performance than the alternatives Temporary and Exclusive.||object| +|requestTimeout|The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option.|20000|duration| +|timeToLive|When sending messages, specifies the time-to-live of the message (in milliseconds).|-1|integer| +|allowAdditionalHeaders|This option is used to allow additional headers which may have values that are invalid according to the JMS specification. For example, some message systems, such as WMQ, do this with header names using prefix JMS\_IBM\_MQMD\_ containing values with byte array or other invalid types.
You can specify multiple header names separated by comma, and use as suffix for wildcard matching.||string| +|allowNullBody|Whether to allow sending messages with no body. If this option is false and the message body is null, then a JMSException is thrown.|true|boolean| +|alwaysCopyMessage|If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set)|false|boolean| +|correlationProperty|When using the InOut exchange pattern, use this JMS property instead of the JMSCorrelationID JMS property to correlate messages. If set, messages will be correlated solely on the value of this property; the JMSCorrelationID property will be ignored and not set by Camel.||string| +|disableTimeToLive|Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to achieve. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details.|false|boolean| +|forceSendOriginalMessage|When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received.|false|boolean| +|includeSentJMSMessageID|Only applicable when sending to JMS destination using InOnly (eg fire and forget).
Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|replyToCacheLevelName|Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE\_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE\_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require setting replyToCacheLevelName=CACHE\_NONE to work. Note: If using temporary queues then CACHE\_NONE is not allowed, and you must use a higher value such as CACHE\_CONSUMER or CACHE\_SESSION.||string| +|replyToDestinationSelectorName|Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue).||string| +|streamMessageTypeEnabled|Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc. will either be sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory.
By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until no more data.|false|boolean| +|allowSerializedHeaders|Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level.|false|boolean| +|artemisStreamingEnabled|Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used.|false|boolean| +|asyncStartListener|Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or fail-over. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry.|false|boolean| +|asyncStopListener|Whether to stop the JmsConsumer message listener asynchronously, when stopping a route.|false|boolean| +|destinationResolver|A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry).||object| +|errorHandler|Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. 
You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler.||object| +|exceptionListener|Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions.||object| +|headerFilterStrategy|To use a custom HeaderFilterStrategy to filter header to and from Camel message.||object| +|idleConsumerLimit|Specify the limit for the number of consumers that are allowed to be idle at any given time.|1|integer| +|idleTaskExecutionLimit|Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring.|1|integer| +|includeAllJMSXProperties|Whether to include all JMSX prefixed properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply.|false|boolean| +|jmsKeyFormatStrategy|Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. 
You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation.||object| +|mapJmsMessage|Specifies whether Camel should auto map the received JMS message to a suited payload type, such as jakarta.jms.TextMessage to a String etc.|true|boolean| +|maxMessagesPerTask|The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required.|-1|integer| +|messageConverter|To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control of how to map to/from a jakarta.jms.Message.||object| +|messageCreatedStrategy|To use the given MessageCreatedStrategy which is invoked when Camel creates new instances of jakarta.jms.Message objects when Camel is sending a JMS message.||object| +|messageIdEnabled|When sending, specifies whether message IDs should be added. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value.|true|boolean| +|messageListenerContainerFactory|Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom.||object| +|messageTimestampEnabled|Specifies whether timestamps should be enabled by default on sending messages. This is just a hint to the JMS broker.
If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint, the timestamp must be set to its normal value.|true|boolean| +|pubSubNoLocal|Specifies whether to inhibit the delivery of messages published by its own connection.|false|boolean| +|receiveTimeout|The timeout for receiving messages (in milliseconds).|1000|duration| +|recoveryInterval|Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds.|5000|duration| +|requestTimeoutCheckerInterval|Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout.|1000|duration| +|synchronous|Sets whether synchronous processing should be strictly used.|false|boolean| +|temporaryQueueResolver|A pluggable TemporaryQueueResolver that allows you to use your own resolver for creating temporary queues (some messaging systems have special requirements for creating temporary queues).||object| +|transferException|If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be sent back in response as a jakarta.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer.
Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers!|false|boolean| +|transferExchange|You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payload is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions!|false|boolean| +|useMessageIDAsCorrelationID|Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages.|false|boolean| +|waitForProvisionCorrelationToBeUpdatedCounter|Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled.|50|integer| +|waitForProvisionCorrelationToBeUpdatedThreadSleepingTime|Interval in millis to sleep each time while waiting for provisional correlation id to be updated.|100|duration| +|waitForTemporaryReplyToToBeUpdatedCounter|Number of times to wait for temporary replyTo queue to be created and ready when doing request/reply over JMS.|200|integer| +|waitForTemporaryReplyToToBeUpdatedThreadSleepingTime|Interval in millis to sleep each time while waiting for temporary replyTo queue to be ready.|100|duration| +|errorHandlerLoggingLevel|Allows to configure the default errorHandler logging level for logging
uncaught exceptions.|WARN|object| +|errorHandlerLogStackTrace|Allows to control whether stack-traces should be logged or not, by the default errorHandler.|true|boolean| +|password|Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory.||string| +|username|Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory.||string| +|transacted|Specifies whether to use transacted mode|false|boolean| +|transactedInOut|Specifies whether InOut operations (request reply) default to using transacted mode If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: that within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. 
This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction.|false|boolean| +|lazyCreateTransactionManager|If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true.|true|boolean| +|transactionManager|The Spring transaction manager to use.||object| +|transactionName|The name of the transaction to use.||string| +|transactionTimeout|The timeout value of the transaction (in seconds), if using transacted mode.|-1|integer| diff --git a/camel-arangodb.md b/camel-arangodb.md new file mode 100644 index 0000000000000000000000000000000000000000..73c2d7a86a330e85a2c92d651856f6b7b3ae6769 --- /dev/null +++ b/camel-arangodb.md @@ -0,0 +1,93 @@ +# Arangodb + +**Since Camel 3.5** + +**Only producer is supported** + +The ArangoDb component is a client for ArangoDb that uses the [arango java +driver](https://github.com/arangodb/arangodb-java-driver) to perform +queries on collections and graphs in the ArangoDb database.
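Endpoint URIs for this component take the form `arangodb:database[?options]`, as detailed under URI format below. As a rough, purely illustrative plain-Java sketch of how such a URI is assembled from a database name and an option map (this helper is hypothetical, not Camel or driver API):

```java
import java.util.Map;
import java.util.StringJoiner;
import java.util.TreeMap;

public class UriSketch {
    // Assemble "scheme:database[?options]" — illustration only; Camel
    // parses and builds endpoint URIs internally, this is not its API.
    static String buildUri(String scheme, String database, Map<String, String> options) {
        if (options == null || options.isEmpty()) {
            return scheme + ":" + database;
        }
        StringJoiner query = new StringJoiner("&");
        // TreeMap gives a deterministic (alphabetical) option order
        new TreeMap<>(options).forEach((k, v) -> query.add(k + "=" + v));
        return scheme + ":" + database + "?" + query;
    }

    public static void main(String[] args) {
        System.out.println(buildUri("arangodb", "testDb",
                Map.of("documentCollection", "collection", "operation", "SAVE_DOCUMENT")));
        // → arangodb:testDb?documentCollection=collection&operation=SAVE_DOCUMENT
    }
}
```

The same shape applies to the producer examples later on this page, where the options select the collection and the operation to perform.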
+ +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-arangodb + x.x.x + + + +# URI format + + arangodb:database[?options] + +# Examples + +## Producer Examples + +### Save document on a collection + + from("direct:insert") + .to("arangodb:testDb?documentCollection=collection&operation=SAVE_DOCUMENT"); + +You can set a BaseDocument instance as the body: + + BaseDocument myObject = new BaseDocument(); + myObject.addAttribute("a", "Foo"); + myObject.addAttribute("b", 42); + +### Query a collection + + from("direct:query") + .to("arangodb:testDb?operation=AQL_QUERY"); + +You can invoke an AQL query in this way: + + String query = "FOR t IN " + COLLECTION_NAME + " FILTER t.value == @value"; + Map bindVars = new MapBuilder().put("value", "hello") + .get(); + + Exchange result = template.request("direct:query", exchange -> { + exchange.getMessage().setHeader(AQL_QUERY, query); + exchange.getMessage().setHeader(AQL_QUERY_BIND_PARAMETERS, bindVars); + }); + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|configuration|Component configuration||object| +|documentCollection|Collection name, when using ArangoDb as a Document Database. Set the documentCollection name when using the CRUD operation on the document database collections (SAVE\_DOCUMENT, FIND\_DOCUMENT\_BY\_KEY, UPDATE\_DOCUMENT, DELETE\_DOCUMENT).||string| +|edgeCollection|Collection name of edges, when using ArangoDb as a Graph Database. Set the edgeCollection name to perform CRUD operations on edges using these operations: SAVE\_EDGE, FIND\_EDGE\_BY\_KEY, UPDATE\_EDGE, DELETE\_EDGE. The graph attribute is mandatory.||string| +|graph|Graph name, when using ArangoDb as a Graph Database. Combine this attribute with one of the two attributes vertexCollection and edgeCollection.||string| +|host|ArangoDB host. 
If host and port are default, this field is Optional.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|operation|Operations to perform on ArangoDb. For the operation AQL\_QUERY, no need to specify a collection or graph.||object| +|port|ArangoDB exposed port. If host and port are default, this field is Optional.||integer| +|vertexCollection|Collection name of vertices, when using ArangoDb as a Graph Database. Set the vertexCollection name to perform CRUD operations on vertices using these operations: SAVE\_VERTEX, FIND\_VERTEX\_BY\_KEY, UPDATE\_VERTEX, DELETE\_VERTEX. The graph attribute is mandatory.||string| +|arangoDB|To use an existing ArangoDB client.||object| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|vertx|To use an existing Vertx instance in the ArangoDB client.||object| +|password|ArangoDB password. If user and password are default, this field is Optional.||string| +|user|ArangoDB user. 
If user and password are default, this field is Optional.||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|database|Database name||string| +|documentCollection|Collection name, when using ArangoDb as a Document Database. Set the documentCollection name when using the CRUD operation on the document database collections (SAVE\_DOCUMENT, FIND\_DOCUMENT\_BY\_KEY, UPDATE\_DOCUMENT, DELETE\_DOCUMENT).||string| +|edgeCollection|Collection name of edges, when using ArangoDb as a Graph Database. Set the edgeCollection name to perform CRUD operations on edges using these operations: SAVE\_EDGE, FIND\_EDGE\_BY\_KEY, UPDATE\_EDGE, DELETE\_EDGE. The graph attribute is mandatory.||string| +|graph|Graph name, when using ArangoDb as a Graph Database. Combine this attribute with one of the two attributes vertexCollection and edgeCollection.||string| +|host|ArangoDB host. If host and port are default, this field is Optional.||string| +|operation|Operations to perform on ArangoDb. For the operation AQL\_QUERY, no need to specify a collection or graph.||object| +|port|ArangoDB exposed port. If host and port are default, this field is Optional.||integer| +|vertexCollection|Collection name of vertices, when using ArangoDb as a Graph Database. Set the vertexCollection name to perform CRUD operations on vertices using these operations: SAVE\_VERTEX, FIND\_VERTEX\_BY\_KEY, UPDATE\_VERTEX, DELETE\_VERTEX. The graph attribute is mandatory.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|arangoDB|To use an existing ArangDB client.||object| +|vertx|To use an existing Vertx instance in the ArangoDB client.||object| +|password|ArangoDB password. If user and password are default, this field is Optional.||string| +|user|ArangoDB user. If user and password are default, this field is Optional.||string| diff --git a/camel-as2.md b/camel-as2.md new file mode 100644 index 0000000000000000000000000000000000000000..c1de2d2120c64ca12b3e182f22287f709b99f352 --- /dev/null +++ b/camel-as2.md @@ -0,0 +1,89 @@ +# As2 + +**Since Camel 2.22** + +**Both producer and consumer are supported** + +The AS2 component provides transport of EDI messages using the HTTP +transfer protocol as specified in +[RFC4130](https://tools.ietf.org/html/rfc4130). + +Maven users will need to add the following dependency to their pom.xml +for this component: + + + org.apache.camel + camel-as2 + x.x.x + + + +# URI format + +**Sample URL** + + as2://apiName/methodName + +apiName can be one of: + +- client + +- server + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|configuration|Component configuration||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|apiName|What kind of operation to perform||object| +|methodName|What sub operation to use for the selected operation||string| +|as2From|The value of the AS2From header of AS2 message.||string| +|as2MessageStructure|The structure of AS2 Message. 
One of: PLAIN - No encryption, no signature, SIGNED - No encryption, signature, ENCRYPTED - Encryption, no signature, ENCRYPTED\_SIGNED - Encryption, signature||object| +|as2To|The value of the AS2To header of AS2 message.||string| +|as2Version|The version of the AS2 protocol.|1.1|string| +|asyncMdnPortNumber|The port number of asynchronous MDN server.||integer| +|attachedFileName|The name of the attached file||string| +|clientFqdn|The Client Fully Qualified Domain Name (FQDN). Used in message ids sent by endpoint.|camel.apache.org|string| +|compressionAlgorithm|The algorithm used to compress EDI message.||object| +|dispositionNotificationTo|The value of the Disposition-Notification-To header. Assigning a value to this parameter requests a message disposition notification (MDN) for the AS2 message.||string| +|ediMessageTransferEncoding|The transfer encoding of EDI message.||string| +|ediMessageType|The content type of EDI message. One of application/edifact, application/edi-x12, application/edi-consent, application/xml||object| +|from|The value of the From header of AS2 message.||string| +|hostnameVerifier|Set hostname verifier for SSL session.||object| +|httpConnectionPoolSize|The maximum size of the connection pool for http connections (client only)|5|integer| +|httpConnectionPoolTtl|The time to live for connections in the connection pool (client only)|15m|object| +|httpConnectionTimeout|The timeout of the http connection (client only)|5s|object| +|httpSocketTimeout|The timeout of the underlying http socket (client only)|5s|object| +|inBody|Sets the name of a parameter to be passed in the exchange In Body||string| +|mdnMessageTemplate|The template used to format MDN message||string| +|receiptDeliveryOption|The return URL that the message receiver should send an asynchronous MDN to. If not present the receipt is synchronous. 
(Client only)||string| +|requestUri|The request URI of EDI message.|/|string| +|server|The value included in the Server message header identifying the AS2 Server.|Camel AS2 Server Endpoint|string| +|serverFqdn|The Server Fully Qualified Domain Name (FQDN). Used in message ids sent by endpoint.|camel.apache.org|string| +|serverPortNumber|The port number of server.||integer| +|sslContext|Set SSL context for connection to remote server.||object| +|subject|The value of Subject header of AS2 message.||string| +|targetHostname|The host name (IP or DNS name) of target host.||string| +|targetPortNumber|The port number of target host. -1 indicates the scheme default port.|80|integer| +|userAgent|The value included in the User-Agent message header identifying the AS2 user agent.|Camel AS2 Client Endpoint|string| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|decryptingPrivateKey|The key used to decrypt the EDI message.||object| +|encryptingAlgorithm|The algorithm used to encrypt EDI message.||object| +|encryptingCertificateChain|The chain of certificates used to encrypt EDI message.||object| +|signedReceiptMicAlgorithms|The list of algorithms, in order of preference, requested to generate a message integrity check (MIC) returned in message disposition notification (MDN)||array| +|signingAlgorithm|The algorithm used to sign EDI message.||object| +|signingCertificateChain|The chain of certificates used to sign EDI message.||object| +|signingPrivateKey|The key used to sign the EDI message.||object| +|validateSigningCertificateChain|Certificates to validate the message's signature against. If not supplied, validation will not take place. Server: validates the received message. 
Client: not yet implemented, should validate the MDN||object| diff --git a/camel-asterisk.md b/camel-asterisk.md new file mode 100644 index 0000000000000000000000000000000000000000..cfe12a450e983e9545656bd8258cd112577db638 --- /dev/null +++ b/camel-asterisk.md @@ -0,0 +1,72 @@ +# Asterisk + +**Since Camel 2.18** + +**Both producer and consumer are supported** + +The Asterisk component allows you to work easily with an Asterisk PBX +Server [http://www.asterisk.org/](http://www.asterisk.org/) using +[asterisk-java](https://asterisk-java.org/) + +This component helps to interface with [Asterisk Manager +Interface](http://www.voip-info.org/wiki-Asterisk+manager+API) + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-asterisk + x.x.x + + + +# URI format + + asterisk:name[?options] + +# Action + +Supported actions are: + +- QUEUE\_STATUS: Queue status + +- SIP\_PEERS: List SIP peers + +- EXTENSION\_STATE: Check extension status + +# Examples + +## Producer Example + + from("direct:in") + .to("asterisk://myVoIP?hostname=hostname&username=username&password=password&action=EXTENSION_STATE") + +## Consumer Example + + from("asterisk:myVoIP?hostname=hostname&username=username&password=password") + .log("Received a message - ${body}"); + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. 
In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|name|Name of component||string| +|hostname|The hostname of the asterisk server||string| +|password|Login password||string| +|username|Login username||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. 
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|action|What action to perform such as getting queue status, sip peers or extension state.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-atmosphere-websocket.md b/camel-atmosphere-websocket.md new file mode 100644 index 0000000000000000000000000000000000000000..73dfd3a979ef2dd47d7fcdb31e1b45bdfda64a27 --- /dev/null +++ b/camel-atmosphere-websocket.md @@ -0,0 +1,119 @@ +# Atmosphere-websocket + +**Since Camel 2.14** + +**Both producer and consumer are supported** + +The Atmosphere-Websocket component provides Websocket based endpoints +for a servlet communicating with external clients over Websocket (as a +servlet accepting websocket connections from external clients). This +component uses the +[Atmosphere](https://github.com/Atmosphere/atmosphere) library to +support the Websocket transport in various Servlet containers. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-atmosphere-websocket + x.x.x + + + +# Reading and Writing Data over Websocket + +An atmopshere-websocket endpoint can either write data to the socket or +read from the socket, depending on whether the endpoint is configured as +the producer or the consumer, respectively. + +# Examples + +## Consumer Example + +In the route below, Camel will read from the specified websocket +connection. + + from("atmosphere-websocket:///servicepath") + .to("direct:next"); + +And the equivalent Spring sample: + + + + + + + + +## Producer Example + +In the route below, Camel will write to the specified websocket +connection. 
+ + from("direct:next") + .to("atmosphere-websocket:///servicepath"); + +And the equivalent Spring sample: + + + + + + + + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|muteException|If enabled and an Exchange failed processing on the consumer side the response's body won't contain the exception's stack trace.|true|boolean| +|servletName|Default name of servlet to use. The default name is CamelServlet.|CamelServlet|string| +|attachmentMultipartBinding|Whether to automatic bind multipart/form-data as attachments on the Camel Exchange. The options attachmentMultipartBinding=true and disableStreamCache=false cannot work together. Remove disableStreamCache to use AttachmentMultipartBinding. This is turn off by default as this may require servlet specific configuration to enable this when using Servlet's.|false|boolean| +|fileNameExtWhitelist|Whitelist of accepted filename extensions for accepting uploaded files. 
Multiple extensions can be separated by comma, such as txt,xml.||string| +|httpRegistry|To use a custom org.apache.camel.component.servlet.HttpRegistry.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|allowJavaSerializedObject|Whether to allow java serialization when a request uses context-type=application/x-java-serialized-object. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|httpBinding|To use a custom HttpBinding to control the mapping between Camel message and HttpClient.||object| +|httpConfiguration|To use the shared HttpConfiguration as base configuration.||object| +|headerFilterStrategy|To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message.||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|servicePath|Name of websocket endpoint||string| +|chunked|If this option is false the Servlet will disable the HTTP streaming and set the content-length header on the response|true|boolean| +|disableStreamCache|Determines whether or not the raw input stream from Servlet is cached or not (Camel will read the stream into a in memory/overflow to file, Stream caching) cache. By default Camel will cache the Servlet input stream to support reading it multiple times to ensure it Camel can retrieve all data from the stream. However you can set this option to true when you for example need to access the raw stream, such as streaming it directly to a file or other persistent store. DefaultHttpBinding will copy the request input stream into a stream cache and put it into message body if this option is false to support reading the stream multiple times. If you use Servlet to bridge/proxy an endpoint then consider enabling this option to improve performance, in case you do not need to read the message payload multiple times. The http producer will by default cache the response body stream. 
If setting this option to true, then the producers will not cache the response body stream but use the response stream as-is as the message body.|false|boolean| +|sendToAll|Whether to send to all (broadcast) or send to a single receiver.|false|boolean| +|transferException|If enabled, and an Exchange failed processing on the consumer side, then the caused Exception will be sent back serialized in the response as an application/x-java-serialized-object content type. On the producer side the exception will be deserialized and thrown as is, instead of the HttpOperationFailedException. The caused exception is required to be serialized. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk.|false|boolean| +|useStreaming|To enable streaming to send data as multiple text fragments.|false|boolean| +|headerFilterStrategy|To use a custom HeaderFilterStrategy to filter header to and from Camel message.||object| +|httpBinding|To use a custom HttpBinding to control the mapping between Camel message and HttpClient.||object| +|async|Configure the consumer to work in async mode.|false|boolean| +|httpMethodRestrict|Used to only allow consuming if the HttpMethod matches, such as GET/POST/PUT etc. 
Multiple methods can be specified separated by comma.||string| +|logException|If enabled and an Exchange failed processing on the consumer side the exception's stack trace will be logged when the exception stack trace is not sent in the response's body.|false|boolean| +|matchOnUriPrefix|Whether or not the consumer should try to find a target consumer by matching the URI prefix if no exact match is found.|false|boolean| +|muteException|If enabled and an Exchange failed processing on the consumer side the response's body won't contain the exception's stack trace.|false|boolean| +|responseBufferSize|To use a custom buffer size on the jakarta.servlet.ServletResponse.||integer| +|servletName|Name of the servlet to use|CamelServlet|string| +|attachmentMultipartBinding|Whether to automatic bind multipart/form-data as attachments on the Camel Exchange. The options attachmentMultipartBinding=true and disableStreamCache=false cannot work together. Remove disableStreamCache to use AttachmentMultipartBinding. This is turn off by default as this may require servlet specific configuration to enable this when using Servlet's.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|eagerCheckContentAvailable|Whether to eager check whether the HTTP requests has content if the content-length header is 0 or not present. This can be turned on in case HTTP clients do not send streamed data.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|fileNameExtWhitelist|Whitelist of accepted filename extensions for accepting uploaded files. Multiple extensions can be separated by comma, such as txt,xml.||string| +|mapHttpMessageBody|If this option is true then IN exchange Body of the exchange will be mapped to HTTP body. Setting this to false will avoid the HTTP mapping.|true|boolean| +|mapHttpMessageFormUrlEncodedBody|If this option is true then IN exchange Form Encoded body of the exchange will be mapped to HTTP. Setting this to false will avoid the HTTP Form Encoded body mapping.|true|boolean| +|mapHttpMessageHeaders|If this option is true then IN exchange Headers of the exchange will be mapped to HTTP headers. Setting this to false will avoid the HTTP Headers mapping.|true|boolean| +|optionsEnabled|Specifies whether to enable HTTP OPTIONS for this Servlet consumer. By default OPTIONS is turned off.|false|boolean| +|traceEnabled|Specifies whether to enable HTTP TRACE for this Servlet consumer. By default TRACE is turned off.|false|boolean| +|bridgeEndpoint|If the option is true, HttpProducer will ignore the Exchange.HTTP\_URI header, and use the endpoint's URI for request. 
You may also set the option throwExceptionOnFailure to be false to let the HttpProducer send all the fault response back.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|oauth2ClientId|OAuth2 client id||string| +|oauth2ClientSecret|OAuth2 client secret||string| +|oauth2TokenEndpoint|OAuth2 Token endpoint||string| diff --git a/camel-atom.md b/camel-atom.md new file mode 100644 index 0000000000000000000000000000000000000000..80078b39c37cf31b0efb7a00fe66aa28f0c7bf90 --- /dev/null +++ b/camel-atom.md @@ -0,0 +1,116 @@ +# Atom + +**Since Camel 1.2** + +**Only consumer is supported** + +The Atom component is used for polling Atom feeds. + +Camel will poll the feed every 60 seconds by default. +**Note:** The component currently only supports polling (consuming) +feeds. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-atom + x.x.x + + + +# URI format + + atom://atomUri[?options] + +Where **atomUri** is the URI to the Atom feed to poll. + +# Exchange data format + +Camel will set the In body on the returned `Exchange` with the entries. +Depending on the `splitEntries` flag Camel will either return one +`Entry` or a `List`. + + +++++ + + + + + + + + + + + + + + + + + + + +
+|Option|Value|Behavior|
+|---|---|---|
+|splitEntries|true|Only a single entry from the feed currently being processed is set: exchange.in.body(Entry)|
+|splitEntries|false|The entire list of entries from the feed is set: exchange.in.body(List\<Entry\>)|

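For instance, the two behaviors above could be selected on the endpoint URI like this (a Java DSL sketch using the sample feed from the Consumer Example; these routes are illustrative, not part of the original page):

```java
// splitEntries=true (the default): each poll emits one Exchange per Entry
from("atom://http://macstrac.blogspot.com/feeds/posts/default?splitEntries=true")
    .log("one entry: ${body}");

// splitEntries=false: each poll emits a single Exchange whose body is a List<Entry>
from("atom://http://macstrac.blogspot.com/feeds/posts/default?splitEntries=false")
    .log("entire feed: ${body}");
```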
+
+Camel can set the `Feed` object on the In header (see the `feedHeader`
+option to disable this).
+
+# Examples
+
+## Consumer Example
+
+In this sample, we poll James Strachan's blog.
+
+    from("atom://http://macstrac.blogspot.com/feeds/posts/default").to("seda:feeds");
+
+In this sample, we filter the feed and send only the blog entries we
+like to a SEDA queue. The sample also shows how to set up Camel
+standalone, not running in any container or using Spring.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|feedUri|The URI to the feed to poll.||string|
+|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean|
+|sortEntries|Sets whether to sort entries by published date. Only works when splitEntries = true.|false|boolean|
+|splitEntries|Sets whether entries should be sent individually or whether the entire feed should be sent as a single message.|true|boolean|
+|throttleEntries|Sets whether all entries identified in a single feed poll should be delivered immediately. If true, only one entry is processed per delay. Only applicable when splitEntries = true.|true|boolean|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 

By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|feedHeader|Sets whether to add the feed object as a header.|true|boolean| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. 
By default each consumer has its own single threaded thread pool.||object|
+|scheduler|To use a cron scheduler from either the camel-spring or camel-quartz component. Use the value spring or quartz for the built-in scheduler.|none|object|
+|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz or Spring based schedulers.||object|
+|startScheduler|Whether the scheduler should be auto started.|true|boolean|
+|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object|
+|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in the JDK for details.|true|boolean|
diff --git a/camel-avro.md b/camel-avro.md
new file mode 100644
index 0000000000000000000000000000000000000000..f54cd9553cd196b3fd387bc8918d912d6e86ec5e
--- /dev/null
+++ b/camel-avro.md
@@ -0,0 +1,212 @@
+# Avro
+
+**Since Camel 2.10**
+
+**Both producer and consumer are supported**
+
+This component provides support for Apache Avro's RPC by providing
+producer and consumer endpoints for using Avro over Netty or HTTP.
+Before Camel 3.2, this functionality was part of the camel-avro
+component.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-avro-rpc</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# Apache Avro Overview
+
+Avro allows you to define message types and a protocol using a JSON-like
+format and then generate Java code for the specified types and messages.
+An example schema is shown below. 

+
+    {"namespace": "org.apache.camel.avro.generated",
+     "protocol": "KeyValueProtocol",
+
+     "types": [
+         {"name": "Key", "type": "record",
+          "fields": [
+              {"name": "key", "type": "string"}
+          ]
+         },
+         {"name": "Value", "type": "record",
+          "fields": [
+              {"name": "value", "type": "string"}
+          ]
+         }
+     ],
+
+     "messages": {
+         "put": {
+             "request": [{"name": "key", "type": "Key"}, {"name": "value", "type": "Value"}],
+             "response": "null"
+         },
+         "get": {
+             "request": [{"name": "key", "type": "Key"}],
+             "response": "Value"
+         }
+     }
+    }
+
+You can easily generate classes from a schema using Maven, Ant, etc.
+More details can be found in the [Apache Avro
+documentation](http://avro.apache.org/docs/current/).
+
+However, Avro doesn't enforce a schema-first approach, and you can
+create a schema for your existing classes. You can use existing protocol
+interfaces to make RPC calls. You should use an interface for the
+protocol itself and POJO beans or primitive/String classes for parameter
+and result types. Here is an example of the class that corresponds to
+the schema above:
+
+    package org.apache.camel.avro.reflection;
+
+    public interface KeyValueProtocol {
+        void put(String key, Value value);
+        Value get(String key);
+    }
+
+    class Value {
+        private String value;
+        public String getValue() { return value; }
+        public void setValue(String value) { this.value = value; }
+    }
+
+*Note: Existing classes can be used only for RPC (see below), not in the
+data format.*
+
+# Using Avro RPC in Camel
+
+As mentioned above, Avro also provides RPC support over multiple
+transports such as HTTP and Netty. Camel provides consumers and
+producers for these two transports.
+
+    avro:[transport]:[host]:[port][?options]
+
+The supported transport values are currently http or netty.
+
+You can specify the message name right in the URI:
+
+    avro:[transport]:[host]:[port][/messageName][?options]
+
+For consumers, this allows you to have multiple routes attached to the
+same socket. 

Dispatching to the correct route will be done by the avro +component automatically. Route with no messageName specified (if any) +will be used as default. + +When using camel producers for avro ipc, the "in" message body needs to +contain the parameters of the operation specified in the avro protocol. +The response will be added in the body of the "out" message. + +In a similar manner when using camel avro consumers for avro ipc, the +request parameters will be placed inside the "in" message body of the +created exchange. Once the exchange is processed, the body of the "out" +message will be sent as a response. + +**Note:** By default, consumer parameters are wrapped into an array. If +you’ve got only one parameter, **since 2.12** you can use +`singleParameter` URI option to receive it directly in the "in" message +body without array wrapping. + +# Examples + +An example of using camel avro producers via http: + + + + + + + +In the example above you need to fill `CamelAvroMessageName` header. + +You can use the following syntax to call constant messages: + + + + + + + +An example of consuming messages using camel avro consumers via netty: + + + + + + ${in.headers.CamelAvroMessageName == 'put'} + + + + ${in.headers.CamelAvroMessageName == 'get'} + + + + + +You can set up two distinct routes to perform the same task: + + + + + + + + + + +In the example above, get takes only one parameter, so `singleParameter` +is used and `getProcessor` will receive Value class directly in body, +while `putProcessor` will receive an array of size 2 with `String` key +and `Value` value filled as array contents. + +# Avro via HTTP SPI + +The Avro RPC component offers the +`org.apache.camel.component.avro.spi.AvroRpcHttpServerFactory` service +provider interface (SPI) so that various platforms can provide their own +implementation based on their native HTTP server. 
+ +The default implementation available in +`org.apache.camel:camel-avro-jetty` is based on +`org.apache.avro:avro-ipc-jetty`. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|protocol|Avro protocol to use||object| +|protocolClassName|Avro protocol to use defined by the FQN class name||string| +|protocolLocation|Avro protocol location||string| +|reflectionProtocol|If the protocol object provided is reflection protocol. Should be used only with protocol parameter because for protocolClassName protocol type will be auto-detected|false|boolean| +|singleParameter|If true, consumer parameter won't be wrapped into an array. Will fail if protocol specifies more than one parameter for the message|false|boolean| +|uriAuthority|Authority to use (username and password)||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|configuration|To use a shared AvroConfiguration to configure options once||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|transport|Transport to use, can be either http or netty||object| +|port|Port number to use||integer| +|host|Hostname to use||string| +|messageName|The name of the message to send.||string| +|protocol|Avro protocol to use||object| +|protocolClassName|Avro protocol to use defined by the FQN class name||string| +|protocolLocation|Avro protocol location||string| +|reflectionProtocol|If the protocol object provided is reflection protocol. Should be used only with protocol parameter because for protocolClassName protocol type will be auto-detected|false|boolean| +|singleParameter|If true, consumer parameter won't be wrapped into an array. Will fail if protocol specifies more than one parameter for the message|false|boolean| +|uriAuthority|Authority to use (username and password)||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-aws-bedrock-agent-runtime.md b/camel-aws-bedrock-agent-runtime.md new file mode 100644 index 0000000000000000000000000000000000000000..92efcf85a82071442f94ebad375ea2e0919513f3 --- /dev/null +++ b/camel-aws-bedrock-agent-runtime.md @@ -0,0 +1,136 @@ +# Aws-bedrock-agent-runtime + +**Since Camel 4.5** + +**Only producer is supported** + +The AWS2 Bedrock component supports invoking a supported LLM model from +[AWS Bedrock](https://aws.amazon.com/bedrock/) service. 
+
+Prerequisites
+
+You must have a valid Amazon Web Services developer account, and be
+signed up to use Amazon Bedrock. More information is available at
+[Amazon Bedrock](https://aws.amazon.com/bedrock/).
+
+# URI Format
+
+    aws-bedrock-agent-runtime://label[?options]
+
+You can append query options to the URI in the following format:
+
+`?option1=value&option2=value&...`
+
+Required Bedrock component options
+
+You have to provide the bedrockAgentRuntimeClient in the Registry or
+your accessKey and secretKey to access the [Amazon
+Bedrock](https://aws.amazon.com/bedrock/) service.
+
+# Usage
+
+## Static credentials, Default Credential Provider and Profile Credentials Provider
+
+You can avoid using explicit static credentials by setting the
+useDefaultCredentialsProvider option to true.
+
+The order of evaluation for the Default Credentials Provider is the
+following:
+
+- Java system properties - `aws.accessKeyId` and `aws.secretKey`.
+
+- Environment variables - `AWS_ACCESS_KEY_ID` and
+  `AWS_SECRET_ACCESS_KEY`.
+
+- Web Identity Token from AWS STS.
+
+- The shared credentials and config files.
+
+- Amazon ECS container credentials - loaded from Amazon ECS if the
+  environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is
+  set.
+
+- Amazon EC2 Instance profile credentials.
+
+You can also use the Profile Credentials Provider by setting the
+useProfileCredentialsProvider option to true and profileCredentialsName
+to the profile name.
+
+Only one of static, default, and profile credentials can be used at the
+same time.
+
+For more information, see the [AWS credentials
+documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html).
+
+# Dependencies
+
+Maven users will need to add the following dependency to their pom.xml. 

+
+**pom.xml**
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-aws-bedrock</artifactId>
+        <version>${camel-version}</version>
+    </dependency>
+
+where `${camel-version}` must be replaced by the actual version of
+Camel.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|configuration|Component configuration||object|
+|knowledgeBaseId|Define the Knowledge Base Id we are going to use||string|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|modelId|Define the model Id we are going to use||string|
+|operation|The operation to perform||object|
+|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean|
+|pojoRequest|If we want to use a POJO request as body or not|false|boolean|
+|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name|false|string|
+|region|The region in which the Bedrock Agent Runtime client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1), that is, the name returned by Region.EU\_WEST\_1.id()||string|
+|uriEndpointOverride|Set the overriding URI endpoint. 

This option needs to be used in combination with overrideEndpoint option||string| +|useDefaultCredentialsProvider|Set whether the Bedrock Agent Runtime client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the Bedrock Agent Runtime client should expect to load credentials through a profile credentials provider.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|bedrockAgentRuntimeClient|To use an existing configured AWS Bedrock Agent Runtime client||object| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. 
You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|proxyHost|To define a proxy host when instantiating the Bedrock Agent Runtime client||string| +|proxyPort|To define a proxy port when instantiating the Bedrock Agent Runtime client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the Bedrock Agent Runtime client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useSessionCredentials|Set whether the Bedrock Agent Runtime client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in Bedrock.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|label|Logical name||string| +|knowledgeBaseId|Define the Knowledge Base Id we are going to use||string| +|modelId|Define the model Id we are going to use||string| +|operation|The operation to perform||object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name|false|string| +|region|The region in which Bedrock Agent Runtime client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. 
This option needs to be used in combination with overrideEndpoint option||string| +|useDefaultCredentialsProvider|Set whether the Bedrock Agent Runtime client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the Bedrock Agent Runtime client should expect to load credentials through a profile credentials provider.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|bedrockAgentRuntimeClient|To use an existing configured AWS Bedrock Agent Runtime client||object| +|proxyHost|To define a proxy host when instantiating the Bedrock Agent Runtime client||string| +|proxyPort|To define a proxy port when instantiating the Bedrock Agent Runtime client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the Bedrock Agent Runtime client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useSessionCredentials|Set whether the Bedrock Agent Runtime client should expect to use Session Credentials. 
This is useful in a situation in which the user needs to assume an IAM role for doing operations in Bedrock.|false|boolean|
diff --git a/camel-aws-bedrock-agent.md b/camel-aws-bedrock-agent.md
new file mode 100644
index 0000000000000000000000000000000000000000..078547505875fd15e8f8e39c7d7d1bab404fedba
--- /dev/null
+++ b/camel-aws-bedrock-agent.md
@@ -0,0 +1,160 @@
+# Aws-bedrock-agent
+
+**Since Camel 4.5**
+
+**Both producer and consumer are supported**
+
+The AWS2 Bedrock component supports invoking a supported LLM model from
+the [AWS Bedrock](https://aws.amazon.com/bedrock/) service.
+
+Prerequisites
+
+You must have a valid Amazon Web Services developer account, and be
+signed up to use Amazon Bedrock. More information is available at
+[Amazon Bedrock](https://aws.amazon.com/bedrock/).
+
+# URI Format
+
+    aws-bedrock-agent://label[?options]
+
+You can append query options to the URI in the following format:
+
+`?option1=value&option2=value&...`
+
+Required Bedrock component options
+
+You have to provide the bedrockAgentClient in the Registry or your
+accessKey and secretKey to access the [Amazon
+Bedrock](https://aws.amazon.com/bedrock/) service.
+
+# Usage
+
+## Static credentials, Default Credential Provider and Profile Credentials Provider
+
+You can avoid using explicit static credentials by setting the
+useDefaultCredentialsProvider option to true.
+
+The order of evaluation for the Default Credentials Provider is the
+following:
+
+- Java system properties - `aws.accessKeyId` and `aws.secretKey`.
+
+- Environment variables - `AWS_ACCESS_KEY_ID` and
+  `AWS_SECRET_ACCESS_KEY`.
+
+- Web Identity Token from AWS STS.
+
+- The shared credentials and config files.
+
+- Amazon ECS container credentials - loaded from Amazon ECS if the
+  environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is
+  set.
+
+- Amazon EC2 Instance profile credentials. 

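The chain above is what the AWS SDK for Java v2 `DefaultCredentialsProvider` implements. As a hedged sketch (assuming the SDK's `bedrockagent` module is on the classpath; the region is illustrative), a client built this way could be bound in the registry for the component to pick up:

```java
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.bedrockagent.BedrockAgentClient;

// Build a client that resolves credentials via the chain described above
// (system properties, environment variables, STS web identity, shared
// config files, ECS container credentials, EC2 instance profile).
BedrockAgentClient agentClient = BedrockAgentClient.builder()
        .credentialsProvider(DefaultCredentialsProvider.create())
        .region(Region.EU_WEST_1) // illustrative region
        .build();
```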
+
+You can also use the Profile Credentials Provider by setting the
+useProfileCredentialsProvider option to true and profileCredentialsName
+to the profile name.
+
+Only one of static, default, and profile credentials can be used at the
+same time.
+
+For more information, see the [AWS credentials
+documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html).
+
+# Dependencies
+
+Maven users will need to add the following dependency to their pom.xml.
+
+**pom.xml**
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-aws-bedrock</artifactId>
+        <version>${camel-version}</version>
+    </dependency>
+
+where `${camel-version}` must be replaced by the actual version of
+Camel.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|configuration|Component configuration||object|
+|dataSourceId|Define the Data source Id we are going to use||string|
+|knowledgeBaseId|Define the Knowledge Base Id we are going to use||string|
+|modelId|Define the model Id we are going to use||string|
+|operation|The operation to perform||object|
+|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean|
+|pojoRequest|If we want to use a POJO request as body or not|false|boolean|
+|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name|false|string|
+|region|The region in which the Bedrock Agent client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1), that is, the name returned by Region.EU\_WEST\_1.id()||string|
+|uriEndpointOverride|Set the overriding URI endpoint. 

This option needs to be used in combination with overrideEndpoint option||string| +|useDefaultCredentialsProvider|Set whether the Bedrock Agent client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the Bedrock Agent client should expect to load credentials through a profile credentials provider.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|ingestionJobId|Define the Ingestion Job Id we want to track||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|bedrockAgentClient|To use an existing configured AWS Bedrock Agent client||object| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|proxyHost|To define a proxy host when instantiating the Bedrock Agent client||string| +|proxyPort|To define a proxy port when instantiating the Bedrock Agent client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the Bedrock Agent client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useSessionCredentials|Set whether the Bedrock Agent client should expect to use Session Credentials. 
This is useful in a situation in which the user needs to assume an IAM role for doing operations in Bedrock.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|label|Logical name||string| +|dataSourceId|Define the Data source Id we are going to use||string| +|knowledgeBaseId|Define the Knowledge Base Id we are going to use||string| +|modelId|Define the model Id we are going to use||string| +|operation|The operation to perform||object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name|false|string| +|region|The region in which Bedrock Agent client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. 
This option needs to be used in combination with overrideEndpoint option||string|
+|useDefaultCredentialsProvider|Set whether the Bedrock Agent client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean|
+|useProfileCredentialsProvider|Set whether the Bedrock Agent client should expect to load credentials through a profile credentials provider.|false|boolean|
+|ingestionJobId|Define the Ingestion Job Id we want to track||string|
+|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and routed in Camel.||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|bedrockAgentClient|To use an existing configured AWS Bedrock Agent client||object|
+|proxyHost|To define a proxy host when instantiating the Bedrock Agent client||string|
+|proxyPort|To define a proxy port when instantiating the Bedrock Agent client||integer|
+|proxyProtocol|To define a proxy protocol when instantiating the Bedrock Agent client|HTTPS|object|
+|backoffErrorThreshold|The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in.||integer|
+|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier should kick in.||integer|
+|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. 
When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| +|accessKey|Amazon AWS Access Key||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useSessionCredentials|Set whether the Bedrock Agent client should expect to use Session Credentials. 
This is useful in a situation in which the user needs to assume an IAM role for doing operations in Bedrock.|false|boolean| diff --git a/camel-aws-bedrock.md b/camel-aws-bedrock.md new file mode 100644 index 0000000000000000000000000000000000000000..fd742b38a72a5a20b72d5f1b53d43138bc896b50 --- /dev/null +++ b/camel-aws-bedrock.md @@ -0,0 +1,830 @@ +# Aws-bedrock + +**Since Camel 4.5** + +**Only producer is supported** + +The AWS2 Bedrock component supports invoking a supported LLM model from +[AWS Bedrock](https://aws.amazon.com/bedrock/) service. + +Prerequisites + +You must have a valid Amazon Web Services developer account, and be +signed up to use Amazon Bedrock. More information is available at +[Amazon Bedrock](https://aws.amazon.com/bedrock/). + +# URI Format + + aws-bedrock://label[?options] + +You can append query options to the URI in the following format: + +`?options=value&option2=value&...` + +Required Bedrock component options + +You have to provide the bedrockRuntimeClient in the Registry or your +accessKey and secretKey to access the [Amazon +Bedrock](https://aws.amazon.com/bedrock/) service. + +# Usage + +## Static credentials, Default Credential Provider and Profile Credentials Provider + +You have the possibility of avoiding the usage of explicit static +credentials by specifying the useDefaultCredentialsProvider option and +set it to true. + +The order of evaluation for Default Credentials Provider is the +following: + +- Java system properties - `aws.accessKeyId` and `aws.secretKey`. + +- Environment variables - `AWS_ACCESS_KEY_ID` and + `AWS_SECRET_ACCESS_KEY`. + +- Web Identity Token from AWS STS. + +- The shared credentials and config files. + +- Amazon ECS container credentials - loaded from the Amazon ECS if the + environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is + set. + +- Amazon EC2 Instance profile credentials. 
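+
+As a minimal sketch (the `direct:prompt` route name and `myLabel` endpoint
+label are illustrative), a producer route that relies on the default
+credentials provider chain described above could look like:
+
+    from("direct:prompt")
+        .to("aws-bedrock:myLabel?useDefaultCredentialsProvider=true&region=us-east-1"
+                + "&operation=invokeTextModel&modelId=amazon.titan-text-express-v1");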
+ +You have also the possibility of using Profile Credentials Provider, by +specifying the useProfileCredentialsProvider option to true and +profileCredentialsName to the profile name. + +Only one of static, default and profile credentials could be used at the +same time. + +For more information about this you can look at [AWS credentials +documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html) + +## Supported AWS Bedrock Models + +- Titan Text Express V1 with id `amazon.titan-text-express-v1` Express + is a large language model for text generation. It is useful for a + wide range of advanced, general language tasks such as open-ended + text generation and conversational chat, as well as support within + Retrieval Augmented Generation (RAG). + +Json schema for request + + { + "$schema": "http://json-schema.org/draft-04/schema#", + "type": "object", + "properties": { + "inputText": { + "type": "string" + }, + "textGenerationConfig": { + "type": "object", + "properties": { + "maxTokenCount": { + "type": "integer" + }, + "stopSequences": { + "type": "array", + "items": [ + { + "type": "string" + } + ] + }, + "temperature": { + "type": "integer" + }, + "topP": { + "type": "integer" + } + }, + "required": [ + "maxTokenCount", + "stopSequences", + "temperature", + "topP" + ] + } + }, + "required": [ + "inputText", + "textGenerationConfig" + ] + } + +- Titan Text Lite V1 with id `amazon.titan-text-lite-v1` Lite is a + light weight efficient model, ideal for fine-tuning of + English-language tasks. 
+ +Json schema for request + + { + "$schema": "http://json-schema.org/draft-04/schema#", + "type": "object", + "properties": { + "inputText": { + "type": "string" + }, + "textGenerationConfig": { + "type": "object", + "properties": { + "maxTokenCount": { + "type": "integer" + }, + "stopSequences": { + "type": "array", + "items": [ + { + "type": "string" + } + ] + }, + "temperature": { + "type": "integer" + }, + "topP": { + "type": "integer" + } + }, + "required": [ + "maxTokenCount", + "stopSequences", + "temperature", + "topP" + ] + } + }, + "required": [ + "inputText", + "textGenerationConfig" + ] + } + +- Titan Image Generator G1 with id `amazon.titan-image-generator-v1` + It generates images from text, and allows users to upload and edit + an existing image. Users can edit an image with a text prompt + (without a mask) or parts of an image with an image mask. You can + extend the boundaries of an image with outpainting, and fill in an + image with inpainting. + +Json schema for request + + { + "$schema": "http://json-schema.org/draft-04/schema#", + "type": "object", + "properties": { + "textToImageParams": { + "type": "object", + "properties": { + "text": { + "type": "string" + }, + "negativeText": { + "type": "string" + } + }, + "required": [ + "text", + "negativeText" + ] + }, + "taskType": { + "type": "string" + }, + "imageGenerationConfig": { + "type": "object", + "properties": { + "cfgScale": { + "type": "integer" + }, + "seed": { + "type": "integer" + }, + "quality": { + "type": "string" + }, + "width": { + "type": "integer" + }, + "height": { + "type": "integer" + }, + "numberOfImages": { + "type": "integer" + } + }, + "required": [ + "cfgScale", + "seed", + "quality", + "width", + "height", + "numberOfImages" + ] + } + }, + "required": [ + "textToImageParams", + "taskType", + "imageGenerationConfig" + ] + } + +- Titan Embeddings G1 with id `amazon.titan-embed-text-v1` The Amazon + Titan Embeddings G1 - Text – Text v1.2 can intake up to 8k tokens + and 
outputs a vector of 1,536 dimensions. The model also works in + 25+ different language + +Json schema for request + + { + "$schema": "http://json-schema.org/draft-04/schema#", + "type": "object", + "properties": { + "inputText": { + "type": "string" + } + }, + "required": [ + "inputText" + ] + } + +- Jurassic2-Ultra with id `ai21.j2-ultra-v1` Jurassic-2 Ultra is + AI21’s most powerful model for complex tasks that require advanced + text generation and comprehension. + +Json schema for request + + { + "$schema": "http://json-schema.org/draft-04/schema#", + "type": "object", + "properties": { + "prompt": { + "type": "string" + }, + "maxTokens": { + "type": "integer" + }, + "temperature": { + "type": "integer" + }, + "topP": { + "type": "integer" + }, + "stopSequences": { + "type": "array", + "items": [ + { + "type": "string" + } + ] + }, + "presencePenalty": { + "type": "object", + "properties": { + "scale": { + "type": "integer" + } + }, + "required": [ + "scale" + ] + }, + "frequencyPenalty": { + "type": "object", + "properties": { + "scale": { + "type": "integer" + } + }, + "required": [ + "scale" + ] + } + }, + "required": [ + "prompt", + "maxTokens", + "temperature", + "topP", + "stopSequences", + "presencePenalty", + "frequencyPenalty" + ] + } + +- Jurassic2-Mid with id `ai21.j2-mid-v1` Jurassic-2 Mid is less + powerful than Ultra, yet carefully designed to strike the right + balance between exceptional quality and affordability. 
+ +Json schema for request + + { + "$schema": "http://json-schema.org/draft-04/schema#", + "type": "object", + "properties": { + "prompt": { + "type": "string" + }, + "maxTokens": { + "type": "integer" + }, + "temperature": { + "type": "integer" + }, + "topP": { + "type": "integer" + }, + "stopSequences": { + "type": "array", + "items": [ + { + "type": "string" + } + ] + }, + "presencePenalty": { + "type": "object", + "properties": { + "scale": { + "type": "integer" + } + }, + "required": [ + "scale" + ] + }, + "frequencyPenalty": { + "type": "object", + "properties": { + "scale": { + "type": "integer" + } + }, + "required": [ + "scale" + ] + } + }, + "required": [ + "prompt", + "maxTokens", + "temperature", + "topP", + "stopSequences", + "presencePenalty", + "frequencyPenalty" + ] + } + +- Claude Instant V1.2 with id `anthropic.claude-instant-v1` A fast, + affordable yet still very capable model, which can handle a range of + tasks including casual dialogue, text analysis, summarization, and + document question-answering. + +Json schema for request + + { + "$schema": "http://json-schema.org/draft-04/schema#", + "type": "object", + "properties": { + "prompt": { + "type": "string" + }, + "max_tokens_to_sample": { + "type": "integer" + }, + "stop_sequences": { + "type": "array", + "items": [ + { + "type": "string" + } + ] + }, + "temperature": { + "type": "number" + }, + "top_p": { + "type": "integer" + }, + "top_k": { + "type": "integer" + }, + "anthropic_version": { + "type": "string" + } + }, + "required": [ + "prompt", + "max_tokens_to_sample", + "stop_sequences", + "temperature", + "top_p", + "top_k", + "anthropic_version" + ] + } + +- Claude 2 with id `anthropic.claude-v2` Anthropic’s highly capable + model across a wide range of tasks from sophisticated dialogue and + creative content generation to detailed instruction following. 
+ +Json schema for request + + { + "$schema": "http://json-schema.org/draft-04/schema#", + "type": "object", + "properties": { + "prompt": { + "type": "string" + }, + "max_tokens_to_sample": { + "type": "integer" + }, + "stop_sequences": { + "type": "array", + "items": [ + { + "type": "string" + } + ] + }, + "temperature": { + "type": "number" + }, + "top_p": { + "type": "integer" + }, + "top_k": { + "type": "integer" + }, + "anthropic_version": { + "type": "string" + } + }, + "required": [ + "prompt", + "max_tokens_to_sample", + "stop_sequences", + "temperature", + "top_p", + "top_k", + "anthropic_version" + ] + } + +- Claude 2.1 with id `anthropic.claude-v2:1` An update to Claude 2 + that features double the context window, plus improvements across + reliability, hallucination rates, and evidence-based accuracy in + long document and RAG contexts. + +Json schema for request + + { + "$schema": "http://json-schema.org/draft-04/schema#", + "type": "object", + "properties": { + "prompt": { + "type": "string" + }, + "max_tokens_to_sample": { + "type": "integer" + }, + "stop_sequences": { + "type": "array", + "items": [ + { + "type": "string" + } + ] + }, + "temperature": { + "type": "number" + }, + "top_p": { + "type": "integer" + }, + "top_k": { + "type": "integer" + }, + "anthropic_version": { + "type": "string" + } + }, + "required": [ + "prompt", + "max_tokens_to_sample", + "stop_sequences", + "temperature", + "top_p", + "top_k", + "anthropic_version" + ] + } + +- Claude 3 Sonnet with id `anthropic.claude-3-sonnet-20240229-v1:0` + Claude 3 Sonnet by Anthropic strikes the ideal balance between + intelligence and speed—particularly for enterprise workloads. 
+ +Json schema for request + + { + "$schema": "http://json-schema.org/draft-04/schema#", + "type": "object", + "properties": { + "messages": { + "type": "array", + "items": [ + { + "type": "object", + "properties": { + "role": { + "type": "string" + }, + "content": { + "type": "array", + "items": [ + { + "type": "object", + "properties": { + "type": { + "type": "string" + }, + "text": { + "type": "string" + } + }, + "required": [ + "type", + "text" + ] + } + ] + } + }, + "required": [ + "role", + "content" + ] + } + ] + }, + "max_tokens": { + "type": "integer" + }, + "anthropic_version": { + "type": "string" + } + }, + "required": [ + "messages", + "max_tokens", + "anthropic_version" + ] + } + +- Claude 3 Haiku with id `anthropic.claude-3-haiku-20240307-v1:0` + Claude 3 Haiku is Anthropic’s fastest, most compact model for + near-instant responsiveness. It answers simple queries and requests + with speed. + +Json schema for request + + { + "$schema": "http://json-schema.org/draft-04/schema#", + "type": "object", + "properties": { + "messages": { + "type": "array", + "items": [ + { + "type": "object", + "properties": { + "role": { + "type": "string" + }, + "content": { + "type": "array", + "items": [ + { + "type": "object", + "properties": { + "type": { + "type": "string" + }, + "text": { + "type": "string" + } + }, + "required": [ + "type", + "text" + ] + } + ] + } + }, + "required": [ + "role", + "content" + ] + } + ] + }, + "max_tokens": { + "type": "integer" + }, + "anthropic_version": { + "type": "string" + } + }, + "required": [ + "messages", + "max_tokens", + "anthropic_version" + ] + } + +## Bedrock Producer operations + +Camel-AWS Bedrock component provides the following operation on the +producer side: + +- invokeTextModel + +- invokeImageModel + +- invokeEmbeddingsModel + +# Producer Examples + +- invokeTextModel: this operation will invoke a model from Bedrock. + This is an example for both Titan Express and Titan Lite. 
+
+
+
+    from("direct:invoke")
+        .to("aws-bedrock://test?bedrockRuntimeClient=#amazonBedrockRuntimeClient&operation=invokeTextModel&modelId="
+                + BedrockModels.TITAN_TEXT_EXPRESS_V1.model);
+
+and you can then send to the direct endpoint something like
+
+    final Exchange result = template.send("direct:invoke", exchange -> {
+        ObjectMapper mapper = new ObjectMapper();
+        ObjectNode rootNode = mapper.createObjectNode();
+        rootNode.put("inputText",
+                "User: Generate synthetic data for daily product sales in various categories - include row number, product name, category, date of sale and price. Produce output in JSON format. Count records and ensure there are no more than 5.");
+
+        ArrayNode stopSequences = mapper.createArrayNode();
+        stopSequences.add("User:");
+        ObjectNode childNode = mapper.createObjectNode();
+        childNode.put("maxTokenCount", 1024);
+        childNode.put("stopSequences", stopSequences);
+        childNode.put("temperature", 0).put("topP", 1);
+
+        rootNode.put("textGenerationConfig", childNode);
+        exchange.getMessage().setBody(mapper.writer().writeValueAsString(rootNode));
+        exchange.getMessage().setHeader(BedrockConstants.MODEL_CONTENT_TYPE, "application/json");
+        exchange.getMessage().setHeader(BedrockConstants.MODEL_ACCEPT_CONTENT_TYPE, "application/json");
+    });
+
+where template is a ProducerTemplate.
+
+- invokeImageModel: this operation will invoke a model from Bedrock.
+  This is an example for the Titan Image Generator G1 model. 
+
+
+
+    from("direct:invoke")
+        .to("aws-bedrock://test?bedrockRuntimeClient=#amazonBedrockRuntimeClient&operation=invokeImageModel&modelId="
+                + BedrockModels.TITAN_IMAGE_GENERATOR_V1.model)
+        .split(body())
+        .unmarshal().base64()
+        .setHeader("CamelFileName", simple("image-${random(128)}.png")).to("file:target/generated_images");
+
+and you can then send to the direct endpoint something like
+
+    final Exchange result = template.send("direct:send_titan_image", exchange -> {
+        ObjectMapper mapper = new ObjectMapper();
+        ObjectNode rootNode = mapper.createObjectNode();
+        ObjectNode textParameter = mapper.createObjectNode();
+        textParameter.putIfAbsent("text",
+                new TextNode("A Sci-fi camel running in the desert"));
+        rootNode.putIfAbsent("textToImageParams", textParameter);
+        rootNode.putIfAbsent("taskType", new TextNode("TEXT_IMAGE"));
+        ObjectNode childNode = mapper.createObjectNode();
+        childNode.putIfAbsent("numberOfImages", new IntNode(3));
+        childNode.putIfAbsent("quality", new TextNode("standard"));
+        childNode.putIfAbsent("cfgScale", new IntNode(8));
+        childNode.putIfAbsent("height", new IntNode(512));
+        childNode.putIfAbsent("width", new IntNode(512));
+        childNode.putIfAbsent("seed", new IntNode(0));
+
+        rootNode.putIfAbsent("imageGenerationConfig", childNode);
+
+        exchange.getMessage().setBody(mapper.writer().writeValueAsString(rootNode));
+        exchange.getMessage().setHeader(BedrockConstants.MODEL_CONTENT_TYPE, "application/json");
+        exchange.getMessage().setHeader(BedrockConstants.MODEL_ACCEPT_CONTENT_TYPE, "application/json");
+    });
+
+where template is a ProducerTemplate.
+
+- invokeEmbeddingsModel: this operation will invoke an Embeddings
+  model from Bedrock. This is an example for Titan Embeddings G1. 
+
+
+
+    from("direct:send_titan_embeddings")
+        .to("aws-bedrock:label?useDefaultCredentialsProvider=true&region=us-east-1&operation=invokeEmbeddingsModel&modelId="
+                + BedrockModels.TITAN_EMBEDDINGS_G1.model)
+        .to(result);
+
+and you can then send to the direct endpoint something like
+
+    final Exchange result = template.send("direct:send_titan_embeddings", exchange -> {
+        ObjectMapper mapper = new ObjectMapper();
+        ObjectNode rootNode = mapper.createObjectNode();
+        rootNode.putIfAbsent("inputText",
+                new TextNode("A Sci-fi camel running in the desert"));
+
+        exchange.getMessage().setBody(mapper.writer().writeValueAsString(rootNode));
+        exchange.getMessage().setHeader(BedrockConstants.MODEL_CONTENT_TYPE, "application/json");
+        exchange.getMessage().setHeader(BedrockConstants.MODEL_ACCEPT_CONTENT_TYPE, "*/*");
+    });
+
+where template is a ProducerTemplate.
+
+# Dependencies
+
+Maven users will need to add the following dependency to their pom.xml.
+
+**pom.xml**
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-aws-bedrock</artifactId>
+        <version>${camel-version}</version>
+    </dependency>
+
+where `${camel-version}` must be replaced by the actual version of
+Camel.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|configuration|Component configuration||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|modelId|Define the model Id we are going to use||string| +|operation|The operation to perform||object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name|false|string| +|region|The region in which Bedrock client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|useDefaultCredentialsProvider|Set whether the Bedrock client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the Bedrock client should expect to load credentials through a profile credentials provider.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|bedrockRuntimeClient|To use an existing configured AWS Bedrock Runtime client||object| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|proxyHost|To define a proxy host when instantiating the Bedrock client||string| +|proxyPort|To define a proxy port when instantiating the Bedrock client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the Bedrock client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useSessionCredentials|Set whether the Bedrock client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in Bedrock.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|label|Logical name||string| +|modelId|Define the model Id we are going to use||string| +|operation|The operation to perform||object| +|overrideEndpoint|Set the need for overriding the endpoint. 
This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name|false|string| +|region|The region in which Bedrock client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|useDefaultCredentialsProvider|Set whether the Bedrock client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the Bedrock client should expect to load credentials through a profile credentials provider.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|bedrockRuntimeClient|To use an existing configured AWS Bedrock Runtime client||object| +|proxyHost|To define a proxy host when instantiating the Bedrock client||string| +|proxyPort|To define a proxy port when instantiating the Bedrock client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the Bedrock client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useSessionCredentials|Set whether the Bedrock client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in Bedrock.|false|boolean| diff --git a/camel-aws-cloudtrail.md b/camel-aws-cloudtrail.md new file mode 100644 index 0000000000000000000000000000000000000000..e8b664bb6fc68d01c0613c2190453e3a0d5a13cd --- /dev/null +++ b/camel-aws-cloudtrail.md @@ -0,0 +1,143 @@ +# Aws-cloudtrail + +**Since Camel 3.19** + +**Only consumer is supported** + +The AWS Cloudtrail component supports receiving events from Amazon +Cloudtrail service. + +Prerequisites + +You must have a valid Amazon Web Services developer account, and be +signed up to use Amazon Cloudtrail. More information is available at +[AWS Cloudtrail](https://aws.amazon.com/cloudtrail/) + +# Static credentials, Default Credential Provider and Profile Credentials Provider + +You have the possibility of avoiding the usage of explicit static +credentials by specifying the useDefaultCredentialsProvider option and +set it to true. 
+ +The order of evaluation for Default Credentials Provider is the +following: + +- Java system properties - `aws.accessKeyId` and `aws.secretKey`. + +- Environment variables - `AWS_ACCESS_KEY_ID` and + `AWS_SECRET_ACCESS_KEY`. + +- Web Identity Token from AWS STS. + +- The shared credentials and config files. + +- Amazon ECS container credentials - loaded from the Amazon ECS if the + environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is + set. + +- Amazon EC2 Instance profile credentials. + +You have also the possibility of using Profile Credentials Provider, by +specifying the useProfileCredentialsProvider option to true and +profileCredentialsName to the profile name. + +Only one of static, default and profile credentials could be used at the +same time. + +For more information about this you can look at [AWS credentials +documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html) + +# Cloudtrail Events consumed + +The Cloudtrail consumer will use an API method called LookupEvents. + +This method will only take into account management events like +create/update/delete of resources and Cloudtrail insight events where +enabled. + +This means you won’t consume the events registered in the Cloudtrail +logs stored on S3, in case of creation of a new Trail. + +This is important to notice, and it must be taken into account when +using this component. + +# URI Format + + aws-cloudtrail://label[?options] + +The stream needs to be created prior to it being used. 
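+
+For example, a consumer route that polls for Cloudtrail events every 30
+seconds could be sketched as follows (the `myLabel` label is illustrative):
+
+    from("aws-cloudtrail://myLabel?useDefaultCredentialsProvider=true&region=us-east-1&maxResults=5&delay=30000")
+        .log("Cloudtrail event: ${body}");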
+ +You can append query options to the URI in the following format: + +`?options=value&option2=value&...` + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|configuration|Component configuration||object| +|eventSource|Specify an event source to select events||string| +|maxResults|Maximum number of records that will be fetched in each poll|1|integer| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option|false|boolean| +|region|The region in which Cloudtrail client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|cloudTrailClient|Amazon Cloudtrail client to use for all requests for this endpoint||object| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|proxyHost|To define a proxy host when instantiating the Cloudtrail client||string| +|proxyPort|To define a proxy port when instantiating the Cloudtrail client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the Cloudtrail client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider this parameter will set the profile name.||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume a IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the Cloudtrail client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the Cloudtrail client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the CloudTrail client should expect to use Session Credentials. 
This is useful in situation in which the user needs to assume a IAM role for doing operations in CloudTrail.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|label|A label for indexing cloudtrail endpoints||string| +|eventSource|Specify an event source to select events||string| +|maxResults|Maximum number of records that will be fetched in each poll|1|integer| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option|false|boolean| +|region|The region in which Cloudtrail client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. 
Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|cloudTrailClient|Amazon Cloudtrail client to use for all requests for this endpoint||object| +|proxyHost|To define a proxy host when instantiating the Cloudtrail client||string| +|proxyPort|To define a proxy port when instantiating the Cloudtrail client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the Cloudtrail client|HTTPS|object| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. 
If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider this parameter will set the profile name.||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume a IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the Cloudtrail client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the Cloudtrail client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the CloudTrail client should expect to use Session Credentials. 
This is useful in a situation in which the user needs to assume an IAM role for doing operations in CloudTrail.|false|boolean|
diff --git a/camel-aws-config.md b/camel-aws-config.md
new file mode 100644
index 0000000000000000000000000000000000000000..ce74a14b8b6facd46196317a7d0f4a08078fda82
--- /dev/null
+++ b/camel-aws-config.md
@@ -0,0 +1,117 @@
+# Aws-config
+
+**Since Camel 4.3**
+
+**Only producer is supported**
+
+The AWS Config component supports creating and deleting config rules on
+the [AWS Config](https://aws.amazon.com/config/) service.
+
+Prerequisites
+
+You must have a valid Amazon Web Services developer account, and be
+signed up to use Amazon Config. More information is available at [Amazon
+Config](https://aws.amazon.com/config/).
+
+# URI Format
+
+    aws-config://label[?options]
+
+You can append query options to the URI in the following format:
+
+`?options=value&option2=value&...`
+
+Required Config component options
+
+You have to provide the ConfigClient in the Registry or your accessKey
+and secretKey to access the [Amazon
+Config](https://aws.amazon.com/config/) service.
+
+# Usage
+
+## Static credentials, Default Credential Provider and Profile Credentials Provider
+
+You can avoid using explicit static credentials by setting the
+useDefaultCredentialsProvider option to true.
+
+The order of evaluation for Default Credentials Provider is the
+following:
+
+- Java system properties - `aws.accessKeyId` and `aws.secretKey`.
+
+- Environment variables - `AWS_ACCESS_KEY_ID` and
+  `AWS_SECRET_ACCESS_KEY`.
+
+- Web Identity Token from AWS STS.
+
+- The shared credentials and config files.
+
+- Amazon ECS container credentials - loaded from the Amazon ECS if the
+  environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is
+  set.
+
+- Amazon EC2 Instance profile credentials.
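As a sketch, a producer endpoint relying on the default credentials provider chain could look like this (the `conf` label is illustrative, and the operation value is an assumption, not confirmed by this page):

```java
// Credentials are resolved by the default provider chain rather than
// being passed as accessKey/secretKey URI options.
from("direct:rule")
    .to("aws-config://conf?operation=putConfigRule&useDefaultCredentialsProvider=true&region=eu-west-1");
```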
+ +You have also the possibility of using Profile Credentials Provider, by +specifying the useProfileCredentialsProvider option to true and +profileCredentialsName to the profile name. + +Only one of static, default and profile credentials could be used at the +same time. + +For more information about this you can look at [AWS credentials +documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html) + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|configuration|Component configuration||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|operation|The operation to perform||object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|region|The region in which the Config client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|configClient|Amazon AWS Config Client instance||object| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|proxyHost|To define a proxy host when instantiating the Config client||string| +|proxyPort|To define a proxy port when instantiating the Config client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the Config client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the Config client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the Config client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the Config client should expect to use Session Credentials. 
This is useful in a situation in which the user needs to assume an IAM role for doing operations in Config.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|label|Logical name||string| +|operation|The operation to perform||object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|region|The region in which the Config client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|configClient|Amazon AWS Config Client instance||object|
+|proxyHost|To define a proxy host when instantiating the Config client||string|
+|proxyPort|To define a proxy port when instantiating the Config client||integer|
+|proxyProtocol|To define a proxy protocol when instantiating the Config client|HTTPS|object|
+|accessKey|Amazon AWS Access Key||string|
+|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string|
+|secretKey|Amazon AWS Secret Key||string|
+|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string|
+|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean|
+|useDefaultCredentialsProvider|Set whether the Config client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean|
+|useProfileCredentialsProvider|Set whether the Config client should expect to load credentials through a profile credentials provider.|false|boolean|
+|useSessionCredentials|Set whether the Config client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in Config.|false|boolean|
diff --git a/camel-aws-secrets-manager.md b/camel-aws-secrets-manager.md
new file mode 100644
index 0000000000000000000000000000000000000000..987bdcbed1017214a0242e526922cfa6f59b947e
--- /dev/null
+++ b/camel-aws-secrets-manager.md
@@ -0,0 +1,393 @@
+# Aws-secrets-manager
+
+**Since Camel 3.9**
+
+**Only producer is supported**
+
+The AWS Secrets Manager component supports managing secrets on the [AWS
+Secrets Manager](https://aws.amazon.com/secrets-manager/) service.
+
+Prerequisites
+
+You must have a valid Amazon Web Services developer account, and be
+signed up to use Amazon Secrets Manager. More information is available
+at [AWS Secrets Manager](https://aws.amazon.com/secrets-manager/).
+
+# URI Format
+
+    aws-secrets-manager://label[?options]
+
+You can append query options to the URI in the following format:
+
+`?options=value&option2=value&...`
+
+## Static credentials, Default Credential Provider and Profile Credentials Provider
+
+You can avoid using explicit static credentials by setting the
+useDefaultCredentialsProvider option to true.
+
+The order of evaluation for Default Credentials Provider is the
+following:
+
+- Java system properties - `aws.accessKeyId` and `aws.secretKey`.
+
+- Environment variables - `AWS_ACCESS_KEY_ID` and
+  `AWS_SECRET_ACCESS_KEY`.
+
+- Web Identity Token from AWS STS.
+
+- The shared credentials and config files.
+
+- Amazon ECS container credentials - loaded from the Amazon ECS if the
+  environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is
+  set.
+
+- Amazon EC2 Instance profile credentials.
+
+You can also use the Profile Credentials Provider, by setting the
+useProfileCredentialsProvider option to true and
+profileCredentialsName to the profile name.
+
+Only one of static, default and profile credentials can be used at the
+same time.
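As a sketch, the same endpoint can be configured either with static credentials or with the default provider chain (the `manager` label, region, and credential values are placeholders):

```java
// Static credentials; RAW() keeps the values from being URI-decoded.
from("direct:listStatic")
    .to("aws-secrets-manager://manager?operation=listSecrets"
        + "&accessKey=RAW(myAccessKey)&secretKey=RAW(mySecretKey)&region=eu-west-1");

// Default credentials provider chain, as described above.
from("direct:listDefault")
    .to("aws-secrets-manager://manager?operation=listSecrets"
        + "&useDefaultCredentialsProvider=true&region=eu-west-1");
```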
+
+For more information about this you can look at [AWS credentials
+documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html)
+
+## Using AWS Secrets Manager Property Function
+
+To use this function, you’ll need to provide credentials to AWS Secrets
+Manager Service as environment variables:
+
+    export CAMEL_VAULT_AWS_ACCESS_KEY=accessKey
+    export CAMEL_VAULT_AWS_SECRET_KEY=secretKey
+    export CAMEL_VAULT_AWS_REGION=region
+
+You can also configure the credentials in the `application.properties`
+file such as:
+
+    camel.vault.aws.accessKey = accessKey
+    camel.vault.aws.secretKey = secretKey
+    camel.vault.aws.region = region
+
+If you want instead to use the [AWS default credentials
+provider](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html),
+you’ll need to provide the following env variables:
+
+    export CAMEL_VAULT_AWS_USE_DEFAULT_CREDENTIALS_PROVIDER=true
+    export CAMEL_VAULT_AWS_REGION=region
+
+You can also configure the credentials in the `application.properties`
+file such as:
+
+    camel.vault.aws.defaultCredentialsProvider = true
+    camel.vault.aws.region = region
+
+It is also possible to specify a particular profile name for accessing
+AWS Secrets Manager:
+
+    export CAMEL_VAULT_AWS_USE_PROFILE_CREDENTIALS_PROVIDER=true
+    export CAMEL_VAULT_AWS_PROFILE_NAME=test-account
+    export CAMEL_VAULT_AWS_REGION=region
+
+You can also configure the credentials in the `application.properties`
+file such as:
+
+    camel.vault.aws.profileCredentialsProvider = true
+    camel.vault.aws.profileName = test-account
+    camel.vault.aws.region = region
+
+At this point, you’ll be able to reference a property in the following
+way:
+
+    <camelContext>
+        <route>
+            <from uri="direct:start"/>
+            <to uri="{{aws:route}}"/>
+        </route>
+    </camelContext>
+
+Where route will be the name of the secret stored in the AWS Secrets
+Manager Service.
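The same placeholder syntax works in the Java DSL; a minimal sketch (the secret named `route` is assumed to hold an endpoint URI):

```java
// "{{aws:route}}" is resolved against AWS Secrets Manager when the
// route is created, so the secret value becomes the target endpoint.
from("direct:start")
    .to("{{aws:route}}");
```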
+
+You could specify a default value in case the secret is not present on
+AWS Secret Manager:
+
+    <camelContext>
+        <route>
+            <from uri="direct:start"/>
+            <to uri="{{aws:route:default}}"/>
+        </route>
+    </camelContext>
+
+In this case, if the secret doesn’t exist, the property will fall back
+to "default" as value.
+
+Also, you are able to get a particular field of the secret, if you have,
+for example, a secret named database of this form:
+
+    {
+      "username": "admin",
+      "password": "password123",
+      "engine": "postgres",
+      "host": "127.0.0.1",
+      "port": "3128",
+      "dbname": "db"
+    }
+
+You’re able to get a single secret field value in your route, for
+example:
+
+    <camelContext>
+        <route>
+            <from uri="direct:start"/>
+            <log message="Username is {{aws:database/username}}"/>
+        </route>
+    </camelContext>
+
+Or re-use the property as part of an endpoint.
+
+You could specify a default value in case the particular field of the
+secret is not present on AWS Secret Manager:
+
+    <camelContext>
+        <route>
+            <from uri="direct:start"/>
+            <log message="Username is {{aws:database/username:admin}}"/>
+        </route>
+    </camelContext>
+
+In this case, if the secret doesn’t exist or the secret exists, but the
+username field is not part of the secret, the property will fall back to
+"admin" as value.
+
+There is also a syntax to get a particular version of the secret for
+both approaches, with field/default value specified or only with the
+secret:
+
+    <camelContext>
+        <route>
+            <from uri="direct:start"/>
+            <to uri="{{aws:route@bf9b4f4b-8e63-43fd-a73c-3e2d3748b451}}"/>
+        </route>
+    </camelContext>
+
+This approach will return the RAW route secret with the version
+*bf9b4f4b-8e63-43fd-a73c-3e2d3748b451*.
+
+    <camelContext>
+        <route>
+            <from uri="direct:start"/>
+            <to uri="{{aws:route:default@bf9b4f4b-8e63-43fd-a73c-3e2d3748b451}}"/>
+        </route>
+    </camelContext>
+
+This approach will return the route secret value with version
+*bf9b4f4b-8e63-43fd-a73c-3e2d3748b451* or default value in case the
+secret doesn’t exist or the version doesn’t exist.
+
+    <camelContext>
+        <route>
+            <from uri="direct:start"/>
+            <log message="Username is {{aws:database/username:admin@bf9b4f4b-8e63-43fd-a73c-3e2d3748b451}}"/>
+        </route>
+    </camelContext>
+
+This approach will return the username field of the database secret with
+version *bf9b4f4b-8e63-43fd-a73c-3e2d3748b451* or admin in case the
+secret doesn’t exist or the version doesn’t exist.
+
+For the moment, the rotation function, if enabled, is not taken into
+account, but support for it is planned.
+
+The only requirement is adding the camel-aws-secrets-manager jar to your
+Camel application.
+
+## Automatic Camel context reloading on Secret Refresh
+
+Reloading the Camel context on a Secret Refresh can be enabled by
+specifying the usual credentials (the same used for the AWS Secrets
+Manager Property Function).
+
+With environment variables:
+
+    export CAMEL_VAULT_AWS_USE_DEFAULT_CREDENTIALS_PROVIDER=true
+    export CAMEL_VAULT_AWS_REGION=region
+
+or as plain Camel main properties:
+
+    camel.vault.aws.defaultCredentialsProvider = true
+    camel.vault.aws.region = region
+
+Or by specifying accessKey/secretKey and region, instead of using the
+default credentials provider chain.
+
+To enable the automatic refresh, you’ll need additional properties to
+set:
+
+    camel.vault.aws.refreshEnabled=true
+    camel.vault.aws.refreshPeriod=60000
+    camel.vault.aws.secrets=Secret
+    camel.main.context-reload-enabled = true
+
+where `camel.vault.aws.refreshEnabled` will enable the automatic context
+reload, `camel.vault.aws.refreshPeriod` is the interval of time between
+two different checks for update events and `camel.vault.aws.secrets` is
+a regex representing the secrets we want to track for updates.
+
+Note that `camel.vault.aws.secrets` is not mandatory: if not specified,
+the task responsible for checking update events will take into account
+all the properties with an `aws:` prefix.
+
+## Automatic Camel context reloading on Secret Refresh with EventBridge and AWS SQS Services
+
+Another option is to use AWS EventBridge in conjunction with the AWS SQS
+service.
+
+On the AWS side, the following resources need to be created:
+
+- an AWS CloudTrail trail
+
+- an AWS SQS Queue
+
+- an EventBridge rule of the following kind
+
+
+
+    {
+      "source": ["aws.secretsmanager"],
+      "detail-type": ["AWS API Call via CloudTrail"],
+      "detail": {
+        "eventSource": ["secretsmanager.amazonaws.com"]
+      }
+    }
+
+This rule filters the events related to AWS Secrets Manager.
+
+- You need to set a rule target to the AWS SQS Queue for the
+  EventBridge rule
+
+- You need to give permission to the EventBridge rule to write on the
+  above SQS Queue. To do this, you’ll need to define a JSON file
+  like this:
+
+
+
+    {
+      "Policy": "{\"Version\":\"2012-10-17\",\"Id\":\"/SQSDefaultPolicy\",\"Statement\":[{\"Sid\": \"EventsToMyQueue\", \"Effect\": \"Allow\", \"Principal\": {\"Service\": \"events.amazonaws.com\"}, \"Action\": \"sqs:SendMessage\", \"Resource\": \"\", \"Condition\": {\"ArnEquals\": {\"aws:SourceArn\": \"\"}}}]}"
+    }
+
+Change the values for queue\_arn and eventbridge\_rule\_arn, save the
+file as policy.json and run the following command with the AWS CLI:
+
+    aws sqs set-queue-attributes --queue-url --attributes file://policy.json
+
+where queue\_url is the AWS SQS Queue URL of the just created Queue.
+
+Now you should be able to set up the configuration on the Camel side. To
+enable the SQS notification, add the following properties:
+
+    camel.vault.aws.refreshEnabled=true
+    camel.vault.aws.refreshPeriod=60000
+    camel.vault.aws.secrets=Secret
+    camel.main.context-reload-enabled = true
+    camel.vault.aws.useSqsNotification=true
+    camel.vault.aws.sqsQueueUrl=
+
+where queue\_url is the AWS SQS Queue URL of the just created Queue.
+
+Whenever a PutSecretValue event for the secret named *Secret* happens, a
+message will be enqueued in the AWS SQS Queue, consumed on the Camel
+side, and a context reload will be triggered.
+
+## Secrets Manager Producer operations
+
+The Camel AWS Secrets Manager component provides the following
+operations on the producer side:
+
+- listSecrets
+
+- createSecret
+
+- deleteSecret
+
+- describeSecret
+
+- rotateSecret
+
+- getSecret
+
+- updateSecret
+
+- replicateSecretToRegions
+
+# Dependencies
+
+Maven users will need to add the following dependency to their pom.xml.
+
+**pom.xml**
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-aws-secrets-manager</artifactId>
+        <version>${camel-version}</version>
+    </dependency>
+
+where `${camel-version}` must be replaced by the actual version of
+Camel.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|binaryPayload|Set if the secret is binary or not|false|boolean|
+|configuration|Component configuration||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|operation|The operation to perform||object|
+|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean|
+|pojoRequest|If we want to use a POJO request as body or not|false|boolean|
+|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string|
+|region|The region in which a Secrets Manager client needs to work.
When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|useProfileCredentialsProvider|Set whether the Secrets Manager client should expect to load credentials through a profile credentials provider.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|secretsManagerClient|To use an existing configured AWS Secrets Manager client||object| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. 
You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|proxyHost|To define a proxy host when instantiating the Secrets Manager client||string| +|proxyPort|To define a proxy port when instantiating the Secrets Manager client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the Secrets Manager client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the Translate client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useSessionCredentials|Set whether the Secrets Manager client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in Secrets Manager.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|label|Logical name||string| +|binaryPayload|Set if the secret is binary or not|false|boolean| +|operation|The operation to perform||object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|region|The region in which a Secrets Manager client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. 
This option needs to be used in combination with overrideEndpoint option||string| +|useProfileCredentialsProvider|Set whether the Secrets Manager client should expect to load credentials through a profile credentials provider.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|secretsManagerClient|To use an existing configured AWS Secrets Manager client||object| +|proxyHost|To define a proxy host when instantiating the Secrets Manager client||string| +|proxyPort|To define a proxy port when instantiating the Secrets Manager client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the Secrets Manager client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the Translate client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useSessionCredentials|Set whether the Secrets Manager client should expect to use Session Credentials. 
This is useful in a situation in which the user needs to assume an IAM role for doing operations in Secrets Manager.|false|boolean| diff --git a/camel-aws2-athena.md b/camel-aws2-athena.md new file mode 100644 index 0000000000000000000000000000000000000000..369b2687738ac2264a38b92d5e74398db5a61e20 --- /dev/null +++ b/camel-aws2-athena.md @@ -0,0 +1,408 @@ +# Aws2-athena + +**Since Camel 3.4** + +**Only producer is supported** + +The AWS2 Athena component supports running queries with [AWS +Athena](https://aws.amazon.com/athena/) and working with results. + +Prerequisites + +You must have a valid Amazon Web Services developer account, and be +signed up to use Amazon Athena. More information is available at [AWS +Athena](https://aws.amazon.com/athena/). + +# URI Format + + aws2-athena://label[?options] + +You can append query options to the URI in the following format: +`?options=value&option2=value&...` + +Required Athena component options + +You have to provide the amazonAthenaClient in the Registry or your +accessKey and secretKey to access the [AWS +Athena](https://aws.amazon.com/athena/) service. 
+
+# Examples
+
+## Producer Examples
+
+For example, to run a simple query, wait up to 60 seconds for
+completion, and log the results:
+
+    from("direct:start")
+        .setBody(constant("SELECT 1"))
+        .to("aws2-athena://label?waitTimeout=60000&outputLocation=s3://bucket/path/")
+        .to("aws2-athena://label?operation=getQueryResults&outputType=StreamList")
+        .split(body()).streaming()
+        .to("log:out")
+        .to("mock:result");
+
+Similarly, running the query and returning a path to the results in S3:
+
+    from("direct:start")
+        .setBody(constant("SELECT 1"))
+        .to("aws2-athena://label?waitTimeout=60000&outputLocation=s3://bucket/path/")
+        .to("aws2-athena://label?operation=getQueryResults&outputType=S3Pointer")
+        .to("mock:result");
+
+## Static credentials, Default Credential Provider and Profile Credentials Provider
+
+You can avoid the use of explicit static credentials by setting the
+useDefaultCredentialsProvider option to true.
+
+The order of evaluation for the Default Credentials Provider is the
+following:
+
+- Java system properties - `aws.accessKeyId` and `aws.secretKey`.
+
+- Environment variables - `AWS_ACCESS_KEY_ID` and
+  `AWS_SECRET_ACCESS_KEY`.
+
+- Web Identity Token from AWS STS.
+
+- The shared credentials and config files.
+
+- Amazon ECS container credentials - loaded from Amazon ECS if the
+  environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is
+  set.
+
+- Amazon EC2 Instance profile credentials.
+
+You can also use the Profile Credentials Provider by setting the
+useProfileCredentialsProvider option to true and profileCredentialsName
+to the profile name.
+
+Only one of static, default, and profile credentials can be used at a
+time.
+
+For more information, see the [AWS credentials
+documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html).
+
+## Athena Producer operations
+
+The Camel-AWS Athena component provides the following operations on the
+producer side:
+
+- getQueryExecution
+
+- getQueryResults
+
+- listQueryExecutions
+
+- startQueryExecution
+
+## Advanced AmazonAthena configuration
+
+If your Camel application is running behind a firewall, or if you need
+more control over the `AthenaClient` instance configuration, you can
+create your own instance and refer to it in your Camel aws2-athena
+component configuration:
+
+    from("aws2-athena://MyQuery?amazonAthenaClient=#client&...")
+        .to("mock:result");
+
+## Overriding query parameters with message headers
+
+Message headers listed in "Message headers evaluated by the Athena
+producer" override the corresponding query parameters listed in "Query
+Parameters".
+
+For example:
+
+    from("direct:start")
+        .setHeader(Athena2Constants.OUTPUT_LOCATION, constant("s3://other/location/"))
+        .to("aws2-athena:label?outputLocation=s3://foo/bar/")
+        .to("mock:result");
+
+This will cause the output location to be `s3://other/location/`.
+
+## Athena Producer Operation examples
+
+- getQueryExecution: this operation returns information about a query
+  given its query execution ID
+
+    from("direct:start")
+        .to("aws2-athena://label?operation=getQueryExecution&queryExecutionId=11111111-1111-1111-1111-111111111111")
+        .to("mock:result");
+
+The preceding example will yield an [Athena
+QueryExecution](https://docs.aws.amazon.com/athena/latest/APIReference/API_QueryExecution.html)
+in the body.
+ +The getQueryExecution operation also supports retrieving the query +execution ID from a header (`CamelAwsAthenaQueryExecutionId`), and since +startQueryExecution sets the same header, upon starting a query, these +operations can be used together: + + from("direct:start") + .setBody(constant("SELECT 1")) + .to("aws2-athena://label?operation=startQueryExecution&outputLocation=s3://bucket/path/") + .to("aws2-athena://label?operation=getQueryExecution") + .to("mock:result"); + +The preceding example will yield an Athena QueryExecution in the body +for the query that was just started. + +- getQueryResults: this operation returns the results of a query that + has succeeded. The results are returned in the body in one of three + formats. + +`StreamList` - the default - returns a +[GetQueryResultsIterable](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/athena/paginators/GetQueryResultsIterable.html) +in the body that can page through all results: + + from("direct:start") + .setBody(constant("SELECT 1")) + .to("aws2-athena://label?operation=startQueryExecution&waitTimeout=60000&outputLocation=s3://bucket/path/") + .to("aws2-athena://label?operation=getQueryResults&outputType=StreamList") + .to("mock:result"); + +The output of StreamList can be processed in various ways: + + from("direct:start") + .setBody(constant( + "SELECT * FROM (" + + " VALUES" + + " (1, 'a')," + + " (2, 'b')" + + ") AS t (id, name)")) + .to("aws2-athena://label?operation=startQueryExecution&waitTimeout=60000&outputLocation=s3://bucket/path/") + .to("aws2-athena://label?operation=getQueryResults&outputType=StreamList") + .split(body()).streaming() + .process(new Processor() { + + @Override + public void process(Exchange exchange) { + GetQueryResultsResponse page = exchange + .getMessage() + .getBody(GetQueryResultsResponse.class); + for (Row row : page.resultSet().rows()) { + String line = row.data() + .stream() + .map(Datum::varCharValue) + 
.collect(Collectors.joining(",")); + System.out.println(line); + } + } + }) + .to("mock:result"); + +The preceding example will print the results of the query as CSV to the +console. + +`SelectList` - returns a +[GetQueryResponse](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/athena/model/GetQueryResultsResponse.html) +in the body containing at most 1,000 rows, plus the NextToken value as a +header (`CamelAwsAthenaNextToken`), which can be used for manual +pagination of results: + + from("direct:start") + .setBody(constant("SELECT 1")) + .to("aws2-athena://label?operation=startQueryExecution&waitTimeout=60000&outputLocation=s3://bucket/path/") + .to("aws2-athena://label?operation=getQueryResults&outputType=SelectList") + .to("mock:result"); + +The preceding example will return a +[GetQueryResponse](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/athena/model/GetQueryResultsResponse.html) +in the body plus the NextToken value as a header +(`CamelAwsAthenaNextToken`), which can be used to manually page through +the results 1,000 rows at a time. + +`S3Pointer` - return an S3 path (e.g. `s3://bucket/path/`) pointing to +the results: + + from("direct:start") + .setBody(constant("SELECT 1")) + .to("aws2-athena://label?operation=startQueryExecution&waitTimeout=60000&outputLocation=s3://bucket/path/") + .to("aws2-athena://label?operation=getQueryResults&outputType=S3Pointer") + .to("mock:result"); + +The preceding example will return an S3 path (e.g. `s3://bucket/path/`) +in the body pointing to the results. The path will also be set in a +header (`CamelAwsAthenaOutputLocation`). 
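+Since the S3 path is also placed in the `CamelAwsAthenaOutputLocation`
+header, a route can act on it directly, for example by logging it (a
+minimal sketch; the constant query and log message are illustrative):
+
+    from("direct:start")
+        .setBody(constant("SELECT 1"))
+        .to("aws2-athena://label?operation=startQueryExecution&waitTimeout=60000&outputLocation=s3://bucket/path/")
+        .to("aws2-athena://label?operation=getQueryResults&outputType=S3Pointer")
+        .log("Query results stored at ${header.CamelAwsAthenaOutputLocation}");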
+
+- listQueryExecutions: this operation returns a list of query
+  execution IDs
+
+    from("direct:start")
+        .to("aws2-athena://label?operation=listQueryExecutions")
+        .to("mock:result");
+
+The preceding example will return a list of query executions in the
+body, plus the NextToken value as a header (`CamelAwsAthenaNextToken`)
+that can be used for manual pagination of results.
+
+- startQueryExecution: this operation starts the execution of a query.
+  It supports waiting for the query to complete before proceeding, and
+  retrying the query based on a set of configurable failure
+  conditions:
+
+    from("direct:start")
+        .setBody(constant("SELECT 1"))
+        .to("aws2-athena://label?operation=startQueryExecution&outputLocation=s3://bucket/path/")
+        .to("mock:result");
+
+The preceding example will start the query `SELECT 1` and configure the
+results to be saved to `s3://bucket/path/`, but will not wait for the
+query to complete.
+
+    from("direct:start")
+        .setBody(constant("SELECT 1"))
+        .to("aws2-athena://label?operation=startQueryExecution&waitTimeout=60000&outputLocation=s3://bucket/path/")
+        .to("mock:result");
+
+The preceding example will start a query and wait up to 60 seconds for
+it to reach a status that indicates it is complete (one of SUCCEEDED,
+FAILED, CANCELLED, or UNKNOWN\_TO\_SDK\_VERSION). Upon failure, the
+query would not be retried.
+
+    from("direct:start")
+        .setBody(constant("SELECT 1"))
+        .to("aws2-athena://label?operation=startQueryExecution&waitTimeout=60000&initialDelay=10000&delay=1000&maxAttempts=3&retry=retryable&outputLocation=s3://bucket/path/")
+        .to("mock:result");
+
+The preceding example will start a query and wait up to 60 seconds for
+it to reach a status that indicates it is complete (one of SUCCEEDED,
+FAILED, CANCELLED, or UNKNOWN\_TO\_SDK\_VERSION).
Upon failure, the
+query would be automatically retried up to two more times if the failure
+state indicates the query may succeed upon retry (Athena queries that
+fail with states such as `GENERIC_INTERNAL_ERROR` or "resource limit
+exhaustion" will sometimes succeed if retried). While waiting for the
+query to complete, the query status would first be checked after an
+initial delay of 10 seconds, and subsequently every 1 second until the
+query completes.
+
+## Putting it all together
+
+    from("direct:start")
+        .setBody(constant("SELECT 1"))
+        .to("aws2-athena://label?waitTimeout=60000&maxAttempts=3&retry=retryable&outputLocation=s3://bucket/path/")
+        .to("aws2-athena://label?operation=getQueryResults&outputType=StreamList")
+        .to("mock:result");
+
+The preceding example will start the query and wait up to 60 seconds for
+it to complete. Upon completion, getQueryResults puts the results of the
+query into the body of the message for further processing.
+
+For the sake of completeness, a similar outcome could be achieved with
+the following:
+
+    from("direct:start")
+        .setBody(constant("SELECT 1"))
+        .to("aws2-athena://label?operation=startQueryExecution&outputLocation=s3://bucket/path/")
+        .loopDoWhile(simple("${header." + Athena2Constants.QUERY_EXECUTION_STATE + "} != 'SUCCEEDED'"))
+        .delay(1_000)
+        .to("aws2-athena://label?operation=getQueryExecution")
+        .end()
+        .to("aws2-athena://label?operation=getQueryResults&outputType=StreamList")
+        .to("mock:result");
+
+Caution: if the query never completed with a status of SUCCEEDED,
+however, the preceding example would block indefinitely.
+
+# Dependencies
+
+Maven users will need to add the following dependency to their pom.xml.
+
+**pom.xml**
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-aws2-athena</artifactId>
+        <version>${camel-version}</version>
+    </dependency>
+
+where `${camel-version}` must be replaced by the actual version of
+Camel.
+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|configuration|The component configuration.||object| +|database|The Athena database to use.||string| +|delay|Milliseconds before the next poll for query execution status. See the section Waiting for Query Completion and Retrying Failed Queries to learn more.|2000|integer| +|initialDelay|Milliseconds before the first poll for query execution status. See the section Waiting for Query Completion and Retrying Failed Queries to learn more.|1000|integer| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|maxAttempts|Maximum number of times to attempt a query. Set to 1 to disable retries. See the section Waiting for Query Completion and Retrying Failed Queries to learn more.|1|integer| +|maxResults|Max number of results to return for the given operation (if supported by the Athena API endpoint). If not set, will use the Athena API default for the given operation.||integer| +|nextToken|Pagination token to use in the case where the response from the previous request was truncated.||string| +|operation|The Athena API function to call.|startQueryExecution|object| +|outputLocation|The location in Amazon S3 where query results are stored, such as s3://path/to/query/bucket/. Ensure this value ends with a forward slash.||string| +|outputType|How query results should be returned. 
One of StreamList (default - return a GetQueryResultsIterable that can page through all results), SelectList (returns at most 1000 rows at a time, plus a NextToken value as a header than can be used for manual pagination of results), S3Pointer (return an S3 path pointing to the results).|StreamList|object| +|queryExecutionId|The unique ID identifying the query execution.||string| +|queryString|The SQL query to run. Except for simple queries, prefer setting this as the body of the Exchange or as a header using Athena2Constants.QUERY\_STRING to avoid having to deal with URL encoding issues.||string| +|region|The region in which Athena client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1).||string| +|resetWaitTimeoutOnRetry|Reset the waitTimeout countdown in the event of a query retry. If set to true, potential max time spent waiting for queries is equal to waitTimeout x maxAttempts. See the section Waiting for Query Completion and Retrying Failed Queries to learn more.|true|boolean| +|retry|Optional comma separated list of error types to retry the query for. Use: 'retryable' to retry all retryable failure conditions (e.g. generic errors and resources exhausted), 'generic' to retry 'GENERIC\_INTERNAL\_ERROR' failures, 'exhausted' to retry queries that have exhausted resource limits, 'always' to always retry regardless of failure condition, or 'never' or null to never retry (default). See the section Waiting for Query Completion and Retrying Failed Queries to learn more.|never|string| +|waitTimeout|Optional max wait time in millis to wait for a successful query completion. See the section Waiting for Query Completion and Retrying Failed Queries to learn more.|0|integer| +|workGroup|The workgroup to use for running the query.||string| +|amazonAthenaClient|The AmazonAthena instance to use as the client.||object| +|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|clientRequestToken|A unique string to ensure issued queries are idempotent. It is unlikely you will need to set this.||string|
+|includeTrace|Include useful trace information at the beginning of queries as an SQL comment (prefixed with --).|false|boolean|
+|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean|
+|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean|
+|proxyHost|To define a proxy host when instantiating the Athena client.||string|
+|proxyPort|To define a proxy port when instantiating the Athena client.||integer|
+|proxyProtocol|To define a proxy protocol when instantiating the Athena client.|HTTPS|object|
+|accessKey|Amazon AWS Access Key.||string|
+|encryptionOption|The encryption type to use when storing query results in S3.
One of SSE\_S3, SSE\_KMS, or CSE\_KMS.||object| +|kmsKey|For SSE-KMS and CSE-KMS, this is the KMS key ARN or ID.||string| +|profileCredentialsName|If using a profile credentials provider this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key.||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|useDefaultCredentialsProvider|Set whether the Athena client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in|false|boolean| +|useProfileCredentialsProvider|Set whether the Athena client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the Athena client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume a IAM role for doing operations in Athena.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|label|Logical name||string| +|database|The Athena database to use.||string| +|delay|Milliseconds before the next poll for query execution status. See the section Waiting for Query Completion and Retrying Failed Queries to learn more.|2000|integer| +|initialDelay|Milliseconds before the first poll for query execution status. See the section Waiting for Query Completion and Retrying Failed Queries to learn more.|1000|integer| +|maxAttempts|Maximum number of times to attempt a query. Set to 1 to disable retries. See the section Waiting for Query Completion and Retrying Failed Queries to learn more.|1|integer| +|maxResults|Max number of results to return for the given operation (if supported by the Athena API endpoint). 
If not set, will use the Athena API default for the given operation.||integer| +|nextToken|Pagination token to use in the case where the response from the previous request was truncated.||string| +|operation|The Athena API function to call.|startQueryExecution|object| +|outputLocation|The location in Amazon S3 where query results are stored, such as s3://path/to/query/bucket/. Ensure this value ends with a forward slash.||string| +|outputType|How query results should be returned. One of StreamList (default - return a GetQueryResultsIterable that can page through all results), SelectList (returns at most 1000 rows at a time, plus a NextToken value as a header than can be used for manual pagination of results), S3Pointer (return an S3 path pointing to the results).|StreamList|object| +|queryExecutionId|The unique ID identifying the query execution.||string| +|queryString|The SQL query to run. Except for simple queries, prefer setting this as the body of the Exchange or as a header using Athena2Constants.QUERY\_STRING to avoid having to deal with URL encoding issues.||string| +|region|The region in which Athena client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1).||string| +|resetWaitTimeoutOnRetry|Reset the waitTimeout countdown in the event of a query retry. If set to true, potential max time spent waiting for queries is equal to waitTimeout x maxAttempts. See the section Waiting for Query Completion and Retrying Failed Queries to learn more.|true|boolean| +|retry|Optional comma separated list of error types to retry the query for. Use: 'retryable' to retry all retryable failure conditions (e.g. generic errors and resources exhausted), 'generic' to retry 'GENERIC\_INTERNAL\_ERROR' failures, 'exhausted' to retry queries that have exhausted resource limits, 'always' to always retry regardless of failure condition, or 'never' or null to never retry (default). 
See the section Waiting for Query Completion and Retrying Failed Queries to learn more.|never|string| +|waitTimeout|Optional max wait time in millis to wait for a successful query completion. See the section Waiting for Query Completion and Retrying Failed Queries to learn more.|0|integer| +|workGroup|The workgroup to use for running the query.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|amazonAthenaClient|The AmazonAthena instance to use as the client.||object| +|clientRequestToken|A unique string to ensure issues queries are idempotent. It is unlikely you will need to set this.||string| +|includeTrace|Include useful trace information at the beginning of queries as an SQL comment (prefixed with --).|false|boolean| +|proxyHost|To define a proxy host when instantiating the Athena client.||string| +|proxyPort|To define a proxy port when instantiating the Athena client.||integer| +|proxyProtocol|To define a proxy protocol when instantiating the Athena client.|HTTPS|object| +|accessKey|Amazon AWS Access Key.||string| +|encryptionOption|The encryption type to use when storing query results in S3. 
One of SSE\_S3, SSE\_KMS, or CSE\_KMS.||object|
+|kmsKey|For SSE-KMS and CSE-KMS, this is the KMS key ARN or ID.||string|
+|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string|
+|secretKey|Amazon AWS Secret Key.||string|
+|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string|
+|useDefaultCredentialsProvider|Set whether the Athena client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in|false|boolean|
+|useProfileCredentialsProvider|Set whether the Athena client should expect to load credentials through a profile credentials provider.|false|boolean|
+|useSessionCredentials|Set whether the Athena client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in Athena.|false|boolean|
diff --git a/camel-aws2-cw.md b/camel-aws2-cw.md
new file mode 100644
index 0000000000000000000000000000000000000000..abe0610e6c6e0d1e53a3aed471d1ea4a2f67cc30
--- /dev/null
+++ b/camel-aws2-cw.md
@@ -0,0 +1,163 @@
+# Aws2-cw
+
+**Since Camel 3.1**
+
+**Only producer is supported**
+
+The AWS2 Cloudwatch component allows sending metrics to [Amazon
+CloudWatch](https://aws.amazon.com/cloudwatch/). The
+implementation of the Amazon API is provided by the [AWS
+SDK](https://aws.amazon.com/sdkforjava/).
+
+Prerequisites
+
+You must have a valid Amazon Web Services developer account, and be
+signed up to use Amazon CloudWatch. More information is available at
+[Amazon CloudWatch](https://aws.amazon.com/cloudwatch/).
+
+# URI Format
+
+    aws2-cw://namespace[?options]
+
+The metrics will be created if they don’t already exist.
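+For example, a hypothetical namespace named `myNamespace`, with the
+region given explicitly as an endpoint option:
+
+    aws2-cw://myNamespace?region=eu-west-1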
+
+You can append query options to the URI in the following format:
+`?options=value&option2=value&...`
+
+Required CW component options
+
+You have to provide the amazonCwClient in the Registry or your accessKey
+and secretKey to access [Amazon
+CloudWatch](https://aws.amazon.com/cloudwatch/).
+
+# Usage
+
+## Static credentials, Default Credential Provider and Profile Credentials Provider
+
+You can avoid the use of explicit static credentials by setting the
+useDefaultCredentialsProvider option to true.
+
+The order of evaluation for the Default Credentials Provider is the
+following:
+
+- Java system properties - `aws.accessKeyId` and `aws.secretKey`.
+
+- Environment variables - `AWS_ACCESS_KEY_ID` and
+  `AWS_SECRET_ACCESS_KEY`.
+
+- Web Identity Token from AWS STS.
+
+- The shared credentials and config files.
+
+- Amazon ECS container credentials - loaded from Amazon ECS if the
+  environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is
+  set.
+
+- Amazon EC2 Instance profile credentials.
+
+You can also use the Profile Credentials Provider by setting the
+useProfileCredentialsProvider option to true and profileCredentialsName
+to the profile name.
+
+Only one of static, default, and profile credentials can be used at a
+time.
+
+For more information, see the [AWS credentials
+documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html).
+
+## Advanced CloudWatchClient configuration
+
+If you need more control over the `CloudWatchClient` instance
+configuration, you can create your own instance and refer to it from the
+URI:
+
+    from("direct:start")
+        .to("aws2-cw://namespace?amazonCwClient=#client");
+
+The `#client` refers to a `CloudWatchClient` in the Registry.
+
+# Dependencies
+
+Maven users will need to add the following dependency to their pom.xml.
+
+**pom.xml**
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-aws2-cw</artifactId>
+        <version>${camel-version}</version>
+    </dependency>
+
+where `${camel-version}` must be replaced by the actual version of
+Camel.
+
+# Examples
+
+## Producer Example
+
+    from("direct:start")
+        .to("aws2-cw://http://camel.apache.org/aws-cw");
+
+and sends something like
+
+    exchange.getIn().setHeader(Cw2Constants.METRIC_NAME, "ExchangesCompleted");
+    exchange.getIn().setHeader(Cw2Constants.METRIC_VALUE, "2.0");
+    exchange.getIn().setHeader(Cw2Constants.METRIC_UNIT, "Count");
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|configuration|The component configuration||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|name|The metric name||string|
+|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean|
+|region|The region in which CW client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string|
+|timestamp|The metric timestamp||object|
+|unit|The metric unit||string|
+|uriEndpointOverride|Set the overriding uri endpoint.
This option needs to be used in combination with overrideEndpoint option||string| +|value|The metric value||number| +|amazonCwClient|To use the AmazonCloudWatch as the client||object| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|proxyHost|To define a proxy host when instantiating the CW client||string| +|proxyPort|To define a proxy port when instantiating the CW client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the CW client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the S3 client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the Cloudwatch client should expect to load credentials through a profile credentials provider.|false|boolean| 
+|useSessionCredentials|Set whether the CloudWatch client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in CloudWatch.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|namespace|The metric namespace||string| +|name|The metric name||string| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|region|The region in which CW client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|timestamp|The metric timestamp||object| +|unit|The metric unit||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|value|The metric value||number| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|amazonCwClient|To use the AmazonCloudWatch as the client||object| +|proxyHost|To define a proxy host when instantiating the CW client||string| +|proxyPort|To define a proxy port when instantiating the CW client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the CW client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the CloudWatch client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the CloudWatch client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the CloudWatch client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in CloudWatch.|false|boolean| diff --git a/camel-aws2-ddb.md b/camel-aws2-ddb.md new file mode 100644 index 0000000000000000000000000000000000000000..bab25d186582bf9ef1f803925eb9c5daa059278a --- /dev/null +++ b/camel-aws2-ddb.md @@ -0,0 +1,263 @@ +# Aws2-ddb + +**Since Camel 3.1** + +**Only producer is supported** + +The AWS2 DynamoDB component supports storing and retrieving data from/to +[Amazon’s DynamoDB](https://aws.amazon.com/dynamodb) service. 
+ +Prerequisites + +You must have a valid Amazon Web Services developer account, and be +signed up to use Amazon DynamoDB. More information is available at +[Amazon DynamoDB](https://aws.amazon.com/dynamodb). + +# URI Format + + aws2-ddb://domainName[?options] + +You can append query options to the URI in the following format: + +`?options=value&option2=value&...` + +Required DDB component options + +You have to provide the amazonDDBClient in the Registry or your +accessKey and secretKey to access the [Amazon’s +DynamoDB](https://aws.amazon.com/dynamodb). + +# Usage + +## Static credentials, Default Credential Provider and Profile Credentials Provider + +You can avoid the use of explicit static +credentials by specifying the useDefaultCredentialsProvider option and +setting it to true. + +The order of evaluation for Default Credentials Provider is the +following: + +- Java system properties - `aws.accessKeyId` and `aws.secretKey`. + +- Environment variables - `AWS_ACCESS_KEY_ID` and + `AWS_SECRET_ACCESS_KEY`. + +- Web Identity Token from AWS STS. + +- The shared credentials and config files. + +- Amazon ECS container credentials - loaded from the Amazon ECS if the + environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is + set. + +- Amazon EC2 Instance profile credentials. + +You also have the possibility of using the Profile Credentials Provider, by +specifying the useProfileCredentialsProvider option to true and +profileCredentialsName to the profile name. + +Only one of static, default and profile credentials can be used at the +same time. 
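To make the two styles concrete, here is a minimal sketch of the corresponding endpoint URIs (the table name `myTable` and profile name `myProfile` are placeholders, not from the original doc; the option names are the ones listed in the configuration tables below):

```java
// Default credentials provider chain: no access/secret keys in the URI
from("direct:putDefault")
    .to("aws2-ddb://myTable?useDefaultCredentialsProvider=true&operation=PutItem");

// Named profile from the shared AWS credentials file
from("direct:putProfile")
    .to("aws2-ddb://myTable?useProfileCredentialsProvider=true&profileCredentialsName=myProfile&operation=PutItem");
```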
+ +For more information about this you can look at [AWS credentials +documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html) + +## Advanced AmazonDynamoDB configuration + +If you need more control over the `AmazonDynamoDB` instance +configuration, you can create your own instance and refer to it from the +URI: + + public class MyRouteBuilder extends RouteBuilder { + + private String accessKey = "myaccessKey"; + private String secretKey = "secretKey"; + + @Override + public void configure() throws Exception { + + DynamoDbClient client = DynamoDbClient.builder() + .region(Region.AP_SOUTHEAST_2) + .credentialsProvider(StaticCredentialsProvider.create(AwsBasicCredentials.create(accessKey, secretKey))) + .build(); + + getCamelContext().getRegistry().bind("client", client); + + from("direct:start") + .to("aws2-ddb://domainName?amazonDDBClient=#client"); + } + } + +The `#client` refers to a `DynamoDbClient` in the Registry. + +# Supported producer operations + +- BatchGetItems + +- DeleteItem + +- DeleteTable + +- DescribeTable + +- GetItem + +- PutItem + +- Query + +- Scan + +- UpdateItem + +- UpdateTable + +# Dependencies + +Maven users will need to add the following dependency to their pom.xml. + +**pom.xml** + + <dependency> + <groupId>org.apache.camel</groupId> + <artifactId>camel-aws2-ddb</artifactId> + <version>${camel-version}</version> + </dependency> + +where `${camel-version}` must be replaced by the actual version of +Camel. 
+ +# Examples + +## Producer Examples + +- PutItem: this operation will create an entry in DynamoDB + + + + Map<String, AttributeValue> attributeMap = new HashMap<>(); + attributeMap.put("partitionKey", AttributeValue.builder().s("3000").build()); + attributeMap.put("id", AttributeValue.builder().s("1001").build()); + attributeMap.put("barcode", AttributeValue.builder().s("9002811220001").build()); + + from("direct:start") + .setHeader(Ddb2Constants.OPERATION, constant(Ddb2Operations.PutItem)) + .setHeader(Ddb2Constants.CONSISTENT_READ, constant("true")) + .setHeader(Ddb2Constants.RETURN_VALUES, constant("ALL_OLD")) + .setHeader(Ddb2Constants.ITEM, constant(attributeMap)) + .setHeader(Ddb2Constants.ATTRIBUTE_NAMES, constant(attributeMap.keySet())) + .to("aws2-ddb://" + tableName + "?amazonDDBClient=#client"); + +- UpdateItem: this operation will update an entry in DynamoDB + + + + Map<String, AttributeValueUpdate> attributeMap = new HashMap<>(); + attributeMap.put("partitionKey", AttributeValueUpdate.builder().value(AttributeValue.builder().s("3000").build()).build()); + attributeMap.put("sortKey", AttributeValueUpdate.builder().value(AttributeValue.builder().s("1001").build()).build()); + attributeMap.put("barcode", AttributeValueUpdate.builder().value(AttributeValue.builder().s("900281122").build()).build()); + + Map<String, AttributeValue> keyMap = new HashMap<>(); + keyMap.put("partitionKey", AttributeValue.builder().s("3000").build()); + keyMap.put("sortKey", AttributeValue.builder().s("1001").build()); + + from("direct:start") + .setHeader(Ddb2Constants.OPERATION, constant(Ddb2Operations.UpdateItem)) + .setHeader(Ddb2Constants.UPDATE_VALUES, constant(attributeMap)) + .setHeader(Ddb2Constants.KEY, constant(keyMap)) + .to("aws2-ddb://" + tableName + "?amazonDDBClient=#client"); + +- GetItem: this operation will retrieve an entry from DynamoDB + + + + from("direct:get") + .process(exchange -> { + final Map<String, AttributeValue> keyMap = new HashMap<>(); + keyMap.put("table-key", AttributeValue.builder().s("1").build()); + + 
 exchange.getIn().setHeader(Ddb2Constants.OPERATION, Ddb2Operations.GetItem); + exchange.getIn().setHeader(Ddb2Constants.ATTRIBUTE_NAMES, List.of("table-key", "message")); + exchange.getIn().setHeader(Ddb2Constants.KEY, keyMap); + }) + .toF("aws2-ddb://%s?amazonDDBClient=#client&consistentRead=true", tableName); + +- DeleteItem: this operation will delete an entry from DynamoDB + + + + from("direct:delete") + .process(exchange -> { + final Map<String, AttributeValue> keyMap = new HashMap<>(); + keyMap.put("table-key", AttributeValue.builder().s("1").build()); + + exchange.getIn().setHeader(Ddb2Constants.OPERATION, Ddb2Operations.DeleteItem); + exchange.getIn().setHeader(Ddb2Constants.KEY, keyMap); + }) + .toF("aws2-ddb://%s?amazonDDBClient=#client&consistentRead=true", tableName); + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|configuration|The component configuration||object| +|consistentRead|Determines whether strong consistency should be enforced when data is read.|false|boolean| +|enabledInitialDescribeTable|Set whether the initial Describe table operation in the DDB Endpoint must be done, or not.|true|boolean| +|keyAttributeName|Attribute name when creating table||string| +|keyAttributeType|Attribute type when creating table||string| +|keyScalarType|The key scalar type, it can be S (String), N (Number) and B (Bytes)||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|operation|What operation to perform|PutItem|object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option|false|boolean| +|readCapacity|The provisioned throughput to reserve for reading resources from your table||integer| +|region|The region in which DDB client needs to work||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|writeCapacity|The provisioned throughput to reserve for writing resources to your table||integer| +|amazonDDBClient|To use the AmazonDynamoDB as the client||object| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|proxyHost|To define a proxy host when instantiating the DDB client||string| +|proxyPort|To define a proxy port when instantiating the DDB client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the DDB client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the DDB client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the DDB client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the DDB client should expect to use Session Credentials. This is useful in situations in which the user needs to assume an IAM role for doing operations in DDB.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|tableName|The name of the table currently worked with.||string| +|consistentRead|Determines whether strong consistency should be enforced when data is read.|false|boolean| +|enabledInitialDescribeTable|Set whether the initial Describe table operation in the DDB Endpoint must be done, or not.|true|boolean| +|keyAttributeName|Attribute name when creating table||string| +|keyAttributeType|Attribute type when creating table||string| +|keyScalarType|The key scalar type, it can be S (String), N (Number) and B (Bytes)||string| +|operation|What operation to perform|PutItem|object| +|overrideEndpoint|Set the need for overriding the endpoint. 
This option needs to be used in combination with uriEndpointOverride option|false|boolean| +|readCapacity|The provisioned throughput to reserve for reading resources from your table||integer| +|region|The region in which DDB client needs to work||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|writeCapacity|The provisioned throughput to reserve for writing resources to your table||integer| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|amazonDDBClient|To use the AmazonDynamoDB as the client||object| +|proxyHost|To define a proxy host when instantiating the DDB client||string| +|proxyPort|To define a proxy port when instantiating the DDB client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the DDB client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the DDB client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the DDB client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the DDB client should expect to use Session Credentials. This is useful in situations in which the user needs to assume an IAM role for doing operations in DDB.|false|boolean| diff --git a/camel-aws2-ddbstream.md b/camel-aws2-ddbstream.md new file mode 100644 index 0000000000000000000000000000000000000000..3e35b397091083e90d68c82b52906381e39365af --- /dev/null +++ b/camel-aws2-ddbstream.md @@ -0,0 +1,181 @@ +# Aws2-ddbstream + +**Since Camel 3.1** + +**Only consumer is supported** + +The AWS2 DynamoDB Stream component supports receiving messages from the +Amazon DynamoDB Streams service. + +Prerequisites + +You must have a valid Amazon Web Services developer account, and be +signed up to use Amazon DynamoDB Streams. More information is available +at [AWS DynamoDB](https://aws.amazon.com/dynamodb/). + +# URI Format + + aws2-ddbstream://table-name[?options] + +The stream needs to be created prior to it being used. 
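Since the stream must already exist before the route starts, one way to enable it on an existing table is through the AWS SDK v2 directly. This is a hedged sketch, not part of the component itself; the table name `myTable`, region, and view type are illustrative:

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.StreamSpecification;
import software.amazon.awssdk.services.dynamodb.model.StreamViewType;
import software.amazon.awssdk.services.dynamodb.model.UpdateTableRequest;

public class EnableStream {
    public static void main(String[] args) {
        try (DynamoDbClient ddb = DynamoDbClient.builder().region(Region.EU_WEST_1).build()) {
            // Turn on the stream, capturing both the old and new image of each item
            ddb.updateTable(UpdateTableRequest.builder()
                    .tableName("myTable")
                    .streamSpecification(StreamSpecification.builder()
                            .streamEnabled(true)
                            .streamViewType(StreamViewType.NEW_AND_OLD_IMAGES)
                            .build())
                    .build());
        }
    }
}
```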
+You can append query options to the URI in the following format, +?options=value\&option2=value\&… + +Required DynamoDBStream component options + +You have to provide the DynamoDbStreamsClient in the Registry with +proxies and relevant credentials configured. + +# Sequence Numbers + +You can provide a literal string as the sequence number or provide a +bean in the registry. An example of using the bean would be to save your +current position in the change feed and restore it on Camel startup. + +It is an error to provide a sequence number that is greater than the +largest sequence number in the describe-streams result, as this will +lead to the AWS call returning an HTTP 400. + +# Usage + +## Static credentials, Default Credential Provider and Profile Credentials Provider + +You can avoid the use of explicit static +credentials by specifying the useDefaultCredentialsProvider option and +setting it to true. + +The order of evaluation for Default Credentials Provider is the +following: + +- Java system properties - aws.accessKeyId and aws.secretKey + +- Environment variables - AWS\_ACCESS\_KEY\_ID and + AWS\_SECRET\_ACCESS\_KEY. + +- Web Identity Token from AWS STS. + +- The shared credentials and config files. + +- Amazon ECS container credentials - loaded from the Amazon ECS if the + environment variable AWS\_CONTAINER\_CREDENTIALS\_RELATIVE\_URI is + set. + +- Amazon EC2 Instance profile credentials. + +You also have the possibility of using the Profile Credentials Provider, by +specifying the useProfileCredentialsProvider option to true and +profileCredentialsName to the profile name. + +Only one of static, default and profile credentials can be used at the +same time. 
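Pulling these pieces together, a minimal consumer route might look like the following sketch. It assumes a `DynamoDbStreamsClient` has been bound in the registry under the (hypothetical) name `ddbStreamsClient`, and that the table `myTable` has a stream enabled:

```java
from("aws2-ddbstream://myTable?amazonDynamoDbStreamsClient=#ddbStreamsClient&streamIteratorType=FROM_LATEST")
    .log("Change record: ${body}");
```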
+ +For more information about this you can look at [AWS credentials +documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html) + +# Coping with Downtime + +## AWS DynamoDB Streams outage of less than 24 hours + +The consumer will resume from the last seen sequence number (as +implemented for +[CAMEL-9515](https://issues.apache.org/jira/browse/CAMEL-9515)), so you +should receive a flood of events in quick succession, as long as the +outage did not also include DynamoDB itself. + +## AWS DynamoDB Streams outage of more than 24 hours + +Given that AWS only retains 24 hours' worth of changes, you will have +missed change events no matter what mitigations are in place. + +## Message Body + +The message body is an instance of +"software.amazon.awssdk.services.dynamodb.model.Record"; for more +information about it, have a look at the [related +javadoc](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/dynamodb/model/Record.html) + +# Dependencies + +Maven users will need to add the following dependency to their pom.xml. + +**pom.xml** + + <dependency> + <groupId>org.apache.camel</groupId> + <artifactId>camel-aws2-ddb</artifactId> + <version>${camel-version}</version> + </dependency> + +where `${camel-version}` must be replaced by the actual version of +Camel. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|configuration|The component configuration||object| +|maxResultsPerRequest|Maximum number of records that will be fetched in each poll||integer| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option|false|boolean| +|region|The region in which DDBStreams client needs to work||string| +|streamIteratorType|Defines where in the DynamoDB stream to start getting records. Note that using FROM\_START can cause a significant delay before the stream has caught up to real-time.|FROM\_LATEST|object| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|amazonDynamoDbStreamsClient|Amazon DynamoDB client to use for all requests for this endpoint||object| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. 
You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|proxyHost|To define a proxy host when instantiating the DDBStreams client||string| +|proxyPort|To define a proxy port when instantiating the DDBStreams client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the DDBStreams client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider this parameter will set the profile name.||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the DynamoDB Streams client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the DynamoDB Streams client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the DDB Streams client should expect to use Session Credentials. This is useful in situations in which the user needs to assume an IAM role for doing operations in DDB.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|tableName|Name of the DynamoDB table||string| +|maxResultsPerRequest|Maximum number of records that will be fetched in each poll||integer| +|overrideEndpoint|Set the need for overriding the endpoint. 
This option needs to be used in combination with uriEndpointOverride option|false|boolean| +|region|The region in which DDBStreams client needs to work||string| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any messages, you can enable this option to send an empty message (no body) instead.|false|boolean| +|streamIteratorType|Defines where in the DynamoDB stream to start getting records. Note that using FROM\_START can cause a significant delay before the stream has caught up to real-time.|FROM\_LATEST|object| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|amazonDynamoDbStreamsClient|Amazon DynamoDB client to use for all requests for this endpoint||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange has been created and routed in Camel.||object| +|proxyHost|To define a proxy host when instantiating the DDBStreams client||string| +|proxyPort|To define a proxy port when instantiating the DDBStreams client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the DDBStreams client|HTTPS|object| +|backoffErrorThreshold|The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier should kick in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. 
This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built-in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider this parameter will set the profile name.||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the DynamoDB Streams client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the DynamoDB Streams client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the DDB Streams client should expect to use Session Credentials. 
This is useful in situations in which the user needs to assume an IAM role for doing operations in DDB.|false|boolean| diff --git a/camel-aws2-ec2.md b/camel-aws2-ec2.md new file mode 100644 index 0000000000000000000000000000000000000000..ed5379d0cab4cc84bd8e5c791c5a4cd90c5cb534 --- /dev/null +++ b/camel-aws2-ec2.md @@ -0,0 +1,232 @@ +# Aws2-ec2 + +**Since Camel 3.1** + +**Only producer is supported** + +The AWS2 EC2 component supports the ability to create, run, start, stop +and terminate [AWS EC2](https://aws.amazon.com/ec2/) instances. + +Prerequisites + +You must have a valid Amazon Web Services developer account, and be +signed up to use Amazon EC2. More information is available at [Amazon +EC2](https://aws.amazon.com/ec2/). + +# URI Format + + aws2-ec2://label[?options] + +You can append query options to the URI in the following format: + +`?options=value&option2=value&...` + +Required EC2 component options + +You have to provide the amazonEc2Client in the Registry or your +accessKey and secretKey to access the [Amazon +EC2](https://aws.amazon.com/ec2/) service. + +# Usage + +## Static credentials, Default Credential Provider and Profile Credentials Provider + +You can avoid the use of explicit static +credentials by specifying the useDefaultCredentialsProvider option and +setting it to true. + +The order of evaluation for Default Credentials Provider is the +following: + +- Java system properties - `aws.accessKeyId` and `aws.secretKey`. + +- Environment variables - `AWS_ACCESS_KEY_ID` and + `AWS_SECRET_ACCESS_KEY`. + +- Web Identity Token from AWS STS. + +- The shared credentials and config files. + +- Amazon ECS container credentials - loaded from the Amazon ECS if the + environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is + set. + +- Amazon EC2 Instance profile credentials. 
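When the default chain above can resolve credentials (for example, on an EC2 instance with an instance profile attached), no keys are needed on the endpoint at all. A hedged sketch, with the label `TestDomain` and the operation chosen purely for illustration:

```java
from("direct:describe")
    .to("aws2-ec2://TestDomain?useDefaultCredentialsProvider=true&operation=describeInstances");
```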
+ +You also have the possibility of using the Profile Credentials Provider, by +specifying the useProfileCredentialsProvider option to true and +profileCredentialsName to the profile name. + +Only one of static, default and profile credentials can be used at the +same time. + +For more information about this you can look at [AWS credentials +documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html) + +# Supported producer operations + +- createAndRunInstances + +- startInstances + +- stopInstances + +- terminateInstances + +- describeInstances + +- describeInstancesStatus + +- rebootInstances + +- monitorInstances + +- unmonitorInstances + +- createTags + +- deleteTags + +# Examples + +## Producer Examples + +- createAndRunInstances: this operation will create an EC2 instance + and run it + + + + from("direct:createAndRun") + .setHeader(AWS2EC2Constants.IMAGE_ID, constant("ami-fd65ba94")) + .setHeader(AWS2EC2Constants.INSTANCE_TYPE, constant(InstanceType.T2_MICRO)) + .setHeader(AWS2EC2Constants.INSTANCE_MIN_COUNT, constant("1")) + .setHeader(AWS2EC2Constants.INSTANCE_MAX_COUNT, constant("1")) + .to("aws2-ec2://TestDomain?accessKey=xxxx&secretKey=xxxx&operation=createAndRunInstances"); + +- startInstances: this operation will start a list of EC2 instances + + + + from("direct:start") + .process(new Processor() { + @Override + public void process(Exchange exchange) throws Exception { + Collection<String> l = new ArrayList<>(); + l.add("myinstance"); + exchange.getIn().setHeader(AWS2EC2Constants.INSTANCES_IDS, l); + } + }) + .to("aws2-ec2://TestDomain?accessKey=xxxx&secretKey=xxxx&operation=startInstances"); + +- stopInstances: this operation will stop a list of EC2 instances + + + + from("direct:stop") + .process(new Processor() { + @Override + public void process(Exchange exchange) throws Exception { + Collection<String> l = new ArrayList<>(); + l.add("myinstance"); + exchange.getIn().setHeader(AWS2EC2Constants.INSTANCES_IDS, l); + } + }) + 
.to("aws2-ec2://TestDomain?accessKey=xxxx&secretKey=xxxx&operation=stopInstances"); + +- terminateInstances: this operation will terminate a list of EC2 + instances + + + + from("direct:terminate") + .process(new Processor() { + @Override + public void process(Exchange exchange) throws Exception { + Collection<String> l = new ArrayList<>(); + l.add("myinstance"); + exchange.getIn().setHeader(AWS2EC2Constants.INSTANCES_IDS, l); + } + }) + .to("aws2-ec2://TestDomain?accessKey=xxxx&secretKey=xxxx&operation=terminateInstances"); + +# Using a POJO as body + +Sometimes building an AWS Request can be complex because of multiple +options. We introduce the possibility to use a POJO as a body. In AWS +EC2 there are multiple operations you can submit; for example, to +create and run an instance, you can do something like: + + from("direct:start") + .setBody(RunInstancesRequest.builder().imageId("test-1").instanceType(InstanceType.T2_MICRO).build()) + .to("aws2-ec2://TestDomain?accessKey=xxxx&secretKey=xxxx&operation=createAndRunInstances&pojoRequest=true"); + +In this way, you’ll pass the request directly without the need to +pass headers and options specifically related to this operation. + +# Dependencies + +Maven users will need to add the following dependency to their pom.xml. + +**pom.xml** + + <dependency> + <groupId>org.apache.camel</groupId> + <artifactId>camel-aws2-ec2</artifactId> + <version>${camel-version}</version> + </dependency> + +where `${camel-version}` must be replaced by the actual version of +Camel. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|amazonEc2Client|To use an existing configured AmazonEC2Client client||object| +|configuration|The component configuration||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|operation|The operation to perform. It can be createAndRunInstances, startInstances, stopInstances, terminateInstances, describeInstances, describeInstancesStatus, rebootInstances, monitorInstances, unmonitorInstances, createTags or deleteTags||object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|region|The region in which EC2 client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. 
You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|proxyHost|To define a proxy host when instantiating the EC2 client||string| +|proxyPort|To define a proxy port when instantiating the EC2 client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the EC2 client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the EC2 client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the EC2 client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the EC2 client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in EC2.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|label|Logical name||string| +|amazonEc2Client|To use an existing configured AmazonEC2Client client||object| +|operation|The operation to perform. It can be createAndRunInstances, startInstances, stopInstances, terminateInstances, describeInstances, describeInstancesStatus, rebootInstances, monitorInstances, unmonitorInstances, createTags or deleteTags||object| +|overrideEndpoint|Set the need for overriding the endpoint. 
This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|region|The region in which EC2 client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|proxyHost|To define a proxy host when instantiating the EC2 client||string| +|proxyPort|To define a proxy port when instantiating the EC2 client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the EC2 client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the EC2 client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| 
|useProfileCredentialsProvider|Set whether the EC2 client should expect to load credentials through a profile credentials provider.|false|boolean|
|useSessionCredentials|Set whether the EC2 client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in EC2.|false|boolean|

diff --git a/camel-aws2-ecs.md b/camel-aws2-ecs.md
new file mode 100644
index 0000000000000000000000000000000000000000..de5ca857471af7f8a20507f1bad30431c8fe53ef
--- /dev/null
+++ b/camel-aws2-ecs.md
@@ -0,0 +1,168 @@
# Aws2-ecs

**Since Camel 3.1**

**Only producer is supported**

The AWS2 ECS component supports creating, deleting, describing and
listing [AWS ECS](https://aws.amazon.com/ecs/) clusters.

Prerequisites

You must have a valid Amazon Web Services developer account, and be
signed up to use Amazon ECS. More information is available at [Amazon
ECS](https://aws.amazon.com/ecs/).

# URI Format

    aws2-ecs://label[?options]

You can append query options to the URI in the following format:

`?option1=value&option2=value&...`

Required ECS component options

You have to provide the amazonECSClient in the Registry or your
accessKey and secretKey to access the [Amazon
ECS](https://aws.amazon.com/ecs/) service.

# Usage

## Static credentials, Default Credential Provider and Profile Credentials Provider

You can avoid using explicit static credentials by setting the
useDefaultCredentialsProvider option to true.

The order of evaluation for the Default Credentials Provider is the
following:

- Java system properties - `aws.accessKeyId` and `aws.secretKey`.

- Environment variables - `AWS_ACCESS_KEY_ID` and
  `AWS_SECRET_ACCESS_KEY`.

- Web Identity Token from AWS STS.

- The shared credentials and config files.
- Amazon ECS container credentials - loaded from Amazon ECS if the
  environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is
  set.

- Amazon EC2 Instance profile credentials.

You can also use the Profile Credentials Provider by setting the
useProfileCredentialsProvider option to true and profileCredentialsName
to the profile name.

Only one of static, default and profile credentials can be used at the
same time.

For more information, see the [AWS credentials
documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html).

## ECS Producer operations

The Camel AWS ECS component provides the following operations on the
producer side:

- listClusters

- createCluster

- describeCluster

- deleteCluster

# Producer Examples

- listClusters: this operation will list the available clusters in ECS



    from("direct:listClusters")
        .to("aws2-ecs://test?ecsClient=#amazonEcsClient&operation=listClusters");

# Using a POJO as body

Building an AWS request can be complex because of the number of
options. You can therefore use a POJO as the message body instead. AWS
ECS supports multiple operations; for example, for a list clusters
request, you can do something like:

    from("direct:start")
        .setBody(ListClustersRequest.builder().maxResults(10).build())
        .to("aws2-ecs://test?ecsClient=#amazonEcsClient&operation=listClusters&pojoRequest=true");

This way, you pass the request object directly, without setting the
headers and options specific to this operation.

# Dependencies

Maven users will need to add the following dependency to their pom.xml.

**pom.xml**

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-aws2-ecs</artifactId>
        <version>${camel-version}</version>
    </dependency>

where `${camel-version}` must be replaced by the actual version of
Camel.
+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|configuration|Component configuration||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|operation|The operation to perform||object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|region|The region in which the ECS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|ecsClient|To use an existing configured AWS ECS client||object| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|proxyHost|To define a proxy host when instantiating the ECS client||string| +|proxyPort|To define a proxy port when instantiating the ECS client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the ECS client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the ECS client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the ECS client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the ECS client should expect to use Session Credentials. 
This is useful in a situation in which the user needs to assume an IAM role for doing operations in ECS.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|label|Logical name||string| +|operation|The operation to perform||object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|region|The region in which the ECS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|ecsClient|To use an existing configured AWS ECS client||object|
|proxyHost|To define a proxy host when instantiating the ECS client||string|
|proxyPort|To define a proxy port when instantiating the ECS client||integer|
|proxyProtocol|To define a proxy protocol when instantiating the ECS client|HTTPS|object|
|accessKey|Amazon AWS Access Key||string|
|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string|
|secretKey|Amazon AWS Secret Key||string|
|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string|
|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean|
|useDefaultCredentialsProvider|Set whether the ECS client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean|
|useProfileCredentialsProvider|Set whether the ECS client should expect to load credentials through a profile credentials provider.|false|boolean|
|useSessionCredentials|Set whether the ECS client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in ECS.|false|boolean|

diff --git a/camel-aws2-eks.md b/camel-aws2-eks.md
new file mode 100644
index 0000000000000000000000000000000000000000..3662864782ccb2a03b26d18e86cb9c98019d26b3
--- /dev/null
+++ b/camel-aws2-eks.md
@@ -0,0 +1,168 @@
# Aws2-eks

**Since Camel 3.1**

**Only producer is supported**

The AWS2 EKS component supports creating, deleting, describing and
listing [AWS EKS](https://aws.amazon.com/eks/) clusters.
Prerequisites

You must have a valid Amazon Web Services developer account, and be
signed up to use Amazon EKS. More information is available at [Amazon
EKS](https://aws.amazon.com/eks/).

# URI Format

    aws2-eks://label[?options]

You can append query options to the URI in the following format:

`?option1=value&option2=value&...`

Required EKS component options

You have to provide the amazonEKSClient in the Registry or your
accessKey and secretKey to access the [Amazon
EKS](https://aws.amazon.com/eks/) service.

# Usage

## Static credentials, Default Credential Provider and Profile Credentials Provider

You can avoid using explicit static credentials by setting the
useDefaultCredentialsProvider option to true.

The order of evaluation for the Default Credentials Provider is the
following:

- Java system properties - `aws.accessKeyId` and `aws.secretKey`.

- Environment variables - `AWS_ACCESS_KEY_ID` and
  `AWS_SECRET_ACCESS_KEY`.

- Web Identity Token from AWS STS.

- The shared credentials and config files.

- Amazon ECS container credentials - loaded from Amazon ECS if the
  environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is
  set.

- Amazon EC2 Instance profile credentials.

You can also use the Profile Credentials Provider by setting the
useProfileCredentialsProvider option to true and profileCredentialsName
to the profile name.

Only one of static, default and profile credentials can be used at the
same time.
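As a concrete illustration of the first two entries in the evaluation order above, here is a sketch of supplying credentials through environment variables or JVM system properties. The key values below are AWS's documented placeholder examples, not real credentials:

```shell
# Environment variables read by the Default Credentials Provider
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# Equivalent Java system properties (evaluated before the variables above):
#   java -Daws.accessKeyId=AKIAIOSFODNN7EXAMPLE \
#        -Daws.secretKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY \
#        -jar camel-app.jar
```

If both are present, the system properties win, since they come first in the evaluation order.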
For more information, see the [AWS credentials
documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html).

## EKS Producer operations

The Camel AWS EKS component provides the following operations on the
producer side:

- listClusters

- createCluster

- describeCluster

- deleteCluster

# Producer Examples

- listClusters: this operation will list the available clusters in EKS



    from("direct:listClusters")
        .to("aws2-eks://test?eksClient=#amazonEksClient&operation=listClusters");

# Using a POJO as body

Building an AWS request can be complex because of the number of
options. You can therefore use a POJO as the message body instead. AWS
EKS supports multiple operations; for example, for a list clusters
request, you can do something like:

    from("direct:start")
        .setBody(ListClustersRequest.builder().maxResults(12).build())
        .to("aws2-eks://test?eksClient=#amazonEksClient&operation=listClusters&pojoRequest=true");

This way, you pass the request object directly, without setting the
headers and options specific to this operation.

# Dependencies

Maven users will need to add the following dependency to their pom.xml.

**pom.xml**

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-aws2-eks</artifactId>
        <version>${camel-version}</version>
    </dependency>

where `${camel-version}` must be replaced by the actual version of
Camel.

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|configuration|Component configuration||object|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|operation|The operation to perform||object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name|false|string| +|region|The region in which EKS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|useDefaultCredentialsProvider|Set whether the EKS client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the EKS client should expect to load credentials through a profile credentials provider.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|eksClient|To use an existing configured AWS EKS client||object| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. 
Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|proxyHost|To define a proxy host when instantiating the EKS client||string| +|proxyPort|To define a proxy port when instantiating the EKS client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the EKS client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useSessionCredentials|Set whether the EKS client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in EKS.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|label|Logical name||string| +|operation|The operation to perform||object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name|false|string| +|region|The region in which EKS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. 
This option needs to be used in combination with overrideEndpoint option||string| +|useDefaultCredentialsProvider|Set whether the EKS client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the EKS client should expect to load credentials through a profile credentials provider.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|eksClient|To use an existing configured AWS EKS client||object| +|proxyHost|To define a proxy host when instantiating the EKS client||string| +|proxyPort|To define a proxy port when instantiating the EKS client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the EKS client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useSessionCredentials|Set whether the EKS client should expect to use Session Credentials. 
This is useful in a situation in which the user needs to assume an IAM role for doing operations in EKS.|false|boolean|

diff --git a/camel-aws2-eventbridge.md b/camel-aws2-eventbridge.md
new file mode 100644
index 0000000000000000000000000000000000000000..1d6e00cff21d9d18a79b5307b0d998e9b9600510
--- /dev/null
+++ b/camel-aws2-eventbridge.md
@@ -0,0 +1,365 @@
# Aws2-eventbridge

**Since Camel 3.6**

**Only producer is supported**

The AWS2 Eventbridge component allows you to manage rules, targets and
events on [AWS Eventbridge](https://aws.amazon.com/eventbridge/).

Prerequisites

You must have a valid Amazon Web Services developer account, and be
signed up to use Amazon Eventbridge. More information is available at
[Amazon Eventbridge](https://aws.amazon.com/eventbridge/).

To create a rule that triggers on an action by an AWS service that does
not emit events, you can base the rule on API calls made by that
service. The API calls are recorded by AWS CloudTrail, so you’ll need to
have CloudTrail enabled. For more information, check [Services Supported
by CloudTrail Event
History](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events.html).

# URI Format

    aws2-eventbridge://label[?options]

You can append query options to the URI in the following format:

`?option1=value&option2=value&...`

## Static credentials, Default Credential Provider and Profile Credentials Provider

You can avoid using explicit static credentials by setting the
useDefaultCredentialsProvider option to true.

The order of evaluation for the Default Credentials Provider is the
following:

- Java system properties - `aws.accessKeyId` and `aws.secretKey`.

- Environment variables - `AWS_ACCESS_KEY_ID` and
  `AWS_SECRET_ACCESS_KEY`.

- Web Identity Token from AWS STS.

- The shared credentials and config files.
- Amazon ECS container credentials - loaded from Amazon ECS if the
  environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is
  set.

- Amazon EC2 Instance profile credentials.

You can also use the Profile Credentials Provider by setting the
useProfileCredentialsProvider option to true and profileCredentialsName
to the profile name.

Only one of static, default and profile credentials can be used at the
same time.

For more information, see the [AWS credentials
documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html).

## AWS2-Eventbridge Producer operations

The Camel AWS2-Eventbridge component provides the following operations
on the producer side:

- putRule

- putTargets

- removeTargets

- deleteRule

- enableRule

- disableRule

- listRules

- describeRule

- listTargetsByRule

- listRuleNamesByTarget

- putEvent

- PutRule: this operation creates a rule related to an eventbus



    from("direct:putRule").process(new Processor() {

        @Override
        public void process(Exchange exchange) throws Exception {
            exchange.getIn().setHeader(EventbridgeConstants.RULE_NAME, "firstrule");
        }
    })
    .to("aws2-eventbridge://test?operation=putRule&eventPatternFile=file:src/test/resources/eventpattern.json")
    .to("mock:result");

This operation will create a rule named *firstrule*, using a JSON file
to define the EventPattern.
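The contents of the referenced eventpattern.json are not shown in this documentation. As an illustration only, an EventBridge event pattern matching EC2 instance state-change events could look like the following (the matched source and states are assumptions for the example):

```json
{
  "source": ["aws.ec2"],
  "detail-type": ["EC2 Instance State-change Notification"],
  "detail": {
    "state": ["running", "stopped"]
  }
}
```

An incoming event triggers the rule only if it matches every field listed in the pattern.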
- PutTargets: this operation will add a target to the rule



    from("direct:start").process(new Processor() {

        @Override
        public void process(Exchange exchange) throws Exception {
            exchange.getIn().setHeader(EventbridgeConstants.RULE_NAME, "firstrule");
            Target target = Target.builder().id("sqs-queue").arn("arn:aws:sqs:eu-west-1:780410022472:camel-connector-test")
                    .build();
            List<Target> targets = new ArrayList<>();
            targets.add(target);
            exchange.getIn().setHeader(EventbridgeConstants.TARGETS, targets);
        }
    })
    .to("aws2-eventbridge://test?operation=putTargets")
    .to("mock:result");

This operation will add the sqs-queue target, with the given ARN, to
the targets of the *firstrule* rule.

- RemoveTargets: this operation will remove a collection of targets
  from the rule



    from("direct:start").process(new Processor() {

        @Override
        public void process(Exchange exchange) throws Exception {
            exchange.getIn().setHeader(EventbridgeConstants.RULE_NAME, "firstrule");
            List<String> ids = new ArrayList<>();
            ids.add("sqs-queue");
            exchange.getIn().setHeader(EventbridgeConstants.TARGETS_IDS, ids);
        }
    })
    .to("aws2-eventbridge://test?operation=removeTargets")
    .to("mock:result");

This operation will remove the sqs-queue target from the *firstrule*
rule.

- DeleteRule: this operation will delete a rule related to an eventbus



    from("direct:start").process(new Processor() {

        @Override
        public void process(Exchange exchange) throws Exception {
            exchange.getIn().setHeader(EventbridgeConstants.RULE_NAME, "firstrule");
        }
    })
    .to("aws2-eventbridge://test?operation=deleteRule")
    .to("mock:result");

This operation will remove the *firstrule* rule from the test eventbus.
+ +- EnableRule: this operation will enable a rule related to an eventbus + + + + from("direct:start").process(new Processor() { + + @Override + public void process(Exchange exchange) throws Exception { + exchange.getIn().setHeader(EventbridgeConstants.RULE_NAME, "firstrule"); + } + }) + .to("aws2-eventbridge://test?operation=enableRule") + .to("mock:result"); + +This operation will enable the *firstrule* rule from the test eventbus. + +- DisableRule: this operation will disable a rule related to an + eventbus + + + + from("direct:start").process(new Processor() { + + @Override + public void process(Exchange exchange) throws Exception { + exchange.getIn().setHeader(EventbridgeConstants.RULE_NAME, "firstrule"); + } + }) + .to("aws2-eventbridge://test?operation=disableRule") + .to("mock:result"); + +This operation will disable the *firstrule* rule from the test eventbus. + +- ListRules: this operation will list all the rules related to an + eventbus with prefix first + + + + from("direct:start").process(new Processor() { + + @Override + public void process(Exchange exchange) throws Exception { + exchange.getIn().setHeader(EventbridgeConstants.RULE_NAME_PREFIX, "first"); + } + }) + .to("aws2-eventbridge://test?operation=listRules") + .to("mock:result"); + +This operation will list all the rules with prefix first from the test +eventbus. + +- DescribeRule: this operation will describe a specified rule related + to an eventbus + + + + from("direct:start").process(new Processor() { + + @Override + public void process(Exchange exchange) throws Exception { + exchange.getIn().setHeader(EventbridgeConstants.RULE_NAME, "firstrule"); + } + }) + .to("aws2-eventbridge://test?operation=describeRule") + .to("mock:result"); + +This operation will describe the *firstrule* rule from the test +eventbus. 
- ListTargetsByRule: this operation will return a list of targets
  associated with a rule



    from("direct:start").process(new Processor() {

        @Override
        public void process(Exchange exchange) throws Exception {
            exchange.getIn().setHeader(EventbridgeConstants.RULE_NAME, "firstrule");
        }
    })
    .to("aws2-eventbridge://test?operation=listTargetsByRule")
    .to("mock:result");

This operation will return a list of targets associated with the
*firstrule* rule.

- ListRuleNamesByTarget: this operation will return a list of rules
  associated with a target



    from("direct:start").process(new Processor() {

        @Override
        public void process(Exchange exchange) throws Exception {
            exchange.getIn().setHeader(EventbridgeConstants.TARGET_ARN, "firstrule");
        }
    })
    .to("aws2-eventbridge://test?operation=listRuleNamesByTarget")
    .to("mock:result");

This operation will return a list of rules associated with the given
target.

- PutEvent: this operation will send an event to the event bus



    from("direct:start").process(new Processor() {

        @Override
        public void process(Exchange exchange) throws Exception {
            exchange.getIn().setHeader(EventbridgeConstants.EVENT_RESOURCES_ARN, "arn:aws:sqs:eu-west-1:780410022472:camel-connector-test");
            exchange.getIn().setHeader(EventbridgeConstants.EVENT_SOURCE, "com.pippo");
            exchange.getIn().setHeader(EventbridgeConstants.EVENT_DETAIL_TYPE, "peppe");
            exchange.getIn().setBody("Test Event");
        }
    })
    .to("aws2-eventbridge://test?operation=putEvent")
    .to("mock:result");

This operation will return a list of entries, with the related IDs,
sent to the event bus.

# Updating the rule

To update a rule, you’ll need to perform the putRule operation again.
There is no explicit update rule operation in the Java SDK.

# Dependencies

Maven users will need to add the following dependency to their pom.xml.
+
+**pom.xml**
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-aws2-eventbridge</artifactId>
+        <version>${camel-version}</version>
+    </dependency>
+
+where `${camel-version}` must be replaced by the actual version of
+Camel.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|configuration|Component configuration||object|
+|eventPatternFile|EventPattern File||string|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|operation|The operation to perform|putRule|object|
+|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean|
+|pojoRequest|If we want to use a POJO request as body or not|false|boolean|
+|region|The region in which the Eventbridge client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string|
+|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component.
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|eventbridgeClient|To use an existing configured AWS Eventbridge client||object| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|proxyHost|To define a proxy host when instantiating the Eventbridge client||string| +|proxyPort|To define a proxy port when instantiating the Eventbridge client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the Eventbridge client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the Eventbridge client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the Eventbridge client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the Eventbridge client should expect to use Session Credentials. 
This is useful in a situation in which the user needs to assume an IAM role for doing operations in Eventbridge.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|eventbusNameOrArn|Event bus name or ARN||string| +|eventPatternFile|EventPattern File||string| +|operation|The operation to perform|putRule|object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|region|The region in which the Eventbridge client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|eventbridgeClient|To use an existing configured AWS Eventbridge client||object|
+|proxyHost|To define a proxy host when instantiating the Eventbridge client||string|
+|proxyPort|To define a proxy port when instantiating the Eventbridge client||integer|
+|proxyProtocol|To define a proxy protocol when instantiating the Eventbridge client|HTTPS|object|
+|accessKey|Amazon AWS Access Key||string|
+|profileCredentialsName|If using a profile credentials provider this parameter will set the profile name||string|
+|secretKey|Amazon AWS Secret Key||string|
+|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string|
+|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean|
+|useDefaultCredentialsProvider|Set whether the Eventbridge client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean|
+|useProfileCredentialsProvider|Set whether the Eventbridge client should expect to load credentials through a profile credentials provider.|false|boolean|
+|useSessionCredentials|Set whether the Eventbridge client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in Eventbridge.|false|boolean|
diff --git a/camel-aws2-iam.md b/camel-aws2-iam.md
new file mode 100644
index 0000000000000000000000000000000000000000..46a20db0e1a9bed6e942a510fd2f9b2fbe32365b
--- /dev/null
+++ b/camel-aws2-iam.md
@@ -0,0 +1,228 @@
+# Aws2-iam
+
+**Since Camel 3.1**
+
+**Only producer is supported**
+
+The AWS2 IAM component supports managing users, groups and access keys
+on [AWS IAM](https://aws.amazon.com/iam/).
+
+Prerequisites
+
+You must have a valid Amazon Web Services developer account, and be
+signed up to use Amazon IAM. More information is available at [Amazon
+IAM](https://aws.amazon.com/iam/).
+
+# URI Format
+
+    aws2-iam://label[?options]
+
+You can append query options to the URI in the following format:
+
+`?option=value&option2=value&...`
+
+The AWS2 IAM component works on the aws-global region, and it has
+aws-global as the default region.
+
+Required IAM component options
+
+You have to provide the iamClient in the Registry or your
+accessKey and secretKey to access the [Amazon
+IAM](https://aws.amazon.com/iam/) service.
+
+# Usage
+
+## Static credentials, Default Credential Provider and Profile Credentials Provider
+
+You can avoid using explicit static credentials by setting the
+useDefaultCredentialsProvider option to true.
+
+The order of evaluation for the Default Credentials Provider is the
+following:
+
+- Java system properties - `aws.accessKeyId` and `aws.secretKey`.
+
+- Environment variables - `AWS_ACCESS_KEY_ID` and
+  `AWS_SECRET_ACCESS_KEY`.
+
+- Web Identity Token from AWS STS.
+
+- The shared credentials and config files.
+
+- Amazon ECS container credentials - loaded from the Amazon ECS if the
+  environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is
+  set.
+
+- Amazon EC2 Instance profile credentials.
+
+You can also use the Profile Credentials Provider by setting the
+useProfileCredentialsProvider option to true and
+profileCredentialsName to the profile name.
+
+Only one of static, default and profile credentials can be used at the
+same time.
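The first-match resolution that the default provider chain performs can be sketched in plain Java. This is an illustration only, not the AWS SDK implementation: the class and method names are hypothetical, and only the first two sources of the chain are modeled.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.function.Supplier;

public class CredentialsChainSketch {

    // Walks the chain in order and returns the first non-null value,
    // mirroring how the default provider stops at the first source
    // that can supply credentials.
    static String resolve(List<Supplier<String>> chain) {
        return chain.stream()
                .map(Supplier::get)
                .filter(Objects::nonNull)
                .findFirst()
                .orElseThrow(() -> new IllegalStateException("Unable to load credentials"));
    }

    public static void main(String[] args) {
        List<Supplier<String>> chain = Arrays.asList(
                () -> System.getProperty("aws.accessKeyId"),  // 1. Java system properties
                () -> System.getenv("AWS_ACCESS_KEY_ID"));    // 2. Environment variables

        // With the system property set, it wins regardless of the environment,
        // because it is evaluated first. "AKIDEXAMPLE" is a dummy value.
        System.setProperty("aws.accessKeyId", "AKIDEXAMPLE");
        System.out.println(resolve(chain));
    }
}
```

The point of the sketch is simply that the sources are ordered and evaluation stops at the first hit, which is why only one credentials style should be configured at a time.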
+
+For more information, see the [AWS credentials
+documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html).
+
+## IAM Producer operations
+
+The Camel AWS2 IAM component provides the following operations on the
+producer side:
+
+- listAccessKeys
+
+- createUser
+
+- deleteUser
+
+- listUsers
+
+- getUser
+
+- createAccessKey
+
+- deleteAccessKey
+
+- updateAccessKey
+
+- createGroup
+
+- deleteGroup
+
+- listGroups
+
+- addUserToGroup
+
+- removeUserFromGroup
+
+# Producer Examples
+
+- createUser: this operation will create a user in IAM
+
+
+
+      from("direct:createUser")
+      .setHeader(IAM2Constants.USERNAME, constant("camel"))
+      .to("aws2-iam://test?iamClient=#amazonIAMClient&operation=createUser")
+
+- deleteUser: this operation will delete a user in IAM
+
+
+
+      from("direct:deleteUser")
+      .setHeader(IAM2Constants.USERNAME, constant("camel"))
+      .to("aws2-iam://test?iamClient=#amazonIAMClient&operation=deleteUser")
+
+- listUsers: this operation will list the users in IAM
+
+
+
+      from("direct:listUsers")
+      .to("aws2-iam://test?iamClient=#amazonIAMClient&operation=listUsers")
+
+- createGroup: this operation will add a group in IAM
+
+
+
+      from("direct:createGroup")
+      .setHeader(IAM2Constants.GROUP_NAME, constant("camel"))
+      .to("aws2-iam://test?iamClient=#amazonIAMClient&operation=createGroup")
+
+- deleteGroup: this operation will delete a group in IAM
+
+
+
+      from("direct:deleteGroup")
+      .setHeader(IAM2Constants.GROUP_NAME, constant("camel"))
+      .to("aws2-iam://test?iamClient=#amazonIAMClient&operation=deleteGroup")
+
+- listGroups: this operation will list the groups in IAM
+
+
+
+      from("direct:listGroups")
+      .to("aws2-iam://test?iamClient=#amazonIAMClient&operation=listGroups")
+
+# Using a POJO as body
+
+Sometimes building an AWS request can be complex because of multiple
+options. For this reason, you can also use a POJO as the body.
In AWS
+IAM, there are multiple operations you can submit. For example, for a
+Create User request, you can do something like this:
+
+    from("direct:createUser")
+    .setBody(CreateUserRequest.builder().userName("camel").build())
+    .to("aws2-iam://test?iamClient=#amazonIAMClient&operation=createUser&pojoRequest=true")
+
+In this way, you'll pass the request directly, without needing to pass
+headers and options specifically related to this operation.
+
+# Dependencies
+
+Maven users will need to add the following dependency to their pom.xml.
+
+**pom.xml**
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-aws2-iam</artifactId>
+        <version>${camel-version}</version>
+    </dependency>
+
+where `${camel-version}` must be replaced by the actual version of
+Camel.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|configuration|Component configuration||object|
+|iamClient|To use an existing configured AWS IAM client||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|operation|The operation to perform. You can configure a default operation on the component level, or the operation as part of the endpoint, or via a message header with the key CamelAwsIAMOperation.||object|
+|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean|
+|pojoRequest|If we want to use a POJO request as body or not|false|boolean|
+|region|The region in which IAM client needs to work.
When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()|aws-global|string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. 
You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|proxyHost|To define a proxy host when instantiating the IAM client||string| +|proxyPort|To define a proxy port when instantiating the IAM client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the IAM client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the IAM client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the IAM client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the IAM client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume a IAM role for doing operations in IAM.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|label|Logical name||string| +|iamClient|To use an existing configured AWS IAM client||object| +|operation|The operation to perform. You can configure a default operation on the component level, or the operation as part of the endpoint, or via a message header with the key CamelAwsIAMOperation.||object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|region|The region in which IAM client needs to work. 
When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()|aws-global|string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|proxyHost|To define a proxy host when instantiating the IAM client||string| +|proxyPort|To define a proxy port when instantiating the IAM client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the IAM client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the IAM client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the IAM client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the IAM client should expect to use Session 
Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in IAM.|false|boolean|
diff --git a/camel-aws2-kinesis-firehose.md b/camel-aws2-kinesis-firehose.md
new file mode 100644
index 0000000000000000000000000000000000000000..cbd970ca1b16b5706d8d77d122229a3228a253c1
--- /dev/null
+++ b/camel-aws2-kinesis-firehose.md
@@ -0,0 +1,192 @@
+# Aws2-kinesis-firehose
+
+**Since Camel 3.2**
+
+**Only producer is supported**
+
+The AWS2 Kinesis Firehose component supports sending messages to the
+Amazon Kinesis Firehose service (Batch not supported).
+
+Prerequisites
+
+You must have a valid Amazon Web Services developer account, and be
+signed up to use Amazon Kinesis Firehose. More information is available
+at [AWS Kinesis Firehose](https://aws.amazon.com/kinesis/firehose/).
+
+The AWS2 Kinesis Firehose component is not supported in OSGI.
+
+# URI Format
+
+    aws2-kinesis-firehose://delivery-stream-name[?options]
+
+The stream needs to be created prior to it being used.
+
+You can append query options to the URI in the following format:
+
+`?option=value&option2=value&...`
+
+# URI Options
+
+Required Kinesis Firehose component options
+
+You have to provide the FirehoseClient in the Registry, with proxies
+and relevant credentials configured.
+
+# Usage
+
+## Static credentials, Default Credential Provider and Profile Credentials Provider
+
+You can avoid using explicit static credentials by setting the
+useDefaultCredentialsProvider option to true.
+
+The order of evaluation for the Default Credentials Provider is the
+following:
+
+- Java system properties - `aws.accessKeyId` and `aws.secretKey`.
+
+- Environment variables - `AWS_ACCESS_KEY_ID` and
+  `AWS_SECRET_ACCESS_KEY`.
+
+- Web Identity Token from AWS STS.
+
+- The shared credentials and config files.
+
+- Amazon ECS container credentials - loaded from the Amazon ECS if the
+  environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is
+  set.
+
+- Amazon EC2 Instance profile credentials.
+
+You can also use the Profile Credentials Provider by setting the
+useProfileCredentialsProvider option to true and
+profileCredentialsName to the profile name.
+
+Only one of static, default and profile credentials can be used at the
+same time.
+
+For more information, see the [AWS credentials
+documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html).
+
+## Amazon Kinesis Firehose configuration
+
+You then have to reference the FirehoseClient in the
+`amazonKinesisFirehoseClient` URI option.
+
+    from("aws2-kinesis-firehose://mykinesisdeliverystream?amazonKinesisFirehoseClient=#kinesisClient")
+    .to("log:out?showAll=true");
+
+## Providing AWS Credentials
+
+It is recommended that the credentials are obtained by using the
+[DefaultAWSCredentialsProviderChain](http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html)
+that is the default when creating a new ClientConfiguration instance.
+However, a different
+[AWSCredentialsProvider](http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/AWSCredentialsProvider.html)
+can be specified when calling createClient(…).
+
+## Kinesis Firehose Producer operations
+
+The Camel AWS2 Kinesis Firehose component provides the following
+operations on the producer side:
+
+- SendBatchRecord
+
+- CreateDeliveryStream
+
+- DeleteDeliveryStream
+
+- DescribeDeliveryStream
+
+- UpdateDestination
+
+## Send Batch Records Example
+
+You can send an iterable of Kinesis Record (as the following example
+shows), or you can send a PutRecordBatchRequest POJO instance directly
+in the body.
+
+    @Test
+    public void testFirehoseBatchRouting() throws Exception {
+        Exchange exchange = template.send("direct:start", ExchangePattern.InOnly, new Processor() {
+            public void process(Exchange exchange) throws Exception {
+                // Build two Kinesis records and send them as a single batch body
+                List<Record> recs = new ArrayList<>();
+                Record rec = Record.builder().data(SdkBytes.fromString("Test1", Charset.defaultCharset())).build();
+                Record rec1 = Record.builder().data(SdkBytes.fromString("Test2", Charset.defaultCharset())).build();
+                recs.add(rec);
+                recs.add(rec1);
+                exchange.getIn().setBody(recs);
+            }
+        });
+        assertNotNull(exchange.getIn().getBody());
+    }
+
+    from("direct:start").to("aws2-kinesis-firehose://cc?amazonKinesisFirehoseClient=#FirehoseClient&operation=sendBatchRecord");
+
+In the delivery stream, you'll find "Test1Test2".
+
+# Dependencies
+
+Maven users will need to add the following dependency to their pom.xml.
+
+**pom.xml**
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-aws2-kinesis</artifactId>
+        <version>${camel-version}</version>
+    </dependency>
+
+where `${camel-version}` must be replaced by the actual version of
+Camel.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|cborEnabled|This option will set the CBOR\_ENABLED property during the execution|true|boolean|
+|configuration|Component configuration||object|
+|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|operation|The operation to do in case the user don't want to send only a record||object| +|region|The region in which Kinesis Firehose client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|useDefaultCredentialsProvider|Set whether the Kinesis Firehose client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|amazonKinesisFirehoseClient|Amazon Kinesis Firehose client to use for all requests for this endpoint||object| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|proxyHost|To define a proxy host when instantiating the Kinesis Firehose client||string| +|proxyPort|To define a proxy port when instantiating the Kinesis Firehose client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the Kinesis Firehose client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider this parameter will set the profile name.||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume a IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useProfileCredentialsProvider|Set whether the Kinesis Firehose client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the Kinesis Firehose client should expect to use Session Credentials. This is useful in situation in which the user needs to assume a IAM role for doing operations in Kinesis Firehose.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|streamName|Name of the stream||string| +|cborEnabled|This option will set the CBOR\_ENABLED property during the execution|true|boolean| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option|false|boolean| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|operation|The operation to do in case the user don't want to send only a record||object| +|region|The region in which Kinesis Firehose client needs to work. 
When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|useDefaultCredentialsProvider|Set whether the Kinesis Firehose client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|amazonKinesisFirehoseClient|Amazon Kinesis Firehose client to use for all requests for this endpoint||object| +|proxyHost|To define a proxy host when instantiating the Kinesis Firehose client||string| +|proxyPort|To define a proxy port when instantiating the Kinesis Firehose client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the Kinesis Firehose client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider this parameter will set the profile name.||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume a IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useProfileCredentialsProvider|Set whether the Kinesis Firehose client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the Kinesis 
Firehose client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in Kinesis Firehose.|false|boolean|
diff --git a/camel-aws2-kinesis.md b/camel-aws2-kinesis.md
new file mode 100644
index 0000000000000000000000000000000000000000..5344e52e0e7a74fe4d0c60ef2926b480cb040c3c
--- /dev/null
+++ b/camel-aws2-kinesis.md
@@ -0,0 +1,242 @@
+# Aws2-kinesis
+
+**Since Camel 3.2**
+
+**Both producer and consumer are supported**
+
+The AWS2 Kinesis component supports consuming messages from and
+producing messages to the Amazon Kinesis service.
+
+The AWS2 Kinesis component also supports both a synchronous and an
+asynchronous client, so you can choose whichever fits your
+requirements: if you need the connection (client) to be asynchronous,
+set the *asyncClient* property (also available in the DSL) to true.
+
+Prerequisites
+
+You must have a valid Amazon Web Services developer account, and be
+signed up to use Amazon Kinesis. More information is available at [AWS
+Kinesis](https://aws.amazon.com/kinesis/).
+
+# URI Format
+
+    aws2-kinesis://stream-name[?options]
+
+The stream needs to be created prior to it being used.
+
+You can append query options to the URI in the following format:
+
+`?option=value&option2=value&...`
+
+Required Kinesis component options
+
+You have to provide the KinesisClient in the Registry, with proxies and
+relevant credentials configured.
+
+# Batch Consumer
+
+This component implements the Batch Consumer.
+
+This allows you, for instance, to know how many messages exist in the
+batch and let the Aggregator aggregate that number of messages.
+
+The consumer can consume either from a single specific shard or from
+all available shards of Amazon Kinesis. If you leave the *shardId*
+property empty in the DSL configuration, all available shards are
+consumed; otherwise, only the shard corresponding to the given shardId
+is consumed.
+
+# Batch Producer
+
+This component implements the Batch Producer.
+
+This allows you to send multiple messages in a single request to Amazon
+Kinesis. Batches of more than 500 messages are allowed; the producer
+will split them into multiple requests.
+
+The batch type needs to implement the `Iterable` interface. For example,
+it can be a `List`, `Set` or any other collection type. The message type
+can be one or more of types `byte[]`, `ByteBuffer`, UTF-8 `String`, or
+`InputStream`. Other types are not supported.
+
+# Usage
+
+## Static credentials, Default Credential Provider and Profile Credentials Provider
+
+You can avoid using explicit static credentials by setting the
+useDefaultCredentialsProvider option to true.
+
+The order of evaluation for the Default Credentials Provider is the
+following:
+
+- Java system properties - `aws.accessKeyId` and `aws.secretKey`.
+
+- Environment variables - `AWS_ACCESS_KEY_ID` and
+  `AWS_SECRET_ACCESS_KEY`.
+
+- Web Identity Token from AWS STS.
+
+- The shared credentials and config files.
+
+- Amazon ECS container credentials - loaded from the Amazon ECS if the
+  environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is
+  set.
+
+- Amazon EC2 Instance profile credentials.
+
+You can also use the Profile Credentials Provider by setting the
+useProfileCredentialsProvider option to true and
+profileCredentialsName to the profile name.
+
+Only one of static, default and profile credentials can be used at the
+same time.
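The batch-splitting behavior described in the Batch Producer section — batches larger than 500 records being broken into multiple requests — can be sketched in plain Java. This is an illustrative helper, not the component's actual code; the 500 limit is the one stated above.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSplitSketch {

    // Splits an arbitrarily large batch into sub-batches of at most
    // `limit` records, one per outgoing request.
    static <T> List<List<T>> split(List<T> records, int limit) {
        List<List<T>> requests = new ArrayList<>();
        for (int i = 0; i < records.size(); i += limit) {
            requests.add(records.subList(i, Math.min(i + limit, records.size())));
        }
        return requests;
    }

    public static void main(String[] args) {
        List<Integer> batch = new ArrayList<>();
        for (int i = 0; i < 1200; i++) {
            batch.add(i);
        }
        // 1200 records -> sub-batches of 500, 500 and 200 records
        List<List<Integer>> requests = split(batch, 500);
        System.out.println(requests.size());          // 3
        System.out.println(requests.get(2).size());   // 200
    }
}
```

Since the producer handles the split, the route only needs to hand over one `Iterable`, regardless of its size.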
+
+For more information, see the [AWS credentials
+documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html).
+
+## AmazonKinesis configuration
+
+You then have to reference the KinesisClient in the
+`amazonKinesisClient` URI option.
+
+    from("aws2-kinesis://mykinesisstream?amazonKinesisClient=#kinesisClient")
+    .to("log:out?showAll=true");
+
+## Providing AWS Credentials
+
+It is recommended that the credentials are obtained by using the
+[DefaultAWSCredentialsProviderChain](http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html)
+that is the default when creating a new ClientConfiguration instance.
+However, a different
+[AWSCredentialsProvider](http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/AWSCredentialsProvider.html)
+can be specified when calling createClient(…).
+
+## AWS Kinesis KCL Consumer
+
+The component also supports the KCL (Kinesis Client Library) for
+consuming from a Kinesis Data Stream.
+
+To enable this feature, you'll need to set two different parameters in
+your endpoint:
+
+    from("aws2-kinesis://mykinesisstream?asyncClient=true&useDefaultCredentialsProvider=true&useKclConsumers=true")
+    .to("log:out?showAll=true");
+
+This feature makes it possible to automatically checkpoint the shard
+iterations by combining the usage of KCL, a DynamoDB table and
+CloudWatch alarms.
+
+Everything will work out of the box, by simply using your AWS
+credentials.
+
+On startup, the consumer will need around 60-70 seconds to prepare
+everything: listing the shards and creating/querying the lease table on
+DynamoDB. Keep this in mind while working with the KCL consumer.
+
+# Dependencies
+
+Maven users will need to add the following dependency to their pom.xml.
+
+**pom.xml**
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-aws2-kinesis</artifactId>
+        <version>${camel-version}</version>
+    </dependency>
+
+where `${camel-version}` must be replaced by the actual version of
+Camel.
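The checkpointing that the KCL consumer performs can be pictured as keeping, per shard, the sequence number of the last processed record, so that a restarted consumer resumes from there. A conceptual sketch only: the DynamoDB lease table is replaced by an in-memory map, and all names and values are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

public class CheckpointSketch {

    // Stands in for the DynamoDB lease table that KCL maintains.
    private final Map<String, String> lastSequenceByShard = new HashMap<>();

    // Called after a record has been successfully processed.
    void checkpoint(String shardId, String sequenceNumber) {
        lastSequenceByShard.put(shardId, sequenceNumber);
    }

    // Where a restarted consumer should resume; null means no checkpoint
    // yet, i.e. start from the configured iterator type (e.g. TRIM_HORIZON).
    String resumeFrom(String shardId) {
        return lastSequenceByShard.get(shardId);
    }

    public static void main(String[] args) {
        CheckpointSketch table = new CheckpointSketch();
        table.checkpoint("shardId-000000000000", "seq-0001");
        System.out.println(table.resumeFrom("shardId-000000000000"));
    }
}
```

In the real setup, this state lives in the DynamoDB table that KCL creates, which is why the first startup takes the extra time noted above.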
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|cborEnabled|This option will set the CBOR\_ENABLED property during the execution|true|boolean|
+|configuration|Component configuration||object|
+|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option|false|boolean|
+|region|The region in which the Kinesis client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string|
+|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|iteratorType|Defines where in the Kinesis stream to start getting records|TRIM\_HORIZON|object|
+|maxResultsPerRequest|Maximum number of records that will be fetched in each poll|1|integer|
+|sequenceNumber|The sequence number to start polling from. Required if iteratorType is set to AFTER\_SEQUENCE\_NUMBER or AT\_SEQUENCE\_NUMBER||string|
+|shardClosed|Defines the behavior when a shard is closed. 
Possible values are ignore, silent and fail. In case of ignore, a message will be logged and the consumer will restart from the beginning; in case of silent, there will be no logging and the consumer will start from the beginning; in case of fail, a ReachedClosedStateException will be raised|ignore|object|
+|shardId|Defines which shardId in the Kinesis stream to get records from||string|
+|shardMonitorInterval|The interval in milliseconds to wait between shard polling|10000|integer|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|amazonKinesisClient|Amazon Kinesis client to use for all requests for this endpoint||object|
+|asyncClient|If we want to use a KinesisAsyncClient instance, set it to true|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|cloudWatchAsyncClient|If we want to use a KCL consumer, we can pass an instance of CloudWatchAsyncClient||object|
+|dynamoDbAsyncClient|If we want to use a KCL consumer, we can pass an instance of DynamoDbAsyncClient||object|
+|useKclConsumers|If we want to use a KCL consumer, set it to true|false|boolean|
+|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean|
+|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean|
+|proxyHost|To define a proxy host when instantiating the Kinesis client||string|
+|proxyPort|To define a proxy port when instantiating the Kinesis client||integer|
+|proxyProtocol|To define a proxy protocol when instantiating the Kinesis client|HTTPS|object|
+|accessKey|Amazon AWS Access Key||string|
+|profileCredentialsName|If using a profile credentials provider this parameter will set the profile name.||string|
+|secretKey|Amazon AWS Secret Key||string|
+|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string|
+|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean|
+|useDefaultCredentialsProvider|Set whether the Kinesis client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean|
+|useProfileCredentialsProvider|Set whether the Kinesis client should expect to load credentials through a profile credentials provider.|false|boolean|
+|useSessionCredentials|Set whether the Kinesis client should expect to use Session Credentials. 
This is useful in situations in which the user needs to assume an IAM role for doing operations in Kinesis.|false|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|streamName|Name of the stream||string|
+|cborEnabled|This option will set the CBOR\_ENABLED property during the execution|true|boolean|
+|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option|false|boolean|
+|region|The region in which the Kinesis client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string|
+|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string|
+|iteratorType|Defines where in the Kinesis stream to start getting records|TRIM\_HORIZON|object|
+|maxResultsPerRequest|Maximum number of records that will be fetched in each poll|1|integer|
+|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean|
+|sequenceNumber|The sequence number to start polling from. Required if iteratorType is set to AFTER\_SEQUENCE\_NUMBER or AT\_SEQUENCE\_NUMBER||string|
+|shardClosed|Defines the behavior when a shard is closed. Possible values are ignore, silent and fail. 
In case of ignore, a message will be logged and the consumer will restart from the beginning; in case of silent, there will be no logging and the consumer will start from the beginning; in case of fail, a ReachedClosedStateException will be raised|ignore|object|
+|shardId|Defines which shardId in the Kinesis stream to get records from||string|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object|
+|shardMonitorInterval|The interval in milliseconds to wait between shard polling|10000|integer|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|amazonKinesisClient|Amazon Kinesis client to use for all requests for this endpoint||object|
+|asyncClient|If we want to use a KinesisAsyncClient instance, set it to true|false|boolean|
+|cloudWatchAsyncClient|If we want to use a KCL consumer, we can pass an instance of CloudWatchAsyncClient||object|
+|dynamoDbAsyncClient|If we want to use a KCL consumer, we can pass an instance of DynamoDbAsyncClient||object|
+|useKclConsumers|If we want to use a KCL consumer, set it to true|false|boolean|
+|proxyHost|To define a proxy host when instantiating the Kinesis client||string|
+|proxyPort|To define a proxy port when instantiating the Kinesis client||integer|
+|proxyProtocol|To define a proxy protocol when instantiating the Kinesis client|HTTPS|object|
+|backoffErrorThreshold|The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick-in.||integer|
+|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier should kick-in.||integer|
+|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. 
When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. 
See ScheduledExecutorService in JDK for details.|true|boolean|
+|accessKey|Amazon AWS Access Key||string|
+|profileCredentialsName|If using a profile credentials provider this parameter will set the profile name.||string|
+|secretKey|Amazon AWS Secret Key||string|
+|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string|
+|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean|
+|useDefaultCredentialsProvider|Set whether the Kinesis client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean|
+|useProfileCredentialsProvider|Set whether the Kinesis client should expect to load credentials through a profile credentials provider.|false|boolean|
+|useSessionCredentials|Set whether the Kinesis client should expect to use Session Credentials. This is useful in situations in which the user needs to assume an IAM role for doing operations in Kinesis.|false|boolean|
diff --git a/camel-aws2-kms.md b/camel-aws2-kms.md
new file mode 100644
index 0000000000000000000000000000000000000000..caaada9880f9684617a86b87abfff51ebb52e1e7
--- /dev/null
+++ b/camel-aws2-kms.md
@@ -0,0 +1,193 @@
+# Aws2-kms
+
+**Since Camel 3.1**
+
+**Only producer is supported**
+
+The AWS2 KMS component supports working with keys stored in [AWS
+KMS](https://aws.amazon.com/kms/).
+
+Prerequisites
+
+You must have a valid Amazon Web Services developer account, and be
+signed up to use Amazon KMS. More information is available at [Amazon
+KMS](https://aws.amazon.com/kms/).
+
+# URI Format
+
+    aws2-kms://label[?options]
+
+You can append query options to the URI in the following format:
+
+`?options=value&option2=value&...`
+
+Required KMS component options
+
+You have to provide the amazonKmsClient in the Registry or your
+accessKey and secretKey to access the [Amazon
+KMS](https://aws.amazon.com/kms/) service.
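Assembling such an endpoint URI with query options is plain string concatenation; a small illustrative sketch (a hypothetical helper, not a Camel API):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative sketch: assemble an endpoint URI of the form
// scheme://label?option=value&option2=value from a map of options.
// This is a hypothetical helper, not part of Camel.
public class EndpointUri {

    static String build(String scheme, String label, Map<String, String> options) {
        if (options.isEmpty()) {
            return scheme + "://" + label; // no query part when there are no options
        }
        String query = options.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining("&"));
        return scheme + "://" + label + "?" + query;
    }

    public static void main(String[] args) {
        Map<String, String> options = new LinkedHashMap<>(); // keeps insertion order
        options.put("operation", "listKeys");
        options.put("pojoRequest", "true");
        System.out.println(build("aws2-kms", "test", options));
        // prints: aws2-kms://test?operation=listKeys&pojoRequest=true
    }
}
```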
+
+# Usage
+
+## Static credentials, Default Credential Provider and Profile Credentials Provider
+
+You can avoid using explicit static credentials by setting the
+useDefaultCredentialsProvider option to true.
+
+The order of evaluation for the Default Credentials Provider is the
+following:
+
+- Java system properties - `aws.accessKeyId` and `aws.secretKey`.
+
+- Environment variables - `AWS_ACCESS_KEY_ID` and
+  `AWS_SECRET_ACCESS_KEY`.
+
+- Web Identity Token from AWS STS.
+
+- The shared credentials and config files.
+
+- Amazon ECS container credentials - loaded from Amazon ECS if the
+  environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is
+  set.
+
+- Amazon EC2 Instance profile credentials.
+
+You can also use the Profile Credentials Provider by setting the
+useProfileCredentialsProvider option to true and profileCredentialsName
+to the profile name.
+
+Only one of static, default and profile credentials can be used at the
+same time.
+
+For more information, see the [AWS credentials
+documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html).
+
+## KMS Producer operations
+
+The Camel AWS KMS component provides the following operations on the
+producer side:
+
+- listKeys
+
+- createKey
+
+- disableKey
+
+- scheduleKeyDeletion
+
+- describeKey
+
+- enableKey
+
+# Producer Examples
+
+- listKeys: this operation will list the available keys in KMS
+
+
+
+    from("direct:listKeys")
+    .to("aws2-kms://test?kmsClient=#amazonKmsClient&operation=listKeys")
+
+- createKey: this operation will create a key in KMS
+
+
+
+    from("direct:createKey")
+    .to("aws2-kms://test?kmsClient=#amazonKmsClient&operation=createKey")
+
+- disableKey: this operation will disable a key in KMS
+
+
+
+    from("direct:disableKey")
+    .setHeader(KMS2Constants.KEY_ID, constant("123"))
+    .to("aws2-kms://test?kmsClient=#amazonKmsClient&operation=disableKey")
+
+- enableKey: this operation will enable a key in KMS
+
+
+
+    from("direct:enableKey")
+    .setHeader(KMS2Constants.KEY_ID, constant("123"))
+    .to("aws2-kms://test?kmsClient=#amazonKmsClient&operation=enableKey")
+
+# Using a POJO as body
+
+Sometimes building an AWS request can be complex because of multiple
+options. We introduce the possibility to use a POJO as the body. In AWS
+KMS there are multiple operations you can submit; for example, for a
+List Keys request you can do something like:
+
+    from("direct:createUser")
+    .setBody(ListKeysRequest.builder().limit(10).build())
+    .to("aws2-kms://test?kmsClient=#amazonKmsClient&operation=listKeys&pojoRequest=true")
+
+In this way, you’ll pass the request directly, without needing to pass
+headers and options specifically related to this operation.
+
+# Dependencies
+
+Maven users will need to add the following dependency to their pom.xml.
+
+**pom.xml**
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-aws2-kms</artifactId>
+        <version>${camel-version}</version>
+    </dependency>
+
+where `${camel-version}` must be replaced by the actual version of
+Camel.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|configuration|Component configuration||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|operation|The operation to perform||object|
+|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean|
+|pojoRequest|If we want to use a POJO request as body or not|false|boolean|
+|region|The region in which the KMS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string|
+|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|kmsClient|To use an existing configured AWS KMS client||object|
+|proxyHost|To define a proxy host when instantiating the KMS client||string|
+|proxyPort|To define a proxy port when instantiating the KMS client||integer|
+|proxyProtocol|To define a proxy protocol when instantiating the KMS client|HTTPS|object|
+|accessKey|Amazon AWS Access Key||string|
+|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string|
+|secretKey|Amazon AWS Secret Key||string|
+|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string|
+|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean|
+|useDefaultCredentialsProvider|Set whether the KMS client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean|
+|useProfileCredentialsProvider|Set whether the KMS client should expect to load credentials through a profile credentials provider.|false|boolean|
+|useSessionCredentials|Set whether the KMS client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in KMS.|false|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|label|Logical name||string|
+|operation|The operation to perform||object|
+|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean|
+|pojoRequest|If we want to use a POJO request as body or not|false|boolean|
+|region|The region in which the KMS client needs to work. 
When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|kmsClient|To use an existing configured AWS KMS client||object| +|proxyHost|To define a proxy host when instantiating the KMS client||string| +|proxyPort|To define a proxy port when instantiating the KMS client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the KMS client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the KMS client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the KMS client should expect to load credentials through a profile credentials provider.|false|boolean| 
+|useSessionCredentials|Set whether the KMS client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in KMS.|false|boolean|
diff --git a/camel-aws2-lambda.md b/camel-aws2-lambda.md
new file mode 100644
index 0000000000000000000000000000000000000000..437f778c8015da3ea1849ae5cbe2d96a1e31630a
--- /dev/null
+++ b/camel-aws2-lambda.md
@@ -0,0 +1,225 @@
+# Aws2-lambda
+
+**Since Camel 3.2**
+
+**Only producer is supported**
+
+The AWS2 Lambda component supports creating, getting, listing, deleting,
+and invoking [AWS Lambda](https://aws.amazon.com/lambda/) functions.
+
+**Prerequisites**
+
+You must have a valid Amazon Web Services developer account, and be
+signed up to use Amazon Lambda. More information is available at [AWS
+Lambda](https://aws.amazon.com/lambda/).
+
+When creating a Lambda function, you need to specify an IAM role which
+has at least the AWSLambdaBasicExecutionRole policy attached.
+
+# URI Format
+
+    aws2-lambda://functionName[?options]
+
+You can append query options to the URI in the following format:
+
+`?options=value&option2=value&...`
+
+Required Lambda component options
+
+You have to provide the awsLambdaClient in the Registry or your
+accessKey and secretKey to access the [Amazon
+Lambda](https://aws.amazon.com/lambda/) service.
+
+# Usage
+
+## Static credentials, Default Credential Provider and Profile Credentials Provider
+
+You can avoid using explicit static credentials by setting the
+useDefaultCredentialsProvider option to true.
+
+The order of evaluation for the Default Credentials Provider is the
+following:
+
+- Java system properties - `aws.accessKeyId` and `aws.secretKey`.
+
+- Environment variables - `AWS_ACCESS_KEY_ID` and
+  `AWS_SECRET_ACCESS_KEY`.
+
+- Web Identity Token from AWS STS.
+
+- The shared credentials and config files.
+
+- Amazon ECS container credentials - loaded from Amazon ECS if the
+  environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is
+  set.
+
+- Amazon EC2 Instance profile credentials.
+
+You can also use the Profile Credentials Provider by setting the
+useProfileCredentialsProvider option to true and profileCredentialsName
+to the profile name.
+
+Only one of static, default and profile credentials can be used at the
+same time.
+
+For more information, see the [AWS credentials
+documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html).
+
+# List of Available Operations
+
+- listFunctions
+
+- getFunction
+
+- createFunction
+
+- deleteFunction
+
+- invokeFunction
+
+- updateFunction
+
+- createEventSourceMapping
+
+- deleteEventSourceMapping
+
+- listEventSourceMapping
+
+- listTags
+
+- tagResource
+
+- untagResource
+
+- publishVersion
+
+- listVersions
+
+- createAlias
+
+- deleteAlias
+
+- getAlias
+
+- listAliases
+
+# Examples
+
+## Producer Example
+
+To get a full understanding of how the component works, you may have a
+look at these [integration
+tests](https://github.com/apache/camel/tree/main/components/camel-aws/camel-aws2-lambda/src/test/java/org/apache/camel/component/aws2/lambda/integration).
+
+## Producer Examples
+
+- CreateFunction: this operation will create a function for you in AWS
+  Lambda
+
+
+
+    from("direct:createFunction").to("aws2-lambda://GetHelloWithName?operation=createFunction").to("mock:result");
+
+and by sending
+
+    template.send("direct:createFunction", ExchangePattern.InOut, new Processor() {
+        @Override
+        public void process(Exchange exchange) throws Exception {
+            exchange.getIn().setHeader(Lambda2Constants.RUNTIME, "nodejs6.10");
+            exchange.getIn().setHeader(Lambda2Constants.HANDLER, "GetHelloWithName.handler");
+            exchange.getIn().setHeader(Lambda2Constants.DESCRIPTION, "Hello with node.js on Lambda");
exchange.getIn().setHeader(Lambda2Constants.ROLE,
+                    "arn:aws:iam::643534317684:role/lambda-execution-role");
+
+            ClassLoader classLoader = getClass().getClassLoader();
+            File file = new File(
+                    classLoader
+                            .getResource("org/apache/camel/component/aws2/lambda/function/node/GetHelloWithName.zip")
+                            .getFile());
+            FileInputStream inputStream = new FileInputStream(file);
+            exchange.getIn().setBody(inputStream);
+        }
+    });
+
+# Using a POJO as body
+
+Sometimes building an AWS request can be complex because of multiple
+options. We introduce the possibility to use a POJO as the body. In AWS
+Lambda there are multiple operations you can submit; for example, for a
+Get Function request you can do something like:
+
+    from("direct:getFunction")
+    .setBody(GetFunctionRequest.builder().functionName("test").build())
+    .to("aws2-lambda://GetHelloWithName?awsLambdaClient=#awsLambdaClient&operation=getFunction&pojoRequest=true")
+
+In this way, you’ll pass the request directly, without needing to pass
+headers and options specifically related to this operation.
+
+# Dependencies
+
+Maven users will need to add the following dependency to their pom.xml.
+
+**pom.xml**
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-aws2-lambda</artifactId>
+        <version>${camel-version}</version>
+    </dependency>
+
+where `${camel-version}` must be replaced by the actual version of
+Camel.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|configuration|Component configuration||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|operation|The operation to perform. It can be listFunctions, getFunction, createFunction, deleteFunction or invokeFunction|invokeFunction|object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|region|The region in which the Lambda client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|awsLambdaClient|To use an existing configured AwsLambdaClient client||object| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. 
You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|proxyHost|To define a proxy host when instantiating the Lambda client||string| +|proxyPort|To define a proxy port when instantiating the Lambda client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the Lambda client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the Lambda client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the Lambda client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the Lambda client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in Lambda.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|function|Name of the Lambda function.||string| +|operation|The operation to perform. It can be listFunctions, getFunction, createFunction, deleteFunction or invokeFunction|invokeFunction|object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|region|The region in which the Lambda client needs to work. 
When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|awsLambdaClient|To use an existing configured AwsLambdaClient client||object| +|proxyHost|To define a proxy host when instantiating the Lambda client||string| +|proxyPort|To define a proxy port when instantiating the Lambda client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the Lambda client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the Lambda client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the Lambda client should expect to load credentials through a profile credentials 
provider.|false|boolean| +|useSessionCredentials|Set whether the Lambda client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in Lambda.|false|boolean| diff --git a/camel-aws2-mq.md b/camel-aws2-mq.md new file mode 100644 index 0000000000000000000000000000000000000000..0d3ad087d153830929e83567dd45cf01d1d109f9 --- /dev/null +++ b/camel-aws2-mq.md @@ -0,0 +1,216 @@ +# Aws2-mq + +**Since Camel 3.1** + +**Only producer is supported** + +The AWS2 MQ component supports create, run, start, stop and terminate +[AWS MQ](https://aws.amazon.com/amazon-mq/) instances. + +Prerequisites + +You must have a valid Amazon Web Services developer account, and be +signed up to use Amazon MQ. More information is available at [Amazon +MQ](https://aws.amazon.com/amazon-mq/). + +# URI Format + + aws2-mq://label[?options] + +You can append query options to the URI in the following format: + +`?options=value&option2=value&...` + +Required MQ component options + +You have to provide the amazonMqClient in the Registry or your accessKey +and secretKey to access the [Amazon +MQ](https://aws.amazon.com/amazon-mq/) service. + +# Usage + +## Static credentials, Default Credential Provider and Profile Credentials Provider + +You have the possibility of avoiding the usage of explicit static +credentials by specifying the useDefaultCredentialsProvider option and +set it to true. + +The order of evaluation for Default Credentials Provider is the +following: + +- Java system properties - `aws.accessKeyId` and `aws.secretKey`. + +- Environment variables - `AWS_ACCESS_KEY_ID` and + `AWS_SECRET_ACCESS_KEY`. + +- Web Identity Token from AWS STS. + +- The shared credentials and config files. + +- Amazon ECS container credentials - loaded from the Amazon ECS if the + environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is + set. + +- Amazon EC2 Instance profile credentials. 
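
The lookup order above can be sketched in plain Java. This is an illustrative sketch only — the real resolution is performed by the AWS SDK's `DefaultCredentialsProvider`, and only the first two steps of the chain are shown; the class and method names here are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

public class CredentialChainSketch {

    // Illustrative only: mirrors the documented lookup order for the access key.
    // Steps 3-6 (STS web identity token, shared credentials/config files,
    // ECS container credentials, EC2 instance profile) are omitted.
    static String resolveAccessKey(Map<String, String> systemProps, Map<String, String> env) {
        if (systemProps.get("aws.accessKeyId") != null) {
            return systemProps.get("aws.accessKeyId"); // 1. Java system properties
        }
        if (env.get("AWS_ACCESS_KEY_ID") != null) {
            return env.get("AWS_ACCESS_KEY_ID");       // 2. Environment variables
        }
        return null; // would fall through to the remaining providers in the chain
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        Map<String, String> env = new HashMap<>();
        env.put("AWS_ACCESS_KEY_ID", "from-env");
        // No system property set: the environment variable is used
        System.out.println(resolveAccessKey(props, env)); // prints from-env
        props.put("aws.accessKeyId", "from-props");
        // System properties are checked first, so they take precedence
        System.out.println(resolveAccessKey(props, env)); // prints from-props
    }
}
```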
+ +You have also the possibility of using Profile Credentials Provider, by +specifying the useProfileCredentialsProvider option to true and +profileCredentialsName to the profile name. + +Only one of static, default and profile credentials could be used at the +same time. + +For more information about this you can look at [AWS credentials +documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html) + +## MQ Producer operations + +Camel-AWS MQ component provides the following operation on the producer +side: + +- listBrokers + +- createBroker + +- deleteBroker + +- rebootBroker + +- updateBroker + +- describeBroker + +# Examples + +## Producer Examples + +- listBrokers: this operation will list the available MQ Brokers in + AWS + + + + from("direct:listBrokers") + .to("aws2-mq://test?amazonMqClient=#amazonMqClient&operation=listBrokers") + +- createBroker: this operation will create an MQ Broker in AWS + + + + from("direct:createBroker") + .process(new Processor() { + @Override + public void process(Exchange exchange) throws Exception { + exchange.getIn().setHeader(MQ2Constants.BROKER_NAME, "test"); + exchange.getIn().setHeader(MQ2Constants.BROKER_DEPLOYMENT_MODE, DeploymentMode.SINGLE_INSTANCE); + exchange.getIn().setHeader(MQ2Constants.BROKER_INSTANCE_TYPE, "mq.t2.micro"); + exchange.getIn().setHeader(MQ2Constants.BROKER_ENGINE, EngineType.ACTIVEMQ.name()); + exchange.getIn().setHeader(MQ2Constants.BROKER_ENGINE_VERSION, "5.15.6"); + exchange.getIn().setHeader(MQ2Constants.BROKER_PUBLICLY_ACCESSIBLE, false); + List users = new ArrayList<>(); + User.Builder user = User.builder(); + user.username("camel"); + user.password("camelpwd"); + users.add(user.build()); + exchange.getIn().setHeader(MQ2Constants.BROKER_USERS, users); + + } + }) + .to("aws2-mq://test?amazonMqClient=#amazonMqClient&operation=createBroker") + +- deleteBroker: this operation will delete an MQ Broker in AWS + + + + from("direct:listBrokers") + 
.setHeader(MQ2Constants.BROKER_ID, constant("123")) + .to("aws2-mq://test?amazonMqClient=#amazonMqClient&operation=deleteBroker") + +- rebootBroker: this operation will reboot an MQ Broker in AWS + + + + from("direct:listBrokers") + .setHeader(MQ2Constants.BROKER_ID, constant("123")) + .to("aws2-mq://test?amazonMqClient=#amazonMqClient&operation=rebootBroker") + +# Using a POJO as body + +Sometimes building an AWS Request can be complex because of multiple +options. We introduce the possibility to use a POJO as the body. In AWS +MQ, there are multiple operations you can submit, as an example for List +brokers request, you can do something like: + + from("direct:aws2-mq") + .setBody(ListBrokersRequest.builder().maxResults(10).build()) + .to("aws2-mq://test?amazonMqClient=#amazonMqClient&operation=listBrokers&pojoRequest=true") + +In this way, you’ll pass the request directly without the need of +passing headers and options specifically related to this operation. + +# Dependencies + +Maven users will need to add the following dependency to their pom.xml. + +**pom.xml** + + + org.apache.camel + camel-aws2-mq + ${camel-version} + + +where `${camel-version}` must be replaced by the actual version of +Camel. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|configuration|Component configuration||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|operation|The operation to perform. 
It can be listBrokers, createBroker, deleteBroker, rebootBroker, updateBroker or describeBroker||object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|region|The region in which MQ client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|amazonMqClient|To use an existing configured AmazonMQClient client||object| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. 
You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|proxyHost|To define a proxy host when instantiating the MQ client||string| +|proxyPort|To define a proxy port when instantiating the MQ client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the MQ client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the MQ client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the MQ client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the MQ client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in MQ.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|label|Logical name||string| +|operation|The operation to perform. It can be listBrokers, createBroker, deleteBroker, rebootBroker, updateBroker or describeBroker||object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|region|The region in which MQ client needs to work. 
When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|amazonMqClient|To use an existing configured AmazonMQClient client||object| +|proxyHost|To define a proxy host when instantiating the MQ client||string| +|proxyPort|To define a proxy port when instantiating the MQ client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the MQ client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the MQ client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the MQ client should expect to load credentials through a profile credentials provider.|false|boolean| 
+|useSessionCredentials|Set whether the MQ client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in MQ.|false|boolean| diff --git a/camel-aws2-msk.md b/camel-aws2-msk.md new file mode 100644 index 0000000000000000000000000000000000000000..20121b21b64e70ad666850fcb57e8d9761ea132a --- /dev/null +++ b/camel-aws2-msk.md @@ -0,0 +1,199 @@ +# Aws2-msk + +**Since Camel 3.1** + +**Only producer is supported** + +The AWS2 MSK component supports create, run, start, stop and terminate +[AWS MSK](https://aws.amazon.com/msk/) instances. + +Prerequisites + +You must have a valid Amazon Web Services developer account, and be +signed up to use Amazon MSK. More information is available at [Amazon +MSK](https://aws.amazon.com/msk/). + +# URI Format + + aws2-msk://label[?options] + +You can append query options to the URI in the following format: + +`?options=value&option2=value&...` + +Required MSK component options + +You have to provide the amazonMskClient in the Registry or your +accessKey and secretKey to access the [Amazon +MSK](https://aws.amazon.com/msk/) service. + +# Usage + +## Static credentials vs Default Credential Provider + +You have the possibility of avoiding the usage of explicit static +credentials by specifying the useDefaultCredentialsProvider option and +set it to true. + +- Java system properties - `aws.accessKeyId` and `aws.secretKey`. + +- Environment variables - `AWS_ACCESS_KEY_ID` and + `AWS_SECRET_ACCESS_KEY`. + +- Web Identity Token from AWS STS. + +- The shared credentials and config files. + +- Amazon ECS container credentials - loaded from the Amazon ECS if the + environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is + set. + +- Amazon EC2 Instance profile credentials. 
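
For example, the default credentials chain can be enabled directly on an endpoint URI via the useDefaultCredentialsProvider option (documented in the tables below) instead of passing static keys; a hypothetical route:

```
from("direct:listClusters")
    .to("aws2-msk://test?operation=listClusters&useDefaultCredentialsProvider=true");
```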
+ +For more information about this you can look at [AWS credentials +documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html) + +## MSK Producer operations + +Camel-AWS MSK component provides the following operations on the producer +side: + +- listClusters + +- createCluster + +- deleteCluster + +- describeCluster + +# Examples + +## Producer Examples + +- listClusters: this operation will list the available MSK Clusters in + AWS + + + + from("direct:listClusters") + .to("aws2-msk://test?mskClient=#amazonMskClient&operation=listClusters") + +- createCluster: this operation will create an MSK Cluster in AWS + + + + from("direct:createCluster") + .process(new Processor() { + @Override + public void process(Exchange exchange) throws Exception { + exchange.getIn().setHeader(MSK2Constants.CLUSTER_NAME, "test-kafka"); + exchange.getIn().setHeader(MSK2Constants.CLUSTER_KAFKA_VERSION, "2.1.1"); + exchange.getIn().setHeader(MSK2Constants.BROKER_NODES_NUMBER, 2); + BrokerNodeGroupInfo groupInfo = BrokerNodeGroupInfo.builder().build(); + exchange.getIn().setHeader(MSK2Constants.BROKER_NODES_GROUP_INFO, groupInfo); + } + }) + .to("aws2-msk://test?mskClient=#amazonMskClient&operation=createCluster") + +- deleteCluster: this operation will delete an MSK Cluster in AWS + + + + from("direct:deleteCluster") + .setHeader(MSK2Constants.CLUSTER_ARN, constant("test-kafka")) + .to("aws2-msk://test?mskClient=#amazonMskClient&operation=deleteCluster") + + from("direct:createCluster") + .process(new Processor() { + @Override + public void process(Exchange exchange) throws Exception { + exchange.getIn().setHeader(MSK2Constants.CLUSTER_NAME, "test-kafka"); + exchange.getIn().setHeader(MSK2Constants.CLUSTER_KAFKA_VERSION, "2.1.1"); + exchange.getIn().setHeader(MSK2Constants.BROKER_NODES_NUMBER, 2); + BrokerNodeGroupInfo groupInfo = BrokerNodeGroupInfo.builder().build(); + exchange.getIn().setHeader(MSK2Constants.BROKER_NODES_GROUP_INFO, groupInfo); + }
+ }) + .to("aws2-msk://test?mskClient=#amazonMskClient&operation=deleteCluster") + +# Using a POJO as body + +Sometimes building an AWS Request can be complex because of multiple +options. We introduce the possibility to use a POJO as the body. In AWS +MSK, there are multiple operations you can submit, as an example for +List clusters request, you can do something like: + + from("direct:aws2-msk") + .setBody(ListClustersRequest.builder().maxResults(10).build()) + .to("aws2-msk://test?mskClient=#amazonMskClient&operation=listClusters&pojoRequest=true") + +In this way, you’ll pass the request directly without the need of +passing headers and options specifically related to this operation. + +# Dependencies + +Maven users will need to add the following dependency to their pom.xml. + +**pom.xml** + + + org.apache.camel + camel-aws2-msk + ${camel-version} + + +where `${camel-version}` must be replaced by the actual version of +Camel. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|configuration|Component configuration||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|operation|The operation to perform||object| +|overrideEndpoint|Set the need for overriding the endpoint. 
This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|region|The region in which the MSK client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|mskClient|To use an existing configured AWS MSK client||object| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. 
You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|proxyHost|To define a proxy host when instantiating the MSK client||string| +|proxyPort|To define a proxy port when instantiating the MSK client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the MSK client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the Kafka client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the MSK client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the MSK client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in MSK.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|label|Logical name||string| +|operation|The operation to perform||object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|region|The region in which the MSK client needs to work. 
When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|mskClient|To use an existing configured AWS MSK client||object| +|proxyHost|To define a proxy host when instantiating the MSK client||string| +|proxyPort|To define a proxy port when instantiating the MSK client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the MSK client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the Kafka client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the MSK client should expect to load credentials through a profile credentials provider.|false|boolean| 
+|useSessionCredentials|Set whether the MSK client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in MSK.|false|boolean| diff --git a/camel-aws2-redshift-data.md b/camel-aws2-redshift-data.md new file mode 100644 index 0000000000000000000000000000000000000000..547d700294bda443c0d8104aaed11070ff550450 --- /dev/null +++ b/camel-aws2-redshift-data.md @@ -0,0 +1,184 @@ +# Aws2-redshift-data + +**Since Camel 4.1** + +**Only producer is supported** + +The AWS2 Redshift Data component supports the following operations on +[AWS Redshift](https://aws.amazon.com/redshift/): + +- listDatabases, listSchemas, listStatements, listTables, + describeTable, executeStatement, batchExecuteStatement, + cancelStatement, describeStatement, getStatementResult + +Prerequisites + +You must have a valid Amazon Web Services developer account, and be +signed up to use Amazon Redshift. More information is available at [AWS +Redshift](https://aws.amazon.com/redshift/). + +# URI Format + + aws2-redshift-data://label[?options] + +You can append query options to the URI in the following format: + +`?options=value&option2=value&...` + +Required Redshift Data component options + +You have to provide the awsRedshiftDataClient in the Registry or your +accessKey and secretKey to access the [AWS +Redshift](https://aws.amazon.com/redshift/) service. + +# Usage + +## Static credentials, Default Credential Provider and Profile Credentials Provider + +You have the possibility of avoiding the usage of explicit static +credentials by specifying the useDefaultCredentialsProvider option and +set it to true. + +The order of evaluation for Default Credentials Provider is the +following: + +- Java system properties - `aws.accessKeyId` and `aws.secretKey` + +- Environment variables - `AWS_ACCESS_KEY_ID` and + `AWS_SECRET_ACCESS_KEY`. + +- Web Identity Token from AWS STS. + +- The shared credentials and config files. 
+ +- Amazon ECS container credentials - loaded from the Amazon ECS if the + environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is + set. + +- Amazon EC2 Instance profile credentials. + +You have also the possibility of using Profile Credentials Provider, by +specifying the useProfileCredentialsProvider option to true and +profileCredentialsName to the profile name. + +Only one of static, default and profile credentials could be used at the +same time. + +For more information about this you can look at [AWS credentials +documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html) + +## Redshift Producer operations + +Camel-AWS Redshift Data component provides the following operations on +the producer side: + +- listDatabases + +- listSchemas + +- listStatements + +- listTables + +- describeTable + +- executeStatement + +- batchExecuteStatement + +- cancelStatement + +- describeStatement + +- getStatementResult + +# Producer Examples + +- listDatabases: this operation will list redshift databases + + + + from("direct:listDatabases") + .to("aws2-redshift-data://test?awsRedshiftDataClient=#awsRedshiftDataClient&operation=listDatabases") + +# Using a POJO as body + +Sometimes building an AWS Request can be complex because of multiple +options. We introduce the possibility to use a POJO as body. In AWS +Redshift Data there are multiple operations you can submit, as an +example for List Databases request, you can do something like: + + from("direct:start") + .setBody(ListDatabasesRequest.builder().database("database1").build()) + .to("aws2-redshift-data://test?awsRedshiftDataClient=#awsRedshiftDataClient&operation=listDatabases&pojoRequest=true") + +In this way, you’ll pass the request directly without the need of passing +headers and options specifically related to this operation. + +# Dependencies + +Maven users will need to add the following dependency to their pom.xml. 
+ +**pom.xml** + + + org.apache.camel + camel-redshift-data + ${camel-version} + + +where `${camel-version}` must be replaced by the actual version of +Camel. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|configuration|Component configuration||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|operation|The operation to perform. It can be batchExecuteStatement, cancelStatement, describeStatement, describeTable, executeStatement, getStatementResult, listDatabases, listSchemas, listStatements or listTables||object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|region|The region in which RedshiftData client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|uriEndpointOverride|Set the overriding uri endpoint. 
This option needs to be used in combination with overrideEndpoint option||string| +|useDefaultCredentialsProvider|Set whether the RedshiftData client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the RedshiftData client should expect to load credentials through a profile credentials provider.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|awsRedshiftDataClient|To use an existing configured AwsRedshiftDataClient client||object| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|proxyHost|To define a proxy host when instantiating the RedshiftData client||string| +|proxyPort|To define a proxy port when instantiating the RedshiftData client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the RedshiftData client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|useSessionCredentials|Set whether the Redshift client should expect to use Session Credentials. 
This is useful in a situation in which the user needs to assume an IAM role for doing operations in Redshift.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|label|Logical name||string| +|operation|The operation to perform. It can be batchExecuteStatement, cancelStatement, describeStatement, describeTable, executeStatement, getStatementResult, listDatabases, listSchemas, listStatements or listTables||object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|region|The region in which RedshiftData client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|useDefaultCredentialsProvider|Set whether the RedshiftData client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the RedshiftData client should expect to load credentials through a profile credentials provider.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|awsRedshiftDataClient|To use an existing configured AwsRedshiftDataClient client||object| +|proxyHost|To define a proxy host when instantiating the RedshiftData client||string| +|proxyPort|To define a proxy port when instantiating the RedshiftData client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the RedshiftData client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|useSessionCredentials|Set whether the Redshift client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in Redshift.|false|boolean| diff --git a/camel-aws2-s3.md b/camel-aws2-s3.md new file mode 100644 index 0000000000000000000000000000000000000000..0c641b976396c4607635c373276aa2fba81c8ded --- /dev/null +++ b/camel-aws2-s3.md @@ -0,0 +1,644 @@ +# Aws2-s3 + +**Since Camel 3.2** + +**Both producer and consumer are supported** + +The AWS2 S3 component supports storing and retrieving objects from/to +[Amazon’s S3](https://aws.amazon.com/s3) service. + +Prerequisites + +You must have a valid Amazon Web Services developer account, and be +signed up to use Amazon S3. More information is available at [Amazon +S3](https://aws.amazon.com/s3). + +# URI Format + + aws2-s3://bucketNameOrArn[?options] + +The bucket will be created if it doesn’t already exist. 
+
+You can append query options to the URI in the following format:
+
+`?options=value&option2=value&...`
+
+**Required S3 component options**
+
+You have to provide the amazonS3Client in the Registry or your accessKey
+and secretKey to access [Amazon S3](https://aws.amazon.com/s3).
+
+# Batch Consumer
+
+This component implements the Batch Consumer.
+
+This allows you, for instance, to know how many messages exist in the
+batch and to let the Aggregator aggregate that number of messages.
+
+# Usage
+
+For example, to read the file `hello.txt` from the bucket `helloBucket`,
+use the following snippet:
+
+    from("aws2-s3://helloBucket?accessKey=yourAccessKey&secretKey=yourSecretKey&prefix=hello.txt")
+        .to("file:/var/downloaded");
+
+## S3 Producer operations
+
+The Camel-AWS2-S3 component provides the following operations on the
+producer side:
+
+- copyObject
+
+- deleteObject
+
+- listBuckets
+
+- deleteBucket
+
+- listObjects
+
+- getObject (this will return an S3Object instance)
+
+- getObjectRange (this will return an S3Object instance)
+
+- createDownloadLink
+
+If you don’t specify an operation explicitly, the producer will do:
+
+- a single file upload
+
+- a multipart upload if the multiPartUpload option is enabled
+
+## Advanced AmazonS3 configuration
+
+If your Camel application is running behind a firewall, or if you need
+more control over the `S3Client` instance configuration, you can create
+your own instance and refer to it in your Camel aws2-s3 component
+configuration:
+
+    from("aws2-s3://MyBucket?amazonS3Client=#client&delay=5000&maxMessagesPerPoll=5")
+        .to("mock:result");
+
+## Use KMS with the S3 component
+
+To use AWS KMS to encrypt/decrypt data by using AWS infrastructure, you
+can use the options introduced in 2.21.x, as in the following example:
+
+    from("file:tmp/test?fileName=test.txt")
+        .setHeader(AWS2S3Constants.KEY, constant("testFile"))
+        .to("aws2-s3://mybucket?amazonS3Client=#client&useAwsKMS=true&awsKMSKeyId=3f0637ad-296a-3dfe-a796-e60654fb128c");
+
+In this way, you’ll ask S3 to use the KMS key
+3f0637ad-296a-3dfe-a796-e60654fb128c to encrypt the file test.txt. When
+you ask to download this file, the decryption will be done directly
+before the download.
+
+## Static credentials, Default Credential Provider and Profile Credentials Provider
+
+You can avoid using explicit static credentials by setting the
+useDefaultCredentialsProvider option to true.
+
+The order of evaluation for the Default Credentials Provider is the
+following:
+
+- Java system properties - `aws.accessKeyId` and `aws.secretKey`.
+
+- Environment variables - `AWS_ACCESS_KEY_ID` and
+  `AWS_SECRET_ACCESS_KEY`.
+
+- Web Identity Token from AWS STS.
+
+- The shared credentials and config files.
+
+- Amazon ECS container credentials - loaded from Amazon ECS if the
+  environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is
+  set.
+
+- Amazon EC2 instance profile credentials.
+
+You can also use the Profile Credentials Provider by setting the
+useProfileCredentialsProvider option to true and profileCredentialsName
+to the profile name.
+
+Only one of static, default, and profile credentials can be used at the
+same time.
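As a sketch (the bucket name, key values, and the `dev` profile name are placeholders), the three credential modes map to endpoint options like this:

```java
// 1. Static credentials passed explicitly (placeholder values):
from("aws2-s3://my-bucket?accessKey=xxx&secretKey=yyy&region=eu-west-1")
    .to("mock:static");

// 2. Default Credentials Provider chain (system properties, env vars, ...):
from("aws2-s3://my-bucket?useDefaultCredentialsProvider=true&region=eu-west-1")
    .to("mock:defaultChain");

// 3. Profile Credentials Provider with a named profile:
from("aws2-s3://my-bucket?useProfileCredentialsProvider=true&profileCredentialsName=dev&region=eu-west-1")
    .to("mock:profile");
```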
+
+For more information, see the [AWS credentials
+documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html).
+
+## S3 Producer Operation examples
+
+- Single Upload: This operation will upload a file to S3 based on the
+  body content
+
+    from("direct:start").process(new Processor() {
+
+        @Override
+        public void process(Exchange exchange) throws Exception {
+            exchange.getIn().setHeader(AWS2S3Constants.KEY, "camel.txt");
+            exchange.getIn().setBody("Camel rocks!");
+        }
+    })
+    .to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client")
+    .to("mock:result");
+
+This operation will upload the file camel.txt with the content "Camel
+rocks!" to the *mycamelbucket* bucket.
+
+- Multipart Upload: This operation will perform a multipart upload of
+  a file to S3 based on the body content
+
+    from("direct:start").process(new Processor() {
+
+        @Override
+        public void process(Exchange exchange) throws Exception {
+            exchange.getIn().setHeader(AWS2S3Constants.KEY, "empty.txt");
+            exchange.getIn().setBody(new File("src/empty.txt"));
+        }
+    })
+    .to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&multiPartUpload=true&autoCreateBucket=true&partSize=1048576")
+    .to("mock:result");
+
+This operation will perform a multipart upload of the file empty.txt,
+based on the content of the file src/empty.txt, to the *mycamelbucket*
+bucket.
+
+- CopyObject: this operation copies an object from one bucket to a
+  different one
+
+    from("direct:start").process(new Processor() {
+
+        @Override
+        public void process(Exchange exchange) throws Exception {
+            exchange.getIn().setHeader(AWS2S3Constants.BUCKET_DESTINATION_NAME, "camelDestinationBucket");
+            exchange.getIn().setHeader(AWS2S3Constants.KEY, "camelKey");
+            exchange.getIn().setHeader(AWS2S3Constants.DESTINATION_KEY, "camelDestinationKey");
+        }
+    })
+    .to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&operation=copyObject")
+    .to("mock:result");
+
+This operation
will copy the object camelKey from the bucket
+*mycamelbucket* to the camelDestinationBucket bucket, storing it under
+the key camelDestinationKey.
+
+- DeleteObject: this operation deletes an object from a bucket
+
+    from("direct:start").process(new Processor() {
+
+        @Override
+        public void process(Exchange exchange) throws Exception {
+            exchange.getIn().setHeader(AWS2S3Constants.KEY, "camelKey");
+        }
+    })
+    .to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&operation=deleteObject")
+    .to("mock:result");
+
+This operation will delete the object camelKey from the bucket
+*mycamelbucket*.
+
+- ListBuckets: this operation lists the buckets for this account in
+  this region
+
+    from("direct:start")
+        .to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&operation=listBuckets")
+        .to("mock:result");
+
+This operation will list the buckets for this account.
+
+- DeleteBucket: this operation deletes the bucket specified as URI
+  parameter or header
+
+    from("direct:start")
+        .to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&operation=deleteBucket")
+        .to("mock:result");
+
+This operation will delete the bucket *mycamelbucket*.
+
+- ListObjects: this operation lists the objects in a specific bucket
+
+    from("direct:start")
+        .to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&operation=listObjects")
+        .to("mock:result");
+
+This operation will list the objects in the *mycamelbucket* bucket.
+
+- GetObject: this operation gets a single object from a specific bucket
+
+    from("direct:start").process(new Processor() {
+
+        @Override
+        public void process(Exchange exchange) throws Exception {
+            exchange.getIn().setHeader(AWS2S3Constants.KEY, "camelKey");
+        }
+    })
+    .to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&operation=getObject")
+    .to("mock:result");
+
+This operation will return an S3Object instance related to the camelKey
+object in the *mycamelbucket* bucket.
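As a follow-up sketch, the returned object body can be read via Camel's type conversion; the conversion to `String` below assumes the object content is text:

```java
// Hypothetical route: fetch "camelKey" and read its content as text.
// Converting the body to String consumes the underlying object stream.
from("direct:start")
    .setHeader(AWS2S3Constants.KEY, constant("camelKey"))
    .to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&operation=getObject")
    .process(exchange -> {
        String content = exchange.getMessage().getBody(String.class);
        exchange.getMessage().setBody(content);
    })
    .to("mock:result");
```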
+ +- GetObjectRange: this operation gets a single object range in a + specific bucket + + + + from("direct:start").process(new Processor() { + + @Override + public void process(Exchange exchange) throws Exception { + exchange.getIn().setHeader(AWS2S3Constants.KEY, "camelKey"); + exchange.getIn().setHeader(AWS2S3Constants.RANGE_START, "0"); + exchange.getIn().setHeader(AWS2S3Constants.RANGE_END, "9"); + } + }) + .to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&operation=getObjectRange") + .to("mock:result"); + +This operation will return an S3Object instance related to the camelKey +object in *mycamelbucket* bucket, containing the bytes from 0 to 9. + +- CreateDownloadLink: this operation will return a download link + through S3 Presigner + + + + from("direct:start").process(new Processor() { + + @Override + public void process(Exchange exchange) throws Exception { + exchange.getIn().setHeader(AWS2S3Constants.KEY, "camelKey"); + } + }) + .to("aws2-s3://mycamelbucket?accessKey=xxx&secretKey=yyy®ion=region&operation=createDownloadLink") + .to("mock:result"); + +This operation will return a download link url for the file camel-key in +the bucket *mycamelbucket* and region *region*. Parameters (`accessKey`, +`secretKey` and `region`) are mandatory for this operation, if S3 client +is autowired from the registry. + +If checksum validations are enabled, the url will no longer be browser +compatible because it adds a signed header that must be included in the +HTTP request. + +# Streaming Upload mode + +With the stream mode enabled, users will be able to upload data to S3 +without knowing ahead of time the dimension of the data, by leveraging +multipart upload. The upload will be completed when the batchSize has +been completed or the batchMessageNumber has been reached. There are two +possible naming strategies: progressive and random. 
With the progressive +strategy, each file will have the name composed by keyName option and a +progressive counter, and eventually the file extension (if any), while +with the random strategy a UUID will be added after keyName and +eventually the file extension will be appended. + +As an example: + + from(kafka("topic1").brokers("localhost:9092")) + .log("Kafka Message is: ${body}") + .to(aws2S3("camel-bucket").streamingUploadMode(true).batchMessageNumber(25).namingStrategy(AWS2S3EndpointBuilderFactory.AWSS3NamingStrategyEnum.progressive).keyName("{{kafkaTopic1}}/{{kafkaTopic1}}.txt")); + + from(kafka("topic2").brokers("localhost:9092")) + .log("Kafka Message is: ${body}") + .to(aws2S3("camel-bucket").streamingUploadMode(true).batchMessageNumber(25).namingStrategy(AWS2S3EndpointBuilderFactory.AWSS3NamingStrategyEnum.random).keyName("{{kafkaTopic2}}/{{kafkaTopic2}}.txt")); + +The default size for a batch is 1 Mb, but you can adjust it according to +your requirements. + +When you stop your producer route, the producer will take care of +flushing the remaining buffered message and complete the upload. + +In Streaming upload, you’ll be able to restart the producer from the +point where it left. It’s important to note that this feature is +critical only when using the progressive naming strategy. + +By setting the restartingPolicy to lastPart, you will restart uploading +files and contents from the last part number the producer left. + +As example: - Start the route with progressive naming strategy and +keyname equals to camel.txt, with batchMessageNumber equals to 20, and +restartingPolicy equals to lastPart - Send 70 messages. - Stop the +route - On your S3 bucket you should now see four files: camel.txt, +camel-1.txt, camel-2.txt and camel-3.txt, the first three will have 20 +messages, while the last one is only 10. 
- Restart the route - Send 25
+messages - Stop the route - You’ll now have two other files in your
+bucket: camel-5.txt and camel-6.txt, the first with 20 messages and the
+second with 5 messages.
+
+This won’t be needed when using the random naming strategy.
+
+Conversely, you can specify the override restartingPolicy. In that
+case, you’ll be able to overwrite whatever you wrote before (for that
+particular keyName) in your bucket.
+
+In streaming upload mode, the only keyName option that will be taken
+into account is the endpoint option. Using the header will throw an NPE,
+and this is by design: setting the header would potentially change
+the file name on each exchange, which is against the aim of the
+streaming upload producer. The keyName needs to be fixed and static. The
+selected naming strategy will do the rest of the work.
+
+Another possibility is specifying a streamingUploadTimeout together with
+the batchMessageNumber and batchSize options. With this option, the user
+will be able to complete the upload of a file after a certain time has
+passed. In this way, upload completion is governed by three
+tiers: the timeout, the number of messages, and the batch size.
+
+As an example:
+
+    from(kafka("topic1").brokers("localhost:9092"))
+        .log("Kafka Message is: ${body}")
+        .to(aws2S3("camel-bucket").streamingUploadMode(true).batchMessageNumber(25).streamingUploadTimeout(10000).namingStrategy(AWS2S3EndpointBuilderFactory.AWSS3NamingStrategyEnum.progressive).keyName("{{kafkaTopic1}}/{{kafkaTopic1}}.txt"));
+
+In this case, the upload will be completed after 10 seconds.
+
+# Bucket Auto-creation
+
+The `autoCreateBucket` option controls whether the S3 bucket bucketName
+is created automatically if it doesn’t exist. The default for this
+option is `false`. If set to false, any operation on a non-existent
+bucket in AWS won’t be successful and an error will be returned.
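The progressive-naming arithmetic used in the streaming-upload example above can be sketched in plain Java. This is a hypothetical helper, not part of the component; the counter placement mirrors the file names listed in that example (camel.txt, camel-1.txt, ...):

```java
import java.util.ArrayList;
import java.util.List;

public class ProgressiveNaming {

    // Names that the progressive strategy would yield for a given keyName,
    // total message count and batchMessageNumber (assumption: the first file
    // has no counter, later files insert "-<n>" before the extension).
    public static List<String> objectNames(String keyName, int messages, int batchMessageNumber) {
        int dot = keyName.lastIndexOf('.');
        String base = dot >= 0 ? keyName.substring(0, dot) : keyName;
        String ext = dot >= 0 ? keyName.substring(dot) : "";
        int files = (messages + batchMessageNumber - 1) / batchMessageNumber; // ceiling division
        List<String> names = new ArrayList<>();
        for (int i = 0; i < files; i++) {
            names.add(i == 0 ? base + ext : base + "-" + i + ext);
        }
        return names;
    }

    public static void main(String[] args) {
        // 70 messages with batchMessageNumber=20 -> 4 files, as in the example
        System.out.println(objectNames("camel.txt", 70, 20));
        // prints [camel.txt, camel-1.txt, camel-2.txt, camel-3.txt]
    }
}
```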
+
+# Moving objects from one bucket to another
+
+Some users like to consume objects from a bucket and move the content to
+a different one without using the copyObject feature of this component.
+If this is the case for you, remember to remove the bucketName header
+from the incoming exchange of the consumer, otherwise the file will
+always be overwritten in the same original bucket.
+
+# MoveAfterRead consumer option
+
+In addition to deleteAfterRead, another option has been added:
+moveAfterRead. With this option enabled, the consumed object will be
+moved to a target destinationBucket instead of being only deleted. This
+requires specifying the destinationBucket option. For example:
+
+    from("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&moveAfterRead=true&destinationBucket=myothercamelbucket")
+        .to("mock:result");
+
+In this case, the objects consumed will be moved to the
+*myothercamelbucket* bucket and deleted from the original one (because
+deleteAfterRead is set to true by default).
+
+You can also use a key prefix/suffix while moving the file to a
+different bucket. The options are destinationBucketPrefix and
+destinationBucketSuffix.
+
+Taking the above example, you could do something like:
+
+    from("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&moveAfterRead=true&destinationBucket=myothercamelbucket&destinationBucketPrefix=RAW(pre-)&destinationBucketSuffix=RAW(-suff)")
+        .to("mock:result");
+
+In this case, the objects consumed will be moved to the
+*myothercamelbucket* bucket and deleted from the original one (because
+deleteAfterRead is set to true by default).
+
+So if the file name is test, in *myothercamelbucket* you should see a
+file called pre-test-suff.
+
+# Using customer key as encryption
+
+We also introduced customer key support (an alternative to using KMS).
+The following code shows an example.
+
+    String key = UUID.randomUUID().toString();
+    byte[] secretKey = generateSecretKey();
+    String b64Key = Base64.getEncoder().encodeToString(secretKey);
+    String b64KeyMd5 = Md5Utils.md5AsBase64(secretKey);
+
+    String awsEndpoint = "aws2-s3://mycamel?autoCreateBucket=false&useCustomerKey=true&customerKeyId=RAW(" + b64Key + ")&customerKeyMD5=RAW(" + b64KeyMd5 + ")&customerAlgorithm=" + AES256.name();
+
+    from("direct:putObject")
+        .setHeader(AWS2S3Constants.KEY, constant("test.txt"))
+        .setBody(constant("Test"))
+        .to(awsEndpoint);
+
+# Using a POJO as body
+
+Sometimes building an AWS request can be complex because of multiple
+options. We introduced the possibility to use a POJO as the body. There
+are multiple operations you can submit to AWS S3; for example, for a
+ListObjects request, you can do something like:
+
+    from("direct:aws2-s3")
+        .setBody(ListObjectsRequest.builder().bucket(bucketName).build())
+        .to("aws2-s3://test?amazonS3Client=#amazonS3Client&operation=listObjects&pojoRequest=true")
+
+# Create S3 client and add component to registry
+
+Sometimes you would want to perform some advanced configuration using
+AWS2S3Configuration, which also allows you to set the S3 client.
You can +create and set the S3 client in the component configuration as shown in +the following example + + String awsBucketAccessKey = "your_access_key"; + String awsBucketSecretKey = "your_secret_key"; + + S3Client s3Client = S3Client.builder().credentialsProvider(StaticCredentialsProvider.create(AwsBasicCredentials.create(awsBucketAccessKey, awsBucketSecretKey))) + .region(Region.US_EAST_1).build(); + + AWS2S3Configuration configuration = new AWS2S3Configuration(); + configuration.setAmazonS3Client(s3Client); + configuration.setAutoDiscoverClient(true); + configuration.setBucketName("s3bucket2020"); + configuration.setRegion("us-east-1"); + +Now you can configure the S3 component (using the configuration object +created above) and add it to the registry in the configure method before +initialization of routes. + + AWS2S3Component s3Component = new AWS2S3Component(getContext()); + s3Component.setConfiguration(configuration); + s3Component.setLazyStartProducer(true); + camelContext.addComponent("aws2-s3", s3Component); + +Now your component will be used for all the operations implemented in +camel routes. + +# Dependencies + +Maven users will need to add the following dependency to their pom.xml. + +**pom.xml** + + + org.apache.camel + camel-aws2-s3 + ${camel-version} + + +where `${camel-version}` must be replaced by the actual version of +Camel. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|autoCreateBucket|Setting the autocreation of the S3 bucket bucketName. 
This will apply also in case of moveAfterRead option enabled, and it will create the destinationBucket if it doesn't exist already.|false|boolean| +|configuration|The component configuration||object| +|delimiter|The delimiter which is used in the com.amazonaws.services.s3.model.ListObjectsRequest to only consume objects we are interested in.||string| +|forcePathStyle|Set whether the S3 client should use path-style URL instead of virtual-hosted-style|false|boolean| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|policy|The policy for this queue to set in the com.amazonaws.services.s3.AmazonS3#setBucketPolicy() method.||string| +|prefix|The prefix which is used in the com.amazonaws.services.s3.model.ListObjectsRequest to only consume objects we are interested in.||string| +|region|The region in which the S3 client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|customerAlgorithm|Define the customer algorithm to use in case CustomerKey is enabled||string| +|customerKeyId|Define the id of the Customer key to use in case CustomerKey is enabled||string| +|customerKeyMD5|Define the MD5 of Customer key to use in case CustomerKey is enabled||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|deleteAfterRead|Delete objects from S3 after they have been retrieved. The deleting is only performed if the Exchange is committed. If a rollback occurs, the object is not deleted. If this option is false, then the same objects will be retrieved over and over again in the polls. Therefore, you need to use the Idempotent Consumer EIP in the route to filter out duplicates. You can filter using the AWS2S3Constants#BUCKET\_NAME and AWS2S3Constants#KEY headers, or only the AWS2S3Constants#KEY header.|true|boolean| +|destinationBucket|Define the destination bucket where an object must be moved when moveAfterRead is set to true.||string| +|destinationBucketPrefix|Define the destination bucket prefix to use when an object must be moved, and moveAfterRead is set to true.||string| +|destinationBucketSuffix|Define the destination bucket suffix to use when an object must be moved, and moveAfterRead is set to true.||string| +|doneFileName|If provided, Camel will only consume files if a done file exists.||string| +|fileName|To get the object from the bucket with the given file name||string| +|ignoreBody|If it is true, the S3 Object Body will be ignored completely if it is set to false, the S3 Object will be put in the body. Setting this to true will override any behavior defined by includeBody option.|false|boolean| +|includeBody|If it is true, the S3Object exchange will be consumed and put into the body and closed. 
If false, the S3Object stream will be put raw into the body and the headers will be set with the S3 object metadata. This option is strongly related to the autocloseBody option. In case of setting includeBody to true because the S3Object stream will be consumed then it will also be closed, while in case of includeBody false then it will be up to the caller to close the S3Object stream. However, setting autocloseBody to true when includeBody is false it will schedule to close the S3Object stream automatically on exchange completion.|true|boolean| +|includeFolders|If it is true, the folders/directories will be consumed. If it is false, they will be ignored, and Exchanges will not be created for those|true|boolean| +|moveAfterRead|Move objects from S3 bucket to a different bucket after they have been retrieved. To accomplish the operation, the destinationBucket option must be set. The copy bucket operation is only performed if the Exchange is committed. If a rollback occurs, the object is not moved.|false|boolean| +|autocloseBody|If this option is true and includeBody is false, then the S3Object.close() method will be called on exchange completion. This option is strongly related to includeBody option. In case of setting includeBody to false and autocloseBody to false, it will be up to the caller to close the S3Object stream. Setting autocloseBody to true, will close the S3Object stream automatically.|true|boolean| +|batchMessageNumber|The number of messages composing a batch in streaming upload mode|10|integer| +|batchSize|The batch size (in bytes) in streaming upload mode|1000000|integer| +|bufferSize|The buffer size (in bytes) in streaming upload mode|1000000|integer| +|deleteAfterWrite|Delete file object after the S3 file has been uploaded|false|boolean| +|keyName|Setting the key name for an element in the bucket through endpoint parameter||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|multiPartUpload|If it is true, camel will upload the file with multipart format. The part size is decided by the partSize option. Camel will only do multipart uploads for files that are larger than the part-size thresholds. Files that are smaller will be uploaded in a single operation.|false|boolean| +|namingStrategy|The naming strategy to use in streaming upload mode|progressive|object| +|operation|The operation to do in case the user don't want to do only an upload||object| +|partSize|Set up the partSize which is used in multipart upload, the default size is 25M. Camel will only do multipart uploads for files that are larger than the part-size thresholds. 
Files that are smaller will be uploaded in a single operation.|26214400|integer| +|restartingPolicy|The restarting policy to use in streaming upload mode|override|object| +|storageClass|The storage class to set in the com.amazonaws.services.s3.model.PutObjectRequest request.||string| +|streamingUploadMode|When stream mode is true, the upload to bucket will be done in streaming|false|boolean| +|streamingUploadTimeout|While streaming upload mode is true, this option set the timeout to complete upload||integer| +|awsKMSKeyId|Define the id of KMS key to use in case KMS is enabled||string| +|useAwsKMS|Define if KMS must be used or not|false|boolean| +|useCustomerKey|Define if Customer Key must be used or not|false|boolean| +|useSSES3|Define if SSE S3 must be used or not|false|boolean| +|amazonS3Client|Reference to a com.amazonaws.services.s3.AmazonS3 in the registry.||object| +|amazonS3Presigner|An S3 Presigner for Request, used mainly in createDownloadLink operation||object| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. 
You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean|
+|proxyHost|To define a proxy host when instantiating the S3 client||string|
+|proxyPort|To define a proxy port when instantiating the S3 client||integer|
+|proxyProtocol|To define a proxy protocol when instantiating the S3 client|HTTPS|object|
+|accessKey|Amazon AWS Access Key||string|
+|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string|
+|secretKey|Amazon AWS Secret Key||string|
+|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string|
+|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean|
+|useDefaultCredentialsProvider|Set whether the S3 client should expect to load credentials through a default credentials provider.|false|boolean|
+|useProfileCredentialsProvider|Set whether the S3 client should expect to load credentials through a profile credentials provider.|false|boolean|
+|useSessionCredentials|Set whether the S3 client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in S3.|false|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bucketNameOrArn|Bucket name or ARN||string|
+|autoCreateBucket|Setting the autocreation of the S3 bucket bucketName. This will apply also in case of moveAfterRead option enabled, and it will create the destinationBucket if it doesn't exist already.|false|boolean|
+|delimiter|The delimiter which is used in the com.amazonaws.services.s3.model.ListObjectsRequest to only consume objects we are interested in.||string|
+|forcePathStyle|Set whether the S3 client should use path-style URL instead of virtual-hosted-style|false|boolean|
+|overrideEndpoint|Set the need for overriding the endpoint.
This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|policy|The policy for this queue to set in the com.amazonaws.services.s3.AmazonS3#setBucketPolicy() method.||string| +|prefix|The prefix which is used in the com.amazonaws.services.s3.model.ListObjectsRequest to only consume objects we are interested in.||string| +|region|The region in which the S3 client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|customerAlgorithm|Define the customer algorithm to use in case CustomerKey is enabled||string| +|customerKeyId|Define the id of the Customer key to use in case CustomerKey is enabled||string| +|customerKeyMD5|Define the MD5 of Customer key to use in case CustomerKey is enabled||string| +|deleteAfterRead|Delete objects from S3 after they have been retrieved. The deleting is only performed if the Exchange is committed. If a rollback occurs, the object is not deleted. If this option is false, then the same objects will be retrieved over and over again in the polls. Therefore, you need to use the Idempotent Consumer EIP in the route to filter out duplicates. 
You can filter using the AWS2S3Constants#BUCKET\_NAME and AWS2S3Constants#KEY headers, or only the AWS2S3Constants#KEY header.|true|boolean| +|destinationBucket|Define the destination bucket where an object must be moved when moveAfterRead is set to true.||string| +|destinationBucketPrefix|Define the destination bucket prefix to use when an object must be moved, and moveAfterRead is set to true.||string| +|destinationBucketSuffix|Define the destination bucket suffix to use when an object must be moved, and moveAfterRead is set to true.||string| +|doneFileName|If provided, Camel will only consume files if a done file exists.||string| +|fileName|To get the object from the bucket with the given file name||string| +|ignoreBody|If it is true, the S3 Object Body will be ignored completely if it is set to false, the S3 Object will be put in the body. Setting this to true will override any behavior defined by includeBody option.|false|boolean| +|includeBody|If it is true, the S3Object exchange will be consumed and put into the body and closed. If false, the S3Object stream will be put raw into the body and the headers will be set with the S3 object metadata. This option is strongly related to the autocloseBody option. In case of setting includeBody to true because the S3Object stream will be consumed then it will also be closed, while in case of includeBody false then it will be up to the caller to close the S3Object stream. However, setting autocloseBody to true when includeBody is false it will schedule to close the S3Object stream automatically on exchange completion.|true|boolean| +|includeFolders|If it is true, the folders/directories will be consumed. If it is false, they will be ignored, and Exchanges will not be created for those|true|boolean| +|maxConnections|Set the maxConnections parameter in the S3 client configuration|60|integer| +|maxMessagesPerPoll|Gets the maximum number of messages as a limit to poll at each polling. 
The default value is 10. Use 0 or a negative number to set it as unlimited.|10|integer|
+|moveAfterRead|Move objects from S3 bucket to a different bucket after they have been retrieved. To accomplish the operation, the destinationBucket option must be set. The copy bucket operation is only performed if the Exchange is committed. If a rollback occurs, the object is not moved.|false|boolean|
+|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean|
+|autocloseBody|If this option is true and includeBody is false, then the S3Object.close() method will be called on exchange completion. This option is strongly related to the includeBody option. If includeBody is set to false and autocloseBody to false, it will be up to the caller to close the S3Object stream. Setting autocloseBody to true will close the S3Object stream automatically.|true|boolean|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use.
By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|inProgressRepository|A pluggable in-progress repository org.apache.camel.spi.IdempotentRepository. The in-progress repository is used to keep track of the files currently being consumed. By default a memory based repository is used.||object|
+|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and routed in Camel.||object|
+|batchMessageNumber|The number of messages composing a batch in streaming upload mode|10|integer|
+|batchSize|The batch size (in bytes) in streaming upload mode|1000000|integer|
+|bufferSize|The buffer size (in bytes) in streaming upload mode|1000000|integer|
+|deleteAfterWrite|Delete the file object after the S3 file has been uploaded|false|boolean|
+|keyName|Setting the key name for an element in the bucket through the endpoint parameter||string|
+|multiPartUpload|If it is true, Camel will upload the file with multipart format. The part size is decided by the partSize option. Camel will only do multipart uploads for files that are larger than the part-size threshold. Files that are smaller will be uploaded in a single operation.|false|boolean|
+|namingStrategy|The naming strategy to use in streaming upload mode|progressive|object|
+|operation|The operation to do in case the user doesn't want to do only an upload||object|
+|partSize|Set up the partSize which is used in multipart upload; the default size is 25M. Camel will only do multipart uploads for files that are larger than the part-size threshold.
Files that are smaller will be uploaded in a single operation.|26214400|integer|
+|restartingPolicy|The restarting policy to use in streaming upload mode|override|object|
+|storageClass|The storage class to set in the com.amazonaws.services.s3.model.PutObjectRequest request.||string|
+|streamingUploadMode|When stream mode is true, the upload to the bucket will be done in streaming mode|false|boolean|
+|streamingUploadTimeout|While streaming upload mode is true, this option sets the timeout to complete the upload||integer|
+|awsKMSKeyId|Define the id of the KMS key to use in case KMS is enabled||string|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|useAwsKMS|Define if KMS must be used or not|false|boolean|
+|useCustomerKey|Define if Customer Key must be used or not|false|boolean|
+|useSSES3|Define if SSE S3 must be used or not|false|boolean|
+|amazonS3Client|Reference to a com.amazonaws.services.s3.AmazonS3 in the registry.||object|
+|amazonS3Presigner|An S3 Presigner for Request, used mainly in the createDownloadLink operation||object|
+|proxyHost|To define a proxy host when instantiating the S3 client||string|
+|proxyPort|Specify a proxy port to be used inside the client definition.||integer|
+|proxyProtocol|To define a proxy protocol when instantiating the S3 client|HTTPS|object|
+|backoffErrorThreshold|The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in.||integer|
+|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier should kick in.||integer|
+|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt happens again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer|
+|delay|Milliseconds before the next poll.|500|integer|
+|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean|
+|initialDelay|Milliseconds before the first poll starts.|1000|integer|
+|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times.
A value of zero or negative means fire forever.|0|integer|
+|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object|
+|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object|
+|scheduler|To use a cron scheduler from either the camel-spring or camel-quartz component. Use the value spring or quartz for the built-in scheduler|none|object|
+|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz or Spring based schedulers.||object|
+|startScheduler|Whether the scheduler should be auto started.|true|boolean|
+|timeUnit|Time unit for the initialDelay and delay options.|MILLISECONDS|object|
+|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in the JDK for details.|true|boolean|
+|accessKey|Amazon AWS Access Key||string|
+|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string|
+|secretKey|Amazon AWS Secret Key||string|
+|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string|
+|trustAllCertificates|Whether to trust all certificates in case of overriding the endpoint|false|boolean|
+|useDefaultCredentialsProvider|Set whether the S3 client should expect to load credentials through a default credentials provider.|false|boolean|
+|useProfileCredentialsProvider|Set whether the S3 client should expect to load credentials through a profile credentials provider.|false|boolean|
+|useSessionCredentials|Set whether the S3 client should expect to use Session Credentials.
This is useful in a situation in which the user needs to assume an IAM role for doing operations in S3.|false|boolean| diff --git a/camel-aws2-ses.md b/camel-aws2-ses.md new file mode 100644 index 0000000000000000000000000000000000000000..72616657a30e6f629c4af977698f6f49344d25d5 --- /dev/null +++ b/camel-aws2-ses.md @@ -0,0 +1,162 @@ +# Aws2-ses + +**Since Camel 3.1** + +**Only producer is supported** + +The AWS2 SES component supports sending emails with [Amazon’s +SES](https://aws.amazon.com/ses) service. + +Prerequisites + +You must have a valid Amazon Web Services developer account, and be +signed up to use Amazon SES. More information is available at [Amazon +SES](https://aws.amazon.com/ses). + +# URI Format + + aws2-ses://from[?options] + +You can append query options to the URI in the following format: + +`?options=value&option2=value&...` + +Required SES component options + +You have to provide the amazonSESClient in the Registry or your +accessKey and secretKey to access the [Amazon’s +SES](https://aws.amazon.com/ses). + +# Usage + +## Static credentials, Default Credential Provider and Profile Credentials Provider + +You have the possibility of avoiding the usage of explicit static +credentials by specifying the useDefaultCredentialsProvider option and +set it to true. + +The order of evaluation for Default Credentials Provider is the +following: + +- Java system properties - `aws.accessKeyId` and `aws.secretKey`. + +- Environment variables - `AWS_ACCESS_KEY_ID` and + `AWS_SECRET_ACCESS_KEY`. + +- Web Identity Token from AWS STS. + +- The shared credentials and config files. + +- Amazon ECS container credentials - loaded from the Amazon ECS if the + environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is + set. + +- Amazon EC2 Instance profile credentials. + +You have also the possibility of using Profile Credentials Provider, by +specifying the useProfileCredentialsProvider option to true and +profileCredentialsName to the profile name. 
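+
+As an illustrative sketch (not from the official examples; the route
+endpoint, from address, and the profile name `my-profile` are
+placeholders), the profile-based setup described above could be used
+from a route like this:
+
+    from("direct:sendMail")
+        .to("aws2-ses://from@example.com?useProfileCredentialsProvider=true&profileCredentialsName=my-profile");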
+
+Only one of static, default and profile credentials can be used at the
+same time.
+
+For more information about this you can look at the [AWS credentials
+documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html)
+
+## Advanced SesClient configuration
+
+If you need more control over the `SesClient` instance configuration you
+can create your own instance and refer to it from the URI:
+
+    from("direct:start")
+        .to("aws2-ses://example@example.com?amazonSESClient=#client");
+
+The `#client` refers to a `SesClient` in the Registry.
+
+# Examples
+
+## Producer Examples
+
+    from("direct:start")
+        .setHeader(SesConstants.SUBJECT, constant("This is my subject"))
+        .setHeader(SesConstants.TO, constant(Collections.singletonList("to@example.com")))
+        .setBody(constant("This is my message text."))
+        .to("aws2-ses://from@example.com?accessKey=xxx&secretKey=yyy");
+
+# Dependencies
+
+Maven users will need to add the following dependency to their pom.xml.
+
+**pom.xml**
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-aws2-ses</artifactId>
+        <version>${camel-version}</version>
+    </dependency>
+
+where `${camel-version}` must be replaced by the actual version of
+Camel.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bcc|List of comma-separated destination blind carbon copy (bcc) email addresses. Can be overridden with the 'CamelAwsSesBcc' header.||string|
+|cc|List of comma-separated destination carbon copy (cc) email addresses. Can be overridden with the 'CamelAwsSesCc' header.||string|
+|configuration|Component configuration||object|
+|configurationSet|Set the configuration set to send with every request. Override it with the 'CamelAwsSesConfigurationSet' header.||string|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started.
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|region|The region in which SES client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|replyToAddresses|List of comma separated reply-to email address(es) for the message, override it using 'CamelAwsSesReplyToAddresses' header.||string| +|returnPath|The email address to which bounce notifications are to be forwarded, override it using 'CamelAwsSesReturnPath' header.||string| +|subject|The subject which is used if the message header 'CamelAwsSesSubject' is not present.||string| +|to|List of comma separated destination email address. Can be overridden with 'CamelAwsSesTo' header.||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|amazonSESClient|To use the AmazonSimpleEmailService as the client||object| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|proxyHost|To define a proxy host when instantiating the SES client||string| +|proxyPort|To define a proxy port when instantiating the SES client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the SES client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the Ses client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the SES client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the SES client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in SES.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|from|The sender's email address.||string| +|bcc|List of comma-separated destination blind carbon copy (bcc) email address. 
Can be overridden with 'CamelAwsSesBcc' header.||string| +|cc|List of comma-separated destination carbon copy (cc) email address. Can be overridden with 'CamelAwsSesCc' header.||string| +|configurationSet|Set the configuration set to send with every request. Override it with 'CamelAwsSesConfigurationSet' header.||string| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|region|The region in which SES client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|replyToAddresses|List of comma separated reply-to email address(es) for the message, override it using 'CamelAwsSesReplyToAddresses' header.||string| +|returnPath|The email address to which bounce notifications are to be forwarded, override it using 'CamelAwsSesReturnPath' header.||string| +|subject|The subject which is used if the message header 'CamelAwsSesSubject' is not present.||string| +|to|List of comma separated destination email address. Can be overridden with 'CamelAwsSesTo' header.||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|amazonSESClient|To use the AmazonSimpleEmailService as the client||object| +|proxyHost|To define a proxy host when instantiating the SES client||string| +|proxyPort|To define a proxy port when instantiating the SES client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the SES client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the Ses client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the SES client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the SES client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in SES.|false|boolean| diff --git a/camel-aws2-sns.md b/camel-aws2-sns.md new file mode 100644 index 0000000000000000000000000000000000000000..5f4126d3047870be6437b003e5f1ac783dcd68a3 --- /dev/null +++ b/camel-aws2-sns.md @@ -0,0 +1,253 @@ +# Aws2-sns + +**Since Camel 3.1** + +**Only producer is supported** + +The AWS2 SNS component allows messages to be sent to an [Amazon Simple +Notification](https://aws.amazon.com/sns) Topic. The implementation of +the Amazon API is provided by the [AWS +SDK](https://aws.amazon.com/sdkforjava/). 
+
+Prerequisites
+
+You must have a valid Amazon Web Services developer account, and be
+signed up to use Amazon SNS. More information is available at [Amazon
+SNS](https://aws.amazon.com/sns).
+
+# URI Format
+
+    aws2-sns://topicNameOrArn[?options]
+
+The topic will be created if it doesn't already exist.
+
+You can append query options to the URI in the following format:
+
+`?options=value&option2=value&...`
+
+# URI Options
+
+Required SNS component options
+
+You have to provide the amazonSNSClient in the Registry or your
+accessKey and secretKey to access [Amazon's
+SNS](https://aws.amazon.com/sns).
+
+# Usage
+
+## Static credentials, Default Credential Provider and Profile Credentials Provider
+
+You can avoid the use of explicit static credentials by specifying the
+useDefaultCredentialsProvider option and setting it to true.
+
+The order of evaluation for the Default Credentials Provider is the
+following:
+
+- Java system properties - `aws.accessKeyId` and `aws.secretKey`.
+
+- Environment variables - `AWS_ACCESS_KEY_ID` and
+  `AWS_SECRET_ACCESS_KEY`.
+
+- Web Identity Token from AWS STS.
+
+- The shared credentials and config files.
+
+- Amazon ECS container credentials - loaded from Amazon ECS if the
+  environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is
+  set.
+
+- Amazon EC2 Instance profile credentials.
+
+You also have the possibility of using the Profile Credentials Provider,
+by specifying the useProfileCredentialsProvider option to true and
+profileCredentialsName to the profile name.
+
+Only one of static, default and profile credentials can be used at the
+same time.
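+
+As a sketch of the default-credentials setup described above (the topic
+name is a placeholder, not from the official examples), an endpoint
+could be configured like this:
+
+    from("direct:start")
+        .to("aws2-sns://myTopic?useDefaultCredentialsProvider=true");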
+
+For more information about this you can look at the [AWS credentials
+documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html)
+
+## Advanced AmazonSNS configuration
+
+If you need more control over the `SnsClient` instance configuration you
+can create your own instance and refer to it from the URI:
+
+    from("direct:start")
+        .to("aws2-sns://MyTopic?amazonSNSClient=#client");
+
+The `#client` refers to an `SnsClient` in the Registry.
+
+## Create a subscription between an AWS SNS Topic and an AWS SQS Queue
+
+You can create a subscription of an SQS Queue to an SNS Topic in this
+way:
+
+    from("direct:start")
+        .to("aws2-sns://test-camel-sns1?amazonSNSClient=#amazonSNSClient&subscribeSNStoSQS=true&queueArn=arn:aws:sqs:eu-central-1:123456789012:test_camel");
+
+The `#amazonSNSClient` refers to a `SnsClient` in the Registry. By
+specifying `subscribeSNStoSQS` to true and a `queueArn` of an existing
+SQS Queue, you'll be able to subscribe your SQS Queue to your SNS Topic.
+
+At this point, you can consume messages coming from the SNS Topic
+through your SQS Queue:
+
+    from("aws2-sqs://test-camel?amazonSQSClient=#amazonSQSClient&delay=50&maxMessagesPerPoll=5")
+        .to(...);
+
+# Topic Autocreation
+
+The option `autoCreateTopic` controls the automatic creation of an SNS
+Topic in case it doesn't exist. The default for this option is `false`;
+if it is not enabled, any operation on a non-existent topic in AWS won't
+be successful and an error will be returned.
+
+# SNS FIFO
+
+SNS FIFO topics are supported. When creating the SQS queue that you will
+subscribe to the SNS topic, there is an important point to remember:
+you'll need to make it possible for the SNS Topic to send messages to
+the SQS Queue.
+
+This is clearer with an example.
+
+Suppose you created an SNS FIFO Topic called Order.fifo and an SQS Queue
+called QueueSub.fifo.
+
+In the access Policy of the QueueSub.fifo you should submit something
+like this:
+
+    {
+      "Version": "2008-10-17",
+      "Id": "__default_policy_ID",
+      "Statement": [
+        {
+          "Sid": "__owner_statement",
+          "Effect": "Allow",
+          "Principal": {
+            "AWS": "arn:aws:iam::780560123482:root"
+          },
+          "Action": "SQS:*",
+          "Resource": "arn:aws:sqs:eu-west-1:780560123482:QueueSub.fifo"
+        },
+        {
+          "Effect": "Allow",
+          "Principal": {
+            "Service": "sns.amazonaws.com"
+          },
+          "Action": "SQS:SendMessage",
+          "Resource": "arn:aws:sqs:eu-west-1:780560123482:QueueSub.fifo",
+          "Condition": {
+            "ArnLike": {
+              "aws:SourceArn": "arn:aws:sns:eu-west-1:780410022472:Order.fifo"
+            }
+          }
+        }
+      ]
+    }
+
+This is a critical step to make the subscription work correctly.
+
+## SNS Fifo Topic Message group ID Strategy and message Deduplication ID Strategy
+
+When sending something to the FIFO topic, you'll always need to set up a
+message group ID strategy.
+
+If content-based message deduplication has been enabled on the SNS Fifo
+topic, there won't be any need to set a message deduplication id
+strategy; otherwise, you'll have to set it.
+
+# Examples
+
+## Producer Examples
+
+Sending to a topic:
+
+    from("direct:start")
+        .to("aws2-sns://camel-topic?subject=The+subject+message&autoCreateTopic=true");
+
+# Dependencies
+
+Maven users will need to add the following dependency to their pom.xml.
+
+**pom.xml**
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-aws2-sns</artifactId>
+        <version>${camel-version}</version>
+    </dependency>
+
+where `${camel-version}` must be replaced by the actual version of
+Camel.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|autoCreateTopic|Setting the auto-creation of the topic|false|boolean|
+|configuration|Component configuration||object|
+|kmsMasterKeyId|The ID of an AWS-managed customer master key (CMK) for Amazon SNS or a custom CMK.||string|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message).
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|messageDeduplicationIdStrategy|Only for FIFO Topic. Strategy for setting the messageDeduplicationId on the message. It can be one of the following options: useExchangeId, useContentBasedDeduplication. For the useContentBasedDeduplication option, no messageDeduplicationId will be set on the message.|useExchangeId|string| +|messageGroupIdStrategy|Only for FIFO Topic. Strategy for setting the messageGroupId on the message. It can be one of the following options: useConstant, useExchangeId, usePropertyValue. For the usePropertyValue option, the value of property CamelAwsSnsMessageGroupId will be used.||string| +|messageStructure|The message structure to use such as json||string| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|policy|The policy for this topic. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems.||string| +|queueArn|The ARN endpoint to subscribe to||string| +|region|The region in which the SNS client needs to work. 
When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|serverSideEncryptionEnabled|Define if Server Side Encryption is enabled or not on the topic|false|boolean| +|subject|The subject which is used if the message header 'CamelAwsSnsSubject' is not present.||string| +|subscribeSNStoSQS|Define if the subscription between SNS Topic and SQS must be done or not|false|boolean| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|amazonSNSClient|To use the AmazonSNS as the client||object| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. 
You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|proxyHost|To define a proxy host when instantiating the SNS client||string| +|proxyPort|To define a proxy port when instantiating the SNS client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the SNS client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the SNS client should expect to load credentials on an AWS infra instance or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the SNS client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the SNS client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in SNS.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|topicNameOrArn|Topic name or ARN||string| +|autoCreateTopic|Setting the auto-creation of the topic|false|boolean| +|headerFilterStrategy|To use a custom HeaderFilterStrategy to map headers to/from Camel.||object| +|kmsMasterKeyId|The ID of an AWS-managed customer master key (CMK) for Amazon SNS or a custom CMK.||string| +|messageDeduplicationIdStrategy|Only for FIFO Topic. Strategy for setting the messageDeduplicationId on the message. It can be one of the following options: useExchangeId, useContentBasedDeduplication. 
For the useContentBasedDeduplication option, no messageDeduplicationId will be set on the message.|useExchangeId|string| +|messageGroupIdStrategy|Only for FIFO Topic. Strategy for setting the messageGroupId on the message. It can be one of the following options: useConstant, useExchangeId, usePropertyValue. For the usePropertyValue option, the value of property CamelAwsSnsMessageGroupId will be used.||string| +|messageStructure|The message structure to use such as json||string| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|policy|The policy for this topic. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems.||string| +|queueArn|The ARN endpoint to subscribe to||string| +|region|The region in which the SNS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|serverSideEncryptionEnabled|Define if Server Side Encryption is enabled or not on the topic|false|boolean| +|subject|The subject which is used if the message header 'CamelAwsSnsSubject' is not present.||string| +|subscribeSNStoSQS|Define if the subscription between SNS Topic and SQS must be done or not|false|boolean| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|amazonSNSClient|To use the AmazonSNS as the client||object| +|proxyHost|To define a proxy host when instantiating the SNS client||string| +|proxyPort|To define a proxy port when instantiating the SNS client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the SNS client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the SNS client should expect to load credentials on an AWS infra instance or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the SNS client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the SNS client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in SNS.|false|boolean| diff --git a/camel-aws2-sqs.md b/camel-aws2-sqs.md new file mode 100644 index 0000000000000000000000000000000000000000..12239f3a3cc91c9bc95543e07317bd2aff3d15c6 --- /dev/null +++ b/camel-aws2-sqs.md @@ -0,0 +1,389 @@ +# Aws2-sqs + +**Since Camel 3.1** + +**Both producer and consumer are supported** + +The AWS2 SQS component supports sending and receiving messages to +[Amazon’s SQS](https://aws.amazon.com/sqs) service. + +Prerequisites + +You must have a valid Amazon Web Services developer account, and be +signed up to use Amazon SQS. 
More information is available at [Amazon
+SQS](https://aws.amazon.com/sqs).
+
+# URI Format
+
+    aws2-sqs://queueNameOrArn[?options]
+
+The queue will be created if it doesn’t already exist.
+
+You can append query options to the URI in the following format:
+
+`?options=value&option2=value&...`
+
+Required SQS component options
+
+You have to provide the amazonSQSClient in the Registry, or your
+accessKey and secretKey, to access [Amazon’s
+SQS](https://aws.amazon.com/sqs).
+
+# Batch Consumer
+
+This component implements the Batch Consumer.
+
+This allows you, for instance, to know how many messages exist in this
+batch and to let the Aggregator aggregate this number of messages.
+
+# Usage
+
+## Static credentials, Default Credential Provider and Profile Credentials Provider
+
+You can avoid using explicit static credentials by setting the
+useDefaultCredentialsProvider option to true.
+
+The order of evaluation for the Default Credentials Provider is the
+following:
+
+- Java system properties - `aws.accessKeyId` and `aws.secretKey`.
+
+- Environment variables - `AWS_ACCESS_KEY_ID` and
+  `AWS_SECRET_ACCESS_KEY`.
+
+- Web Identity Token from AWS STS.
+
+- The shared credentials and config files.
+
+- Amazon ECS container credentials - loaded from Amazon ECS if the
+  environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is
+  set.
+
+- Amazon EC2 Instance profile credentials.
+
+You can also use the Profile Credentials Provider, by setting the
+useProfileCredentialsProvider option to true and profileCredentialsName
+to the profile name.
+
+Only one of static, default and profile credentials can be used at the
+same time.
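+
+As a minimal sketch (the queue name `camel-1` and the profile name
+`test-account` are placeholders), these providers can be enabled
+directly as endpoint options:
+
+    // default credentials provider chain
+    from("aws2-sqs://camel-1?useDefaultCredentialsProvider=true&region=eu-west-1")
+        .to("mock:result");
+
+    // named profile from the shared credentials/config files
+    from("aws2-sqs://camel-1?useProfileCredentialsProvider=true&profileCredentialsName=test-account&region=eu-west-1")
+        .to("mock:result");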
+
+For more information about this, you can look at the [AWS credentials
+documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html).
+
+## Advanced AmazonSQS configuration
+
+If your Camel application is running behind a firewall, or if you need
+to have more control over the SqsClient instance configuration, you can
+create your own instance and configure Camel to use it via its bean id.
+
+In the example below, we use *myClient* as the bean id:
+
+    // create my own instance of SqsClient
+    SqsClient sqs = ...
+
+    // register the client into the Camel registry
+    camelContext.getRegistry().bind("myClient", sqs);
+
+    // refer to the custom client via myClient as the bean id
+    from("aws2-sqs://MyQueue?amazonSQSClient=#myClient&delay=5000&maxMessagesPerPoll=5")
+        .to("mock:result");
+
+## DelayQueue VS Delay for Single message
+
+When the option delayQueue is set to true, the SQS queue will be a
+DelayQueue with the DelaySeconds option as delay. For more information
+about DelayQueue you can read the [AWS SQS
+documentation](https://docs.aws.amazon.com/en_us/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-delay-queues.html).
+One important point to take into account is the following:
+
+- For standard queues, the per-queue delay setting is not
+  retroactive: changing the setting doesn’t affect the delay of
+  messages already in the queue.
+
+- For FIFO queues, the per-queue delay setting is
+  retroactive: changing the setting affects the delay of messages
+  already in the queue.
+
+as stated in the official documentation. If you want to specify a delay
+on single messages, ignore the delayQueue option; set this option to
+true if you need to add a fixed delay to all enqueued messages.
+
+## Server Side Encryption
+
+There is a set of Server Side Encryption attributes for a queue. The
+related options are: `serverSideEncryptionEnabled`, `kmsMasterKeyId` and
+`kmsDataKeyReusePeriodSeconds`.
The SSE is disabled by default. You need to
+explicitly set the option to true and set the related parameters as
+queue attributes.
+
+# JMS-style Selectors
+
+SQS does not allow selectors, but you can effectively achieve this by
+using the Camel Filter EIP and setting an appropriate
+`visibilityTimeout`. When SQS dispatches a message, it will wait up to
+the visibility timeout before it tries to dispatch the message to a
+different consumer, unless a DeleteMessage is received. By default,
+Camel will always send the DeleteMessage at the end of the route, unless
+the route ended in failure. To achieve appropriate filtering and not
+send the DeleteMessage even on successful completion of the route, use a
+Filter:
+
+    from("aws2-sqs://MyQueue?amazonSQSClient=#client&defaultVisibilityTimeout=5000&deleteIfFiltered=false&deleteAfterRead=false")
+        .filter(simple("${header.login} == true"))
+        .setProperty(Sqs2Constants.SQS_DELETE_FILTERED, constant(true))
+        .to("mock:filter");
+
+In the above code, if an exchange doesn’t have an appropriate header, it
+will not make it through the filter AND will also not be deleted from
+the SQS queue. Once the visibility timeout expires, the message will
+become visible to other consumers.
+
+Note that we must set the property `Sqs2Constants.SQS_DELETE_FILTERED`
+to `true` to instruct Camel to send the DeleteMessage for the exchanges
+that pass the filter.
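+
+Returning to the Server Side Encryption options above, they can likewise
+be set as plain endpoint options; a hedged sketch (the KMS key alias is
+a placeholder):
+
+    from("aws2-sqs://MyQueue?amazonSQSClient=#client&serverSideEncryptionEnabled=true&kmsMasterKeyId=RAW(alias/my-key)&kmsDataKeyReusePeriodSeconds=300")
+        .to("mock:result");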
+
+# Available Producer Operations
+
+- single message (default)
+
+- sendBatchMessage
+
+- deleteMessage
+
+- listQueues
+
+# Send Message
+
+    from("direct:start")
+        .setBody(constant("Camel rocks!"))
+        .to("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)&region=eu-west-1");
+
+# Send Batch Message
+
+You can set a `SendMessageBatchRequest` or an `Iterable` as the body:
+
+    from("direct:start")
+        .setHeader(Sqs2Constants.SQS_OPERATION, constant("sendBatchMessage"))
+        .process(new Processor() {
+            @Override
+            public void process(Exchange exchange) throws Exception {
+                List<String> c = new ArrayList<>();
+                c.add("team1");
+                c.add("team2");
+                c.add("team3");
+                c.add("team4");
+                exchange.getIn().setBody(c);
+            }
+        })
+        .to("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)&region=eu-west-1");
+
+As a result, you’ll get an exchange containing a
+`SendMessageBatchResponse` instance, which you can examine to check
+which messages succeeded and which did not. The id set on each message
+of the batch will be a random UUID.
+
+# Delete single Message
+
+Use the deleteMessage operation to delete a single message. You’ll need
+to set a receipt handle header for the message you want to delete.
+
+    from("direct:start")
+        .setHeader(Sqs2Constants.SQS_OPERATION, constant("deleteMessage"))
+        .setHeader(Sqs2Constants.RECEIPT_HANDLE, constant("123456"))
+        .to("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)&region=eu-west-1");
+
+As a result, you’ll get an exchange containing a `DeleteMessageResponse`
+instance, which you can use to check whether the message was deleted.
+
+# List Queues
+
+Use the listQueues operation to list queues.
+
+    from("direct:start")
+        .setHeader(Sqs2Constants.SQS_OPERATION, constant("listQueues"))
+        .to("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)&region=eu-west-1");
+
+As a result, you’ll get an exchange containing a `ListQueuesResponse`
+instance, which you can examine to check the actual queues.
+
+# Purge Queue
+
+Use the purgeQueue operation to purge the queue.
+
+    from("direct:start")
+        .setHeader(Sqs2Constants.SQS_OPERATION, constant("purgeQueue"))
+        .to("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)&region=eu-west-1");
+
+As a result, you’ll get an exchange containing a `PurgeQueueResponse`
+instance.
+
+# Queue Auto-creation
+
+With the option `autoCreateQueue`, users can control the automatic
+creation of an SQS queue in case it doesn’t exist. The default for this
+option is `false`. If set to *false*, any operation on a non-existent
+queue in AWS won’t be successful and an error will be returned.
+
+# Send Batch Message and Message Deduplication Strategy
+
+In case you’re using the sendBatchMessage operation, you can set two
+different kinds of Message Deduplication Strategy:
+
+- useExchangeId
+
+- useContentBasedDeduplication
+
+The first one will use an ExchangeIdMessageDeduplicationIdStrategy,
+which uses the Exchange ID as parameter. The other one will use a
+NullMessageDeduplicationIdStrategy, which uses the body as the
+deduplication element.
+
+In case of the send batch message operation, you’ll need to use
+`useContentBasedDeduplication`, and on the queue you’re pointing to
+you’ll need to enable the `content based deduplication` option.
+
+# Dependencies
+
+Maven users will need to add the following dependency to their pom.xml.
+
+**pom.xml**
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-aws2-sqs</artifactId>
+        <version>${camel-version}</version>
+    </dependency>
+
+where `${camel-version}` must be replaced by the actual version of
+Camel.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|amazonAWSHost|The hostname of the Amazon AWS cloud.|amazonaws.com|string|
+|autoCreateQueue|Setting the auto-creation of the queue|false|boolean|
+|configuration|The AWS SQS default configuration||object|
+|overrideEndpoint|Set the need for overriding the endpoint.
This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|protocol|The underlying protocol used to communicate with SQS|https|string| +|queueOwnerAWSAccountId|Specify the queue owner aws account id when you need to connect the queue with a different account owner.||string| +|region|The region in which SQS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|attributeNames|A list of attribute names to receive when consuming. Multiple names can be separated by comma.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|concurrentConsumers|Allows you to use multiple threads to poll the sqs queue to increase throughput|1|integer| +|defaultVisibilityTimeout|The default visibility timeout (in seconds)||integer| +|deleteAfterRead|Delete message from SQS after it has been read|true|boolean| +|deleteIfFiltered|Whether to send the DeleteMessage to the SQS queue if the exchange has property with key Sqs2Constants#SQS\_DELETE\_FILTERED (CamelAwsSqsDeleteFiltered) set to true.|true|boolean| +|extendMessageVisibility|If enabled, then a scheduled background task will keep extending the message visibility on SQS. This is needed if it takes a long time to process the message. If set to true defaultVisibilityTimeout must be set. See details at Amazon docs.|false|boolean| +|kmsDataKeyReusePeriodSeconds|The length of time, in seconds, for which Amazon SQS can reuse a data key to encrypt or decrypt messages before calling AWS KMS again. An integer representing seconds, between 60 seconds (1 minute) and 86,400 seconds (24 hours). Default: 300 (5 minutes).||integer| +|kmsMasterKeyId|The ID of an AWS-managed customer master key (CMK) for Amazon SQS or a custom CMK.||string| +|messageAttributeNames|A list of message attribute names to receive when consuming. Multiple names can be separated by comma.||string| +|serverSideEncryptionEnabled|Define if Server Side Encryption is enabled or not on the queue|false|boolean| +|visibilityTimeout|The duration (in seconds) that the received messages are hidden from subsequent retrieve requests after being retrieved by a ReceiveMessage request to set in the com.amazonaws.services.sqs.model.SetQueueAttributesRequest. This only makes sense if it's different from defaultVisibilityTimeout. 
It changes the queue visibility timeout attribute permanently.||integer| +|waitTimeSeconds|Duration in seconds (0 to 20) that the ReceiveMessage action call will wait until a message is in the queue to include in the response.||integer| +|batchSeparator|Set the separator when passing a String to send batch message operation|,|string| +|delaySeconds|Delay sending messages for a number of seconds.||integer| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|messageDeduplicationIdStrategy|Only for FIFO queues. Strategy for setting the messageDeduplicationId on the message. It can be one of the following options: useExchangeId, useContentBasedDeduplication. For the useContentBasedDeduplication option, no messageDeduplicationId will be set on the message.|useExchangeId|string| +|messageGroupIdStrategy|Only for FIFO queues. Strategy for setting the messageGroupId on the message. It can be one of the following options: useConstant, useExchangeId, usePropertyValue. For the usePropertyValue option, the value of property CamelAwsMessageGroupId will be used.||string| +|messageHeaderExceededLimit|What to do if sending to AWS SQS has more messages than AWS allows (currently only maximum 10 message headers are allowed). WARN will log a WARN about the limit is for each additional header, so the message can be sent to AWS. 
WARN\_ONCE will only log one time a WARN about the limit is hit, and drop additional headers, so the message can be sent to AWS. IGNORE will ignore (no logging) and drop additional headers, so the message can be sent to AWS. FAIL will cause an exception to be thrown and the message is not sent to AWS.|WARN|string| +|operation|The operation to do in case the user don't want to send only a message||object| +|amazonSQSClient|To use the AmazonSQS client||object| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|delayQueue|Define if you want to apply delaySeconds option to the queue or on single messages|false|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|proxyHost|To define a proxy host when instantiating the SQS client||string| +|proxyPort|To define a proxy port when instantiating the SQS client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the SQS client|HTTPS|object| +|maximumMessageSize|The maximumMessageSize (in bytes) an SQS message can contain for this queue.||integer| +|messageRetentionPeriod|The messageRetentionPeriod (in seconds) a message will be retained by SQS for this queue.||integer| +|policy|The policy for this queue. 
It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems.||string| +|queueUrl|To define the queueUrl explicitly. All other parameters, which would influence the queueUrl, are ignored. This parameter is intended to be used to connect to a mock implementation of SQS, for testing purposes.||string| +|receiveMessageWaitTimeSeconds|If you do not specify WaitTimeSeconds in the request, the queue attribute ReceiveMessageWaitTimeSeconds is used to determine how long to wait.||integer| +|redrivePolicy|Specify the policy that send message to DeadLetter queue. See detail at Amazon docs.||string| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the SQS client should expect to load credentials on an AWS infra instance or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the SQS client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the SQS client should expect to use Session Credentials. 
This is useful in a situation in which the user needs to assume an IAM role for doing operations in SQS.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|queueNameOrArn|Queue name or ARN||string| +|amazonAWSHost|The hostname of the Amazon AWS cloud.|amazonaws.com|string| +|autoCreateQueue|Setting the auto-creation of the queue|false|boolean| +|headerFilterStrategy|To use a custom HeaderFilterStrategy to map headers to/from Camel.||object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|protocol|The underlying protocol used to communicate with SQS|https|string| +|queueOwnerAWSAccountId|Specify the queue owner aws account id when you need to connect the queue with a different account owner.||string| +|region|The region in which SQS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|attributeNames|A list of attribute names to receive when consuming. Multiple names can be separated by comma.||string| +|concurrentConsumers|Allows you to use multiple threads to poll the sqs queue to increase throughput|1|integer| +|defaultVisibilityTimeout|The default visibility timeout (in seconds)||integer| +|deleteAfterRead|Delete message from SQS after it has been read|true|boolean| +|deleteIfFiltered|Whether to send the DeleteMessage to the SQS queue if the exchange has property with key Sqs2Constants#SQS\_DELETE\_FILTERED (CamelAwsSqsDeleteFiltered) set to true.|true|boolean| +|extendMessageVisibility|If enabled, then a scheduled background task will keep extending the message visibility on SQS. 
This is needed if it takes a long time to process the message. If set to true defaultVisibilityTimeout must be set. See details at Amazon docs.|false|boolean| +|kmsDataKeyReusePeriodSeconds|The length of time, in seconds, for which Amazon SQS can reuse a data key to encrypt or decrypt messages before calling AWS KMS again. An integer representing seconds, between 60 seconds (1 minute) and 86,400 seconds (24 hours). Default: 300 (5 minutes).||integer| +|kmsMasterKeyId|The ID of an AWS-managed customer master key (CMK) for Amazon SQS or a custom CMK.||string| +|maxMessagesPerPoll|Gets the maximum number of messages as a limit to poll at each polling. Is default unlimited, but use 0 or negative number to disable it as unlimited.||integer| +|messageAttributeNames|A list of message attribute names to receive when consuming. Multiple names can be separated by comma.||string| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|serverSideEncryptionEnabled|Define if Server Side Encryption is enabled or not on the queue|false|boolean| +|visibilityTimeout|The duration (in seconds) that the received messages are hidden from subsequent retrieve requests after being retrieved by a ReceiveMessage request to set in the com.amazonaws.services.sqs.model.SetQueueAttributesRequest. This only makes sense if it's different from defaultVisibilityTimeout. It changes the queue visibility timeout attribute permanently.||integer| +|waitTimeSeconds|Duration in seconds (0 to 20) that the ReceiveMessage action call will wait until a message is in the queue to include in the response.||integer| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|batchSeparator|Set the separator when passing a String to send batch message operation|,|string| +|delaySeconds|Delay sending messages for a number of seconds.||integer| +|messageDeduplicationIdStrategy|Only for FIFO queues. Strategy for setting the messageDeduplicationId on the message. It can be one of the following options: useExchangeId, useContentBasedDeduplication. For the useContentBasedDeduplication option, no messageDeduplicationId will be set on the message.|useExchangeId|string| +|messageGroupIdStrategy|Only for FIFO queues. Strategy for setting the messageGroupId on the message. It can be one of the following options: useConstant, useExchangeId, usePropertyValue. 
For the usePropertyValue option, the value of property CamelAwsMessageGroupId will be used.||string| +|messageHeaderExceededLimit|What to do if sending to AWS SQS has more messages than AWS allows (currently only maximum 10 message headers are allowed). WARN will log a WARN about the limit is for each additional header, so the message can be sent to AWS. WARN\_ONCE will only log one time a WARN about the limit is hit, and drop additional headers, so the message can be sent to AWS. IGNORE will ignore (no logging) and drop additional headers, so the message can be sent to AWS. FAIL will cause an exception to be thrown and the message is not sent to AWS.|WARN|string| +|operation|The operation to do in case the user don't want to send only a message||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|amazonSQSClient|To use the AmazonSQS client||object|
+|delayQueue|Define if you want to apply delaySeconds option to the queue or on single messages|false|boolean|
+|proxyHost|To define a proxy host when instantiating the SQS client||string|
+|proxyPort|To define a proxy port when instantiating the SQS client||integer|
+|proxyProtocol|To define a proxy protocol when instantiating the SQS client|HTTPS|object|
+|maximumMessageSize|The maximumMessageSize (in bytes) an SQS message can contain for this queue.||integer|
+|messageRetentionPeriod|The messageRetentionPeriod (in seconds) a message will be retained by SQS for this queue.||integer|
+|policy|The policy for this queue. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems.||string|
+|queueUrl|To define the queueUrl explicitly. All other parameters, which would influence the queueUrl, are ignored. This parameter is intended to be used to connect to a mock implementation of SQS, for testing purposes.||string|
+|receiveMessageWaitTimeSeconds|If you do not specify WaitTimeSeconds in the request, the queue attribute ReceiveMessageWaitTimeSeconds is used to determine how long to wait.||integer|
+|redrivePolicy|Specify the policy that sends messages to the DeadLetter queue. See details at Amazon docs.||string|
+|backoffErrorThreshold|The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in.||integer|
+|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier should kick in.||integer|
+|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row.
The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. 
See ScheduledExecutorService in JDK for details.|true|boolean| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the SQS client should expect to load credentials on an AWS infra instance or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the SQS client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the SQS client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in SQS.|false|boolean| diff --git a/camel-aws2-step-functions.md b/camel-aws2-step-functions.md new file mode 100644 index 0000000000000000000000000000000000000000..27e2d9fec979db2b4e3f443d1e9df9c682558385 --- /dev/null +++ b/camel-aws2-step-functions.md @@ -0,0 +1,202 @@ +# Aws2-step-functions + +**Since Camel 4.0** + +**Only producer is supported** + +The AWS2 Step Functions component supports the following operations on +[AWS Step Functions](https://aws.amazon.com/step-functions/): + +- Create, delete, update, describe, list state machines. + +- Create, delete, describe, list activities. + +- Start, start sync, stop, list, describe executions. + +- Get activities task. + +- Get execution history + +Prerequisites + +You must have a valid Amazon Web Services developer account, and be +signed up to use Amazon Step Functions. More information is available at +[AWS Step Functions](https://aws.amazon.com/step-functions/). 
+
+# URI Format
+
+    aws2-step-functions://label[?options]
+
+You can append query options to the URI in the following format:
+
+`?options=value&option2=value&...`
+
+Required Step Functions component options
+
+You have to provide the awsSfnClient in the Registry, or your accessKey
+and secretKey, to access the [AWS Step
+Functions](https://aws.amazon.com/step-functions/) service.
+
+# Usage
+
+## Static credentials, Default Credential Provider and Profile Credentials Provider
+
+You can avoid using explicit static credentials by setting the
+useDefaultCredentialsProvider option to true.
+
+The order of evaluation for the Default Credentials Provider is the
+following:
+
+- Java system properties - `aws.accessKeyId` and `aws.secretKey`.
+
+- Environment variables - `AWS_ACCESS_KEY_ID` and
+  `AWS_SECRET_ACCESS_KEY`.
+
+- Web Identity Token from AWS STS.
+
+- The shared credentials and config files.
+
+- Amazon ECS container credentials - loaded from Amazon ECS if the
+  environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is
+  set.
+
+- Amazon EC2 Instance profile credentials.
+
+You can also use the Profile Credentials Provider, by setting the
+useProfileCredentialsProvider option to true and profileCredentialsName
+to the profile name.
+
+Only one of static, default and profile credentials can be used at the
+same time.
+ +For more information, see the [AWS credentials +documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html). + +## Step Functions Producer operations + +The Camel-AWS Step Functions component provides the following operations on +the producer side: + +- createStateMachine + +- deleteStateMachine + +- updateStateMachine + +- describeStateMachine + +- listStateMachines + +- createActivity + +- deleteActivity + +- describeActivity + +- getActivityTask + +- listActivities + +- startExecution + +- startSyncExecution + +- stopExecution + +- describeExecution + +- listExecutions + +- getExecutionHistory + +# Producer Examples + +- createStateMachine: this operation will create a state machine + + + + from("direct:createStateMachine") + .to("aws2-step-functions://test?awsSfnClient=#awsSfnClient&operation=createStateMachine") + +# Using a POJO as body + +Sometimes building an AWS request can be complex because of its many +options, so you can instead use a POJO as the body. In AWS +Step Functions there are multiple operations you can submit; for +example, for a Create State Machine request you can do something like: + + from("direct:start") + .setBody(CreateStateMachineRequest.builder().name("state-machine").build()) + .to("aws2-step-functions://test?awsSfnClient=#awsSfnClient&operation=createStateMachine&pojoRequest=true") + +In this way, you’ll pass the request directly, without needing to set +headers and options specific to this operation. + +# Dependencies + +Maven users will need to add the following dependency to their pom.xml. + +**pom.xml** + + + org.apache.camel + camel-aws2-step-functions + ${camel-version} + + +where `${camel-version}` must be replaced by the actual version of +Camel.
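The endpoint strings in the examples above all follow the `aws2-step-functions://label[?options]` format from the URI Format section. A small sketch of assembling such a URI from an ordered options map (`EndpointUriSketch` is a hypothetical helper, not a Camel API):

```java
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical helper that assembles a Camel-style endpoint URI:
// scheme://label?key1=value1&key2=value2
public class EndpointUriSketch {
    public static String build(String scheme, String label, Map<String, String> options) {
        if (options.isEmpty()) {
            return scheme + "://" + label;
        }
        String query = options.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining("&"));
        return scheme + "://" + label + "?" + query;
    }
}
```

With an ordered map of `awsSfnClient=#awsSfnClient` and `operation=createStateMachine`, this reproduces the endpoint string used in the createStateMachine example; the `#` prefix is how Camel references a bean in the Registry.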
+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|configuration|Component configuration||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|operation|The operation to perform. It can be one of the state machine, activity and execution operations listed above, for example createStateMachine, startExecution or getActivityTask||object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|region|The region in which StepFunctions client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|uriEndpointOverride|Set the overriding uri endpoint.
This option needs to be used in combination with overrideEndpoint option||string| +|useDefaultCredentialsProvider|Set whether the StepFunctions client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the StepFunctions client should expect to load credentials through a profile credentials provider.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|awsSfnClient|To use an existing configured AwsStepFunctionsClient client||object| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|proxyHost|To define a proxy host when instantiating the StepFunctions client||string| +|proxyPort|To define a proxy port when instantiating the StepFunctions client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the StepFunctions client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|useSessionCredentials|Set whether the Step Functions client should expect to use Session Credentials. 
This is useful in a situation in which the user needs to assume an IAM role for doing operations in Step Functions.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|label|Logical name||string| +|operation|The operation to perform. It can be one of the state machine, activity and execution operations listed above, for example createStateMachine, startExecution or getActivityTask||object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|region|The region in which StepFunctions client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|useDefaultCredentialsProvider|Set whether the StepFunctions client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the StepFunctions client should expect to load credentials through a profile credentials provider.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|awsSfnClient|To use an existing configured AwsStepFunctionsClient client||object| +|proxyHost|To define a proxy host when instantiating the StepFunctions client||string| +|proxyPort|To define a proxy port when instantiating the StepFunctions client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the StepFunctions client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|useSessionCredentials|Set whether the Step Functions client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in Step Functions.|false|boolean| diff --git a/camel-aws2-sts.md b/camel-aws2-sts.md new file mode 100644 index 0000000000000000000000000000000000000000..3ea46bea8883457b2c66d4d66f67660d0502b8a2 --- /dev/null +++ b/camel-aws2-sts.md @@ -0,0 +1,182 @@ +# Aws2-sts + +**Since Camel 3.5** + +**Only producer is supported** + +The AWS2 STS component supports the assumeRole, getSessionToken and +getFederationToken operations on [AWS +STS](https://aws.amazon.com/sts/). + +Prerequisites + +You must have a valid Amazon Web Services developer account, and be +signed up to use Amazon STS. More information is available at [Amazon +STS](https://aws.amazon.com/sts/). + +The AWS2 STS component works on the aws-global region, and it has +aws-global as the default region. + +# URI Format + + aws2-sts://label[?options] + +You can append query options to the URI in the following format: + +`?option1=value&option2=value&...` + +Required STS component options + +You have to provide the amazonSTSClient in the Registry, or your +accessKey and secretKey, to access the [Amazon +STS](https://aws.amazon.com/sts/) service.
+ +# Usage + +## Static credentials, Default Credential Provider and Profile Credentials Provider + +You can avoid using explicit static credentials by setting the +useDefaultCredentialsProvider option to true. + +The Default Credentials Provider evaluates credentials in the following +order: + +- Java system properties - `aws.accessKeyId` and `aws.secretKey`. + +- Environment variables - `AWS_ACCESS_KEY_ID` and + `AWS_SECRET_ACCESS_KEY`. + +- Web Identity Token from AWS STS. + +- The shared credentials and config files. + +- Amazon ECS container credentials - loaded from Amazon ECS if the + environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is + set. + +- Amazon EC2 Instance profile credentials. + +You can also use the Profile Credentials Provider by setting the +useProfileCredentialsProvider option to true and profileCredentialsName +to the profile name. + +Only one of static, default and profile credentials can be used at the +same time.
+ +For more information, see the [AWS credentials +documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html). + +## STS Producer operations + +The Camel-AWS STS component provides the following operations on the producer +side: + +- assumeRole + +- getSessionToken + +- getFederationToken + +# Producer Examples + +- assumeRole: this operation will make an AWS user assume a different + role temporarily + + + + from("direct:assumeRole") + .setHeader(STS2Constants.ROLE_ARN, constant("arn:123")) + .setHeader(STS2Constants.ROLE_SESSION_NAME, constant("groot")) + .to("aws2-sts://test?stsClient=#amazonSTSClient&operation=assumeRole") + +- getSessionToken: this operation will return a temporary session + token + + + + from("direct:getSessionToken") + .to("aws2-sts://test?stsClient=#amazonSTSClient&operation=getSessionToken") + +- getFederationToken: this operation will return a temporary + federation token + + + + from("direct:getFederationToken") + .setHeader(STS2Constants.FEDERATED_NAME, constant("federation-account")) + .to("aws2-sts://test?stsClient=#amazonSTSClient&operation=getFederationToken") + +# Using a POJO as body + +Sometimes building an AWS request can be complex because of its many +options, so you can instead use a POJO as the body. In AWS +STS, for example, for an Assume Role request you can do something like: + + from("direct:createUser") + .setBody(AssumeRoleRequest.builder().roleArn("arn:123").roleSessionName("groot").build()) + .to("aws2-sts://test?stsClient=#amazonSTSClient&operation=assumeRole&pojoRequest=true") + +In this way, you’ll pass the request directly, without needing to set +headers and options specific to this operation. + +# Dependencies + +Maven users will need to add the following dependency to their pom.xml. + +**pom.xml** + + + org.apache.camel + camel-aws2-sts + ${camel-version} + + +where `${camel-version}` must be replaced by the actual version of +Camel.
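The configuration tables that follow note that uriEndpointOverride only takes effect together with overrideEndpoint. That pairing rule can be sketched as follows (a hypothetical check with made-up endpoint values, not Camel's actual implementation):

```java
// Hypothetical sketch of the documented rule: the override URI is only
// honoured when overrideEndpoint is also set to true.
public class EndpointOverrideSketch {
    public static String effectiveEndpoint(boolean overrideEndpoint,
                                           String uriEndpointOverride,
                                           String defaultEndpoint) {
        if (overrideEndpoint && uriEndpointOverride != null && !uriEndpointOverride.isEmpty()) {
            return uriEndpointOverride;
        }
        // Without overrideEndpoint=true, uriEndpointOverride is ignored.
        return defaultEndpoint;
    }
}
```

Setting only one of the two options leaves the client on its default endpoint, which is why the tables describe them as a pair.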
+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|configuration|Component configuration||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|operation|The operation to perform|assumeRole|object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|region|The region in which the STS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()|aws-global|string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|stsClient|To use an existing configured AWS STS client||object| +|proxyHost|To define a proxy host when instantiating the STS client||string| +|proxyPort|To define a proxy port when instantiating the STS client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the STS client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the STS client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the STS client should expect to load credentials through a profile credentials provider.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|label|Logical name||string| +|operation|The operation to perform|assumeRole|object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|region|The region in which the STS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()|aws-global|string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|stsClient|To use an existing configured AWS STS client||object| +|proxyHost|To define a proxy host when instantiating the STS client||string| +|proxyPort|To define a proxy port when instantiating the STS client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the STS client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the STS client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the STS client should expect to load credentials through a profile credentials provider.|false|boolean| diff --git a/camel-aws2-timestream.md b/camel-aws2-timestream.md new file mode 100644 index 0000000000000000000000000000000000000000..b2a03ac2d344f9887fdfff5da7ad2e643f3cdbc4 --- /dev/null +++ b/camel-aws2-timestream.md @@ -0,0 +1,270 @@ +# Aws2-timestream + +**Since Camel 4.1** + +**Only producer is supported** + +The AWS2 Timestream component supports the following operations on [AWS +Timestream](https://aws.amazon.com/timestream/): + +- Write Operations + + - Describe Write Endpoints + + - Create, Describe, 
Resume, List Batch Load Tasks + + - Create, Delete, Update, Describe, List Databases + + - Create, Delete, Update, Describe, List Tables + + - Write Records + +- Query Operations + + - Describe Query Endpoints + + - Prepare Query, Query, Cancel Query + + - Create, Delete, Execute, Update, Describe, List Scheduled + Queries + +Prerequisites + +You must have a valid Amazon Web Services developer account, and be +signed up to use Amazon Timestream. More information is available at +[AWS Timestream](https://aws.amazon.com/timestream/). + +# URI Format + + aws2-timestream://clientType:label[?options] + +You can append query options to the URI in the following format: + +`?options=value&option2=value&...` + +Required Timestream component options + +Based on the type of operation to be performed, the type of client +(write/query) needs to be provided as clientType URI path parameter + +You have to provide either the awsTimestreamWriteClient(for write +operations) or awsTimestreamQueryClient(for query operations) in the +Registry or your accessKey and secretKey to access the [AWS +Timestream](https://aws.amazon.com/timestream/) service. + +# Usage + +## Static credentials, Default Credential Provider and Profile Credentials Provider + +You have the possibility of avoiding the usage of explicit static +credentials by specifying the useDefaultCredentialsProvider option and +set it to true. + +The order of evaluation for Default Credentials Provider is the +following: + +- Java system properties - `aws.accessKeyId` and `aws.secretKey`. + +- Environment variables - `AWS_ACCESS_KEY_ID` and + `AWS_SECRET_ACCESS_KEY`. + +- Web Identity Token from AWS STS. + +- The shared credentials and config files. + +- Amazon ECS container credentials - loaded from the Amazon ECS if the + environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is + set. + +- Amazon EC2 Instance profile credentials. 
+ +You have also the possibility of using Profile Credentials Provider, by +specifying the useProfileCredentialsProvider option to true and +profileCredentialsName to the profile name. + +Only one of static, default and profile credentials could be used at the +same time. + +For more information about this you can look at [AWS credentials +documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html) + +## Timestream Producer operations + +Camel-AWS Timestream component provides the following operation on the +producer side: + +- Write Operations + + - describeEndpoints + + - createBatchLoadTask + + - describeBatchLoadTask + + - resumeBatchLoadTask + + - listBatchLoadTasks + + - createDatabase + + - deleteDatabase + + - describeDatabase + + - updateDatabase + + - listDatabases + + - createTable + + - deleteTable + + - describeTable + + - updateTable + + - listTables + + - writeRecords + +- Query Operations + + - describeEndpoints + + - createScheduledQuery + + - deleteScheduledQuery + + - executeScheduledQuery + + - updateScheduledQuery + + - describeScheduledQuery + + - listScheduledQueries + + - prepareQuery + + - query + + - cancelQuery + +# Producer Examples + +- Write Operation + + - createDatabase: this operation will create a timestream database + + + + from("direct:createDatabase") + .setHeader(Timestream2Constants.DATABASE_NAME, constant("testDb")) + .setHeader(Timestream2Constants.KMS_KEY_ID, constant("testKmsKey")) + .to("aws2-timestream://write:test?awsTimestreamWriteClient=#awsTimestreamWriteClient&operation=createDatabase") + +- Query Operation + + - query: this operation will execute a timestream query + + + + from("direct:query") + .setHeader(Timestream2Constants.QUERY_STRING, constant("SELECT * FROM testDb.testTable ORDER BY time DESC LIMIT 10")) + .to("aws2-timestream://query:test?awsTimestreamQueryClient=#awsTimestreamQueryClient&operation=query") + +# Using a POJO as body + +Sometimes building an AWS Request 
can be complex because of its many +options, so you can instead use a POJO as the body. In AWS +Timestream there are multiple operations you can submit; for example, +for a Create Database request, you can do something like: + +- Write Operation + + - createDatabase: this operation will create a Timestream database + + + + from("direct:start") + .setBody(CreateDatabaseRequest.builder().database(Database.builder().databaseName("testDb").kmsKeyId("testKmsKey").build()).build()) + .to("aws2-timestream://write:test?awsTimestreamWriteClient=#awsTimestreamWriteClient&operation=createDatabase&pojoRequest=true") + +- Query Operation + + - query: this operation will execute a Timestream query + + + + from("direct:query") + .setBody(QueryRequest.builder().queryString("SELECT * FROM testDb.testTable ORDER BY time DESC LIMIT 10").build()) + .to("aws2-timestream://query:test?awsTimestreamQueryClient=#awsTimestreamQueryClient&operation=query&pojoRequest=true") + +In this way, you’ll pass the request directly, without needing to set +headers and options specific to this operation. + +# Dependencies + +Maven users will need to add the following dependency to their pom.xml. + +**pom.xml** + + + org.apache.camel + camel-aws2-timestream + ${camel-version} + + +where `${camel-version}` must be replaced by the actual version of +Camel. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|configuration|Component configuration||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|operation|The operation to perform. It can be describeEndpoints,createBatchLoadTask,describeBatchLoadTask, resumeBatchLoadTask,listBatchLoadTasks,createDatabase,deleteDatabase,describeDatabase,updateDatabase, listDatabases,createTable,deleteTable,describeTable,updateTable,listTables,writeRecords, createScheduledQuery,deleteScheduledQuery,executeScheduledQuery,updateScheduledQuery, describeScheduledQuery,listScheduledQueries,prepareQuery,query,cancelQuery||object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|region|The region in which the Timestream client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|useDefaultCredentialsProvider|Set whether the Timestream client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the Timestream client should expect to load credentials through a profile credentials provider.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|awsTimestreamQueryClient|To use an existing configured AwsTimestreamQueryClient client||object| +|awsTimestreamWriteClient|To use an existing configured AwsTimestreamWriteClient client||object| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|proxyHost|To define a proxy host when instantiating the Timestream client||string| +|proxyPort|To define a proxy port when instantiating the Timestream client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the Timestream client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|secretKey|Amazon AWS Secret Key||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|clientType|Type of client - write/query||object| +|label|Logical name||string| +|operation|The operation to perform. 
It can be describeEndpoints,createBatchLoadTask,describeBatchLoadTask, resumeBatchLoadTask,listBatchLoadTasks,createDatabase,deleteDatabase,describeDatabase,updateDatabase, listDatabases,createTable,deleteTable,describeTable,updateTable,listTables,writeRecords, createScheduledQuery,deleteScheduledQuery,executeScheduledQuery,updateScheduledQuery, describeScheduledQuery,listScheduledQueries,prepareQuery,query,cancelQuery||object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|region|The region in which the Timestream client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|useDefaultCredentialsProvider|Set whether the Timestream client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the Timestream client should expect to load credentials through a profile credentials provider.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|awsTimestreamQueryClient|To use an existing configured AwsTimestreamQueryClient client||object| +|awsTimestreamWriteClient|To use an existing configured AwsTimestreamWriteClient client||object| +|proxyHost|To define a proxy host when instantiating the Timestream client||string| +|proxyPort|To define a proxy port when instantiating the Timestream client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the Timestream client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|secretKey|Amazon AWS Secret Key||string| diff --git a/camel-aws2-translate.md b/camel-aws2-translate.md new file mode 100644 index 0000000000000000000000000000000000000000..0ca0c2d8b1bd080ac4241e7231a5ce0c912fe21f --- /dev/null +++ b/camel-aws2-translate.md @@ -0,0 +1,171 @@ +# Aws2-translate + +**Since Camel 3.1** + +**Only producer is supported** + +The AWS2 Translate component supports translate a text in multiple +languages. [AWS Translate](https://aws.amazon.com/translate/) clusters +instances. + +Prerequisites + +You must have a valid Amazon Web Services developer account, and be +signed up to use Amazon Translate. More information is available at +[Amazon Translate](https://aws.amazon.com/translate/). + +# URI Format + + aws2-translate://label[?options] + +You can append query options to the URI in the following format: + +`?options=value&option2=value&...` + +Required Translate component options + +You have to provide the amazonTranslateClient in the Registry or your +accessKey and secretKey to access the [Amazon +Translate](https://aws.amazon.com/translate/) service. 
+ +# Usage + +## Static credentials, Default Credential Provider and Profile Credentials Provider + +You can avoid using explicit static credentials by setting the +useDefaultCredentialsProvider option to true. + +The Default Credentials Provider evaluates credentials in the following +order: + +- Java system properties - `aws.accessKeyId` and `aws.secretKey`. + +- Environment variables - `AWS_ACCESS_KEY_ID` and + `AWS_SECRET_ACCESS_KEY`. + +- Web Identity Token from AWS STS. + +- The shared credentials and config files. + +- Amazon ECS container credentials - loaded from Amazon ECS if the + environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is + set. + +- Amazon EC2 Instance profile credentials. + +You can also use the Profile Credentials Provider by setting the +useProfileCredentialsProvider option to true and profileCredentialsName +to the profile name. + +Only one of static, default and profile credentials can be used at the +same time. + +For more information, see the [AWS credentials +documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html). + +## Translate Producer operations + +The Camel-AWS Translate component provides the following operation on the +producer side: + +- translateText + +# Translate Text example + + from("direct:start") + .setHeader(TranslateConstants.SOURCE_LANGUAGE, TranslateLanguageEnum.ITALIAN) + .setHeader(TranslateConstants.TARGET_LANGUAGE, TranslateLanguageEnum.GERMAN) + .setBody("Ciao") + .to("aws2-translate://test?translateClient=#amazonTranslateClient&operation=translateText"); + +As a result, you’ll get an exchange containing the translated text. + +# Using a POJO as body + +Sometimes building an AWS request can be complex because of its many +options, so you can instead use a POJO as the body.
In AWS +Translate, the only operation available is TranslateText, so you can do +something like: + + from("direct:start") + .setBody(TranslateTextRequest.builder().sourceLanguageCode(Translate2LanguageEnum.ITALIAN.toString()) + .targetLanguageCode(Translate2LanguageEnum.GERMAN.toString()).text("Ciao").build()) + .to("aws2-translate://test?translateClient=#amazonTranslateClient&operation=translateText&pojoRequest=true"); + +In this way, you’ll pass the request directly without the need of +passing headers and options specifically related to this operation. + +# Dependencies + +Maven users will need to add the following dependency to their pom.xml. + +**pom.xml** + + + org.apache.camel + camel-aws2-translate + ${camel-version} + + +where `${camel-version}` must be replaced by the actual version of +Camel. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|autodetectSourceLanguage|Being able to autodetect the source language|false|boolean| +|configuration|Component configuration||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|operation|The operation to perform|translateText|object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|region|The region in which the Translate client needs to work. 
When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|sourceLanguage|Source language to use||string| +|targetLanguage|Target language to use||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|translateClient|To use an existing configured AWS Translate client||object| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. 
You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|proxyHost|To define a proxy host when instantiating the Translate client||string| +|proxyPort|To define a proxy port when instantiating the Translate client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the Translate client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the Translate client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set whether the Translate client should expect to load credentials through a profile credentials provider.|false|boolean| +|useSessionCredentials|Set whether the Translate client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in Translate.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|label|Logical name||string| +|autodetectSourceLanguage|Being able to autodetect the source language|false|boolean| +|operation|The operation to perform|translateText|object| +|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean| +|pojoRequest|If we want to use a POJO request as body or not|false|boolean| +|region|The region in which the Translate client needs to work. 
When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1) You'll need to use the name Region.EU\_WEST\_1.id()||string| +|sourceLanguage|Source language to use||string| +|targetLanguage|Target language to use||string| +|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|translateClient|To use an existing configured AWS Translate client||object| +|proxyHost|To define a proxy host when instantiating the Translate client||string| +|proxyPort|To define a proxy port when instantiating the Translate client||integer| +|proxyProtocol|To define a proxy protocol when instantiating the Translate client|HTTPS|object| +|accessKey|Amazon AWS Access Key||string| +|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string| +|secretKey|Amazon AWS Secret Key||string| +|sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string| +|trustAllCertificates|If we want to trust all certificates in case of overriding the endpoint|false|boolean| +|useDefaultCredentialsProvider|Set whether the Translate client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.|false|boolean| +|useProfileCredentialsProvider|Set 
whether the Translate client should expect to load credentials through a profile credentials provider.|false|boolean|
+|useSessionCredentials|Set whether the Translate client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in Translate.|false|boolean|
diff --git a/camel-azure-cosmosdb.md b/camel-azure-cosmosdb.md
new file mode 100644
index 0000000000000000000000000000000000000000..fa618c2fac0d7e26ab5014ae8138ecedf9f168f0
--- /dev/null
+++ b/camel-azure-cosmosdb.md
@@ -0,0 +1,791 @@
+# Azure-cosmosdb
+
+**Since Camel 3.10**
+
+**Both producer and consumer are supported**
+
+[Azure Cosmos DB](https://azure.microsoft.com/en-us/services/cosmos-db/)
+is Microsoft’s globally distributed, multi-model database service for
+operational and analytics workloads. It offers a multi-master feature
+with automatic scaling of throughput, compute, and storage. This
+component interacts with Azure Cosmos DB through the Azure SQL API.
+
+Prerequisites
+
+You must have a valid Azure Cosmos DB account. More information is
+available at the [Azure Documentation
+Portal](https://docs.microsoft.com/azure/).
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-azure-cosmosdb</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI Format
+
+    azure-cosmosdb://[databaseName][/containerName][?options]
+
+For the consumer, both `databaseName` and `containerName` are required.
+For the producer, it depends on the operation being requested: if the
+operation works at the database level, e.g. deleteDatabase, only
+`databaseName` is required, but if the operation works at the container
+level, e.g. readItem, then both `databaseName` and `containerName` are
+required.
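The rule above can be sketched as a small helper. This is purely illustrative and not part of the component; the groupings are abbreviated from the producer operations tables later in this page, and the consumer always requires both parts:

```java
import java.util.List;
import java.util.Set;

public class CosmosDbUriRule {

    // Abbreviated operation groupings (not exhaustive).
    static final Set<String> SERVICE_LEVEL = Set.of("listDatabases", "queryDatabases");
    static final Set<String> DATABASE_LEVEL = Set.of(
            "createDatabase", "deleteDatabase", "replaceDatabaseThroughput", "listContainers");

    // Which URI path parts a producer operation requires.
    static List<String> requiredParts(String operation) {
        if (SERVICE_LEVEL.contains(operation)) {
            return List.of();
        }
        if (DATABASE_LEVEL.contains(operation)) {
            return List.of("databaseName");
        }
        // container-level operations such as readItem, createItem, deleteItem
        return List.of("databaseName", "containerName");
    }

    public static void main(String[] args) {
        System.out.println(requiredParts("deleteDatabase")); // [databaseName]
        System.out.println(requiredParts("readItem"));       // [databaseName, containerName]
    }
}
```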
+ +You can append query options to the URI in the following format, +`?options=value&option2=value&`… + +# Authentication Information + +To use this component, you have two options to provide the required +Azure authentication information: + +- Provide `accountKey` and `databaseEndpoint` for your Azure CosmosDB + account. The account key can be generated through your CosmosDB + Azure portal. + +- Provide a + [CosmosAsyncClient](https://docs.microsoft.com/en-us/java/api/com.azure.cosmos.cosmosasyncclient?view=azure-java-stable) + instance which can be provided into `cosmosAsyncClient`. + +# Async Consumer and Producer + +This component implements the async Consumer and producer. + +This allows camel route to consume and produce events asynchronously +without blocking any threads. + +# Usage + +For example, to consume records from a specific container in a specific +database to a file, use the following snippet: + + from("azure-cosmosdb://camelDb/myContainer?accountKey=MyaccountKey&databaseEndpoint=https//myazure.com:443&leaseDatabaseName=myLeaseDB&createLeaseDatabaseIfNotExists=true&createLeaseContainerIfNotExists=true"). + to("file://directory"); + +## Message headers evaluated by the component producer + + ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+|Header|Variable Name|Type|Description|
+|---|---|---|---|
+|CamelAzureCosmosDbDatabaseName|CosmosDbConstants.DATABASE_NAME|String|Overrides the database name, which is the name of the Cosmos database that the component should connect to. In case you are producing data and have createDatabaseIfNotExists=true, the component will automatically create a Cosmos database.|
+|CamelAzureCosmosDbContainerName|CosmosDbConstants.CONTAINER_NAME|String|Overrides the container name, which is the name of the Cosmos container that the component should connect to. In case you are producing data and have createContainerIfNotExists=true, the component will automatically create a Cosmos container.|
+|CamelAzureCosmosDbOperation|CosmosDbConstants.OPERATION|CosmosDbOperationsDefinition|Sets the producer operation which can be used to execute a specific operation on the producer.|
+|CamelAzureCosmosDbQuery|CosmosDbConstants.QUERY|String|Sets the SQL query to execute on a given producer query operation.|
+|CamelAzureCosmosDbQueryRequestOptions|CosmosDbConstants.QUERY_REQUEST_OPTIONS|CosmosQueryRequestOptions|Sets additional QueryRequestOptions that can be used with queryItems, queryContainers, queryDatabases, listDatabases, listItems, listContainers operations.|
+|CamelAzureCosmosDbCreateDatabaseIfNotExist|CosmosDbConstants.CREATE_DATABASE_IF_NOT_EXIST|boolean|Sets whether the component should create the Cosmos database automatically in case it doesn’t exist in the Cosmos account.|
+|CamelAzureCosmosDbCreateContainerIfNotExist|CosmosDbConstants.CREATE_CONTAINER_IF_NOT_EXIST|boolean|Sets whether the component should create the Cosmos container automatically in case it doesn’t exist in the Cosmos account.|
+|CamelAzureCosmosDbThroughputProperties|CosmosDbConstants.THROUGHPUT_PROPERTIES|ThroughputProperties|Sets the throughput of the resources in the Azure Cosmos DB service.|
+|CamelAzureCosmosDbDatabaseRequestOptions|CosmosDbConstants.DATABASE_REQUEST_OPTIONS|CosmosDatabaseRequestOptions|Sets additional options to execute on database operations.|
+|CamelAzureCosmosDbContainerPartitionKeyPath|CosmosDbConstants.CONTAINER_PARTITION_KEY_PATH|String|Sets the container partition key path.|
+|CamelAzureCosmosDbContainerRequestOptions|CosmosDbConstants.CONTAINER_REQUEST_OPTIONS|CosmosContainerRequestOptions|Sets additional options to execute on container operations.|
+|CamelAzureCosmosDbItemPartitionKey|CosmosDbConstants.ITEM_PARTITION_KEY|String|Sets the partition key. Represents a partition key value in the Azure Cosmos DB database service. A partition key identifies the partition where the item is stored.|
+|CamelAzureCosmosDbItemRequestOptions|CosmosDbConstants.ITEM_REQUEST_OPTIONS|CosmosItemRequestOptions|Sets additional options to execute on item operations.|
+|CamelAzureCosmosDbItemId|CosmosDbConstants.ITEM_ID|String|Sets the itemId, in case it is needed for an operation on an item, such as delete or replace.|
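As the first rows of the table above indicate, these headers override what is configured on the endpoint. A plain-Java sketch of that precedence, simulating exchange headers as a simple map (the `resolve` helper is illustrative, not Camel API):

```java
import java.util.HashMap;
import java.util.Map;

public class HeaderPrecedenceExample {

    // Sketch: a header value, when present, overrides the endpoint configuration.
    static String resolve(Map<String, ?> headers, String headerName, String endpointValue) {
        Object fromHeader = headers.get(headerName);
        return fromHeader != null ? fromHeader.toString() : endpointValue;
    }

    public static void main(String[] args) {
        Map<String, Object> headers = new HashMap<>();
        headers.put("CamelAzureCosmosDbDatabaseName", "overriddenDb");

        // Endpoint was configured with "camelDb", but the header wins.
        System.out.println(resolve(headers, "CamelAzureCosmosDbDatabaseName", "camelDb"));
        // No container header set, so the endpoint value is used.
        System.out.println(resolve(headers, "CamelAzureCosmosDbContainerName", "myContainer"));
    }
}
```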
+ +## Message headers set by the component producer + + ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+|Header|Variable Name|Type|Description|
+|---|---|---|---|
+|CamelAzureCosmosDbRecourseId|CosmosDbConstants.RESOURCE_ID|String|The resource ID of the requested resource.|
+|CamelAzureCosmosDbEtag|CosmosDbConstants.E_TAG|String|The Etag ID of the requested resource.|
+|CamelAzureCosmosDbTimestamp|CosmosDbConstants.TIMESTAMP|String|The timestamp of the requested resource.|
+|CamelAzureCosmosDbResponseHeaders|CosmosDbConstants.RESPONSE_HEADERS|Map|The response headers of the requested resource.|
+|CamelAzureCosmosDbStatusCode|CosmosDbConstants.STATUS_CODE|Integer|The status code of the requested resource.|
+|CamelAzureCosmosDbDefaultTimeToLiveInSeconds|CosmosDbConstants.DEFAULT_TIME_TO_LIVE_SECONDS|Integer|The TTL of the requested resource.|
+|CamelAzureCosmosDbManualThroughput|CosmosDbConstants.MANUAL_THROUGHPUT|Integer|The manual throughput of the requested resource.|
+|CamelAzureCosmosDbAutoscaleMaxThroughput|CosmosDbConstants.AUTOSCALE_MAX_THROUGHPUT|Integer|The AutoscaleMaxThroughput of the requested resource.|
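For instance, the status code header can be used to verify the outcome of a producer call. The following plain-Java sketch simulates the response headers as a map; in a route you would read them via `exchange.getMessage().getHeader(CosmosDbConstants.STATUS_CODE, Integer.class)`:

```java
import java.util.Map;

public class ResponseHeadersExample {

    // Sketch: decide whether a producer response succeeded from its status code.
    static boolean isSuccess(Map<String, ?> headers) {
        Integer status = (Integer) headers.get("CamelAzureCosmosDbStatusCode");
        return status != null && status >= 200 && status < 300;
    }

    public static void main(String[] args) {
        // 201 Created is what a successful createItem typically returns.
        Map<String, Object> headers = Map.of(
                "CamelAzureCosmosDbStatusCode", 201,
                "CamelAzureCosmosDbEtag", "\"0x1\"");
        System.out.println(isSuccess(headers)); // true
    }
}
```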
+ +## Azure CosmosDB Producer operations + +Camel Azure CosmosDB component provides a wide range of operations on +the producer side: + +**Operations on the service level** + +For these operations, `databaseName` is **required** except for +`queryDatabases` and `listDatabases` operations. + + ++++ + + + + + + + + + + + + + + + + + + + + +
+|Operation|Description|
+|---|---|
+|listDatabases|Gets a list of all databases as `List<CosmosDatabaseProperties>` set in the exchange message body.|
+|createDatabase|Creates a database in the specified Azure CosmosDB account.|
+|queryDatabases|**query is required.** Executes an SQL query against the service level, for example to return only a small subset of the databases list. It will set `List<CosmosDatabaseProperties>` in the exchange message body.|
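The `query` header for `queryDatabases` carries a Cosmos SQL string. A minimal sketch of building one; the helper and its naive quote-escaping are illustrative only, and parameterized queries should be preferred where the API allows them:

```java
public class CosmosQueryExample {

    // Sketch: build a Cosmos SQL query filtering databases by id.
    // Single quotes are doubled as naive escaping.
    static String databaseByIdQuery(String id) {
        return "SELECT * FROM c WHERE c.id = '" + id.replace("'", "''") + "'";
    }

    public static void main(String[] args) {
        // Would be set as the CamelAzureCosmosDbQuery header before the
        // queryDatabases operation.
        System.out.println(databaseByIdQuery("myDb"));
    }
}
```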
+ +**Operations on the database level** + +For these operations, `databaseName` is **required** for all operations +here and `containerName` only for `createContainer` and +`queryContainers`. + + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+|Operation|Description|
+|---|---|
+|deleteDatabase|Deletes a database from the Azure CosmosDB account.|
+|createContainer|Creates a container in the specified Azure CosmosDB database.|
+|replaceDatabaseThroughput|Replaces the throughput for the specified Azure CosmosDB database.|
+|listContainers|Gets a list of all containers in the specified database as `List<CosmosContainerProperties>` set in the exchange message body.|
+|queryContainers|**query is required.** Executes an SQL query against the database level, for example to return only a small subset of the container list for the specified database. It will set `List<CosmosContainerProperties>` in the exchange message body.|
+ +**Operations on the container level** + +For these operations, `databaseName` and `containerName` is **required** +for all operations here. + + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+|Operation|Description|
+|---|---|
+|deleteContainer|Deletes a container from the specified Azure CosmosDB database.|
+|replaceContainerThroughput|Replaces the throughput for the specified Azure CosmosDB container.|
+|createItem|**itemPartitionKey is required.** Creates an item in the specified container; accepts a POJO or key/value pairs as `Map<String, ?>`.|
+|upsertItem|**itemPartitionKey is required.** Creates an item in the specified container if it doesn’t exist, otherwise overwrites it; accepts a POJO or key/value pairs as `Map<String, ?>`.|
+|replaceItem|**itemPartitionKey and itemId are required.** Overwrites an item in the specified container; accepts a POJO or key/value pairs as `Map<String, ?>`.|
+|deleteItem|**itemPartitionKey and itemId are required.** Deletes an item in the specified container.|
+|readItem|**itemPartitionKey and itemId are required.** Gets an item in the specified container as `Map<String, ?>` set in the exchange message body.|
+|readAllItems|**itemPartitionKey is required.** Gets a list of items in the specified container per the itemPartitionKey as `List<Map<String, ?>>` set in the exchange message body.|
+|queryItems|**query is required.** Executes an SQL query against the container level, for example to return only matching items per the SQL query. It will set `List<Map<String, ?>>` in the exchange message body.|
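Each item is passed as a POJO or `Map<String, ?>`, and the `itemPartitionKey` header must hold the item's value at the container's partition key path. A plain-Java sketch of that relationship (the `partitionKeyValue` helper is illustrative, not part of the component, and handles only single-segment paths):

```java
import java.util.HashMap;
import java.util.Map;

public class ItemPartitionKeyExample {

    // Sketch: extract the partition key value from an item given a
    // single-segment partition key path such as "/partition".
    static Object partitionKeyValue(Map<String, ?> item, String path) {
        return item.get(path.startsWith("/") ? path.substring(1) : path);
    }

    public static void main(String[] args) {
        Map<String, Object> item = new HashMap<>();
        item.put("id", "test-id-1");
        item.put("partition", "test-1");
        item.put("field1", "awesome!");

        // This value is what you would set as CamelAzureCosmosDbItemPartitionKey.
        System.out.println(partitionKeyValue(item, "/partition")); // test-1
    }
}
```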
+ +Refer to the example section in this page to learn how to use these +operations into your camel application. + +### Examples + +- `listDatabases`: + + + + from("direct:start") + .to("azure-cosmosdb://?operation=listDatabases") + .to("mock:result"); + +- `createDatabase`: + + + + from("direct:start") + .process(exchange -> { + exchange.getIn().setHeader(CosmosDbConstants.DATABASE_NAME, "myDb"); + }) + .to("azure-cosmosdb://?operation=createDatabase") + .to("mock:result"); + +- `deleteDatabase`: + + + + from("direct:start") + .process(exchange -> { + exchange.getIn().setHeader(CosmosDbConstants.DATABASE_NAME, "myDb"); + }) + .to("azure-cosmosdb://?operation=deleteDatabase") + .to("mock:result"); + +- `createContainer`: + + + + from("direct:start") + .process(exchange -> { + exchange.getIn().setHeader(CosmosDbConstants.DATABASE_NAME, "databaseName"); + exchange.getIn().setHeader(CosmosDbConstants.CONTAINER_NAME, "containerName"); + exchange.getIn().setHeader(CosmosDbConstants.CONTAINER_PARTITION_KEY_PATH, "path"); + exchange.getIn().setHeader(CosmosDbConstants.CREATE_DATABASE_IF_NOT_EXIST, true); + }) + .to("azure-cosmosdb://?operation=createContainer") + .to("mock:result"); + +- `deleteContainer`: + + + + from("direct:start") + .process(exchange -> { + exchange.getIn().setHeader(CosmosDbConstants.DATABASE_NAME, "databaseName"); + exchange.getIn().setHeader(CosmosDbConstants.CONTAINER_NAME, "containerName"); + }) + .to("azure-cosmosdb://?operation=deleteContainer") + .to("mock:result"); + +- `replaceDatabaseThroughput`: + + + + from("direct:start") + .process(exchange -> { + exchange.getIn().setHeader(CosmosDbConstants.DATABASE_NAME, "databaseName"); + exchange.getIn().setHeader(CosmosDbConstants.THROUGHPUT_PROPERTIES, + ThroughputProperties.createManualThroughput(700)); + }) + .to("azure-cosmosdb://?operation=replaceDatabaseThroughput") + .to("mock:result"); + +- `queryContainers`: + + + + from("direct:start") + .process(exchange -> { + 
exchange.getIn().setHeader(CosmosDbConstants.DATABASE_NAME, "databaseName");
+            exchange.getIn().setHeader(CosmosDbConstants.QUERY, "SELECT * from c where c.id = 'myAwesomeContainer'");
+        })
+        .to("azure-cosmosdb://?operation=queryContainers")
+        .to("mock:result");
+
+- `createItem`:
+
+
+
+    from("direct:start")
+        .process(exchange -> {
+            // create item to send
+            final Map<String, Object> item = new HashMap<>();
+            item.put("id", "test-id-1");
+            item.put("partition", "test-1");
+            item.put("field1", "awesome!");
+
+            exchange.getIn().setHeader(CosmosDbConstants.DATABASE_NAME, "databaseName");
+            exchange.getIn().setHeader(CosmosDbConstants.CONTAINER_NAME, "containerName");
+            exchange.getIn().setHeader(CosmosDbConstants.CONTAINER_PARTITION_KEY_PATH, "partition");
+            exchange.getIn().setHeader(CosmosDbConstants.ITEM_PARTITION_KEY, "test-1");
+            exchange.getIn().setBody(item);
+        })
+        .to("azure-cosmosdb://?operation=createItem")
+        .to("mock:result");
+
+- `replaceItem`:
+
+
+
+    from("direct:start")
+        .process(exchange -> {
+            // create item to send
+            final Map<String, Object> item = new HashMap<>();
+            item.put("id", "test-id-1");
+            item.put("partition", "test-1");
+            item.put("field1", "awesome!");
+
+            exchange.getIn().setHeader(CosmosDbConstants.DATABASE_NAME, "databaseName");
+            exchange.getIn().setHeader(CosmosDbConstants.CONTAINER_NAME, "containerName");
+            exchange.getIn().setHeader(CosmosDbConstants.ITEM_PARTITION_KEY, "test-1");
+            exchange.getIn().setHeader(CosmosDbConstants.ITEM_ID, "test-id-1");
+            exchange.getIn().setBody(item);
+        })
+        .to("azure-cosmosdb://?operation=replaceItem")
+        .to("mock:result");
+
+- `deleteItem`:
+
+
+
+    from("direct:start")
+        .process(exchange -> {
+            exchange.getIn().setHeader(CosmosDbConstants.DATABASE_NAME, "databaseName");
+            exchange.getIn().setHeader(CosmosDbConstants.CONTAINER_NAME, "containerName");
+            exchange.getIn().setHeader(CosmosDbConstants.ITEM_PARTITION_KEY, "test-1");
+            exchange.getIn().setHeader(CosmosDbConstants.ITEM_ID, "test-id-1");
})
+        .to("azure-cosmosdb://?operation=deleteItem")
+        .to("mock:result");
+
+- `queryItems`:
+
+
+
+    from("direct:start")
+        .process(exchange -> {
+            exchange.getIn().setHeader(CosmosDbConstants.DATABASE_NAME, "databaseName");
+            exchange.getIn().setHeader(CosmosDbConstants.CONTAINER_NAME, "containerName");
+            exchange.getIn().setHeader(CosmosDbConstants.QUERY, "SELECT c.id,c.field2,c.field1 from c where c.id = 'test-id-1'");
+        })
+        .to("azure-cosmosdb://?operation=queryItems")
+        .to("mock:result");
+
+## Azure CosmosDB Consumer
+
+Camel Azure CosmosDB uses the [ChangeFeed
+pattern](https://docs.microsoft.com/en-us/azure/cosmos-db/change-feed-design-patterns)
+to capture a feed of events and feed them into Camel asynchronously,
+similar to the Change Data Capture (CDC) design pattern. However, it
+doesn’t capture deletions, as these are removed from the feed as well.
+
+To use the consumer, `containerName` and `databaseName` are required.
+There are also more options that can be set to use this feature:
+
+- `leaseDatabaseName`: Sets the lease database where the
+  `leaseContainerName` will be stored. If it is not specified, this
+  component will store the lease container in the same database that
+  is specified in databaseName. It will be auto-created if
+  `createLeaseDatabaseIfNotExists` is set to true.
+
+- `leaseContainerName`: Sets the lease container, which acts as state
+  storage and coordinates processing the change feed across multiple
+  workers. The lease container can be stored in the same account as
+  the monitored container or in a separate account. It will be
+  auto-created if `createLeaseContainerIfNotExists` is set to true. If
+  not specified, this component will create a container called
+  `camel-lease`.
+
+- `hostName`: Sets the hostname. A host is an application instance
+  that uses the change feed processor to listen for changes.
+  Multiple instances with the same lease configuration can run in
+  parallel, but each instance should have a different instance name.
+  If not specified, this will be a generated random hostname.
+
+- `changeFeedProcessorOptions`: Sets additional options for the change
+  feed processor.
+
+The consumer will set `List<Map<String, ?>>` in the exchange message
+body, which reflects the list of items in a single feed.
+
+### Example:
+
+For example, to listen to the events in the `myContainer` container in
+`myDb`:
+
+    from("azure-cosmosdb://myDb/myContainer?leaseDatabaseName=myLeaseDb&createLeaseDatabaseIfNotExists=true&createLeaseContainerIfNotExists=true")
+        .to("mock:result");
+
+## Development Notes (Important)
+
+When developing on this component, you will need to obtain your Azure
+accessKey in order to run the integration tests. In addition to the
+mocked unit tests, you **will need to run the integration tests with
+every change you make or even client upgrade as the Azure client can
+break things even on minor versions upgrade.** To run the integration
+tests, in this component directory, run the following Maven command:
+
+    mvn clean install -Dendpoint={{dbaddress}} -DaccessKey={{accessKey}}
+
+Whereby `endpoint` is your Azure CosmosDB endpoint name and `accessKey`
+is the access key generated from the Azure CosmosDB portal.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|clientTelemetryEnabled|Sets the flag to enable client telemetry which will periodically collect database operations aggregation statistics, system information like cpu/memory and send it to cosmos monitoring service, which will be helpful during debugging. DEFAULT value is false indicating this is opt in feature, by default no telemetry collection.|false|boolean|
+|configuration|The component configurations||object|
+|connectionSharingAcrossClientsEnabled|Enables connections sharing across multiple Cosmos Clients. The default is false.
When you have multiple instances of Cosmos Client in the same JVM interacting to multiple Cosmos accounts, enabling this allows connection sharing in Direct mode if possible between instances of Cosmos Client. Please note, when setting this option, the connection configuration (e.g., socket timeout config, idle timeout config) of the first instantiated client will be used for all other client instances.|false|boolean|
+|consistencyLevel|Sets the consistency levels supported for Azure Cosmos DB client operations in the Azure Cosmos DB service. The requested ConsistencyLevel must match or be weaker than that provisioned for the database account. Consistency levels by order of strength are STRONG, BOUNDED\_STALENESS, SESSION and EVENTUAL. Refer to consistency level documentation for additional details: https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels|SESSION|object|
+|containerPartitionKeyPath|Sets the container partition key path.||string|
+|contentResponseOnWriteEnabled|Sets the boolean to only return the headers and status code in the Cosmos DB response in case of Create, Update and Delete operations on CosmosItem. In the Consumer, it is enabled by default because the ChangeFeed in the consumer needs this flag to be enabled, and thus it shouldn't be overridden. In the Producer, it is advised to disable it since it reduces the network overhead|true|boolean|
+|cosmosAsyncClient|Inject an external CosmosAsyncClient into the component which provides a client-side logical representation of the Azure Cosmos DB service.
This asynchronous client is used to configure and execute requests against the service.||object| +|createContainerIfNotExists|Sets if the component should create Cosmos container automatically in case it doesn't exist in Cosmos database|false|boolean| +|createDatabaseIfNotExists|Sets if the component should create Cosmos database automatically in case it doesn't exist in Cosmos account|false|boolean| +|databaseEndpoint|Sets the Azure Cosmos database endpoint the component will connect to.||string| +|multipleWriteRegionsEnabled|Sets the flag to enable writes on any regions for geo-replicated database accounts in the Azure Cosmos DB service. When the value of this property is true, the SDK will direct write operations to available writable regions of geo-replicated database account. Writable regions are ordered by PreferredRegions property. Setting the property value to true has no effect until EnableMultipleWriteRegions in DatabaseAccount is also set to true. DEFAULT value is true indicating that writes are directed to available writable regions of geo-replicated database account.|true|boolean| +|preferredRegions|Sets the comma separated preferred regions for geo-replicated database accounts. For example, East US as the preferred region. When EnableEndpointDiscovery is true and PreferredRegions is non-empty, the SDK will prefer to use the regions in the container in the order they are specified to perform operations.||string| +|readRequestsFallbackEnabled|Sets whether to allow for reads to go to multiple regions configured on an account of Azure Cosmos DB service. DEFAULT value is true. If this property is not set, the default is true for all Consistency Levels other than Bounded Staleness, The default is false for Bounded Staleness. 1. endpointDiscoveryEnabled is true 2. 
the Azure Cosmos DB account has more than one region|true|boolean| +|throughputProperties|Sets throughput of the resources in the Azure Cosmos DB service.||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|changeFeedProcessorOptions|Sets the ChangeFeedProcessorOptions to be used. Unless specifically set the default values that will be used are: maximum items per page or FeedResponse: 100 lease renew interval: 17 seconds lease acquire interval: 13 seconds lease expiration interval: 60 seconds feed poll delay: 5 seconds maximum scale count: unlimited||object| +|createLeaseContainerIfNotExists|Sets if the component should create Cosmos lease container for the consumer automatically in case it doesn't exist in Cosmos database|false|boolean| +|createLeaseDatabaseIfNotExists|Sets if the component should create Cosmos lease database for the consumer automatically in case it doesn't exist in Cosmos account|false|boolean| +|hostName|Sets the hostname. The host: a host is an application instance that uses the change feed processor to listen for changes. Multiple instances with the same lease configuration can run in parallel, but each instance should have a different instance name. 
If not specified, this will be a generated random hostname.||string|
+|leaseContainerName|Sets the lease container which acts as a state storage and coordinates processing the change feed across multiple workers. The lease container can be stored in the same account as the monitored container or in a separate account. It will be auto created if createLeaseContainerIfNotExists is set to true.|camel-lease|string|
+|leaseDatabaseName|Sets the lease database where the leaseContainerName will be stored. If it is not specified, this component will store the lease container in the same database that is specified in databaseName. It will be auto created if createLeaseDatabaseIfNotExists is set to true.||string|
+|itemId|Sets the itemId, in case it is needed for an operation on an item, like delete or replace||string|
+|itemPartitionKey|Sets the partition key. Represents a partition key value in the Azure Cosmos DB database service. A partition key identifies the partition where the item is stored.||string|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|operation|The CosmosDB operation that can be used with this component on the producer.|listDatabases|object|
+|query|An SQL query to execute on a given resource.
To learn more about Cosmos SQL API, check this link {link https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getting-started}||string| +|queryRequestOptions|Set additional QueryRequestOptions that can be used with queryItems, queryContainers, queryDatabases, listDatabases, listItems, listContainers operations||object| +|indexingPolicy|The CosmosDB Indexing Policy that will be set in case of container creation, this option is related to createLeaseContainerIfNotExists and it will be taken into account when the latter is true.||object| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|accountKey|Sets either a master or readonly key used to perform authentication for accessing resource.||string| +|credentialType|Determines the credential strategy to adopt|SHARED\_ACCOUNT\_KEY|object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|databaseName|The name of the Cosmos database that component should connect to. In case you are producing data and have createDatabaseIfNotExists=true, the component will automatically auto create a Cosmos database.||string| +|containerName|The name of the Cosmos container that component should connect to. In case you are producing data and have createContainerIfNotExists=true, the component will automatically auto create a Cosmos container.||string| +|clientTelemetryEnabled|Sets the flag to enable client telemetry which will periodically collect database operations aggregation statistics, system information like cpu/memory and send it to cosmos monitoring service, which will be helpful during debugging. 
DEFAULT value is false indicating this is opt in feature, by default no telemetry collection.|false|boolean| +|connectionSharingAcrossClientsEnabled|Enables connections sharing across multiple Cosmos Clients. The default is false. When you have multiple instances of Cosmos Client in the same JVM interacting to multiple Cosmos accounts, enabling this allows connection sharing in Direct mode if possible between instances of Cosmos Client. Please note, when setting this option, the connection configuration (e.g., socket timeout config, idle timeout config) of the first instantiated client will be used for all other client instances.|false|boolean| +|consistencyLevel|Sets the consistency levels supported for Azure Cosmos DB client operations in the Azure Cosmos DB service. The requested ConsistencyLevel must match or be weaker than that provisioned for the database account. Consistency levels by order of strength are STRONG, BOUNDED\_STALENESS, SESSION and EVENTUAL. Refer to consistency level documentation for additional details: https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels|SESSION|object| +|containerPartitionKeyPath|Sets the container partition key path.||string| +|contentResponseOnWriteEnabled|Sets the boolean to only return the headers and status code in Cosmos DB response in case of Create, Update and Delete operations on CosmosItem. In Consumer, it is enabled by default because of the ChangeFeed in the consumer that needs this flag to be enabled and thus is shouldn't be overridden. In Producer, it advised to disable it since it reduces the network overhead|true|boolean| +|cosmosAsyncClient|Inject an external CosmosAsyncClient into the component which provides a client-side logical representation of the Azure Cosmos DB service. 
This asynchronous client is used to configure and execute requests against the service.||object| +|createContainerIfNotExists|Sets if the component should create Cosmos container automatically in case it doesn't exist in Cosmos database|false|boolean| +|createDatabaseIfNotExists|Sets if the component should create Cosmos database automatically in case it doesn't exist in Cosmos account|false|boolean| +|databaseEndpoint|Sets the Azure Cosmos database endpoint the component will connect to.||string| +|multipleWriteRegionsEnabled|Sets the flag to enable writes on any regions for geo-replicated database accounts in the Azure Cosmos DB service. When the value of this property is true, the SDK will direct write operations to available writable regions of geo-replicated database account. Writable regions are ordered by PreferredRegions property. Setting the property value to true has no effect until EnableMultipleWriteRegions in DatabaseAccount is also set to true. DEFAULT value is true indicating that writes are directed to available writable regions of geo-replicated database account.|true|boolean| +|preferredRegions|Sets the comma separated preferred regions for geo-replicated database accounts. For example, East US as the preferred region. When EnableEndpointDiscovery is true and PreferredRegions is non-empty, the SDK will prefer to use the regions in the container in the order they are specified to perform operations.||string| +|readRequestsFallbackEnabled|Sets whether to allow for reads to go to multiple regions configured on an account of Azure Cosmos DB service. DEFAULT value is true. If this property is not set, the default is true for all Consistency Levels other than Bounded Staleness, The default is false for Bounded Staleness. 1. endpointDiscoveryEnabled is true 2. 
the Azure Cosmos DB account has more than one region|true|boolean| +|throughputProperties|Sets throughput of the resources in the Azure Cosmos DB service.||object| +|changeFeedProcessorOptions|Sets the ChangeFeedProcessorOptions to be used. Unless specifically set the default values that will be used are: maximum items per page or FeedResponse: 100 lease renew interval: 17 seconds lease acquire interval: 13 seconds lease expiration interval: 60 seconds feed poll delay: 5 seconds maximum scale count: unlimited||object| +|createLeaseContainerIfNotExists|Sets if the component should create Cosmos lease container for the consumer automatically in case it doesn't exist in Cosmos database|false|boolean| +|createLeaseDatabaseIfNotExists|Sets if the component should create Cosmos lease database for the consumer automatically in case it doesn't exist in Cosmos account|false|boolean| +|hostName|Sets the hostname. The host: a host is an application instance that uses the change feed processor to listen for changes. Multiple instances with the same lease configuration can run in parallel, but each instance should have a different instance name. If not specified, this will be a generated random hostname.||string| +|leaseContainerName|Sets the lease container which acts as a state storage and coordinates processing the change feed across multiple workers. The lease container can be stored in the same account as the monitored container or in a separate account. It will be auto created if createLeaseContainerIfNotExists is set to true.|camel-lease|string| +|leaseDatabaseName|Sets the lease database where the leaseContainerName will be stored. If it is not specified, this component will store the lease container in the same database that is specified in databaseName. 
It will be auto created if createLeaseDatabaseIfNotExists is set to true.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|itemId|Sets the itemId in case needed for operation on item like delete, replace||string| +|itemPartitionKey|Sets partition key. Represents a partition key value in the Azure Cosmos DB database service. A partition key identifies the partition where the item is stored in.||string| +|operation|The CosmosDB operation that can be used with this component on the producer.|listDatabases|object| +|query|An SQL query to execute on a given resources. 
To learn more about Cosmos SQL API, check this link: https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getting-started||string|
+|queryRequestOptions|Set additional QueryRequestOptions that can be used with queryItems, queryContainers, queryDatabases, listDatabases, listItems, listContainers operations||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|indexingPolicy|The CosmosDB Indexing Policy that will be set in case of container creation, this option is related to createLeaseContainerIfNotExists and it will be taken into account when the latter is true.||object|
+|accountKey|Sets either a master or readonly key used to perform authentication for accessing resource.||string|
+|credentialType|Determines the credential strategy to adopt|SHARED\_ACCOUNT\_KEY|object|
diff --git a/camel-azure-eventhubs.md b/camel-azure-eventhubs.md
new file mode 100644
index 0000000000000000000000000000000000000000..e1825ff3366bf0018534024302f2273dd0e1de94
--- /dev/null
+++ b/camel-azure-eventhubs.md
@@ -0,0 +1,271 @@
+# Azure-eventhubs
+
+**Since Camel 3.5**
+
+**Both producer and consumer are supported**
+
+The Azure Event Hubs component is used to integrate [Azure Event
+Hubs](https://azure.microsoft.com/en-us/services/event-hubs/) using the
+[AMQP
+protocol](https://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol).
+Azure Event Hubs is a highly scalable publish-subscribe service that can
+ingest millions of events per second and stream them to multiple
+consumers.
+
+Besides the AMQP protocol, Event Hubs also supports the Kafka and
+HTTPS protocols. Therefore, you can also use the [Camel
+Kafka](#components::kafka-component.adoc) component to produce to and
+consume from Azure Event Hubs. You can learn more
+[here](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs).
+
+Prerequisites
+
+You must have a valid Azure Event Hubs account. More information
+is available at the [Azure Documentation
+Portal](https://docs.microsoft.com/azure/).
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-azure-eventhubs</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI Format
+
+    azure-eventhubs://[namespace/eventHubName][?options]
+
+In case you supply the `connectionString`, `namespace` and
+`eventHubName` are not required, as these options are already included
+in the `connectionString`.
+
+# Authentication Information
+
+You have three different credential types: AZURE\_IDENTITY,
+TOKEN\_CREDENTIAL and CONNECTION\_STRING. You can also provide a client
+instance yourself. To use this component, you have three options to
+provide the required Azure authentication information:
+
+**CONNECTION\_STRING**:
+
+- Provide `sharedAccessName` and `sharedAccessKey` for your Azure
+  Event Hubs account. The sharedAccessKey can be generated through
+  your Event Hubs Azure portal.
+
+- Provide the `connectionString`. If you provide the connection
+  string, you don't supply `namespace`, `eventHubName`,
+  `sharedAccessKey` and `sharedAccessName`, as this data is already
+  included in the `connectionString`; therefore, it is the simplest
+  option to get started. Learn more
+  [here](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-get-connection-string)
+  on how to generate the connection string.
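For instance, a minimal producer endpoint that relies only on the connection string could look like the following sketch (the `{{connectionString}}` property placeholder is illustrative; `RAW()` keeps Camel from URI-encoding the value, and `credentialType` already defaults to `CONNECTION_STRING`):

```
azure-eventhubs:?connectionString=RAW({{connectionString}})
```

Because the connection string already names the namespace and event hub, the path part before the `?` can stay empty.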
+
+**TOKEN\_CREDENTIAL**:
+
+- Provide an implementation of
+  `com.azure.core.credential.TokenCredential` in the Camel
+  Registry, e.g., created via the
+  `com.azure.identity.DefaultAzureCredentialBuilder().build()` API.
+  See the documentation [here about Azure-AD
+  authentication](https://docs.microsoft.com/en-us/azure/active-directory/authentication/overview-authentication).
+
+**AZURE\_IDENTITY**:
+
+- This will use a
+  `com.azure.identity.DefaultAzureCredentialBuilder().build()` instance,
+  following the Default Azure Credential Chain. See the
+  documentation [here about Azure-AD
+  authentication](https://docs.microsoft.com/en-us/azure/active-directory/authentication/overview-authentication).
+
+**Client instance**:
+
+- Provide an
+  [EventHubProducerAsyncClient](https://docs.microsoft.com/en-us/java/api/com.azure.messaging.eventhubs.eventhubproducerasyncclient?view=azure-java-stable)
+  instance via the `producerAsyncClient` option. However, this is
+  **only possible for the Camel producer**; for the Camel consumer, it
+  is not possible to inject the client due to a design constraint of
+  the `EventProcessorClient`.
+
+# Checkpoint Store Information
+
+A checkpoint store stores and retrieves partition ownership information
+and checkpoint details for each partition in a given consumer group of
+an event hub instance. Users are not meant to implement a
+`CheckpointStore`. Users are expected to choose an existing
+implementation of this interface, instantiate it, and pass it to the
+component through the `checkpointStore` option. Users are not expected
+to call any of the methods on a checkpoint store; these are used
+internally by the client.
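As a sketch of that wiring (assuming the Azure Storage Blob SDK and the blob checkpoint-store module are on the classpath; the bean name `myCheckpointStore` and the container setup are illustrative, not component API):

```java
// Build a blob container client for the container that will hold the lease/offset data.
BlobContainerAsyncClient leaseContainer = new BlobContainerClientBuilder()
        .connectionString(storageConnectionString) // your Storage account connection string
        .containerName("event-hub-checkpoints")
        .buildAsyncClient();

// Instantiate an existing CheckpointStore implementation and bind it to the Camel
// registry, so the endpoint can reference it as #myCheckpointStore.
camelContext.getRegistry().bind("myCheckpointStore",
        new BlobCheckpointStore(leaseContainer));

from("azure-eventhubs:namespace/eventHubName"
        + "?checkpointStore=#myCheckpointStore&connectionString=RAW({{connectionString}})")
    .log("Received: ${body}");
```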
+
+Having said that, if the user does not pass any `CheckpointStore`
+implementation, the component will fall back to using
+[`BlobCheckpointStore`](https://docs.microsoft.com/en-us/javascript/api/@azure/eventhubs-checkpointstore-blob/blobcheckpointstore?view=azure-node-latest)
+to store the checkpoint info in the Azure Blob Storage account. If you
+chose to use the default `BlobCheckpointStore`, you will need to supply
+the following options:
+
+- `blobAccountName`: Sets the Azure account name to be used for
+  authentication with Azure Blob services.
+
+- `blobAccessKey`: Sets the access key for the associated Azure
+  account name to be used for authentication with Azure Blob services.
+
+- `blobContainerName`: Sets the blob container that shall be used
+  by the BlobCheckpointStore to store the checkpoint offsets.
+
+# Async Consumer and Producer
+
+This component implements the async consumer and producer.
+
+This allows Camel routes to consume and produce events asynchronously
+without blocking any threads.
+
+# Usage
+
+For example, to consume events from Event Hubs, use the following snippet:
+
+    from("azure-eventhubs:/camel/camelHub?sharedAccessName=SASaccountName&sharedAccessKey=SASaccessKey&blobAccountName=accountName&blobAccessKey=accessKey&blobContainerName=containerName")
+        .to("file://queuedirectory");
+
+## Message body type
+
+The component's producer expects the data in the message body to be
+`byte[]`. This allows the user to utilize the Camel TypeConverter to
+marshal/unmarshal data with ease. The same goes for the
+component's consumer: it will set the encoded data as `byte[]` in the
+message body.
+
+## Automatic detection of EventHubProducerAsyncClient client in registry
+
+The component is capable of detecting the presence of an
+EventHubProducerAsyncClient bean in the registry. If it's the only
+instance of that type, it will be used as the client, and you won't have
+to define it as a URI parameter, like the example above.
This may be
+really useful for smarter configuration of the endpoint.
+
+## Consumer Example
+
+The example below will unmarshal the events that were originally
+produced in JSON:
+
+    from("azure-eventhubs:?connectionString=RAW({{connectionString}})&blobContainerName=containerTest&eventPosition=#eventPosition"
+         + "&blobAccountName={{blobAccountName}}&blobAccessKey=RAW({{blobAccessKey}})")
+        .unmarshal().json(JsonLibrary.Jackson)
+        .to(result);
+
+## Producer Example
+
+The example below will send events as Strings to Event Hubs:
+
+    from("direct:start")
+        .process(exchange -> {
+            exchange.getIn().setHeader(EventHubsConstants.PARTITION_ID, firstPartition);
+            exchange.getIn().setBody("test event");
+        })
+        .to("azure-eventhubs:?connectionString=RAW({{connectionString}})");
+
+The component also supports **aggregation** of messages by sending
+events as an **iterable** of either Exchanges/Messages or plain data
+(e.g., a list of Strings). For example:
+
+    from("direct:start")
+        .process(exchange -> {
+            final List<String> messages = new LinkedList<>();
+            messages.add("Test String Message 1");
+            messages.add("Test String Message 2");
+
+            exchange.getIn().setHeader(EventHubsConstants.PARTITION_ID, firstPartition);
+            exchange.getIn().setBody(messages);
+        })
+        .to("azure-eventhubs:?connectionString=RAW({{connectionString}})");
+
+## Azure-AD Authentication example
+
+The example below makes use of Azure-AD authentication.
See
+[here](https://docs.microsoft.com/en-us/java/api/overview/azure/identity-readme?view=azure-java-stable#environment-variables)
+for the environment variables you need to set for this to work:
+
+    @BindToRegistry("myTokenCredential")
+    public com.azure.core.credential.TokenCredential myTokenCredential() {
+        return com.azure.identity.DefaultAzureCredentialBuilder().build();
+    }
+
+    from("direct:start")
+        .to("azure-eventhubs:namespace/eventHubName?tokenCredential=#myTokenCredential&credentialType=TOKEN_CREDENTIAL");
+
+## Development Notes (Important)
+
+When developing on this component, you will need to obtain your Azure
+accessKey to run the integration tests. In addition to the mocked unit
+tests, you **will need to run the integration tests with every change
+you make, or even a client upgrade, as the Azure client can break things
+even on minor version upgrades.** To run the integration tests, run the
+following Maven command in this component's directory:
+
+    mvn verify -DconnectionString=string -DblobAccountName=blob -DblobAccessKey=key
+
+Here, `blobAccountName` is your Azure account name, `blobAccessKey` is
+the access key generated from the Azure portal, and `connectionString`
+is the Event Hubs connection string.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|amqpRetryOptions|Sets the retry policy for EventHubAsyncClient. If not specified, the default retry options are used.||object|
+|amqpTransportType|Sets the transport type by which all the communication with Azure Event Hubs occurs.
Default value is AmqpTransportType#AMQP.|AMQP|object| +|configuration|The component configurations||object| +|blobAccessKey|In case you chose the default BlobCheckpointStore, this sets access key for the associated azure account name to be used for authentication with azure blob services.||string| +|blobAccountName|In case you chose the default BlobCheckpointStore, this sets Azure account name to be used for authentication with azure blob services.||string| +|blobContainerName|In case you chose the default BlobCheckpointStore, this sets the blob container that shall be used by the BlobCheckpointStore to store the checkpoint offsets.||string| +|blobStorageSharedKeyCredential|In case you chose the default BlobCheckpointStore, StorageSharedKeyCredential can be injected to create the azure client, this holds the important authentication information.||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|checkpointBatchSize|Sets the batch size between each checkpoint updates. Works jointly with checkpointBatchTimeout.|500|integer| +|checkpointBatchTimeout|Sets the batch timeout between each checkpoint updates. 
Works jointly with checkpointBatchSize.|5000|integer| +|checkpointStore|Sets the CheckpointStore the EventProcessorClient will use for storing partition ownership and checkpoint information. Users can, optionally, provide their own implementation of CheckpointStore which will store ownership and checkpoint information. By default it set to use com.azure.messaging.eventhubs.checkpointstore.blob.BlobCheckpointStore which stores all checkpoint offsets into Azure Blob Storage.|BlobCheckpointStore|object| +|consumerGroupName|Sets the name of the consumer group this consumer is associated with. Events are read in the context of this group. The name of the consumer group that is created by default is {code $Default}.|$Default|string| +|eventPosition|Sets the map containing the event position to use for each partition if a checkpoint for the partition does not exist in CheckpointStore. This map is keyed off of the partition id. If there is no checkpoint in CheckpointStore and there is no entry in this map, the processing of the partition will start from {link EventPosition#latest() latest} position.||object| +|prefetchCount|Sets the count used by the receiver to control the number of events the Event Hub consumer will actively receive and queue locally without regard to whether a receive operation is currently active.|500|integer| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|partitionId|Sets the identifier of the Event Hub partition that the events will be sent to. If the identifier is not specified, the Event Hubs service will be responsible for routing events that are sent to an available partition.||string| +|partitionKey|Sets a hashing key to be provided for the batch of events, which instructs the Event Hubs service to map this key to a specific partition. The selection of a partition is stable for a given partition hashing key. Should any other batches of events be sent using the same exact partition hashing key, the Event Hubs service will route them all to the same partition. This should be specified only when there is a need to group events by partition, but there is flexibility into which partition they are routed. If ensuring that a batch of events is sent only to a specific partition, it is recommended that the {link #setPartitionId(String) identifier of the position be specified directly} when sending the batch.||string| +|producerAsyncClient|Sets the EventHubProducerAsyncClient.An asynchronous producer responsible for transmitting EventData to a specific Event Hub, grouped together in batches. Depending on the options specified when creating an {linkEventDataBatch}, the events may be automatically routed to an available partition or specific to a partition. Use by this component to produce the data in camel producer.||object| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|connectionString|Instead of supplying namespace, sharedAccessKey, sharedAccessName ... etc, you can just supply the connection string for your eventHub. The connection string for EventHubs already include all the necessary information to connection to your EventHub. To learn on how to generate the connection string, take a look at this documentation: https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-get-connection-string||string| +|credentialType|Determines the credential strategy to adopt|CONNECTION\_STRING|object| +|sharedAccessKey|The generated value for the SharedAccessName.||string| +|sharedAccessName|The name you chose for your EventHubs SAS keys.||string| +|tokenCredential|Still another way of authentication (beside supplying namespace, sharedAccessKey, sharedAccessName or connection string) is through Azure-AD authentication using an implementation instance of TokenCredential.||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|namespace|EventHubs namespace created in Azure Portal.||string| +|eventHubName|EventHubs name under a specific namespace.||string| +|amqpRetryOptions|Sets the retry policy for EventHubAsyncClient. If not specified, the default retry options are used.||object| +|amqpTransportType|Sets the transport type by which all the communication with Azure Event Hubs occurs. 
Default value is AmqpTransportType#AMQP.|AMQP|object| +|blobAccessKey|In case you chose the default BlobCheckpointStore, this sets access key for the associated azure account name to be used for authentication with azure blob services.||string| +|blobAccountName|In case you chose the default BlobCheckpointStore, this sets Azure account name to be used for authentication with azure blob services.||string| +|blobContainerName|In case you chose the default BlobCheckpointStore, this sets the blob container that shall be used by the BlobCheckpointStore to store the checkpoint offsets.||string| +|blobStorageSharedKeyCredential|In case you chose the default BlobCheckpointStore, StorageSharedKeyCredential can be injected to create the azure client, this holds the important authentication information.||object| +|checkpointBatchSize|Sets the batch size between each checkpoint updates. Works jointly with checkpointBatchTimeout.|500|integer| +|checkpointBatchTimeout|Sets the batch timeout between each checkpoint updates. Works jointly with checkpointBatchSize.|5000|integer| +|checkpointStore|Sets the CheckpointStore the EventProcessorClient will use for storing partition ownership and checkpoint information. Users can, optionally, provide their own implementation of CheckpointStore which will store ownership and checkpoint information. By default it set to use com.azure.messaging.eventhubs.checkpointstore.blob.BlobCheckpointStore which stores all checkpoint offsets into Azure Blob Storage.|BlobCheckpointStore|object| +|consumerGroupName|Sets the name of the consumer group this consumer is associated with. Events are read in the context of this group. The name of the consumer group that is created by default is {code $Default}.|$Default|string| +|eventPosition|Sets the map containing the event position to use for each partition if a checkpoint for the partition does not exist in CheckpointStore. This map is keyed off of the partition id. 
If there is no checkpoint in CheckpointStore and there is no entry in this map, the processing of the partition will start from {link EventPosition#latest() latest} position.||object| +|prefetchCount|Sets the count used by the receiver to control the number of events the Event Hub consumer will actively receive and queue locally without regard to whether a receive operation is currently active.|500|integer| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|partitionId|Sets the identifier of the Event Hub partition that the events will be sent to. If the identifier is not specified, the Event Hubs service will be responsible for routing events that are sent to an available partition.||string| +|partitionKey|Sets a hashing key to be provided for the batch of events, which instructs the Event Hubs service to map this key to a specific partition. 
The selection of a partition is stable for a given partition hashing key. Should any other batches of events be sent using the same exact partition hashing key, the Event Hubs service will route them all to the same partition. This should be specified only when there is a need to group events by partition, but there is flexibility into which partition they are routed. If ensuring that a batch of events is sent only to a specific partition, it is recommended that the {link #setPartitionId(String) identifier of the position be specified directly} when sending the batch.||string| +|producerAsyncClient|Sets the EventHubProducerAsyncClient.An asynchronous producer responsible for transmitting EventData to a specific Event Hub, grouped together in batches. Depending on the options specified when creating an {linkEventDataBatch}, the events may be automatically routed to an available partition or specific to a partition. Use by this component to produce the data in camel producer.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|connectionString|Instead of supplying namespace, sharedAccessKey, sharedAccessName ... etc, you can just supply the connection string for your eventHub. The connection string for EventHubs already include all the necessary information to connection to your EventHub. 
To learn how to generate the connection string, see this documentation: https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-get-connection-string||string|
+|credentialType|Determines the credential strategy to adopt|CONNECTION\_STRING|object|
+|sharedAccessKey|The generated value for the SharedAccessName.||string|
+|sharedAccessName|The name you chose for your EventHubs SAS keys.||string|
+|tokenCredential|Another way of authenticating (besides supplying namespace, sharedAccessKey, sharedAccessName, or a connection string) is Azure AD authentication using an implementation instance of TokenCredential.||object|
diff --git a/camel-azure-files.md b/camel-azure-files.md
new file mode 100644
index 0000000000000000000000000000000000000000..de9eb4d2be6791ea36977b55851e7d99d46fac21
--- /dev/null
+++ b/camel-azure-files.md
@@ -0,0 +1,430 @@
+# Azure-files
+
+**Since Camel 3.22**
+
+**Both producer and consumer are supported**
+
+This component provides access to Azure Files.
+
+This is a preview component; therefore, anything can change in future
+releases (features and behavior can be changed, modified, or even
+dropped without notice). At the same time, it is consolidated enough,
+sparingly documented, a few users have reported it working in their
+environments, and it is ready for wider feedback.
+
+When consuming from a remote files server, make sure you read the
+section titled *Consuming Files* further below for details related to
+consuming files.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-azure-files</artifactId>
+        <version>x.y.z</version>
+    </dependency>
+
+# Endpoint URI Format
+
+    azure-files://account[.file.core.windows.net][:port]/share[/directory]
+
+Where **directory** represents the underlying directory. The directory
+is a relative path and does not include the share name. The relative
+path can contain nested folders, such as `inbox/spam`. It defaults to
+the share root directory.
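As a sketch only (the account name `myaccount`, share `myshare`, and the `{{sharedKey}}` property placeholder are hypothetical), a consumer route for a nested directory might look like:

```java
// Illustrative sketch: names and credentials are placeholders, not real values.
import org.apache.camel.builder.RouteBuilder;

public class AzureFilesConsumerRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Consume files from the share's inbox directory and log each one.
        from("azure-files://myaccount.file.core.windows.net/myshare/inbox"
                + "?credentialType=SHARED_ACCOUNT_KEY&sharedKey=RAW({{sharedKey}})")
            .log("Downloaded file ${file:name}");
    }
}
```

The `RAW()` wrapper keeps the key from being URI-decoded; the directory part (`inbox`) is relative to the share root, per the format above.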
+
+The `autoCreate` option is supported for the directory; when the
+consumer or producer starts, an additional operation is performed to
+create the directory configured for the endpoint. The default value for
+`autoCreate` is `true`. In contrast, the share must exist; it is not
+automatically created.
+
+If no **port** number is provided, Camel will use the default value
+according to the protocol (https: 443).
+
+You can append query options to the URI in the following format:
+`?option=value&option2=value&...`.
+
+To use this component, you have multiple options for providing the
+required Azure authentication information:
+
+- Via Azure Identity, when specifying `credentialType=AZURE_IDENTITY`
+  and providing the required [environment
+  variables](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/identity/azure-identity#environment-variables).
+  This enables service principal (e.g. app registration)
+  authentication with secret/certificate as well as username/password.
+
+- Via shared storage account key, when specifying
+  `credentialType=SHARED_ACCOUNT_KEY` and providing the `sharedKey` for
+  your Azure account. This is the simplest way to get started. The
+  sharedKey can be generated through your Azure portal.
+
+- Via Azure SAS, when specifying `credentialType=AZURE_SAS` and
+  providing a SAS token through the `token` parameter.
+
+## Endpoint URI Examples
+
+    azure-files://camelazurefiles.file.core.windows.net/samples?sv=2022-11-02&ss=f&srt=sco&sp=rwdlc&se=2023-06-18T22:29:13Z&st=2023-06-05T14:29:13Z&spr=https&sig=MPsMh8zci0v3To7IT9SKdaFGZV8ezno63m9C8s9bdVQ%3D
+
+    azure-files://camelazurefiles/samples/inbox/spam?sharedKey=FAKE502UyuBD...3Z%2BASt9dCmJg%3D%3D&delete=true
+
+# Paths
+
+The path separator is `/`. Absolute paths start with the path
+separator. Absolute paths do not include the share name, and they are
+relative to the share root rather than to the endpoint starting
+directory.
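To illustrate these path rules, here is a sketch (account, share, and the `{{sharedKey}}` placeholder are hypothetical) contrasting a relative and an absolute `move` target:

```java
// Illustrative sketch: names and credentials are placeholders.
import org.apache.camel.builder.RouteBuilder;

public class AzureFilesPathsRoute extends RouteBuilder {
    @Override
    public void configure() {
        // move=.done is a relative path, resolved against the consumed
        // file's directory; an absolute target such as move=/archive would
        // resolve against the share root (and never includes the share name).
        from("azure-files://myaccount/myshare/inbox"
                + "?sharedKey=RAW({{sharedKey}})&move=.done")
            .to("file:/tmp/downloads");
    }
}
```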
+
+**NOTE:** In some places, namely in the logs of the underlying
+libraries, an OS-specific path separator appears, and the relative
+paths are relative to the share root (rather than to the current
+working directory or to the endpoint starting directory), so interpret
+them with a grain of salt.
+
+# Concurrency
+
+This component does not support concurrency on its endpoints.
+
+# More Information
+
+This component mimics the FTP component, so there are more samples and
+details on the FTP component page.
+
+This component uses the Azure Java SDK libraries for the actual work.
+
+# Consuming Files
+
+The remote consumer will by default leave the consumed files untouched
+on the remote cloud files server. You have to configure it explicitly
+if you want it to delete the files or move them to another location.
+For example, you can use `delete=true` to delete the files, or use
+`move=.done` to move the files into a `.done` subdirectory.
+
+In Camel, the `.`-prefixed folders are excluded from recursive polling.
+
+The regular File consumer is different, as it will by default move
+files to a `.camel` subdirectory. The reason Camel does **not** do this
+by default for the remote consumer is that it may lack the permissions
+to move or delete files.
+
+## Body Type Options
+
+For each matching file, the consumer sends to the Camel exchange a
+message with a selected body type:
+
+- `byte[]` by default
+
+- `java.io.InputStream` if `streamDownload=true` is configured
+
+- `java.io.File` if `localWorkDirectory` is configured
+
+The body type configuration should be tuned to fit available resources,
+performance targets, route processors, caching, resuming, etc.
+
+## Limitations
+
+The option **readLock** can be used to force Camel **not** to consume
+files that are currently being written. However, this option is turned
+off by default, as it requires that the user has write access. See the
+endpoint options table for more details about read locks.
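For example, a consumer that waits until a file stops changing before picking it up could be sketched as follows (account, share, and the `{{sharedKey}}` placeholder are hypothetical):

```java
// Illustrative sketch: names and credentials are placeholders.
import org.apache.camel.builder.RouteBuilder;

public class AzureFilesReadLockRoute extends RouteBuilder {
    @Override
    public void configure() {
        // readLock=changed watches file length/last-modified until stable;
        // readLockCheckInterval raises the check interval for slow writers.
        from("azure-files://myaccount/myshare/inbox"
                + "?sharedKey=RAW({{sharedKey}})"
                + "&readLock=changed&readLockCheckInterval=10000")
            .log("Consumed ${file:name}");
    }
}
```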
+There are other solutions to avoid consuming files that are currently
+being written; for instance, you can write to a temporary destination
+and move the file after it has been written.
+
+The `readLock=changed` strategy relies only on the last-modified
+timestamp; furthermore, a precision finer than 5 seconds might be
+problematic.
+
+When moving files using the `move` or `preMove` option, the files are
+restricted to the share. That prevents the consumer from moving files
+outside the endpoint share.
+
+## Exchange Properties
+
+The consumer sets the following exchange properties:
+
+|Header|Description|
+|---|---|
+|CamelBatchIndex|The current index out of the total number of files being consumed in this batch.|
+|CamelBatchSize|The total number of files being consumed in this batch.|
+|CamelBatchComplete|True if there are no more files in this batch.|
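These exchange properties can be inspected from within a route; for instance (a sketch with hypothetical endpoint names):

```java
// Illustrative sketch: names and credentials are placeholders.
import org.apache.camel.builder.RouteBuilder;

public class AzureFilesBatchRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("azure-files://myaccount/myshare/inbox?sharedKey=RAW({{sharedKey}})")
            // Log progress through the current poll batch.
            .log("File ${file:name} is ${exchangeProperty.CamelBatchIndex}"
                 + " of ${exchangeProperty.CamelBatchSize}"
                 + " (complete: ${exchangeProperty.CamelBatchComplete})");
    }
}
```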
+ +# Producing Files + +The Files producer is optimized for two body types: + +- `java.io.InputStream` if `CamelFileLength` header is set + +- `byte[]` + +In either case, the remote file size is allocated and then rewritten +with body content. Any inconsistency between declared file length and +stream length results in a corrupted remote file. + +## Limitations + +The underlying Azure Files service does not allow growing files. The +file length must be known at its creation time, consequently: + +- `CamelFileLength` header has an important meaning even for + producers. + +- No appending mode is supported. + +# About Timeouts + +You can use the `connectTimeout` option to set a timeout in millis to +connect or disconnect. + +The `timeout` option only applies as the data timeout in millis. + +The meta-data operations timeout is minimum of: `readLockCheckInterval`, +`timeout` and 20\_000 millis. + +For now, the file upload has no timeout. During the upload, the +underlying library could log timeout warnings. They are recoverable and +upload could continue. + +# Using Local Work Directory + +Camel supports consuming from remote files servers and downloading the +files directly into a local work directory. This avoids reading the +entire remote file content into memory as it is streamed directly into +the local file using `FileOutputStream`. + +Camel will store to a local file with the same name as the remote file, +though with `.inprogress` as an extension while the file is being +downloaded. Afterward, the file is renamed to remove the `.inprogress` +suffix. And finally, when the Exchange is complete, the local file is +deleted. + +So if you want to download files from a remote files server and store it +as local files, then you need to route to a file endpoint such as: + + from("azure-files://...&localWorkDirectory=/tmp").to("file://inbox"); + +The route above is ultra efficient as it avoids reading the entire file +content into memory. 
It will download the remote file directly to a
+local file stream. The `java.io.File` handle is then used as the
+Exchange body. The file producer leverages this fact and can work
+directly on the work file `java.io.File` handle and perform a
+`java.io.File.rename` to the target filename. As Camel knows it’s a
+local work file, it can optimize and use a rename instead of a file
+copy, as the work file is meant to be deleted anyway.
+
+# Custom Filtering
+
+Camel supports pluggable filtering strategies. The strategy is to use
+the built-in `org.apache.camel.component.file.GenericFileFilter` in
+Java. You can then configure the endpoint with such a filter to skip
+certain files before they are processed.
+
+In the sample, we have built our own filter that only accepts files
+whose filename starts with `report`.
+
+We can then configure our route using the **filter** attribute to
+reference our filter (using `#` notation) that we have defined in the
+Spring XML file.
+
+The `accept(file)` file argument has these properties:
+
+- endpoint path: the share name, such as `/samples`
+
+- relative path: a path to the file, such as `subdir/a file`
+
+- directory: `true` if a directory
+
+- file length: if not a directory, the length of the file in bytes
+
+# Filtering using ANT path matcher
+
+The ANT path matcher is a filter shipped out-of-the-box in the
+**camel-spring** jar, so you need to depend on **camel-spring** if you
+are using Maven.
+The reason is that we leverage Spring’s
+[AntPathMatcher](http://static.springsource.org/spring/docs/3.0.x/api/org/springframework/util/AntPathMatcher.html)
+to do the actual matching.
+ +The file paths are matched with the following rules: + +- `?` matches one character + +- `*` matches zero or more characters + +- `**` matches zero or more directories in a path + +The sample below demonstrates how to use it: + + from("azure-files://...&antInclude=**/*.txt").to("..."); + +# Using a Proxy + +Consult the [underlying +library](https://learn.microsoft.com/en-us/azure/developer/java/sdk/proxying) +documentation. + +# Consuming a single file using a fixed name + +Unlike FTP component that features a special combination of options: + +- `useList=false` + +- `fileName=myFileName.txt` + +- `ignoreFileNotFoundOrPermissionError=true` + +to optimize *the single file using a fixed name* use case, it is +necessary to fall back to regular filters (i.e. the list permission is +needed). + +# Debug logging + +This component has log level **TRACE** that can be helpful if you have +problems. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|account|The account to use||string| +|share|The share to use||string| +|port|Port of the FTP server||integer| +|directoryName|The starting directory||string| +|credentialType|Determines the credential strategy to adopt|SHARED\_ACCOUNT\_KEY|object| +|disconnect|Whether or not to disconnect from remote FTP server right after use. Disconnect will only disconnect the current connection to the FTP server. 
If you have a consumer which you want to stop, then you need to stop the consumer/route instead.|false|boolean| +|doneFileName|Producer: If provided, then Camel will write a 2nd done file when the original file has been written. The done file will be empty. This option configures what file name to use. Either you can specify a fixed name. Or you can use dynamic placeholders. The done file will always be written in the same folder as the original file. Consumer: If provided, Camel will only consume files if a done file exists. This option configures what file name to use. Either you can specify a fixed name. Or you can use dynamic placeholders.The done file is always expected in the same folder as the original file. Only ${file.name} and ${file.name.next} is supported as dynamic placeholders.||string| +|fileName|Use Expression such as File Language to dynamically set the filename. For consumers, it's used as a filename filter. For producers, it's used to evaluate the filename to write. If an expression is set, it take precedence over the CamelFileName header. (Note: The header itself can also be an Expression). The expression options support both String and Expression types. If the expression is a String type, it is always evaluated using the File Language. If the expression is an Expression type, the specified Expression type is used - this allows you, for instance, to use OGNL expressions. For the consumer, you can use it to filter filenames, so you can for instance consume today's file using the File Language syntax: mydata-${date:now:yyyyMMdd}.txt. 
The producers support the CamelOverruleFileName header which takes precedence over any existing CamelFileName header; the CamelOverruleFileName is a header that is used only once, and makes it easier as this avoids to temporary store CamelFileName and have to restore it afterwards.||string| +|sharedKey|Shared key (storage account key)||string| +|delete|If true, the file will be deleted after it is processed successfully.|false|boolean| +|moveFailed|Sets the move failure expression based on Simple language. For example, to move files into a .error subdirectory use: .error. Note: When moving the files to the fail location Camel will handle the error and will not pick up the file again.||string| +|noop|If true, the file is not moved or deleted in any way. This option is good for readonly data, or for ETL type requirements. If noop=true, Camel will set idempotent=true as well, to avoid consuming the same files over and over again.|false|boolean| +|preMove|Expression (such as File Language) used to dynamically set the filename when moving it before processing. For example to move in-progress files into the order directory set this value to order.||string| +|preSort|When pre-sort is enabled then the consumer will sort the file and directory names during polling, that was retrieved from the file system. You may want to do this in case you need to operate on the files in a sorted order. The pre-sort is executed before the consumer starts to filter, and accept files to process by Camel. This option is default=false meaning disabled.|false|boolean| +|recursive|If a directory, will look for files in all the sub-directories as well.|false|boolean| +|resumeDownload|Configures whether resume download is enabled. 
In addition the options localWorkDirectory must be configured so downloaded files are stored in a local directory, which is required to support resuming of downloads.|false|boolean| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|streamDownload|Sets the download method to use when not using a local working directory. If set to true, the remote files are streamed to the route as they are read. When set to false, the remote files are loaded into memory before being sent into the route. If enabling this option then you must set stepwise=false as both cannot be enabled at the same time.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|download|Whether the FTP consumer should download the file. If this option is set to false, then the message body will be null, but the consumer will still trigger a Camel Exchange that has details about the file such as file name, file size, etc. It's just that the file will not be downloaded.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|inProgressRepository|A pluggable in-progress repository org.apache.camel.spi.IdempotentRepository. The in-progress repository is used to account the current in progress files being consumed. By default a memory based repository is used.||object| +|localWorkDirectory|When consuming, a local work directory can be used to store the remote file content directly in local files, to avoid loading the content into memory. This is beneficial, if you consume a very big remote file and thus can conserve memory.||string| +|onCompletionExceptionHandler|To use a custom org.apache.camel.spi.ExceptionHandler to handle any thrown exceptions that happens during the file on completion process where the consumer does either a commit or rollback. The default implementation will log any exception at WARN level and ignore.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|processStrategy|A pluggable org.apache.camel.component.file.GenericFileProcessStrategy allowing you to implement your own readLock option or similar. Can also be used when special conditions must be met before a file can be consumed, such as a special ready file exists. If this option is set then the readLock option does not apply.||object| +|checksumFileAlgorithm|If provided, then Camel will write a checksum file when the original file has been written. The checksum file will contain the checksum created with the provided algorithm for the original file. 
The checksum file will always be written in the same folder as the original file.||string| +|fileExist|What to do if a file already exists with the same name. Override, which is the default, replaces the existing file. - Append - adds content to the existing file. - Fail - throws a GenericFileOperationException, indicating that there is already an existing file. - Ignore - silently ignores the problem and does not override the existing file, but assumes everything is okay. - Move - option requires to use the moveExisting option to be configured as well. The option eagerDeleteTargetFile can be used to control what to do if an moving the file, and there exists already an existing file, otherwise causing the move operation to fail. The Move option will move any existing files, before writing the target file. - TryRename is only applicable if tempFileName option is in use. This allows to try renaming the file from the temporary name to the actual name, without doing any exists check. This check may be faster on some file systems and especially FTP servers.|Override|object| +|flatten|Flatten is used to flatten the file name path to strip any leading paths, so it's just the file name. This allows you to consume recursively into sub-directories, but when you eg write the files to another directory they will be written in a single directory. Setting this to true on the producer enforces that any file name in CamelFileName header will be stripped for any leading paths.|false|boolean| +|jailStartingDirectory|Used for jailing (restricting) writing files to the starting directory (and sub) only. This is enabled by default to not allow Camel to write files to outside directories (to be more secured out of the box). 
You can turn this off to allow writing files to directories outside the starting directory, such as parent or root folders.|true|boolean| +|tempFileName|The same as tempPrefix option but offering a more fine grained control on the naming of the temporary filename as it uses the File Language. The location for tempFilename is relative to the final file location in the option 'fileName', not the target directory in the base uri. For example if option fileName includes a directory prefix: dir/finalFilename then tempFileName is relative to that subdirectory dir.||string| +|tempPrefix|This option is used to write the file using a temporary name and then, after the write is complete, rename it to the real name. Can be used to identify files being written and also avoid consumers (not using exclusive read locks) reading in progress files. Is often used by FTP when uploading big files.||string| +|allowNullBody|Used to specify if a null body is allowed during file writing. If set to true then an empty file will be created, when set to false, and attempting to send a null body to the file component, a GenericFileWriteException of 'Cannot write null body to file.' will be thrown. If the fileExist option is set to 'Override', then the file will be truncated, and if set to append the file will remain unchanged.|false|boolean| +|disconnectOnBatchComplete|Whether or not to disconnect from remote FTP server right after a Batch upload is complete. disconnectOnBatchComplete will only disconnect the current connection to the FTP server.|false|boolean| +|eagerDeleteTargetFile|Whether or not to eagerly delete any existing target file. This option only applies when you use fileExists=Override and the tempFileName option as well. You can use this to disable (set it to false) deleting the target file before the temp file is written. For example you may write big files and want the target file to exists during the temp file is being written. 
This ensures the target file is only deleted at the very last moment, just before the temp file is renamed to the target filename. This option is also used to control whether to delete any existing files when fileExist=Move is enabled and an existing file exists. If this option is false, then an exception will be thrown if an existing file exists; if it is true, then the existing file is deleted before the move operation.|true|boolean|
+|keepLastModified|Will keep the last modified timestamp from the source file (if any). Will use the FileConstants.FILE\_LAST\_MODIFIED header to locate the timestamp. This header can contain either a java.util.Date or a long with the timestamp. If the timestamp exists and the option is enabled, it will set this timestamp on the written file. Note: This option only applies to the file producer. You cannot use this option with any of the ftp producers.|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazily (on the first message). By starting lazily you can use this to allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time.|false|boolean|
+|moveExistingFileStrategy|Strategy (Custom Strategy) used to move a file with a special naming token when fileExist=Move is configured. By default, there is an implementation used if no custom strategy is provided.||object|
+|autoCreate|Automatically create missing directories in the file's pathname. For the file consumer, that means creating the starting directory.
For the file producer, it means the directory the files should be written to.|true|boolean| +|connectTimeout|Sets the connect timeout for waiting for a connection to be established Used by both FTPClient and JSCH|10000|duration| +|maximumReconnectAttempts|Specifies the maximum reconnect attempts Camel performs when it tries to connect to the remote FTP server. Use 0 to disable this behavior.||integer| +|reconnectDelay|Delay in millis Camel will wait before performing a reconnect attempt.|1000|duration| +|throwExceptionOnConnectFailed|Should an exception be thrown if connection failed (exhausted)By default exception is not thrown and a WARN is logged. You can use this to enable exception being thrown and handle the thrown exception from the org.apache.camel.spi.PollingConsumerPollStrategy rollback method.|false|boolean| +|timeout|Sets the data timeout for waiting for reply Used only by FTPClient|30000|duration| +|antExclude|Ant style filter exclusion. If both antInclude and antExclude are used, antExclude takes precedence over antInclude. Multiple exclusions may be specified in comma-delimited format.||string| +|antFilterCaseSensitive|Sets case sensitive flag on ant filter.|true|boolean| +|antInclude|Ant style filter inclusion. Multiple inclusions may be specified in comma-delimited format.||string| +|eagerMaxMessagesPerPoll|Allows for controlling whether the limit from maxMessagesPerPoll is eager or not. If eager then the limit is during the scanning of files. Where as false would scan all files, and then perform sorting. Setting this option to false allows for sorting all files first, and then limit the poll. Mind that this requires a higher memory usage as all file details are in memory to perform the sorting.|true|boolean| +|exclude|Is used to exclude files, if filename matches the regex pattern (matching is case in-sensitive). 
Notice if you use symbols such as plus sign and others you would need to configure this using the RAW() syntax if configuring this as an endpoint uri. See more details at configuring endpoint uris||string| +|excludeExt|Is used to exclude files matching file extension name (case insensitive). For example to exclude bak files, then use excludeExt=bak. Multiple extensions can be separated by comma, for example to exclude bak and dat files, use excludeExt=bak,dat. Note that the file extension includes all parts, for example having a file named mydata.tar.gz will have extension as tar.gz. For more flexibility then use the include/exclude options.||string| +|filter|Pluggable filter as a org.apache.camel.component.file.GenericFileFilter class. Will skip files if filter returns false in its accept() method.||object| +|filterDirectory|Filters the directory based on Simple language. For example to filter on current date, you can use a simple date pattern such as ${date:now:yyyMMdd}||string| +|filterFile|Filters the file based on Simple language. For example to filter on file size, you can use ${file:size} 5000||string| +|idempotent|Option to use the Idempotent Consumer EIP pattern to let Camel skip already processed files. Will by default use a memory based LRUCache that holds 1000 entries. If noop=true then idempotent will be enabled as well to avoid consuming the same files over and over again.|false|boolean| +|idempotentEager|Option to use the Idempotent Consumer EIP pattern to let Camel skip already processed files. Will by default use a memory based LRUCache that holds 1000 entries. If noop=true then idempotent will be enabled as well to avoid consuming the same files over and over again.|false|boolean| +|idempotentKey|To use a custom idempotent key. By default the absolute path of the file is used. 
You can use the File Language, for example to use the file name and file size, you can do: idempotentKey=${file:name}-${file:size}||string| +|idempotentRepository|A pluggable repository org.apache.camel.spi.IdempotentRepository which by default use MemoryIdempotentRepository if none is specified and idempotent is true.||object| +|include|Is used to include files, if filename matches the regex pattern (matching is case in-sensitive). Notice if you use symbols such as plus sign and others you would need to configure this using the RAW() syntax if configuring this as an endpoint uri. See more details at configuring endpoint uris||string| +|includeExt|Is used to include files matching file extension name (case insensitive). For example to include txt files, then use includeExt=txt. Multiple extensions can be separated by comma, for example to include txt and xml files, use includeExt=txt,xml. Note that the file extension includes all parts, for example having a file named mydata.tar.gz will have extension as tar.gz. For more flexibility then use the include/exclude options.||string| +|maxDepth|The maximum depth to traverse when recursively processing a directory.|2147483647|integer| +|maxMessagesPerPoll|To define a maximum messages to gather per poll. By default no maximum is set. Can be used to set a limit of e.g. 1000 to avoid when starting up the server that there are thousands of files. Set a value of 0 or negative to disabled it. Notice: If this option is in use then the File and FTP components will limit before any sorting. For example if you have 100000 files and use maxMessagesPerPoll=500, then only the first 500 files will be picked up, and then sorted. You can use the eagerMaxMessagesPerPoll option and set this to false to allow to scan all files first and then sort afterwards.||integer| +|minDepth|The minimum depth to start processing when recursively processing a directory. Using minDepth=1 means the base directory. 
Using minDepth=2 means the first sub directory.||integer|
+|move|Expression (such as Simple Language) used to dynamically set the filename when moving it after processing. To move files into a .done subdirectory just enter .done.||string|
+|exclusiveReadLockStrategy|Pluggable read-lock as a org.apache.camel.component.file.GenericFileExclusiveReadLockStrategy implementation.||object|
+|readLock|Used by consumer, to only poll the files if it has exclusive read-lock on the file (i.e. the file is not in-progress or being written). Camel will wait until the file lock is granted. This option provides the built-in strategies: - none - No read lock is in use - markerFile - Camel creates a marker file (fileName.camelLock) and then holds a lock on it. This option is not available for the FTP component - changed - Changed uses file length/modification timestamp to detect whether the file is currently being copied or not. Will at least use 1 sec to determine this, so this option cannot consume files as fast as the others, but can be more reliable as the JDK IO API cannot always determine whether a file is currently being used by another process. The option readLockCheckInterval can be used to set the check frequency. - fileLock - is for using java.nio.channels.FileLock. This option is not available for Windows OS and the FTP component. This approach should be avoided when accessing a remote file system via a mount/share unless that file system supports distributed file locks. - rename - rename tries to rename the file as a test of whether an exclusive read-lock can be obtained. - idempotent - (only for file component) idempotent uses an idempotentRepository as the read-lock. This allows read locks that support clustering if the idempotent repository implementation supports that. - idempotent-changed - (only for file component) idempotent-changed uses an idempotentRepository and changed as the combined read-lock. This allows read locks that support clustering if the idempotent repository implementation supports that. - idempotent-rename - (only for file component) idempotent-rename uses an idempotentRepository and rename as the combined read-lock. This allows read locks that support clustering if the idempotent repository implementation supports that. Notice: The various read locks are not all suited to work in clustered mode, where concurrent consumers on different nodes are competing for the same files on a shared file system. The markerFile strategy uses a close-to-atomic operation to create the empty marker file, but it is not guaranteed to work in a cluster. The fileLock strategy may work better, but then the file system needs to support distributed file locks, and so on. Using the idempotent read lock can support clustering if the idempotent repository supports clustering, such as Hazelcast Component or Infinispan.|none|string|
+|readLockCheckInterval|Interval in millis for the read-lock, if supported by the read lock. This interval is used for sleeping between attempts to acquire the read lock. For example when using the changed read lock, you can set a higher interval period to cater for slow writes. The default of 1 sec. may be too fast if the producer is very slow writing the file. Notice: For FTP the default readLockCheckInterval is 5000. The readLockTimeout value must be higher than readLockCheckInterval, but a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that ample time is allowed for the read lock process to try to grab the lock before the timeout is hit.|1000|integer|
+|readLockDeleteOrphanLockFiles|Whether or not read lock with marker files should upon startup delete any orphan read lock files, which may have been left on the file system, if Camel was not properly shut down (such as a JVM crash). If this option is set to false, then any orphaned lock file will cause Camel to not attempt to pick up that file; this could also be because another node is concurrently reading files from the same shared directory.|true|boolean|
+|readLockLoggingLevel|Logging level used when a read lock could not be acquired. By default a DEBUG is logged. You can change this level, for example to OFF to not have any logging. This option is only applicable for readLock of types: changed, fileLock, idempotent, idempotent-changed, idempotent-rename, rename.|DEBUG|object|
+|readLockMarkerFile|Whether to use marker file with the changed, rename, or exclusive read lock types. By default a marker file is used as well to guard against other processes picking up the same files. This behavior can be turned off by setting this option to false. For example if you do not want to write marker files to the file systems by the Camel application.|true|boolean|
+|readLockMinAge|This option is applied only for readLock=changed. It allows to specify a minimum age the file must be before attempting to acquire the read lock. For example use readLockMinAge=300s to require the file is at least 5 minutes old. This can speed up the changed read lock as it will only attempt to acquire files which are at least that given age.|0|integer|
+|readLockMinLength|This option is applied only for readLock=changed. It allows you to configure a minimum file length. By default Camel expects the file to contain data, and thus the default value is 1. You can set this option to zero, to allow consuming zero-length files.|1|integer|
+|readLockRemoveOnCommit|This option is applied only for readLock=idempotent. It allows to specify whether to remove the file name entry from the idempotent repository when processing of the file has succeeded and a commit happens. By default the file is not removed, which ensures that no race condition occurs where another active node might attempt to grab the file.
Instead the idempotent repository may support eviction strategies that you can configure to evict the file name entry after X minutes - this ensures no problems with race conditions. See more details at the readLockIdempotentReleaseDelay option.|false|boolean|
+|readLockRemoveOnRollback|This option is applied only for readLock=idempotent. It allows to specify whether to remove the file name entry from the idempotent repository when processing the file failed and a rollback happens. If this option is false, then the file name entry is confirmed (as if the file did a commit).|true|boolean|
+|readLockTimeout|Optional timeout in millis for the read-lock, if supported by the read-lock. If the read-lock could not be granted and the timeout triggered, then Camel will skip the file. At the next poll, Camel will try the file again, and this time maybe the read-lock could be granted. Use a value of 0 or lower to indicate forever. Currently fileLock, changed and rename support the timeout. Notice: For FTP the default readLockTimeout value is 20000 instead of 10000. The readLockTimeout value must be higher than readLockCheckInterval, but a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that ample time is allowed for the read lock process to try to grab the lock before the timeout is hit.|10000|integer|
+|backoffErrorThreshold|The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in.||integer|
+|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier should kick in.||integer|
+|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer|
+|delay|Milliseconds before the next poll.|500|integer|
+|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean|
+|initialDelay|Milliseconds before the first poll starts.|1000|integer|
+|repeatCount|Specifies a maximum limit of the number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer|
+|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object|
+|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object|
+|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for the built-in scheduler|none|object|
+|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object|
+|startScheduler|Whether the scheduler should be auto started.|true|boolean|
+|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object|
+|useFixedDelay|Controls if fixed delay or fixed rate is used.
See ScheduledExecutorService in JDK for details.|true|boolean|
+|sdd|part of service SAS token||string|
+|se|part of SAS token||string|
+|si|part of service SAS token||string|
+|sig|part of SAS token||string|
+|sip|part of SAS token||string|
+|sp|part of SAS token||string|
+|spr|part of SAS token||string|
+|sr|part of service SAS token||string|
+|srt|part of SAS token||string|
+|ss|part of account SAS token||string|
+|st|part of SAS token||string|
+|sv|part of SAS token||string|
+|shuffle|To shuffle the list of files (sort in random order)|false|boolean|
+|sortBy|Built-in sort by using the File Language. Supports nested sorts, so you can have a sort by file name and as a 2nd group sort by modified date.||string|
+|sorter|Pluggable sorter as a java.util.Comparator class.||object|
diff --git a/camel-azure-key-vault.md b/camel-azure-key-vault.md
new file mode 100644
index 0000000000000000000000000000000000000000..d1d99721de3f5a73f8732c7a155a583c55d37c20
--- /dev/null
+++ b/camel-azure-key-vault.md
@@ -0,0 +1,298 @@
+# Azure-key-vault
+
+**Since Camel 3.17**
+
+**Only producer is supported**
+
+The azure-key-vault component integrates [Azure Key
+Vault](https://azure.microsoft.com/en-us/services/key-vault/).
+
+Prerequisites
+
+You must have a valid Windows Azure Key Vault account. More information
+is available at [Azure Documentation
+Portal](https://docs.microsoft.com/azure/).
+
+# URI Format
+
+    azure-key-vault://vaultName[?options]
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-azure-key-vault</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# Usage
+
+## Using Azure Key Vault Property Function
+
+To use this function, you’ll need to provide credentials to Azure Key
+Vault Service as environment variables:
+
+    export CAMEL_VAULT_AZURE_TENANT_ID=tenantId
+    export CAMEL_VAULT_AZURE_CLIENT_ID=clientId
+    export CAMEL_VAULT_AZURE_CLIENT_SECRET=clientSecret
+    export CAMEL_VAULT_AZURE_VAULT_NAME=vaultName
+
+You can also configure the credentials in the `application.properties`
+file such as:
+
+    camel.vault.azure.tenantId = tenantId
+    camel.vault.azure.clientId = clientId
+    camel.vault.azure.clientSecret = clientSecret
+    camel.vault.azure.vaultName = vaultName
+
+Or you can enable the usage of Azure Identity in the following way:
+
+    export CAMEL_VAULT_AZURE_IDENTITY_ENABLED=true
+    export CAMEL_VAULT_AZURE_VAULT_NAME=vaultName
+
+You can also enable the usage of Azure Identity in the
+`application.properties` file such as:
+
+    camel.vault.azure.azureIdentityEnabled = true
+    camel.vault.azure.vaultName = vaultName
+
+At this point, you’ll be able to reference a property in the following
+way:
+
+    <camelContext>
+        <route>
+            <from uri="direct:start"/>
+            <to uri="{{azure:route}}"/>
+        </route>
+    </camelContext>
+
+Where route will be the name of the secret stored in the Azure Key Vault
+Service.
+
+You could specify a default value in case the secret is not present on
+Azure Key Vault Service:
+
+    <camelContext>
+        <route>
+            <from uri="direct:start"/>
+            <to uri="{{azure:route:default}}"/>
+        </route>
+    </camelContext>
+
+In this case, if the secret doesn’t exist, the property will fall back
+to "default" as value.
+
+Also, you are able to get a particular field of the secret, if you have,
+for example, a secret named database of this form:
+
+    {
+        "username": "admin",
+        "password": "password123",
+        "engine": "postgres",
+        "host": "127.0.0.1",
+        "port": "3128",
+        "dbname": "db"
+    }
+
+You’re able to get a single secret field value in your route, for
+example:
+
+    <camelContext>
+        <route>
+            <from uri="direct:start"/>
+            <log message="The username is {{azure:database#username}}"/>
+        </route>
+    </camelContext>
+
+Or re-use the property as part of an endpoint.
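The field and default-value fallback rules described above can be illustrated outside of Camel with a small, self-contained sketch. Everything here (class name, the in-memory map standing in for the vault) is hypothetical and only mirrors the `name#field:default` key shape; it is not Camel's implementation.

```java
import java.util.Map;

// Hypothetical resolver illustrating "name#field:default" lookup semantics:
// fetch the named secret, optionally select a field, fall back to the
// default when the secret or the field is missing.
public class PlaceholderSketch {

    static String resolve(String key, Map<String, Map<String, String>> vault) {
        // Split off an optional ":default" suffix.
        String defaultValue = null;
        int colon = key.indexOf(':');
        if (colon >= 0) {
            defaultValue = key.substring(colon + 1);
            key = key.substring(0, colon);
        }
        // Split off an optional "#field" part.
        String field = null;
        int hash = key.indexOf('#');
        if (hash >= 0) {
            field = key.substring(hash + 1);
            key = key.substring(0, hash);
        }
        Map<String, String> secret = vault.get(key);
        if (secret == null) {
            return defaultValue;            // secret missing: use default (or null)
        }
        if (field == null) {
            // Whole secret requested; this sketch stores it under "value".
            return secret.getOrDefault("value", defaultValue);
        }
        return secret.getOrDefault(field, defaultValue);
    }

    public static void main(String[] args) {
        Map<String, Map<String, String>> vault = Map.of(
                "database", Map.of("username", "admin", "password", "password123"));
        System.out.println(resolve("database#username", vault));        // admin
        System.out.println(resolve("database#engine:postgres", vault)); // postgres (fallback)
        System.out.println(resolve("missing#user:guest", vault));       // guest
    }
}
```

The same precedence (existing field wins, otherwise default, otherwise nothing) is what the surrounding examples describe for the real property function.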
+
+You could specify a default value in case the particular field of secret
+is not present on Azure Key Vault:
+
+    <camelContext>
+        <route>
+            <from uri="direct:start"/>
+            <log message="The username is {{azure:database#username:admin}}"/>
+        </route>
+    </camelContext>
+
+In this case, if the secret doesn’t exist or the secret exists, but the
+username field is not part of the secret, the property will fall back to
+"admin" as value.
+
+There is also a syntax to get a particular version of the secret, for
+both approaches: with field/default value specified, or with the secret
+only:
+
+    <camelContext>
+        <route>
+            <from uri="direct:start"/>
+            <to uri="{{azure:route@bf9b4f4b-8e63-43fd-a73c-3e2d3748b451}}"/>
+        </route>
+    </camelContext>
+
+This approach will return the RAW route secret with the version
+*bf9b4f4b-8e63-43fd-a73c-3e2d3748b451*.
+
+    <camelContext>
+        <route>
+            <from uri="direct:start"/>
+            <to uri="{{azure:route:default@bf9b4f4b-8e63-43fd-a73c-3e2d3748b451}}"/>
+        </route>
+    </camelContext>
+
+This approach will return the route secret value with version
+*bf9b4f4b-8e63-43fd-a73c-3e2d3748b451* or default value in case the
+secret doesn’t exist or the version doesn’t exist.
+
+    <camelContext>
+        <route>
+            <from uri="direct:start"/>
+            <log message="The username is {{azure:database#username:admin@bf9b4f4b-8e63-43fd-a73c-3e2d3748b451}}"/>
+        </route>
+    </camelContext>
+
+This approach will return the username field of the database secret with
+version *bf9b4f4b-8e63-43fd-a73c-3e2d3748b451* or admin in case the
+secret doesn’t exist or the version doesn’t exist.
+
+Rotation, if configured on the secret, is not yet taken into account;
+support for it is planned.
+
+The only requirement is adding the camel-azure-key-vault jar to your
+Camel application.
+
+## Automatic Camel context reloading on Secret Refresh
+
+Reloading the Camel context on a secret refresh can be enabled by
+specifying the usual credentials (the same used for Azure Key Vault
+Property Function).
+
+With Environment variables:
+
+    export CAMEL_VAULT_AZURE_TENANT_ID=tenantId
+    export CAMEL_VAULT_AZURE_CLIENT_ID=clientId
+    export CAMEL_VAULT_AZURE_CLIENT_SECRET=clientSecret
+    export CAMEL_VAULT_AZURE_VAULT_NAME=vaultName
+
+or as plain Camel main properties:
+
+    camel.vault.azure.tenantId = tenantId
+    camel.vault.azure.clientId = clientId
+    camel.vault.azure.clientSecret = clientSecret
+    camel.vault.azure.vaultName = vaultName
+
+If you want to use Azure Identity with environment variables, you can do
+it in the following way:
+
+    export CAMEL_VAULT_AZURE_IDENTITY_ENABLED=true
+    export CAMEL_VAULT_AZURE_VAULT_NAME=vaultName
+
+You can also enable the usage of Azure Identity in the
+`application.properties` file such as:
+
+    camel.vault.azure.azureIdentityEnabled = true
+    camel.vault.azure.vaultName = vaultName
+
+To enable the automatic refresh, you’ll need additional properties to
+set:
+
+    camel.vault.azure.refreshEnabled=true
+    camel.vault.azure.refreshPeriod=60000
+    camel.vault.azure.secrets=Secret
+    camel.vault.azure.eventhubConnectionString=eventhub_conn_string
+    camel.vault.azure.blobAccountName=blob_account_name
+    camel.vault.azure.blobContainerName=blob_container_name
+    camel.vault.azure.blobAccessKey=blob_access_key
+    camel.main.context-reload-enabled = true
+
+where `camel.vault.azure.refreshEnabled` will enable the automatic
+context reload, `camel.vault.azure.refreshPeriod` is the interval of
+time between two different checks for update events and
+`camel.vault.azure.secrets` is a regex representing the secrets we want
+to track for updates.
+
+where `camel.vault.azure.eventhubConnectionString` is the eventhub
+connection string to get notifications from,
+`camel.vault.azure.blobAccountName`,
+`camel.vault.azure.blobContainerName` and
+`camel.vault.azure.blobAccessKey` are the Azure Storage Blob parameters
+for the checkpoint store needed by Azure Eventhub.
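Since `camel.vault.azure.secrets` is a regular expression, a quick way to sanity-check which secret names a given pattern would track is ordinary `java.util.regex` matching. The class and secret names below are illustrative only, not part of Camel:

```java
import java.util.List;
import java.util.regex.Pattern;

// Illustrative helper: which secret names would a given
// camel.vault.azure.secrets regex select for update tracking?
public class SecretsFilterSketch {

    static List<String> tracked(String regex, List<String> secretNames) {
        Pattern p = Pattern.compile(regex);
        return secretNames.stream()
                .filter(name -> p.matcher(name).matches())
                .toList();
    }

    public static void main(String[] args) {
        // "Secret.*" tracks Secret and SecretDb, but not Other.
        System.out.println(tracked("Secret.*", List.of("Secret", "SecretDb", "Other")));
    }
}
```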
+
+Note that `camel.vault.azure.secrets` is not mandatory: if not specified,
+the task responsible for checking update events will take into account
+all the properties with an `azure:` prefix.
+
+The only requirement is adding the camel-azure-key-vault jar to your
+Camel application.
+
+## Azure Key Vault Producer operations
+
+Azure Key Vault component provides the following operations on the
+producer side:
+
+- createSecret
+
+- getSecret
+
+- deleteSecret
+
+- purgeDeletedSecret
+
+# Examples
+
+## Producer Examples
+
+- createSecret: this operation will create a secret in Azure Key Vault
+
+      from("direct:createSecret")
+          .setHeader(KeyVaultConstants.SECRET_NAME, "Test")
+          .setBody(constant("Test"))
+          .to("azure-key-vault://test123?clientId=RAW({{clientId}})&clientSecret=RAW({{clientSecret}})&tenantId=RAW({{tenantId}})")
+
+- getSecret: this operation will get a secret from Azure Key Vault
+
+      from("direct:getSecret")
+          .setHeader(KeyVaultConstants.SECRET_NAME, "Test")
+          .to("azure-key-vault://test123?clientId=RAW({{clientId}})&clientSecret=RAW({{clientSecret}})&tenantId=RAW({{tenantId}})&operation=getSecret")
+
+- deleteSecret: this operation will delete a Secret from Azure Key
+  Vault
+
+      from("direct:deleteSecret")
+          .setHeader(KeyVaultConstants.SECRET_NAME, "Test")
+          .to("azure-key-vault://test123?clientId=RAW({{clientId}})&clientSecret=RAW({{clientSecret}})&tenantId=RAW({{tenantId}})&operation=deleteSecret")
+
+- purgeDeletedSecret: this operation will purge a deleted Secret from
+  Azure Key Vault
+
+      from("direct:purgeDeletedSecret")
+          .setHeader(KeyVaultConstants.SECRET_NAME, "Test")
+          .to("azure-key-vault://test123?clientId=RAW({{clientId}})&clientSecret=RAW({{clientSecret}})&tenantId=RAW({{tenantId}})&operation=purgeDeletedSecret")
+
+## Component Configurations
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message).
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|vaultName|Vault Name to be used||string| +|credentialType|Determines the credential strategy to adopt|CLIENT\_SECRET|object| +|operation|Operation to be performed||object| +|secretClient|Instance of Secret client||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|clientId|Client Id to be used||string|
+|clientSecret|Client Secret to be used||string|
+|tenantId|Tenant Id to be used||string|
diff --git a/camel-azure-servicebus.md b/camel-azure-servicebus.md
new file mode 100644
index 0000000000000000000000000000000000000000..6ed9d2ebd949bbacfab24c4c990bafb0bcca9306
--- /dev/null
+++ b/camel-azure-servicebus.md
@@ -0,0 +1,246 @@
+# Azure-servicebus
+
+**Since Camel 3.12**
+
+**Both producer and consumer are supported**
+
+The azure-servicebus component integrates [Azure
+ServiceBus](https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-messaging-overview).
+Azure ServiceBus is a fully managed enterprise integration message
+broker. Service Bus can decouple applications and services. Service Bus
+offers a reliable and secure platform for asynchronous transfer of data
+and state. Data is transferred between different applications and
+services using messages.
+
+Prerequisites
+
+You must have a valid Windows Azure Service Bus account. More
+information is available at [Azure Documentation
+Portal](https://docs.microsoft.com/azure/).
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-azure-servicebus</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# Consumer and Producer
+
+This component implements the Consumer and Producer.
+
+# Usage
+
+## Authentication Information
+
+You have three different Credential Types: AZURE\_IDENTITY,
+TOKEN\_CREDENTIAL and CONNECTION\_STRING. You can also provide a client
+instance yourself. To use this component, you have three options to
+provide the required Azure authentication information:
+
+**CONNECTION\_STRING**:
+
+- Providing a `connectionString` is the simplest option to get
+  started.
**TOKEN\_CREDENTIAL**:
+
+- Provide an implementation of
+  `com.azure.core.credential.TokenCredential` into the Camel’s
+  Registry, e.g., using the
+  `com.azure.identity.DefaultAzureCredentialBuilder().build();` API.
+  See the documentation [here about Azure-AD
+  authentication](https://docs.microsoft.com/en-us/azure/active-directory/authentication/overview-authentication).
+
+**AZURE\_IDENTITY**:
+
+- This will use a
+  `com.azure.identity.DefaultAzureCredentialBuilder().build();`
+  instance. This will follow the Default Azure Credential Chain. See
+  the documentation [here about Azure-AD
+  authentication](https://docs.microsoft.com/en-us/azure/active-directory/authentication/overview-authentication).
+
+**Client instance**:
+
+- You can provide a
+  `com.azure.messaging.servicebus.ServiceBusSenderClient` for sending
+  messages and/or
+  `com.azure.messaging.servicebus.ServiceBusReceiverClient` to receive
+  messages. If you provide the instances, they will be autowired.
+
+## Message Body
+
+In the producer, this component accepts message body of `String`,
+`byte[]` and `BinaryData` types or `List<String>`, `List<byte[]>` and
+`List<BinaryData>` to send batch messages.
+
+In the consumer, the returned message body will be of type `String`.
+
+## Azure ServiceBus Producer operations
+
+| Operation | Description |
+|---|---|
+| `sendMessages` | Sends a set of messages to a Service Bus queue or topic using a batched approach. |
+| `scheduleMessages` | Sends a scheduled message to the Azure Service Bus entity this sender is connected to. A scheduled message is enqueued and made available to receivers only at the scheduled enqueue time. |
+
+## Azure ServiceBus Consumer operations
+
+| Operation | Description |
+|---|---|
+| `receiveMessages` | Receives an **infinite** stream of messages from the Service Bus entity. |
+| `peekMessages` | Reads the next batch of active messages without changing the state of the receiver or the message source. |
+
+### Examples
+
+- `sendMessages`
+
+      from("direct:start")
+          .process(exchange -> {
+              final List<Object> inputBatch = new LinkedList<>();
+              inputBatch.add("test batch 1");
+              inputBatch.add("test batch 2");
+              inputBatch.add("test batch 3");
+              inputBatch.add(123456);
+
+              exchange.getIn().setBody(inputBatch);
+          })
+          .to("azure-servicebus:test//?connectionString=test")
+          .to("mock:result");
+
+- `scheduleMessages`
+
+      from("direct:start")
+          .process(exchange -> {
+              final List<Object> inputBatch = new LinkedList<>();
+              inputBatch.add("test batch 1");
+              inputBatch.add("test batch 2");
+              inputBatch.add("test batch 3");
+              inputBatch.add(123456);
+
+              exchange.getIn().setHeader(ServiceBusConstants.SCHEDULED_ENQUEUE_TIME, OffsetDateTime.now());
+              exchange.getIn().setBody(inputBatch);
+          })
+          .to("azure-servicebus:test//?connectionString=test&producerOperation=scheduleMessages")
+          .to("mock:result");
+
+- `receiveMessages`
+
+      from("azure-servicebus:test//?connectionString=test")
+          .log("${body}")
+          .to("mock:result");
+
+## Component Configurations
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|amqpRetryOptions|Sets the retry options for Service Bus clients. If not specified, the default retry options are used.||object|
+|amqpTransportType|Sets the transport type by which all the communication with Azure Service Bus occurs. Default value is AMQP.|AMQP|object|
+|clientOptions|Sets the ClientOptions to be sent from the client built from this builder, enabling customization of certain properties, as well as support the addition of custom header information.||object|
+|configuration|The component configurations||object|
+|headerFilterStrategy|To use a custom HeaderFilterStrategy to filter Service Bus application properties to and from Camel message headers.||object|
+|proxyOptions|Sets the proxy configuration to use for ServiceBusSenderClient.
When a proxy is configured, AMQP\_WEB\_SOCKETS must be used for the transport type.||object|
+|serviceBusType|The service bus type of connection to execute. Queue is for typical queue option and topic for subscription based model.|queue|object|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means that any exceptions (if possible) occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|enableDeadLettering|Enable application level deadlettering to the subscription deadletter subqueue if deadletter related headers are set.|false|boolean|
+|maxAutoLockRenewDuration|Sets the amount of time to continue auto-renewing the lock. Setting ZERO disables auto-renewal. For the RECEIVE\_AND\_DELETE receive mode, auto-renewal is disabled.|5m|object|
+|maxConcurrentCalls|Sets the maximum number of concurrent calls|1|integer|
+|prefetchCount|Sets the prefetch count of the receiver. For both PEEK\_LOCK and RECEIVE\_AND\_DELETE receive modes the default value is 1. Prefetch speeds up the message flow by aiming to have a message readily available for local retrieval when and before the application asks for one using receive message. Setting a non-zero value will prefetch that number of messages.
Setting the value to zero turns prefetch off.||integer| +|processorClient|Sets the processorClient in order to consume messages by the consumer||object| +|serviceBusReceiveMode|Sets the receive mode for the receiver.|PEEK\_LOCK|object| +|subQueue|Sets the type of the SubQueue to connect to.||object| +|subscriptionName|Sets the name of the subscription in the topic to listen to. topicOrQueueName and serviceBusType=topic must also be set. This property is required if serviceBusType=topic and the consumer is in use.||string| +|binary|Set binary mode. If true, message body will be sent as byte. By default, it is false.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|producerOperation|Sets the desired operation to be used in the producer|sendMessages|object| +|scheduledEnqueueTime|Sets OffsetDateTime at which the message should appear in the Service Bus queue or topic.||object| +|senderClient|Sets senderClient to be used in the producer.||object| +|serviceBusTransactionContext|Represents transaction in service. This object just contains transaction id.||object| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|connectionString|Sets the connection string for a Service Bus namespace or a specific Service Bus resource.||string| +|credentialType|Determines the credential strategy to adopt|CONNECTION\_STRING|object| +|fullyQualifiedNamespace|Fully Qualified Namespace of the service bus||string| +|tokenCredential|A TokenCredential for Azure AD authentication.||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|topicOrQueueName|Selected topic name or the queue name, that is depending on serviceBusType config. For example if serviceBusType=queue, then this will be the queue name and if serviceBusType=topic, this will be the topic name.||string| +|amqpRetryOptions|Sets the retry options for Service Bus clients. If not specified, the default retry options are used.||object| +|amqpTransportType|Sets the transport type by which all the communication with Azure Service Bus occurs. Default value is AMQP.|AMQP|object| +|clientOptions|Sets the ClientOptions to be sent from the client built from this builder, enabling customization of certain properties, as well as support the addition of custom header information.||object| +|headerFilterStrategy|To use a custom HeaderFilterStrategy to filter Service Bus application properties to and from Camel message headers.||object| +|proxyOptions|Sets the proxy configuration to use for ServiceBusSenderClient. When a proxy is configured, AMQP\_WEB\_SOCKETS must be used for the transport type.||object| +|serviceBusType|The service bus type of connection to execute. 
Queue is for typical queue option and topic for subscription based model.|queue|object|
+|enableDeadLettering|Enable application level deadlettering to the subscription deadletter subqueue if deadletter related headers are set.|false|boolean|
+|maxAutoLockRenewDuration|Sets the amount of time to continue auto-renewing the lock. Setting ZERO disables auto-renewal. For the RECEIVE\_AND\_DELETE receive mode, auto-renewal is disabled.|5m|object|
+|maxConcurrentCalls|Sets the maximum number of concurrent calls|1|integer|
+|prefetchCount|Sets the prefetch count of the receiver. For both PEEK\_LOCK and RECEIVE\_AND\_DELETE receive modes the default value is 1. Prefetch speeds up the message flow by aiming to have a message readily available for local retrieval when and before the application asks for one using receive message. Setting a non-zero value will prefetch that number of messages. Setting the value to zero turns prefetch off.||integer|
+|processorClient|Sets the processorClient in order to consume messages by the consumer||object|
+|serviceBusReceiveMode|Sets the receive mode for the receiver.|PEEK\_LOCK|object|
+|subQueue|Sets the type of the SubQueue to connect to.||object|
+|subscriptionName|Sets the name of the subscription in the topic to listen to. topicOrQueueName and serviceBusType=topic must also be set. This property is required if serviceBusType=topic and the consumer is in use.||string|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means that any exceptions (if possible) occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible.
In other situations we may improve the Camel component to hook into the 3rd party component and make this possible in future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice that if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored.||object|
|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
|binary|Set binary mode. If true, the message body will be sent as bytes. By default, it is false.|false|boolean|
|producerOperation|Sets the desired operation to be used in the producer.|sendMessages|object|
|scheduledEnqueueTime|Sets the OffsetDateTime at which the message should appear in the Service Bus queue or topic.||object|
|senderClient|Sets the senderClient to be used in the producer.||object|
|serviceBusTransactionContext|Represents a transaction in the service. This object just contains the transaction id.||object|
|lazyStartProducer|Whether the producer should be started lazily (on the first message). By starting lazily you can use this to allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time.|false|boolean|
|connectionString|Sets the connection string for a Service Bus namespace or a specific Service Bus resource.||string|
|credentialType|Determines the credential strategy to adopt.|CONNECTION\_STRING|object|
|fullyQualifiedNamespace|Fully Qualified Namespace of the Service Bus.||string|
|tokenCredential|A TokenCredential for Azure AD authentication.||object|

diff --git a/camel-azure-storage-blob.md b/camel-azure-storage-blob.md
new file mode 100644
index 0000000000000000000000000000000000000000..0ffc853badbab908bbb71def85c8db1ab6743fa1
--- /dev/null
+++ b/camel-azure-storage-blob.md
@@ -0,0 +1,806 @@

# Azure-storage-blob

**Since Camel 3.3**

**Both producer and consumer are supported**

The Azure Storage Blob component is used for storing and retrieving
blobs from the [Azure Storage
Blob](https://azure.microsoft.com/services/storage/blobs/) service using
**Azure APIs v12**. For API versions above v12, we will evaluate whether
this component can adopt the changes, depending on how breaking they
are.

Prerequisites

You must have a valid Azure Storage account. More information is
available at the [Azure Documentation
Portal](https://docs.microsoft.com/azure/).

Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-azure-storage-blob</artifactId>
        <version>x.x.x</version>
        <!-- use the same version as your Camel core version -->
    </dependency>

# URI Format

    azure-storage-blob://accountName[/containerName][?options]

In the case of a consumer, `accountName` and `containerName` are required.
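To make the format concrete, the endpoint URI is just the account name, an optional container name, and query options joined together. The helper below is purely illustrative (it is not part of the component, and the names are placeholders):

```java
public class BlobUriExample {

    // Assemble an azure-storage-blob endpoint URI from its parts.
    // containerName and options may be null/empty for service-level producer operations.
    public static String blobEndpoint(String accountName, String containerName, String options) {
        StringBuilder uri = new StringBuilder("azure-storage-blob://").append(accountName);
        if (containerName != null && !containerName.isEmpty()) {
            uri.append('/').append(containerName);
        }
        if (options != null && !options.isEmpty()) {
            uri.append('?').append(options);
        }
        return uri.toString();
    }

    public static void main(String[] args) {
        // Consumer style: both accountName and containerName are present
        System.out.println(blobEndpoint("camelazure", "container1", "blobName=hello.txt"));
        // prints azure-storage-blob://camelazure/container1?blobName=hello.txt
    }
}
```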
In the case of a producer, it depends on the operation being requested:
for example, if the operation is at the container level, e.g.,
createContainer, only accountName and containerName are required, but if
the operation is requested at the blob level, e.g., getBlob,
accountName, containerName and blobName are required.

The blob will be created if it does not already exist. You can append
query options to the URI in the following format:
`?option1=value&option2=value&...`

**Required information options:**

To use this component, you have multiple options to provide the required
Azure authentication information:

- By providing your own
  [BlobServiceClient](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-storage-blob/12.0.0/com/azure/storage/blob/BlobServiceClient.html)
  instance which can be injected into `blobServiceClient`. Note: You
  don't need to create a specific client, e.g., BlockBlobClient; the
  BlobServiceClient represents the upper level, which can be used to
  retrieve lower-level clients.

- Via Azure Identity, when specifying `credentialType=AZURE_IDENTITY`
  and providing the required [environment
  variables](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/identity/azure-identity#environment-variables).
  This enables service principal (e.g. app registration)
  authentication with a secret/certificate, as well as username and
  password. Note that this is the default authentication strategy.

- Via shared storage account key, when specifying
  `credentialType=SHARED_ACCOUNT_KEY` and providing `accountName` and
  `accessKey` for your Azure account; this is the simplest way to get
  started. The accessKey can be generated through your Azure portal.
- Via shared storage account key, when specifying
  `credentialType=SHARED_KEY_CREDENTIAL` and providing a
  [StorageSharedKeyCredential](https://azuresdkartifacts.blob.core.windows.net/azure-sdk-for-java/staging/apidocs/com/azure/storage/common/StorageSharedKeyCredential.html)
  instance which can be injected into the `credentials` option.

- Via Azure SAS, when specifying `credentialType=AZURE_SAS` and
  providing a SAS token through the `sasToken` parameter.

# Usage

For example, to download the content of the block blob `hello.txt`
located in `container1` in the `camelazure` storage account, use the
following snippet:

    from("azure-storage-blob://camelazure/container1?blobName=hello.txt&credentialType=SHARED_ACCOUNT_KEY&accessKey=RAW(yourAccessKey)").
    to("file://blobdirectory");

## Advanced Azure Storage Blob configuration

If your Camel application is running behind a firewall, or if you need
more control over the `BlobServiceClient` instance configuration, you
can create your own instance:

    StorageSharedKeyCredential credential = new StorageSharedKeyCredential("yourAccountName", "yourAccessKey");
    String uri = String.format("https://%s.blob.core.windows.net", "yourAccountName");

    BlobServiceClient client = new BlobServiceClientBuilder()
        .endpoint(uri)
        .credential(credential)
        .buildClient();
    // "context" here is the CamelContext
    context.getRegistry().bind("client", client);

Then refer to this instance in your Camel `azure-storage-blob` component
configuration:

    from("azure-storage-blob://cameldev/container1?blobName=myblob&serviceClient=#client")
    .to("mock:result");

## Automatic detection of BlobServiceClient client in registry

The component is capable of detecting the presence of a
BlobServiceClient bean in the registry. If it is the only instance of
that type, it will be used as the client, and you won't have to define
it as a uri parameter, like the example above.
This may be really useful
for smarter configuration of the endpoint.

## Azure Storage Blob Producer operations

The Camel Azure Storage Blob component provides a wide range of
operations on the producer side:

**Operations on the service level**

For these operations, `accountName` is **required**.

|Operation|Description|
|---|---|
|listBlobContainers|List all the containers within the storage account.|
|getChangeFeed|Returns transaction logs of all the changes that occur to the blobs and the blob metadata in your storage account. The change feed provides an ordered, guaranteed, durable, immutable, read-only log of these changes.|

**Operations on the container level**

For these operations, `accountName` and `containerName` are
**required**.

|Operation|Description|
|---|---|
|createBlobContainer|Create a new container within a storage account. If a container with the same name already exists, the producer will ignore it.|
|deleteBlobContainer|Delete the specified container in the storage account. If the container doesn't exist, the operation fails.|
|listBlobs|Returns a list of blobs in this container, with folder structures flattened.|

**Operations on the blob level**

For these operations, `accountName`, `containerName` and `blobName` are
**required**.
|Operation|Blob Type|Description|
|---|---|---|
|getBlob|Common|Get the content of the blob. You can restrict the output of this operation to a blob range.|
|deleteBlob|Common|Delete a blob.|
|downloadBlobToFile|Common|Download the entire blob into a file specified by the path. The file will be created and must not already exist; if the file already exists, a FileAlreadyExistsException will be thrown.|
|downloadLink|Common|Generate the download link for the specified blob using shared access signatures (SAS). By default, this only allows 1 hour of access. However, you can override the default expiration duration through the headers.|
|uploadBlockBlob|BlockBlob|Creates a new block blob, or updates the content of an existing block blob. Updating an existing block blob overwrites any existing metadata on the blob. Partial updates are not supported; the content of the existing blob is overwritten with the new content.|
|stageBlockBlobList|BlockBlob|Uploads the specified blocks to the block blob's "staging area", to be later committed by a call to commitBlobBlockList. However, if the header CamelAzureStorageBlobCommitBlobBlockListLater or the config commitBlockListLater is set to false, the blocks will be committed immediately after staging.|
|commitBlobBlockList|BlockBlob|Writes a blob by specifying the list of block IDs that make up the blob. To be written as part of a blob, a block must have been successfully written to the server in a prior stageBlockBlobList operation. You can call commitBlobBlockList to update a blob by uploading only those blocks that have changed, then committing the new and existing blocks together. Any blocks not specified in the block list are permanently deleted.|
|getBlobBlockList|BlockBlob|Returns the list of blocks that have been uploaded as part of a block blob using the specified block list filter.|
|createAppendBlob|AppendBlob|Creates a 0-length append blob. Call the commitAppendBlob operation to append data to an append blob.|
|commitAppendBlob|AppendBlob|Commits a new block of data to the end of the existing append blob. If the header CamelAzureStorageBlobCreateAppendBlob or the config createAppendBlob is set to true, it will attempt to create the append blob through an internal call to the createAppendBlob operation before committing.|
|createPageBlob|PageBlob|Creates a page blob of the specified length. Call the uploadPageBlob operation to upload data to a page blob.|
|uploadPageBlob|PageBlob|Writes one or more pages to the page blob. The size must be a multiple of 512. If the header CamelAzureStorageBlobCreatePageBlob or the config createPageBlob is set to true, it will attempt to create the page blob through an internal call to the createPageBlob operation before uploading.|
|resizePageBlob|PageBlob|Resizes the page blob to the specified size, which must be a multiple of 512.|
|clearPageBlob|PageBlob|Frees the specified pages from the page blob. The size of the range must be a multiple of 512.|
|getPageBlobRanges|PageBlob|Returns the list of valid page ranges for a page blob or snapshot of a page blob.|
|copyBlob|Common|Copy a blob from one container to another one, even from different accounts.|
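Several of the page-blob operations above (uploadPageBlob, resizePageBlob, clearPageBlob) require sizes that are multiples of 512 bytes. As an illustration only (this helper is hypothetical and not part of the component), a payload length can be rounded up to the next valid page boundary like this:

```java
public class PageBlobAlign {

    // Page blobs operate on 512-byte pages.
    static final long PAGE_SIZE = 512;

    // Round a payload length up to the next multiple of 512 so it is a
    // valid size for uploadPageBlob/resizePageBlob.
    public static long alignToPage(long length) {
        if (length <= 0) {
            throw new IllegalArgumentException("length must be positive");
        }
        return ((length + PAGE_SIZE - 1) / PAGE_SIZE) * PAGE_SIZE;
    }

    public static void main(String[] args) {
        System.out.println(alignToPage(100)); // prints 512
        System.out.println(alignToPage(513)); // prints 1024
    }
}
```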
Refer to the examples section on this page to learn how to use these
operations in your Camel application.

## Consumer Examples

To consume a blob into a file using the file component:

    from("azure-storage-blob://camelazure/container1?blobName=hello.txt&accountName=yourAccountName&accessKey=yourAccessKey").
    to("file://blobdirectory");

However, you can also write to a file directly without using the file
component; you will need to specify the `fileDir` folder path to save
your blob on your machine.

    from("azure-storage-blob://camelazure/container1?blobName=hello.txt&accountName=yourAccountName&accessKey=yourAccessKey&fileDir=/var/to/awesome/dir").
    to("mock:results");

The component also supports the batch consumer, so you can consume
multiple blobs by specifying only the container name; the consumer will
return multiple exchanges, depending on the number of blobs in the
container. Example:

    from("azure-storage-blob://camelazure/container1?accountName=yourAccountName&accessKey=yourAccessKey&fileDir=/var/to/awesome/dir").
    to("mock:results");

## Producer Operations Examples

- `listBlobContainers`:

    from("direct:start")
    .process(exchange -> {
        // set the header you want the producer to evaluate, refer to the previous
        // section to learn about the headers that can be set
        // e.g.:
        exchange.getIn().setHeader(BlobConstants.LIST_BLOB_CONTAINERS_OPTIONS, new ListBlobContainersOptions().setMaxResultsPerPage(10));
    })
    .to("azure-storage-blob://camelazure?operation=listBlobContainers&serviceClient=#client")
    .to("mock:result");

- `createBlobContainer`:

    from("direct:start")
    .process(exchange -> {
        // set the header you want the producer to evaluate, refer to the previous
        // section to learn about the headers that can be set
        // e.g.:
        exchange.getIn().setHeader(BlobConstants.BLOB_CONTAINER_NAME, "newContainerName");
    })
    .to("azure-storage-blob://camelazure/container1?operation=createBlobContainer&serviceClient=#client")
    .to("mock:result");

- `deleteBlobContainer`:

    from("direct:start")
    .process(exchange -> {
        // set the header you want the producer to evaluate, refer to the previous
        // section to learn about the headers that can be set
        // e.g.:
        exchange.getIn().setHeader(BlobConstants.BLOB_CONTAINER_NAME, "overridenName");
    })
    .to("azure-storage-blob://camelazure/container1?operation=deleteBlobContainer&serviceClient=#client")
    .to("mock:result");

- `listBlobs`:

    from("direct:start")
    .process(exchange -> {
        // set the header you want the producer to evaluate, refer to the previous
        // section to learn about the headers that can be set
        // e.g.:
        exchange.getIn().setHeader(BlobConstants.BLOB_CONTAINER_NAME, "overridenName");
    })
    .to("azure-storage-blob://camelazure/container1?operation=listBlobs&serviceClient=#client")
    .to("mock:result");

- `getBlob`:

We can either set an `outputStream` in the exchange body, and the data
will be written to it.
E.g.:

    from("direct:start")
    .process(exchange -> {
        // set the header you want the producer to evaluate, refer to the previous
        // section to learn about the headers that can be set
        // e.g.:
        exchange.getIn().setHeader(BlobConstants.BLOB_CONTAINER_NAME, "overridenName");

        // set our body
        exchange.getIn().setBody(outputStream);
    })
    .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=getBlob&serviceClient=#client")
    .to("mock:result");

If we don't set a body, then this operation will give us an
`InputStream` instance, which can be processed further downstream:

    from("direct:start")
    .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=getBlob&serviceClient=#client")
    .process(exchange -> {
        InputStream inputStream = exchange.getMessage().getBody(InputStream.class);
        // We use Apache Commons IO for simplicity, but you are free to
        // process the inputStream however you like
        System.out.println(IOUtils.toString(inputStream, StandardCharsets.UTF_8.name()));
    })
    .to("mock:result");

- `deleteBlob`:

    from("direct:start")
    .process(exchange -> {
        // set the header you want the producer to evaluate, refer to the previous
        // section to learn about the headers that can be set
        // e.g.:
        exchange.getIn().setHeader(BlobConstants.BLOB_NAME, "overridenName");
    })
    .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=deleteBlob&serviceClient=#client")
    .to("mock:result");

- `downloadBlobToFile`:

    from("direct:start")
    .process(exchange -> {
        // set the header you want the producer to evaluate, refer to the previous
        // section to learn about the headers that can be set
        // e.g.:
        exchange.getIn().setHeader(BlobConstants.BLOB_NAME, "overridenName");
    })
    .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=downloadBlobToFile&fileDir=/var/mydir&serviceClient=#client")
    .to("mock:result");

- `downloadLink`:

    from("direct:start")
    .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=downloadLink&serviceClient=#client")
    .process(exchange -> {
        String link = exchange.getMessage().getHeader(BlobConstants.DOWNLOAD_LINK, String.class);
        System.out.println("My link " + link);
    })
    .to("mock:result");

- `uploadBlockBlob`:

    from("direct:start")
    .process(exchange -> {
        // set the header you want the producer to evaluate, refer to the previous
        // section to learn about the headers that can be set
        // e.g.:
        exchange.getIn().setHeader(BlobConstants.BLOB_NAME, "overridenName");
        exchange.getIn().setBody("Block Blob");
    })
    .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=uploadBlockBlob&serviceClient=#client")
    .to("mock:result");

- `stageBlockBlobList`:

    from("direct:start")
    .process(exchange -> {
        final List<BlobBlock> blocks = new LinkedList<>();
        blocks.add(BlobBlock.createBlobBlock(new ByteArrayInputStream("Hello".getBytes())));
        blocks.add(BlobBlock.createBlobBlock(new ByteArrayInputStream("From".getBytes())));
        blocks.add(BlobBlock.createBlobBlock(new ByteArrayInputStream("Camel".getBytes())));

        exchange.getIn().setBody(blocks);
    })
    .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=stageBlockBlobList&serviceClient=#client")
    .to("mock:result");

- `commitBlockBlobList`:

    from("direct:start")
    .process(exchange -> {
        // We assume here you have the knowledge of these blocks you want to commit
        final List<Block> blockIds = new LinkedList<>();
        blockIds.add(new Block().setName("id-1"));
        blockIds.add(new Block().setName("id-2"));
        blockIds.add(new Block().setName("id-3"));

        exchange.getIn().setBody(blockIds);
    })
    .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=commitBlockBlobList&serviceClient=#client")
    .to("mock:result");

- `getBlobBlockList`:

    from("direct:start")
.to("azure-storage-blob://camelazure/container1?blobName=blob&operation=getBlobBlockList&serviceClient=#client") + .log("${body}") + .to("mock:result"); + +- `createAppendBlob` + + + + from("direct:start") + .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=createAppendBlob&serviceClient=#client") + .to("mock:result"); + +- `commitAppendBlob` + + + + from("direct:start") + .process(exchange -> { + final String data = "Hello world from my awesome tests!"; + final InputStream dataStream = new ByteArrayInputStream(data.getBytes(StandardCharsets.UTF_8)); + + exchange.getIn().setBody(dataStream); + + // of course, you can set whatever headers you like, refer to the headers section to learn more + }) + .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=commitAppendBlob&serviceClient=#client") + .to("mock:result"); + +- `createPageBlob` + + + + from("direct:start") + .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=createPageBlob&serviceClient=#client") + .to("mock:result"); + +- `uploadPageBlob` + + + + from("direct:start") + .process(exchange -> { + byte[] dataBytes = new byte[512]; // we set range for the page from 0-511 + new Random().nextBytes(dataBytes); + final InputStream dataStream = new ByteArrayInputStream(dataBytes); + final PageRange pageRange = new PageRange().setStart(0).setEnd(511); + + exchange.getIn().setHeader(BlobConstants.PAGE_BLOB_RANGE, pageRange); + exchange.getIn().setBody(dataStream); + }) + .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=uploadPageBlob&serviceClient=#client") + .to("mock:result"); + +- `resizePageBlob` + + + + from("direct:start") + .process(exchange -> { + final PageRange pageRange = new PageRange().setStart(0).setEnd(511); + + exchange.getIn().setHeader(BlobConstants.PAGE_BLOB_RANGE, pageRange); + }) + .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=resizePageBlob&serviceClient=#client") + .to("mock:result"); + +- 
`clearPageBlob`:

    from("direct:start")
    .process(exchange -> {
        final PageRange pageRange = new PageRange().setStart(0).setEnd(511);

        exchange.getIn().setHeader(BlobConstants.PAGE_BLOB_RANGE, pageRange);
    })
    .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=clearPageBlob&serviceClient=#client")
    .to("mock:result");

- `getPageBlobRanges`:

    from("direct:start")
    .process(exchange -> {
        final PageRange pageRange = new PageRange().setStart(0).setEnd(511);

        exchange.getIn().setHeader(BlobConstants.PAGE_BLOB_RANGE, pageRange);
    })
    .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=getPageBlobRanges&serviceClient=#client")
    .log("${body}")
    .to("mock:result");

- `copyBlob`:

    from("direct:copyBlob")
    .process(exchange -> {
        exchange.getIn().setHeader(BlobConstants.BLOB_NAME, "file.txt");
        exchange.getMessage().setHeader(BlobConstants.SOURCE_BLOB_CONTAINER_NAME, "containerblob1");
        exchange.getMessage().setHeader(BlobConstants.SOURCE_BLOB_ACCOUNT_NAME, "account");
    })
    .to("azure-storage-blob://account/containerblob2?operation=copyBlob&sourceBlobAccessKey=RAW(accessKey)")
    .to("mock:result");

In this way, the `file.txt` blob in the container `containerblob1` of
the account `account` will be copied to the container `containerblob2`
of the same account.

## SAS Token generation example

SAS Blob Container tokens can be generated programmatically or via the
Azure UI.
To generate the token with Java code, the following can be done:

    BlobContainerClient blobClient = new BlobContainerClientBuilder()
        .endpoint(String.format("https://%s.blob.core.windows.net", accountName))
        .containerName(containerName)
        .credential(new StorageSharedKeyCredential(accountName, accessKey))
        .buildClient();

    // Create a SAS token that's valid for 1 day, as an example
    OffsetDateTime expiryTime = OffsetDateTime.now().plusDays(1);

    // Assign permissions to the SAS token
    BlobContainerSasPermission blobContainerSasPermission = new BlobContainerSasPermission()
        .setWritePermission(true)
        .setListPermission(true)
        .setCreatePermission(true)
        .setDeletePermission(true)
        .setAddPermission(true)
        .setReadPermission(true);

    BlobServiceSasSignatureValues sasSignatureValues = new BlobServiceSasSignatureValues(expiryTime, blobContainerSasPermission);

    return blobClient.generateSas(sasSignatureValues);

The generated SAS token can then be stored in an application.properties
file so that it can be loaded by the Camel route, for example:

    camel.component.azure-storage-blob.sas-token=MY_TOKEN_HERE

    from("direct:copyBlob")
    .to("azure-storage-blob://account/containerblob2?operation=uploadBlockBlob&credentialType=AZURE_SAS")

## Development Notes (Important)

All integration tests use
[Testcontainers](https://www.testcontainers.org/) and run by default.
An Azure accessKey and accountName are needed to be able to run all
integration tests against Azure services. In addition to the mocked
unit tests, you **will need to run the integration tests with every
change you make, or even on a client upgrade, as the Azure client can
break things even on minor version upgrades.** To run the integration
tests, in this component's directory, run the following Maven command:

    mvn verify -DaccountName=myacc -DaccessKey=mykey -DcredentialType=SHARED_ACCOUNT_KEY

where `accountName` is your Azure account name and `accessKey` is the
access key generated from the Azure portal.

## Component Configurations

|Name|Description|Default|Type|
|---|---|---|---|
|blobName|The blob name, to consume a specific blob from a container. However, on the producer it is only required for operations at the blob level.||string|
|blobOffset|Set the blob offset for the upload or download operations; default is 0.|0|integer|
|blobType|The blob type in order to initiate the appropriate settings for each blob type.|blockblob|object|
|closeStreamAfterRead|Close the stream after read or keep it open; default is true.|true|boolean|
|configuration|The component configurations.||object|
|credentials|StorageSharedKeyCredential can be injected to create the Azure client; this holds the important authentication information.||object|
|credentialType|Determines the credential strategy to adopt.|AZURE\_IDENTITY|object|
|dataCount|How many bytes to include in the range. Must be greater than or equal to 0 if specified.||integer|
|fileDir|The file directory where the downloaded blobs will be saved to; this can be used in both the producer and consumer.||string|
|maxResultsPerPage|Specifies the maximum number of blobs to return, including all BlobPrefix elements.
If the request does not specify maxResultsPerPage or specifies a value greater than 5,000, the server will return up to 5,000 items.||integer|
|maxRetryRequests|Specifies the maximum number of additional HTTP Get requests that will be made while reading the data from a response body.|0|integer|
|prefix|Filters the results to return only blobs whose names begin with the specified prefix. May be null to return all blobs.||string|
|regex|Filters the results to return only blobs whose names match the specified regular expression. May be null to return all. If both prefix and regex are set, regex takes priority and prefix is ignored.||string|
|sasToken|In case of usage of Shared Access Signature, we'll need to set a SAS Token.||string|
|serviceClient|Client to a storage account. This client does not hold any state about a particular storage account but is instead a convenient way of sending appropriate requests to the resource on the service. It may also be used to construct URLs to blobs and containers. This client contains operations on a service account. Operations on a container are available on BlobContainerClient through BlobServiceClient#getBlobContainerClient(String), and operations on a blob are available on BlobClient through BlobContainerClient#getBlobClient(String).||object|
|timeout|An optional timeout value beyond which a RuntimeException will be raised.||object|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible.
In other situations we may improve the Camel component to hook into the 3rd party component and make this possible in future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
|blobSequenceNumber|A user-controlled value that you can use to track requests. The value of the sequence number must be between 0 and 2^63 - 1. The default value is 0.|0|integer|
|blockListType|Specifies which type of blocks to return.|COMMITTED|object|
|changeFeedContext|When using the getChangeFeed producer operation, this gives additional context that is passed through the Http pipeline during the service call.||object|
|changeFeedEndTime|When using the getChangeFeed producer operation, this filters the results to return events approximately before the end time. Note: A few events belonging to the next hour can also be returned. A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the end time up by an hour.||object|
|changeFeedStartTime|When using the getChangeFeed producer operation, this filters the results to return events approximately after the start time. Note: A few events belonging to the previous hour can also be returned.
A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the start time down by an hour.||object|
|closeStreamAfterWrite|Close the stream after write or keep it open; default is true.|true|boolean|
|commitBlockListLater|When set to true, the staged blocks will not be committed directly.|true|boolean|
|createAppendBlob|When set to true, the append blob will be created when committing append blocks.|true|boolean|
|createPageBlob|When set to true, the page blob will be created when uploading a page blob.|true|boolean|
|downloadLinkExpiration|Override the default expiration (millis) of the URL download link.||integer|
|lazyStartProducer|Whether the producer should be started lazily (on the first message). By starting lazily you can use this to allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time.|false|boolean|
|operation|The blob operation that can be used with this component on the producer.|listBlobContainers|object|
|pageBlobSize|Specifies the maximum size for the page blob, up to 8 TB. The page blob size must be aligned to a 512-byte boundary.|512|integer|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component.
This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS clients, etc.|true|boolean|
|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component.|true|boolean|
|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean|
|accessKey|Access key for the associated Azure account name, to be used for authentication with Azure blob services.||string|
|sourceBlobAccessKey|Source Blob Access Key: for the copyBlob operation, we need an accessKey for the source blob we want to copy. Passing an accessKey as a header is unsafe, so it can be set as a configuration key instead.||string|

## Endpoint Configurations

|Name|Description|Default|Type|
|---|---|---|---|
|accountName|Azure account name to be used for authentication with Azure blob services.||string|
|containerName|The blob container name.||string|
|blobName|The blob name, to consume a specific blob from a container. However, on the producer it is only required for operations at the blob level.||string|
|blobOffset|Set the blob offset for the upload or download operations; default is 0.|0|integer|
|blobServiceClient|Client to a storage account. This client does not hold any state about a particular storage account but is instead a convenient way of sending appropriate requests to the resource on the service. It may also be used to construct URLs to blobs and containers. This client contains operations on a service account.
Operations on a container are available on BlobContainerClient through getBlobContainerClient(String), and operations on a blob are available on BlobClient through getBlobContainerClient(String).getBlobClient(String).||object| +|blobType|The blob type in order to initiate the appropriate settings for each blob type|blockblob|object| +|closeStreamAfterRead|Close the stream after read or keep it open, default is true|true|boolean| +|credentials|StorageSharedKeyCredential can be injected to create the azure client, this holds the important authentication information||object| +|credentialType|Determines the credential strategy to adopt|AZURE\_IDENTITY|object| +|dataCount|How many bytes to include in the range. Must be greater than or equal to 0 if specified.||integer| +|fileDir|The file directory where the downloaded blobs will be saved to, this can be used in both, producer and consumer||string| +|maxResultsPerPage|Specifies the maximum number of blobs to return, including all BlobPrefix elements. If the request does not specify maxResultsPerPage or specifies a value greater than 5,000, the server will return up to 5,000 items.||integer| +|maxRetryRequests|Specifies the maximum number of additional HTTP Get requests that will be made while reading the data from a response body.|0|integer| +|prefix|Filters the results to return only blobs whose names begin with the specified prefix. May be null to return all blobs.||string| +|regex|Filters the results to return only blobs whose names match the specified regular expression. May be null to return all if both prefix and regex are set, regex takes the priority and prefix is ignored.||string| +|sasToken|In case of usage of Shared Access Signature we'll need to set a SAS Token||string| +|serviceClient|Client to a storage account. This client does not hold any state about a particular storage account but is instead a convenient way of sending off appropriate requests to the resource on the service. 
It may also be used to construct URLs to blobs and containers. This client contains operations on a service account. Operations on a container are available on BlobContainerClient through BlobServiceClient#getBlobContainerClient(String), and operations on a blob are available on BlobClient through BlobContainerClient#getBlobClient(String).||object| +|timeout|An optional timeout value beyond which a RuntimeException will be raised.||object| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|blobSequenceNumber|A user-controlled value that you can use to track requests. The value of the sequence number must be between 0 and 263 - 1.The default value is 0.|0|integer| +|blockListType|Specifies which type of blocks to return.|COMMITTED|object| +|changeFeedContext|When using getChangeFeed producer operation, this gives additional context that is passed through the Http pipeline during the service call.||object| +|changeFeedEndTime|When using getChangeFeed producer operation, this filters the results to return events approximately before the end time. Note: A few events belonging to the next hour can also be returned. A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the end time up by an hour.||object| +|changeFeedStartTime|When using getChangeFeed producer operation, this filters the results to return events approximately after the start time. Note: A few events belonging to the previous hour can also be returned. 
A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the start time down by an hour.||object| +|closeStreamAfterWrite|Close the stream after write or keep it open, default is true|true|boolean| +|commitBlockListLater|When is set to true, the staged blocks will not be committed directly.|true|boolean| +|createAppendBlob|When is set to true, the append blocks will be created when committing append blocks.|true|boolean| +|createPageBlob|When is set to true, the page blob will be created when uploading page blob.|true|boolean| +|downloadLinkExpiration|Override the default expiration (millis) of URL download link.||integer| +|operation|The blob operation that can be used with this component on the producer|listBlobContainers|object| +|pageBlobSize|Specifies the maximum size for the page blob, up to 8 TB. The page blob size must be aligned to a 512-byte boundary.|512|integer| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. 
The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. 
See ScheduledExecutorService in JDK for details.|true|boolean| +|accessKey|Access key for the associated azure account name to be used for authentication with azure blob services||string| +|sourceBlobAccessKey|Source Blob Access Key: for copyblob operation, sadly, we need to have an accessKey for the source blob we want to copy Passing an accessKey as header, it's unsafe so we could set as key.||string| diff --git a/camel-azure-storage-datalake.md b/camel-azure-storage-datalake.md new file mode 100644 index 0000000000000000000000000000000000000000..1263a9313d53310fdb7fd3685e06f105a55731c8 --- /dev/null +++ b/camel-azure-storage-datalake.md @@ -0,0 +1,557 @@ +# Azure-storage-datalake + +**Since Camel 3.8** + +**Both producer and consumer are supported** + +The Azure storage datalake component is used for storing and retrieving +file from Azure Storage Data Lake Service using the **Azure APIs v12**. + +Prerequisites + +You need to have a valid Azure account with Azure storage set up. More +information can be found at [Azure Documentation +Portal](https://docs.microsoft.com/azure/). + +Maven users will need to add the following dependency to their `pom.xml` +for this component. + + + org.apache.camel + camel-azure-storage-datalake + x.x.x + + + +# Uri Format + + azure-storage-datalake:accountName[/fileSystemName][?options] + +In the case of the consumer, both `accountName` and `fileSystemName` are +required. In the case of the producer, it depends on the operation being +requested. + +You can append query options to the URI in the following format: +`?option1=value&option2=value&...` + +## Methods of authentication + +To use this component, you will have to provide at least one of the +specific credentialType parameters: + +- `SHARED_KEY_CREDENTIAL`: Provide `accountName` and `accessKey` for + your azure account or provide StorageSharedKeyCredential instance + which can be provided into `sharedKeyCredential` option. 
+ +- `CLIENT_SECRET`: Provide ClientSecretCredential instance which can + be provided into `clientSecretCredential` option or provide + `accountName`, `clientId`, `clientSecret` and `tenantId` for + authentication with Azure Active Directory. + +- `SERVICE_CLIENT_INSTANCE`: Provide a DataLakeServiceClient instance + which can be provided into `serviceClient` option. + +- `AZURE_IDENTITY`: Use the Default Azure Credential Provider Chain + +- `AZURE_SAS`: Provide `sasSignature` or `sasCredential` parameters to + use SAS mechanism + +The default is `CLIENT_SECRET`. + +# Usage + +For example, to download content from file `test.txt` located on the +`filesystem` in `camelTesting` storage account, use the following +snippet: + + from("azure-storage-datalake:camelTesting/filesystem?fileName=test.txt&accountKey=key"). + to("file://fileDirectory"); + +## Automatic detection of a service client + +The component is capable of automatically detecting the presence of a +DataLakeServiceClient bean in the registry. Hence, if your registry has +only one instance of type DataLakeServiceClient, it will be +automatically used as the default client. You won’t have to explicitly +define it as an uri parameter. + +## Azure Storage DataLake Producer Operations + +The various operations supported by Azure Storage DataLake are as given +below: + +**Operations on Service level** + +For these operations, `accountName` option is required + + ++++ + + + + + + + + + + + + +
|Operation|Description|
|---|---|
|listFileSystem|List all the file systems that are present in the given azure account.|

**Operations on File system level**

For these operations, `accountName` and `fileSystemName` options are
required
|Operation|Description|
|---|---|
|createFileSystem|Create a new file system with the storage account|
|deleteFileSystem|Delete the specified file system within the storage account|
|listPaths|Returns a list of all the files within the given path in the given file system, with the folder structure flattened|

**Operations on Directory level**

For these operations, `accountName`, `fileSystemName` and
`directoryName` options are required
|Operation|Description|
|---|---|
|createFile|Create a new file in the specified directory within the fileSystem|
|deleteDirectory|Delete the specified directory within the file system|

**Operations on file level**

For these operations, `accountName`, `fileSystemName` and `fileName`
options are required
|Operation|Description|
|---|---|
|getFile|Get the contents of a file|
|downloadToFile|Download the entire file from the file system into a path specified by fileDir|
|downloadLink|Generate a download link for the specified file using Shared Access Signature (SAS). The expiration time for the link can be specified; otherwise, 1 hour is used as the default|
|deleteFile|Delete the specified file|
|appendToFile|Appends the data passed to the specified file in the file system. A flush command is required after append|
|flushToFile|Flushes the data already appended to the specified file|
|openQueryInputStream|Opens an `InputStream` based on the query passed to the endpoint. For this operation, you must first register the query acceleration feature with your subscription|
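The `downloadLink` operation falls back to a one-hour expiration when `downloadLinkExpiration` (milliseconds) is not set. The fallback arithmetic can be sketched Camel-free; `expiryFor` is a hypothetical helper for illustration, not part of the component API:

```java
import java.time.Duration;
import java.time.OffsetDateTime;

public class DownloadLinkExpiry {
    // Returns the SAS expiry timestamp: now + the configured millis,
    // or now + 1 hour when no override is given (expirationMillis == null).
    static OffsetDateTime expiryFor(OffsetDateTime now, Long expirationMillis) {
        Duration ttl = expirationMillis == null
                ? Duration.ofHours(1)                  // component default
                : Duration.ofMillis(expirationMillis); // downloadLinkExpiration override
        return now.plus(ttl);
    }
}
```
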
Refer to the examples section below for more details on how to use these operations.

## Consumer Examples

To consume a file from the storage datalake into a file using the file component, this can be done like this:

    from("azure-storage-datalake:cameltesting/filesystem?fileName=test.txt&accountKey=yourAccountKey").
        to("file:/filelocation");

You can also write directly to a file without using the file component. For this, you will need to specify the path in the `fileDir` option to save it to your machine.

    from("azure-storage-datalake:cameltesting/filesystem?fileName=test.txt&accountKey=yourAccountKey&fileDir=/test/directory").
        to("mock:results");

This component also supports the batch consumer, so you can consume multiple files from a file system by specifying the path from where you want to consume the files.

    from("azure-storage-datalake:cameltesting/filesystem?accountKey=yourAccountKey&fileDir=/test/directory&path=abc/test").
        to("mock:results");

## Producer Examples

- `listFileSystem`
    from("direct:start")
        .process(exchange -> {
            // required headers can be added here
            exchange.getIn().setHeader(DataLakeConstants.LIST_FILESYSTEMS_OPTIONS, new ListFileSystemsOptions().setMaxResultsPerPage(10));
        })
        .to("azure-storage-datalake:cameltesting?operation=listFileSystem&dataLakeServiceClient=#serviceClient")
        .to("mock:results");

Wait, the above belongs to `listFileSystem`; the `listPaths` example is:

    from("direct:start")
        .process(exchange -> {
            exchange.getIn().setHeader(DataLakeConstants.LIST_PATH_OPTIONS, new ListPathsOptions().setPath("/main"));
        })
        .to("azure-storage-datalake:cameltesting/filesystem?operation=listPaths&dataLakeServiceClient=#serviceClient")
        .to("mock:results");

- `getFile`

This can be done in two ways. We can either set an `OutputStream` in the exchange body:

    from("direct:start")
        .process(exchange -> {
            // set an OutputStream where the file data should be written
            exchange.getIn().setBody(outputStream);
        })
        .to("azure-storage-datalake:cameltesting/filesystem?operation=getFile&fileName=test.txt&dataLakeServiceClient=#serviceClient")
        .to("mock:results");

Or, if the body is not set, the operation will return an `InputStream`, given that you have already registered for query acceleration in the Azure portal:

    from("direct:start")
        .to("azure-storage-datalake:cameltesting/filesystem?operation=getFile&fileName=test.txt&dataLakeServiceClient=#serviceClient")
        .process(exchange -> {
            InputStream inputStream = exchange.getMessage().getBody(InputStream.class);
            System.out.println(IOUtils.toString(inputStream, StandardCharsets.UTF_8.name()));
        })
        .to("mock:results");

- `deleteFile`

    from("direct:start")
        .to("azure-storage-datalake:cameltesting/filesystem?operation=deleteFile&fileName=test.txt&dataLakeServiceClient=#serviceClient")
        .to("mock:results");

- `downloadToFile`

    from("direct:start")
        .to("azure-storage-datalake:cameltesting/filesystem?operation=downloadToFile&fileName=test.txt&fileDir=/test/mydir&dataLakeServiceClient=#serviceClient")
        .to("mock:results");

- `downloadLink`

    from("direct:start")
        .to("azure-storage-datalake:cameltesting/filesystem?operation=downloadLink&fileName=test.txt&dataLakeServiceClient=#serviceClient")
        .process(exchange -> {
            String link = exchange.getMessage().getBody(String.class);
            System.out.println(link);
        })
        .to("mock:results");
+- `appendToFile` + + + + from("direct:start") + .process(exchange -> { + final String data = "test data"; + final InputStream inputStream = new ByteArrayInputStream(data.getBytes(StandardCharsets.UTF_8)); + exchange.getIn().setBody(inputStream); + }) + .to("azure-storage-datalake:cameltesting/filesystem?operation=appendToFile&fileName=test.txt&dataLakeServiceClient=#serviceClient") + .to("mock:results"); + +- `flushToFile` + + + + from("direct:start") + .process(exchange -> { + exchange.getIn().setHeader(DataLakeConstants.POSITION, 0); + }) + .to("azure-storage-datalake:cameltesting/filesystem?operation=flushToFile&fileName=test.txt&dataLakeServiceClient=#serviceClient") + .to("mock:results"); + +- `openQueryInputStream` + +For this operation, you should have already registered for query +acceleration on the azure portal + + from("direct:start") + .process(exchange -> { + exchange.getIn().setHeader(DataLakeConstants.QUERY_OPTIONS, new FileQueryOptions("SELECT * from BlobStorage")); + }) + .to("azure-storage-datalake:cameltesting/filesystem?operation=openQueryInputStream&fileName=test.txt&dataLakeServiceClient=#serviceClient") + .to("mock:results"); + +- `upload` + + + + from("direct:start") + .process(exchange -> { + final String data = "test data"; + final InputStream inputStream = new ByteArrayInputStream(data.getBytes(StandardCharsets.UTF_8)); + exchange.getIn().setBody(inputStream); + }) + .to("azure-storage-datalake:cameltesting/filesystem?operation=upload&fileName=test.txt&dataLakeServiceClient=#serviceClient") + .to("mock:results"); + +- `uploadFromFile` + + + + from("direct:start") + .process(exchange -> { + exchange.getIn().setHeader(DataLakeConstants.PATH, "test/file.txt"); + }) + .to("azure-storage-datalake:cameltesting/filesystem?operation=uploadFromFile&fileName=test.txt&dataLakeServiceClient=#serviceClient") + .to("mock:results"); + +- `createFile` + + + + from("direct:start") + .process(exchange -> { + 
exchange.getIn().setHeader(DataLakeConstants.DIRECTORY_NAME, "test/file/"); + }) + .to("azure-storage-datalake:cameltesting/filesystem?operation=createFile&fileName=test.txt&dataLakeServiceClient=#serviceClient") + .to("mock:results"); + +- `deleteDirectory` + + + + from("direct:start") + .process(exchange -> { + exchange.getIn().setHeader(DataLakeConstants.DIRECTORY_NAME, "test/file/"); + }) + .to("azure-storage-datalake:cameltesting/filesystem?operation=deleteDirectory&dataLakeServiceClient=#serviceClient") + .to("mock:results"); + +## Testing + +Please run all the unit tests and integration tests while making changes +to the component as changes or version upgrades can break things. For +running all the tests in the component, you will need to obtain azure +`accountName` and `accessKey`. After obtaining the same, you can run the +full test on this component directory by running the following maven +command + + mvn verify -Dazure.storage.account.name= -Dazure.storage.account.key= + +You can also skip the integration test and run only basic unit test by +using the command + + mvn test + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|clientId|client id for azure account||string| +|close|Whether or not a file changed event raised indicates completion (true) or modification (false)||boolean| +|closeStreamAfterRead|check for closing stream after read||boolean| +|configuration|configuration object for data lake||object| +|credentialType|Determines the credential strategy to adopt|CLIENT\_SECRET|object| +|dataCount|count number of bytes to download||integer| +|directoryName|directory of the file to be handled in component||string| +|downloadLinkExpiration|download link expiration time||integer| +|expression|expression for queryInputStream||string| +|fileDir|directory of file to do operations in the local system||string| +|fileName|name of file to be handled in component||string| +|fileOffset|offset position in file for different 
operations||integer| +|maxResults|maximum number of results to show at a time||integer| +|maxRetryRequests|no of retries to a given request||integer| +|openOptions|set open options for creating file||object| +|path|path in azure data lake for operations||string| +|permission|permission string for the file||string| +|position|This parameter allows the caller to upload data in parallel and control the order in which it is appended to the file.||integer| +|recursive|recursively include all paths||boolean| +|regex|regular expression for matching file names||string| +|retainUncommitedData|Whether or not uncommitted data is to be retained after the operation||boolean| +|serviceClient|data lake service client for azure storage data lake||object| +|sharedKeyCredential|shared key credential for azure data lake gen2||object| +|tenantId|tenant id for azure account||string| +|timeout|Timeout for operation||object| +|umask|umask permission for file||string| +|userPrincipalNameReturned|whether or not to use upn||boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|operation|operation to be performed|listFileSystem|object| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. 
You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|accountKey|account key for authentication||string| +|clientSecret|client secret for azure account||string| +|clientSecretCredential|client secret credential for authentication||object| +|sasCredential|SAS token credential||object| +|sasSignature|SAS token signature||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|accountName|name of the azure account||string| +|fileSystemName|name of filesystem to be used||string| +|clientId|client id for azure account||string| +|close|Whether or not a file changed event raised indicates completion (true) or modification (false)||boolean| +|closeStreamAfterRead|check for closing stream after read||boolean| +|credentialType|Determines the credential strategy to adopt|CLIENT\_SECRET|object| +|dataCount|count number of bytes to download||integer| +|dataLakeServiceClient|service client of data lake||object| +|directoryName|directory of the file to be handled in component||string| +|downloadLinkExpiration|download link expiration time||integer| +|expression|expression for queryInputStream||string| +|fileDir|directory of file to do operations in the local system||string| +|fileName|name of file to be handled in component||string| +|fileOffset|offset position in file for different operations||integer| +|maxResults|maximum number of results to show at a time||integer| +|maxRetryRequests|no of retries to a given request||integer| +|openOptions|set open options for creating file||object| +|path|path in azure data lake for operations||string| +|permission|permission string for the file||string| +|position|This parameter allows the caller to upload data in parallel and control the order in which it is appended to the file.||integer| +|recursive|recursively include all paths||boolean| +|regex|regular expression for matching file names||string| +|retainUncommitedData|Whether or not uncommitted 
data is to be retained after the operation||boolean| +|serviceClient|data lake service client for azure storage data lake||object| +|sharedKeyCredential|shared key credential for azure data lake gen2||object| +|tenantId|tenant id for azure account||string| +|timeout|Timeout for operation||object| +|umask|umask permission for file||string| +|userPrincipalNameReturned|whether or not to use upn||boolean| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|operation|operation to be performed|listFileSystem|object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. 
When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. 
See ScheduledExecutorService in JDK for details.|true|boolean|
|accountKey|account key for authentication||string|
|clientSecret|client secret for azure account||string|
|clientSecretCredential|client secret credential for authentication||object|
|sasCredential|SAS token credential||object|
|sasSignature|SAS token signature||string|
diff --git a/camel-azure-storage-queue.md b/camel-azure-storage-queue.md
new file mode 100644
index 0000000000000000000000000000000000000000..c219a0229217fc819ccad22b4c186f5133546118
--- /dev/null
+++ b/camel-azure-storage-queue.md
@@ -0,0 +1,424 @@
# Azure-storage-queue

**Since Camel 3.3**

**Both producer and consumer are supported**

The Azure Storage Queue component supports storing and retrieving messages to/from the [Azure Storage Queue](https://azure.microsoft.com/services/storage/queues/) service using **Azure APIs v12**. For versions above v12, we will evaluate whether this component can adopt those changes, depending on how many breaking changes would result.

Prerequisites

You must have a valid Windows Azure Storage account. More information is available at [Azure Documentation Portal](https://docs.microsoft.com/azure/).

Maven users will need to add the following dependency to their `pom.xml` for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-azure-storage-queue</artifactId>
        <version>x.x.x</version>
    </dependency>

# URI Format

    azure-storage-queue://accountName[/queueName][?options]

In the case of the consumer, `accountName` and `queueName` are required.

In the case of the producer, it depends on the operation being requested: for example, if the operation is on the service level, e.g. `listQueues`, only `accountName` is required, but if the operation is requested on the queue level, e.g. `createQueue`, `sendMessage`, etc., both `accountName` and `queueName` are required.

The queue will be created if it does not already exist.
You can append query options to the URI in the following format: `?option1=value&option2=value&...`

**Required information options:**

To use this component, you have multiple options to provide the required Azure authentication information:

- By providing your own QueueServiceClient instance which can be injected into `serviceClient`.

- Via Azure Identity, when specifying `credentialType=AZURE_IDENTITY` and providing the required [environment variables](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/identity/azure-identity#environment-variables). This enables service principal (e.g. app registration) authentication with secret/certificate as well as username/password.

- Via shared storage account key, when specifying `credentialType=SHARED_ACCOUNT_KEY` and providing `accountName` and `accessKey` for your Azure account. This is the simplest way to get started; the `accessKey` can be generated through the Azure portal. Note that this is the default authentication strategy.

- Via shared storage account key, when specifying `credentialType=SHARED_KEY_CREDENTIAL` and providing a [StorageSharedKeyCredential](https://azuresdkartifacts.blob.core.windows.net/azure-sdk-for-java/staging/apidocs/com/azure/storage/common/StorageSharedKeyCredential.html) instance which can be injected into the `credentials` option.

# Usage

For example, to get message content from the queue `messageQueue` in the `storageAccount` storage account, use the following snippet:

    from("azure-storage-queue://storageAccount/messageQueue?accessKey=yourAccessKey").
        to("file://queuedirectory");

## Advanced Azure Storage Queue configuration

If your Camel application is running behind a firewall, or if you need more control over the `QueueServiceClient` instance configuration, you can create your own instance:

    StorageSharedKeyCredential credential = new StorageSharedKeyCredential("yourAccountName", "yourAccessKey");
    String uri = String.format("https://%s.queue.core.windows.net", "yourAccountName");

    QueueServiceClient client = new QueueServiceClientBuilder()
        .endpoint(uri)
        .credential(credential)
        .buildClient();
    // "context" is the CamelContext
    context.getRegistry().bind("client", client);

Then refer to this instance in your Camel `azure-storage-queue` component configuration:

    from("azure-storage-queue://cameldev/queue1?serviceClient=#client")
        .to("file://outputFolder?fileName=output.txt&fileExist=Append");

## Automatic detection of QueueServiceClient client in registry

The component is capable of detecting the presence of a QueueServiceClient bean in the registry. If it is the only instance of that type, it will be used as the client, and you won’t have to define it as a URI parameter, as in the example above. This can be very useful for smarter configuration of the endpoint.

## Azure Storage Queue Producer operations

The Camel Azure Storage Queue component provides a wide range of operations on the producer side:

**Operations on the service level**

For these operations, `accountName` is **required**.
|Operation|Description|
|---|---|
|`listQueues`|Lists the queues in the storage account that pass the filter, starting at the specified marker.|
**Operations on the queue level**

For these operations, `accountName` and `queueName` are **required**.
|Operation|Description|
|---|---|
|`createQueue`|Creates a new queue.|
|`deleteQueue`|Permanently deletes the queue.|
|`clearQueue`|Deletes all messages in the queue.|
|`sendMessage`|**Default Producer Operation.** Sends a message with a given time-to-live and a timeout period during which the message is invisible in the queue. The message text is evaluated from the exchange message body. By default, if the queue doesn’t exist, an empty queue is created first. To disable this, set the `createQueue` config option or the `CamelAzureStorageQueueCreateQueue` header to `false`.|
|`deleteMessage`|Deletes the specified message in the queue.|
|`receiveMessages`|Retrieves up to the maximum number of messages from the queue and hides them from other operations for the timeout period. It will not, however, dequeue the messages from the queue, for reliability reasons.|
|`peekMessages`|Peeks messages from the front of the queue, up to the maximum number of messages.|
|`updateMessage`|Updates the specified message in the queue with a new message and resets the visibility timeout. The message text is evaluated from the exchange message body.|
Refer to the example section on this page to learn how to use these operations in your Camel application.

## Consumer Examples

To consume a queue into a file component, with a maximum of five messages per batch, you can do the following:

    from("azure-storage-queue://cameldev/queue1?serviceClient=#client&maxMessages=5")
        .to("file://outputFolder?fileName=output.txt&fileExist=Append");

## Producer Operations Examples

- `listQueues`:

        from("direct:start")
            .process(exchange -> {
                // set the header you want the producer to evaluate, refer to the previous
                // section to learn about the headers that can be set
                // e.g., to only return a list of queues with the 'awesome' prefix:
                exchange.getIn().setHeader(QueueConstants.QUEUES_SEGMENT_OPTIONS, new QueuesSegmentOptions().setPrefix("awesome"));
            })
            .to("azure-storage-queue://cameldev?serviceClient=#client&operation=listQueues")
            .log("${body}")
            .to("mock:result");

- `createQueue`:

        from("direct:start")
            .process(exchange -> {
                // set the header you want the producer to evaluate, refer to the previous
                // section to learn about the headers that can be set
                // e.g.:
                exchange.getIn().setHeader(QueueConstants.QUEUE_NAME, "overrideName");
            })
            .to("azure-storage-queue://cameldev/test?serviceClient=#client&operation=createQueue");

- `deleteQueue`:

        from("direct:start")
            .process(exchange -> {
                // set the header you want the producer to evaluate, refer to the previous
                // section to learn about the headers that can be set
                // e.g.:
                exchange.getIn().setHeader(QueueConstants.QUEUE_NAME, "overrideName");
            })
            .to("azure-storage-queue://cameldev/test?serviceClient=#client&operation=deleteQueue");

- `clearQueue`:

        from("direct:start")
            .process(exchange -> {
                // set the header you want the producer to evaluate, refer to the previous
                // section to learn about the headers that can be set
                // e.g.:
                exchange.getIn().setHeader(QueueConstants.QUEUE_NAME, "overrideName");
            })
            .to("azure-storage-queue://cameldev/test?serviceClient=#client&operation=clearQueue");

- `sendMessage`:

        from("direct:start")
            .process(exchange -> {
                // set the header you want the producer to evaluate, refer to the previous
                // section to learn about the headers that can be set
                // e.g.:
                exchange.getIn().setBody("message to send");
                // we set a visibility of 1min
                exchange.getIn().setHeader(QueueConstants.VISIBILITY_TIMEOUT, Duration.ofMinutes(1));
            })
            .to("azure-storage-queue://cameldev/test?serviceClient=#client");

- `deleteMessage`:

        from("direct:start")
            .process(exchange -> {
                // set the header you want the producer to evaluate, refer to the previous
                // section to learn about the headers that can be set
                // e.g.:
                // Mandatory header:
                exchange.getIn().setHeader(QueueConstants.MESSAGE_ID, "1");
                // Mandatory header:
                exchange.getIn().setHeader(QueueConstants.POP_RECEIPT, "PAAAAHEEERXXX-1");
            })
            .to("azure-storage-queue://cameldev/test?serviceClient=#client&operation=deleteMessage");

- `receiveMessages`:

        from("direct:start")
            .to("azure-storage-queue://cameldev/test?serviceClient=#client&operation=receiveMessages")
            .process(exchange -> {
                final List<QueueMessageItem> messageItems = exchange.getMessage().getBody(List.class);
                messageItems.forEach(messageItem -> System.out.println(messageItem.getMessageText()));
            })
            .to("mock:result");

- `peekMessages`:

        from("direct:start")
            .to("azure-storage-queue://cameldev/test?serviceClient=#client&operation=peekMessages")
            .process(exchange -> {
                final List<PeekedMessageItem> messageItems = exchange.getMessage().getBody(List.class);
                messageItems.forEach(messageItem -> System.out.println(messageItem.getMessageText()));
            })
            .to("mock:result");

- `updateMessage`:

        from("direct:start")
            .process(exchange -> {
                // set the header you want the producer to evaluate, refer to the previous
                // section to learn about the headers that can be set
                // e.g.:
                exchange.getIn().setBody("new message text");
                // Mandatory header:
                exchange.getIn().setHeader(QueueConstants.MESSAGE_ID, "1");
                // Mandatory header:
                exchange.getIn().setHeader(QueueConstants.POP_RECEIPT, "PAAAAHEEERXXX-1");
                // Mandatory header:
                exchange.getIn().setHeader(QueueConstants.VISIBILITY_TIMEOUT, Duration.ofMinutes(1));
            })
            .to("azure-storage-queue://cameldev/test?serviceClient=#client&operation=updateMessage");

## Development Notes (Important)

When developing on this component, you will need to obtain your Azure `accessKey` to run the integration tests. In addition to the mocked unit tests, you **will need to run the integration tests with every change you make, and even on client upgrades, as the Azure client can break things even on minor version upgrades.** To run the integration tests, run the following Maven command in this component's directory:

    mvn verify -DaccountName=myacc -DaccessKey=mykey

where `accountName` is your Azure account name and `accessKey` is the access key generated from the Azure portal.

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|configuration|The component configurations||object|
|credentialType|Determines the credential strategy to adopt|SHARED\_ACCOUNT\_KEY|object|
|serviceClient|Service client to a storage account to interact with the queue service. This client does not hold any state about a particular storage account but is instead a convenient way of sending off appropriate requests to the resource on the service. This client contains all the operations for interacting with a queue account in Azure Storage.
Operations allowed by the client are creating, listing, and deleting queues, retrieving and updating properties of the account, and retrieving statistics of the account.||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|createQueue|When is set to true, the queue will be automatically created when sending messages to the queue.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|operation|Queue service operation hint to the producer||object| +|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|maxMessages|Maximum number of messages to get, if there are less messages exist in the queue than requested all the messages will be returned. If left empty only 1 message will be retrieved, the allowed range is 1 to 32 messages.|1|integer| +|messageId|The ID of the message to be deleted or updated.||string| +|popReceipt|Unique identifier that must match for the message to be deleted or updated.||string| +|timeout|An optional timeout applied to the operation. If a response is not returned before the timeout concludes a RuntimeException will be thrown.||object| +|timeToLive|How long the message will stay alive in the queue. If unset the value will default to 7 days, if -1 is passed the message will not expire. The time to live must be -1 or any positive number. The format should be in this form: PnDTnHnMn.nS., e.g: PT20.345S -- parses as 20.345 seconds, P2D -- parses as 2 days However, in case you are using EndpointDsl/ComponentDsl, you can do something like Duration.ofSeconds() since these Java APIs are typesafe.||object| +|visibilityTimeout|The timeout period for how long the message is invisible in the queue. The timeout must be between 1 seconds and 7 days. 
The format should be in this form: PnDTnHnMn.nS., e.g: PT20.345S -- parses as 20.345 seconds, P2D -- parses as 2 days However, in case you are using EndpointDsl/ComponentDsl, you can do something like Duration.ofSeconds() since these Java APIs are typesafe.||object| +|accessKey|Access key for the associated azure account name to be used for authentication with azure queue services||string| +|credentials|StorageSharedKeyCredential can be injected to create the azure client, this holds the important authentication information||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|accountName|Azure account name to be used for authentication with azure queue services||string| +|queueName|The queue resource name||string| +|credentialType|Determines the credential strategy to adopt|SHARED\_ACCOUNT\_KEY|object| +|serviceClient|Service client to a storage account to interact with the queue service. This client does not hold any state about a particular storage account but is instead a convenient way of sending off appropriate requests to the resource on the service. This client contains all the operations for interacting with a queue account in Azure Storage. Operations allowed by the client are creating, listing, and deleting queues, retrieving and updating properties of the account, and retrieving statistics of the account.||object| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. 
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|createQueue|When is set to true, the queue will be automatically created when sending messages to the queue.|false|boolean| +|operation|Queue service operation hint to the producer||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|maxMessages|Maximum number of messages to get, if there are less messages exist in the queue than requested all the messages will be returned. 
If left empty only 1 message will be retrieved, the allowed range is 1 to 32 messages.|1|integer| +|messageId|The ID of the message to be deleted or updated.||string| +|popReceipt|Unique identifier that must match for the message to be deleted or updated.||string| +|timeout|An optional timeout applied to the operation. If a response is not returned before the timeout concludes a RuntimeException will be thrown.||object| +|timeToLive|How long the message will stay alive in the queue. If unset the value will default to 7 days, if -1 is passed the message will not expire. The time to live must be -1 or any positive number. The format should be in this form: PnDTnHnMn.nS., e.g: PT20.345S -- parses as 20.345 seconds, P2D -- parses as 2 days However, in case you are using EndpointDsl/ComponentDsl, you can do something like Duration.ofSeconds() since these Java APIs are typesafe.||object| +|visibilityTimeout|The timeout period for how long the message is invisible in the queue. The timeout must be between 1 seconds and 7 days. The format should be in this form: PnDTnHnMn.nS., e.g: PT20.345S -- parses as 20.345 seconds, P2D -- parses as 2 days However, in case you are using EndpointDsl/ComponentDsl, you can do something like Duration.ofSeconds() since these Java APIs are typesafe.||object| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. 
When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. 
See ScheduledExecutorService in JDK for details.|true|boolean|
|accessKey|Access key for the associated azure account name to be used for authentication with azure queue services||string|
|credentials|StorageSharedKeyCredential can be injected to create the azure client, this holds the important authentication information||object|
diff --git a/camel-bean-validator.md b/camel-bean-validator.md
new file mode 100644
index 0000000000000000000000000000000000000000..672f394f8ba064b6ca6ea59e380cd3789b1c6362
--- /dev/null
+++ b/camel-bean-validator.md
@@ -0,0 +1,218 @@
# Bean-validator

**Since Camel 2.3**

**Only producer is supported**

The Validator component performs bean validation of the message body using the Java Bean Validation API ([JSR 303](http://jcp.org/en/jsr/detail?id=303)). Camel uses the reference implementation, which is [Hibernate Validator](https://docs.jboss.org/hibernate/validator/6.2/reference/en-US/html_single/).

Maven users will need to add the following dependency to their `pom.xml` for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-bean-validator</artifactId>
        <version>x.y.z</version>
    </dependency>

# URI format

    bean-validator:label[?options]

Where **label** is an arbitrary text value describing the endpoint. You can append query options to the URI in the following format: `?option=value&option=value&...`

# OSGi deployment

To use Hibernate Validator in an OSGi environment, use a dedicated `ValidationProviderResolver` implementation, such as `org.apache.camel.component.bean.validator.HibernateValidationProviderResolver`. The snippet below demonstrates this approach.

## Using HibernateValidationProviderResolver

Java

    from("direct:test").
    to("bean-validator://ValidationProviderResolverTest?validationProviderResolver=#myValidationProviderResolver");

If no custom `ValidationProviderResolver` is defined and the validator component has been deployed into the OSGi environment, the `HibernateValidationProviderResolver` will be automatically used.

# Example

Assume we have a Java bean with the following annotations:

**Car.java**

    public class Car {

        @NotNull
        private String manufacturer;

        @NotNull
        @Size(min = 5, max = 14, groups = OptionalChecks.class)
        private String licensePlate;

        // getter and setter
    }

and an interface definition for our custom validation group:

**OptionalChecks.java**

    public interface OptionalChecks {
    }

With the following Camel route, only the **@NotNull** constraints on the attributes `manufacturer` and `licensePlate` will be validated (Camel uses the default group `jakarta.validation.groups.Default`):

    from("direct:start")
        .to("bean-validator://x")
        .to("mock:end");

If you want to check the constraints from the group `OptionalChecks`, you have to define the route like this:

    from("direct:start")
        .to("bean-validator://x?group=OptionalChecks")
        .to("mock:end");

If you want to check the constraints from both groups, you have to define a new interface first:

**AllChecks.java**

    @GroupSequence({Default.class, OptionalChecks.class})
    public interface AllChecks {
    }

And then your route definition should look like this:

    from("direct:start")
        .to("bean-validator://x?group=AllChecks")
        .to("mock:end");

And if you have to provide your own message interpolator, traversable resolver, and constraint validator factory, you have to write a route like this:

    from("direct:start")
        .to("bean-validator://x?group=AllChecks&messageInterpolator=#myMessageInterpolator"
            + "&traversableResolver=#myTraversableResolver&constraintValidatorFactory=#myConstraintValidatorFactory")
        .to("mock:end");
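The `#myMessageInterpolator`, `#myTraversableResolver`, and `#myConstraintValidatorFactory` references above resolve to beans in the Camel registry. As a minimal sketch, in a Spring XML application these could be bound like this (the `com.example.*` class names are hypothetical placeholders for your own implementations):

    <!-- Hypothetical custom implementations; the bean ids must match the
         #references used in the bean-validator endpoint URI. -->
    <bean id="myMessageInterpolator" class="com.example.MyMessageInterpolator"/>
    <bean id="myTraversableResolver" class="com.example.MyTraversableResolver"/>
    <bean id="myConstraintValidatorFactory" class="com.example.MyConstraintValidatorFactory"/>

In plain Java, binding the same ids via `context.getRegistry().bind(...)` works equally well.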
It’s also possible to describe your constraints as XML instead of as Java annotations. In this case, you have to provide the files `META-INF/validation.xml` and `constraints-car.xml`, which could look like this:

validation.xml

    <validation-config
        xmlns="http://jboss.org/xml/ns/javax/validation/configuration"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        <default-provider>org.hibernate.validator.HibernateValidator</default-provider>
        <message-interpolator>org.hibernate.validator.engine.ResourceBundleMessageInterpolator</message-interpolator>
        <traversable-resolver>org.hibernate.validator.engine.resolver.DefaultTraversableResolver</traversable-resolver>
        <constraint-validator-factory>org.hibernate.validator.engine.ConstraintValidatorFactoryImpl</constraint-validator-factory>
        <constraint-mapping>/constraints-car.xml</constraint-mapping>
    </validation-config>

constraints-car.xml

    <constraint-mappings xmlns="http://jboss.org/xml/ns/javax/validation/mapping">
        <default-package>org.apache.camel.component.bean.validator</default-package>
        <bean class="Car" ignore-annotations="true">
            <field name="manufacturer">
                <constraint annotation="jakarta.validation.constraints.NotNull"/>
            </field>
            <field name="licensePlate">
                <constraint annotation="jakarta.validation.constraints.NotNull"/>
                <constraint annotation="jakarta.validation.constraints.Size">
                    <groups>
                        <value>org.apache.camel.component.bean.validator.OptionalChecks</value>
                    </groups>
                    <element name="min">5</element>
                    <element name="max">14</element>
                </constraint>
            </field>
        </bean>
    </constraint-mappings>

Here is the XML syntax for the example route definition, where **OrderedChecks** can be [https://github.com/apache/camel/blob/main/components/camel-bean-validator/src/test/java/org/apache/camel/component/bean/validator/OrderedChecks.java](https://github.com/apache/camel/blob/main/components/camel-bean-validator/src/test/java/org/apache/camel/component/bean/validator/OrderedChecks.java)

Note that the body should include an instance of a class to validate.

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|ignoreXmlConfiguration|Whether to ignore data from the META-INF/validation.xml file.|false|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled.
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
|constraintValidatorFactory|To use a custom ConstraintValidatorFactory||object|
|messageInterpolator|To use a custom MessageInterpolator||object|
|traversableResolver|To use a custom TraversableResolver||object|
|validationProviderResolver|To use a custom ValidationProviderResolver||object|
|validatorFactory|To use a custom ValidatorFactory||object|

## Endpoint Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|label|Where label is an arbitrary text value describing the endpoint||string|
|group|To use a custom validation group|jakarta.validation.groups.Default|string|
|ignoreXmlConfiguration|Whether to ignore data from the META-INF/validation.xml file.|false|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|constraintValidatorFactory|To use a custom ConstraintValidatorFactory||object|
|messageInterpolator|To use a custom MessageInterpolator||object|
|traversableResolver|To use a custom TraversableResolver||object|
|validationProviderResolver|To use a custom ValidationProviderResolver||object|
|validatorFactory|To use a custom ValidatorFactory||object|
diff --git a/camel-bean.md b/camel-bean.md
new file mode 100644
index 0000000000000000000000000000000000000000..5a70f98e274c9230ad1b5fbe85834875305bb5d5
--- /dev/null
+++ b/camel-bean.md
@@ -0,0 +1,124 @@
# Bean

**Since Camel 1.0**

**Only producer is supported**

The Bean component binds beans to Camel message exchanges.

# URI format

    bean:beanName[?options]

Where `beanName` can be any string used to look up the bean in the Registry.

# Examples

A **bean:** endpoint cannot be defined as the input to a route; i.e., you cannot consume from it. You can only route from some inbound message Endpoint, such as a **direct** endpoint, to the bean endpoint as output.

Suppose you have the following POJO class to be used by Camel:

    package com.foo;

    public class MyBean {

        public String saySomething(String input) {
            return "Hello " + input;
        }
    }

Then the bean can be called in a Camel route by the fully qualified class name:

Java

    from("direct:hello")
        .to("bean:com.foo.MyBean");

XML

    <route>
        <from uri="direct:hello"/>
        <to uri="bean:com.foo.MyBean"/>
    </route>

What happens is that when the exchange is routed to MyBean, Camel uses the Bean Binding to invoke the bean, in this case the *saySomething* method, converting the Exchange's In body to the `String` type and storing the output of the method back on the Exchange.
The bean component can also call a bean by *bean id* by looking up the bean in the [Registry](#manual::registry.adoc) instead of using the class name.

# Java DSL specific bean syntax

Java DSL comes with syntactic sugar for the [Bean](#bean-component.adoc) component. Instead of specifying the bean explicitly as the endpoint (i.e., `to("bean:beanName")`) you can use the following syntax:

    // Send a message to the bean endpoint
    // and invoke method using Bean Binding.
    from("direct:start").bean("beanName");

    // Send a message to the bean endpoint
    // and invoke given method.
    from("direct:start").bean("beanName", "methodName");

Instead of passing the name of the reference to the bean (so that Camel will look it up in the [Registry](#manual::registry.adoc)), you can specify the bean itself:

    // Send a message to the given bean instance.
    from("direct:start").bean(new ExampleBean());

    // Explicit selection of bean method to be invoked.
    from("direct:start").bean(new ExampleBean(), "methodName");

    // Camel will create the instance of bean and cache it for you.
    from("direct:start").bean(ExampleBean.class);

This bean could be a lambda if you cast the lambda to a `@FunctionalInterface`:

    @FunctionalInterface
    public interface ExampleInterface {
        @Handler String methodName();
    }

    from("direct:start")
        .bean((ExampleInterface) () -> "");

# Bean Binding

The [Bean Binding](#manual::bean-binding.adoc) mechanism defines how the methods to be invoked are chosen (if they are not specified explicitly through the **method** parameter) and how parameter values are constructed from the Message. These are used throughout all the various [Bean Integration](#manual::bean-integration.adoc) mechanisms in Camel.

See also the related [Bean Language](#languages:bean-language.adoc).
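When a bean has several candidate methods, the **method** parameter pins the one to invoke instead of relying on Bean Binding's choice. A minimal XML route sketch, where the bean id `orderService` and method name `handleOrder` are hypothetical names for a bean registered in the Registry:

    <!-- "orderService" must be bound in the Registry (e.g. a Spring bean);
         method=handleOrder selects the method explicitly. -->
    <route>
        <from uri="direct:start"/>
        <to uri="bean:orderService?method=handleOrder"/>
    </route>

The Java DSL equivalent would be `from("direct:start").to("bean:orderService?method=handleOrder");`.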
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|scope|Scope of bean. When using singleton scope (default) the bean is created or looked up only once and reused for the lifetime of the endpoint. The bean should be thread-safe in case concurrent threads are calling the bean at the same time. When using request scope the bean is created or looked up once per request (exchange). This can be used if you want to store state on a bean while processing a request and you want to call the same bean instance multiple times while processing the request. The bean does not have to be thread-safe as the instance is only called from the same request. When using prototype scope, the bean will be looked up or created per call. However, in case of lookup, this is delegated to the bean registry such as Spring or CDI (if in use), which depending on its configuration can act as either singleton or prototype scope. So when using prototype scope, this depends on the delegated registry.|Singleton|object|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|beanInfoCacheSize|Maximum cache size of internal cache for bean introspection. Setting a value of 0 or negative will disable the cache.|1000|integer|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|beanName|Sets the name of the bean to invoke||string|
+|method|Sets the name of the method to invoke on the bean||string|
+|scope|Scope of bean. When using singleton scope (default) the bean is created or looked up only once and reused for the lifetime of the endpoint. The bean should be thread-safe in case concurrent threads are calling the bean at the same time. When using request scope the bean is created or looked up once per request (exchange). This can be used if you want to store state on a bean while processing a request and you want to call the same bean instance multiple times while processing the request. The bean does not have to be thread-safe as the instance is only called from the same request. When using prototype scope, the bean will be looked up or created per call. However, in case of lookup, this is delegated to the bean registry such as Spring or CDI (if in use), which depending on its configuration can act as either singleton or prototype scope. So when using prototype scope, this depends on the delegated registry.|Singleton|object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|parameters|Used for configuring additional properties on the bean||object|
diff --git a/camel-bonita.md b/camel-bonita.md
new file mode 100644
index 0000000000000000000000000000000000000000..ee15cefa3c965fa321a9543544307504fe229332
--- /dev/null
+++ b/camel-bonita.md
@@ -0,0 +1,60 @@
+# Bonita
+
+**Since Camel 2.19**
+
+**Only producer is supported**
+
+Used for communicating with a remote Bonita BPM process engine.
+
+# URI format
+
+    bonita://[operation]?[options]
+
+Where **operation** is the specific action to perform on Bonita.
+
+# Body content
+
+For the startCase operation, the input variables are retrieved from the
+body of the message. It has to contain a `Map`.
+
+# Examples
+
+The following example starts a new case in Bonita:
+
+    from("direct:start").to("bonita:startCase?hostname=localhost&port=8080&processName=TestProcess&username=install&password=install");
+
+# Dependencies
+
+To use Bonita in your Camel routes, you need to add a dependency on
+**camel-bonita**, which implements the component.
+
+If you use Maven, you can add the following to your pom.xml,
+substituting the version number for the latest and greatest release (see
+the download page for the latest versions).
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-bonita</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|operation|Operation to use||object| +|hostname|Hostname where Bonita engine runs|localhost|string| +|port|Port of the server hosting Bonita engine|8080|string| +|processName|Name of the process involved in the operation||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|password|Password to authenticate to Bonita engine.||string| +|username|Username to authenticate to Bonita engine.||string| diff --git a/camel-box.md b/camel-box.md new file mode 100644 index 0000000000000000000000000000000000000000..5f1cb53f4b9c630eb91465dcc74bb41746209d01 --- /dev/null +++ b/camel-box.md @@ -0,0 +1,138 @@ +# Box + +**Since Camel 2.14** + +**Both producer and consumer are supported** + +The Box component provides access to all the Box.com APIs accessible +using [Box Java SDK](https://github.com/box/box-java-sdk/). It allows +producing messages to upload and download files, create, edit, and +manage folders, etc. It also supports APIs that allow polling for +updates to user accounts and even changes to enterprise accounts, etc. + +Box.com requires the use of OAuth2.0 for all client application +authentications. To use camel-box with your account, you’ll need to +create a new application within Box.com at +[https://developer.box.com](https://developer.box.com/). The Box +application’s client id and secret will allow access to Box APIs which +require a current user. A user access token is generated and managed by +the API for an end user. + +Maven users will need to add the following dependency to their pom.xml +for this component: + + + org.apache.camel + camel-box + ${camel-version} + + +# Connection Authentication Types + +The Box component supports three different types of authenticated +connections. + +## Standard Authentication + +**Standard Authentication** uses the **OAuth 2.0 three-legged +authentication process** to authenticate its connections with Box.com. +This type of authentication enables Box **managed users** and **external +users** to access, edit, and save their Box content through the Box +component. 
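As a rough illustration, a standard-authentication endpoint is just a `box://apiName/methodName` URI carrying the credential options from the configuration tables (`userName`, `userPassword`, `clientId`, and so on). The helper below only assembles such a URI string; all credential values are placeholders:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class BoxUriSketch {

    // Builds a URI of the form box://apiName/methodName?opt=value&...
    // Option names come from the endpoint configuration tables below;
    // the values used in main() are placeholders, not real credentials.
    public static String endpointUri(String apiName, String methodName,
                                     Map<String, String> options) {
        String query = options.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining("&"));
        return "box://" + apiName + "/" + methodName
                + (query.isEmpty() ? "" : "?" + query);
    }

    public static void main(String[] args) {
        Map<String, String> options = new LinkedHashMap<>();
        options.put("userName", "box.user@example.com");
        options.put("userPassword", "secret");
        options.put("clientId", "myClientId");
        System.out.println(endpointUri("files", "upload", options));
    }
}
```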
+ +## App Enterprise Authentication + +**App Enterprise Authentication** uses the **OAuth 2.0 with JSON Web +Tokens (JWT)** to authenticate its connections as a **Service Account** +for a **Box Application**. This type of authentication enables a service +account to access, edit, and save the Box content of its **Box +Application** through the Box component. + +## App User Authentication + +**App User Authentication** uses the **OAuth 2.0 with JSON Web Tokens +(JWT)** to authenticate its connections as an **App User** for a **Box +Application**. This type of authentication enables app users to access, +edit, and save their Box content in its **Box Application** through the +Box component. + +# Samples + +The following route uploads new files to the user’s root folder: + + from("file:...") + .to("box://files/upload/inBody=fileUploadRequest"); + +The following route polls user’s account for updates: + + from("box://events/listen?startingPosition=-1") + .to("bean:blah"); + +The following route uses a producer with dynamic header options. 
The +**fileId** property has the Box file id and the **output** property has +the output stream of the file contents, so they are assigned to the +**CamelBox.fileId** header and **CamelBox.output** header respectively +as follows: + + from("direct:foo") + .setHeader("CamelBox.fileId", header("fileId")) + .setHeader("CamelBox.output", header("output")) + .to("box://files/download") + .to("file://..."); + +## More information + +See more details at the Box API reference: +[https://developer.box.com/reference](https://developer.box.com/reference) + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|clientId|Box application client ID||string| +|configuration|To use the shared configuration||object| +|enterpriseId|The enterprise ID to use for an App Enterprise.||string| +|userId|The user ID to use for an App User.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|httpParams|Custom HTTP params for settings like proxy host||object| +|authenticationType|The type of authentication for connection. Types of Authentication: STANDARD\_AUTHENTICATION - OAuth 2.0 (3-legged) SERVER\_AUTHENTICATION - OAuth 2.0 with JSON Web Tokens|APP\_USER\_AUTHENTICATION|string| +|accessTokenCache|Custom Access Token Cache for storing and retrieving access tokens.||object| +|clientSecret|Box application client secret||string| +|encryptionAlgorithm|The type of encryption algorithm for JWT. 
Supported Algorithms: RSA\_SHA\_256 RSA\_SHA\_384 RSA\_SHA\_512|RSA\_SHA\_256|object| +|maxCacheEntries|The maximum number of access tokens in cache.|100|integer| +|privateKeyFile|The private key for generating the JWT signature.||string| +|privateKeyPassword|The password for the private key.||string| +|publicKeyId|The ID for public key for validating the JWT signature.||string| +|sslContextParameters|To configure security using SSLContextParameters.||object| +|userName|Box user name, MUST be provided||string| +|userPassword|Box user password, MUST be provided if authSecureStorage is not set, or returns null on first call||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|apiName|What kind of operation to perform||object| +|methodName|What sub operation to use for the selected operation||string| +|clientId|Box application client ID||string| +|enterpriseId|The enterprise ID to use for an App Enterprise.||string| +|inBody|Sets the name of a parameter to be passed in the exchange In Body||string| +|userId|The user ID to use for an App User.||string| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|httpParams|Custom HTTP params for settings like proxy host||object|
+|authenticationType|The type of authentication for connection. Types of Authentication: STANDARD\_AUTHENTICATION - OAuth 2.0 (3-legged) SERVER\_AUTHENTICATION - OAuth 2.0 with JSON Web Tokens|APP\_USER\_AUTHENTICATION|string|
+|accessTokenCache|Custom Access Token Cache for storing and retrieving access tokens.||object|
+|clientSecret|Box application client secret||string|
+|encryptionAlgorithm|The type of encryption algorithm for JWT. Supported Algorithms: RSA\_SHA\_256 RSA\_SHA\_384 RSA\_SHA\_512|RSA\_SHA\_256|object|
+|maxCacheEntries|The maximum number of access tokens in cache.|100|integer|
+|privateKeyFile|The private key for generating the JWT signature.||string|
+|privateKeyPassword|The password for the private key.||string|
+|publicKeyId|The ID for the public key for validating the JWT signature.||string|
+|sslContextParameters|To configure security using SSLContextParameters.||object|
+|userName|Box user name, MUST be provided||string|
+|userPassword|Box user password, MUST be provided if authSecureStorage is not set, or returns null on first call||string|
diff --git a/camel-braintree.md b/camel-braintree.md
new file mode 100644
index 0000000000000000000000000000000000000000..3b64fa650db08132962fdbd596f05520dc885594
--- /dev/null
+++ b/camel-braintree.md
@@ -0,0 +1,101 @@
+# Braintree
+
+**Since Camel 2.17**
+
+**Only producer is supported**
+
+The Braintree component provides access to [Braintree
+Payments](https://www.braintreepayments.com/) through their [Java
+SDK](https://developers.braintreepayments.com/start/hello-server/java).
+
+All client applications need API credentials to process payments.
To use
+camel-braintree with your account, you’ll need to create a new
+[Sandbox](https://www.braintreepayments.com/get-started) or
+[Production](https://www.braintreepayments.com/signup) account.
+
+Maven users will need to add the following dependency to their pom.xml
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-braintree</artifactId>
+        <version>${camel-version}</version>
+    </dependency>
+
+# Examples
+
+Java
+
+    from("direct://GENERATE")
+        .to("braintree://clientToken/generate");
+
+OSGi Blueprint
+
+Starting from Camel 4, OSGi Blueprint is considered a **legacy** DSL.
+Users are strongly advised to migrate to the modern XML IO DSL.
+
+# More Information
+
+For more information on the endpoints and options see the Braintree
+references at
+[https://developers.braintreepayments.com/reference/overview](https://developers.braintreepayments.com/reference/overview)
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component.
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|configuration|Component configuration||object|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|apiName|What kind of operation to perform||object|
+|methodName|What sub operation to use for the selected operation||string|
+|environment|The environment. Either SANDBOX or PRODUCTION||string|
+|inBody|Sets the name of a parameter to be passed in the exchange In Body||string|
+|merchantId|The merchant id provided by Braintree.||string|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|httpReadTimeout|Set read timeout for http calls.||integer|
+|httpLogLevel|Set logging level for http calls, see java.util.logging.Level||string|
+|httpLogName|Set log category to use to log http calls.|Braintree|string|
+|logHandlerEnabled|Sets whether to enable the BraintreeLogHandler. It may be desirable to set this to 'false' where an existing JUL - SLF4J logger bridge is on the classpath. This option can also be configured globally on the BraintreeComponent.|true|boolean|
+|proxyHost|The proxy host||string|
+|proxyPort|The proxy port||integer|
+|accessToken|The access token granted by a merchant to another in order to process transactions on their behalf.
Used in place of environment, merchant id, public key and private key fields.||string|
+|privateKey|The private key provided by Braintree.||string|
+|publicKey|The public key provided by Braintree.||string|
diff --git a/camel-browse.md b/camel-browse.md
new file mode 100644
index 0000000000000000000000000000000000000000..0d668a2c060fb054fb370b99ee6aa8b224b52a00
--- /dev/null
+++ b/camel-browse.md
@@ -0,0 +1,57 @@
+# Browse
+
+**Since Camel 1.3**
+
+**Both producer and consumer are supported**
+
+The Browse component provides a simple BrowsableEndpoint which can be
+useful for testing, visualization tools or debugging. The exchanges sent
+to the endpoint are all available to be browsed.
+
+# URI format
+
+    browse:someId[?options]
+
+Where *someId* can be any string to uniquely identify the endpoint.
+
+# Sample
+
+In the route below, we insert a `browse:` component to be able to browse
+the Exchanges that are passing through:
+
+    from("activemq:order.in").to("browse:orderReceived").to("bean:processOrder");
+
+We can now inspect the received exchanges from within the Java code:
+
+    private CamelContext context;
+
+    public void inspectReceivedOrders() {
+        BrowsableEndpoint browse = context.getEndpoint("browse:orderReceived", BrowsableEndpoint.class);
+        List<Exchange> exchanges = browse.getExchanges();
+
+        // then we can inspect the list of received exchanges from Java
+        for (Exchange exchange : exchanges) {
+            String payload = exchange.getIn().getBody(String.class);
+            // do something with payload
+        }
+    }
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown.
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|name|A name which can be any string to uniquely identify the endpoint||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. 
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-caffeine-cache.md b/camel-caffeine-cache.md new file mode 100644 index 0000000000000000000000000000000000000000..6332ee204f18e286520b315b8da6acdcddbd57a7 --- /dev/null +++ b/camel-caffeine-cache.md @@ -0,0 +1,97 @@ +# Caffeine-cache + +**Since Camel 2.20** + +**Only producer is supported** + +The Caffeine Cache component enables you to perform caching operations +using the simple cache from Caffeine. 
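In a nutshell, the component maps cache actions such as PUT and GET onto a Caffeine cache and reports the outcome in message headers. The toy, map-backed sketch below illustrates that model only; the real component delegates to a Caffeine `Cache`, and the literal header names here mirror the `CaffeineConstants` values as an assumption:

```java
import java.util.HashMap;
import java.util.Map;

public class CaffeineActionSketch {

    private final Map<Object, Object> cache = new HashMap<>();
    // Toy stand-ins for the CaffeineConstants result headers.
    private final Map<String, Object> headers = new HashMap<>();

    // Toy action dispatch: PUT stores the body under the key,
    // GET returns the cached value as the new body.
    public Object process(String action, Object key, Object body) {
        Object result = body;
        switch (action) {
            case "PUT":
                cache.put(key, body);
                headers.put("CamelCaffeineActionHasResult", false);
                break;
            case "GET":
                result = cache.get(key);
                headers.put("CamelCaffeineActionHasResult", result != null);
                break;
            default:
                throw new IllegalArgumentException("Unsupported action: " + action);
        }
        headers.put("CamelCaffeineActionSucceeded", true);
        return result;
    }

    public Map<String, Object> headers() {
        return headers;
    }

    public static void main(String[] args) {
        CaffeineActionSketch route = new CaffeineActionSketch();
        route.process("PUT", "1", "Test!");
        System.out.println(route.process("GET", "1", null)); // Test!
    }
}
```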
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-caffeine</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    caffeine-cache://cacheName[?options]
+
+You can append query options to the URI in the following format:
+`?option=value&option=#beanRef&...`.
+
+# Examples
+
+You can use your cache with the following code:
+
+    @BindToRegistry("cache")
+    Cache cache = Caffeine.newBuilder().recordStats().build();
+
+    @Override
+    protected RouteBuilder createRouteBuilder() throws Exception {
+        return new RouteBuilder() {
+            public void configure() {
+                from("direct://start")
+                    .to("caffeine-cache://cache?action=PUT&key=1")
+                    .to("caffeine-cache://cache?key=1&action=GET")
+                    .log("Test! ${body}")
+                    .to("mock:result");
+            }
+        };
+    }
+
+This way, you always work on the same cache in the registry.
+
+# Checking the operation result
+
+Each time you invoke an operation on the cache, two different headers
+are set for you to check the status:
+
+- `CaffeineConstants.ACTION_HAS_RESULT`
+
+- `CaffeineConstants.ACTION_SUCCEEDED`
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|action|To configure the default cache action. If an action is set in the message header, then the operation from the header takes precedence.||string|
+|createCacheIfNotExist|Automatically create the Caffeine cache if none has been configured or exists in the registry.|true|boolean|
+|evictionType|Set the eviction Type for this cache|SIZE\_BASED|object|
+|expireAfterAccessTime|Specifies that each entry should be automatically removed from the cache once a fixed duration has elapsed after the entry's creation, the most recent replacement of its value, or its last read. Access time is reset by all cache read and write operations.
The unit is in seconds.|300|integer| +|expireAfterWriteTime|Specifies that each entry should be automatically removed from the cache once a fixed duration has elapsed after the entry's creation, or the most recent replacement of its value. The unit is in seconds.|300|integer| +|initialCapacity|Sets the minimum total size for the internal data structures. Providing a large enough estimate at construction time avoids the need for expensive resizing operations later, but setting this value unnecessarily high wastes memory.||integer| +|key|To configure the default action key. If a key is set in the message header, then the key from the header takes precedence.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|maximumSize|Specifies the maximum number of entries the cache may contain. Note that the cache may evict an entry before this limit is exceeded or temporarily exceed the threshold while evicting. As the cache size grows close to the maximum, the cache evicts entries that are less likely to be used again. For example, the cache may evict an entry because it hasn't been used recently or very often. When size is zero, elements will be evicted immediately after being loaded into the cache. This can be useful in testing, or to disable caching temporarily without a code change. 
As eviction is scheduled on the configured executor, tests may instead prefer to configure the cache to execute tasks directly on the same thread.||integer| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|cacheLoader|To configure a CacheLoader in case of a LoadCache use||object| +|configuration|Sets the global component configuration||object| +|removalListener|Set a specific removal Listener for the cache||object| +|statsCounter|Set a specific Stats Counter for the cache stats||object| +|statsEnabled|To enable stats on the cache|false|boolean| +|valueType|The cache value type, default java.lang.Object||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|cacheName|Cache name||string| +|action|To configure the default cache action. If an action is set in the message header, then the operation from the header takes precedence.||string| +|createCacheIfNotExist|Automatic create the Caffeine cache if none has been configured or exists in the registry.|true|boolean| +|evictionType|Set the eviction Type for this cache|SIZE\_BASED|object| +|expireAfterAccessTime|Specifies that each entry should be automatically removed from the cache once a fixed duration has elapsed after the entry's creation, the most recent replacement of its value, or its last read. Access time is reset by all cache read and write operations. The unit is in seconds.|300|integer| +|expireAfterWriteTime|Specifies that each entry should be automatically removed from the cache once a fixed duration has elapsed after the entry's creation, or the most recent replacement of its value. 
The unit is in seconds.|300|integer| +|initialCapacity|Sets the minimum total size for the internal data structures. Providing a large enough estimate at construction time avoids the need for expensive resizing operations later, but setting this value unnecessarily high wastes memory.||integer| +|key|To configure the default action key. If a key is set in the message header, then the key from the header takes precedence.||string| +|maximumSize|Specifies the maximum number of entries the cache may contain. Note that the cache may evict an entry before this limit is exceeded or temporarily exceed the threshold while evicting. As the cache size grows close to the maximum, the cache evicts entries that are less likely to be used again. For example, the cache may evict an entry because it hasn't been used recently or very often. When size is zero, elements will be evicted immediately after being loaded into the cache. This can be useful in testing, or to disable caching temporarily without a code change. As eviction is scheduled on the configured executor, tests may instead prefer to configure the cache to execute tasks directly on the same thread.||integer| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|cacheLoader|To configure a CacheLoader in case of a LoadCache use||object| +|removalListener|Set a specific removal Listener for the cache||object| +|statsCounter|Set a specific Stats Counter for the cache stats||object| +|statsEnabled|To enable stats on the cache|false|boolean| +|valueType|The cache value type, default java.lang.Object||string| diff --git a/camel-caffeine-loadcache.md b/camel-caffeine-loadcache.md new file mode 100644 index 0000000000000000000000000000000000000000..8a84c6895614dca002fea0d14ee1c685e01884e9 --- /dev/null +++ b/camel-caffeine-loadcache.md @@ -0,0 +1,68 @@ +# Caffeine-loadcache + +**Since Camel 2.20** + +**Only producer is supported** + +The Caffeine LoadCache component enables you to perform caching +operations using the LoadingCache from Caffeine. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-caffeine + x.x.x + + + +# URI format + + caffeine-loadcache://cacheName[?options] + +You can append query options to the URI in the following format: +`?option=value&option=#beanRef&...` + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|action|To configure the default cache action. If an action is set in the message header, then the operation from the header takes precedence.||string| +|createCacheIfNotExist|Automatic create the Caffeine cache if none has been configured or exists in the registry.|true|boolean| +|evictionType|Set the eviction Type for this cache|SIZE\_BASED|object| +|expireAfterAccessTime|Specifies that each entry should be automatically removed from the cache once a fixed duration has elapsed after the entry's creation, the most recent replacement of its value, or its last read. 
Access time is reset by all cache read and write operations. The unit is in seconds.|300|integer| +|expireAfterWriteTime|Specifies that each entry should be automatically removed from the cache once a fixed duration has elapsed after the entry's creation, or the most recent replacement of its value. The unit is in seconds.|300|integer| +|initialCapacity|Sets the minimum total size for the internal data structures. Providing a large enough estimate at construction time avoids the need for expensive resizing operations later, but setting this value unnecessarily high wastes memory.||integer| +|key|To configure the default action key. If a key is set in the message header, then the key from the header takes precedence.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|maximumSize|Specifies the maximum number of entries the cache may contain. Note that the cache may evict an entry before this limit is exceeded or temporarily exceed the threshold while evicting. As the cache size grows close to the maximum, the cache evicts entries that are less likely to be used again. For example, the cache may evict an entry because it hasn't been used recently or very often. When size is zero, elements will be evicted immediately after being loaded into the cache. This can be useful in testing, or to disable caching temporarily without a code change. 
As eviction is scheduled on the configured executor, tests may instead prefer to configure the cache to execute tasks directly on the same thread.||integer| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|cacheLoader|To configure a CacheLoader in case of a LoadCache use||object| +|configuration|Sets the global component configuration||object| +|removalListener|Set a specific removal Listener for the cache||object| +|statsCounter|Set a specific Stats Counter for the cache stats||object| +|statsEnabled|To enable stats on the cache|false|boolean| +|valueType|The cache value type, default java.lang.Object||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|cacheName|the cache name||string| +|action|To configure the default cache action. If an action is set in the message header, then the operation from the header takes precedence.||string| +|createCacheIfNotExist|Automatic create the Caffeine cache if none has been configured or exists in the registry.|true|boolean| +|evictionType|Set the eviction Type for this cache|SIZE\_BASED|object| +|expireAfterAccessTime|Specifies that each entry should be automatically removed from the cache once a fixed duration has elapsed after the entry's creation, the most recent replacement of its value, or its last read. Access time is reset by all cache read and write operations. The unit is in seconds.|300|integer| +|expireAfterWriteTime|Specifies that each entry should be automatically removed from the cache once a fixed duration has elapsed after the entry's creation, or the most recent replacement of its value. 
The unit is in seconds.|300|integer| +|initialCapacity|Sets the minimum total size for the internal data structures. Providing a large enough estimate at construction time avoids the need for expensive resizing operations later, but setting this value unnecessarily high wastes memory.||integer| +|key|To configure the default action key. If a key is set in the message header, then the key from the header takes precedence.||string| +|maximumSize|Specifies the maximum number of entries the cache may contain. Note that the cache may evict an entry before this limit is exceeded or temporarily exceed the threshold while evicting. As the cache size grows close to the maximum, the cache evicts entries that are less likely to be used again. For example, the cache may evict an entry because it hasn't been used recently or very often. When size is zero, elements will be evicted immediately after being loaded into the cache. This can be useful in testing, or to disable caching temporarily without a code change. As eviction is scheduled on the configured executor, tests may instead prefer to configure the cache to execute tasks directly on the same thread.||integer| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|cacheLoader|To configure a CacheLoader in case of a LoadCache use||object| +|removalListener|Set a specific removal Listener for the cache||object| +|statsCounter|Set a specific Stats Counter for the cache stats||object| +|statsEnabled|To enable stats on the cache|false|boolean| +|valueType|The cache value type, default java.lang.Object||string| diff --git a/camel-chatscript.md b/camel-chatscript.md new file mode 100644 index 0000000000000000000000000000000000000000..34f81b28f501816ed337603104c4cfe1519491cb --- /dev/null +++ b/camel-chatscript.md @@ -0,0 +1,42 @@ +# Chatscript + +**Since Camel 3.0** + +**Only producer is supported** + +The ChatScript component allows you to interact with [ChatScript +Server](https://github.com/ChatScript/ChatScript) and have +conversations. This component is stateless and relies on ChatScript to +maintain chat history. + +This component expects a JSON with the following fields: + + { + "username": "name here", + "botname": "name here", + "body": "body here" + } + +Refer to the file +[`ChatScriptMessage.java`](https://github.com/apache/camel/blob/main/components/camel-chatscript/src/main/java/org/apache/camel/component/chatscript/ChatScriptMessage.java) +for details and samples. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|host|Hostname or IP of the server on which CS server is running||string| +|port|Port on which ChatScript is listening to|1024|integer| +|botName|Name of the Bot in CS to converse with||string| +|chatUserName|Username who initializes the CS conversation. To be set when chat is initialized from camel route||string| +|resetChat|Issues :reset command to start a new conversation everytime|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-chunk.md new file mode 100644 index 0000000000000000000000000000000000000000..c0064b9cf0a24bdec9fa8b4df28d4a8978104ba2 --- /dev/null +++ b/camel-chunk.md @@ -0,0 +1,158 @@ +# Chunk + +**Since Camel 2.15** + +**Only producer is supported** + +The Chunk component allows for processing a message using a +[Chunk](http://www.x5software.com/chunk/examples/ChunkExample?loc=en_US) +template. This can be ideal when using Templating to generate responses +for requests. + +# URI format + + chunk:templateName[?options] + +Where **templateName** is the classpath-local URI of the template to +invoke. + +You can append query options to the URI in the following format: +`?option=value&option=value&...` + +The Chunk component will look for a specific template in the *themes* folder +with extensions *.chtml* or *.cxml*. If you need to specify a +different folder or extensions, you will need to use the specific +options listed below. + +# Chunk Context + +Camel will provide exchange information in the Chunk context (just a +`Map`). The `Exchange` is transferred as:
+|key|value|
+|---|---|
+|exchange|The Exchange itself.|
+|exchange.properties|The Exchange properties.|
+|variables|The variables|
+|headers|The headers of the In message.|
+|camelContext|The Camel Context.|
+|request|The In message.|
+|body|The In message body.|
+|response|The Out message (only for InOut message exchange pattern).|
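These context entries can be referenced directly from a template. As an illustration (the header name `firstName` is hypothetical, following the email sample later in this page), a theme file could contain:

    Hello {$headers.firstName},

    we received your message: {$body}

Here `headers.firstName` resolves against the In message headers and `body` against the In message body, as listed in the table.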
+ +# Dynamic templates + +Camel provides two headers by which you can define a different resource +location for a template or the template content itself. If any of these +headers is set, then Camel uses this over the endpoint configured +resource. This allows you to provide a dynamic template at runtime. + +# Samples + +For example, you could use something like: + + from("activemq:My.Queue"). + to("chunk:template"); + +To use a Chunk template to formulate a response for a message for InOut +message exchanges (where there is a `JMSReplyTo` header). + +If you want to use InOnly and consume the message and send it to another +destination, you could use: + + from("activemq:My.Queue"). + to("chunk:template"). + to("activemq:Another.Queue"); + +It’s possible to specify what template the component should use +dynamically via a header, so for example: + + from("direct:in"). + setHeader(ChunkConstants.CHUNK_RESOURCE_URI).constant("template"). + to("chunk:dummy?allowTemplateFromHeader=true"); + +An example of Chunk component options use: + + from("direct:in"). + to("chunk:file_example?themeFolder=template&themeSubfolder=subfolder&extension=chunk"); + +In this example, the Chunk component will look for the file +`file_example.chunk` in the folder `template/subfolder`. + +# The Email Sample + +In this sample, we want to use Chunk templating for an order +confirmation email. The email template is laid out in Chunk as: + + Dear {$headers.lastName}, {$headers.firstName} + + Thanks for the order of {$headers.item}. + + Regards Camel Riders Bookstore + {$body} + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|allowContextMapAll|Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. 
Doing so imposes a potential security risk as this opens access to the full power of CamelContext API.|false|boolean| +|allowTemplateFromHeader|Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|resourceUri|Path to the resource. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod.||string| +|allowContextMapAll|Sets whether the context map should allow access to all details.
By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so impose a potential security risk as this opens access to the full power of CamelContext API.|false|boolean| +|allowTemplateFromHeader|Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care.|false|boolean| +|contentCache|Sets whether to use resource content cache or not|false|boolean| +|encoding|Define the encoding of the body||string| +|extension|Define the file extension of the template||string| +|themeFolder|Define the themes folder to scan||string| +|themeLayer|Define the theme layer to elaborate||string| +|themeSubfolder|Define the themes subfolder to scan||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-class.md b/camel-class.md new file mode 100644 index 0000000000000000000000000000000000000000..a2bbb38a1801fbd0dfc23d286076b4e94c924581 --- /dev/null +++ b/camel-class.md @@ -0,0 +1,79 @@ +# Class + +**Since Camel 2.4** + +**Only producer is supported** + +The Class component binds beans to Camel message exchanges. 
It works in +the same way as the [Bean](#bean-component.adoc) component, but instead +of looking up beans from a Registry, it creates the bean based on the +class name. + +# URI format + + class:className[?options] + +Where `className` is the fully qualified class name to create and use as +bean. + +# Using + +You simply use the **class** component just as the +[Bean](#bean-component.adoc) component but by specifying the fully +qualified class name instead. For example to use the `MyFooBean` you +have to do as follows: + + from("direct:start") + .to("class:org.apache.camel.component.bean.MyFooBean") + .to("mock:result"); + +You can also specify which method to invoke on the `MyFooBean`, for +example `hello`: + + from("direct:start") + .to("class:org.apache.camel.component.bean.MyFooBean?method=hello") + .to("mock:result"); + +# Setting properties on the created instance + +In the endpoint uri you can specify properties to set on the created +instance, for example, if it has a `setPrefix` method: + + from("direct:start") + .to("class:org.apache.camel.component.bean.MyPrefixBean?bean.prefix=Bye") + .to("mock:result"); + +And you can also use the `#` syntax to refer to properties to be looked +up in the Registry. + + from("direct:start") + .to("class:org.apache.camel.component.bean.MyPrefixBean?bean.cool=#foo") + .to("mock:result"); + +Which will look up a bean from the Registry with the id `foo` and invoke +the `setCool` method on the created instance of the `MyPrefixBean` +class. + +See more details on the [Bean](#bean-component.adoc) component as the +**class** component works in much the same way. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|scope|Scope of bean. When using singleton scope (default) the bean is created or looked up only once and reused for the lifetime of the endpoint. The bean should be thread-safe in case concurrent threads is calling the bean at the same time. When using request scope the bean is created or looked up once per request (exchange). This can be used if you want to store state on a bean while processing a request and you want to call the same bean instance multiple times while processing the request. The bean does not have to be thread-safe as the instance is only called from the same request. When using delegate scope, then the bean will be looked up or created per call. However in case of lookup then this is delegated to the bean registry such as Spring or CDI (if in use), which depends on their configuration can act as either singleton or prototype scope. so when using prototype then this depends on the delegated registry.|Singleton|object| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|beanInfoCacheSize|Maximum cache size of internal cache for bean introspection. 
Setting a value of 0 or negative will disable the cache.|1000|integer| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|beanName|Sets the name of the bean to invoke||string| +|method|Sets the name of the method to invoke on the bean||string| +|scope|Scope of bean. When using singleton scope (default) the bean is created or looked up only once and reused for the lifetime of the endpoint. The bean should be thread-safe in case concurrent threads is calling the bean at the same time. When using request scope the bean is created or looked up once per request (exchange). This can be used if you want to store state on a bean while processing a request and you want to call the same bean instance multiple times while processing the request. The bean does not have to be thread-safe as the instance is only called from the same request. When using prototype scope, then the bean will be looked up or created per call. However in case of lookup then this is delegated to the bean registry such as Spring or CDI (if in use), which depends on their configuration can act as either singleton or prototype scope. so when using prototype then this depends on the delegated registry.|Singleton|object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|parameters|Used for configuring additional properties on the bean||object| diff --git a/camel-cm-sms.md b/camel-cm-sms.md new file mode 100644 index 0000000000000000000000000000000000000000..9d932c3457000c338ab8f0d886ab88b131c92d36 --- /dev/null +++ b/camel-cm-sms.md @@ -0,0 +1,42 @@ +# Cm-sms + +**Since Camel 2.18** + +**Only producer is supported** + +**Camel-Cm-Sms** is an [Apache Camel](http://camel.apache.org/) +component for the [CM SMS Gateway](https://www.cmtelecom.com) + +It allows integrating [CM SMS +API](https://dashboard.onlinesmsgateway.com/docs) in an application as a +camel component. + +You must have a valid account. More information is available at [CM +Telecom](https://www.cmtelecom.com/support). + +# Sample + + cm-sms://sgw01.cm.nl/gateway.ashx?defaultFrom=DefaultSender&defaultMaxNumberOfParts=8&productToken=xxxxx + +You can try [this project](https://github.com/oalles/camel-cm-sample) to +see how camel-cm-sms can be integrated in a camel route. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|host|SMS Provider HOST with scheme||string| +|defaultFrom|This is the sender name. The maximum length is 11 characters.||string| +|defaultMaxNumberOfParts|If it is a multipart message forces the max number. Message can be truncated. Technically the gateway will first check if a message is larger than 160 characters, if so, the message will be cut into multiple 153 characters parts limited by these parameters.|8|integer| +|productToken|The unique token to use||string| +|testConnectionOnStartup|Whether to test the connection to the SMS Gateway on startup|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-coap.md new file mode 100644 index 0000000000000000000000000000000000000000..5b3afaf05492349d3e7de088db868143cc4ce295 --- /dev/null +++ b/camel-coap.md @@ -0,0 +1,111 @@ +# Coap + +**Since Camel 2.16** + +**Both producer and consumer are supported** + +Camel-CoAP is an [Apache Camel](http://camel.apache.org/) component that +allows you to work with CoAP, a lightweight REST-type protocol for +machine-to-machine operation. [CoAP](http://coap.technology/), +the Constrained Application Protocol, is a specialized web transfer protocol +for use with constrained nodes and constrained networks, and it is based +on RFC 7252. + +Camel supports the DTLS, TCP and TLS protocols via the following URI +schemes:
+|Scheme|Protocol|
+|---|---|
+|coap|UDP|
+|coaps|UDP + DTLS|
+|coap+tcp|TCP|
+|coaps+tcp|TCP + TLS|
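As a sketch of how these schemes appear in routes (host, port, resource path, and the `#mySslContext` registry bean are illustrative, not taken from this page), note that 5683 and 5684 are the default CoAP ports for plain UDP and DTLS respectively:

    // consume requests arriving at a plain-UDP CoAP resource
    from("coap://localhost:5683/sensors/temperature")
        .log("Observed reading: ${body}");

    // invoke a DTLS-protected resource, TLS details taken from a
    // registered SSLContextParameters bean (assumed to exist)
    from("direct:secureRead")
        .to("coaps://iot.example.org:5684/sensors/temperature?sslContextParameters=#mySslContext");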
There are a number of different configuration options to configure TLS. +For both DTLS (the "coaps" uri scheme) and TCP + TLS (the "coaps+tcp" +uri scheme), it is possible to use a "sslContextParameters" parameter, +from which the camel-coap component will extract the required truststore +/ keystores etc. to set up TLS. In addition, the DTLS protocol supports +two alternative configuration mechanisms. To use a pre-shared key, +configure a pskStore, and to work with raw public keys, configure +privateKey + publicKey objects. + +Maven users will need to add the following dependency to their pom.xml +for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-coap</artifactId>
        <version>x.x.x</version>
    </dependency>

+# Configuring the CoAP producer request method + +The following rules determine which request method the CoAP producer +will use to invoke the target URI: + +1. The value of the `CamelCoapMethod` header + +2. **GET** if a query string is provided on the target CoAP server URI. + +3. **POST** if the message exchange body is not null. + +4. **GET** otherwise. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases.
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|uri|The URI for the CoAP endpoint||string| +|coapMethodRestrict|Comma separated list of methods that the CoAP consumer will bind to. The default is to bind to all methods (DELETE, GET, POST, PUT).||string| +|observable|Make CoAP resource observable for source endpoint, based on RFC 7641.|false|boolean| +|observe|Send an observe request from a source endpoint, based on RFC 7641.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|notify|Notify observers that the resource of this URI has changed, based on RFC 7641. Use this flag on a destination endpoint, with a URI that matches an existing source endpoint URI.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|advancedCertificateVerifier|Set the AdvancedCertificateVerifier to use to determine trust in raw public keys.||object| +|advancedPskStore|Set the AdvancedPskStore to use for pre-shared key.||object| +|alias|Sets the alias used to query the KeyStore for the private key and certificate. 
This parameter is used when we are enabling TLS with certificates on the service side, and similarly on the client side when TLS is used with certificates and client authentication. If the parameter is not specified then the default behavior is to use the first alias in the keystore that contains a key entry. This configuration parameter does not apply to configuring TLS via a Raw Public Key or a Pre-Shared Key.||string| +|cipherSuites|Sets the cipherSuites String. This is a comma separated String of ciphersuites to configure. If it is not specified, then it falls back to getting the ciphersuites from the sslContextParameters object.||string| +|clientAuthentication|Sets the configuration options for server-side client-authentication requirements. The value must be one of NONE, WANT, REQUIRE. If this value is not specified, then it falls back to checking the sslContextParameters.getServerParameters().getClientAuthentication() value.||object| +|privateKey|Set the configured private key for use with Raw Public Key.||object| +|publicKey|Set the configured public key for use with Raw Public Key.||object| +|recommendedCipherSuitesOnly|The CBC cipher suites are not recommended. If you want to use them, you first need to set the recommendedCipherSuitesOnly option to false.|true|boolean| +|sslContextParameters|Set the SSLContextParameters object for setting up TLS. 
This is required for coaps+tcp, and for coaps when we are using certificates for TLS (as opposed to RPK or PSK).||object|
diff --git a/camel-cometd.md b/camel-cometd.md
new file mode 100644
index 0000000000000000000000000000000000000000..d93676a74ec4525f180492ee6a3ed05e80901885
--- /dev/null
+++ b/camel-cometd.md
@@ -0,0 +1,148 @@
+# Cometd
+
+**Since Camel 2.0**
+
+**Both producer and consumer are supported**
+
+The Cometd component is a transport mechanism for working with the
+[jetty](http://www.mortbay.org/jetty) implementation of the
+[cometd/bayeux
+protocol](http://docs.codehaus.org/display/JETTY/Cometd+%28aka+Bayeux%29).
+Using this component in combination with the dojo toolkit library, it’s
+possible to push Camel messages directly into the browser using an
+AJAX-based mechanism.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-cometd</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    cometd://host:port/channelName[?options]
+
+The **channelName** represents a topic that can be subscribed to by the
+Camel endpoints.
+
+    cometd://localhost:8080/service/mychannel
+    cometds://localhost:8443/service/mychannel
+
+where `cometds:` represents an SSL configured endpoint.
+
+# Samples
+
+Below, you can find some examples of how to pass the parameters.
+
+For file, for webapp resources located in the Web Application directory
+-> `cometd://localhost:8080?resourceBase=file:./webapp`. 
+
+For classpath, when, for example, the web resources are packaged inside
+the webapp folder ->
+`cometd://localhost:8080?resourceBase=classpath:webapp`
+
+# Authentication
+
+You can configure custom `SecurityPolicy` and `Extension`s on the
+`CometdComponent`, which allows you to use authentication as [documented
+here](http://cometd.org/documentation/howtos/authentication).
+
+# Setting up SSL for Cometd Component
+
+## Using the JSSE Configuration Utility
+
+The Cometd component supports SSL/TLS configuration through the [Camel
+JSSE Configuration
+Utility](#manual::camel-configuration-utilities.adoc). This utility
+greatly decreases the amount of component-specific code you need to
+write and is configurable at the endpoint and component levels. The
+following examples demonstrate how to use the utility with the Cometd
+component. You need to configure SSL on the CometdComponent.
+
+Java
+Programmatic configuration of the component:
+
+    KeyStoreParameters ksp = new KeyStoreParameters();
+    ksp.setResource("/users/home/server/keystore.jks");
+    ksp.setPassword("keystorePassword");
+
+    KeyManagersParameters kmp = new KeyManagersParameters();
+    kmp.setKeyStore(ksp);
+    kmp.setKeyPassword("keyPassword");
+
+    TrustManagersParameters tmp = new TrustManagersParameters();
+    tmp.setKeyStore(ksp);
+
+    SSLContextParameters scp = new SSLContextParameters();
+    scp.setKeyManagers(kmp);
+    scp.setTrustManagers(tmp);
+
+    CometdComponent cometdComponent = getContext().getComponent("cometds", CometdComponent.class);
+    cometdComponent.setSslContextParameters(scp);
+
+Spring XML
+Declarative configuration of the equivalent SSL context parameters
+(reconstructed to mirror the Java configuration above):
+
+    <camel:sslContextParameters id="sslContextParameters">
+        <camel:keyManagers keyPassword="keyPassword">
+            <camel:keyStore resource="/users/home/server/keystore.jks" password="keystorePassword"/>
+        </camel:keyManagers>
+        <camel:trustManagers>
+            <camel:keyStore resource="/users/home/server/keystore.jks" password="keystorePassword"/>
+        </camel:trustManagers>
+    </camel:sslContextParameters>
+
+    ...
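+
+For instance, once the component is configured, a minimal producer route
+can push periodic messages onto a channel that browser clients subscribe
+to (the host, port, and channel name below are illustrative, not taken
+from this guide):
+
+    from("timer:update?period=5000")
+        .setBody(constant("Update from Camel"))
+        .to("cometd://localhost:8080/service/updates");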
+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|extensions|To use a list of custom BayeuxServer.Extension that allows modifying incoming and outgoing requests.||array|
+|securityPolicy|To use a custom configured SecurityPolicy to control authorization||object|
+|sslContextParameters|To configure security using SSLContextParameters||object|
+|sslKeyPassword|The password for the keystore when using SSL.||string|
+|sslKeystore|The path to the keystore.||string|
+|sslPassword|The password when using SSL.||string|
+|useGlobalSslContextParameters|Enable usage of global SSL context parameters.|false|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|host|Hostname||string|
+|port|Host port number||integer|
+|channelName|The channelName represents a topic that can be subscribed to by the Camel endpoints.||string|
+|allowedOrigins|The origin domains allowed for cross-origin requests, if crossOriginFilterOn is true|\*|string|
+|baseResource|The root directory for the web resources or classpath. Use the protocol file: or classpath: depending on whether you want the component to load the resource from the file system or the classpath. Classpath is required for OSGI deployment where the resources are packaged in the jar||string|
+|crossOriginFilterOn|If true, the server will support cross-domain filtering|false|boolean|
+|filterPath|The filterPath will be used by the CrossOriginFilter, if crossOriginFilterOn is true||string|
+|interval|The client side poll timeout in milliseconds. How long a client will wait between reconnects||integer|
+|jsonCommented|If true, the server will accept JSON wrapped in a comment and will generate JSON wrapped in a comment. This is a defence against Ajax Hijacking.|true|boolean|
+|logLevel|Logging level. 0=none, 1=info, 2=debug.|1|integer|
+|maxInterval|The max client side poll timeout in milliseconds. 
A client will be removed if a connection is not received in this time.|30000|integer| +|multiFrameInterval|The client side poll timeout, if multiple connections are detected from the same browser.|1500|integer| +|timeout|The server side poll timeout in milliseconds. This is how long the server will hold a reconnect request before responding.|240000|integer| +|sessionHeadersEnabled|Whether to include the server session headers in the Camel message when creating a Camel Message for incoming requests.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|disconnectLocalSession|Whether to disconnect local sessions after publishing a message to its channel. 
Disconnecting local sessions is needed as they are not swept by default by CometD, and therefore you can run out of memory.|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-consul.md b/camel-consul.md
new file mode 100644
index 0000000000000000000000000000000000000000..b2b8cf9dc42ac0449826aa7f1e45d55280d08ed5
--- /dev/null
+++ b/camel-consul.md
@@ -0,0 +1,199 @@
+# Consul
+
+**Since Camel 2.18**
+
+**Both producer and consumer are supported**
+
+The Consul component integrates your application
+with [HashiCorp Consul](https://www.consul.io/).
+
+Maven users will need to add the following dependency to their pom.xml
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-consul</artifactId>
+        <version>${camel-version}</version>
+    </dependency>
+
+# URI format
+
+    consul://domain?[options]
+
+# Api Endpoint
+
+The `apiEndpoint` denotes the type of [consul
+api](https://www.consul.io/api-docs) which should be addressed.
+
+|Domain|Producer|Consumer|
+|---|---|---|
+|kv|ConsulKeyValueProducer|ConsulKeyValueConsumer|
+|event|ConsulEventProducer|ConsulEventConsumer|
+|agent|ConsulAgentProducer|-|
+|coordinates|ConsulCoordinatesProducer|-|
+|health|ConsulHealthProducer|-|
+|status|ConsulStatusProducer|-|
+|preparedQuery|ConsulPreparedQueryProducer|-|
+|catalog|ConsulCatalogProducer|-|
+|session|ConsulSessionProducer|-|
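+
+For example, the kv domain has both a producer and a consumer. The
+following sketch puts a value under a key and watches the same key for
+changes (the key name is illustrative, and `ConsulKeyValueActions.PUT`
+is assumed to follow the `ConsulXXXActions` naming convention used by
+the producer actions):
+
+    from("direct:updateConfig")
+        .setHeader(ConsulConstants.CONSUL_KEY, constant("app/config/greeting"))
+        .setHeader(ConsulConstants.CONSUL_ACTION, constant(ConsulKeyValueActions.PUT))
+        .to("consul:kv");
+
+    from("consul:kv?key=app/config/greeting&valueAsString=true")
+        .log("Key changed: ${body}");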
+
+# Producer Examples
+
+As an example, we will show how to use the `ConsulAgentProducer` to
+register a service by means of the Consul agent api.
+
+Registering and unregistering are examples of possible actions against
+the Consul agent api.
+
+The desired action can be defined by setting the header
+`ConsulConstants.CONSUL_ACTION` to a value from the `ConsulXXXActions`
+interface of the respective Consul api. E.g. `ConsulAgentActions`
+contains the actions for the agent api.
+
+If you set `CONSUL_ACTION` to `ConsulAgentActions.REGISTER`, the agent
+action `REGISTER` will be executed.
+
+Which Consul api call a producer action invokes is defined by the
+respective producer. E.g., the `ConsulAgentProducer` maps
+`ConsulAgentActions.REGISTER` to an invocation of
+`AgentClient.register`.
+
+    from("direct:registerFooService")
+        .setBody().constant(ImmutableRegistration.builder()
+            .id("foo-1")
+            .name("foo")
+            .address("localhost")
+            .port(80)
+            .build())
+        .setHeader(ConsulConstants.CONSUL_ACTION, constant(ConsulAgentActions.REGISTER))
+        .to("consul:agent");
+
+It is also possible to set a default action on the consul endpoint and
+omit the header:
+
+    consul:agent?action=REGISTER
+
+# Registering Camel Routes with Consul
+
+You can employ a `ServiceRegistrationRoutePolicy` to register Camel
+routes as services with Consul automatically.
+
+    from("jetty:http://0.0.0.0:8080/service/endpoint").routeId("foo-1")
+        .routeProperty(ServiceDefinition.SERVICE_META_ID, "foo-1")
+        .routeProperty(ServiceDefinition.SERVICE_META_NAME, "foo")
+        .routePolicy(new ServiceRegistrationRoutePolicy())
+    ...
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|connectTimeout|Connect timeout for OkHttpClient||object|
+|consulClient|Reference to a org.kiwiproject.consul.Consul in the registry.||object|
+|key|The default key. 
Can be overridden by CamelConsulKey||string| +|pingInstance|Configure if the AgentClient should attempt a ping before returning the Consul instance|true|boolean| +|readTimeout|Read timeout for OkHttpClient||object| +|tags|Set tags. You can separate multiple tags by comma.||string| +|url|The Consul agent URL||string| +|writeTimeout|Write timeout for OkHttpClient||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|action|The default action. Can be overridden by CamelConsulAction||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|valueAsString|Default to transform values retrieved from Consul i.e. 
on KV endpoint to string.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|configuration|Consul configuration||object|
+|consistencyMode|The consistencyMode used for queries, default ConsistencyMode.DEFAULT|DEFAULT|object|
+|datacenter|The data center||string|
+|nearNode|The near node to use for queries.||string|
+|nodeMeta|The node metadata to use for queries.||array|
+|aclToken|Sets the ACL token to be used with Consul||string|
+|password|Sets the password to be used for basic authentication||string|
+|sslContextParameters|SSL configuration using an org.apache.camel.support.jsse.SSLContextParameters instance.||object|
+|useGlobalSslContextParameters|Enable usage of global SSL context parameters.|false|boolean|
+|userName|Sets the username to be used for basic authentication||string|
+|blockSeconds|The number of seconds to wait for a watch event (default 10)|10|integer|
+|firstIndex|The first index to watch from (default 0)|0|object|
+|recursive|Whether to watch recursively (default false)|false|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|apiEndpoint|The API endpoint||string|
+|connectTimeout|Connect timeout for OkHttpClient||object|
+|consulClient|Reference to a org.kiwiproject.consul.Consul in the registry.||object|
+|key|The default key. Can be overridden by CamelConsulKey||string|
+|pingInstance|Configure if the AgentClient should attempt a ping before returning the Consul instance|true|boolean|
+|readTimeout|Read timeout for OkHttpClient||object|
+|tags|Set tags. 
You can separate multiple tags by comma.||string| +|url|The Consul agent URL||string| +|writeTimeout|Write timeout for OkHttpClient||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|action|The default action. Can be overridden by CamelConsulAction||string| +|valueAsString|Default to transform values retrieved from Consul i.e. on KV endpoint to string.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|consistencyMode|The consistencyMode used for queries, default ConsistencyMode.DEFAULT|DEFAULT|object|
+|datacenter|The data center||string|
+|nearNode|The near node to use for queries.||string|
+|nodeMeta|The node metadata to use for queries.||array|
+|aclToken|Sets the ACL token to be used with Consul||string|
+|password|Sets the password to be used for basic authentication||string|
+|sslContextParameters|SSL configuration using an org.apache.camel.support.jsse.SSLContextParameters instance.||object|
+|userName|Sets the username to be used for basic authentication||string|
+|blockSeconds|The number of seconds to wait for a watch event (default 10)|10|integer|
+|firstIndex|The first index to watch from (default 0)|0|object|
+|recursive|Whether to watch recursively (default false)|false|boolean|
diff --git a/camel-controlbus.md b/camel-controlbus.md
new file mode 100644
index 0000000000000000000000000000000000000000..662243ae9295146d6623e7ed8607385072008160
--- /dev/null
+++ b/camel-controlbus.md
@@ -0,0 +1,150 @@
+# Controlbus
+
+**Since Camel 2.11**
+
+**Only producer is supported**
+
+The [Control Bus](http://www.eaipatterns.com/ControlBus.html) from the
+EIP patterns allows for the integration system to be monitored and
+managed from within the framework.
+
+*(figure: Control Bus EIP pattern)*
+
+
+Use a Control Bus to manage an enterprise integration system. The
+Control Bus uses the same messaging mechanism used by the application
+data, but uses separate channels to transmit data that is relevant to
+the management of components involved in the message flow.
+
+In Camel, you can manage and monitor the application:
+
+- using JMX.
+
+- by using a Java API from the `CamelContext`.
+
+- from the `org.apache.camel.api.management` package.
+
+- using the event notifier.
+
+- using the ControlBus component.
+
+The ControlBus component provides easy management of Camel applications
+based on the [Control Bus](#controlbus-component.adoc) EIP pattern. For
+example, by sending a message to an endpoint, you can control the
+lifecycle of routes, or gather performance statistics.
+
+    controlbus:command[?options]
+
+Where `command` can be any string to identify which type of command to
+use.
+
+# Commands
+
+|Command|Description|
+|---|---|
+|route|To control routes using the routeId and action parameter.|
+|language|Allows you to specify a Language to use for evaluating the message body. If there is any result from the evaluation, then the result is put in the message body.|
+
+# Using route command
+
+The route command allows you to do common tasks on a given route very
+easily. For example, to start a route, you can send an empty message to
+this endpoint:
+
+    template.sendBody("controlbus:route?routeId=foo&action=start", null);
+
+To get the status of the route, you can do:
+
+    String status = template.requestBody("controlbus:route?routeId=foo&action=status", null, String.class);
+
+# Getting performance statistics
+
+This requires JMX to be enabled (it is enabled by default); you can then
+get the performance statistics per route, or for the CamelContext. For
+example, to get the statistics for a route named foo, we can use:
+
+    String xml = template.requestBody("controlbus:route?routeId=foo&action=stats", null, String.class);
+
+The returned statistics are in XML format. It is the same data you can
+get from JMX with the `dumpRouteStatsAsXml` operation on the
+`ManagedRouteMBean`.
+
+To get statistics for the entire `CamelContext`, you just omit the
+routeId parameter as shown below:
+
+    String xml = template.requestBody("controlbus:route?action=stats", null, String.class);
+
+# Using Simple language
+
+You can use the [Simple](#languages:simple-language.adoc) language with
+the control bus. For example, to stop a specific route, you can send a
+message to the `"controlbus:language:simple"` endpoint containing the
+following message:
+
+    template.sendBody("controlbus:language:simple", "${camelContext.getRouteController().stopRoute('myRoute')}");
+
+As this is a void operation, no result is returned. However, if you want
+the route status, you can use:
+
+    String status = template.requestBody("controlbus:language:simple", "${camelContext.getRouteController().getRouteStatus('myRoute')}", String.class);
+
+It’s easier to use the `route` command to control the lifecycle of routes. 
+The `language` command allows you to execute a language script that is
+more powerful, such as [Groovy](#languages:groovy-language.adoc) or, to
+some extent, the [Simple](#languages:simple-language.adoc) language.
+
+For example, to shut down Apache Camel itself, you can do:
+
+    template.sendBody("controlbus:language:simple?async=true", "${camelContext.stop()}");
+
+We use `async=true` to stop Camel asynchronously as otherwise we would
+be trying to stop Camel while it was in-flight processing the message we
+sent to the control bus component.
+
+You can also use other languages such as
+[Groovy](#languages:groovy-language.adoc), etc.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|command|Command can be either route or language||string|
+|language|Allows you to specify the name of a Language to use for evaluating the message body. 
If there is any result from the evaluation, then the result is put in the message body.||object|
+|action|To denote an action that can be either: start, stop, or status. To either start or stop a route, or to get the status of the route as output in the message body. You can use suspend and resume to either suspend or resume a route. You can use stats to get performance statistics returned in XML format; the routeId option can be used to define which route to get the performance stats for, if routeId is not defined, then you get statistics for the entire CamelContext. The restart action will restart the route. The fail action will stop the route and mark it as failed (stopped due to an exception)||string|
+|async|Whether to execute the control bus task asynchronously. Important: If this option is enabled, then any result from the task is not set on the Exchange. This is only possible if executing tasks synchronously.|false|boolean|
+|loggingLevel|Logging level used for logging when task is done, or if any exceptions occurred during processing the task.|INFO|object|
+|restartDelay|The delay in millis to use when restarting a route.|1000|integer|
+|routeId|To specify a route by its id. The special keyword current indicates the current route.||string|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-couchbase.md b/camel-couchbase.md
new file mode 100644
index 0000000000000000000000000000000000000000..82e391e718092186a2608ad25d9fdf09f9044a71
--- /dev/null
+++ b/camel-couchbase.md
@@ -0,0 +1,105 @@
+# Couchbase
+
+**Since Camel 2.19**
+
+**Both producer and consumer are supported**
+
+The **couchbase:** component allows you to treat
+[Couchbase](https://www.couchbase.com/) instances as a producer or
+consumer of messages.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-couchbase</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    couchbase:url
+
+# Couchbase SDK compatibility
+
+Using collections and scopes is supported only for Couchbase Server 7.0
+and later.
+
+This component currently uses Java SDK 3.x, so it might not be
+compatible with older Couchbase servers. Check the compatibility
+[page](https://docs.couchbase.com/java-sdk/current/project-docs/compatibility.html)
+for details.
+
+- The value formerly interpreted as a bucket-name is now interpreted
+  as a username. The username must correspond to a user defined on the
+  cluster that is being accessed.
+
+- The value formerly interpreted as a bucket-password is now
+  interpreted as the password of the defined user.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. 
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. 
You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|protocol|The protocol to use||string|
+|hostname|The hostname to use||string|
+|port|The port number to use|8091|integer|
+|bucket|The bucket to use||string|
+|collection|The collection to use||string|
+|key|The key to use||string|
+|scope|The scope to use||string|
+|consumerProcessedStrategy|Define the consumer Processed strategy to use|none|string|
+|descending|Define if this operation is descending or not|false|boolean|
+|designDocumentName|The design document name to use|beer|string|
+|fullDocument|If true, the consumer will return the complete document instead of the data defined in the view|false|boolean|
+|limit|The output limit to use|-1|integer|
+|rangeEndKey|Define a range for the end key||string|
+|rangeStartKey|Define a range for the start key||string|
+|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean|
+|skip|Define the skip to use|-1|integer|
+|viewName|The view name to use|brewery\_beers|string|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|autoStartIdForInserts|Define if we want an autostart Id when we are doing an insert operation|false|boolean| +|operation|The operation to do|CCB\_PUT|string| +|persistTo|Where to persist the data|0|integer| +|producerRetryAttempts|Define the number of retry attempts|2|integer| +|producerRetryPause|Define the retry pause between different attempts|5000|integer| +|replicateTo|Where to replicate the data|0|integer| +|startingIdForInsertsFrom|Define the starting Id where we are doing an insert operation||integer| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|additionalHosts|The additional hosts||string| +|connectTimeout|Define the timeoutconnect in milliseconds|30000|duration| +|queryTimeout|Define the operation timeout in milliseconds|2500|duration| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. 
Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| +|password|The password to use||string| +|username|The username to use||string| diff --git a/camel-couchdb.md b/camel-couchdb.md new file mode 100644 index 0000000000000000000000000000000000000000..d08abf91bfde5e0e791fdc21f7826e2baadc3cd2 --- /dev/null +++ b/camel-couchdb.md @@ -0,0 +1,122 @@ +# Couchdb + +**Since Camel 2.11** + +**Both producer and consumer are supported** + +The **couchdb:** component allows you to treat +[CouchDB](http://couchdb.apache.org/) instances as a producer or +consumer of messages. Using the lightweight LightCouch API, this camel +component has the following features: + +- As a consumer, monitors couch changesets for inserts, updates and + deletes and publishes these as messages into camel routes. + +- As a producer, can save, update, delete (by using `CouchDbMethod` + with `DELETE` value) documents and get documents by id (by using + `CouchDbMethod` with GET value) into CouchDB. + +- Can support as many endpoints as required, eg for multiple databases + across multiple instances. + +- Ability to have events trigger for only deletes, only + inserts/updates or all (default). + +- Headers set for sequenceId, document revision, document id, and HTTP + method type. + +CouchDB 3.x is not supported. 
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-couchdb</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    couchdb:http://hostname[:port]/database?[options]
+
+Where **hostname** is the hostname of the running couchdb instance. Port
+is optional and, if not specified, defaults to 5984.
+
+Headers are set by the consumer once the message is received. The
+producer will also set the headers for downstream processors once the
+insert/update has taken place. Any headers set prior to the producer are
+ignored. That means, for example, if you set CouchDbId as a header, it
+will not be used as the id for insertion; the id of the document will
+still be used.
+
+# Message Body
+
+The component will use the message body as the document to be inserted.
+If the body is an instance of String, then it will be marshaled into a
+GSON object before insert. This means that the string must be valid JSON
+or the insert/update will fail. If the body is an instance of a
+`com.google.gson.JsonElement`, then it will be inserted as is. Otherwise,
+the producer will throw an unsupported body type exception.
+
+To update a CouchDB document, its `id` and `rev` fields must be part of
+the JSON payload routed to CouchDB by Camel.
+
+# Samples
+
+For example, if you wish to consume all inserts, updates and deletes
+from a CouchDB instance running locally, on port 9999, then you could
+use the following:
+
+    from("couchdb:http://localhost:9999").process(someProcessor);
+
+If you were only interested in deletes, then you could use the
+following:
+
+    from("couchdb:http://localhost:9999?updates=false").process(someProcessor);
+
+If you want to insert a message as a document, then the body of the
+exchange is used:
+
+    from("someProducingEndpoint").process(someProcessor).to("couchdb:http://localhost:9999");
+
+To start tracking the changes immediately after an update sequence,
+implement a custom resume strategy.
To do so, it is necessary to
+implement a `CouchDbResumeStrategy` and use the resumable to set the
+last (update) offset from which to start tracking the changes:
+
+    public class CustomSequenceResumeStrategy implements CouchDbResumeStrategy {
+        @Override
+        public void resume(CouchDbResumable resumable) {
+            resumable.setLastOffset("custom-last-update");
+        }
+    }
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|protocol|The protocol to use for communicating with the database.||string| +|hostname|Hostname of the running couchdb instance||string| +|port|Port number for the running couchdb instance|5984|integer| +|database|Name of the database to use||string| +|createDatabase|Creates the database if it does not already exist|false|boolean| +|deletes|Document deletes are published as events|true|boolean| +|heartbeat|How often to send an empty message to keep socket alive in millis|30000|duration| +|maxMessagesPerPoll|Gets the maximum number of messages as a limit to poll at each polling. Gets the maximum number of messages as a limit to poll at each polling. The default value is 10. Use 0 or a negative number to set it as unlimited.|10|integer| +|style|Specifies how many revisions are returned in the changes array. The default, main\_only, will only return the current winning revision; all\_docs will return all leaf revisions (including conflicts and deleted former conflicts.)|main\_only|string| +|updates|Document inserts/updates are published as events|true|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. 
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|password|Password for authenticated databases||string|
+|username|Username in case of authenticated databases||string|
diff --git a/camel-cql.md b/camel-cql.md
new file mode 100644
index 0000000000000000000000000000000000000000..c130483626f733cbb983d9a9b4651ca24ed3d9ea
--- /dev/null
+++ b/camel-cql.md
@@ -0,0 +1,332 @@
+# Cql
+
+**Since Camel 2.15**
+
+**Both producer and consumer are supported**
+
+[Apache Cassandra](http://cassandra.apache.org) is an open source NoSQL
+database designed to handle large amounts of data on commodity hardware.
Like
+Amazon’s DynamoDB, Cassandra has a peer-to-peer and master-less
+architecture to avoid a single point of failure and guarantee high
+availability. Like Google’s BigTable, Cassandra data is structured using
+column families, which can be accessed through the Thrift RPC API or an
+SQL-like API called CQL.
+
+This component aims at integrating Cassandra 2.0+ using the CQL3 API
+instead of the Thrift API. It’s based on the [Cassandra Java
+Driver](https://github.com/datastax/java-driver) provided by DataStax.
+
+# Endpoint Connection Syntax
+
+The endpoint can initiate the Cassandra connection or use an existing
+one.
+
+|URI|Description|
+|---|---|
+|cql:localhost/keyspace|Single host, default port, usual for testing|
+|cql:host1,host2/keyspace|Multi host, default port|
+|cql:host1,host2:9042/keyspace|Multi host, custom port|
+|cql:host1,host2|Default port and keyspace|
+|cql:bean:sessionRef|Provided Session reference|
+
+To fine-tune the Cassandra connection (SSL options, pooling options,
+load balancing policy, retry policy, reconnection policy…), create your
+own Cluster instance and give it to the Camel endpoint.
+
+# Messages
+
+## Incoming Message
+
+The Camel Cassandra endpoint expects simple objects (`Object`,
+`Object[]` or `Collection<Object>`) which will be bound to the CQL
+statement as query parameters. If the message body is null or empty,
+then the CQL query will be executed without binding parameters.
+
+Headers:
+
+- `CamelCqlQuery` (optional, `String` or `RegularStatement`): CQL
+  query either as a plain String or built using the `QueryBuilder`.
+
+## Outgoing Message
+
+The Camel Cassandra endpoint produces one or more Cassandra `Row`
+objects depending on the `resultSetConversionStrategy`:
+
+- `List<Row>` if `resultSetConversionStrategy` is `ALL` or
+  `LIMIT_[0-9]+`
+
+- A single `Row` if `resultSetConversionStrategy` is `ONE`
+
+- Anything else, if `resultSetConversionStrategy` is a custom
+  implementation of the `ResultSetConversionStrategy`
+
+# Repositories
+
+Cassandra can be used to store message keys or messages for the
+idempotent and aggregation EIPs.
+
+Cassandra might not be the best tool for queuing use cases yet; read
+[Cassandra anti-patterns queues and queue like
+datasets](http://www.datastax.com/dev/blog/cassandra-anti-patterns-queues-and-queue-like-datasets).
+ +# Idempotent repository + +The `NamedCassandraIdempotentRepository` stores messages keys in a +Cassandra table like this: + +**CAMEL\_IDEMPOTENT.cql** + + CREATE TABLE CAMEL_IDEMPOTENT ( + NAME varchar, -- Repository name + KEY varchar, -- Message key + PRIMARY KEY (NAME, KEY) + ) WITH compaction = {'class':'LeveledCompactionStrategy'} + AND gc_grace_seconds = 86400; + +This repository implementation uses lightweight transactions, (also +known as Compare and Set) and requires Cassandra 2.0.7+. + +Alternatively, the `CassandraIdempotentRepository` does not have a +`NAME` column and can be extended to use a different data model. + + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+|Option|Default|Description|
+|---|---|---|
+|table|CAMEL\_IDEMPOTENT|Table name|
+|pkColumns|NAME, KEY|Primary key columns|
+|name||Repository name, value used for NAME column|
+|ttl||Key time to live|
+|writeConsistencyLevel||Consistency level used to insert/delete key: ANY, ONE, TWO, QUORUM, LOCAL\_QUORUM|
+|readConsistencyLevel||Consistency level used to read/check key: ONE, TWO, QUORUM, LOCAL\_QUORUM|
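+The lightweight-transaction insert mentioned above can be sketched
+directly in CQL; the repository name and message key below are
+hypothetical values:
+
+    INSERT INTO CAMEL_IDEMPOTENT (NAME, KEY)
+    VALUES ('myRepo', 'msg-42')
+    IF NOT EXISTS;
+
+The conditional insert only succeeds if no row with that primary key
+already exists, which is what makes the duplicate check atomic on the
+server side.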
+
+# Aggregation repository
+
+The `NamedCassandraAggregationRepository` stores exchanges by
+correlation key in a Cassandra table like this:
+
+**CAMEL\_AGGREGATION.cql**
+
+    CREATE TABLE CAMEL_AGGREGATION (
+        NAME varchar,        -- Repository name
+        KEY varchar,         -- Correlation id
+        EXCHANGE_ID varchar, -- Exchange id
+        EXCHANGE blob,       -- Serialized exchange
+        PRIMARY KEY (NAME, KEY)
+    ) WITH compaction = {'class':'LeveledCompactionStrategy'}
+      AND gc_grace_seconds = 86400;
+
+Alternatively, the `CassandraAggregationRepository` does not have a
+`NAME` column and can be extended to use a different data model.
+
+|Option|Default|Description|
+|---|---|---|
+|table|CAMEL\_AGGREGATION|Table name|
+|pkColumns|NAME,KEY|Primary key columns|
+|exchangeIdColumn|EXCHANGE\_ID|Exchange Id column|
+|exchangeColumn|EXCHANGE|Exchange content column|
+|name||Repository name, value used for NAME column|
+|ttl||Exchange time to live|
+|writeConsistencyLevel||Consistency level used to insert/delete exchange: ANY, ONE, TWO, QUORUM, LOCAL\_QUORUM|
+|readConsistencyLevel||Consistency level used to read/check exchange: ONE, TWO, QUORUM, LOCAL\_QUORUM|
+
+While deserializing, note that the `unmarshallExchange` method only
+allows classes from the `java` and `org.apache.camel` packages (and
+their subpackages); all remaining classes are blocked. If you need to
+deserialize other classes, change the filter by setting the
+`deserializationFilter` field in the repository.
+
+# Examples
+
+To insert something into a table, you can use the following code:
+
+    String CQL = "insert into camel_user(login, first_name, last_name) values (?, ?, ?)";
+    from("direct:input")
+        .to("cql://localhost/camel_ks?cql=" + CQL);
+
+At this point, you should be able to insert data by using a list as the
+body:
+
+    Arrays.asList("davsclaus", "Claus", "Ibsen");
+
+The same approach can be used for updating or querying the table.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|beanRef|beanRef is defined using bean:id||string| +|hosts|Hostname(s) Cassandra server(s). Multiple hosts can be separated by comma.||string| +|port|Port number of Cassandra server(s)||integer| +|keyspace|Keyspace to use||string| +|clusterName|Cluster name||string| +|cql|CQL query to perform. Can be overridden with the message header with key CamelCqlQuery.||string| +|datacenter|Datacenter to use|datacenter1|string| +|prepareStatements|Whether to use PreparedStatements or regular Statements|true|boolean| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. 
In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|extraTypeCodecs|To use a specific comma separated list of Extra Type codecs. 
Possible values are: BLOB\_TO\_ARRAY, BOOLEAN\_LIST\_TO\_ARRAY, BYTE\_LIST\_TO\_ARRAY, SHORT\_LIST\_TO\_ARRAY, INT\_LIST\_TO\_ARRAY, LONG\_LIST\_TO\_ARRAY, FLOAT\_LIST\_TO\_ARRAY, DOUBLE\_LIST\_TO\_ARRAY, TIMESTAMP\_UTC, TIMESTAMP\_MILLIS\_SYSTEM, TIMESTAMP\_MILLIS\_UTC, ZONED\_TIMESTAMP\_SYSTEM, ZONED\_TIMESTAMP\_UTC, ZONED\_TIMESTAMP\_PERSISTED, LOCAL\_TIMESTAMP\_SYSTEM and LOCAL\_TIMESTAMP\_UTC||string| +|loadBalancingPolicyClass|To use a specific LoadBalancingPolicyClass||string| +|resultSetConversionStrategy|To use a custom class that implements logic for converting ResultSet into message body ALL, ONE, LIMIT\_10, LIMIT\_100...||object| +|session|To use the Session instance (you would normally not use this option)||object| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. 
This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| +|password|Password for session authentication||string| +|username|Username for session authentication||string| diff --git a/camel-cron.md b/camel-cron.md new file mode 100644 index 0000000000000000000000000000000000000000..8e9d90b331b0d103e64d202f9465881fbd8e534e --- /dev/null +++ b/camel-cron.md @@ -0,0 +1,107 @@ +# Cron + +**Since Camel 3.1** + +**Only consumer is supported** + +The Cron component is a generic interface component that allows +triggering events at a specific time interval specified using the Unix +cron syntax (e.g. `0/2 * * * * ?` to trigger an event every two +seconds). + +As an interface component, the Cron component does not contain a default +implementation. Instead, it requires that the users plug the +implementation of their choice. + +The following standard Camel components support the Cron endpoints: + +- [Camel Quartz](#components::quartz-component.adoc) + +- [Camel Spring](#components::spring-summary.adoc) + +The Cron component is also supported in **Camel K**, which can use the +Kubernetes scheduler to trigger the routes when required by the cron +expression. 
Camel K does not require additional libraries to be plugged
+when using cron expressions compatible with Kubernetes cron syntax.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-cron</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+Additional libraries may be needed to plug a specific implementation.
+
+# Usage
+
+The component can be used to trigger events at specified times, as in
+the following example:
+
+Java
+
+    from("cron:tab?schedule=0/1+*+*+*+*+?")
+        .setBody().constant("event")
+        .log("${body}");
+
+XML
+
+    <route>
+        <from uri="cron:tab?schedule=0/1+*+*+*+*+?"/>
+        <setBody>
+            <constant>event</constant>
+        </setBody>
+        <log message="${body}"/>
+    </route>
+
+The schedule expression `0/3+10+*+*+*+?` can also be
+written as `0/3 10 * * * ?` and triggers an event every three
+seconds only in the tenth minute of each hour.
+
+Breaking down the parts in the schedule expression (in order):
+
+- Seconds (optional)
+
+- Minutes
+
+- Hours
+
+- Day of month
+
+- Month
+
+- Day of the week
+
+- Year (optional)
+
+Schedule expressions can be made of five to seven parts. When
+expressions are composed of six parts, the first item is the *Seconds*
+part (and year is considered missing).
+
+Other valid examples of schedule expressions are:
+
+- `0/2 * * * ?` (Five parts, an event every two minutes)
+
+- `0 0/2 * * * MON-FRI 2030` (Seven parts, an event every two minutes
+  only in the year 2030)
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. 
In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|cronService|The id of the CamelCronService to use when multiple implementations are provided||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|name|The name of the cron trigger||string| +|schedule|A cron expression that will be used to generate events||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
diff --git a/camel-crypto.md b/camel-crypto.md
new file mode 100644
index 0000000000000000000000000000000000000000..d674b8871dcd82eb3613cb87b98bf7e37f932693
--- /dev/null
+++ b/camel-crypto.md
@@ -0,0 +1,254 @@
+# Crypto
+
+**Since Camel 2.3**
+
+**Only producer is supported**
+
+With Camel cryptographic endpoints and Java’s Cryptographic extension,
+it is possible to create Digital Signatures for Exchanges. Camel
+provides a pair of flexible endpoints which get used in concert to
+create a signature for an exchange in one part of the exchange’s
+workflow and then verify the signature in a later part of the workflow.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-crypto</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# Introduction
+
+Digital signatures make use of Asymmetric Cryptographic techniques to
+sign messages. From a (very) high level, the algorithms use pairs of
+complementary keys with the special property that data encrypted with
+one key can only be decrypted with the other. One, the private key, is
+closely guarded and used to *sign* the message while the other, public
+key, is shared around to anyone interested in verifying the signed
+messages. Messages are signed by using the private key to encrypt a
+digest of the message. This encrypted digest is transmitted along with
+the message. On the other side, the verifier recalculates the message
+digest and uses the public key to decrypt the digest in the signature.
+If both digests match, the verifier knows only the holder of the private
+key could have created the signature.
+
+Camel uses the Signature service from the Java Cryptographic Extension
+to do all the heavy cryptographic lifting required to create exchange
+signatures.
The following are some excellent resources for explaining
+the mechanics of Cryptography, Message digests and Digital Signatures
+and how to leverage them with the JCE.
+
+- Bruce Schneier’s Applied Cryptography
+
+- Beginning Cryptography with Java by David Hook
+
+- The ever insightful Wikipedia
+  [Digital\_signatures](http://en.wikipedia.org/wiki/Digital_signature)
+
+# URI format
+
+As mentioned, Camel provides a pair of crypto endpoints to create and
+verify signatures:
+
+    crypto:sign:name[?options]
+    crypto:verify:name[?options]
+
+- `crypto:sign` creates the signature and stores it in the Header
+  keyed by the constant
+  `org.apache.camel.component.crypto.DigitalSignatureConstants.SIGNATURE`,
+  i.e., `"CamelDigitalSignature"`.
+
+- `crypto:verify` will read in the contents of this header and do the
+  verification calculation.
+
+To function correctly, the sign and verify process needs a pair of keys
+to be shared, signing requiring a `PrivateKey` and verifying a
+`PublicKey` (or a `Certificate` containing one). Using the JCE it is
+very simple to generate these key pairs, but it is usually most secure
+to use a KeyStore to house and share your keys. The DSL is very flexible
+about how keys are supplied and provides a number of mechanisms.
+
+Note that a `crypto:sign` endpoint is typically defined in one route and the
+complementary `crypto:verify` in another, though for simplicity in the
+examples they appear one after the other. It goes without saying that
+both signing and verifying should be configured identically.
+
+# Using
+
+## Raw keys
+
+The most basic way to sign and verify an exchange is with a KeyPair as
+follows. 
+
+    KeyPair keys = KeyPairGenerator.getInstance("RSA").generateKeyPair();
+
+    from("direct:sign")
+        .setHeader(DigitalSignatureConstants.SIGNATURE_PRIVATE_KEY, constant(keys.getPrivate()))
+        .to("crypto:sign:message")
+        .to("direct:verify");
+
+    from("direct:verify")
+        .setHeader(DigitalSignatureConstants.SIGNATURE_PUBLIC_KEY_OR_CERT, constant(keys.getPublic()))
+        .to("crypto:verify:check");
+
+The same can be achieved with the [Spring XML
+Extensions](#manual::spring-xml-extensions.adoc) using references to
+keys.
+
+## KeyStores and Aliases
+
+The JCE provides a very versatile keystore concept for housing pairs of
+private keys and certificates, keeping them encrypted and password
+protected. They can be retrieved by applying an alias to the retrieval
+APIs. There are a number of ways to get keys and Certificates into a
+keystore; most often this is done with the external *keytool*
+application.
+
+The following command will create a keystore containing a key and
+certificate aliased by `bob`, which can be used in the following
+examples. The password for the keystore and the key is `letmein`.
+
+    keytool -genkey -keyalg RSA -keysize 2048 -keystore keystore.jks -storepass letmein -alias bob -dname "CN=Bob,OU=IT,O=Camel" -noprompt
+
+The following route first signs an exchange using Bob’s alias from the
+KeyStore bound into the Camel Registry, and then verifies it using the
+same alias.
+
+    from("direct:sign")
+        .to("crypto:sign:keystoreSign?alias=bob&keystoreName=myKeystore&password=letmein")
+        .log("Signature: ${header.CamelDigitalSignature}")
+        .to("crypto:verify:keystoreVerify?alias=bob&keystoreName=myKeystore&password=letmein")
+        .log("Verified: ${body}");
+
+The following code shows how to load the keystore created using the
+above `keytool` command and bind it into the registry with the name
+`myKeystore` for use in the above route. 
The example makes use of the
+`@Configuration` and `@BindToRegistry` annotations introduced in Camel 3
+to instantiate the KeyStore and register it with the name `myKeystore`.
+
+    @Configuration
+    public class KeystoreConfig {
+
+        @BindToRegistry
+        public KeyStore myKeystore() throws Exception {
+            KeyStore store = KeyStore.getInstance("JKS");
+            try (FileInputStream fis = new FileInputStream("keystore.jks")) {
+                store.load(fis, "letmein".toCharArray());
+            }
+            return store;
+        }
+    }
+
+Again, in Spring, a ref is used to look up an actual keystore instance.
+
+## Changing JCE Provider and Algorithm
+
+Changing the Signature algorithm or the Security provider is a simple
+matter of specifying their names. You will also need to use Keys that
+are compatible with the algorithm you choose.
+
+## Changing the Signature Message Header
+
+It may be desirable to change the message header used to store the
+signature. A different header name can be specified in the route
+definition as follows:
+
+    from("direct:sign")
+        .to("crypto:sign:keystoreSign?alias=bob&keystoreName=myKeystore&password=letmein&signatureHeaderName=mySignature")
+        .log("Signature: ${header.mySignature}")
+        .to("crypto:verify:keystoreVerify?alias=bob&keystoreName=myKeystore&password=letmein&signatureHeaderName=mySignature");
+
+## Changing the bufferSize
+
+In case you need to update the size of the buffer…
+
+## Supplying Keys dynamically
+
+When using a Recipient list or similar EIP, the recipient of an exchange
+can vary dynamically. Using the same key across all recipients may be
+neither feasible nor desirable. It would be useful to be able to specify
+signature keys dynamically on a per-exchange basis. The exchange could
+then be dynamically enriched with the key of its target recipient prior
+to signing. 
To facilitate this, the signature mechanisms allow for keys
+to be supplied dynamically via the message headers below:
+
+- `DigitalSignatureConstants.SIGNATURE_PRIVATE_KEY`,
+  `"CamelSignaturePrivateKey"`
+
+- `DigitalSignatureConstants.SIGNATURE_PUBLIC_KEY_OR_CERT`,
+  `"CamelSignaturePublicKeyOrCert"`
+
+Even better would be to dynamically supply a keystore alias. Again, the
+alias can be supplied in a message header:
+
+- `DigitalSignatureConstants.KEYSTORE_ALIAS`,
+  `"CamelSignatureKeyStoreAlias"`
+
+The header would be set as follows:
+
+    Exchange unsigned = getMandatoryEndpoint("direct:alias-sign").createExchange();
+    unsigned.getIn().setBody(payload);
+    unsigned.getIn().setHeader(DigitalSignatureConstants.KEYSTORE_ALIAS, "bob");
+    unsigned.getIn().setHeader(DigitalSignatureConstants.KEYSTORE_PASSWORD, "letmein".toCharArray());
+    template.send("direct:alias-sign", unsigned);
+    Exchange signed = getMandatoryEndpoint("direct:alias-sign").createExchange();
+    signed.getIn().copyFrom(unsigned.getMessage());
+    signed.getIn().setHeader(DigitalSignatureConstants.KEYSTORE_ALIAS, "bob");
+    template.send("direct:alias-verify", signed);
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|algorithm|Sets the JCE name of the Algorithm that should be used for the signer.|SHA256withRSA|string|
+|alias|Sets the alias used to query the KeyStore for keys and Certificates to be used in signing and verifying exchanges. This value can be provided at runtime via the message header org.apache.camel.component.crypto.DigitalSignatureConstants#KEYSTORE\_ALIAS||string|
+|certificateName|Sets the reference name for a Certificate that can be found in the registry.||string|
+|keystore|Sets the KeyStore that can contain keys and Certificates for use in signing and verifying exchanges. 
A KeyStore is typically used with an alias, either one supplied in the Route definition or dynamically via the message header CamelSignatureKeyStoreAlias. If no alias is supplied and there is only a single entry in the Keystore, then this single entry will be used.||object|
+|keystoreName|Sets the reference name for a Keystore that can be found in the registry.||string|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|privateKey|Set the PrivateKey that should be used to sign the exchange||object|
+|privateKeyName|Sets the reference name for a PrivateKey that can be found in the registry.||string|
+|provider|Set the id of the security provider that provides the configured Signature algorithm.||string|
+|publicKeyName|Sets the reference name for a PublicKey that can be found in the registry.||string|
+|secureRandomName|Sets the reference name for a SecureRandom that can be found in the registry.||string|
+|signatureHeaderName|Set the name of the message header that should be used to store the base64 encoded signature. This defaults to 'CamelDigitalSignature'||string|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|bufferSize|Set the size of the buffer used to read in the Exchange payload data.|2048|integer|
+|certificate|Set the Certificate that should be used to verify the signature in the exchange based on its payload.||object|
+|clearHeaders|Determines if the Signature specific headers should be cleared after signing and verification. Defaults to true, and should only be made otherwise at your extreme peril as vital private information such as Keys and passwords may escape if unset.|true|boolean|
+|configuration|To use the shared DigitalSignatureConfiguration as configuration||object|
+|keyStoreParameters|Sets the KeyStore that can contain keys and Certificates for use in signing and verifying exchanges based on the given KeyStoreParameters. A KeyStore is typically used with an alias, either one supplied in the Route definition or dynamically via the message header CamelSignatureKeyStoreAlias. If no alias is supplied and there is only a single entry in the Keystore, then this single entry will be used.||object|
+|publicKey|Set the PublicKey that should be used to verify the signature in the exchange.||object|
+|secureRandom|Set the SecureRandom used to initialize the Signature service||object|
+|password|Sets the password used to access an aliased PrivateKey in the KeyStore.||string|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|cryptoOperation|Set the Crypto operation from that supplied after the crypto scheme in the endpoint uri e.g. crypto:sign sets sign as the operation.||object|
+|name|The logical name of this operation.||string|
+|algorithm|Sets the JCE name of the Algorithm that should be used for the signer.|SHA256withRSA|string|
+|alias|Sets the alias used to query the KeyStore for keys and Certificates to be used in signing and verifying exchanges. 
This value can be provided at runtime via the message header org.apache.camel.component.crypto.DigitalSignatureConstants#KEYSTORE\_ALIAS||string|
+|certificateName|Sets the reference name for a Certificate that can be found in the registry.||string|
+|keystore|Sets the KeyStore that can contain keys and Certificates for use in signing and verifying exchanges. A KeyStore is typically used with an alias, either one supplied in the Route definition or dynamically via the message header CamelSignatureKeyStoreAlias. If no alias is supplied and there is only a single entry in the Keystore, then this single entry will be used.||object|
+|keystoreName|Sets the reference name for a Keystore that can be found in the registry.||string|
+|privateKey|Set the PrivateKey that should be used to sign the exchange||object|
+|privateKeyName|Sets the reference name for a PrivateKey that can be found in the registry.||string|
+|provider|Set the id of the security provider that provides the configured Signature algorithm.||string|
+|publicKeyName|Sets the reference name for a PublicKey that can be found in the registry.||string|
+|secureRandomName|Sets the reference name for a SecureRandom that can be found in the registry.||string|
+|signatureHeaderName|Set the name of the message header that should be used to store the base64 encoded signature. This defaults to 'CamelDigitalSignature'||string|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|bufferSize|Set the size of the buffer used to read in the Exchange payload data.|2048|integer|
+|certificate|Set the Certificate that should be used to verify the signature in the exchange based on its payload.||object|
+|clearHeaders|Determines if the Signature specific headers should be cleared after signing and verification. Defaults to true, and should only be made otherwise at your extreme peril as vital private information such as Keys and passwords may escape if unset.|true|boolean|
+|keyStoreParameters|Sets the KeyStore that can contain keys and Certificates for use in signing and verifying exchanges based on the given KeyStoreParameters. A KeyStore is typically used with an alias, either one supplied in the Route definition or dynamically via the message header CamelSignatureKeyStoreAlias. If no alias is supplied and there is only a single entry in the Keystore, then this single entry will be used.||object|
+|publicKey|Set the PublicKey that should be used to verify the signature in the exchange.||object|
+|secureRandom|Set the SecureRandom used to initialize the Signature service||object|
+|password|Sets the password used to access an aliased PrivateKey in the KeyStore.||string|
diff --git a/camel-cxf.md b/camel-cxf.md
new file mode 100644
index 0000000000000000000000000000000000000000..92f89fe43409dd5a143ff95aef63ae1069c4b5c4
--- /dev/null
+++ b/camel-cxf.md
@@ -0,0 +1,1218 @@
+# Cxf
+
+**Since Camel 1.0**
+
+**Both producer and consumer are supported**
+
+The CXF component provides integration with [Apache
+CXF](http://cxf.apache.org) for connecting to
+[JAX-WS](http://cxf.apache.org/docs/jax-ws.html) services hosted in CXF.
+
+When using CXF in streaming mode (check the DataFormat options below),
+also read about [stream caching](#manual::stream-caching.adoc). 
+
+Maven users must add the following dependency to their `pom.xml` for
+this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-cxf-soap</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+There are two URI formats for this endpoint: a bean reference and a
+direct address.
+
+    cxf:bean:cxfEndpoint[?options]
+
+Where **cxfEndpoint** represents a bean ID that references a bean in the
+Spring bean registry. With this URI format, most of the endpoint details
+are specified in the bean definition.
+
+    cxf://someAddress[?options]
+
+Where `someAddress` specifies the CXF endpoint’s address. With this URI
+format, most of the endpoint details are specified using options.
+
+For either style above, you can append options to the URI as follows:
+
+    cxf:bean:cxfEndpoint?wsdlURL=wsdl/hello_world.wsdl&dataFormat=PAYLOAD
+
+The `serviceName` and `portName` are
+[QNames](http://en.wikipedia.org/wiki/QName), so if you provide them, be
+sure to prefix them with their *{namespace}* as shown in the examples
+above.
+
+## Descriptions of the data formats
+
+In Apache Camel, the Camel CXF component is the key to integrating
+routes with Web services. You can use the Camel CXF component to create
+a CXF endpoint, which can be used in either of the following ways:
+
+- **Consumer** — (at the start of a route) represents a Web service
+  instance, which integrates with the route. The type of payload
+  injected into the route depends on the value of the endpoint’s
+  dataFormat option.
+
+- **Producer** — (at other points in the route) represents a WS client
+  proxy, which converts the current exchange object into an operation
+  invocation on a remote Web service. The format of the current
+  exchange must match the endpoint’s dataFormat setting.
+
+|DataFormat|Description|
+|---|---|
+|POJO|POJOs (Plain old Java objects) are the Java parameters to the method being invoked on the target server. Both Protocol and Logical JAX-WS handlers are supported.|
+|PAYLOAD|PAYLOAD is the message payload (the contents of the soap:body) after message configuration in the CXF endpoint is applied. Only the Protocol JAX-WS handler is supported; the Logical JAX-WS handler is not supported.|
+|RAW|RAW mode provides the raw message stream received from the transport layer. It is not possible to touch or change the stream, and some of the CXF interceptors are removed when using this DataFormat, so you can’t see any SOAP headers after the Camel CXF consumer. JAX-WS handlers are not supported. Note that RAW mode is equivalent to the deprecated MESSAGE mode.|
+|CXF_MESSAGE|CXF_MESSAGE allows for invoking the full capabilities of CXF interceptors by converting the message from the transport layer into a raw SOAP message.|
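The mode is selected per endpoint with the `dataFormat` URI option, as in the `dataFormat=PAYLOAD` example shown earlier (the bean name here is illustrative):

    cxf:bean:cxfEndpoint?dataFormat=POJO
    cxf:bean:cxfEndpoint?dataFormat=PAYLOAD
    cxf:bean:cxfEndpoint?dataFormat=RAW
    cxf:bean:cxfEndpoint?dataFormat=CXF_MESSAGE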
+
+You can determine the data format mode of an exchange by retrieving the
+exchange property, `CamelCXFDataFormat`. The exchange key constant is
+defined in
+`org.apache.camel.component.cxf.common.message.CxfConstants.DATA_FORMAT_PROPERTY`.
+
+# How to create a simple CXF service with POJO data format
+
+Given a simple Java web service interface:
+
+    package org.apache.camel.component.cxf.soap.server;
+
+    @WebService(targetNamespace = "http://server.soap.cxf.component.camel.apache.org/", name = "EchoService")
+    public interface EchoService {
+
+        String echo(String text);
+    }
+
+And its implementation:
+
+    package org.apache.camel.component.cxf.soap.server;
+
+    @WebService(name = "EchoService", serviceName = "EchoService", targetNamespace = "http://server.soap.cxf.component.camel.apache.org/")
+    public class EchoServiceImpl implements EchoService {
+
+        @Override
+        public String echo(String text) {
+            return text;
+        }
+
+    }
+
+We can then create the simplest CXF service (note we didn’t specify the
+`POJO` mode, as it is the default mode):
+
+    from("cxf:echoServiceResponseFromImpl?serviceClass=org.apache.camel.component.cxf.soap.server.EchoServiceImpl&address=/echo-impl") // no body set here; the response comes from EchoServiceImpl
+        .log("${body}");
+
+For a more complicated implementation of the service (more the "Camel way"),
+we can set the body from the route instead:
+
+    from("cxf:echoServiceResponseFromRoute?serviceClass=org.apache.camel.component.cxf.soap.server.EchoServiceImpl&address=/echo-route")
+        .setBody(exchange -> exchange.getMessage().getBody(String.class) + " from Camel route");
+
+# How to consume a message from a Camel CXF endpoint in POJO data format
+
+The Camel CXF endpoint consumer POJO data format is based on the [CXF
+invoker](http://cxf.apache.org/docs/invokers.html), so the message
+header has a property with the name of `CxfConstants.OPERATION_NAME` and
+the message body is a list of the SEI method parameters. 
+ +Consider the +[PersonProcessor](https://github.com/apache/camel/blob/main/components/camel-cxf/camel-cxf-soap/src/test/java/org/apache/camel/wsdl_first/PersonProcessor.java) +example code: + + public class PersonProcessor implements Processor { + + private static final Logger LOG = LoggerFactory.getLogger(PersonProcessor.class); + + @Override + @SuppressWarnings("unchecked") + public void process(Exchange exchange) throws Exception { + LOG.info("processing exchange in camel"); + + BindingOperationInfo boi = (BindingOperationInfo) exchange.getProperty(BindingOperationInfo.class.getName()); + if (boi != null) { + LOG.info("boi.isUnwrapped" + boi.isUnwrapped()); + } + // Get the parameter list which element is the holder. + MessageContentsList msgList = (MessageContentsList) exchange.getIn().getBody(); + Holder personId = (Holder) msgList.get(0); + Holder ssn = (Holder) msgList.get(1); + Holder name = (Holder) msgList.get(2); + + if (personId.value == null || personId.value.length() == 0) { + LOG.info("person id 123, so throwing exception"); + // Try to throw out the soap fault message + org.apache.camel.wsdl_first.types.UnknownPersonFault personFault + = new org.apache.camel.wsdl_first.types.UnknownPersonFault(); + personFault.setPersonId(""); + org.apache.camel.wsdl_first.UnknownPersonFault fault + = new org.apache.camel.wsdl_first.UnknownPersonFault("Get the null value of person name", personFault); + exchange.getMessage().setBody(fault); + return; + } + + name.value = "Bonjour"; + ssn.value = "123"; + LOG.info("setting Bonjour as the response"); + // Set the response message, the first element is the return value of the operation, + // the others are the holders of method parameters + exchange.getMessage().setBody(new Object[] { null, personId, ssn, name }); + } + + } + +# How to prepare the message for the Camel CXF endpoint in POJO data format + +The Camel CXF endpoint producer is based on the [CXF client 
+API](https://github.com/apache/cxf/blob/master/core/src/main/java/org/apache/cxf/endpoint/Client.java).
+First, you need to specify the operation name in the message header,
+then add the method parameters to a list, and initialize the message
+with this parameter list. The response message’s body is a
+`MessageContentsList`; you can get the result from that list.
+
+If you don’t specify the operation name in the message header,
+`CxfProducer` will try to use the `defaultOperationName` from
+`CxfEndpoint`; if there is no `defaultOperationName` set on
+`CxfEndpoint`, it will pick up the first operationName from the
+Operation list.
+
+If you want to get the object array from the message body, you can get
+the body using `message.getBody(Object[].class)`, as shown in
+[CxfProducerRouterTest.testInvokingSimpleServerWithParams](https://github.com/apache/camel/blob/main/components/camel-cxf/camel-cxf-soap/src/test/java/org/apache/camel/component/cxf/jaxws/CxfProducerRouterTest.java#L117):
+
+    Exchange senderExchange = new DefaultExchange(context, ExchangePattern.InOut);
+    final List params = new ArrayList<>();
+    // Prepare the request message for the camel-cxf procedure
+    params.add(TEST_MESSAGE);
+    senderExchange.getIn().setBody(params);
+    senderExchange.getIn().setHeader(CxfConstants.OPERATION_NAME, ECHO_OPERATION);
+
+    Exchange exchange = template.send("direct:EndpointA", senderExchange);
+
+    org.apache.camel.Message out = exchange.getMessage();
+    // The response message's body is a MessageContentsList whose first element is the return value of the operation.
+    // If there are holder parameters, they will be filled in the rest of the List. 
+    // The result will be extracted from the MessageContentsList with the String class type
+    MessageContentsList result = (MessageContentsList) out.getBody();
+    LOG.info("Received output text: " + result.get(0));
+    Map responseContext = CastUtils.cast((Map) out.getHeader(Client.RESPONSE_CONTEXT));
+    assertNotNull(responseContext);
+    assertEquals("UTF-8", responseContext.get(org.apache.cxf.message.Message.ENCODING),
+        "We should get the response context here");
+    assertEquals("echo " + TEST_MESSAGE, result.get(0), "Reply body on Camel is wrong");
+
+# How to consume a message from a Camel CXF endpoint in PAYLOAD data format
+
+`PAYLOAD` means that you process the payload from the SOAP envelope as a
+native CxfPayload. `Message.getBody()` will return a
+`org.apache.camel.component.cxf.CxfPayload` object, with getters for
+SOAP message headers and the SOAP body.
+
+See
+[CxfConsumerPayloadTest](https://github.com/apache/camel/blob/main/components/camel-cxf/camel-cxf-soap/src/test/java/org/apache/camel/component/cxf/jaxws/CxfConsumerPayloadTest.java#L68):
+
+    protected RouteBuilder createRouteBuilder() {
+        return new RouteBuilder() {
+            public void configure() {
+                from(simpleEndpointURI + "&dataFormat=PAYLOAD").to("log:info").process(new Processor() {
+                    @SuppressWarnings("unchecked")
+                    public void process(final Exchange exchange) throws Exception {
+                        CxfPayload requestPayload = exchange.getIn().getBody(CxfPayload.class);
+                        List inElements = requestPayload.getBodySources();
+                        List outElements = new ArrayList<>();
+                        // You can use a custom toStringConverter to turn a CxfPayload message into a String as you want
+                        String request = exchange.getIn().getBody(String.class);
+                        XmlConverter converter = new XmlConverter();
+                        String documentString = ECHO_RESPONSE;
+
+                        Element in = new XmlConverter().toDOMElement(inElements.get(0));
+                        // Check the element namespace
+                        if (!in.getNamespaceURI().equals(ELEMENT_NAMESPACE)) {
+                            throw new IllegalArgumentException("Wrong element 
namespace");
+                        }
+                        if (in.getLocalName().equals("echoBoolean")) {
+                            documentString = ECHO_BOOLEAN_RESPONSE;
+                            checkRequest("ECHO_BOOLEAN_REQUEST", request);
+                        } else {
+                            documentString = ECHO_RESPONSE;
+                            checkRequest("ECHO_REQUEST", request);
+                        }
+                        Document outDocument = converter.toDOMDocument(documentString, exchange);
+                        outElements.add(new DOMSource(outDocument.getDocumentElement()));
+                        // set the payload header with null
+                        CxfPayload responsePayload = new CxfPayload<>(null, outElements, null);
+                        exchange.getMessage().setBody(responsePayload);
+                    }
+                });
+            }
+        };
+    }
+
+# How to get and set SOAP headers in POJO mode
+
+`POJO` means that the data format is a *"list of Java objects"* when the
+Camel CXF endpoint produces or consumes Camel exchanges. Even though
+Camel exposes the message body as POJOs in this mode, Camel CXF still
+provides access to read and write SOAP headers. However, since CXF
+interceptors remove in-band SOAP headers from the header list after
+they have been processed, only out-of-band SOAP headers are available to
+Camel CXF in POJO mode.
+
+The following example illustrates how to get/set SOAP headers. Suppose
+we have a route that forwards from one Camel CXF endpoint to another.
+That is, `SOAP Client -> Camel -> CXF service`. We can attach two
+processors to obtain/insert SOAP headers at (1) before a request goes
+out to the CXF service and (2) before the response comes back to the
+SOAP Client. Processors (1) and (2) in this example are
+`InsertRequestOutHeaderProcessor` and
+`InsertResponseOutHeaderProcessor`. Our route looks like this:
+
+    from("cxf:bean:routerRelayEndpointWithInsertion")
+        .process(new InsertRequestOutHeaderProcessor())
+        .to("cxf:bean:serviceRelayEndpointWithInsertion")
+        .process(new InsertResponseOutHeaderProcessor());
+
+SOAP headers are propagated to and from Camel Message headers. 
The Camel
+message header name is `org.apache.cxf.headers.Header.list`, which is a
+constant defined in CXF (`org.apache.cxf.headers.Header.HEADER_LIST`).
+The header value is a List of CXF `SoapHeader` objects
+(`org.apache.cxf.binding.soap.SoapHeader`). The following snippet is the
+`InsertResponseOutHeaderProcessor` (that inserts a new SOAP header in
+the response message). The way to access SOAP headers in both
+`InsertResponseOutHeaderProcessor` and `InsertRequestOutHeaderProcessor`
+is actually the same. The only difference between the two processors is
+setting the direction of the inserted SOAP header.
+
+You can find the `InsertResponseOutHeaderProcessor` example in
+[CxfMessageHeadersRelayTest](https://github.com/apache/camel/blob/main/components/camel-cxf/camel-cxf-spring-soap/src/test/java/org/apache/camel/component/cxf/soap/headers/CxfMessageHeadersRelayTest.java#L731):
+
+    public static class InsertResponseOutHeaderProcessor implements Processor {
+
+        public void process(Exchange exchange) throws Exception {
+            List soapHeaders = CastUtils.cast((List) exchange.getIn().getHeader(Header.HEADER_LIST));
+
+            // Insert a new header
+            String xml = "" +
+                "New_testOobHeaderNew_testOobHeaderValue";
+            SoapHeader newHeader = new SoapHeader(soapHeaders.get(0).getName(),
+                DOMUtils.readXml(new StringReader(xml)).getDocumentElement());
+            // make sure the direction is OUT since it is a response message.
+            newHeader.setDirection(Direction.DIRECTION_OUT);
+            //newHeader.setMustUnderstand(false);
+            soapHeaders.add(newHeader);
+        }
+
+    }
+
+# How to get and set SOAP headers in PAYLOAD mode
+
+We’ve already shown how to access the SOAP message as a `CxfPayload`
+object in PAYLOAD mode in the section above on consuming a message from
+a Camel CXF endpoint in PAYLOAD data format.
+
+Once you obtain a `CxfPayload` object, you can invoke the
+`CxfPayload.getHeaders()` method that returns a List of DOM Elements
+(SOAP headers). 
+ +For example, see +[CxfPayLoadSoapHeaderTest](https://github.com/apache/camel/blob/main/components/camel-cxf/camel-cxf-soap/src/test/java/org/apache/camel/component/cxf/jaxws/CxfPayLoadSoapHeaderTest.java#L53): + + from(getRouterEndpointURI()).process(new Processor() { + @SuppressWarnings("unchecked") + public void process(Exchange exchange) throws Exception { + CxfPayload payload = exchange.getIn().getBody(CxfPayload.class); + List elements = payload.getBodySources(); + assertNotNull(elements, "We should get the elements here"); + assertEquals(1, elements.size(), "Get the wrong elements size"); + + Element el = new XmlConverter().toDOMElement(elements.get(0)); + elements.set(0, new DOMSource(el)); + assertEquals("http://camel.apache.org/pizza/types", + el.getNamespaceURI(), "Get the wrong namespace URI"); + + List headers = payload.getHeaders(); + assertNotNull(headers, "We should get the headers here"); + assertEquals(1, headers.size(), "Get the wrong headers size"); + assertEquals("http://camel.apache.org/pizza/types", + ((Element) (headers.get(0).getObject())).getNamespaceURI(), "Get the wrong namespace URI"); + // alternatively, you can also get the SOAP header via the camel header: + headers = exchange.getIn().getHeader(Header.HEADER_LIST, List.class); + assertNotNull(headers, "We should get the headers here"); + assertEquals(1, headers.size(), "Get the wrong headers size"); + assertEquals("http://camel.apache.org/pizza/types", + ((Element) (headers.get(0).getObject())).getNamespaceURI(), "Get the wrong namespace URI"); + + } + + }) + .to(getServiceEndpointURI()); + +You can also use the same way as described in subchapter "How to get and +set SOAP headers in POJO mode" to set or get the SOAP headers. 
So, you
+can use the header `org.apache.cxf.headers.Header.list` to get and set a
+list of SOAP headers. This also means that if you have a route that
+forwards from one Camel CXF endpoint to another
+(`SOAP Client -> Camel -> CXF service`), the SOAP headers sent
+by the SOAP client are also forwarded to the CXF service. If you do not
+want these headers to be forwarded, you have to remove them in the Camel
+header `org.apache.cxf.headers.Header.list`.
+
+# SOAP headers are not available in RAW mode
+
+SOAP headers are not available in RAW mode as SOAP processing is
+skipped.
+
+# How to throw a SOAP Fault from Camel
+
+If you are using a Camel CXF endpoint to consume the SOAP request, you
+may need to throw the SOAP Fault from the Camel context.
+Basically, you can use the `throwFault` DSL to do that; it works for
+the `POJO`, `PAYLOAD` and `RAW` data formats.
+You can define the SOAP fault as shown in
+[CxfCustomizedExceptionTest](https://github.com/apache/camel/blob/main/components/camel-cxf/camel-cxf-soap/src/test/java/org/apache/camel/component/cxf/jaxws/CxfCustomizedExceptionTest.java#L65):
+
+    SOAP_FAULT = new SoapFault(EXCEPTION_MESSAGE, SoapFault.FAULT_CODE_CLIENT);
+    Element detail = SOAP_FAULT.getOrCreateDetail();
+    Document doc = detail.getOwnerDocument();
+    Text tn = doc.createTextNode(DETAIL_TEXT);
+    detail.appendChild(tn);
+
+Then throw it as you like:
+
+    from(routerEndpointURI).setFaultBody(constant(SOAP_FAULT));
+
+If your CXF endpoint is working in the `RAW` data format, you could set
+the SOAP Fault message in the message body and set the response code in
+the message header as demonstrated by
+[CxfMessageStreamExceptionTest](https://github.com/apache/camel/blob/main/components/camel-cxf/camel-cxf-soap/src/test/java/org/apache/camel/component/cxf/jaxws/CxfMessageStreamExceptionTest.java#L43):
+
+    from(routerEndpointURI).process(new Processor() {
+
+        public void process(Exchange exchange) throws Exception {
+            Message out = 
exchange.getMessage(); + // Set the message body + out.setBody(this.getClass().getResourceAsStream("SoapFaultMessage.xml")); + // Set the response code here + out.setHeader(org.apache.cxf.message.Message.RESPONSE_CODE, new Integer(500)); + } + + }); + +Same for using POJO data format. You can set the SOAPFault on the *OUT* +body. + +[CXF client +API](https://github.com/apache/cxf/blob/master/core/src/main/java/org/apache/cxf/endpoint/Client.java) +provides a way to invoke the operation with request and response +context. If you are using a Camel CXF endpoint producer to invoke the +outside web service, you can set the request context and get response +context with the following code: + + CxfExchange exchange = (CxfExchange)template.send(getJaxwsEndpointUri(), new Processor() { + public void process(final Exchange exchange) { + final List params = new ArrayList(); + params.add(TEST_MESSAGE); + // Set the request context to the inMessage + Map requestContext = new HashMap(); + requestContext.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, JAXWS_SERVER_ADDRESS); + exchange.getIn().setBody(params); + exchange.getIn().setHeader(Client.REQUEST_CONTEXT , requestContext); + exchange.getIn().setHeader(CxfConstants.OPERATION_NAME, GREET_ME_OPERATION); + } + }); + org.apache.camel.Message out = exchange.getMessage(); + // The output is an object array, the first element of the array is the return value + Object\[\] output = out.getBody(Object\[\].class); + LOG.info("Received output text: " + output\[0\]); + // Get the response context form outMessage + Map responseContext = CastUtils.cast((Map)out.getHeader(Client.RESPONSE_CONTEXT)); + assertNotNull(responseContext); + assertEquals("Get the wrong wsdl operation name", "{http://apache.org/hello_world_soap_http}greetMe", + responseContext.get("javax.xml.ws.wsdl.operation").toString()); + +# Attachment Support + +## POJO Mode + +Message Transmission Optimization Mechanism (MTOM) is supported if +enabled - check the example in 
Payload Mode for enabling MTOM. Since
attachments are marshalled and unmarshalled into POJOs, the attachments
should be retrieved from the Apache Camel message body (as a parameter
list), and it isn't possible to retrieve attachments via the Camel
Message API:

    DataHandler handler = exchange.getIn(AttachmentMessage.class).getAttachment("id");

## Payload Mode

Message Transmission Optimization Mechanism (MTOM) is supported in this
mode. Attachments can be retrieved by the Camel Message APIs mentioned
above. SOAP with Attachment (SwA) is supported and attachments can be
retrieved. SwA is the default (same as setting the CXF endpoint property
`mtomEnabled` to `false`).

To enable MTOM, set the CXF endpoint property `mtomEnabled` to `true`.

Java (Quarkus)

    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.component.cxf.common.DataFormat;
    import org.apache.camel.component.cxf.jaxws.CxfEndpoint;
    import jakarta.enterprise.context.ApplicationScoped;
    import jakarta.enterprise.context.SessionScoped;
    import jakarta.enterprise.inject.Produces;
    import jakarta.inject.Named;

    @ApplicationScoped
    public class CxfSoapMtomRoutes extends RouteBuilder {

        @Override
        public void configure() {
            from("cxf:bean:mtomPayloadModeEndpoint")
                .process( exchange -> { ... });
        }

        @Produces
        @SessionScoped
        @Named
        CxfEndpoint mtomPayloadModeEndpoint() {
            final CxfEndpoint result = new CxfEndpoint();
            result.setServiceClass(MyMtomService.class);
            result.setDataFormat(DataFormat.PAYLOAD);
            result.setMtomEnabled(true);
            result.setAddress("/mtom/hello");
            return result;
        }
    }

XML (Spring) - an equivalent endpoint definition (the bean id, address
and service class mirror the Java example above; the package name is a
placeholder):

    <cxf:cxfEndpoint id="mtomPayloadModeEndpoint"
                     address="/mtom/hello"
                     serviceClass="org.apache.camel.cxf.mtom.MyMtomService">
        <cxf:properties>
            <entry key="dataFormat" value="PAYLOAD"/>
            <entry key="mtom-enabled" value="true"/>
        </cxf:properties>
    </cxf:cxfEndpoint>

You can produce a Camel message with attachment to send to a CXF
endpoint in Payload mode. 
    Exchange exchange = context.createProducerTemplate().send("direct:testEndpoint", new Processor() {

        public void process(Exchange exchange) throws Exception {
            exchange.setPattern(ExchangePattern.InOut);
            List<Source> elements = new ArrayList<>();
            elements.add(new DOMSource(DOMUtils.readXml(new StringReader(MtomTestHelper.REQ_MESSAGE)).getDocumentElement()));
            CxfPayload<SoapHeader> body = new CxfPayload<>(new ArrayList<SoapHeader>(),
                elements, null);
            exchange.getIn().setBody(body);
            exchange.getIn(AttachmentMessage.class).addAttachment(MtomTestHelper.REQ_PHOTO_CID,
                new DataHandler(new ByteArrayDataSource(MtomTestHelper.REQ_PHOTO_DATA, "application/octet-stream")));

            exchange.getIn(AttachmentMessage.class).addAttachment(MtomTestHelper.REQ_IMAGE_CID,
                new DataHandler(new ByteArrayDataSource(MtomTestHelper.requestJpeg, "image/jpeg")));
        }

    });

    // process response

    CxfPayload<SoapHeader> out = exchange.getMessage().getBody(CxfPayload.class);
    assertEquals(1, out.getBody().size());

    Map<String, String> ns = new HashMap<>();
    ns.put("ns", MtomTestHelper.SERVICE_TYPES_NS);
    ns.put("xop", MtomTestHelper.XOP_NS);

    XPathUtils xu = new XPathUtils(ns);
    Element oute = new XmlConverter().toDOMElement(out.getBody().get(0));
    Element ele = (Element) xu.getValue("//ns:DetailResponse/ns:photo/xop:Include", oute,
        XPathConstants.NODE);
    String photoId = ele.getAttribute("href").substring(4); // skip "cid:"

    ele = (Element) xu.getValue("//ns:DetailResponse/ns:image/xop:Include", oute,
        XPathConstants.NODE);
    String imageId = ele.getAttribute("href").substring(4); // skip "cid:"

    DataHandler dr = exchange.getMessage(AttachmentMessage.class).getAttachment(decodingReference(photoId));
    assertEquals("application/octet-stream", dr.getContentType());
    assertArrayEquals(MtomTestHelper.RESP_PHOTO_DATA, IOUtils.readBytesFromStream(dr.getInputStream()));

    dr = exchange.getMessage(AttachmentMessage.class).getAttachment(decodingReference(imageId));
    assertEquals("image/jpeg", 
dr.getContentType());

    BufferedImage image = ImageIO.read(dr.getInputStream());
    assertEquals(560, image.getWidth());
    assertEquals(300, image.getHeight());

You can also consume a Camel message received from a CXF endpoint in
Payload mode. The
[CxfMtomConsumerPayloadModeTest](https://github.com/apache/camel/blob/main/components/camel-cxf/camel-cxf-spring-soap/src/test/java/org/apache/camel/component/cxf/mtom/CxfMtomConsumerPayloadModeTest.java#L97)
illustrates how this works:

    public static class MyProcessor implements Processor {

        @Override
        @SuppressWarnings("unchecked")
        public void process(Exchange exchange) throws Exception {
            CxfPayload<SoapHeader> in = exchange.getIn().getBody(CxfPayload.class);

            // verify request
            assertEquals(1, in.getBody().size());

            Map<String, String> ns = new HashMap<>();
            ns.put("ns", MtomTestHelper.SERVICE_TYPES_NS);
            ns.put("xop", MtomTestHelper.XOP_NS);

            XPathUtils xu = new XPathUtils(ns);
            Element body = new XmlConverter().toDOMElement(in.getBody().get(0));
            Element ele = (Element) xu.getValue("//ns:Detail/ns:photo/xop:Include", body,
                XPathConstants.NODE);
            String photoId = ele.getAttribute("href").substring(4); // skip "cid:"
            assertEquals(MtomTestHelper.REQ_PHOTO_CID, photoId);

            ele = (Element) xu.getValue("//ns:Detail/ns:image/xop:Include", body,
                XPathConstants.NODE);
            String imageId = ele.getAttribute("href").substring(4); // skip "cid:"
            assertEquals(MtomTestHelper.REQ_IMAGE_CID, imageId);

            DataHandler dr = exchange.getIn(AttachmentMessage.class).getAttachment(photoId);
            assertEquals("application/octet-stream", dr.getContentType());
            assertArrayEquals(MtomTestHelper.REQ_PHOTO_DATA, IOUtils.readBytesFromStream(dr.getInputStream()));

            dr = exchange.getIn(AttachmentMessage.class).getAttachment(imageId);
            assertEquals("image/jpeg", dr.getContentType());
            assertArrayEquals(MtomTestHelper.requestJpeg, IOUtils.readBytesFromStream(dr.getInputStream()));

            // create response
            List<Source> elements = new 
ArrayList<>();
            elements.add(new DOMSource(StaxUtils.read(new StringReader(MtomTestHelper.RESP_MESSAGE)).getDocumentElement()));
            CxfPayload<SoapHeader> sbody = new CxfPayload<>(
                new ArrayList<SoapHeader>(),
                elements, null);
            exchange.getMessage().setBody(sbody);
            exchange.getMessage(AttachmentMessage.class).addAttachment(MtomTestHelper.RESP_PHOTO_CID,
                new DataHandler(new ByteArrayDataSource(MtomTestHelper.RESP_PHOTO_DATA, "application/octet-stream")));

            exchange.getMessage(AttachmentMessage.class).addAttachment(MtomTestHelper.RESP_IMAGE_CID,
                new DataHandler(new ByteArrayDataSource(MtomTestHelper.responseJpeg, "image/jpeg")));
        }
    }

## RAW Mode

Attachments are not supported, as this mode does not process the message
at all.

## CXF_MESSAGE Mode

MTOM is supported, and attachments can be retrieved by the Camel Message
APIs mentioned above. Note that when receiving a multipart (i.e., MTOM)
message, the default `SOAPMessage` to `String` converter will
provide the complete multipart payload in the body. If you require just
the SOAP XML as a String, you can set the message body with
`message.getSOAPPart()`, and the Camel converter can do the rest of the
work for you.

# Streaming Support in PAYLOAD mode

The Camel CXF component now supports streaming of incoming messages when
using PAYLOAD mode. Previously, the incoming messages would have been
completely DOM parsed. For large messages, this is time-consuming and
uses a significant amount of memory. The incoming messages can remain as
a `javax.xml.transform.Source` while being routed and, if nothing
modifies the payload, can then be directly streamed out to the target
destination. For common "simple proxy" use cases (example:
`from("cxf:...").to("cxf:...")`), this can provide very significant
performance increases as well as significantly lowered memory
requirements.

However, there are cases where streaming may not be appropriate or
desired. 
Due to the streaming nature, invalid incoming XML may not be
caught until later in the processing chain. Also, certain actions may
require the message to be DOM parsed anyway (like WS-Security or message
tracing and such), in which case the advantages of the streaming are
limited. At this point, there are three ways to control the streaming:

- Endpoint property: you can add `allowStreaming=false` as an endpoint
  property to turn the streaming on/off.

- Component property: the `CxfComponent` object also has an
  `allowStreaming` property that can set the default for endpoints
  created from that component.

- Global system property: you can set the system property
  `org.apache.camel.component.cxf.streaming` to `false` to turn it off.
  This sets the global default, but setting the endpoint property above
  will override this value for that endpoint.

# Using the generic CXF Dispatch mode

The Camel CXF component supports the generic [CXF dispatch
mode](https://cxf.apache.org/docs/jax-ws-dispatch-api.html) that can
transport messages of arbitrary structures (i.e., not bound to a
specific XML schema). To use this mode, you omit specifying the
`wsdlURL` and `serviceClass` attributes of the CXF endpoint.

Java (Quarkus)

    import org.apache.camel.component.cxf.common.DataFormat;
    import org.apache.camel.component.cxf.jaxws.CxfEndpoint;
    import jakarta.enterprise.context.SessionScoped;
    import jakarta.enterprise.inject.Produces;
    import jakarta.inject.Named;

    ...

    @Produces
    @SessionScoped
    @Named
    CxfEndpoint dispatchEndpoint() {
        final CxfEndpoint result = new CxfEndpoint();
        result.setDataFormat(DataFormat.PAYLOAD);
        result.setAddress("/SoapAnyPort");
        return result;
    }

XML (Spring) - an equivalent endpoint definition (the bean id and
address mirror the Java example above):

    <cxf:cxfEndpoint id="dispatchEndpoint" address="/SoapAnyPort">
        <cxf:properties>
            <entry key="dataFormat" value="PAYLOAD"/>
        </cxf:properties>
    </cxf:cxfEndpoint>

Note that the default CXF dispatch client does not send a
specific `SOAPAction` header. 
Therefore, when the target service
requires a specific `SOAPAction` value, it is supplied in the Camel
header using the key `SOAPAction` (case-insensitive).

CXF's `LoggingOutInterceptor` outputs the outbound message that goes on
the wire to the logging system (java.util.logging). Since the
`LoggingOutInterceptor` is in the `PRE_STREAM` phase (but the
`PRE_STREAM` phase is removed in `RAW` mode), you have to configure the
`LoggingOutInterceptor` to be run during the `WRITE` phase. The
following is an example.

Java (Quarkus)

    import java.util.List;
    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.component.cxf.common.DataFormat;
    import org.apache.camel.component.cxf.jaxws.CxfEndpoint;
    import org.apache.cxf.interceptor.LoggingOutInterceptor;
    import org.apache.cxf.phase.Phase;
    import jakarta.enterprise.context.SessionScoped;
    import jakarta.enterprise.inject.Produces;
    import jakarta.inject.Named;

    ...

    @Produces
    @SessionScoped
    @Named
    CxfEndpoint soapMtomEnabledServerPayloadModeEndpoint() {
        final CxfEndpoint result = new CxfEndpoint();
        result.setServiceClass(HelloService.class);
        result.setDataFormat(DataFormat.RAW);
        result.setOutFaultInterceptors(List.of(new LoggingOutInterceptor(Phase.WRITE)));
        result.setAddress("/helloworld");
        return result;
    }

XML (Spring) - an equivalent configuration (the bean ids and endpoint
attributes are illustrative):

    <bean id="loggingOutInterceptor" class="org.apache.cxf.interceptor.LoggingOutInterceptor">
        <!-- Run the interceptor in the WRITE phase, as PRE_STREAM is removed in RAW mode -->
        <constructor-arg value="write"/>
    </bean>

    <cxf:cxfEndpoint id="serviceEndpoint" address="/helloworld"
                     serviceClass="org.apache.camel.component.cxf.HelloService">
        <cxf:outFaultInterceptors>
            <ref bean="loggingOutInterceptor"/>
        </cxf:outFaultInterceptors>
        <cxf:properties>
            <entry key="dataFormat" value="RAW"/>
        </cxf:properties>
    </cxf:cxfEndpoint>

# Description of CxfHeaderFilterStrategy options

There are *in-band* and *out-of-band* on-the-wire headers from the
perspective of a JAXWS WSDL-first developer.

The *in-band* headers are headers that are explicitly defined as part of
the WSDL binding contract for an endpoint, such as SOAP headers.

The *out-of-band* headers are headers that are serialized over the wire,
but are not explicitly part of the WSDL binding contract.

Headers relaying/filtering is bi-directional. 

When a route has a CXF endpoint and the developer needs to have
on-the-wire headers, such as SOAP headers, relayed along the route to
be consumed, say, by another JAXWS endpoint, a `CxfHeaderFilterStrategy`
instance should be set on the CXF endpoint, and the `relayHeaders`
property of the `CxfHeaderFilterStrategy` instance should be set to
`true`, which is the default value. In addition, the
`CxfHeaderFilterStrategy` instance holds a list of `MessageHeaderFilter`
instances, which decide whether a specific header will be relayed or
not.

Take a look at the tests that show how you'd be able to relay/drop
headers here:

[CxfMessageHeadersRelayTest](https://github.com/apache/camel/blob/main/components/camel-cxf/camel-cxf-spring-soap/src/test/java/org/apache/camel/component/cxf/soap/headers/CxfMessageHeadersRelayTest.java)

- Setting `relayHeaders=true` expresses an intent to relay the headers.
  The actual decision on whether a given header is relayed is
  delegated to a pluggable instance that implements the
  `MessageHeaderFilter` interface. A concrete implementation of
  `MessageHeaderFilter` will be consulted to decide if a header needs
  to be relayed or not. There is already a `SoapMessageHeaderFilter`
  implementation which binds itself to well-known SOAP
  namespaces. If there is a header on the wire whose namespace is
  unknown to the runtime, the header will simply be relayed.

- `POJO` and `PAYLOAD` modes are supported. In `POJO` mode, only
  out-of-band message headers are available for filtering, as the
  in-band headers have been processed and removed from the header list
  by CXF. The in-band headers are incorporated into the
  `MessageContentList` in POJO mode. The Camel CXF component does not
  make any attempt to remove the in-band headers from the
  `MessageContentList`. If filtering of in-band headers is required,
  please use `PAYLOAD` mode or plug in a (pretty straightforward) CXF
  interceptor/JAXWS Handler to the CXF endpoint. 
Here is an example of
configuring the `CxfHeaderFilterStrategy` (the bean ids are
illustrative):

    <bean id="dropAllMessageHeadersStrategy"
          class="org.apache.camel.component.cxf.common.header.CxfHeaderFilterStrategy">
        <!-- Set relayHeaders to false to drop all SOAP headers -->
        <property name="relayHeaders" value="false"/>
    </bean>

Then, your endpoint can reference the `CxfHeaderFilterStrategy`:

    <route>
        <from uri="cxf:bean:routerNoRelayEndpoint?headerFilterStrategy=#dropAllMessageHeadersStrategy"/>
        <to uri="cxf:bean:serviceNoRelayEndpoint?headerFilterStrategy=#dropAllMessageHeadersStrategy"/>
    </route>

- You can plug in your own `MessageHeaderFilter` implementations,
  overriding or adding additional ones to the list of relays. To
  override a preloaded relay instance, make sure that your
  `MessageHeaderFilter` implementation services the same namespaces
  as the one you are looking to override.

Here is an example of configuring user-defined Message Header Filters
(the custom filter class is illustrative):

    <bean id="customMessageFilterStrategy"
          class="org.apache.camel.component.cxf.common.header.CxfHeaderFilterStrategy">
        <property name="messageHeaderFilters">
            <list>
                <!-- SoapMessageHeaderFilter is the built-in filter; omit it to remove it -->
                <bean class="org.apache.camel.component.cxf.common.header.SoapMessageHeaderFilter"/>
                <!-- Add your custom filter here -->
                <bean class="org.apache.camel.component.cxf.soap.headers.CustomHeaderFilter"/>
            </list>
        </property>
    </bean>

- In addition to `relayHeaders`, the following properties can be
  configured in `CxfHeaderFilterStrategy`.

|Name|Required|Description|
|---|---|---|
|relayHeaders|No|All message headers will be processed by Message Header Filters. Type: boolean. Default: true|
|relayAllMessageHeaders|No|All message headers will be propagated (without processing by Message Header Filters). Type: boolean. Default: false|
|allowFilterNamespaceClash|No|If two filters overlap in activation namespace, this property controls how the clash is handled. If the value is true, the last one wins. If the value is false, an exception is thrown. Type: boolean. Default: false|
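
The per-namespace relay decision that `MessageHeaderFilter`
implementations perform can be sketched in plain Java. This is an
illustrative model only; the `HeaderFilter` interface and class below
are hypothetical stand-ins, not the actual Camel API:

```java
import java.util.List;
import java.util.Set;

// Hypothetical stand-in for the MessageHeaderFilter contract: a filter
// declares the namespaces it activates on and decides whether to relay.
public class RelaySketch {

    public interface HeaderFilter {
        Set<String> activationNamespaces();
        boolean relay(String headerNamespace);
    }

    // Mirrors the documented behaviour: a header whose namespace is
    // unknown to every filter is simply relayed.
    public static boolean shouldRelay(List<HeaderFilter> filters, String ns) {
        for (HeaderFilter f : filters) {
            if (f.activationNamespaces().contains(ns)) {
                return f.relay(ns);
            }
        }
        return true; // unknown namespace: relay by default
    }

    public static void main(String[] args) {
        HeaderFilter dropAddressing = new HeaderFilter() {
            public Set<String> activationNamespaces() {
                return Set.of("http://www.w3.org/2005/08/addressing");
            }
            public boolean relay(String ns) {
                return false; // drop headers in this namespace
            }
        };
        // Known namespace: the filter is consulted and drops the header.
        System.out.println(shouldRelay(List.of(dropAddressing), "http://www.w3.org/2005/08/addressing")); // prints false
        // Unknown namespace: relayed by default.
        System.out.println(shouldRelay(List.of(dropAddressing), "http://example.com/unknown")); // prints true
    }
}
```

This also illustrates why overriding a preloaded filter requires
servicing the same namespaces: the filter whose activation namespace
matches is the one consulted.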

# How to make the Camel CXF component use log4j instead of java.util.logging

CXF's default logger is `java.util.logging`. If you want to change it to
log4j, proceed as follows. Create a file, in the classpath, named
`META-INF/cxf/org.apache.cxf.logger`. This file should contain the fully
qualified name of the class,
`org.apache.cxf.common.logging.Log4jLogger`, with no comments, on a
single line.

# How to let the Camel CXF response start with an XML processing instruction

If you are using some SOAP client such as PHP, you will get this kind of
error, because CXF doesn't add the XML processing instruction
`<?xml version="1.0" encoding="utf-8"?>`:

    Error:sendSms: SoapFault exception: [Client] looks like we got no XML document in [...]

To resolve this issue, you need to tell `StaxOutInterceptor` to write
the XML start document for you, as in the
[WriteXmlDeclarationInterceptor](https://github.com/apache/camel/blob/main/components/camel-cxf/camel-cxf-soap/src/test/java/org/apache/camel/component/cxf/jaxws/WriteXmlDeclarationInterceptor.java)
below:

    public class WriteXmlDeclarationInterceptor extends AbstractPhaseInterceptor<SoapMessage> {
        public WriteXmlDeclarationInterceptor() {
            super(Phase.PRE_STREAM);
            addBefore(StaxOutInterceptor.class.getName());
        }

        public void handleMessage(SoapMessage message) throws Fault {
            message.put("org.apache.cxf.stax.force-start-document", Boolean.TRUE);
        }

    }

As an alternative, you can add a message header for it as demonstrated
in
[CxfConsumerTest](https://github.com/apache/camel/blob/main/components/camel-cxf/camel-cxf-soap/src/test/java/org/apache/camel/component/cxf/jaxws/CxfConsumerTest.java#L62):

    // set up the response context which force start document
    Map<String, Object> map = new HashMap<>();
    map.put("org.apache.cxf.stax.force-start-document", Boolean.TRUE);
    exchange.getMessage().setHeader(Client.RESPONSE_CONTEXT, map);

# Configure the CXF endpoints with Spring

You can configure the CXF endpoint with the Spring configuration file
shown 
below, and you can also embed the endpoint into the `camelContext`
tags. When you are invoking the service endpoint, you can set the
`operationName` and `operationNamespace` headers to explicitly state
which operation you are calling. For example (the endpoint ids,
addresses and service classes are illustrative):

    <cxf:cxfEndpoint id="routerEndpoint"
                     address="http://localhost:9003/CamelContext/RouterPort"
                     serviceClass="org.apache.hello_world_soap_http.GreeterImpl"/>

    <cxf:cxfEndpoint id="serviceEndpoint"
                     address="http://localhost:9000/SoapContext/SoapPort"
                     wsdlURL="testutils/hello_world.wsdl"
                     serviceClass="org.apache.hello_world_soap_http.Greeter"
                     endpointName="s:SoapPort"
                     serviceName="s:SOAPService"
                     xmlns:s="http://apache.org/hello_world_soap_http"/>

Be sure to include the JAX-WS `schemaLocation` attribute specified on
the root `beans` element. This allows CXF to validate the file and is
required. Also note the namespace declarations at the end of the
`<cxf:cxfEndpoint/>` tag. These declarations are required because the
combined `{namespace}localName` syntax is presently not supported for
this tag's attribute values.

The `cxf:cxfEndpoint` element supports many additional attributes:

|Name|Value|
|---|---|
|portName|The endpoint name this service is implementing, it maps to the wsdl:port@name. In the format of ns:PORT_NAME where ns is a namespace prefix valid at this scope.|
|serviceName|The service name this service is implementing, it maps to the wsdl:service@name. In the format of ns:SERVICE_NAME where ns is a namespace prefix valid at this scope.|
|wsdlURL|The location of the WSDL. Can be on the classpath, file system, or be hosted remotely.|
|bindingId|The bindingId for the service model to use.|
|address|The service publish address.|
|bus|The bus name that will be used in the JAX-WS endpoint.|
|serviceClass|The class name of the SEI (Service Endpoint Interface) class, which could have a JSR181 annotation or not.|

It also supports many child elements:

|Name|Value|
|---|---|
|cxf:inInterceptors|The incoming interceptors for this endpoint. A list of `<bean>` or `<ref>`.|
|cxf:inFaultInterceptors|The incoming fault interceptors for this endpoint. A list of `<bean>` or `<ref>`.|
|cxf:outInterceptors|The outgoing interceptors for this endpoint. A list of `<bean>` or `<ref>`.|
|cxf:outFaultInterceptors|The outgoing fault interceptors for this endpoint. A list of `<bean>` or `<ref>`.|
|cxf:properties|A properties map which should be supplied to the JAX-WS endpoint. See below.|
|cxf:handlers|A JAX-WS handler list which should be supplied to the JAX-WS endpoint. See below.|
|cxf:dataBinding|You can specify which `DataBinding` will be used in the endpoint. This can be supplied using the Spring `<bean class="MyDataBinding"/>` syntax.|
|cxf:binding|You can specify the `BindingFactory` for this endpoint to use. This can be supplied using the Spring `<bean class="MyBindingFactory"/>` syntax.|
|cxf:features|The features that hold the interceptors for this endpoint. A list of `<bean>` or `<ref>`.|
|cxf:schemaLocations|The schema locations for the endpoint to use. A list of schemaLocations.|
|cxf:serviceFactory|The service factory for this endpoint to use. This can be supplied using the Spring `<bean class="MyServiceFactory"/>` syntax.|

You can find more advanced examples that show how to provide
interceptors, properties and handlers on the CXF [JAX-WS Configuration
page](http://cxf.apache.org/docs/jax-ws-configuration.html).

You can use `cxf:properties` to set the Camel CXF endpoint's
`dataFormat` and `setDefaultBus` properties from the Spring
configuration file, for example (the endpoint id, address and namespace
are illustrative):

    <cxf:cxfEndpoint id="testEndpoint" address="http://localhost:9000/router"
                     serviceClass="org.apache.camel.component.cxf.HelloService"
                     endpointName="s:PortName"
                     serviceName="s:ServiceName"
                     xmlns:s="http://www.example.com/test">
        <cxf:properties>
            <entry key="dataFormat" value="RAW"/>
            <entry key="setDefaultBus" value="true"/>
        </cxf:properties>
    </cxf:cxfEndpoint>

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|allowStreaming|This option controls whether the CXF component, when running in PAYLOAD mode, will DOM parse the incoming messages into DOM Elements or keep the payload as a javax.xml.transform.Source object that would allow streaming in some cases.||boolean|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
|headerFilterStrategy|To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message.||object|
|useGlobalSslContextParameters|Enable usage of global SSL context parameters.|false|boolean|

## Endpoint Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|beanId|To lookup an existing configured CxfEndpoint. Must use bean: as prefix.||string|
|address|The service publish address.||string|
|dataFormat|The data type messages supported by the CXF endpoint.|POJO|object|
|wrappedStyle|The WSDL style that describes how parameters are represented in the SOAP body. If the value is false, CXF will choose the document-literal unwrapped style. If the value is true, CXF will choose the document-literal wrapped style.||boolean|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
|cookieHandler|Configure a cookie handler to maintain an HTTP session.||object|
|defaultOperationName|This option will set the default operationName that will be used by the CxfProducer which invokes the remote service.||string|
|defaultOperationNamespace|This option will set the default operationNamespace that will be used by the CxfProducer which invokes the remote service.||string|
|hostnameVerifier|The hostname verifier to be used. Use the # notation to reference a HostnameVerifier from the registry.||object|
|sslContextParameters|The Camel SSL setting reference. Use the # notation to reference the SSL Context.||object|
|wrapped|Which kind of operation the CXF endpoint producer will invoke.|false|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|synchronous|Sets whether synchronous processing should be strictly used.|false|boolean|
|allowStreaming|This option controls whether the CXF component, when running in PAYLOAD mode, will DOM parse the incoming messages into DOM Elements or keep the payload as a javax.xml.transform.Source object that would allow streaming in some cases.||boolean|
|bus|To use a custom configured CXF Bus.||object|
|continuationTimeout|This option is used to set the CXF continuation timeout, which could be used in CxfConsumer by default when the CXF server is using Jetty or Servlet transport.|30000|duration|
|cxfBinding|To use a custom CxfBinding to control the binding between Camel Message and CXF Message.||object|
|cxfConfigurer|This option applies an implementation of org.apache.camel.component.cxf.CxfEndpointConfigurer, which supports configuring the CXF endpoint programmatically. Users can configure the CXF server and client by implementing the configure{Server/Client} methods of CxfEndpointConfigurer.||object|
|defaultBus|Will set the default bus when the CXF endpoint creates a bus by itself.|false|boolean|
|headerFilterStrategy|To use a custom HeaderFilterStrategy to filter header to and from Camel message.||object|
|mergeProtocolHeaders|Whether to merge protocol headers. If enabled then propagating headers between Camel and CXF becomes more consistent and similar. For more details see CAMEL-6393.|false|boolean|
|mtomEnabled|To enable MTOM (attachments). This requires using the POJO or PAYLOAD data format mode.|false|boolean|
|properties|To set additional CXF options using the key/value pairs from the Map. 
For example, to turn on stacktraces in SOAP faults, set properties.faultStackTraceEnabled=true.||object|
|schemaValidationEnabled|Enable schema validation for request and response. Disabled by default for performance reasons.|false|boolean|
|skipPayloadMessagePartCheck|Sets whether SOAP message validation should be disabled.|false|boolean|
|loggingFeatureEnabled|This option enables the CXF Logging Feature, which writes inbound and outbound SOAP messages to the log.|false|boolean|
|loggingSizeLimit|Limits the total number of bytes the logger will output when the logging feature has been enabled. Use -1 for no limit.|49152|integer|
|skipFaultLogging|This option controls whether the PhaseInterceptorChain skips logging the Fault that it catches.|false|boolean|
|password|This option is used to set the basic authentication password for the CXF client.||string|
|username|This option is used to set the basic authentication username for the CXF client.||string|
|bindingId|The bindingId for the service model to use.||string|
|portName|The endpoint name this service is implementing, it maps to the wsdl:portname. In the format of ns:PORT_NAME where ns is a namespace prefix valid at this scope.||string|
|publishedEndpointUrl|This option can override the endpointUrl that is published from the WSDL, which can be accessed with the service address URL plus ?wsdl.||string|
|serviceClass|The class name of the SEI (Service Endpoint Interface) class, which could have a JSR181 annotation or not.||string|
|serviceName|The service name this service is implementing, it maps to the wsdl:servicename.||string|
|wsdlURL|The location of the WSDL. 
Can be on the classpath, file system, or be hosted remotely.||string|
diff --git a/camel-cxfrs.md b/camel-cxfrs.md
new file mode 100644
index 0000000000000000000000000000000000000000..28ee5d5eab7a2a6dd640381eab7f68feee477add
--- /dev/null
+++ b/camel-cxfrs.md
@@ -0,0 +1,411 @@

# Cxfrs

**Since Camel 2.0**

**Both producer and consumer are supported**

The CXFRS component provides integration with [Apache
CXF](http://cxf.apache.org) for connecting to JAX-RS 1.1 and 2.0
services hosted in CXF.

Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-cxf-rest</artifactId>
        <version>x.x.x</version>
    </dependency>

# URI format

    cxfrs://address?options

Where **address** represents the CXF endpoint's address.

    cxfrs:bean:rsEndpoint

Where **rsEndpoint** represents the Spring bean's name, which refers to
the CXFRS client or server.

For either style above, you can append options to the URI as follows:

    cxfrs:bean:cxfEndpoint?resourceClasses=org.apache.camel.rs.Example

You can also configure the CXF REST endpoint through the Spring
configuration.

Since there are lots of differences between the CXF REST client and the
CXF REST server, we provide different configurations for them.

Please check the following files for more details:

- the [schema
  file](https://github.com/apache/camel/blob/main/components/camel-cxf/camel-cxf-spring-rest/src/main/resources/schema/cxfJaxrsEndpoint.xsd).

- [CXF JAX-RS documentation](http://cxf.apache.org/docs/jax-rs.html).

# How to configure the REST endpoint in Camel

In the [camel-cxf schema
file](https://github.com/apache/camel/blob/main/components/camel-cxf/camel-cxf-spring-rest/src/main/resources/schema/cxfJaxrsEndpoint.xsd),
there are two elements for the REST endpoint definition:

- `cxf:rsServer` for the REST consumer

- `cxf:rsClient` for the REST producer.

You can find a Camel REST service route configuration example there. 
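
For illustration, a minimal `cxf:rsServer`/`cxf:rsClient` pair might
look like the following; the bean ids and address are placeholders, and
the resource class reuses the example class from above:

```xml
<cxf:rsServer id="rsServer" address="http://localhost:9002/rest"
              serviceClass="org.apache.camel.rs.Example"/>

<cxf:rsClient id="rsClient" address="http://localhost:9002/rest"
              serviceClass="org.apache.camel.rs.Example"/>
```

A route can then consume from `cxfrs:bean:rsServer` and produce to
`cxfrs:bean:rsClient`, following the `cxfrs:bean:` URI style shown
earlier.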
+ +# How to override the CXF producer address from message header + +The `camel-cxfrs` producer supports overriding the service address by +setting a message header with the key `CamelDestinationOverrideUrl`. + + // set up the service address from the message header to override the setting of CXF endpoint + exchange.getIn().setHeader(Exchange.DESTINATION_OVERRIDE_URL, constant(getServiceAddress())); + +# Consuming a REST Request - Simple Binding Style + +**Since Camel 2.11** + +The `Default` binding style is rather low-level, requiring the user to +manually process the `MessageContentsList` object coming into the route. +Thus, it tightly couples the route logic with the method signature and +parameter indices of the JAX-RS operation. This is somewhat inelegant, +difficult, and error-prone. + +In contrast, the `SimpleConsumer` binding style performs the following +mappings, to **make the request data more accessible** to you within the +Camel Message: + +- JAX-RS Parameters (`@HeaderParam`, `@QueryParam`, etc.) are injected + as *IN* message headers. The header name matches the value of the + annotation. + +- The request entity (POJO or another type) becomes the *IN* message + body. If a single entity cannot be identified in the JAX-RS method + signature, it falls back to the original `MessageContentsList`. + +- Binary `@Multipart` body parts become *IN* message attachments, + supporting `DataHandler`, `InputStream`, `DataSource`, and CXF’s + `Attachment` class. + +- Non-binary `@Multipart` body parts are mapped as *IN* message + headers. The header name matches the Body Part name. + +Additionally, the following rules apply to the **Response mapping**: + +- If the message body type is different from `javax.ws.rs.core.Response` + (user-built response), a new `Response` is created and the message + body is set as the entity (as long as it is not null). The response + status code is taken from the `Exchange.HTTP_RESPONSE_CODE` header, + or defaults to 200 OK if not present.
+ +- If the message body type is equal to `javax.ws.rs.core.Response`, it + means that the user has built a custom response, and therefore it is + respected, and it becomes the final response. + +- In all cases, Camel headers permitted by the custom or default + `HeaderFilterStrategy` are added to the HTTP response. + +## Enabling the Simple Binding Style + +This binding style can be activated by setting the `bindingStyle` +parameter in the consumer endpoint to the value `SimpleConsumer`: + + from("cxfrs:bean:rsServer?bindingStyle=SimpleConsumer") + .to("log:TEST?showAll=true"); + +## Examples of request binding with different method signatures + +Below is a list of method signatures along with the expected result from +the simple binding: + +- `public Response doAction(BusinessObject request);`: the request + payload is placed in the *IN* message body, replacing the original + `MessageContentsList`. + +- `public Response doAction(BusinessObject request, @HeaderParam("abcd") String abcd, @QueryParam("defg") String defg);`: + the request payload is placed in the *IN* message body, replacing + the original `MessageContentsList`. Both request parameters are + mapped as *IN* message headers with names *"abcd"* and *"defg"*. + +- `public Response doAction(@HeaderParam("abcd") String abcd, @QueryParam("defg") String defg);`: + both request parameters are mapped as *IN* message headers with + names *"abcd"* and *"defg"*. The original `MessageContentsList` is + preserved, even though it only contains the two parameters. + +- `public Response doAction(@Multipart(value="body1") BusinessObject request, @Multipart(value="body2") BusinessObject request2);`: + the first parameter is transferred as a header with name *"body1"*, + and the second one is mapped as header *"body2"*. The original + `MessageContentsList` is preserved as the *IN* message body.
+ +- `public Response doAction(InputStream abcd);`: the `InputStream` is + unwrapped from the `MessageContentsList` and preserved as the *IN* + message body. + +- `public Response doAction(DataHandler abcd);`: the *DataHandler* is + unwrapped from the `MessageContentsList` and preserved as the *IN* + message body. + +## More examples of the Simple Binding Style + +Given a JAX-RS resource class with this method: + + @POST @Path("/customers/{type}") + public Response newCustomer(Customer customer, @PathParam("type") String type, @QueryParam("active") @DefaultValue("true") boolean active) { + return null; + } + +Serviced by the following route: + + from("cxfrs:bean:rsServer?bindingStyle=SimpleConsumer") + .recipientList(simple("direct:${header.operationName}")); + + from("direct:newCustomer") + .log("Request: type=${header.type}, active=${header.active}, customerData=${body}"); + +The following HTTP request with XML payload (given that the Customer DTO +is JAXB-annotated): + + POST /customers/gold?active=true + + Payload: + + Raul Kripalani + Spain + Apache Camel + + +Will print the message: + + Request: type=gold, active=true, customerData= + +More examples on how to process requests and write responses can be +found +[here](https://svn.apache.org/repos/asf/camel/trunk/components/camel-cxf/src/test/java/org/apache/camel/component/cxf/jaxrs/simplebinding/). + +# Consuming a REST Request - Default Binding Style + +The [CXF JAXRS front end](http://cxf.apache.org/docs/jax-rs.html) +implements the [JAX-RS (JSR-311) API](https://javaee.github.io/jsr311/), +so we can export the resource classes as a REST service. And we leverage +the [CXF Invoker API](http://cxf.apache.org/docs/invokers.html) to turn +a REST request into a normal Java object method invocation. You don’t +need to specify the URI template within your endpoint, CXF takes care of +the REST request URI to resource class method mapping according to the +JSR-311 specification. 
All you need to do in Camel is delegate this +method request to the right processor or endpoint. + +Here is an example of a CXFRS route… + + private static final String CXF_RS_ENDPOINT_URI = + "cxfrs://http://localhost:" + CXT + "/rest?resourceClasses=org.apache.camel.component.cxf.jaxrs.testbean.CustomerServiceResource"; + private static final String CXF_RS_ENDPOINT_URI2 = + "cxfrs://http://localhost:" + CXT + "/rest2?resourceClasses=org.apache.camel.component.cxf.jaxrs.testbean.CustomerService"; + private static final String CXF_RS_ENDPOINT_URI3 = + "cxfrs://http://localhost:" + CXT + "/rest3?" + + "resourceClasses=org.apache.camel.component.cxf.jaxrs.testbean.CustomerServiceNoAnnotations&" + + "modelRef=classpath:/org/apache/camel/component/cxf/jaxrs/CustomerServiceModel.xml"; + private static final String CXF_RS_ENDPOINT_URI4 = + "cxfrs://http://localhost:" + CXT + "/rest4?" + + "modelRef=classpath:/org/apache/camel/component/cxf/jaxrs/CustomerServiceDefaultHandlerModel.xml"; + private static final String CXF_RS_ENDPOINT_URI5 = + "cxfrs://http://localhost:" + CXT + "/rest5?" 
+ + "propagateContexts=true&" + + "modelRef=classpath:/org/apache/camel/component/cxf/jaxrs/CustomerServiceDefaultHandlerModel.xml"; + protected RouteBuilder createRouteBuilder() throws Exception { + final Processor testProcessor = new TestProcessor(); + final Processor testProcessor2 = new TestProcessor2(); + final Processor testProcessor3 = new TestProcessor3(); + return new RouteBuilder() { + public void configure() { + errorHandler(new NoErrorHandlerBuilder()); + from(CXF_RS_ENDPOINT_URI).process(testProcessor); + from(CXF_RS_ENDPOINT_URI2).process(testProcessor); + from(CXF_RS_ENDPOINT_URI3).process(testProcessor); + from(CXF_RS_ENDPOINT_URI4).process(testProcessor2); + from(CXF_RS_ENDPOINT_URI5).process(testProcessor3); + } + }; + } + +And the corresponding resource class used to configure the endpoint… + +**Note about resource classes** + +By default, JAX-RS resource classes are **only** used to configure +JAX-RS properties. Methods will **not** be executed during routing of +messages to the endpoint. Instead, it is the responsibility of the route +to do all processing. + +It is sufficient to provide an interface only as opposed to a no-op +service implementation class for the default mode. + +If a **performInvocation** option is enabled, the service implementation +will be invoked first, the response will be set on the Camel exchange, +and the route execution will continue as usual. This can be useful for +integrating the existing JAX-RS implementations into Camel routes and +for post-processing JAX-RS Responses in custom processors. 
+ + @Path("/customerservice/") + public interface CustomerServiceResource { + + @GET + @Path("/customers/{id}/") + Customer getCustomer(@PathParam("id") String id); + + @PUT + @Path("/customers/") + Response updateCustomer(Customer customer); + + @Path("/{id}") + @PUT() + @Consumes({ "application/xml", "text/plain", + "application/json" }) + @Produces({ "application/xml", "text/plain", + "application/json" }) + Object invoke(@PathParam("id") String id, + String payload); + } + +# How to invoke the REST service through camel-cxfrs producer + +The [CXF JAXRS front end](http://cxf.apache.org/docs/jax-rs.html) +implements [a proxy-based client +API](http://cxf.apache.org/docs/jax-rs-client-api.html#JAX-RSClientAPI-Proxy-basedAPI). +With this API, you can invoke the remote REST service through a proxy. +The `camel-cxfrs` producer is based on this [proxy +API](http://cxf.apache.org/docs/jax-rs-client-api.html#JAX-RSClientAPI-Proxy-basedAPI). +You need to specify the operation name in the message header and prepare +the parameters in the message body; the camel-cxfrs producer will then +generate the right REST request for you.
+ +Here is an example: + + Exchange exchange = template.send("direct://proxy", new Processor() { + public void process(Exchange exchange) throws Exception { + exchange.setPattern(ExchangePattern.InOut); + Message inMessage = exchange.getIn(); + // set the operation name + inMessage.setHeader(CxfConstants.OPERATION_NAME, "getCustomer"); + // using the proxy client API + inMessage.setHeader(CxfConstants.CAMEL_CXF_RS_USING_HTTP_API, Boolean.FALSE); + // set a customer header + inMessage.setHeader("key", "value"); + // set up the accepted content type + inMessage.setHeader(Exchange.ACCEPT_CONTENT_TYPE, "application/json"); + // set the parameters, if you just have one parameter, + // camel will put this object into an Object[] itself + inMessage.setBody("123"); + } + }); + + // get the response message + Customer response = (Customer) exchange.getMessage().getBody(); + + assertNotNull(response, "The response should not be null"); + assertEquals(123, response.getId(), "Get a wrong customer id"); + assertEquals("John", response.getName(), "Get a wrong customer name"); + assertEquals(200, exchange.getMessage().getHeader(Exchange.HTTP_RESPONSE_CODE), "Get a wrong response code"); + assertEquals("value", exchange.getMessage().getHeader("key"), "Get a wrong header value"); + +The [CXF JAXRS front end](http://cxf.apache.org/docs/jax-rs.html) also +provides [an HTTP centric client +API](http://cxf.apache.org/docs/jax-rs-client-api.html#JAX-RSClientAPI-CXFWebClientAPI). +You can also invoke this API from `camel-cxfrs` producer. 
You need to +specify the +[HTTP\_PATH](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Exchange.html#HTTP_PATH) +and the +[HTTP\_METHOD](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Exchange.html#HTTP_METHOD) +and let the producer use the HTTP centric client API, either by using the URI +option **httpClientAPI** or by setting the message header +[CxfConstants.CAMEL\_CXF\_RS\_USING\_HTTP\_API](https://www.javadoc.io/doc/org.apache.camel/camel-cxf-transport/current/org/apache/camel/component/cxf/common/message/CxfConstants.html#CAMEL_CXF_RS_USING_HTTP_API). +You can convert the response object to the class specified with the +message header +[CxfConstants.CAMEL\_CXF\_RS\_RESPONSE\_CLASS](https://www.javadoc.io/doc/org.apache.camel/camel-cxf-transport/current/org/apache/camel/component/cxf/common/message/CxfConstants.html#CAMEL_CXF_RS_RESPONSE_CLASS). + + Exchange exchange = template.send("direct://http", new Processor() { + public void process(Exchange exchange) throws Exception { + exchange.setPattern(ExchangePattern.InOut); + Message inMessage = exchange.getIn(); + // using the HTTP centric client API + inMessage.setHeader(CxfConstants.CAMEL_CXF_RS_USING_HTTP_API, Boolean.TRUE); + // set the HTTP method + inMessage.setHeader(Exchange.HTTP_METHOD, "GET"); + // set the relative path + inMessage.setHeader(Exchange.HTTP_PATH, "/customerservice/customers/123"); + // specify the response class; otherwise cxfrs will use InputStream as the response object type + inMessage.setHeader(CxfConstants.CAMEL_CXF_RS_RESPONSE_CLASS, Customer.class); + // set a custom header + inMessage.setHeader("key", "value"); + // since we use the GET method, we don't need to set the message body + inMessage.setBody(null); + } + }); + +We also support specifying query parameters in the cxfrs URI for the +CXFRS HTTP centric client.
+ + Exchange exchange = template.send("cxfrs://http://localhost:9003/testQuery?httpClientAPI=true&q1=12&q2=13" + +To support dynamic routing, you can override the URI’s query +parameters by using the +[CxfConstants.CAMEL\_CXF\_RS\_QUERY\_MAP](https://www.javadoc.io/doc/org.apache.camel/camel-cxf-transport/current/org/apache/camel/component/cxf/common/message/CxfConstants.html#CAMEL_CXF_RS_QUERY_MAP) +header to set the parameter map for it. + + Map<String, String> queryMap = new LinkedHashMap<>(); + queryMap.put("q1", "new"); + queryMap.put("q2", "world"); + inMessage.setHeader(CxfConstants.CAMEL_CXF_RS_QUERY_MAP, queryMap); + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|headerFilterStrategy|To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter headers to and from the Camel message.||object| +|useGlobalSslContextParameters|Enable usage of global SSL context parameters.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|beanId|To look up an existing configured CxfRsEndpoint. Must use bean: as the prefix.||string| +|address|The service publish address.||string| +|features|Set the feature list to the CxfRs endpoint.||array| +|modelRef|This option is used to specify the model file, which is useful for a resource class without annotations. When using this option, the service class can be omitted, to emulate document-only endpoints.||string| +|providers|Set custom JAX-RS provider(s) list to the CxfRs endpoint. You can specify a string with a list of providers to look up in the registry, separated by comma.||string| +|resourceClasses|The resource classes which you want to export as REST service. Multiple classes can be separated by comma.||array| +|schemaLocations|Sets the locations of the schema(s) which can be used to validate the incoming XML or JAXB-driven JSON.||array| +|skipFaultLogging|This option controls whether the PhaseInterceptorChain skips logging the Fault that it catches.|false|boolean| +|bindingStyle|Sets how requests and responses will be mapped to/from Camel.
Two values are possible: SimpleConsumer: This binding style processes request parameters, multiparts, etc. and maps them to IN headers, IN attachments and to the message body. It aims to eliminate low-level processing of org.apache.cxf.message.MessageContentsList. It also adds more flexibility and simplicity to the response mapping. Only available for consumers. Default: The default style. For consumers this passes on a MessageContentsList to the route, requiring low-level processing in the route. This is the traditional binding style, which simply dumps the org.apache.cxf.message.MessageContentsList coming in from the CXF stack onto the IN message body. The user is then responsible for processing it according to the contract defined by the JAX-RS method signature. Custom: allows you to specify a custom binding through the binding option.|Default|object| +|publishedEndpointUrl|This option can override the endpointUrl that is published from the WADL, which can be accessed with the resource address URL plus \_wadl||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use.
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|serviceBeans|The service beans (the bean ids to lookup in the registry) which you want to export as REST service. Multiple beans can be separated by comma||string| +|cookieHandler|Configure a cookie handler to maintain a HTTP session||object| +|hostnameVerifier|The hostname verifier to be used. Use the # notation to reference a HostnameVerifier from the registry.||object| +|sslContextParameters|The Camel SSL setting reference. Use the # notation to reference the SSL Context.||object| +|throwExceptionOnFailure|This option tells the CxfRsProducer to inspect return codes and will generate an Exception if the return code is larger than 207.|true|boolean| +|httpClientAPI|If it is true, the CxfRsProducer will use the HttpClientAPI to invoke the service. If it is false, the CxfRsProducer will use the ProxyClientAPI to invoke the service|true|boolean| +|ignoreDeleteMethodMessageBody|This option is used to tell CxfRsProducer to ignore the message body of the DELETE method when using HTTP API.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|maxClientCacheSize|This option allows you to configure the maximum size of the cache. 
The implementation caches CXF clients or ClientFactoryBean in CxfProvider and CxfRsProvider.|10|integer| +|synchronous|Sets whether synchronous processing should be strictly used|false|boolean| +|binding|To use a custom CxfBinding to control the binding between Camel Message and CXF Message.||object| +|bus|To use a custom configured CXF Bus.||object| +|continuationTimeout|This option is used to set the CXF continuation timeout, which is used in CxfConsumer by default when the CXF server is using the Jetty or Servlet transport.|30000|duration| +|cxfRsConfigurer|This option applies an implementation of org.apache.camel.component.cxf.jaxrs.CxfRsEndpointConfigurer, which supports configuring the CXF endpoint programmatically. Users can configure the CXF server and client by implementing the configure{Server/Client} method of CxfEndpointConfigurer.||object| +|defaultBus|Will set the default bus when the CXF endpoint creates a bus by itself|false|boolean| +|headerFilterStrategy|To use a custom HeaderFilterStrategy to filter headers to and from the Camel message.||object| +|performInvocation|When the option is true, Camel will perform the invocation of the resource class instance and put the response object into the exchange for further processing.|false|boolean| +|propagateContexts|When the option is true, JAXRS UriInfo, HttpHeaders, Request and SecurityContext contexts will be available to custom CXFRS processors as typed Camel exchange properties.
These contexts can be used to analyze the current requests using the JAX-RS API.|false|boolean| +|loggingFeatureEnabled|This option enables the CXF Logging Feature, which writes inbound and outbound REST messages to the log.|false|boolean| +|loggingSizeLimit|Limits the total number of bytes the logger will output when the logging feature is enabled. Use -1 for no limit.|49152|integer| diff --git a/camel-dataformat.md b/camel-dataformat.md new file mode 100644 index 0000000000000000000000000000000000000000..3f22822a832301bc6e2da98783948d0b3adfb6b9 --- /dev/null +++ b/camel-dataformat.md @@ -0,0 +1,55 @@ +# Dataformat + +**Since Camel 2.12** + +**Only producer is supported** + +The Data Format component allows using [Data +Format](#manual::data-format.adoc) as a Camel Component. + +# URI format + + dataformat:name:(marshal|unmarshal)[?options] + +Where **name** is the name of the Data Format, followed by the operation, +which must be either `marshal` or `unmarshal`. The options are +used for configuring the [Data Format](#manual::data-format.adoc) in +use. See the Data Format documentation for which options it supports. + +# DataFormat Options + +# Samples + +For example, to use the [JAXB](#dataformats:jaxb-dataformat.adoc) [Data +Format](#manual::data-format.adoc), we can do as follows: + +Java + + from("activemq:My.Queue"). + to("dataformat:jaxb:unmarshal?contextPath=com.acme.model"). + to("mqseries:Another.Queue"); + +XML + + <camelContext xmlns="http://camel.apache.org/schema/spring"> + <route> + <from uri="activemq:My.Queue"/> + <to uri="dataformat:jaxb:unmarshal?contextPath=com.acme.model"/> + <to uri="mqseries:Another.Queue"/> + </route> + </camelContext> + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|name|Name of data format||string| +|operation|Operation to use either marshal or unmarshal||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-dataset-test.md b/camel-dataset-test.md new file mode 100644 index 0000000000000000000000000000000000000000..ffaa7639a71663e9cac86eca46e9cfdf9e978003 --- /dev/null +++ b/camel-dataset-test.md @@ -0,0 +1,84 @@ +# Dataset-test + +**Since Camel 1.3** + +**Only producer is supported** + +Testing of distributed and asynchronous processing is notoriously +difficult. 
The [Mock](#mock-component.adoc), +[DataSet](#dataset-component.adoc), and [DataSet +Test](#dataset-test-component.adoc) endpoints work great with the Camel +Testing Framework to simplify your unit and integration testing using +[Enterprise Integration +Patterns](#eips:enterprise-integration-patterns.adoc) and Camel’s large +range of Components together with the powerful Bean Integration. + +The **dataset-test** component extends the [Mock](#mock-component.adoc) +component to support pulling messages from another endpoint on startup +to set the expected message bodies on the underlying +[Mock](#mock-component.adoc) endpoint. + +That is, you use the dataset test endpoint in a route and messages +arriving at it will be implicitly compared to some expected messages +extracted from some other location. + +So you can use, for example, an expected set of message bodies as files. +This will then set up a properly configured [Mock](#mock-component.adoc) +endpoint, which is only valid if the received messages match the number +of expected messages and their message payloads are equal. + +# URI format + + dataset-test:expectedMessagesEndpointUri + +Where **expectedMessagesEndpointUri** refers to some other Component URI +that the expected message bodies are pulled from before starting the +test. + +# Example + +For example, you could write a test case as follows: + + from("seda:someEndpoint"). + to("dataset-test:file://data/expectedOutput?noop=true"); + +If your test then invokes the +[MockEndpoint.assertIsSatisfied(camelContext) +method](https://www.javadoc.io/doc/org.apache.camel/camel-mock/current/org/apache/camel/component/mock/MockEndpoint.html#assertIsSatisfied-org.apache.camel.CamelContext-), +your test case will perform the necessary assertions. + +To see how you can set other expectations on the test endpoint, see the +[Mock](#mock-component.adoc) component. 
+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|log|To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging, then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|exchangeFormatter|Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter.||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|name|Name of endpoint to lookup in the registry to use for polling messages used for testing||string| +|anyOrder|Whether the expected messages should arrive in the same order or can be in any order.|false|boolean| +|assertPeriod|Sets a grace period after which the mock endpoint will re-assert to ensure the preliminary assertion is still valid. 
This is used, for example, to assert that an exact number of messages arrive. For example, if the expected count was set to 5, then the assertion is satisfied when five or more messages arrive. To ensure that exactly 5 messages arrive, you would need to wait a little while to ensure no further message arrives. This is what you can use this method for. By default, this period is disabled.||duration| +|delimiter|The split delimiter to use when split is enabled. By default the delimiter is new line based. The delimiter can be a regular expression.||string| +|expectedCount|Specifies the expected number of message exchanges that should be received by this endpoint. Beware: If you expect 0 messages, then take extra care, as 0 matches when the test starts, so you need to set an assert period time to let the test run for a while to make sure there are still no messages arrived; for that use setAssertPeriod(long). An alternative is to use NotifyBuilder, and use the notifier to know when Camel is done routing some messages, before you call the assertIsSatisfied() method on the mocks. This allows you to not use a fixed assert period, to speed up testing times. If you want to assert that exactly the nth message arrives at this mock endpoint, then see also the setAssertPeriod(long) method for further details.|-1|integer| +|failFast|Sets whether assertIsSatisfied() should fail fast at the first detected failed expectation while it may otherwise wait for all expected messages to arrive before performing expectations verifications. Is by default true. Set to false to use behavior as in Camel 2.x.|false|boolean| +|log|To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message.
For more detailed logging, set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class.|false|boolean| +|reportGroup|A number that is used to turn on throughput logging based on groups of the size.||integer| +|resultMinimumWaitTime|Sets the minimum expected amount of time (in millis) the assertIsSatisfied() will wait on a latch until it is satisfied||duration| +|resultWaitTime|Sets the maximum amount of time (in millis) the assertIsSatisfied() will wait on a latch until it is satisfied||duration| +|retainFirst|Specifies to only retain the first n received Exchanges. This is used when testing with big data, to reduce memory consumption by not storing copies of every Exchange this mock endpoint receives. Important: When using this limitation, the getReceivedCounter() will still return the actual number of received Exchanges. For example, if we have received 5000 Exchanges, and have configured to only retain the first 10 Exchanges, then the getReceivedCounter() will still return 5000, but only the first 10 Exchanges are in the getExchanges() and getReceivedExchanges() methods. When using this method, some of the other expectation methods are not supported; for example, the expectedBodiesReceived(Object...) method sets an expectation on the first number of bodies received. You can configure both the setRetainFirst(int) and setRetainLast(int) methods, to limit both the first and last received.|-1|integer| +|retainLast|Specifies to only retain the last n received Exchanges. This is used when testing with big data, to reduce memory consumption by not storing copies of every Exchange this mock endpoint receives. Important: When using this limitation, the getReceivedCounter() will still return the actual number of received Exchanges.
For example if we have received 5000 Exchanges, and have configured to only retain the last 20 Exchanges, then the getReceivedCounter() will still return 5000 but there is only the last 20 Exchanges in the getExchanges() and getReceivedExchanges() methods. When using this method, then some of the other expectation methods is not supported, for example the expectedBodiesReceived(Object...) sets a expectation on the first number of bodies received. You can configure both setRetainFirst(int) and setRetainLast(int) methods, to limit both the first and last received.|-1|integer| +|sleepForEmptyTest|Allows a sleep to be specified to wait to check that this endpoint really is empty when expectedMessageCount(int) is called with zero||duration| +|split|If enabled the messages loaded from the test endpoint will be split using new line delimiters so each line is an expected message. For example to use a file endpoint to load a file where each line is an expected message.|false|boolean| +|timeout|The timeout to use when polling for message bodies from the URI|2000|duration| +|copyOnExchange|Sets whether to make a deep copy of the incoming Exchange when received at this mock endpoint. Is by default true.|true|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-dataset.md b/camel-dataset.md new file mode 100644 index 0000000000000000000000000000000000000000..e7b20bda5e962d0f00faacea3e91bdbc4b8df74d --- /dev/null +++ b/camel-dataset.md @@ -0,0 +1,300 @@ +# Dataset + +**Since Camel 1.3** + +**Both producer and consumer are supported** + +Testing of distributed and asynchronous processing is notoriously +challenging. The [Mock](#mock-component.adoc), +[DataSet](#dataset-component.adoc), and [DataSet +Test](#dataset-test-component.adoc) endpoints work with the Camel +Testing Framework to simplify your unit and integration testing using +[Enterprise Integration +Patterns](#eips:enterprise-integration-patterns.adoc) and Camel’s large +range of Components together with the powerful Bean Integration. + +The DataSet component provides a mechanism to easily perform load \& soak +testing of your system. It works by allowing you to create [DataSet +instances](https://www.javadoc.io/doc/org.apache.camel/camel-dataset/current/org/apache/camel/component/dataset/DataSet.html) +both as a source of messages and as a way to assert that the data set is +received. + +Camel will use the [throughput logger](#log-component.adoc) when sending +the dataset. + +# URI format + + dataset:name[?options] + +Where **name** is used to find the [DataSet +instance](https://www.javadoc.io/doc/org.apache.camel/camel-dataset/current/org/apache/camel/component/dataset/DataSet.html) +in the Registry + +Camel ships with a support implementation of +`org.apache.camel.component.dataset.DataSet`, the +`org.apache.camel.component.dataset.DataSetSupport` class, that can be +used as a base for implementing your own data set. 
Camel also ships with some implementations that can be used for testing:
`org.apache.camel.component.dataset.SimpleDataSet`,
`org.apache.camel.component.dataset.ListDataSet` and
`org.apache.camel.component.dataset.FileDataSet`, all of which extend
`DataSetSupport`.

# Configuring DataSet

Camel will look up in the Registry for a bean implementing the `DataSet`
interface. So you can register your own data set by binding a `DataSet`
bean in the Registry under the name used in the endpoint URI.

# Example

For example, to test that a set of messages are sent to a queue and then
consumed from the queue without losing any messages:

    // send the dataset to a queue
    from("dataset:foo").to("activemq:SomeQueue");

    // now lets test that the messages are consumed correctly
    from("activemq:SomeQueue").to("dataset:foo");

The above would look in the Registry to find the `foo` `DataSet`
instance, which is used to create the messages.

Then you create a `DataSet` implementation, such as the `SimpleDataSet`
described below, configuring things like how big the data set is and
what the messages look like.

# DataSetSupport (abstract class)

The `DataSetSupport` abstract class is a nice starting point for new
data sets, and provides some useful features to derived classes.

## Properties on DataSetSupport
|Property|Type|Default|Description|
|---|---|---|---|
|defaultHeaders|Map<String,Object>|null|Specifies the default message headers to set on each exchange. For custom headers per message, create your own derivation of DataSetSupport.|
|outputTransformer|org.apache.camel.Processor|null|An optional Processor used to transform each message, for example to create custom payloads per exchange.|
|size|long|10|Specifies how many messages to send/consume.|
|reportCount|long|-1|Specifies the number of messages to be received before reporting progress. Useful for showing the progress of a large load test. If smaller than zero (< 0), then size / 5 is used; if 0, then size; otherwise the configured reportCount value.|
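The `reportCount` defaulting rule described above can be sketched in plain Java (an illustrative helper, not part of Camel's API):

```java
public class ReportCountDemo {
    // Mirrors the reportCount rule from the table above:
    // negative -> size / 5, zero -> size, positive -> the configured value.
    static long effectiveReportCount(long reportCount, long size) {
        if (reportCount < 0) {
            return size / 5;
        }
        if (reportCount == 0) {
            return size;
        }
        return reportCount;
    }

    public static void main(String[] args) {
        System.out.println(effectiveReportCount(-1, 1000)); // 200
        System.out.println(effectiveReportCount(0, 1000));  // 1000
        System.out.println(effectiveReportCount(250, 1000)); // 250
    }
}
```

So with the default `reportCount` of -1 and a `size` of 1000, progress is reported every 200 messages.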

+ +# SimpleDataSet + +The `SimpleDataSet` extends `DataSetSupport`, and adds a default body. + +## Additional Properties on SimpleDataSet + + ++++++ + + + + + + + + + + + + + + + + +
|Property|Type|Default|Description|
|---|---|---|---|
|defaultBody|Object|`<hello>world!</hello>`|Specifies the default message body. By default, the SimpleDataSet produces the same constant payload for each exchange. If you want to customize the payload for each exchange, create a Camel Processor and configure the SimpleDataSet to use it by setting the outputTransformer property.|

+ +# ListDataSet + +The List\`DataSet\` extends `DataSetSupport`, and adds a list of default +bodies. + +## Additional Properties on ListDataSet + + ++++++ + + + + + + + + + + + + + + + + + + + + + + +
|Property|Type|Default|Description|
|---|---|---|---|
|defaultBodies|List<Object>|empty LinkedList<Object>|Specifies the default message bodies. By default, the ListDataSet selects a constant payload from the list of defaultBodies using the CamelDataSetIndex. If you want to customize the payload, create a Camel Processor and configure the ListDataSet to use it by setting the outputTransformer property.|
|size|long|the size of the defaultBodies list|Specifies how many messages to send/consume. This value can be different from the size of the defaultBodies list. If the value is less than the size of the defaultBodies list, some of the list elements will not be used. If the value is greater than the size of the defaultBodies list, the payload for the exchange will be selected using the modulus of the CamelDataSetIndex and the size of the defaultBodies list (i.e., CamelDataSetIndex % defaultBodies.size()).|
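The modulus selection rule above can be illustrated in plain Java (an illustrative helper, not Camel's actual code; ListDataSet applies the same rule internally):

```java
import java.util.List;

public class DataSetIndexDemo {
    // When the requested size exceeds the number of default bodies, the
    // payload is selected by CamelDataSetIndex modulo the list size.
    static Object selectPayload(List<Object> defaultBodies, long camelDataSetIndex) {
        return defaultBodies.get((int) (camelDataSetIndex % defaultBodies.size()));
    }

    public static void main(String[] args) {
        List<Object> bodies = List.of("a", "b", "c");
        System.out.println(selectPayload(bodies, 0)); // a
        System.out.println(selectPayload(bodies, 4)); // b
        System.out.println(selectPayload(bodies, 5)); // c
    }
}
```

So a ListDataSet with 3 bodies and size 6 would cycle through the list twice.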

+ +# FileDataSet + +The `FileDataSet` extends `ListDataSet`, and adds support for loading +the bodies from a file. + +## Additional Properties on FileDataSet + + ++++++ + + + + + + + + + + + + + + + + + + + + + + +
|Property|Type|Default|Description|
|---|---|---|---|
|sourceFile|File|null|Specifies the source file for payloads.|
|delimiter|String|\z|Specifies the delimiter pattern used by a java.util.Scanner to split the file into multiple payloads.|
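The splitting behavior can be reproduced with a plain `java.util.Scanner` (a sketch of the mechanism, not FileDataSet's actual code): with the default `\z` delimiter, which matches the end of input, the whole file becomes a single payload, while a custom delimiter yields one payload per token.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;

public class FileSplitDemo {
    // Splits content into payloads using a Scanner delimiter pattern,
    // as described for the FileDataSet delimiter property above.
    static List<String> split(String content, String delimiterPattern) {
        List<String> payloads = new ArrayList<>();
        try (Scanner scanner = new Scanner(content).useDelimiter(delimiterPattern)) {
            while (scanner.hasNext()) {
                payloads.add(scanner.next());
            }
        }
        return payloads;
    }

    public static void main(String[] args) {
        String content = "first\nsecond\nthird";
        System.out.println(split(content, "\\z").size()); // 1: whole input is one payload
        System.out.println(split(content, "\n"));         // [first, second, third]
    }
}
```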

+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|log|To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging, then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|exchangeFormatter|Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter.||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|name|Name of DataSet to lookup in the registry||object| +|dataSetIndex|Controls the behaviour of the CamelDataSetIndex header. off (consumer) the header will not be set. strict (consumer) the header will be set. lenient (consumer) the header will be set. off (producer) the header value will not be verified, and will not be set if it is not present. strict (producer) the header value must be present and will be verified. lenient (producer) the header value will be verified if it is present, and will be set if it is not present.|lenient|string| +|initialDelay|Time period in millis to wait before starting sending messages.|1000|duration| +|minRate|Wait until the DataSet contains at least this number of messages|0|integer| +|preloadSize|Sets how many messages should be preloaded (sent) before the route completes its initialization|0|integer| +|produceDelay|Allows a delay to be specified which causes a delay when a message is sent by the consumer (to simulate slow processing)|3|duration| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|assertPeriod|Sets a grace period after which the mock endpoint will re-assert to ensure the preliminary assertion is still valid. This is used, for example, to assert that exactly a number of messages arrive. For example, if the expected count was set to 5, then the assertion is satisfied when five or more messages arrive. To ensure that exactly 5 messages arrive, then you would need to wait a little period to ensure no further message arrives. This is what you can use this method for. By default, this period is disabled.||duration| +|consumeDelay|Allows a delay to be specified which causes a delay when a message is consumed by the producer (to simulate slow processing)|0|duration| +|expectedCount|Specifies the expected number of message exchanges that should be received by this endpoint. Beware: If you want to expect that 0 messages, then take extra care, as 0 matches when the tests starts, so you need to set a assert period time to let the test run for a while to make sure there are still no messages arrived; for that use setAssertPeriod(long). 
An alternative is to use NotifyBuilder, and use the notifier to know when Camel is done routing some messages, before you call the assertIsSatisfied() method on the mocks. This allows you to not use a fixed assert period, to speedup testing times. If you want to assert that exactly nth message arrives to this mock endpoint, then see also the setAssertPeriod(long) method for further details.|-1|integer| +|failFast|Sets whether assertIsSatisfied() should fail fast at the first detected failed expectation while it may otherwise wait for all expected messages to arrive before performing expectations verifications. Is by default true. Set to false to use behavior as in Camel 2.x.|false|boolean| +|log|To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class.|false|boolean| +|reportGroup|A number that is used to turn on throughput logging based on groups of the size.||integer| +|resultMinimumWaitTime|Sets the minimum expected amount of time (in millis) the assertIsSatisfied() will wait on a latch until it is satisfied||duration| +|resultWaitTime|Sets the maximum amount of time (in millis) the assertIsSatisfied() will wait on a latch until it is satisfied||duration| +|retainFirst|Specifies to only retain the first nth number of received Exchanges. This is used when testing with big data, to reduce memory consumption by not storing copies of every Exchange this mock endpoint receives. Important: When using this limitation, then the getReceivedCounter() will still return the actual number of received Exchanges. For example if we have received 5000 Exchanges, and have configured to only retain the first 10 Exchanges, then the getReceivedCounter() will still return 5000 but there is only the first 10 Exchanges in the getExchanges() and getReceivedExchanges() methods. 
When using this method, some of the other expectation methods are not supported; for example, expectedBodiesReceived(Object...) sets an expectation on the first number of bodies received. You can configure both setRetainFirst(int) and setRetainLast(int) methods, to limit both the first and last received.|-1|integer|
|retainLast|Specifies to only retain the last n received Exchanges. This is used when testing with big data, to reduce memory consumption by not storing copies of every Exchange this mock endpoint receives. Important: when using this limitation, getReceivedCounter() will still return the actual number of received Exchanges. For example, if we have received 5000 Exchanges and have configured to only retain the last 20 Exchanges, then getReceivedCounter() will still return 5000, but only the last 20 Exchanges are in the getExchanges() and getReceivedExchanges() methods. When using this method, some of the other expectation methods are not supported; for example, expectedBodiesReceived(Object...) sets an expectation on the first number of bodies received. You can configure both setRetainFirst(int) and setRetainLast(int) methods, to limit both the first and last received.|-1|integer|
|sleepForEmptyTest|Allows a sleep to be specified, to wait and check that this endpoint really is empty when expectedMessageCount(int) is called with zero.||duration|
|copyOnExchange|Sets whether to make a deep copy of the incoming Exchange when received at this mock endpoint. Is by default true.|true|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy, you can use this to allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-debezium-db2.md b/camel-debezium-db2.md new file mode 100644 index 0000000000000000000000000000000000000000..a399e302549659807c960486e937645f59625a36 --- /dev/null +++ b/camel-debezium-db2.md @@ -0,0 +1,279 @@ +# Debezium-db2 + +**Since Camel 3.17** + +**Only consumer is supported** + +The Debezium db2 component is wrapper around +[Debezium](https://debezium.io/) using [Debezium +Engine](https://debezium.io/documentation/reference/1.8/operations/embedded.html), +which enables Change Data Capture from db2 database using Debezium +without the need for Kafka or Kafka Connect. + +**Note on handling failures:** per [Debezium Embedded +Engine](https://debezium.io/documentation/reference/1.9/development/engine.html#_handling_failures) +documentation, the engines are actively recording source offsets and +periodically flush these offsets to a persistent storage. Therefore, +when the application is restarted or crashed, the engine will resume +from the last recorded offset. This means that, at normal operation, +your downstream routes will receive each event exactly once. However, in +case of an application crash (not having a graceful shutdown), the +application will resume from the last recorded offset, which may result +in receiving duplicate events immediately after the restart. Therefore, +your downstream routes should be tolerant enough of such a case and +deduplicate events if needed. + +Maven users will need to add the following dependency to their `pom.xml` +for this component. 
+ + + org.apache.camel + camel-debezium-db2 + x.x.x + + + +# URI format + + debezium-db2:name[?options] + +For more information about configuration: +[https://debezium.io/documentation/reference/1.8/operations/embedded.html#engine-properties](https://debezium.io/documentation/reference/1.18/operations/embedded.html#engine-properties) +[https://debezium.io/documentation/reference/1.8/connectors/db2.html#connector-properties](https://debezium.io/documentation/reference/1.18/connectors/db2ql.html#connector-properties) + +# Message body + +The message body if is not `null` (in case of tombstones), it contains +the state of the row after the event occurred as `Struct` format or +`Map` format if you use the included Type Converter from `Struct` to +`Map` (please look below for more explanation). + +# Samples + +## Consuming events + +Here is a basic route that you can use to listen to Debezium events from +the db2 connector: + + from("debezium-db2:dbz-test-1?offsetStorageFileName=/usr/offset-file-1.dat&databaseHostname=localhost&databaseUser=debezium&databasePassword=dbz&databaseServerName=my-app-connector&databaseHistoryFileFilename=/usr/history-file-1.dat") + .log("Event received from Debezium : ${body}") + .log(" with this identifier ${headers.CamelDebeziumIdentifier}") + .log(" with these source metadata ${headers.CamelDebeziumSourceMetadata}") + .log(" the event occurred upon this operation '${headers.CamelDebeziumSourceOperation}'") + .log(" on this database '${headers.CamelDebeziumSourceMetadata[db]}' and this table '${headers.CamelDebeziumSourceMetadata[table]}'") + .log(" with the key ${headers.CamelDebeziumKey}") + .log(" the previous value is ${headers.CamelDebeziumBefore}") + +By default, the component will emit the events in the body and +`CamelDebeziumBefore` header as +[`Struct`](https://kafka.apache.org/22/javadoc/org/apache/kafka/connect/data/Struct.html) +data type, the reasoning behind this, is to perceive the schema +information in case is needed. 
However, the component also includes a
[Type Converter](#manual::type-converter.adoc) that converts from the
default output type of
[`Struct`](https://kafka.apache.org/22/javadoc/org/apache/kafka/connect/data/Struct.html)
to `Map`, in order to leverage Camel's rich [Data
Format](#manual::data-format.adoc) types, many of which work out of the
box with the `Map` data type. To use it, you can either add the `Map.class`
type when you access the message (e.g.,
`exchange.getIn().getBody(Map.class)`), or you can always convert the body
to `Map` from the route builder by adding
`.convertBodyTo(Map.class)` to your Camel Route DSL after the `from`
statement.

We mentioned the schema above, which can be used in case you need to
perform advanced data transformations for which the schema is needed.
If you choose not to convert your body to `Map`, you can obtain the
schema information as the
[`Schema`](https://kafka.apache.org/22/javadoc/org/apache/kafka/connect/data/Schema.html)
type from `Struct` like this:

    from("debezium-db2:[name]?[options]")
        .process(exchange -> {
            final Struct bodyValue = exchange.getIn().getBody(Struct.class);
            final Schema schemaValue = bodyValue.schema();

            log.info("Body value is : {}", bodyValue);
            log.info("With Schema : {}", schemaValue);
            log.info("And fields of : {}", schemaValue.fields());
            log.info("Field name has `{}` type", schemaValue.field("name").schema());
        });

This component is a thin wrapper around the Debezium Engine, as mentioned.
Therefore, before using this component in production, you need to
understand how Debezium works and how its configuration affects the
expected behavior. This is especially true in regard to [handling
failures](https://debezium.io/documentation/reference/1.8/operations/embedded.html#_handling_failures).
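As noted in the introduction, a crash can cause the engine to replay events since the last flushed offset, so downstream processing should be idempotent. A minimal in-memory deduplication guard might look like this (an illustrative sketch; deriving a unique key per event, e.g. from the `CamelDebeziumIdentifier` header, is an assumption, and a production setup would rather use a persistent or bounded store such as Camel's Idempotent Consumer EIP):

```java
import java.util.HashSet;
import java.util.Set;

public class DedupGuard {
    // Remembers identifiers of already-processed events so replays
    // (duplicates after a non-graceful restart) can be skipped.
    private final Set<String> seen = new HashSet<>();

    // Returns true the first time an identifier is seen, false on a replay.
    public boolean firstTime(String eventIdentifier) {
        return seen.add(eventIdentifier);
    }

    public static void main(String[] args) {
        DedupGuard guard = new DedupGuard();
        // hypothetical event key: server name + table + row id
        System.out.println(guard.firstTime("server1.DB2INST1.ORDERS:42")); // true
        System.out.println(guard.firstTime("server1.DB2INST1.ORDERS:42")); // false: duplicate, skip it
    }
}
```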
+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|additionalProperties|Additional properties for debezium components in case they can't be set directly on the camel configurations (e.g: setting Kafka Connect properties needed by Debezium engine, for example setting KafkaOffsetBackingStore), the properties have to be prefixed with additionalProperties.. E.g: additionalProperties.transactional.id=12345\&additionalProperties.schema.registry.url=http://localhost:8811/avro||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|configuration|Allow pre-configured Configurations to be set.||object| +|internalKeyConverter|The Converter class that should be used to serialize and deserialize key data for offsets. The default is JSON converter.|org.apache.kafka.connect.json.JsonConverter|string| +|internalValueConverter|The Converter class that should be used to serialize and deserialize value data for offsets. The default is JSON converter.|org.apache.kafka.connect.json.JsonConverter|string| +|offsetCommitPolicy|The name of the Java class of the commit policy. 
It defines when offsets commit has to be triggered based on the number of events processed and the time elapsed since the last commit. This class must implement the interface 'OffsetCommitPolicy'. The default is a periodic commit policy based upon time intervals.||string| +|offsetCommitTimeoutMs|Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. The default is 5 seconds.|5000|duration| +|offsetFlushIntervalMs|Interval at which to try committing offsets. The default is 1 minute.|60000|duration| +|offsetStorage|The name of the Java class that is responsible for persistence of connector offsets.|org.apache.kafka.connect.storage.FileOffsetBackingStore|string| +|offsetStorageFileName|Path to file where offsets are to be stored. Required when offset.storage is set to the FileOffsetBackingStore.||string| +|offsetStoragePartitions|The number of partitions used when creating the offset storage topic. Required when offset.storage is set to the 'KafkaOffsetBackingStore'.||integer| +|offsetStorageReplicationFactor|Replication factor used when creating the offset storage topic. Required when offset.storage is set to the KafkaOffsetBackingStore||integer| +|offsetStorageTopic|The name of the Kafka topic where offsets are to be stored. Required when offset.storage is set to the KafkaOffsetBackingStore.||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|cdcChangeTablesSchema|The name of the schema where CDC change tables are located; defaults to 'ASNCDC'|ASNCDC|string| +|cdcControlSchema|The name of the schema where CDC control structures are located; defaults to 'ASNCDC'|ASNCDC|string| +|columnExcludeList|Regular expressions matching columns to exclude from change events||string| +|columnIncludeList|Regular expressions matching columns to include in change events||string| +|columnPropagateSourceType|A comma-separated list of regular expressions matching fully-qualified names of columns that adds the columns original type and original length as parameters to the corresponding field schemas in the emitted change records.||string| +|converters|Optional list of custom converters that would be used instead of default ones. The converters are defined using '.type' config option and configured using options '.'||string| +|customMetricTags|The custom metric tags will accept key-value pairs to customize the MBean object name which should be appended the end of regular name, each key would represent a tag for the MBean object name, and the corresponding value would be the value of that tag the key is. 
For example: k1=v1,k2=v2||string|
|databaseDbname|The name of the database from which the connector should capture changes.||string|
|databaseHostname|Resolvable hostname or IP address of the database server.||string|
|databasePassword|Password of the database user to be used when connecting to the database.||string|
|databasePort|Port of the database server.|50000|integer|
|databaseUser|Name of the database user to be used when connecting to the database.||string|
|datatypePropagateSourceType|A comma-separated list of regular expressions matching the database-specific data type names that adds the data type's original type and original length as parameters to the corresponding field schemas in the emitted change records.||string|
|db2Platform|Informs the connector which Db2 implementation platform it is connected to. The default is 'LUW', which means Windows, UNIX, Linux. Using a value of 'Z' ensures that the Db2 for z/OS specific SQL statements are used.|LUW|string|
|decimalHandlingMode|Specify how DECIMAL and NUMERIC columns should be represented in change events, including: 'precise' (the default) uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect's 'org.apache.kafka.connect.data.Decimal' type; 'string' uses string to represent values; 'double' represents values using Java's 'double', which may not offer the precision but will be far easier to use in consumers.|precise|string|
|errorsMaxRetries|The maximum number of retries on connection errors before failing (-1 = no limit, 0 = disabled, > 0 = number of retries).|-1|integer|
|eventProcessingFailureHandlingMode|Specify how failures during processing of events (i.e.
when encountering a corrupted event) should be handled, including: 'fail' (the default) an exception indicating the problematic event and its position is raised, causing the connector to be stopped; 'warn' the problematic event and its position will be logged and the event will be skipped; 'ignore' the problematic event will be skipped.|fail|string|
|heartbeatIntervalMs|Length of an interval in milliseconds in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default.|0ms|duration|
|heartbeatTopicsPrefix|The prefix that is used to name heartbeat topics. Defaults to \_\_debezium-heartbeat.|\_\_debezium-heartbeat|string|
|includeSchemaChanges|Whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID. Each schema change will be recorded using a key that contains the database name, and whose value includes a logical description of the new schema and optionally the DDL statement(s). The default is 'true'. This is independent of how the connector internally records database schema history.|true|boolean|
|incrementalSnapshotChunkSize|The maximum size of a chunk (number of documents/rows) for incremental snapshotting.|1024|integer|
|incrementalSnapshotWatermarkingStrategy|Specify the strategy used for watermarking during an incremental snapshot: 'insert\_insert' both open and close signals are written into the signal data collection (default); 'insert\_delete' only the open signal is written to the signal data collection, and the close will delete the relative open signal.|INSERT\_INSERT|string|
|maxBatchSize|Maximum size of each batch of source records. Defaults to 2048.|2048|integer|
|maxQueueSize|Maximum size of the queue for change events read from the database log but not yet recorded or forwarded.
Defaults to 8192, and should always be larger than the maximum batch size.|8192|integer|
|maxQueueSizeInBytes|Maximum size of the queue in bytes for change events read from the database log but not yet recorded or forwarded. Defaults to 0, which means the feature is not enabled.|0|integer|
|messageKeyColumns|A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as the message key. Each expression must match the pattern ':', where the table names could be defined as (DB\_NAME.TABLE\_NAME) or (SCHEMA\_NAME.TABLE\_NAME), depending on the specific connector, and the key columns are a comma-separated list of columns representing the custom key. For any table without an explicit key configuration, the table's primary key column(s) will be used as the message key. Example: dbserver1.inventory.orderlines:orderId,orderLineId;dbserver1.inventory.orders:id||string|
|notificationEnabledChannels|List of notification channel names that are enabled.||string|
|notificationSinkTopicName|The name of the topic for the notifications. This is required in case 'sink' is in the list of enabled channels.||string|
|pollIntervalMs|Time to wait for new change events to appear after receiving no events, given in milliseconds. Defaults to 500 ms.|500ms|duration|
|postProcessors|Optional list of post processors. The processors are defined using '.type' config option and configured using options ''.||string|
|provideTransactionMetadata|Enables transaction metadata extraction together with event counting.|false|boolean|
|queryFetchSize|The maximum number of records that should be loaded into memory while streaming. A value of '0' uses the default JDBC fetch size. The default value is '10000'.|10000|integer|
|retriableRestartConnectorWaitMs|Time to wait before restarting the connector after a retriable exception occurs.
Defaults to 10000ms.|10s|duration|
+|schemaHistoryInternal|The name of the SchemaHistory class that should be used to store and recover database schema changes. The configuration properties for the history are prefixed with the 'schema.history.internal.' string.|io.debezium.storage.kafka.history.KafkaSchemaHistory|string|
+|schemaHistoryInternalFileFilename|The path to the file that will be used to record the database schema history.||string|
+|schemaHistoryInternalSkipUnparseableDdl|Controls the action Debezium will take when it encounters a DDL statement in the binlog that it cannot parse. By default, the connector will stop operating, but by changing the setting it can ignore the statements which it cannot parse. If skipping is enabled, then Debezium can miss metadata changes.|false|boolean|
+|schemaHistoryInternalStoreOnlyCapturedDatabasesDdl|Controls what DDL Debezium will store in the database schema history. By default (true) only DDL that manipulates a table from a captured schema/database will be stored. If set to false, then Debezium will store all incoming DDL statements.|false|boolean|
+|schemaHistoryInternalStoreOnlyCapturedTablesDdl|Controls what DDL Debezium will store in the database schema history. By default (false) Debezium will store all incoming DDL statements. If set to true, then only DDL that manipulates a captured table will be stored.|false|boolean|
+|schemaNameAdjustmentMode|Specify how schema names should be adjusted for compatibility with the message converter used by the connector, including: 'avro' replaces the characters that cannot be used in the Avro type name with underscore; 'avro\_unicode' replaces the underscore or characters that cannot be used in the Avro type name with the corresponding unicode, like \_uxxxx (note: \_ is an escape sequence, like backslash in Java); 'none' does not apply any adjustment (default).|none|string|
+|signalDataCollection|The name of the data collection that is used to send signals/commands to Debezium.
Signaling is disabled when not set.||string|
+|signalEnabledChannels|List of channel names that are enabled. The source channel is enabled by default.|source|string|
+|signalPollIntervalMs|Interval for looking for new signals in registered channels, given in milliseconds. Defaults to 5 seconds.|5s|duration|
+|skippedOperations|The comma-separated list of operations to skip during streaming, defined as: 'c' for inserts/create; 'u' for updates; 'd' for deletes; 't' for truncates; and 'none' to indicate nothing skipped. By default, only truncate operations will be skipped.|t|string|
+|snapshotDelayMs|A delay period before a snapshot will begin, given in milliseconds. Defaults to 0 ms.|0ms|duration|
+|snapshotFetchSize|The maximum number of records that should be loaded into memory while performing a snapshot.||integer|
+|snapshotIncludeCollectionList|This setting must be set to specify a list of tables/collections whose snapshot must be taken on creating or restarting the connector.||string|
+|snapshotLockTimeoutMs|The maximum number of milliseconds to wait for table locks at the beginning of a snapshot. If locks cannot be acquired in this time frame, the snapshot will be aborted. Defaults to 10 seconds.|10s|duration|
+|snapshotMode|The criteria for running a snapshot upon startup of the connector.
Options include: 'initial' (the default) to specify the connector should run a snapshot only when no offsets are available for the logical server name; 'schema\_only' to specify the connector should run a snapshot of the schema when no offsets are available for the logical server name.|initial|string|
+|snapshotModeConfigurationBasedSnapshotData|When 'snapshot.mode' is set to configuration\_based, this setting allows specifying whether the data should be snapshotted.|false|boolean|
+|snapshotModeConfigurationBasedSnapshotOnDataError|When 'snapshot.mode' is set to configuration\_based, this setting allows specifying whether the data should be snapshotted in case of error.|false|boolean|
+|snapshotModeConfigurationBasedSnapshotOnSchemaError|When 'snapshot.mode' is set to configuration\_based, this setting allows specifying whether the schema should be snapshotted in case of error.|false|boolean|
+|snapshotModeConfigurationBasedSnapshotSchema|When 'snapshot.mode' is set to configuration\_based, this setting allows specifying whether the schema should be snapshotted.|false|boolean|
+|snapshotModeConfigurationBasedStartStream|When 'snapshot.mode' is set to configuration\_based, this setting allows specifying whether the stream should start after the snapshot.|false|boolean|
+|snapshotModeCustomName|When 'snapshot.mode' is set to custom, this setting must be set to specify the name of the custom implementation, as provided in the 'name()' method. The implementation must implement the 'Snapshotter' interface and is called on each app boot to determine whether to do a snapshot.||string|
+|snapshotSelectStatementOverrides|This property contains a comma-separated list of fully-qualified tables (DB\_NAME.TABLE\_NAME) or (SCHEMA\_NAME.TABLE\_NAME), depending on the specific connector.
Select statements for the individual tables are specified in further configuration properties, one for each table, identified by the id 'snapshot.select.statement.overrides.DB\_NAME.TABLE\_NAME' or 'snapshot.select.statement.overrides.SCHEMA\_NAME.TABLE\_NAME', respectively. The value of those properties is the select statement to use when retrieving data from the specific table during snapshotting. A possible use case for large append-only tables is setting a specific point at which to start (resume) snapshotting, in case a previous snapshotting was interrupted.||string|
+|snapshotTablesOrderByRowCount|Controls the order in which tables are processed in the initial snapshot. A descending value will order the tables by row count descending. An ascending value will order the tables by row count ascending. A value of disabled (the default) will disable ordering by row count.|disabled|string|
+|sourceinfoStructMaker|The name of the SourceInfoStructMaker class that returns the SourceInfo schema and struct.|io.debezium.connector.db2.Db2SourceInfoStructMaker|string|
+|streamingDelayMs|A delay period after the snapshot is completed and before the streaming begins, given in milliseconds.
Defaults to 0 ms.|0ms|duration|
+|tableExcludeList|A comma-separated list of regular expressions that match the fully-qualified names of tables to be excluded from monitoring.||string|
+|tableIgnoreBuiltin|Flag specifying whether built-in tables should be ignored.|true|boolean|
+|tableIncludeList|The tables for which changes are to be captured.||string|
+|timePrecisionMode|Time, date, and timestamps can be represented with different kinds of precision, including: 'adaptive' (the default) bases the precision of time, date, and timestamp values on the database column's precision; 'adaptive\_time\_microseconds' is like 'adaptive' mode, but TIME fields always use microseconds precision; 'connect' always represents time, date, and timestamp values using Kafka Connect's built-in representations for Time, Date, and Timestamp, which use millisecond precision regardless of the database columns' precision.|adaptive|string|
+|tombstonesOnDelete|Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a delete event (false). Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events pertaining to the given key once the source record is deleted.|false|boolean|
+|topicNamingStrategy|The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, heartbeat events, etc.|io.debezium.schema.SchemaTopicNamingStrategy|string|
+|topicPrefix|Topic prefix that identifies and provides a namespace for the particular database server/cluster that is capturing changes. The topic prefix should be unique across all other connectors, since it is used as a prefix for all Kafka topic names that receive events emitted by this connector.
Only alphanumeric characters, hyphens, dots and underscores must be used.||string|
+|transactionMetadataFactory|Class used to create the transaction context and transaction struct/schemas.|io.debezium.pipeline.txmetadata.DefaultTransactionMetadataFactory|string|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|name|Unique name for the connector. Attempting to register again with the same name will fail.||string|
+|additionalProperties|Additional properties for debezium components in case they can't be set directly on the camel configurations (e.g: setting Kafka Connect properties needed by Debezium engine, for example setting KafkaOffsetBackingStore), the properties have to be prefixed with additionalProperties.. E.g: additionalProperties.transactional.id=12345\&additionalProperties.schema.registry.url=http://localhost:8811/avro||object|
+|internalKeyConverter|The Converter class that should be used to serialize and deserialize key data for offsets. The default is the JSON converter.|org.apache.kafka.connect.json.JsonConverter|string|
+|internalValueConverter|The Converter class that should be used to serialize and deserialize value data for offsets. The default is the JSON converter.|org.apache.kafka.connect.json.JsonConverter|string|
+|offsetCommitPolicy|The name of the Java class of the commit policy. It defines when an offsets commit has to be triggered based on the number of events processed and the time elapsed since the last commit. This class must implement the interface 'OffsetCommitPolicy'. The default is a periodic commit policy based upon time intervals.||string|
+|offsetCommitTimeoutMs|Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. The default is 5 seconds.|5000|duration|
+|offsetFlushIntervalMs|Interval at which to try committing offsets.
The default is 1 minute.|60000|duration|
+|offsetStorage|The name of the Java class that is responsible for persistence of connector offsets.|org.apache.kafka.connect.storage.FileOffsetBackingStore|string|
+|offsetStorageFileName|Path to the file where offsets are to be stored. Required when offset.storage is set to the FileOffsetBackingStore.||string|
+|offsetStoragePartitions|The number of partitions used when creating the offset storage topic. Required when offset.storage is set to the 'KafkaOffsetBackingStore'.||integer|
+|offsetStorageReplicationFactor|Replication factor used when creating the offset storage topic. Required when offset.storage is set to the KafkaOffsetBackingStore.||integer|
+|offsetStorageTopic|The name of the Kafka topic where offsets are to be stored. Required when offset.storage is set to the KafkaOffsetBackingStore.||string|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default, the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions; these will be logged at WARN or ERROR level and ignored.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use.
By default, the consumer will deal with exceptions; these will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|cdcChangeTablesSchema|The name of the schema where CDC change tables are located; defaults to 'ASNCDC'.|ASNCDC|string|
+|cdcControlSchema|The name of the schema where CDC control structures are located; defaults to 'ASNCDC'.|ASNCDC|string|
+|columnExcludeList|Regular expressions matching columns to exclude from change events.||string|
+|columnIncludeList|Regular expressions matching columns to include in change events.||string|
+|columnPropagateSourceType|A comma-separated list of regular expressions matching fully-qualified names of columns that adds the column's original type and original length as parameters to the corresponding field schemas in the emitted change records.||string|
+|converters|Optional list of custom converters that would be used instead of default ones. The converters are defined using '.type' config option and configured using options '.'||string|
+|customMetricTags|The custom metric tags will accept key-value pairs to customize the MBean object name, which should be appended to the end of the regular name; each key represents a tag for the MBean object name, and the corresponding value is the value of that tag.
For example: k1=v1,k2=v2||string|
+|databaseDbname|The name of the database from which the connector should capture changes.||string|
+|databaseHostname|Resolvable hostname or IP address of the database server.||string|
+|databasePassword|Password of the database user to be used when connecting to the database.||string|
+|databasePort|Port of the database server.|50000|integer|
+|databaseUser|Name of the database user to be used when connecting to the database.||string|
+|datatypePropagateSourceType|A comma-separated list of regular expressions matching the database-specific data type names that adds the data type's original type and original length as parameters to the corresponding field schemas in the emitted change records.||string|
+|db2Platform|Informs the connector which Db2 implementation platform it is connected to. The default is 'LUW', which means Windows, UNIX, Linux. Using a value of 'Z' ensures that the Db2 for z/OS specific SQL statements are used.|LUW|string|
+|decimalHandlingMode|Specify how DECIMAL and NUMERIC columns should be represented in change events, including: 'precise' (the default) uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect's 'org.apache.kafka.connect.data.Decimal' type; 'string' uses string to represent values; 'double' represents values using Java's 'double', which may not offer the same precision but will be far easier to use in consumers.|precise|string|
+|errorsMaxRetries|The maximum number of retries on connection errors before failing (-1 = no limit, 0 = disabled, > 0 = number of retries).|-1|integer|
+|eventProcessingFailureHandlingMode|Specify how failures during processing of events (i.e.
when encountering a corrupted event) should be handled, including: 'fail' (the default): an exception indicating the problematic event and its position is raised, causing the connector to be stopped; 'warn': the problematic event and its position will be logged and the event will be skipped; 'ignore': the problematic event will be skipped.|fail|string|
+|heartbeatIntervalMs|Length of an interval in milliseconds in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default.|0ms|duration|
+|heartbeatTopicsPrefix|The prefix that is used to name heartbeat topics. Defaults to \_\_debezium-heartbeat.|\_\_debezium-heartbeat|string|
+|includeSchemaChanges|Whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID. Each schema change will be recorded using a key that contains the database name and whose value includes a logical description of the new schema and optionally the DDL statement(s). The default is 'true'. This is independent of how the connector internally records database schema history.|true|boolean|
+|incrementalSnapshotChunkSize|The maximum size of a chunk (number of documents/rows) for incremental snapshotting.|1024|integer|
+|incrementalSnapshotWatermarkingStrategy|Specify the strategy used for watermarking during an incremental snapshot: 'insert\_insert' both open and close signals are written into the signal data collection (default); 'insert\_delete' only the open signal is written to the signal data collection, and the close will delete the corresponding open signal.|INSERT\_INSERT|string|
+|maxBatchSize|Maximum size of each batch of source records. Defaults to 2048.|2048|integer|
+|maxQueueSize|Maximum size of the queue for change events read from the database log but not yet recorded or forwarded.
Defaults to 8192, and should always be larger than the maximum batch size.|8192|integer|
+|maxQueueSizeInBytes|Maximum size of the queue in bytes for change events read from the database log but not yet recorded or forwarded. Defaults to 0, meaning the feature is not enabled.|0|integer|
+|messageKeyColumns|A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as the message key. Each expression must match the pattern ':', where the table names could be defined as (DB\_NAME.TABLE\_NAME) or (SCHEMA\_NAME.TABLE\_NAME), depending on the specific connector, and the key columns are a comma-separated list of columns representing the custom key. For any table without an explicit key configuration, the table's primary key column(s) will be used as the message key. Example: dbserver1.inventory.orderlines:orderId,orderLineId;dbserver1.inventory.orders:id||string|
+|notificationEnabledChannels|List of notification channel names that are enabled.||string|
+|notificationSinkTopicName|The name of the topic for the notifications. This is required in case 'sink' is in the list of enabled channels.||string|
+|pollIntervalMs|Time to wait for new change events to appear after receiving no events, given in milliseconds. Defaults to 500 ms.|500ms|duration|
+|postProcessors|Optional list of post processors. The processors are defined using '.type' config option and configured using options ''||string|
+|provideTransactionMetadata|Enables transaction metadata extraction together with event counting.|false|boolean|
+|queryFetchSize|The maximum number of records that should be loaded into memory while streaming. A value of '0' uses the default JDBC fetch size. The default value is '10000'.|10000|integer|
+|retriableRestartConnectorWaitMs|Time to wait before restarting the connector after a retriable exception occurs.
Defaults to 10000ms.|10s|duration|
+|schemaHistoryInternal|The name of the SchemaHistory class that should be used to store and recover database schema changes. The configuration properties for the history are prefixed with the 'schema.history.internal.' string.|io.debezium.storage.kafka.history.KafkaSchemaHistory|string|
+|schemaHistoryInternalFileFilename|The path to the file that will be used to record the database schema history.||string|
+|schemaHistoryInternalSkipUnparseableDdl|Controls the action Debezium will take when it encounters a DDL statement in the binlog that it cannot parse. By default, the connector will stop operating, but by changing the setting it can ignore the statements which it cannot parse. If skipping is enabled, then Debezium can miss metadata changes.|false|boolean|
+|schemaHistoryInternalStoreOnlyCapturedDatabasesDdl|Controls what DDL Debezium will store in the database schema history. By default (true) only DDL that manipulates a table from a captured schema/database will be stored. If set to false, then Debezium will store all incoming DDL statements.|false|boolean|
+|schemaHistoryInternalStoreOnlyCapturedTablesDdl|Controls what DDL Debezium will store in the database schema history. By default (false) Debezium will store all incoming DDL statements. If set to true, then only DDL that manipulates a captured table will be stored.|false|boolean|
+|schemaNameAdjustmentMode|Specify how schema names should be adjusted for compatibility with the message converter used by the connector, including: 'avro' replaces the characters that cannot be used in the Avro type name with underscore; 'avro\_unicode' replaces the underscore or characters that cannot be used in the Avro type name with the corresponding unicode, like \_uxxxx (note: \_ is an escape sequence, like backslash in Java); 'none' does not apply any adjustment (default).|none|string|
+|signalDataCollection|The name of the data collection that is used to send signals/commands to Debezium.
Signaling is disabled when not set.||string|
+|signalEnabledChannels|List of channel names that are enabled. The source channel is enabled by default.|source|string|
+|signalPollIntervalMs|Interval for looking for new signals in registered channels, given in milliseconds. Defaults to 5 seconds.|5s|duration|
+|skippedOperations|The comma-separated list of operations to skip during streaming, defined as: 'c' for inserts/create; 'u' for updates; 'd' for deletes; 't' for truncates; and 'none' to indicate nothing skipped. By default, only truncate operations will be skipped.|t|string|
+|snapshotDelayMs|A delay period before a snapshot will begin, given in milliseconds. Defaults to 0 ms.|0ms|duration|
+|snapshotFetchSize|The maximum number of records that should be loaded into memory while performing a snapshot.||integer|
+|snapshotIncludeCollectionList|This setting must be set to specify a list of tables/collections whose snapshot must be taken on creating or restarting the connector.||string|
+|snapshotLockTimeoutMs|The maximum number of milliseconds to wait for table locks at the beginning of a snapshot. If locks cannot be acquired in this time frame, the snapshot will be aborted. Defaults to 10 seconds.|10s|duration|
+|snapshotMode|The criteria for running a snapshot upon startup of the connector.
Options include: 'initial' (the default) to specify the connector should run a snapshot only when no offsets are available for the logical server name; 'schema\_only' to specify the connector should run a snapshot of the schema when no offsets are available for the logical server name.|initial|string|
+|snapshotModeConfigurationBasedSnapshotData|When 'snapshot.mode' is set to configuration\_based, this setting allows specifying whether the data should be snapshotted.|false|boolean|
+|snapshotModeConfigurationBasedSnapshotOnDataError|When 'snapshot.mode' is set to configuration\_based, this setting allows specifying whether the data should be snapshotted in case of error.|false|boolean|
+|snapshotModeConfigurationBasedSnapshotOnSchemaError|When 'snapshot.mode' is set to configuration\_based, this setting allows specifying whether the schema should be snapshotted in case of error.|false|boolean|
+|snapshotModeConfigurationBasedSnapshotSchema|When 'snapshot.mode' is set to configuration\_based, this setting allows specifying whether the schema should be snapshotted.|false|boolean|
+|snapshotModeConfigurationBasedStartStream|When 'snapshot.mode' is set to configuration\_based, this setting allows specifying whether the stream should start after the snapshot.|false|boolean|
+|snapshotModeCustomName|When 'snapshot.mode' is set to custom, this setting must be set to specify the name of the custom implementation, as provided in the 'name()' method. The implementation must implement the 'Snapshotter' interface and is called on each app boot to determine whether to do a snapshot.||string|
+|snapshotSelectStatementOverrides|This property contains a comma-separated list of fully-qualified tables (DB\_NAME.TABLE\_NAME) or (SCHEMA\_NAME.TABLE\_NAME), depending on the specific connector.
Select statements for the individual tables are specified in further configuration properties, one for each table, identified by the id 'snapshot.select.statement.overrides.DB\_NAME.TABLE\_NAME' or 'snapshot.select.statement.overrides.SCHEMA\_NAME.TABLE\_NAME', respectively. The value of those properties is the select statement to use when retrieving data from the specific table during snapshotting. A possible use case for large append-only tables is setting a specific point at which to start (resume) snapshotting, in case a previous snapshotting was interrupted.||string|
+|snapshotTablesOrderByRowCount|Controls the order in which tables are processed in the initial snapshot. A descending value will order the tables by row count descending. An ascending value will order the tables by row count ascending. A value of disabled (the default) will disable ordering by row count.|disabled|string|
+|sourceinfoStructMaker|The name of the SourceInfoStructMaker class that returns the SourceInfo schema and struct.|io.debezium.connector.db2.Db2SourceInfoStructMaker|string|
+|streamingDelayMs|A delay period after the snapshot is completed and before the streaming begins, given in milliseconds.
Defaults to 0 ms.|0ms|duration|
+|tableExcludeList|A comma-separated list of regular expressions that match the fully-qualified names of tables to be excluded from monitoring.||string|
+|tableIgnoreBuiltin|Flag specifying whether built-in tables should be ignored.|true|boolean|
+|tableIncludeList|The tables for which changes are to be captured.||string|
+|timePrecisionMode|Time, date, and timestamps can be represented with different kinds of precision, including: 'adaptive' (the default) bases the precision of time, date, and timestamp values on the database column's precision; 'adaptive\_time\_microseconds' is like 'adaptive' mode, but TIME fields always use microseconds precision; 'connect' always represents time, date, and timestamp values using Kafka Connect's built-in representations for Time, Date, and Timestamp, which use millisecond precision regardless of the database columns' precision.|adaptive|string|
+|tombstonesOnDelete|Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a delete event (false). Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events pertaining to the given key once the source record is deleted.|false|boolean|
+|topicNamingStrategy|The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, heartbeat events, etc.|io.debezium.schema.SchemaTopicNamingStrategy|string|
+|topicPrefix|Topic prefix that identifies and provides a namespace for the particular database server/cluster that is capturing changes. The topic prefix should be unique across all other connectors, since it is used as a prefix for all Kafka topic names that receive events emitted by this connector.
Only alphanumeric characters, hyphens, dots and underscores must be used.||string|
+|transactionMetadataFactory|Class used to create the transaction context and transaction struct/schemas.|io.debezium.pipeline.txmetadata.DefaultTransactionMetadataFactory|string|
diff --git a/camel-debezium-mongodb.md b/camel-debezium-mongodb.md
new file mode 100644
index 0000000000000000000000000000000000000000..a54b843bfd816656d2cb8a95d540dddc47e97d4e
--- /dev/null
+++ b/camel-debezium-mongodb.md
@@ -0,0 +1,254 @@
+# Debezium-mongodb
+
+**Since Camel 3.0**
+
+**Only consumer is supported**
+
+The Debezium MongoDB component is a wrapper around
+[Debezium](https://debezium.io/) using the [Debezium
+Engine](https://debezium.io/documentation/reference/1.9/development/engine.html),
+which enables Change Data Capture from a MongoDB database using Debezium
+without the need for Kafka or Kafka Connect.
+
+The Debezium MongoDB connector uses MongoDB’s oplog to capture the
+changes. The connector works only with MongoDB replica sets or with
+sharded clusters, where each shard is a separate replica set. Therefore,
+you will need to have your MongoDB instance running either in replica
+set mode or sharded cluster mode.
+
+**Note on handling failures:** per the [Debezium Embedded
+Engine](https://debezium.io/documentation/reference/1.9/development/engine.html#_handling_failures)
+documentation, the engine actively records source offsets and
+periodically flushes these offsets to persistent storage. Therefore,
+when the application is restarted, the engine will resume
+from the last recorded offset. This means that, in normal operation,
+your downstream routes will receive each event exactly once. However, in
+case of an application crash (without a graceful shutdown), the
+application will resume from the last recorded offset, which may result
+in receiving duplicate events immediately after the restart.
Therefore,
+your downstream routes should be tolerant enough of such a case and
+deduplicate events if needed.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-debezium-mongodb</artifactId>
+        <version>x.x.x</version>
+        <!-- use the same version as your Camel core version -->
+    </dependency>
+
+# URI format
+
+    debezium-mongodb:name[?options]
+
+For more information about configuration:
+[https://debezium.io/documentation/reference/0.10/operations/embedded.html#engine-properties](https://debezium.io/documentation/reference/0.10/operations/embedded.html#engine-properties)
+[https://debezium.io/documentation/reference/0.10/connectors/mongodb.html#connector-properties](https://debezium.io/documentation/reference/0.10/connectors/mongodb.html#connector-properties)
+
+**Note**: Debezium MongoDB uses MongoDB’s oplog to populate the CDC
+events; the update events in MongoDB’s oplog don’t have the before or
+after states of the changed document, so there’s no way for the Debezium
+connector to provide this information. Therefore, the header key
+`CamelDebeziumBefore` is not available in this component.
+
+# Message body
+
+If the message body is not `null` (it is `null` in the case of
+tombstones), it contains the state of the row after the event occurred
+as a JSON-formatted `String`, which you can unmarshal using the Camel
+JSON Data Format.
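As a minimal plain-Java sketch of that body contract (the class and method names here are illustrative, not part of the component's API): a `null` body marks a tombstone and carries no row state, while any other body is JSON text that is safe to unmarshal.

```java
// Hypothetical helper illustrating the body contract described above:
// a null body is a tombstone (no row state), anything else is the
// post-event row state as JSON text, suitable for .unmarshal().json().
public class BodyContract {
    static String classify(String body) {
        if (body == null) {
            return "tombstone";   // nothing to unmarshal
        }
        return "json-state";      // safe to hand to a JSON data format
    }
}
```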
+
+# Samples
+
+## Consuming events
+
+Here is a basic route that you can use to listen to Debezium events from
+the MongoDB connector:
+
+    from("debezium-mongodb:dbz-test-1?offsetStorageFileName=/usr/offset-file-1.dat&mongodbHosts=rs0/localhost:27017&mongodbUser=debezium&mongodbPassword=dbz&mongodbName=dbserver1&databaseHistoryFileFilename=/usr/history-file-1.dat")
+        .log("Event received from Debezium : ${body}")
+        .log("    with this identifier ${headers.CamelDebeziumIdentifier}")
+        .log("    with these source metadata ${headers.CamelDebeziumSourceMetadata}")
+        .log("    the event occurred upon this operation '${headers.CamelDebeziumSourceOperation}'")
+        .log("    on this database '${headers.CamelDebeziumSourceMetadata[db]}' and this table '${headers.CamelDebeziumSourceMetadata[table]}'")
+        .log("    with the key ${headers.CamelDebeziumKey}")
+        .choice()
+            .when(header(DebeziumConstants.HEADER_OPERATION).in("c", "u", "r"))
+                .unmarshal().json()
+                .log("Event received from Debezium : ${body}")
+            .end()
+        .end();
+
+By default, the component will emit the events as a JSON `String` in the
+message body in the case of `u`, `c` or `r` operations. This can easily be
+unmarshalled using the Camel JSON Data Format, e.g., `.unmarshal().json()`
+as in the above example. In the case of operation `d`, the body will be
+`null`.
+
+This component is a thin wrapper around the Debezium Engine, as mentioned.
+Therefore, before using this component in production, you need to
+understand how Debezium works and how its configuration affects the
+expected behavior. This is especially true in regard to [handling
+failures](https://debezium.io/documentation/reference/1.8/operations/embedded.html#_handling_failures).
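Since replays after a crash give at-least-once delivery, a downstream route can deduplicate on a stable event identifier (for example the `CamelDebeziumIdentifier` header). Camel's Idempotent Consumer EIP provides this out of the box; the following plain-Java sketch shows the underlying idea only (the class name and capacity are illustrative, not part of the component):

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Illustrative sketch of a bounded "already seen" set, the core idea
// behind an in-memory idempotent repository: remember the last N event
// identifiers and report whether an identifier was already processed.
public class SeenEvents {
    private final Set<String> seen;

    public SeenEvents(final int capacity) {
        // LinkedHashMap with removeEldestEntry gives an LRU-style bound,
        // so memory stays constant no matter how many events flow through.
        this.seen = Collections.newSetFromMap(new LinkedHashMap<String, Boolean>() {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                return size() > capacity;
            }
        });
    }

    /** Returns true the first time an identifier is seen, false for replays. */
    public boolean firstTime(String eventId) {
        return seen.add(eventId);
    }
}
```

In a real route you would typically reach for Camel's `idempotentConsumer(...)` with a persistent repository instead, since an in-memory set is lost on restart, which is exactly when duplicates occur.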
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|additionalProperties|Additional properties for debezium components in case they can't be set directly on the camel configurations (e.g: setting Kafka Connect properties needed by Debezium engine, for example setting KafkaOffsetBackingStore), the properties have to be prefixed with additionalProperties.. E.g: additionalProperties.transactional.id=12345\&additionalProperties.schema.registry.url=http://localhost:8811/avro||object|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default, the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions; these will be logged at WARN or ERROR level and ignored.|false|boolean|
+|configuration|Allow pre-configured Configurations to be set.||object|
+|internalKeyConverter|The Converter class that should be used to serialize and deserialize key data for offsets. The default is the JSON converter.|org.apache.kafka.connect.json.JsonConverter|string|
+|internalValueConverter|The Converter class that should be used to serialize and deserialize value data for offsets. The default is the JSON converter.|org.apache.kafka.connect.json.JsonConverter|string|
+|offsetCommitPolicy|The name of the Java class of the commit policy.
It defines when offsets commit has to be triggered based on the number of events processed and the time elapsed since the last commit. This class must implement the interface 'OffsetCommitPolicy'. The default is a periodic commit policy based upon time intervals.||string| +|offsetCommitTimeoutMs|Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. The default is 5 seconds.|5000|duration| +|offsetFlushIntervalMs|Interval at which to try committing offsets. The default is 1 minute.|60000|duration| +|offsetStorage|The name of the Java class that is responsible for persistence of connector offsets.|org.apache.kafka.connect.storage.FileOffsetBackingStore|string| +|offsetStorageFileName|Path to file where offsets are to be stored. Required when offset.storage is set to the FileOffsetBackingStore.||string| +|offsetStoragePartitions|The number of partitions used when creating the offset storage topic. Required when offset.storage is set to the 'KafkaOffsetBackingStore'.||integer| +|offsetStorageReplicationFactor|Replication factor used when creating the offset storage topic. Required when offset.storage is set to the KafkaOffsetBackingStore||integer| +|offsetStorageTopic|The name of the Kafka topic where offsets are to be stored. Required when offset.storage is set to the KafkaOffsetBackingStore.||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|captureMode|The method used to capture changes from MongoDB server. 
Options include: 'change\_streams' to capture changes via MongoDB Change Streams, update events do not contain full documents; 'change\_streams\_update\_full' (the default) to capture changes via MongoDB Change Streams, update events contain full documents|change\_streams\_update\_full|string| +|collectionExcludeList|A comma-separated list of regular expressions or literals that match the collection names for which changes are to be excluded||string| +|collectionIncludeList|A comma-separated list of regular expressions or literals that match the collection names for which changes are to be captured||string| +|converters|Optional list of custom converters that would be used instead of default ones. The converters are defined using '.type' config option and configured using options '.'||string| +|cursorMaxAwaitTimeMs|The maximum processing time in milliseconds to wait for the oplog cursor to process a single poll request||duration| +|customMetricTags|The custom metric tags will accept key-value pairs to customize the MBean object name which should be appended the end of regular name, each key would represent a tag for the MBean object name, and the corresponding value would be the value of that tag the key is. For example: k1=v1,k2=v2||string| +|databaseExcludeList|A comma-separated list of regular expressions or literals that match the database names for which changes are to be excluded||string| +|databaseIncludeList|A comma-separated list of regular expressions or literals that match the database names for which changes are to be captured||string| +|errorsMaxRetries|The maximum number of retries on connection errors before failing (-1 = no limit, 0 = disabled, 0 = num of retries).|-1|integer| +|eventProcessingFailureHandlingMode|Specify how failures during processing of events (i.e. 
when encountering a corrupted event) should be handled, including: 'fail' (the default) an exception indicating the problematic event and its position is raised, causing the connector to be stopped; 'warn' the problematic event and its position will be logged and the event will be skipped; 'ignore' the problematic event will be skipped.|fail|string|
+|fieldExcludeList|A comma-separated list of the fully-qualified names of fields that should be excluded from change event message values||string|
+|fieldRenames|A comma-separated list of the fully-qualified replacements of fields that should be used to rename fields in change event message values. Fully-qualified replacements for fields are of the form databaseName.collectionName.fieldName.nestedFieldName:newNestedFieldName, where databaseName and collectionName may contain the wildcard (\*) which matches any characters, the colon character (:) is used to determine rename mapping of field.||string|
+|heartbeatIntervalMs|Length of an interval in milliseconds in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default.|0ms|duration|
+|heartbeatTopicsPrefix|The prefix that is used to name heartbeat topics. Defaults to \_\_debezium-heartbeat.|\_\_debezium-heartbeat|string|
+|incrementalSnapshotWatermarkingStrategy|Specify the strategy used for watermarking during an incremental snapshot: 'insert\_insert' both open and close signal is written into signal data collection (default); 'insert\_delete' only open signal is written on signal data collection, the close will delete the relative open signal;|INSERT\_INSERT|string|
+|maxBatchSize|Maximum size of each batch of source records. Defaults to 2048.|2048|integer|
+|maxQueueSize|Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. 
Defaults to 8192, and should always be larger than the maximum batch size.|8192|integer|
+|maxQueueSizeInBytes|Maximum size of the queue in bytes for change events read from the database log but not yet recorded or forwarded. Defaults to 0, which means the feature is not enabled.|0|integer|
+|mongodbAuthsource|Database containing user credentials.|admin|string|
+|mongodbConnectionString|Database connection string.||string|
+|mongodbConnectTimeoutMs|The connection timeout, given in milliseconds. Defaults to 10 seconds (10,000 ms).|10s|duration|
+|mongodbHeartbeatFrequencyMs|The frequency that the cluster monitor attempts to reach each server. Defaults to 10 seconds (10,000 ms).|10s|duration|
+|mongodbPassword|Password to be used when connecting to MongoDB, if necessary.||string|
+|mongodbPollIntervalMs|Interval for looking for new, removed, or changed replica sets, given in milliseconds. Defaults to 30 seconds (30,000 ms).|30s|duration|
+|mongodbServerSelectionTimeoutMs|The server selection timeout, given in milliseconds. Defaults to 30 seconds (30,000 ms).|30s|duration|
+|mongodbSocketTimeoutMs|The socket timeout, given in milliseconds. Defaults to 0 ms.|0ms|duration|
+|mongodbSslEnabled|Should the connector use SSL to connect to MongoDB instances.|false|boolean|
+|mongodbSslInvalidHostnameAllowed|Whether invalid host names are allowed when using SSL. If true, the connection will not prevent man-in-the-middle attacks.|false|boolean|
+|mongodbUser|Database user for connecting to MongoDB, if necessary.||string|
+|notificationEnabledChannels|List of notification channel names that are enabled.||string|
+|notificationSinkTopicName|The name of the topic for the notifications. This is required in case 'sink' is in the list of enabled channels.||string|
+|pollIntervalMs|Time to wait for new change events to appear after receiving no events, given in milliseconds. Defaults to 500 ms.|500ms|duration|
+|postProcessors|Optional list of post processors. 
The processors are defined using '.type' config option and configured using options ''||string| +|provideTransactionMetadata|Enables transaction metadata extraction together with event counting|false|boolean| +|queryFetchSize|The maximum number of records that should be loaded into memory while streaming. A value of '0' uses the default JDBC fetch size.|0|integer| +|retriableRestartConnectorWaitMs|Time to wait before restarting connector after retriable exception occurs. Defaults to 10000ms.|10s|duration| +|schemaHistoryInternalFileFilename|The path to the file that will be used to record the database schema history||string| +|schemaNameAdjustmentMode|Specify how schema names should be adjusted for compatibility with the message converter used by the connector, including: 'avro' replaces the characters that cannot be used in the Avro type name with underscore; 'avro\_unicode' replaces the underscore or characters that cannot be used in the Avro type name with corresponding unicode like \_uxxxx. Note: \_ is an escape sequence like backslash in Java;'none' does not apply any adjustment (default)|none|string| +|signalDataCollection|The name of the data collection that is used to send signals/commands to Debezium. Signaling is disabled when not set.||string| +|signalEnabledChannels|List of channels names that are enabled. Source channel is enabled by default|source|string| +|signalPollIntervalMs|Interval for looking for new signals in registered channels, given in milliseconds. Defaults to 5 seconds.|5s|duration| +|skippedOperations|The comma-separated list of operations to skip during streaming, defined as: 'c' for inserts/create; 'u' for updates; 'd' for deletes, 't' for truncates, and 'none' to indicate nothing skipped. By default, only truncate operations will be skipped.|t|string| +|snapshotCollectionFilterOverrides|This property contains a comma-separated list of ., for which the initial snapshot may be a subset of data present in the data source. 
The subset would be defined by mongodb filter query specified as value for property snapshot.collection.filter.override..||string| +|snapshotDelayMs|A delay period before a snapshot will begin, given in milliseconds. Defaults to 0 ms.|0ms|duration| +|snapshotFetchSize|The maximum number of records that should be loaded into memory while performing a snapshot.||integer| +|snapshotIncludeCollectionList|This setting must be set to specify a list of tables/collections whose snapshot must be taken on creating or restarting the connector.||string| +|snapshotMaxThreads|The maximum number of threads used to perform the snapshot. Defaults to 1.|1|integer| +|snapshotMode|The criteria for running a snapshot upon startup of the connector. Select one of the following snapshot options: 'initial' (default): If the connector does not detect any offsets for the logical server name, it runs a snapshot that captures the current full state of the configured tables. After the snapshot completes, the connector begins to stream changes from the oplog. 'never': The connector does not run a snapshot. 
Upon first startup, the connector immediately begins reading from the beginning of the oplog.|initial|string|
+|snapshotModeConfigurationBasedSnapshotData|When 'snapshot.mode' is set to configuration\_based, this setting specifies whether the data should be snapshotted.|false|boolean|
+|snapshotModeConfigurationBasedSnapshotOnDataError|When 'snapshot.mode' is set to configuration\_based, this setting specifies whether the data should be snapshotted in case of error.|false|boolean|
+|snapshotModeConfigurationBasedSnapshotOnSchemaError|When 'snapshot.mode' is set to configuration\_based, this setting specifies whether the schema should be snapshotted in case of error.|false|boolean|
+|snapshotModeConfigurationBasedSnapshotSchema|When 'snapshot.mode' is set to configuration\_based, this setting specifies whether the schema should be snapshotted.|false|boolean|
+|snapshotModeConfigurationBasedStartStream|When 'snapshot.mode' is set to configuration\_based, this setting specifies whether the stream should start after the snapshot.|false|boolean|
+|snapshotModeCustomName|When 'snapshot.mode' is set to custom, this setting must be set to the name of the custom implementation provided in the 'name()' method. The implementation must implement the 'Snapshotter' interface and is called on each app boot to determine whether to do a snapshot.||string|
+|sourceinfoStructMaker|The name of the SourceInfoStructMaker class that returns SourceInfo schema and struct.|io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker|string|
+|streamingDelayMs|A delay period after the snapshot is completed and the streaming begins, given in milliseconds. Defaults to 0 ms.|0ms|duration|
+|tombstonesOnDelete|Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a delete event (false). 
Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events pertaining to the given key once the source record got deleted.|false|boolean| +|topicNamingStrategy|The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, heartbeat event etc.|io.debezium.schema.SchemaTopicNamingStrategy|string| +|topicPrefix|Topic prefix that identifies and provides a namespace for the particular database server/cluster is capturing changes. The topic prefix should be unique across all other connectors, since it is used as a prefix for all Kafka topic names that receive events emitted by this connector. Only alphanumeric characters, hyphens, dots and underscores must be accepted.||string| +|transactionMetadataFactory|Class to make transaction context \& transaction struct/schemas|io.debezium.pipeline.txmetadata.DefaultTransactionMetadataFactory|string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|name|Unique name for the connector. Attempting to register again with the same name will fail.||string| +|additionalProperties|Additional properties for debezium components in case they can't be set directly on the camel configurations (e.g: setting Kafka Connect properties needed by Debezium engine, for example setting KafkaOffsetBackingStore), the properties have to be prefixed with additionalProperties.. E.g: additionalProperties.transactional.id=12345\&additionalProperties.schema.registry.url=http://localhost:8811/avro||object| +|internalKeyConverter|The Converter class that should be used to serialize and deserialize key data for offsets. The default is JSON converter.|org.apache.kafka.connect.json.JsonConverter|string| +|internalValueConverter|The Converter class that should be used to serialize and deserialize value data for offsets. 
The default is JSON converter.|org.apache.kafka.connect.json.JsonConverter|string| +|offsetCommitPolicy|The name of the Java class of the commit policy. It defines when offsets commit has to be triggered based on the number of events processed and the time elapsed since the last commit. This class must implement the interface 'OffsetCommitPolicy'. The default is a periodic commit policy based upon time intervals.||string| +|offsetCommitTimeoutMs|Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. The default is 5 seconds.|5000|duration| +|offsetFlushIntervalMs|Interval at which to try committing offsets. The default is 1 minute.|60000|duration| +|offsetStorage|The name of the Java class that is responsible for persistence of connector offsets.|org.apache.kafka.connect.storage.FileOffsetBackingStore|string| +|offsetStorageFileName|Path to file where offsets are to be stored. Required when offset.storage is set to the FileOffsetBackingStore.||string| +|offsetStoragePartitions|The number of partitions used when creating the offset storage topic. Required when offset.storage is set to the 'KafkaOffsetBackingStore'.||integer| +|offsetStorageReplicationFactor|Replication factor used when creating the offset storage topic. Required when offset.storage is set to the KafkaOffsetBackingStore||integer| +|offsetStorageTopic|The name of the Kafka topic where offsets are to be stored. Required when offset.storage is set to the KafkaOffsetBackingStore.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|captureMode|The method used to capture changes from MongoDB server. Options include: 'change\_streams' to capture changes via MongoDB Change Streams, update events do not contain full documents; 'change\_streams\_update\_full' (the default) to capture changes via MongoDB Change Streams, update events contain full documents|change\_streams\_update\_full|string| +|collectionExcludeList|A comma-separated list of regular expressions or literals that match the collection names for which changes are to be excluded||string| +|collectionIncludeList|A comma-separated list of regular expressions or literals that match the collection names for which changes are to be captured||string| +|converters|Optional list of custom converters that would be used instead of default ones. 
The converters are defined using '.type' config option and configured using options '.'||string| +|cursorMaxAwaitTimeMs|The maximum processing time in milliseconds to wait for the oplog cursor to process a single poll request||duration| +|customMetricTags|The custom metric tags will accept key-value pairs to customize the MBean object name which should be appended the end of regular name, each key would represent a tag for the MBean object name, and the corresponding value would be the value of that tag the key is. For example: k1=v1,k2=v2||string| +|databaseExcludeList|A comma-separated list of regular expressions or literals that match the database names for which changes are to be excluded||string| +|databaseIncludeList|A comma-separated list of regular expressions or literals that match the database names for which changes are to be captured||string| +|errorsMaxRetries|The maximum number of retries on connection errors before failing (-1 = no limit, 0 = disabled, 0 = num of retries).|-1|integer| +|eventProcessingFailureHandlingMode|Specify how failures during processing of events (i.e. when encountering a corrupted event) should be handled, including: 'fail' (the default) an exception indicating the problematic event and its position is raised, causing the connector to be stopped; 'warn' the problematic event and its position will be logged and the event will be skipped; 'ignore' the problematic event will be skipped.|fail|string| +|fieldExcludeList|A comma-separated list of the fully-qualified names of fields that should be excluded from change event message values||string| +|fieldRenames|A comma-separated list of the fully-qualified replacements of fields that should be used to rename fields in change event message values. 
Fully-qualified replacements for fields are of the form databaseName.collectionName.fieldName.nestedFieldName:newNestedFieldName, where databaseName and collectionName may contain the wildcard () which matches any characters, the colon character (:) is used to determine rename mapping of field.||string| +|heartbeatIntervalMs|Length of an interval in milli-seconds in in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default.|0ms|duration| +|heartbeatTopicsPrefix|The prefix that is used to name heartbeat topics.Defaults to \_\_debezium-heartbeat.|\_\_debezium-heartbeat|string| +|incrementalSnapshotWatermarkingStrategy|Specify the strategy used for watermarking during an incremental snapshot: 'insert\_insert' both open and close signal is written into signal data collection (default); 'insert\_delete' only open signal is written on signal data collection, the close will delete the relative open signal;|INSERT\_INSERT|string| +|maxBatchSize|Maximum size of each batch of source records. Defaults to 2048.|2048|integer| +|maxQueueSize|Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. Defaults to 8192, and should always be larger than the maximum batch size.|8192|integer| +|maxQueueSizeInBytes|Maximum size of the queue in bytes for change events read from the database log but not yet recorded or forwarded. Defaults to 0. Mean the feature is not enabled|0|integer| +|mongodbAuthsource|Database containing user credentials.|admin|string| +|mongodbConnectionString|Database connection string.||string| +|mongodbConnectTimeoutMs|The connection timeout, given in milliseconds. Defaults to 10 seconds (10,000 ms).|10s|duration| +|mongodbHeartbeatFrequencyMs|The frequency that the cluster monitor attempts to reach each server. 
Defaults to 10 seconds (10,000 ms).|10s|duration| +|mongodbPassword|Password to be used when connecting to MongoDB, if necessary.||string| +|mongodbPollIntervalMs|Interval for looking for new, removed, or changed replica sets, given in milliseconds. Defaults to 30 seconds (30,000 ms).|30s|duration| +|mongodbServerSelectionTimeoutMs|The server selection timeout, given in milliseconds. Defaults to 10 seconds (10,000 ms).|30s|duration| +|mongodbSocketTimeoutMs|The socket timeout, given in milliseconds. Defaults to 0 ms.|0ms|duration| +|mongodbSslEnabled|Should connector use SSL to connect to MongoDB instances|false|boolean| +|mongodbSslInvalidHostnameAllowed|Whether invalid host names are allowed when using SSL. If true the connection will not prevent man-in-the-middle attacks|false|boolean| +|mongodbUser|Database user for connecting to MongoDB, if necessary.||string| +|notificationEnabledChannels|List of notification channels names that are enabled.||string| +|notificationSinkTopicName|The name of the topic for the notifications. This is required in case 'sink' is in the list of enabled channels||string| +|pollIntervalMs|Time to wait for new change events to appear after receiving no events, given in milliseconds. Defaults to 500 ms.|500ms|duration| +|postProcessors|Optional list of post processors. The processors are defined using '.type' config option and configured using options ''||string| +|provideTransactionMetadata|Enables transaction metadata extraction together with event counting|false|boolean| +|queryFetchSize|The maximum number of records that should be loaded into memory while streaming. A value of '0' uses the default JDBC fetch size.|0|integer| +|retriableRestartConnectorWaitMs|Time to wait before restarting connector after retriable exception occurs. 
Defaults to 10000ms.|10s|duration| +|schemaHistoryInternalFileFilename|The path to the file that will be used to record the database schema history||string| +|schemaNameAdjustmentMode|Specify how schema names should be adjusted for compatibility with the message converter used by the connector, including: 'avro' replaces the characters that cannot be used in the Avro type name with underscore; 'avro\_unicode' replaces the underscore or characters that cannot be used in the Avro type name with corresponding unicode like \_uxxxx. Note: \_ is an escape sequence like backslash in Java;'none' does not apply any adjustment (default)|none|string| +|signalDataCollection|The name of the data collection that is used to send signals/commands to Debezium. Signaling is disabled when not set.||string| +|signalEnabledChannels|List of channels names that are enabled. Source channel is enabled by default|source|string| +|signalPollIntervalMs|Interval for looking for new signals in registered channels, given in milliseconds. Defaults to 5 seconds.|5s|duration| +|skippedOperations|The comma-separated list of operations to skip during streaming, defined as: 'c' for inserts/create; 'u' for updates; 'd' for deletes, 't' for truncates, and 'none' to indicate nothing skipped. By default, only truncate operations will be skipped.|t|string| +|snapshotCollectionFilterOverrides|This property contains a comma-separated list of ., for which the initial snapshot may be a subset of data present in the data source. The subset would be defined by mongodb filter query specified as value for property snapshot.collection.filter.override..||string| +|snapshotDelayMs|A delay period before a snapshot will begin, given in milliseconds. 
Defaults to 0 ms.|0ms|duration| +|snapshotFetchSize|The maximum number of records that should be loaded into memory while performing a snapshot.||integer| +|snapshotIncludeCollectionList|This setting must be set to specify a list of tables/collections whose snapshot must be taken on creating or restarting the connector.||string| +|snapshotMaxThreads|The maximum number of threads used to perform the snapshot. Defaults to 1.|1|integer| +|snapshotMode|The criteria for running a snapshot upon startup of the connector. Select one of the following snapshot options: 'initial' (default): If the connector does not detect any offsets for the logical server name, it runs a snapshot that captures the current full state of the configured tables. After the snapshot completes, the connector begins to stream changes from the oplog. 'never': The connector does not run a snapshot. Upon first startup, the connector immediately begins reading from the beginning of the oplog.|initial|string| +|snapshotModeConfigurationBasedSnapshotData|When 'snapshot.mode' is set as configuration\_based, this setting permits to specify whenever the data should be snapshotted or not.|false|boolean| +|snapshotModeConfigurationBasedSnapshotOnDataError|When 'snapshot.mode' is set as configuration\_based, this setting permits to specify whenever the data should be snapshotted or not in case of error.|false|boolean| +|snapshotModeConfigurationBasedSnapshotOnSchemaError|When 'snapshot.mode' is set as configuration\_based, this setting permits to specify whenever the schema should be snapshotted or not in case of error.|false|boolean| +|snapshotModeConfigurationBasedSnapshotSchema|When 'snapshot.mode' is set as configuration\_based, this setting permits to specify whenever the schema should be snapshotted or not.|false|boolean| +|snapshotModeConfigurationBasedStartStream|When 'snapshot.mode' is set as configuration\_based, this setting permits to specify whenever the stream should start or not after 
snapshot.|false|boolean| +|snapshotModeCustomName|When 'snapshot.mode' is set as custom, this setting must be set to specify a the name of the custom implementation provided in the 'name()' method. The implementations must implement the 'Snapshotter' interface and is called on each app boot to determine whether to do a snapshot.||string| +|sourceinfoStructMaker|The name of the SourceInfoStructMaker class that returns SourceInfo schema and struct.|io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker|string| +|streamingDelayMs|A delay period after the snapshot is completed and the streaming begins, given in milliseconds. Defaults to 0 ms.|0ms|duration| +|tombstonesOnDelete|Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a delete event (false). Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events pertaining to the given key once the source record got deleted.|false|boolean| +|topicNamingStrategy|The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, heartbeat event etc.|io.debezium.schema.SchemaTopicNamingStrategy|string| +|topicPrefix|Topic prefix that identifies and provides a namespace for the particular database server/cluster is capturing changes. The topic prefix should be unique across all other connectors, since it is used as a prefix for all Kafka topic names that receive events emitted by this connector. 
Only alphanumeric characters, hyphens, dots and underscores must be accepted.||string| +|transactionMetadataFactory|Class to make transaction context \& transaction struct/schemas|io.debezium.pipeline.txmetadata.DefaultTransactionMetadataFactory|string| diff --git a/camel-debezium-mysql.md b/camel-debezium-mysql.md new file mode 100644 index 0000000000000000000000000000000000000000..d4663f479ff32e1f86c6781a9db3a9315257906e --- /dev/null +++ b/camel-debezium-mysql.md @@ -0,0 +1,352 @@ +# Debezium-mysql + +**Since Camel 3.0** + +**Only consumer is supported** + +The Debezium MySQL component is wrapper around +[Debezium](https://debezium.io/) using [Debezium +Engine](https://debezium.io/documentation/reference/1.9/development/engine.html), +which enables Change Data Capture from MySQL database using Debezium +without the need for Kafka or Kafka Connect. + +**Note on handling failures:** per [Debezium Embedded +Engine](https://debezium.io/documentation/reference/1.9/development/engine.html#_handling_failures) +documentation, the engines are actively recording source offsets and +periodically flush these offsets to a persistent storage. Therefore, +when the application is restarted or crashed, the engine will resume +from the last recorded offset. This means that, at normal operation, +your downstream routes will receive each event exactly once. However, in +case of an application crash (not having a graceful shutdown), the +application will resume from the last recorded offset, which may result +in receiving duplicate events immediately after the restart. Therefore, +your downstream routes should be tolerant enough of such a case and +deduplicate events if needed. + +Maven users will need to add the following dependency to their `pom.xml` +for this component. 
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-debezium-mysql</artifactId>
+        <version>x.x.x</version>
+        <!-- use the same version as your Camel core version -->
+    </dependency>
+
+# URI format
+
+    debezium-mysql:name[?options]
+
+Due to licensing issues, the `mysql-connector-j` dependency is not
+included. If you are using the MySQL connector, add the following to
+your POM file:
+
+    <dependency>
+        <groupId>com.mysql</groupId>
+        <artifactId>mysql-connector-j</artifactId>
+        <version>8.0.15</version>
+    </dependency>
+
+For more information about configuration, see:
+[https://debezium.io/documentation/reference/0.10/operations/embedded.html#engine-properties](https://debezium.io/documentation/reference/0.10/operations/embedded.html#engine-properties)
+[https://debezium.io/documentation/reference/0.10/connectors/mysql.html#connector-properties](https://debezium.io/documentation/reference/0.10/connectors/mysql.html#connector-properties)
+
+# Message body
+
+If the message body is not `null` (it is `null` in the case of
+tombstones), it contains the state of the row after the event occurred,
+in `Struct` format, or in `Map` format if you use the included Type
+Converter from `Struct` to `Map`.
+
+Check below for more details.
+
+# Samples
+
+## Consuming events
+
+Here is a basic route that you can use to listen to Debezium events from
+the MySQL connector:
+
+    from("debezium-mysql:dbz-test-1?offsetStorageFileName=/usr/offset-file-1.dat&databaseHostname=localhost&databaseUser=debezium&databasePassword=dbz&databaseServerName=my-app-connector&databaseHistoryFileFilename=/usr/history-file-1.dat")
+        .log("Event received from Debezium : ${body}")
+        .log("    with this identifier ${headers.CamelDebeziumIdentifier}")
+        .log("    with these source metadata ${headers.CamelDebeziumSourceMetadata}")
+        .log("    the event occurred upon this operation '${headers.CamelDebeziumSourceOperation}'")
+        .log("    on this database '${headers.CamelDebeziumSourceMetadata[db]}' and this table '${headers.CamelDebeziumSourceMetadata[table]}'")
+        .log("    with the key ${headers.CamelDebeziumKey}")
+        .log("    the previous value is ${headers.CamelDebeziumBefore}")
+        .log("    the ddl sql text is ${headers.CamelDebeziumDdlSQL}");
+
+By default, the component emits the events in the body and the
+`CamelDebeziumBefore` header as the
+[`Struct`](https://kafka.apache.org/22/javadoc/org/apache/kafka/connect/data/Struct.html)
+data type; the reasoning behind this is to preserve the schema
+information in case it is needed. However, the component also contains a
+[Type Converter](#manual::type-converter.adoc) that converts from the
+default output type of
+[`Struct`](https://kafka.apache.org/22/javadoc/org/apache/kafka/connect/data/Struct.html)
+to `Map` in order to leverage Camel’s rich [Data
+Format](#manual::data-format.adoc) types; many of them work out of the
+box with the `Map` data type. To use it, you can either add the
+`Map.class` type when you access the message (e.g.,
+`exchange.getIn().getBody(Map.class)`), or you can always convert the
+body to `Map` in the route builder by adding `.convertBodyTo(Map.class)`
+to your Camel route DSL after the `from` statement.
+
+As mentioned above, the schema can be useful when you need to perform
+advanced data transformations that require it.
+If you choose not to convert your body to `Map`, you can obtain the
+schema information as the
+[`Schema`](https://kafka.apache.org/22/javadoc/org/apache/kafka/connect/data/Schema.html)
+type from the `Struct` like this:
+
+    from("debezium-mysql:[name]?[options]")
+        .process(exchange -> {
+            final Struct bodyValue = exchange.getIn().getBody(Struct.class);
+            final Schema schemaValue = bodyValue.schema();
+
+            log.info("Body value is : {}", bodyValue);
+            log.info("With Schema : {}", schemaValue);
+            log.info("And fields of : {}", schemaValue.fields());
+            log.info("Field name has `{}` type", schemaValue.field("name").schema());
+        });
+
+As mentioned, this component is a thin wrapper around the Debezium
+Engine. Therefore, before using this component in production, you need
+to understand how Debezium works and how its configuration affects the
+expected behavior. This is especially true in regard to [handling
+failures](https://debezium.io/documentation/reference/1.9/operations/embedded.html#_handling_failures).
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|additionalProperties|Additional properties for debezium components in case they can't be set directly on the camel configurations (e.g: setting Kafka Connect properties needed by Debezium engine, for example setting KafkaOffsetBackingStore), the properties have to be prefixed with additionalProperties.. E.g: additionalProperties.transactional.id=12345\&additionalProperties.schema.registry.url=http://localhost:8811/avro||object|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. 
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|configuration|Allow pre-configured Configurations to be set.||object| +|internalKeyConverter|The Converter class that should be used to serialize and deserialize key data for offsets. The default is JSON converter.|org.apache.kafka.connect.json.JsonConverter|string| +|internalValueConverter|The Converter class that should be used to serialize and deserialize value data for offsets. The default is JSON converter.|org.apache.kafka.connect.json.JsonConverter|string| +|offsetCommitPolicy|The name of the Java class of the commit policy. It defines when offsets commit has to be triggered based on the number of events processed and the time elapsed since the last commit. This class must implement the interface 'OffsetCommitPolicy'. The default is a periodic commit policy based upon time intervals.||string| +|offsetCommitTimeoutMs|Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. The default is 5 seconds.|5000|duration| +|offsetFlushIntervalMs|Interval at which to try committing offsets. The default is 1 minute.|60000|duration| +|offsetStorage|The name of the Java class that is responsible for persistence of connector offsets.|org.apache.kafka.connect.storage.FileOffsetBackingStore|string| +|offsetStorageFileName|Path to file where offsets are to be stored. 
Required when offset.storage is set to the FileOffsetBackingStore.||string| +|offsetStoragePartitions|The number of partitions used when creating the offset storage topic. Required when offset.storage is set to the 'KafkaOffsetBackingStore'.||integer| +|offsetStorageReplicationFactor|Replication factor used when creating the offset storage topic. Required when offset.storage is set to the KafkaOffsetBackingStore||integer| +|offsetStorageTopic|The name of the Kafka topic where offsets are to be stored. Required when offset.storage is set to the KafkaOffsetBackingStore.||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|bigintUnsignedHandlingMode|Specify how BIGINT UNSIGNED columns should be represented in change events, including: 'precise' uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect's 'org.apache.kafka.connect.data.Decimal' type; 'long' (the default) represents values using Java's 'long', which may not offer the precision but will be far easier to use in consumers.|long|string| +|binlogBufferSize|The size of a look-ahead buffer used by the binlog reader to decide whether the transaction in progress is going to be committed or rolled back. Use 0 to disable look-ahead buffering. Defaults to 0 (i.e. 
buffering is disabled.|0|integer| +|columnExcludeList|Regular expressions matching columns to exclude from change events||string| +|columnIncludeList|Regular expressions matching columns to include in change events||string| +|columnPropagateSourceType|A comma-separated list of regular expressions matching fully-qualified names of columns that adds the columns original type and original length as parameters to the corresponding field schemas in the emitted change records.||string| +|connectKeepAlive|Whether a separate thread should be used to ensure the connection is kept alive.|true|boolean| +|connectKeepAliveIntervalMs|Interval for connection checking if keep alive thread is used, given in milliseconds Defaults to 1 minute (60,000 ms).|1m|duration| +|connectTimeoutMs|Maximum time to wait after trying to connect to the database before timing out, given in milliseconds. Defaults to 30 seconds (30,000 ms).|30s|duration| +|converters|Optional list of custom converters that would be used instead of default ones. The converters are defined using '.type' config option and configured using options '.'||string| +|customMetricTags|The custom metric tags will accept key-value pairs to customize the MBean object name which should be appended the end of regular name, each key would represent a tag for the MBean object name, and the corresponding value would be the value of that tag the key is. For example: k1=v1,k2=v2||string| +|databaseExcludeList|A comma-separated list of regular expressions that match database names to be excluded from monitoring||string| +|databaseHostname|Resolvable hostname or IP address of the database server.||string| +|databaseIncludeList|The databases for which changes are to be captured||string| +|databaseInitialStatements|A semicolon separated list of SQL statements to be executed when a JDBC connection (not binlog reading connection) to the database is established. 
Note that the connector may establish JDBC connections at its own discretion, so this should typically be used for configuration of session parameters only, but not for executing DML statements. Use doubled semicolon (';;') to use a semicolon as a character and not as a delimiter.||string| +|databaseJdbcDriver|JDBC Driver class name used to connect to the MySQL database server.|com.mysql.cj.jdbc.Driver|string| +|databasePassword|Password of the database user to be used when connecting to the database.||string| +|databasePort|Port of the database server.|3306|integer| +|databaseProtocol|JDBC protocol to use with the driver.|jdbc:mysql|string| +|databaseQueryTimeoutMs|Time to wait for a query to execute, given in milliseconds. Defaults to 600 seconds (600,000 ms); zero means there is no limit.|10m|duration| +|databaseServerId|A numeric ID of this database client, which must be unique across all currently-running database processes in the cluster. This connector joins the database cluster as another server (with this unique ID) so it can read the binlog.||integer| +|databaseServerIdOffset|Only relevant if parallel snapshotting is configured. During parallel snapshotting, multiple (4) connections open to the database client, and they each need their own unique connection ID. This offset is used to generate those IDs from the base configured cluster ID.|10000|integer| +|databaseSslKeystore|The location of the key store file. This is optional and can be used for two-way authentication between the client and the database.||string| +|databaseSslKeystorePassword|The password for the key store file. This is optional and only needed if 'database.ssl.keystore' is configured.||string| +|databaseSslMode|Whether to use an encrypted connection to the database. 
Options include: 'disabled' to use an unencrypted connection; 'preferred' (the default) to establish a secure (encrypted) connection if the server supports secure connections, but fall back to an unencrypted connection otherwise; 'required' to use a secure (encrypted) connection, and fail if one cannot be established; 'verify\_ca' like 'required' but additionally verify the server TLS certificate against the configured Certificate Authority (CA) certificates, or fail if no valid matching CA certificates are found; or 'verify\_identity' like 'verify\_ca' but additionally verify that the server certificate matches the host to which the connection is attempted.|preferred|string| +|databaseSslTruststore|The location of the trust store file for the server certificate verification.||string| +|databaseSslTruststorePassword|The password for the trust store file. Used to check the integrity of the truststore, and unlock the truststore.||string| +|databaseUser|Name of the database user to be used when connecting to the database.||string| +|datatypePropagateSourceType|A comma-separated list of regular expressions matching the database-specific data type names that adds the data type's original type and original length as parameters to the corresponding field schemas in the emitted change records.||string| +|decimalHandlingMode|Specify how DECIMAL and NUMERIC columns should be represented in change events, including: 'precise' (the default) uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect's 'org.apache.kafka.connect.data.Decimal' type; 'string' uses string to represent values; 'double' represents values using Java's 'double', which may not offer the precision but will be far easier to use in consumers.|precise|string| +|enableTimeAdjuster|The database allows the user to insert year value as either 2-digit or 4-digit. 
In case of two digit the value is automatically mapped into 1970 - 2069.false - delegates the implicit conversion to the database; true - (the default) Debezium makes the conversion|true|boolean| +|errorsMaxRetries|The maximum number of retries on connection errors before failing (-1 = no limit, 0 = disabled, 0 = num of retries).|-1|integer| +|eventDeserializationFailureHandlingMode|Specify how failures during deserialization of binlog events (i.e. when encountering a corrupted event) should be handled, including: 'fail' (the default) an exception indicating the problematic event and its binlog position is raised, causing the connector to be stopped; 'warn' the problematic event and its binlog position will be logged and the event will be skipped; 'ignore' the problematic event will be skipped.|fail|string| +|eventProcessingFailureHandlingMode|Specify how failures during processing of events (i.e. when encountering a corrupted event) should be handled, including: 'fail' (the default) an exception indicating the problematic event and its position is raised, causing the connector to be stopped; 'warn' the problematic event and its position will be logged and the event will be skipped; 'ignore' the problematic event will be skipped.|fail|string| +|gtidSourceExcludes|The source UUIDs used to exclude GTID ranges when determine the starting position in the MySQL server's binlog.||string| +|gtidSourceFilterDmlEvents|When set to true, only produce DML events for transactions that were written on the server with matching GTIDs defined by the gtid.source.includes or gtid.source.excludes, if they were specified.|true|boolean| +|gtidSourceIncludes|The source UUIDs used to include GTID ranges when determine the starting position in the MySQL server's binlog.||string| +|heartbeatActionQuery|The query executed with every heartbeat.||string| +|heartbeatIntervalMs|Length of an interval in milli-seconds in in which the connector periodically sends heartbeat messages to a heartbeat 
topic. Use 0 to disable heartbeat messages. Disabled by default.|0ms|duration| +|heartbeatTopicsPrefix|The prefix that is used to name heartbeat topics.Defaults to \_\_debezium-heartbeat.|\_\_debezium-heartbeat|string| +|includeQuery|Whether the connector should include the original SQL query that generated the change event. Note: This option requires the database to be configured using the server option binlog\_rows\_query\_log\_events (MySQL) or binlog\_annotate\_row\_events (MariaDB) set to ON.Query will not be present for events generated from snapshot. WARNING: Enabling this option may expose tables or fields explicitly excluded or masked by including the original SQL statement in the change event. For this reason the default value is 'false'.|false|boolean| +|includeSchemaChanges|Whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID. Each schema change will be recorded using a key that contains the database name and whose value include logical description of the new schema and optionally the DDL statement(s). The default is 'true'. This is independent of how the connector internally records database schema history.|true|boolean| +|includeSchemaComments|Whether the connector parse table and column's comment to metadata object. Note: Enable this option will bring the implications on memory usage. The number and size of ColumnImpl objects is what largely impacts how much memory is consumed by the Debezium connectors, and adding a String to each of them can potentially be quite heavy. The default is 'false'.|false|boolean| +|inconsistentSchemaHandlingMode|Specify how binlog events that belong to a table missing from internal schema representation (i.e. 
internal representation is not consistent with database) should be handled, including: 'fail' (the default) an exception indicating the problematic event and its binlog position is raised, causing the connector to be stopped; 'warn' the problematic event and its binlog position will be logged and the event will be skipped; 'skip' the problematic event will be skipped.|fail|string| +|incrementalSnapshotAllowSchemaChanges|Detect schema change during an incremental snapshot and re-select a current chunk to avoid locking DDLs. Note that changes to a primary key are not supported and can cause incorrect results if performed during an incremental snapshot. Another limitation is that if a schema change affects only columns' default values, then the change won't be detected until the DDL is processed from the binlog stream. This doesn't affect the snapshot events' values, but the schema of snapshot events may have outdated defaults.|false|boolean| +|incrementalSnapshotChunkSize|The maximum size of chunk (number of documents/rows) for incremental snapshotting|1024|integer| +|incrementalSnapshotWatermarkingStrategy|Specify the strategy used for watermarking during an incremental snapshot: 'insert\_insert' both open and close signal is written into signal data collection (default); 'insert\_delete' only open signal is written on signal data collection, the close will delete the relative open signal;|INSERT\_INSERT|string| +|maxBatchSize|Maximum size of each batch of source records. Defaults to 2048.|2048|integer| +|maxQueueSize|Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. Defaults to 8192, and should always be larger than the maximum batch size.|8192|integer| +|maxQueueSizeInBytes|Maximum size of the queue in bytes for change events read from the database log but not yet recorded or forwarded. Defaults to 0. 
Mean the feature is not enabled|0|integer| +|messageKeyColumns|A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as message key. Each expression must match the pattern ':', where the table names could be defined as (DB\_NAME.TABLE\_NAME) or (SCHEMA\_NAME.TABLE\_NAME), depending on the specific connector, and the key columns are a comma-separated list of columns representing the custom key. For any table without an explicit key configuration the table's primary key column(s) will be used as message key. Example: dbserver1.inventory.orderlines:orderId,orderLineId;dbserver1.inventory.orders:id||string| +|minRowCountToStreamResults|The number of rows a table must contain to stream results rather than pull all into memory during snapshots. Defaults to 1,000. Use 0 to stream all results and completely avoid checking the size of each table.|1000|integer| +|notificationEnabledChannels|List of notification channels names that are enabled.||string| +|notificationSinkTopicName|The name of the topic for the notifications. This is required in case 'sink' is in the list of enabled channels||string| +|pollIntervalMs|Time to wait for new change events to appear after receiving no events, given in milliseconds. Defaults to 500 ms.|500ms|duration| +|postProcessors|Optional list of post processors. The processors are defined using '.type' config option and configured using options ''||string| +|provideTransactionMetadata|Enables transaction metadata extraction together with event counting|false|boolean| +|queryFetchSize|The maximum number of records that should be loaded into memory while streaming. A value of '0' uses the default JDBC fetch size.|0|integer| +|retriableRestartConnectorWaitMs|Time to wait before restarting connector after retriable exception occurs. Defaults to 10000ms.|10s|duration| +|schemaHistoryInternal|The name of the SchemaHistory class that should be used to store and recover database schema changes. 
The configuration properties for the history are prefixed with the 'schema.history.internal.' string.|io.debezium.storage.kafka.history.KafkaSchemaHistory|string| +|schemaHistoryInternalFileFilename|The path to the file that will be used to record the database schema history||string| +|schemaHistoryInternalSkipUnparseableDdl|Controls the action Debezium will take when it meets a DDL statement in binlog, that it cannot parse.By default the connector will stop operating but by changing the setting it can ignore the statements which it cannot parse. If skipping is enabled then Debezium can miss metadata changes.|false|boolean| +|schemaHistoryInternalStoreOnlyCapturedDatabasesDdl|Controls what DDL will Debezium store in database schema history. By default (true) only DDL that manipulates a table from captured schema/database will be stored. If set to false, then Debezium will store all incoming DDL statements.|true|boolean| +|schemaHistoryInternalStoreOnlyCapturedTablesDdl|Controls what DDL will Debezium store in database schema history. By default (false) Debezium will store all incoming DDL statements. If set to true, then only DDL that manipulates a captured table will be stored.|false|boolean| +|schemaNameAdjustmentMode|Specify how schema names should be adjusted for compatibility with the message converter used by the connector, including: 'avro' replaces the characters that cannot be used in the Avro type name with underscore; 'avro\_unicode' replaces the underscore or characters that cannot be used in the Avro type name with corresponding unicode like \_uxxxx. Note: \_ is an escape sequence like backslash in Java;'none' does not apply any adjustment (default)|none|string| +|signalDataCollection|The name of the data collection that is used to send signals/commands to Debezium. Signaling is disabled when not set.||string| +|signalEnabledChannels|List of channels names that are enabled. 
Source channel is enabled by default|source|string| +|signalPollIntervalMs|Interval for looking for new signals in registered channels, given in milliseconds. Defaults to 5 seconds.|5s|duration| +|skippedOperations|The comma-separated list of operations to skip during streaming, defined as: 'c' for inserts/create; 'u' for updates; 'd' for deletes, 't' for truncates, and 'none' to indicate nothing skipped. By default, only truncate operations will be skipped.|t|string| +|snapshotDelayMs|A delay period before a snapshot will begin, given in milliseconds. Defaults to 0 ms.|0ms|duration| +|snapshotFetchSize|The maximum number of records that should be loaded into memory while performing a snapshot.||integer| +|snapshotIncludeCollectionList|This setting must be set to specify a list of tables/collections whose snapshot must be taken on creating or restarting the connector.||string| +|snapshotLockingMode|Controls how long the connector holds onto the global read lock while it is performing a snapshot. The default is 'minimal', which means the connector holds the global read lock (and thus prevents any updates) for just the initial portion of the snapshot while the database schemas and other metadata are being read. The remaining work in a snapshot involves selecting all rows from each table, and this can be done using the snapshot process' REPEATABLE READ transaction even when the lock is no longer held and other operations are updating the database. However, in some cases it may be desirable to block all writes for the entire duration of the snapshot; in such cases set this property to 'extended'. Using a value of 'none' will prevent the connector from acquiring any table locks during the snapshot process. 
This mode can only be used in combination with snapshot.mode values of 'schema\_only' or 'schema\_only\_recovery' and is only safe to use if no schema changes are happening while the snapshot is taken.|minimal|string| +|snapshotLockTimeoutMs|The maximum number of millis to wait for table locks at the beginning of a snapshot. If locks cannot be acquired in this time frame, the snapshot will be aborted. Defaults to 10 seconds|10s|duration| +|snapshotMaxThreads|The maximum number of threads used to perform the snapshot. Defaults to 1.|1|integer| +|snapshotMode|The criteria for running a snapshot upon startup of the connector. Select one of the following snapshot options: 'when\_needed': On startup, the connector runs a snapshot if one is needed.; 'schema\_only': If the connector does not detect any offsets for the logical server name, it runs a snapshot that captures only the schema (table structures), but not any table data. After the snapshot completes, the connector begins to stream changes from the binlog.; 'schema\_only\_recovery': The connector performs a snapshot that captures only the database schema history. The connector then transitions back to streaming. Use this setting to restore a corrupted or lost database schema history topic. Do not use if the database schema was modified after the connector stopped.; 'initial' (default): If the connector does not detect any offsets for the logical server name, it runs a snapshot that captures the current full state of the configured tables. After the snapshot completes, the connector begins to stream changes from the binlog.; 'initial\_only': The connector performs a snapshot as it does for the 'initial' option, but after the connector completes the snapshot, it stops, and does not stream changes from the binlog.; 'never': The connector does not run a snapshot. Upon first startup, the connector immediately begins reading from the beginning of the binlog. 
The 'never' mode should be used with care, and only when the binlog is known to contain all history.|initial|string| +|snapshotModeConfigurationBasedSnapshotData|When 'snapshot.mode' is set as configuration\_based, this setting permits to specify whenever the data should be snapshotted or not.|false|boolean| +|snapshotModeConfigurationBasedSnapshotOnDataError|When 'snapshot.mode' is set as configuration\_based, this setting permits to specify whenever the data should be snapshotted or not in case of error.|false|boolean| +|snapshotModeConfigurationBasedSnapshotOnSchemaError|When 'snapshot.mode' is set as configuration\_based, this setting permits to specify whenever the schema should be snapshotted or not in case of error.|false|boolean| +|snapshotModeConfigurationBasedSnapshotSchema|When 'snapshot.mode' is set as configuration\_based, this setting permits to specify whenever the schema should be snapshotted or not.|false|boolean| +|snapshotModeConfigurationBasedStartStream|When 'snapshot.mode' is set as configuration\_based, this setting permits to specify whenever the stream should start or not after snapshot.|false|boolean| +|snapshotModeCustomName|When 'snapshot.mode' is set as custom, this setting must be set to specify a the name of the custom implementation provided in the 'name()' method. The implementations must implement the 'Snapshotter' interface and is called on each app boot to determine whether to do a snapshot.||string| +|snapshotQueryMode|Controls query used during the snapshot|select\_all|string| +|snapshotQueryModeCustomName|When 'snapshot.query.mode' is set as custom, this setting must be set to specify a the name of the custom implementation provided in the 'name()' method. 
The implementations must implement the 'SnapshotterQuery' interface and is called to determine how to build queries during snapshot.||string| +|snapshotSelectStatementOverrides|This property contains a comma-separated list of fully-qualified tables (DB\_NAME.TABLE\_NAME) or (SCHEMA\_NAME.TABLE\_NAME), depending on the specific connectors. Select statements for the individual tables are specified in further configuration properties, one for each table, identified by the id 'snapshot.select.statement.overrides.DB\_NAME.TABLE\_NAME' or 'snapshot.select.statement.overrides.SCHEMA\_NAME.TABLE\_NAME', respectively. The value of those properties is the select statement to use when retrieving data from the specific table during snapshotting. A possible use case for large append-only tables is setting a specific point where to start (resume) snapshotting, in case a previous snapshotting was interrupted.||string| +|snapshotTablesOrderByRowCount|Controls the order in which tables are processed in the initial snapshot. A descending value will order the tables by row count descending. A ascending value will order the tables by row count ascending. A value of disabled (the default) will disable ordering by row count.|disabled|string| +|sourceinfoStructMaker|The name of the SourceInfoStructMaker class that returns SourceInfo schema and struct.|io.debezium.connector.mysql.MySqlSourceInfoStructMaker|string| +|streamingDelayMs|A delay period after the snapshot is completed and the streaming begins, given in milliseconds. 
Defaults to 0 ms.|0ms|duration| +|tableExcludeList|A comma-separated list of regular expressions that match the fully-qualified names of tables to be excluded from monitoring||string| +|tableIgnoreBuiltin|Flag specifying whether built-in tables should be ignored.|true|boolean| +|tableIncludeList|The tables for which changes are to be captured||string| +|timePrecisionMode|Time, date and timestamps can be represented with different kinds of precisions, including: 'adaptive\_time\_microseconds': the precision of date and timestamp values is based the database column's precision; but time fields always use microseconds precision; 'connect': always represents time, date and timestamp values using Kafka Connect's built-in representations for Time, Date, and Timestamp, which uses millisecond precision regardless of the database columns' precision.|adaptive\_time\_microseconds|string| +|tombstonesOnDelete|Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a delete event (false). Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events pertaining to the given key once the source record got deleted.|false|boolean| +|topicNamingStrategy|The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, heartbeat event etc.|io.debezium.schema.SchemaTopicNamingStrategy|string| +|topicPrefix|Topic prefix that identifies and provides a namespace for the particular database server/cluster is capturing changes. The topic prefix should be unique across all other connectors, since it is used as a prefix for all Kafka topic names that receive events emitted by this connector. 
Only alphanumeric characters, hyphens, dots and underscores must be accepted.||string| +|transactionMetadataFactory|Class to make transaction context \& transaction struct/schemas|io.debezium.pipeline.txmetadata.DefaultTransactionMetadataFactory|string| +|useNongracefulDisconnect|Whether to use socket.setSoLinger(true, 0) when BinaryLogClient keepalive thread triggers a disconnect for a stale connection.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|name|Unique name for the connector. Attempting to register again with the same name will fail.||string| +|additionalProperties|Additional properties for debezium components in case they can't be set directly on the camel configurations (e.g: setting Kafka Connect properties needed by Debezium engine, for example setting KafkaOffsetBackingStore), the properties have to be prefixed with additionalProperties.. E.g: additionalProperties.transactional.id=12345\&additionalProperties.schema.registry.url=http://localhost:8811/avro||object| +|internalKeyConverter|The Converter class that should be used to serialize and deserialize key data for offsets. The default is JSON converter.|org.apache.kafka.connect.json.JsonConverter|string| +|internalValueConverter|The Converter class that should be used to serialize and deserialize value data for offsets. The default is JSON converter.|org.apache.kafka.connect.json.JsonConverter|string| +|offsetCommitPolicy|The name of the Java class of the commit policy. It defines when offsets commit has to be triggered based on the number of events processed and the time elapsed since the last commit. This class must implement the interface 'OffsetCommitPolicy'. 
The default is a periodic commit policy based upon time intervals.||string| +|offsetCommitTimeoutMs|Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. The default is 5 seconds.|5000|duration| +|offsetFlushIntervalMs|Interval at which to try committing offsets. The default is 1 minute.|60000|duration| +|offsetStorage|The name of the Java class that is responsible for persistence of connector offsets.|org.apache.kafka.connect.storage.FileOffsetBackingStore|string| +|offsetStorageFileName|Path to file where offsets are to be stored. Required when offset.storage is set to the FileOffsetBackingStore.||string| +|offsetStoragePartitions|The number of partitions used when creating the offset storage topic. Required when offset.storage is set to the 'KafkaOffsetBackingStore'.||integer| +|offsetStorageReplicationFactor|Replication factor used when creating the offset storage topic. Required when offset.storage is set to the KafkaOffsetBackingStore||integer| +|offsetStorageTopic|The name of the Kafka topic where offsets are to be stored. Required when offset.storage is set to the KafkaOffsetBackingStore.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice that if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions that will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|bigintUnsignedHandlingMode|Specify how BIGINT UNSIGNED columns should be represented in change events, including: 'precise' uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect's 'org.apache.kafka.connect.data.Decimal' type; 'long' (the default) represents values using Java's 'long', which may not offer the precision but will be far easier to use in consumers.|long|string|
+|binlogBufferSize|The size of a look-ahead buffer used by the binlog reader to decide whether the transaction in progress is going to be committed or rolled back. Use 0 to disable look-ahead buffering. Defaults to 0 (i.e. 
buffering is disabled).|0|integer|
+|columnExcludeList|Regular expressions matching columns to exclude from change events||string|
+|columnIncludeList|Regular expressions matching columns to include in change events||string|
+|columnPropagateSourceType|A comma-separated list of regular expressions matching fully-qualified names of columns that adds the column's original type and original length as parameters to the corresponding field schemas in the emitted change records.||string|
+|connectKeepAlive|Whether a separate thread should be used to ensure the connection is kept alive.|true|boolean|
+|connectKeepAliveIntervalMs|Interval for connection checking if keep alive thread is used, given in milliseconds. Defaults to 1 minute (60,000 ms).|1m|duration|
+|connectTimeoutMs|Maximum time to wait after trying to connect to the database before timing out, given in milliseconds. Defaults to 30 seconds (30,000 ms).|30s|duration|
+|converters|Optional list of custom converters that would be used instead of default ones. The converters are defined using '.type' config option and configured using options '.'||string|
+|customMetricTags|The custom metric tags will accept key-value pairs to customize the MBean object name, which will be appended to the end of the regular name; each key represents a tag for the MBean object name, and the corresponding value is the value of that tag. For example: k1=v1,k2=v2||string|
+|databaseExcludeList|A comma-separated list of regular expressions that match database names to be excluded from monitoring||string|
+|databaseHostname|Resolvable hostname or IP address of the database server.||string|
+|databaseIncludeList|The databases for which changes are to be captured||string|
+|databaseInitialStatements|A semicolon-separated list of SQL statements to be executed when a JDBC connection (not binlog reading connection) to the database is established. 
Note that the connector may establish JDBC connections at its own discretion, so this should typically be used for configuration of session parameters only, but not for executing DML statements. Use doubled semicolon (';;') to use a semicolon as a character and not as a delimiter.||string| +|databaseJdbcDriver|JDBC Driver class name used to connect to the MySQL database server.|com.mysql.cj.jdbc.Driver|string| +|databasePassword|Password of the database user to be used when connecting to the database.||string| +|databasePort|Port of the database server.|3306|integer| +|databaseProtocol|JDBC protocol to use with the driver.|jdbc:mysql|string| +|databaseQueryTimeoutMs|Time to wait for a query to execute, given in milliseconds. Defaults to 600 seconds (600,000 ms); zero means there is no limit.|10m|duration| +|databaseServerId|A numeric ID of this database client, which must be unique across all currently-running database processes in the cluster. This connector joins the database cluster as another server (with this unique ID) so it can read the binlog.||integer| +|databaseServerIdOffset|Only relevant if parallel snapshotting is configured. During parallel snapshotting, multiple (4) connections open to the database client, and they each need their own unique connection ID. This offset is used to generate those IDs from the base configured cluster ID.|10000|integer| +|databaseSslKeystore|The location of the key store file. This is optional and can be used for two-way authentication between the client and the database.||string| +|databaseSslKeystorePassword|The password for the key store file. This is optional and only needed if 'database.ssl.keystore' is configured.||string| +|databaseSslMode|Whether to use an encrypted connection to the database. 
Options include: 'disabled' to use an unencrypted connection; 'preferred' (the default) to establish a secure (encrypted) connection if the server supports secure connections, but fall back to an unencrypted connection otherwise; 'required' to use a secure (encrypted) connection, and fail if one cannot be established; 'verify\_ca' like 'required' but additionally verify the server TLS certificate against the configured Certificate Authority (CA) certificates, or fail if no valid matching CA certificates are found; or 'verify\_identity' like 'verify\_ca' but additionally verify that the server certificate matches the host to which the connection is attempted.|preferred|string| +|databaseSslTruststore|The location of the trust store file for the server certificate verification.||string| +|databaseSslTruststorePassword|The password for the trust store file. Used to check the integrity of the truststore, and unlock the truststore.||string| +|databaseUser|Name of the database user to be used when connecting to the database.||string| +|datatypePropagateSourceType|A comma-separated list of regular expressions matching the database-specific data type names that adds the data type's original type and original length as parameters to the corresponding field schemas in the emitted change records.||string| +|decimalHandlingMode|Specify how DECIMAL and NUMERIC columns should be represented in change events, including: 'precise' (the default) uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect's 'org.apache.kafka.connect.data.Decimal' type; 'string' uses string to represent values; 'double' represents values using Java's 'double', which may not offer the precision but will be far easier to use in consumers.|precise|string| +|enableTimeAdjuster|The database allows the user to insert year value as either 2-digit or 4-digit. 
In case of two digits the value is automatically mapped into 1970 - 2069. false - delegates the implicit conversion to the database; true (the default) - Debezium makes the conversion|true|boolean|
+|errorsMaxRetries|The maximum number of retries on connection errors before failing (-1 = no limit, 0 = disabled, > 0 = number of retries).|-1|integer|
+|eventDeserializationFailureHandlingMode|Specify how failures during deserialization of binlog events (i.e. when encountering a corrupted event) should be handled, including: 'fail' (the default) an exception indicating the problematic event and its binlog position is raised, causing the connector to be stopped; 'warn' the problematic event and its binlog position will be logged and the event will be skipped; 'ignore' the problematic event will be skipped.|fail|string|
+|eventProcessingFailureHandlingMode|Specify how failures during processing of events (i.e. when encountering a corrupted event) should be handled, including: 'fail' (the default) an exception indicating the problematic event and its position is raised, causing the connector to be stopped; 'warn' the problematic event and its position will be logged and the event will be skipped; 'ignore' the problematic event will be skipped.|fail|string|
+|gtidSourceExcludes|The source UUIDs used to exclude GTID ranges when determining the starting position in the MySQL server's binlog.||string|
+|gtidSourceFilterDmlEvents|When set to true, only produce DML events for transactions that were written on the server with matching GTIDs defined by the gtid.source.includes or gtid.source.excludes, if they were specified.|true|boolean|
+|gtidSourceIncludes|The source UUIDs used to include GTID ranges when determining the starting position in the MySQL server's binlog.||string|
+|heartbeatActionQuery|The query executed with every heartbeat.||string|
+|heartbeatIntervalMs|Length of an interval in milliseconds in which the connector periodically sends heartbeat messages to a heartbeat 
topic. Use 0 to disable heartbeat messages. Disabled by default.|0ms|duration|
+|heartbeatTopicsPrefix|The prefix that is used to name heartbeat topics. Defaults to \_\_debezium-heartbeat.|\_\_debezium-heartbeat|string|
+|includeQuery|Whether the connector should include the original SQL query that generated the change event. Note: This option requires the database to be configured using the server option binlog\_rows\_query\_log\_events (MySQL) or binlog\_annotate\_row\_events (MariaDB) set to ON. The query will not be present for events generated from snapshot. WARNING: Enabling this option may expose tables or fields explicitly excluded or masked by including the original SQL statement in the change event. For this reason the default value is 'false'.|false|boolean|
+|includeSchemaChanges|Whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID. Each schema change will be recorded using a key that contains the database name and whose value includes a logical description of the new schema and optionally the DDL statement(s). The default is 'true'. This is independent of how the connector internally records database schema history.|true|boolean|
+|includeSchemaComments|Whether the connector parses table and column comments into the metadata object. Note: Enabling this option has implications on memory usage. The number and size of ColumnImpl objects is what largely impacts how much memory is consumed by the Debezium connectors, and adding a String to each of them can potentially be quite heavy. The default is 'false'.|false|boolean|
+|inconsistentSchemaHandlingMode|Specify how binlog events that belong to a table missing from internal schema representation (i.e. 
internal representation is not consistent with database) should be handled, including: 'fail' (the default) an exception indicating the problematic event and its binlog position is raised, causing the connector to be stopped; 'warn' the problematic event and its binlog position will be logged and the event will be skipped; 'skip' the problematic event will be skipped.|fail|string| +|incrementalSnapshotAllowSchemaChanges|Detect schema change during an incremental snapshot and re-select a current chunk to avoid locking DDLs. Note that changes to a primary key are not supported and can cause incorrect results if performed during an incremental snapshot. Another limitation is that if a schema change affects only columns' default values, then the change won't be detected until the DDL is processed from the binlog stream. This doesn't affect the snapshot events' values, but the schema of snapshot events may have outdated defaults.|false|boolean| +|incrementalSnapshotChunkSize|The maximum size of chunk (number of documents/rows) for incremental snapshotting|1024|integer| +|incrementalSnapshotWatermarkingStrategy|Specify the strategy used for watermarking during an incremental snapshot: 'insert\_insert' both open and close signal is written into signal data collection (default); 'insert\_delete' only open signal is written on signal data collection, the close will delete the relative open signal;|INSERT\_INSERT|string| +|maxBatchSize|Maximum size of each batch of source records. Defaults to 2048.|2048|integer| +|maxQueueSize|Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. Defaults to 8192, and should always be larger than the maximum batch size.|8192|integer| +|maxQueueSizeInBytes|Maximum size of the queue in bytes for change events read from the database log but not yet recorded or forwarded. Defaults to 0. 
A value of 0 means the feature is disabled|0|integer|
+|messageKeyColumns|A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as message key. Each expression must match the pattern ':', where the table names could be defined as (DB\_NAME.TABLE\_NAME) or (SCHEMA\_NAME.TABLE\_NAME), depending on the specific connector, and the key columns are a comma-separated list of columns representing the custom key. For any table without an explicit key configuration the table's primary key column(s) will be used as message key. Example: dbserver1.inventory.orderlines:orderId,orderLineId;dbserver1.inventory.orders:id||string|
+|minRowCountToStreamResults|The number of rows a table must contain to stream results rather than pull all into memory during snapshots. Defaults to 1,000. Use 0 to stream all results and completely avoid checking the size of each table.|1000|integer|
+|notificationEnabledChannels|List of notification channel names that are enabled.||string|
+|notificationSinkTopicName|The name of the topic for the notifications. This is required in case 'sink' is in the list of enabled channels.||string|
+|pollIntervalMs|Time to wait for new change events to appear after receiving no events, given in milliseconds. Defaults to 500 ms.|500ms|duration|
+|postProcessors|Optional list of post processors. The processors are defined using '.type' config option and configured using options ''||string|
+|provideTransactionMetadata|Enables transaction metadata extraction together with event counting|false|boolean|
+|queryFetchSize|The maximum number of records that should be loaded into memory while streaming. A value of '0' uses the default JDBC fetch size.|0|integer|
+|retriableRestartConnectorWaitMs|Time to wait before restarting the connector after a retriable exception occurs. Defaults to 10000ms.|10s|duration|
+|schemaHistoryInternal|The name of the SchemaHistory class that should be used to store and recover database schema changes. 
The configuration properties for the history are prefixed with the 'schema.history.internal.' string.|io.debezium.storage.kafka.history.KafkaSchemaHistory|string|
+|schemaHistoryInternalFileFilename|The path to the file that will be used to record the database schema history||string|
+|schemaHistoryInternalSkipUnparseableDdl|Controls the action Debezium will take when it meets a DDL statement in the binlog that it cannot parse. By default the connector will stop operating, but by changing the setting it can ignore the statements which it cannot parse. If skipping is enabled then Debezium can miss metadata changes.|false|boolean|
+|schemaHistoryInternalStoreOnlyCapturedDatabasesDdl|Controls what DDL Debezium will store in the database schema history. By default (true) only DDL that manipulates a table from a captured schema/database will be stored. If set to false, then Debezium will store all incoming DDL statements.|true|boolean|
+|schemaHistoryInternalStoreOnlyCapturedTablesDdl|Controls what DDL Debezium will store in the database schema history. By default (false) Debezium will store all incoming DDL statements. If set to true, then only DDL that manipulates a captured table will be stored.|false|boolean|
+|schemaNameAdjustmentMode|Specify how schema names should be adjusted for compatibility with the message converter used by the connector, including: 'avro' replaces the characters that cannot be used in the Avro type name with underscore; 'avro\_unicode' replaces the underscore or characters that cannot be used in the Avro type name with corresponding unicode like \_uxxxx. Note: \_ is an escape sequence like backslash in Java; 'none' does not apply any adjustment (default)|none|string|
+|signalDataCollection|The name of the data collection that is used to send signals/commands to Debezium. Signaling is disabled when not set.||string|
+|signalEnabledChannels|List of channel names that are enabled. 
Source channel is enabled by default|source|string| +|signalPollIntervalMs|Interval for looking for new signals in registered channels, given in milliseconds. Defaults to 5 seconds.|5s|duration| +|skippedOperations|The comma-separated list of operations to skip during streaming, defined as: 'c' for inserts/create; 'u' for updates; 'd' for deletes, 't' for truncates, and 'none' to indicate nothing skipped. By default, only truncate operations will be skipped.|t|string| +|snapshotDelayMs|A delay period before a snapshot will begin, given in milliseconds. Defaults to 0 ms.|0ms|duration| +|snapshotFetchSize|The maximum number of records that should be loaded into memory while performing a snapshot.||integer| +|snapshotIncludeCollectionList|This setting must be set to specify a list of tables/collections whose snapshot must be taken on creating or restarting the connector.||string| +|snapshotLockingMode|Controls how long the connector holds onto the global read lock while it is performing a snapshot. The default is 'minimal', which means the connector holds the global read lock (and thus prevents any updates) for just the initial portion of the snapshot while the database schemas and other metadata are being read. The remaining work in a snapshot involves selecting all rows from each table, and this can be done using the snapshot process' REPEATABLE READ transaction even when the lock is no longer held and other operations are updating the database. However, in some cases it may be desirable to block all writes for the entire duration of the snapshot; in such cases set this property to 'extended'. Using a value of 'none' will prevent the connector from acquiring any table locks during the snapshot process. 
This mode can only be used in combination with snapshot.mode values of 'schema\_only' or 'schema\_only\_recovery' and is only safe to use if no schema changes are happening while the snapshot is taken.|minimal|string| +|snapshotLockTimeoutMs|The maximum number of millis to wait for table locks at the beginning of a snapshot. If locks cannot be acquired in this time frame, the snapshot will be aborted. Defaults to 10 seconds|10s|duration| +|snapshotMaxThreads|The maximum number of threads used to perform the snapshot. Defaults to 1.|1|integer| +|snapshotMode|The criteria for running a snapshot upon startup of the connector. Select one of the following snapshot options: 'when\_needed': On startup, the connector runs a snapshot if one is needed.; 'schema\_only': If the connector does not detect any offsets for the logical server name, it runs a snapshot that captures only the schema (table structures), but not any table data. After the snapshot completes, the connector begins to stream changes from the binlog.; 'schema\_only\_recovery': The connector performs a snapshot that captures only the database schema history. The connector then transitions back to streaming. Use this setting to restore a corrupted or lost database schema history topic. Do not use if the database schema was modified after the connector stopped.; 'initial' (default): If the connector does not detect any offsets for the logical server name, it runs a snapshot that captures the current full state of the configured tables. After the snapshot completes, the connector begins to stream changes from the binlog.; 'initial\_only': The connector performs a snapshot as it does for the 'initial' option, but after the connector completes the snapshot, it stops, and does not stream changes from the binlog.; 'never': The connector does not run a snapshot. Upon first startup, the connector immediately begins reading from the beginning of the binlog. 
The 'never' mode should be used with care, and only when the binlog is known to contain all history.|initial|string|
+|snapshotModeConfigurationBasedSnapshotData|When 'snapshot.mode' is set as configuration\_based, this setting permits you to specify whether the data should be snapshotted or not.|false|boolean|
+|snapshotModeConfigurationBasedSnapshotOnDataError|When 'snapshot.mode' is set as configuration\_based, this setting permits you to specify whether the data should be snapshotted or not in case of error.|false|boolean|
+|snapshotModeConfigurationBasedSnapshotOnSchemaError|When 'snapshot.mode' is set as configuration\_based, this setting permits you to specify whether the schema should be snapshotted or not in case of error.|false|boolean|
+|snapshotModeConfigurationBasedSnapshotSchema|When 'snapshot.mode' is set as configuration\_based, this setting permits you to specify whether the schema should be snapshotted or not.|false|boolean|
+|snapshotModeConfigurationBasedStartStream|When 'snapshot.mode' is set as configuration\_based, this setting permits you to specify whether the stream should start or not after snapshot.|false|boolean|
+|snapshotModeCustomName|When 'snapshot.mode' is set as custom, this setting must be set to specify the name of the custom implementation provided in the 'name()' method. The implementation must implement the 'Snapshotter' interface and is called on each app boot to determine whether to do a snapshot.||string|
+|snapshotQueryMode|Controls the query used during the snapshot|select\_all|string|
+|snapshotQueryModeCustomName|When 'snapshot.query.mode' is set as custom, this setting must be set to specify the name of the custom implementation provided in the 'name()' method. 
The implementation must implement the 'SnapshotterQuery' interface and is called to determine how to build queries during snapshot.||string|
+|snapshotSelectStatementOverrides|This property contains a comma-separated list of fully-qualified tables (DB\_NAME.TABLE\_NAME) or (SCHEMA\_NAME.TABLE\_NAME), depending on the specific connector. Select statements for the individual tables are specified in further configuration properties, one for each table, identified by the id 'snapshot.select.statement.overrides.DB\_NAME.TABLE\_NAME' or 'snapshot.select.statement.overrides.SCHEMA\_NAME.TABLE\_NAME', respectively. The value of those properties is the select statement to use when retrieving data from the specific table during snapshotting. A possible use case for large append-only tables is setting a specific point from which to start (resume) snapshotting, in case a previous snapshotting was interrupted.||string|
+|snapshotTablesOrderByRowCount|Controls the order in which tables are processed in the initial snapshot. A descending value will order the tables by row count descending. An ascending value will order the tables by row count ascending. A value of disabled (the default) will disable ordering by row count.|disabled|string|
+|sourceinfoStructMaker|The name of the SourceInfoStructMaker class that returns SourceInfo schema and struct.|io.debezium.connector.mysql.MySqlSourceInfoStructMaker|string|
+|streamingDelayMs|A delay period after the snapshot is completed and the streaming begins, given in milliseconds. 
Defaults to 0 ms.|0ms|duration|
+|tableExcludeList|A comma-separated list of regular expressions that match the fully-qualified names of tables to be excluded from monitoring||string|
+|tableIgnoreBuiltin|Flag specifying whether built-in tables should be ignored.|true|boolean|
+|tableIncludeList|The tables for which changes are to be captured||string|
+|timePrecisionMode|Time, date and timestamps can be represented with different kinds of precisions, including: 'adaptive\_time\_microseconds': the precision of date and timestamp values is based on the database column's precision, but time fields always use microseconds precision; 'connect': always represents time, date and timestamp values using Kafka Connect's built-in representations for Time, Date, and Timestamp, which uses millisecond precision regardless of the database columns' precision.|adaptive\_time\_microseconds|string|
+|tombstonesOnDelete|Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a delete event (false). Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events pertaining to the given key once the source record got deleted.|false|boolean|
+|topicNamingStrategy|The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, heartbeat event etc.|io.debezium.schema.SchemaTopicNamingStrategy|string|
+|topicPrefix|Topic prefix that identifies and provides a namespace for the particular database server/cluster that is capturing changes. The topic prefix should be unique across all other connectors, since it is used as a prefix for all Kafka topic names that receive events emitted by this connector. 
Only alphanumeric characters, hyphens, dots and underscores are accepted.||string|
+|transactionMetadataFactory|Class to make transaction context \& transaction struct/schemas|io.debezium.pipeline.txmetadata.DefaultTransactionMetadataFactory|string|
+|useNongracefulDisconnect|Whether to use socket.setSoLinger(true, 0) when BinaryLogClient keepalive thread triggers a disconnect for a stale connection.|false|boolean|
diff --git a/camel-debezium-oracle.md b/camel-debezium-oracle.md
new file mode 100644
index 0000000000000000000000000000000000000000..e5752a1d4bf2c10d58b454d6ab1adf6f917f4d4a
--- /dev/null
+++ b/camel-debezium-oracle.md
@@ -0,0 +1,367 @@
+# Debezium-oracle
+
+**Since Camel 3.17**
+
+**Only consumer is supported**
+
+The Debezium oracle component is a wrapper around
+[Debezium](https://debezium.io/) using the [Debezium
+Engine](https://debezium.io/documentation/reference/1.9/development/engine.html),
+which enables Change Data Capture from the Oracle database
+without the need for Kafka or Kafka Connect.
+
+**Note on handling failures:** per the [Debezium Embedded
+Engine](https://debezium.io/documentation/reference/1.9/development/engine.html#_handling_failures)
+documentation, the engine actively records source offsets and
+periodically flushes these offsets to persistent storage. Therefore,
+when the application is restarted or crashes, the engine will resume
+from the last recorded offset. This means that, in normal operation,
+your downstream routes will receive each event exactly once. However, in
+case of an application crash (not having a graceful shutdown), the
+application will resume from the last recorded offset, which may result
+in receiving duplicate events immediately after the restart. Therefore,
+your downstream routes should be tolerant enough of such a case and
+deduplicate events if needed.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-debezium-oracle</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    debezium-oracle:name[?options]
+
+For more information about configuration:
+[https://debezium.io/documentation/reference/1.18/operations/embedded.html#engine-properties](https://debezium.io/documentation/reference/1.18/operations/embedded.html#engine-properties)
+[https://debezium.io/documentation/reference/1.18/connectors/oracleql.html#connector-properties](https://debezium.io/documentation/reference/1.18/connectors/oracleql.html#connector-properties)
+
+# Message body
+
+The message body, if it is not `null` (in case of tombstones), contains
+the state of the row after the event occurred, in `Struct` format or
+`Map` format if you use the included Type Converter from `Struct` to
+`Map`.
+
+Check below for more details.
+
+# Samples
+
+## Consuming events
+
+Here is a basic route that you can use to listen to Debezium events from
+the Oracle connector.
+
+    from("debezium-oracle:dbz-test-1?offsetStorageFileName=/usr/offset-file-1.dat&databaseHostname=localhost&databaseUser=debezium&databasePassword=dbz&databaseServerName=my-app-connector&databaseHistoryFileFilename=/usr/history-file-1.dat")
+        .log("Event received from Debezium : ${body}")
+        .log("    with this identifier ${headers.CamelDebeziumIdentifier}")
+        .log("    with these source metadata ${headers.CamelDebeziumSourceMetadata}")
+        .log("    the event occurred upon this operation '${headers.CamelDebeziumSourceOperation}'")
+        .log("    on this database '${headers.CamelDebeziumSourceMetadata[db]}' and this table '${headers.CamelDebeziumSourceMetadata[table]}'")
+        .log("    with the key ${headers.CamelDebeziumKey}")
+        .log("    the previous value is ${headers.CamelDebeziumBefore}");
+
+By default, the component will emit the events in the body and
+`CamelDebeziumBefore` header as
+[`Struct`](https://kafka.apache.org/22/javadoc/org/apache/kafka/connect/data/Struct.html)
+data type. The reasoning behind this is to preserve the schema
+information in case it is 
needed. However, the component also provides a
+[Type Converter](#manual::type-converter.adoc) that converts from the
+default output type of
+[`Struct`](https://kafka.apache.org/22/javadoc/org/apache/kafka/connect/data/Struct.html)
+to `Map`, in order to leverage Camel’s rich [Data
+Format](#manual::data-format.adoc) types, many of which work out of the
+box with the `Map` data type. To use it, you can either add the `Map.class` type
+when you access the message (e.g.,
+`exchange.getIn().getBody(Map.class)`), or you can always convert the body
+to `Map` from the route builder by adding
+`.convertBodyTo(Map.class)` to your Camel Route DSL after the `from`
+statement.
+
+We mentioned the schema above, which you can use in case you need to
+perform advanced data transformations for which the schema is needed.
+If you choose not to convert your body to `Map`, you can obtain the
+schema information as a
+[`Schema`](https://kafka.apache.org/22/javadoc/org/apache/kafka/connect/data/Schema.html)
+type from the `Struct` like this:
+
+    from("debezium-oracle:[name]?[options]")
+        .process(exchange -> {
+            final Struct bodyValue = exchange.getIn().getBody(Struct.class);
+            final Schema schemaValue = bodyValue.schema();
+
+            log.info("Body value is : {}", bodyValue);
+            log.info("With Schema : {}", schemaValue);
+            log.info("And fields of : {}", schemaValue.fields());
+            log.info("Field name has `{}` type", schemaValue.field("name").schema());
+        });
+
+As mentioned, this component is a thin wrapper around the Debezium Engine.
+Therefore, before using this component in production, you need to
+understand how Debezium works and how its configuration affects the
+expected behavior. This is especially true in regard to [handling
+failures](https://debezium.io/documentation/reference/1.9/operations/embedded.html#_handling_failures). 
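The endpoint URIs in the samples above are simply the connector name followed by query options whose names come from the configuration tables below. As a rough, plain-Java sketch of how such a URI string is assembled (the class name, helper method, and option values here are hypothetical and for illustration only; in a real application Camel parses these options for you):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class DebeziumOracleUriSketch {

    /** Joins a connector name and its query options into an endpoint URI string. */
    static String endpointUri(String name, Map<String, String> options) {
        if (options.isEmpty()) {
            return "debezium-oracle:" + name;
        }
        // Options are rendered as key=value pairs separated by '&', as in the samples above.
        String query = options.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining("&"));
        return "debezium-oracle:" + name + "?" + query;
    }

    public static void main(String[] args) {
        Map<String, String> options = new LinkedHashMap<>();
        // Option names taken from the configuration tables; the values are made up.
        options.put("databaseHostname", "localhost");
        options.put("databaseUser", "debezium");
        options.put("offsetStorageFileName", "/usr/offset-file-1.dat");
        System.out.println(endpointUri("dbz-test-1", options));
        // prints: debezium-oracle:dbz-test-1?databaseHostname=localhost&databaseUser=debezium&offsetStorageFileName=/usr/offset-file-1.dat
    }
}
```

Using a `LinkedHashMap` keeps the options in insertion order, so the generated URI matches what you would write by hand.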
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|additionalProperties|Additional properties for debezium components in case they can't be set directly on the camel configurations (e.g: setting Kafka Connect properties needed by Debezium engine, for example setting KafkaOffsetBackingStore), the properties have to be prefixed with additionalProperties.. E.g: additionalProperties.transactional.id=12345\&additionalProperties.schema.registry.url=http://localhost:8811/avro||object|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|configuration|Allow pre-configured Configurations to be set.||object|
+|internalKeyConverter|The Converter class that should be used to serialize and deserialize key data for offsets. The default is JSON converter.|org.apache.kafka.connect.json.JsonConverter|string|
+|internalValueConverter|The Converter class that should be used to serialize and deserialize value data for offsets. The default is JSON converter.|org.apache.kafka.connect.json.JsonConverter|string|
+|offsetCommitPolicy|The name of the Java class of the commit policy. 
It defines when offsets commit has to be triggered based on the number of events processed and the time elapsed since the last commit. This class must implement the interface 'OffsetCommitPolicy'. The default is a periodic commit policy based upon time intervals.||string| +|offsetCommitTimeoutMs|Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. The default is 5 seconds.|5000|duration| +|offsetFlushIntervalMs|Interval at which to try committing offsets. The default is 1 minute.|60000|duration| +|offsetStorage|The name of the Java class that is responsible for persistence of connector offsets.|org.apache.kafka.connect.storage.FileOffsetBackingStore|string| +|offsetStorageFileName|Path to file where offsets are to be stored. Required when offset.storage is set to the FileOffsetBackingStore.||string| +|offsetStoragePartitions|The number of partitions used when creating the offset storage topic. Required when offset.storage is set to the 'KafkaOffsetBackingStore'.||integer| +|offsetStorageReplicationFactor|Replication factor used when creating the offset storage topic. Required when offset.storage is set to the KafkaOffsetBackingStore||integer| +|offsetStorageTopic|The name of the Kafka topic where offsets are to be stored. Required when offset.storage is set to the KafkaOffsetBackingStore.||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|archiveDestinationName|Sets the specific archive log destination as the source for reading archive logs. When not set, the connector will automatically select the first LOCAL and VALID destination.||string| +|archiveLogHours|The number of hours in the past from SYSDATE to mine archive logs. Using 0 mines all available archive logs.|0|integer| +|binaryHandlingMode|Specify how binary (blob, binary, etc.) columns should be represented in change events, including: 'bytes' represents binary data as byte array (default); 'base64' represents binary data as base64-encoded string; 'base64-url-safe' represents binary data as base64-url-safe-encoded string; 'hex' represents binary data as hex-encoded (base16) string|bytes|string| +|columnExcludeList|Regular expressions matching columns to exclude from change events||string| +|columnIncludeList|Regular expressions matching columns to include in change events||string| +|columnPropagateSourceType|A comma-separated list of regular expressions matching fully-qualified names of columns that adds the column's original type and original length as parameters to the corresponding field schemas in the emitted change records.||string| +|converters|Optional list of custom converters that would be used instead of default ones. The converters are defined using '.type' config option and configured using options '.'||string| +|customMetricTags|The custom metric tags will accept key-value pairs to customize the MBean object name, which should be appended to the end of the regular name; each key would represent a tag for the MBean object name, and the corresponding value would be the value of that tag. For example: k1=v1,k2=v2||string| +|databaseConnectionAdapter|The adapter to use when capturing changes from the database.
Options include: 'logminer': (the default) to capture changes using native Oracle LogMiner; 'xstream' to capture changes using Oracle XStreams|LogMiner|string| +|databaseDbname|The name of the database from which the connector should capture changes||string| +|databaseHostname|Resolvable hostname or IP address of the database server.||string| +|databaseOutServerName|Name of the XStream Out server to connect to.||string| +|databasePassword|Password of the database user to be used when connecting to the database.||string| +|databasePdbName|Name of the pluggable database when working with a multi-tenant set-up. The CDB name must be given via database.dbname in this case.||string| +|databasePort|Port of the database server.|1528|integer| +|databaseQueryTimeoutMs|Time to wait for a query to execute, given in milliseconds. Defaults to 600 seconds (600,000 ms); zero means there is no limit.|10m|duration| +|databaseUrl|Complete JDBC URL as an alternative to specifying hostname, port and database provided as a way to support alternative connection scenarios.||string| +|databaseUser|Name of the database user to be used when connecting to the database.||string| +|datatypePropagateSourceType|A comma-separated list of regular expressions matching the database-specific data type names that adds the data type's original type and original length as parameters to the corresponding field schemas in the emitted change records.||string| +|decimalHandlingMode|Specify how DECIMAL and NUMERIC columns should be represented in change events, including: 'precise' (the default) uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect's 'org.apache.kafka.connect.data.Decimal' type; 'string' uses string to represent values; 'double' represents values using Java's 'double', which may not offer the precision but will be far easier to use in consumers.|precise|string| +|errorsMaxRetries|The maximum number of retries on 
connection errors before failing (-1 = no limit, 0 = disabled, > 0 = number of retries).|-1|integer| +|eventProcessingFailureHandlingMode|Specify how failures during processing of events (i.e. when encountering a corrupted event) should be handled, including: 'fail' (the default) an exception indicating the problematic event and its position is raised, causing the connector to be stopped; 'warn' the problematic event and its position will be logged and the event will be skipped; 'ignore' the problematic event will be skipped.|fail|string| +|heartbeatActionQuery|The query executed with every heartbeat.||string| +|heartbeatIntervalMs|Length of an interval in milliseconds at which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default.|0ms|duration| +|heartbeatTopicsPrefix|The prefix that is used to name heartbeat topics. Defaults to \_\_debezium-heartbeat.|\_\_debezium-heartbeat|string| +|includeSchemaChanges|Whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID. Each schema change will be recorded using a key that contains the database name and whose value includes a logical description of the new schema and optionally the DDL statement(s). The default is 'true'. This is independent of how the connector internally records database schema history.|true|boolean| +|includeSchemaComments|Whether the connector parses table and column comments into the metadata object. Note: enabling this option has implications on memory usage. The number and size of ColumnImpl objects is what largely impacts how much memory is consumed by the Debezium connectors, and adding a String to each of them can potentially be quite heavy.
The default is 'false'.|false|boolean| +|incrementalSnapshotWatermarkingStrategy|Specify the strategy used for watermarking during an incremental snapshot: 'insert\_insert' both the open and close signals are written into the signal data collection (default); 'insert\_delete' only the open signal is written to the signal data collection, and the close will delete the corresponding open signal;|INSERT\_INSERT|string| +|intervalHandlingMode|Specify how INTERVAL columns should be represented in change events, including: 'string' represents values as an exact ISO formatted string; 'numeric' (default) represents values using the inexact conversion into microseconds|numeric|string| +|lobEnabled|When set to 'false', the default, LOB fields will not be captured nor emitted. When set to 'true', the connector will capture LOB fields and emit changes for those fields like any other column type.|false|boolean| +|logMiningArchiveLogOnlyMode|When set to 'false', the default, the connector will mine both archive logs and redo logs to emit change events. When set to 'true', the connector will only mine archive logs. There are circumstances where it's advantageous to only mine archive logs and accept latency in event emission due to frequent revolving redo logs.|false|boolean| +|logMiningArchiveLogOnlyScnPollIntervalMs|The interval in milliseconds to wait between polls checking to see if the SCN is in the archive logs.|10s|duration| +|logMiningBatchSizeDefault|The starting SCN interval size that the connector will use for reading data from redo/archive logs.|20000|integer| +|logMiningBatchSizeMax|The maximum SCN interval size that this connector will use when reading from redo/archive logs.|100000|integer| +|logMiningBatchSizeMin|The minimum SCN interval size that this connector will try to read from redo/archive logs.
Active batch size will be also increased/decreased by this amount for tuning connector throughput when needed.|1000|integer| +|logMiningBufferDropOnStop|When set to true the underlying buffer cache is not retained when the connector is stopped. When set to false (the default), the buffer cache is retained across restarts.|false|boolean| +|logMiningBufferInfinispanCacheEvents|Specifies the XML configuration for the Infinispan 'events' cache||string| +|logMiningBufferInfinispanCacheGlobal|Specifies the XML configuration for the Infinispan 'global' configuration||string| +|logMiningBufferInfinispanCacheProcessedTransactions|Specifies the XML configuration for the Infinispan 'processed-transactions' cache||string| +|logMiningBufferInfinispanCacheSchemaChanges|Specifies the XML configuration for the Infinispan 'schema-changes' cache||string| +|logMiningBufferInfinispanCacheTransactions|Specifies the XML configuration for the Infinispan 'transactions' cache||string| +|logMiningBufferTransactionEventsThreshold|The number of events a transaction can include before the transaction is discarded. This is useful for managing buffer memory and/or space when dealing with very large transactions. Defaults to 0, meaning that no threshold is applied and transactions can have unlimited events.|0|integer| +|logMiningBufferType|The buffer type controls how the connector manages buffering transaction data. memory - Uses the JVM process' heap to buffer all transaction data. infinispan\_embedded - This option uses an embedded Infinispan cache to buffer transaction data and persist it to disk. 
infinispan\_remote - This option uses a remote Infinispan cluster to buffer transaction data and persist it to disk.|memory|string| +|logMiningFlushTableName|The name of the flush table used by the connector, defaults to LOG\_MINING\_FLUSH.|LOG\_MINING\_FLUSH|string| +|logMiningIncludeRedoSql|When enabled, the transaction log REDO SQL will be included in the source information block.|false|boolean| +|logMiningQueryFilterMode|Specifies how the filter configuration is applied to the LogMiner database query. none - The query does not apply any schema or table filters, all filtering is at runtime by the connector. in - The query uses SQL in-clause expressions to specify the schema or table filters. regex - The query uses Oracle REGEXP\_LIKE expressions to specify the schema or table filters.|none|string| +|logMiningRestartConnection|Debezium opens a database connection and keeps that connection open throughout the entire streaming phase. In some situations, this can lead to excessive SGA memory usage. By setting this option to 'true' (the default is 'false'), the connector will close and re-open a database connection after every detected log switch or if the log.mining.session.max.ms has been reached.|false|boolean| +|logMiningScnGapDetectionGapSizeMin|Used for SCN gap detection, if the difference between current SCN and previous end SCN is bigger than this value, and the time difference of current SCN and previous end SCN is smaller than log.mining.scn.gap.detection.time.interval.max.ms, consider it a SCN gap.|1000000|integer| +|logMiningScnGapDetectionTimeIntervalMaxMs|Used for SCN gap detection, if the difference between current SCN and previous end SCN is bigger than log.mining.scn.gap.detection.gap.size.min, and the time difference of current SCN and previous end SCN is smaller than this value, consider it a SCN gap.|20s|duration| +|logMiningSessionMaxMs|The maximum number of milliseconds that a LogMiner session lives for before being restarted. 
Defaults to 0 (indefinite until a log switch occurs)|0ms|duration| +|logMiningSleepTimeDefaultMs|The amount of time that the connector will sleep after reading data from redo/archive logs and before starting reading data again. Value is in milliseconds.|1s|duration| +|logMiningSleepTimeIncrementMs|The maximum amount of time that the connector will use to tune the optimal sleep time when reading data from LogMiner. Value is in milliseconds.|200ms|duration| +|logMiningSleepTimeMaxMs|The maximum amount of time that the connector will sleep after reading data from redo/archive logs and before starting reading data again. Value is in milliseconds.|3s|duration| +|logMiningSleepTimeMinMs|The minimum amount of time that the connector will sleep after reading data from redo/archive logs and before starting reading data again. Value is in milliseconds.|0ms|duration| +|logMiningStrategy|There are two strategies: the online catalog, with faster mining but no captured DDL, and the redo log catalog, with the data dictionary loaded into the REDO LOG files so that DDL changes can be captured|redo\_log\_catalog|string| +|logMiningTransactionRetentionMs|Duration in milliseconds to keep long running transactions in the transaction buffer between log mining sessions. By default, all transactions are retained.|0ms|duration| +|logMiningUsernameExcludeList|Comma-separated list of usernames to exclude from the LogMiner query.||string| +|logMiningUsernameIncludeList|Comma-separated list of usernames to include in the LogMiner query.||string| +|maxBatchSize|Maximum size of each batch of source records. Defaults to 2048.|2048|integer| +|maxQueueSize|Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. Defaults to 8192, and should always be larger than the maximum batch size.|8192|integer| +|maxQueueSizeInBytes|Maximum size of the queue in bytes for change events read from the database log but not yet recorded or forwarded. Defaults to 0.
A value of 0 means the feature is not enabled.|0|integer| +|messageKeyColumns|A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as message key. Each expression must match the pattern '\<fully-qualified table name\>:\<key columns\>', where the table names could be defined as (DB\_NAME.TABLE\_NAME) or (SCHEMA\_NAME.TABLE\_NAME), depending on the specific connector, and the key columns are a comma-separated list of columns representing the custom key. For any table without an explicit key configuration the table's primary key column(s) will be used as message key. Example: dbserver1.inventory.orderlines:orderId,orderLineId;dbserver1.inventory.orders:id||string| +|notificationEnabledChannels|List of notification channel names that are enabled.||string| +|notificationSinkTopicName|The name of the topic for the notifications. This is required in case 'sink' is in the list of enabled channels||string| +|openlogreplicatorHost|The hostname of the OpenLogReplicator network service||string| +|openlogreplicatorPort|The port of the OpenLogReplicator network service||integer| +|openlogreplicatorSource|The configured logical source name in the OpenLogReplicator configuration that is to stream changes||string| +|pollIntervalMs|Time to wait for new change events to appear after receiving no events, given in milliseconds. Defaults to 500 ms.|500ms|duration| +|postProcessors|Optional list of post processors. The processors are defined using '.type' config option and configured using options ''||string| +|provideTransactionMetadata|Enables transaction metadata extraction together with event counting|false|boolean| +|queryFetchSize|The maximum number of records that should be loaded into memory while streaming. A value of '0' uses the default JDBC fetch size, defaults to '2000'.|10000|integer| +|racNodes|A comma-separated list of RAC node hostnames or IP addresses||string| +|retriableRestartConnectorWaitMs|Time to wait before restarting the connector after a retriable exception occurs.
Defaults to 10000ms.|10s|duration| +|schemaHistoryInternal|The name of the SchemaHistory class that should be used to store and recover database schema changes. The configuration properties for the history are prefixed with the 'schema.history.internal.' string.|io.debezium.storage.kafka.history.KafkaSchemaHistory|string| +|schemaHistoryInternalFileFilename|The path to the file that will be used to record the database schema history||string| +|schemaHistoryInternalSkipUnparseableDdl|Controls the action Debezium will take when it encounters a DDL statement in the log that it cannot parse. By default the connector will stop operating, but by changing the setting it can skip the statements it cannot parse. If skipping is enabled then Debezium can miss metadata changes.|false|boolean| +|schemaHistoryInternalStoreOnlyCapturedDatabasesDdl|Controls which DDL Debezium will store in the database schema history. By default (true) only DDL that manipulates a table from a captured schema/database will be stored. If set to false, then Debezium will store all incoming DDL statements.|false|boolean| +|schemaHistoryInternalStoreOnlyCapturedTablesDdl|Controls which DDL Debezium will store in the database schema history. By default (false) Debezium will store all incoming DDL statements. If set to true, then only DDL that manipulates a captured table will be stored.|false|boolean| +|schemaNameAdjustmentMode|Specify how schema names should be adjusted for compatibility with the message converter used by the connector, including: 'avro' replaces the characters that cannot be used in the Avro type name with underscore; 'avro\_unicode' replaces the underscore or characters that cannot be used in the Avro type name with corresponding unicode like \_uxxxx. Note: \_ is an escape sequence like backslash in Java; 'none' does not apply any adjustment (default)|none|string| +|signalDataCollection|The name of the data collection that is used to send signals/commands to Debezium.
Signaling is disabled when not set.||string| +|signalEnabledChannels|List of channel names that are enabled. The source channel is enabled by default.|source|string| +|signalPollIntervalMs|Interval for looking for new signals in registered channels, given in milliseconds. Defaults to 5 seconds.|5s|duration| +|skippedOperations|The comma-separated list of operations to skip during streaming, defined as: 'c' for inserts/create; 'u' for updates; 'd' for deletes, 't' for truncates, and 'none' to indicate nothing skipped. By default, only truncate operations will be skipped.|t|string| +|snapshotDatabaseErrorsMaxRetries|The number of attempts to retry database errors during snapshots before failing.|0|integer| +|snapshotDelayMs|A delay period before a snapshot will begin, given in milliseconds. Defaults to 0 ms.|0ms|duration| +|snapshotFetchSize|The maximum number of records that should be loaded into memory while performing a snapshot.||integer| +|snapshotIncludeCollectionList|This setting must be set to specify a list of tables/collections whose snapshot must be taken on creating or restarting the connector.||string| +|snapshotLockingMode|Controls how the connector holds locks on tables while performing the schema snapshot. The default is 'shared', which means the connector will hold a table lock that prevents exclusive table access for just the initial portion of the snapshot while the database schemas and other metadata are being read. The remaining work in a snapshot involves selecting all rows from each table, and this is done using a flashback query that requires no locks. However, in some cases it may be desirable to avoid locks entirely, which can be done by specifying 'none'. This mode is only safe to use if no schema changes are happening while the snapshot is taken.|shared|string| +|snapshotLockTimeoutMs|The maximum number of milliseconds to wait for table locks at the beginning of a snapshot. If locks cannot be acquired in this time frame, the snapshot will be aborted.
Defaults to 10 seconds|10s|duration| +|snapshotMaxThreads|The maximum number of threads used to perform the snapshot. Defaults to 1.|1|integer| +|snapshotMode|The criteria for running a snapshot upon startup of the connector. Select one of the following snapshot options: 'always': The connector runs a snapshot every time that it starts. After the snapshot completes, the connector begins to stream changes from the redo logs.; 'initial' (default): If the connector does not detect any offsets for the logical server name, it runs a snapshot that captures the current full state of the configured tables. After the snapshot completes, the connector begins to stream changes from the redo logs. 'initial\_only': The connector performs a snapshot as it does for the 'initial' option, but after the connector completes the snapshot, it stops, and does not stream changes from the redo logs.; 'schema\_only': If the connector does not detect any offsets for the logical server name, it runs a snapshot that captures only the schema (table structures), but not any table data. After the snapshot completes, the connector begins to stream changes from the redo logs.; 'schema\_only\_recovery': The connector performs a snapshot that captures only the database schema history. The connector then transitions to streaming from the redo logs. Use this setting to restore a corrupted or lost database schema history topic. 
Do not use if the database schema was modified after the connector stopped.|initial|string| +|snapshotModeConfigurationBasedSnapshotData|When 'snapshot.mode' is set to configuration\_based, this setting permits specifying whether the data should be snapshotted or not.|false|boolean| +|snapshotModeConfigurationBasedSnapshotOnDataError|When 'snapshot.mode' is set to configuration\_based, this setting permits specifying whether the data should be snapshotted or not in case of error.|false|boolean| +|snapshotModeConfigurationBasedSnapshotOnSchemaError|When 'snapshot.mode' is set to configuration\_based, this setting permits specifying whether the schema should be snapshotted or not in case of error.|false|boolean| +|snapshotModeConfigurationBasedSnapshotSchema|When 'snapshot.mode' is set to configuration\_based, this setting permits specifying whether the schema should be snapshotted or not.|false|boolean| +|snapshotModeConfigurationBasedStartStream|When 'snapshot.mode' is set to configuration\_based, this setting permits specifying whether the stream should start or not after the snapshot.|false|boolean| +|snapshotModeCustomName|When 'snapshot.mode' is set to custom, this setting must be set to specify the name of the custom implementation, as provided by its 'name()' method. The implementation must implement the 'Snapshotter' interface and is called on each application boot to determine whether to do a snapshot.||string| +|snapshotSelectStatementOverrides|This property contains a comma-separated list of fully-qualified tables (DB\_NAME.TABLE\_NAME) or (SCHEMA\_NAME.TABLE\_NAME), depending on the specific connectors. Select statements for the individual tables are specified in further configuration properties, one for each table, identified by the id 'snapshot.select.statement.overrides.DB\_NAME.TABLE\_NAME' or 'snapshot.select.statement.overrides.SCHEMA\_NAME.TABLE\_NAME', respectively.
The value of those properties is the select statement to use when retrieving data from the specific table during snapshotting. A possible use case for large append-only tables is setting a specific point at which to start (resume) snapshotting, in case a previous snapshotting was interrupted.||string| +|snapshotTablesOrderByRowCount|Controls the order in which tables are processed in the initial snapshot. A descending value will order the tables by row count descending. An ascending value will order the tables by row count ascending. A value of disabled (the default) will disable ordering by row count.|disabled|string| +|sourceinfoStructMaker|The name of the SourceInfoStructMaker class that returns the SourceInfo schema and struct.|io.debezium.connector.oracle.OracleSourceInfoStructMaker|string| +|streamingDelayMs|A delay period after the snapshot is completed and the streaming begins, given in milliseconds. Defaults to 0 ms.|0ms|duration| +|tableExcludeList|A comma-separated list of regular expressions that match the fully-qualified names of tables to be excluded from monitoring||string| +|tableIncludeList|The tables for which changes are to be captured||string| +|timePrecisionMode|Time, date, and timestamps can be represented with different kinds of precisions, including: 'adaptive' (the default) bases the precision of time, date, and timestamp values on the database column's precision; 'adaptive\_time\_microseconds' like 'adaptive' mode, but TIME fields always use microseconds precision; 'connect' always represents time, date, and timestamp values using Kafka Connect's built-in representations for Time, Date, and Timestamp, which uses millisecond precision regardless of the database columns' precision.|adaptive|string| +|tombstonesOnDelete|Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a delete event (false).
Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events pertaining to the given key once the source record got deleted.|false|boolean| +|topicNamingStrategy|The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, heartbeat event etc.|io.debezium.schema.SchemaTopicNamingStrategy|string| +|topicPrefix|Topic prefix that identifies and provides a namespace for the particular database server/cluster is capturing changes. The topic prefix should be unique across all other connectors, since it is used as a prefix for all Kafka topic names that receive events emitted by this connector. Only alphanumeric characters, hyphens, dots and underscores must be accepted.||string| +|transactionMetadataFactory|Class to make transaction context \& transaction struct/schemas|io.debezium.pipeline.txmetadata.DefaultTransactionMetadataFactory|string| +|unavailableValuePlaceholder|Specify the constant that will be provided by Debezium to indicate that the original value is unavailable and not provided by the database.|\_\_debezium\_unavailable\_value|string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|name|Unique name for the connector. Attempting to register again with the same name will fail.||string| +|additionalProperties|Additional properties for debezium components in case they can't be set directly on the camel configurations (e.g: setting Kafka Connect properties needed by Debezium engine, for example setting KafkaOffsetBackingStore), the properties have to be prefixed with additionalProperties.. E.g: additionalProperties.transactional.id=12345\&additionalProperties.schema.registry.url=http://localhost:8811/avro||object| +|internalKeyConverter|The Converter class that should be used to serialize and deserialize key data for offsets. 
The default is JSON converter.|org.apache.kafka.connect.json.JsonConverter|string| +|internalValueConverter|The Converter class that should be used to serialize and deserialize value data for offsets. The default is JSON converter.|org.apache.kafka.connect.json.JsonConverter|string| +|offsetCommitPolicy|The name of the Java class of the commit policy. It defines when offsets commit has to be triggered based on the number of events processed and the time elapsed since the last commit. This class must implement the interface 'OffsetCommitPolicy'. The default is a periodic commit policy based upon time intervals.||string| +|offsetCommitTimeoutMs|Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. The default is 5 seconds.|5000|duration| +|offsetFlushIntervalMs|Interval at which to try committing offsets. The default is 1 minute.|60000|duration| +|offsetStorage|The name of the Java class that is responsible for persistence of connector offsets.|org.apache.kafka.connect.storage.FileOffsetBackingStore|string| +|offsetStorageFileName|Path to file where offsets are to be stored. Required when offset.storage is set to the FileOffsetBackingStore.||string| +|offsetStoragePartitions|The number of partitions used when creating the offset storage topic. Required when offset.storage is set to the 'KafkaOffsetBackingStore'.||integer| +|offsetStorageReplicationFactor|Replication factor used when creating the offset storage topic. Required when offset.storage is set to the KafkaOffsetBackingStore||integer| +|offsetStorageTopic|The name of the Kafka topic where offsets are to be stored. 
Required when offset.storage is set to the KafkaOffsetBackingStore.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means that any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice that if the option bridgeErrorHandler is enabled, then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|archiveDestinationName|Sets the specific archive log destination as the source for reading archive logs. When not set, the connector will automatically select the first LOCAL and VALID destination.||string| +|archiveLogHours|The number of hours in the past from SYSDATE to mine archive logs. Using 0 mines all available archive logs.|0|integer| +|binaryHandlingMode|Specify how binary (blob, binary, etc.)
columns should be represented in change events, including: 'bytes' represents binary data as byte array (default); 'base64' represents binary data as base64-encoded string; 'base64-url-safe' represents binary data as base64-url-safe-encoded string; 'hex' represents binary data as hex-encoded (base16) string|bytes|string| +|columnExcludeList|Regular expressions matching columns to exclude from change events||string| +|columnIncludeList|Regular expressions matching columns to include in change events||string| +|columnPropagateSourceType|A comma-separated list of regular expressions matching fully-qualified names of columns that adds the column's original type and original length as parameters to the corresponding field schemas in the emitted change records.||string| +|converters|Optional list of custom converters that would be used instead of default ones. The converters are defined using '.type' config option and configured using options '.'||string| +|customMetricTags|The custom metric tags will accept key-value pairs to customize the MBean object name, which should be appended to the end of the regular name; each key would represent a tag for the MBean object name, and the corresponding value would be the value of that tag. For example: k1=v1,k2=v2||string| +|databaseConnectionAdapter|The adapter to use when capturing changes from the database. Options include: 'logminer': (the default) to capture changes using native Oracle LogMiner; 'xstream' to capture changes using Oracle XStreams|LogMiner|string| +|databaseDbname|The name of the database from which the connector should capture changes||string| +|databaseHostname|Resolvable hostname or IP address of the database server.||string| +|databaseOutServerName|Name of the XStream Out server to connect to.||string| +|databasePassword|Password of the database user to be used when connecting to the database.||string| +|databasePdbName|Name of the pluggable database when working with a multi-tenant set-up.
The CDB name must be given via database.dbname in this case.||string| +|databasePort|Port of the database server.|1528|integer| +|databaseQueryTimeoutMs|Time to wait for a query to execute, given in milliseconds. Defaults to 600 seconds (600,000 ms); zero means there is no limit.|10m|duration| +|databaseUrl|Complete JDBC URL as an alternative to specifying hostname, port and database, provided as a way to support alternative connection scenarios.||string| +|databaseUser|Name of the database user to be used when connecting to the database.||string| +|datatypePropagateSourceType|A comma-separated list of regular expressions matching the database-specific data type names that adds the data type's original type and original length as parameters to the corresponding field schemas in the emitted change records.||string| +|decimalHandlingMode|Specify how DECIMAL and NUMERIC columns should be represented in change events, including: 'precise' (the default) uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect's 'org.apache.kafka.connect.data.Decimal' type; 'string' uses string to represent values; 'double' represents values using Java's 'double', which may not offer the same precision but is far easier to use in consumers.|precise|string| +|errorsMaxRetries|The maximum number of retries on connection errors before failing (-1 = no limit, 0 = disabled, > 0 = number of retries).|-1|integer| +|eventProcessingFailureHandlingMode|Specify how failures during processing of events (i.e. 
when encountering a corrupted event) should be handled, including: 'fail' (the default) an exception indicating the problematic event and its position is raised, causing the connector to be stopped; 'warn' the problematic event and its position will be logged and the event will be skipped; 'ignore' the problematic event will be skipped.|fail|string| +|heartbeatActionQuery|The query executed with every heartbeat.||string| +|heartbeatIntervalMs|Length of the interval in milliseconds in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default.|0ms|duration| +|heartbeatTopicsPrefix|The prefix that is used to name heartbeat topics. Defaults to \_\_debezium-heartbeat.|\_\_debezium-heartbeat|string| +|includeSchemaChanges|Whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID. Each schema change will be recorded using a key that contains the database name and whose value includes a logical description of the new schema and optionally the DDL statement(s). The default is 'true'. This is independent of how the connector internally records database schema history.|true|boolean| +|includeSchemaComments|Whether the connector parses table and column comments into metadata objects. Note: Enabling this option has implications on memory usage. The number and size of ColumnImpl objects is what largely impacts how much memory is consumed by the Debezium connectors, and adding a String to each of them can potentially be quite heavy. 
The default is 'false'.|false|boolean| +|incrementalSnapshotWatermarkingStrategy|Specify the strategy used for watermarking during an incremental snapshot: 'insert\_insert' both the open and close signals are written into the signal data collection (default); 'insert\_delete' only the open signal is written to the signal data collection, and the close deletes the corresponding open signal;|INSERT\_INSERT|string| +|intervalHandlingMode|Specify how INTERVAL columns should be represented in change events, including: 'string' represents values as an exact ISO formatted string; 'numeric' (default) represents values using the inexact conversion into microseconds|numeric|string| +|lobEnabled|When set to 'false', the default, LOB fields will not be captured nor emitted. When set to 'true', the connector will capture LOB fields and emit changes for those fields like any other column type.|false|boolean| +|logMiningArchiveLogOnlyMode|When set to 'false', the default, the connector will mine both archive logs and redo logs to emit change events. When set to 'true', the connector will only mine archive logs. There are circumstances where it's advantageous to only mine archive logs and accept latency in event emission due to frequently rotating redo logs.|false|boolean| +|logMiningArchiveLogOnlyScnPollIntervalMs|The interval in milliseconds to wait between polls checking to see if the SCN is in the archive logs.|10s|duration| +|logMiningBatchSizeDefault|The starting SCN interval size that the connector will use for reading data from redo/archive logs.|20000|integer| +|logMiningBatchSizeMax|The maximum SCN interval size that this connector will use when reading from redo/archive logs.|100000|integer| +|logMiningBatchSizeMin|The minimum SCN interval size that this connector will try to read from redo/archive logs. 
Active batch size will be also increased/decreased by this amount for tuning connector throughput when needed.|1000|integer| +|logMiningBufferDropOnStop|When set to true the underlying buffer cache is not retained when the connector is stopped. When set to false (the default), the buffer cache is retained across restarts.|false|boolean| +|logMiningBufferInfinispanCacheEvents|Specifies the XML configuration for the Infinispan 'events' cache||string| +|logMiningBufferInfinispanCacheGlobal|Specifies the XML configuration for the Infinispan 'global' configuration||string| +|logMiningBufferInfinispanCacheProcessedTransactions|Specifies the XML configuration for the Infinispan 'processed-transactions' cache||string| +|logMiningBufferInfinispanCacheSchemaChanges|Specifies the XML configuration for the Infinispan 'schema-changes' cache||string| +|logMiningBufferInfinispanCacheTransactions|Specifies the XML configuration for the Infinispan 'transactions' cache||string| +|logMiningBufferTransactionEventsThreshold|The number of events a transaction can include before the transaction is discarded. This is useful for managing buffer memory and/or space when dealing with very large transactions. Defaults to 0, meaning that no threshold is applied and transactions can have unlimited events.|0|integer| +|logMiningBufferType|The buffer type controls how the connector manages buffering transaction data. memory - Uses the JVM process' heap to buffer all transaction data. infinispan\_embedded - This option uses an embedded Infinispan cache to buffer transaction data and persist it to disk. 
infinispan\_remote - This option uses a remote Infinispan cluster to buffer transaction data and persist it to disk.|memory|string| +|logMiningFlushTableName|The name of the flush table used by the connector, defaults to LOG\_MINING\_FLUSH.|LOG\_MINING\_FLUSH|string| +|logMiningIncludeRedoSql|When enabled, the transaction log REDO SQL will be included in the source information block.|false|boolean| +|logMiningQueryFilterMode|Specifies how the filter configuration is applied to the LogMiner database query. none - The query does not apply any schema or table filters, all filtering is at runtime by the connector. in - The query uses SQL in-clause expressions to specify the schema or table filters. regex - The query uses Oracle REGEXP\_LIKE expressions to specify the schema or table filters.|none|string| +|logMiningRestartConnection|Debezium opens a database connection and keeps that connection open throughout the entire streaming phase. In some situations, this can lead to excessive SGA memory usage. By setting this option to 'true' (the default is 'false'), the connector will close and re-open a database connection after every detected log switch or if the log.mining.session.max.ms has been reached.|false|boolean| +|logMiningScnGapDetectionGapSizeMin|Used for SCN gap detection, if the difference between current SCN and previous end SCN is bigger than this value, and the time difference of current SCN and previous end SCN is smaller than log.mining.scn.gap.detection.time.interval.max.ms, consider it a SCN gap.|1000000|integer| +|logMiningScnGapDetectionTimeIntervalMaxMs|Used for SCN gap detection, if the difference between current SCN and previous end SCN is bigger than log.mining.scn.gap.detection.gap.size.min, and the time difference of current SCN and previous end SCN is smaller than this value, consider it a SCN gap.|20s|duration| +|logMiningSessionMaxMs|The maximum number of milliseconds that a LogMiner session lives for before being restarted. 
Defaults to 0 (indefinite until a log switch occurs)|0ms|duration| +|logMiningSleepTimeDefaultMs|The amount of time that the connector will sleep after reading data from redo/archive logs and before starting reading data again. Value is in milliseconds.|1s|duration| +|logMiningSleepTimeIncrementMs|The maximum amount of time that the connector will use to tune the optimal sleep time when reading data from LogMiner. Value is in milliseconds.|200ms|duration| +|logMiningSleepTimeMaxMs|The maximum amount of time that the connector will sleep after reading data from redo/archive logs and before starting reading data again. Value is in milliseconds.|3s|duration| +|logMiningSleepTimeMinMs|The minimum amount of time that the connector will sleep after reading data from redo/archive logs and before starting reading data again. Value is in milliseconds.|0ms|duration| +|logMiningStrategy|There are strategies: Online catalog with faster mining but no captured DDL. Another - with data dictionary loaded into REDO LOG files|redo\_log\_catalog|string| +|logMiningTransactionRetentionMs|Duration in milliseconds to keep long running transactions in transaction buffer between log mining sessions. By default, all transactions are retained.|0ms|duration| +|logMiningUsernameExcludeList|Comma separated list of usernames to exclude from LogMiner query.||string| +|logMiningUsernameIncludeList|Comma separated list of usernames to include from LogMiner query.||string| +|maxBatchSize|Maximum size of each batch of source records. Defaults to 2048.|2048|integer| +|maxQueueSize|Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. Defaults to 8192, and should always be larger than the maximum batch size.|8192|integer| +|maxQueueSizeInBytes|Maximum size of the queue in bytes for change events read from the database log but not yet recorded or forwarded. Defaults to 0. 
A value of 0 means the feature is not enabled|0|integer| +|messageKeyColumns|A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as message key. Each expression must match the pattern ':', where the table names could be defined as (DB\_NAME.TABLE\_NAME) or (SCHEMA\_NAME.TABLE\_NAME), depending on the specific connector, and the key columns are a comma-separated list of columns representing the custom key. For any table without an explicit key configuration the table's primary key column(s) will be used as message key. Example: dbserver1.inventory.orderlines:orderId,orderLineId;dbserver1.inventory.orders:id||string| +|notificationEnabledChannels|List of notification channel names that are enabled.||string| +|notificationSinkTopicName|The name of the topic for the notifications. This is required in case 'sink' is in the list of enabled channels||string| +|openlogreplicatorHost|The hostname of the OpenLogReplicator network service||string| +|openlogreplicatorPort|The port of the OpenLogReplicator network service||integer| +|openlogreplicatorSource|The configured logical source name in the OpenLogReplicator configuration that is to stream changes||string| +|pollIntervalMs|Time to wait for new change events to appear after receiving no events, given in milliseconds. Defaults to 500 ms.|500ms|duration| +|postProcessors|Optional list of post processors. The processors are defined using '.type' config option and configured using options ''||string| +|provideTransactionMetadata|Enables transaction metadata extraction together with event counting|false|boolean| +|queryFetchSize|The maximum number of records that should be loaded into memory while streaming. A value of '0' uses the default JDBC fetch size, defaults to '2000'.|10000|integer| +|racNodes|A comma-separated list of RAC node hostnames or IP addresses||string| +|retriableRestartConnectorWaitMs|Time to wait before restarting the connector after a retriable exception occurs. 
Defaults to 10000ms.|10s|duration| +|schemaHistoryInternal|The name of the SchemaHistory class that should be used to store and recover database schema changes. The configuration properties for the history are prefixed with the 'schema.history.internal.' string.|io.debezium.storage.kafka.history.KafkaSchemaHistory|string| +|schemaHistoryInternalFileFilename|The path to the file that will be used to record the database schema history||string| +|schemaHistoryInternalSkipUnparseableDdl|Controls the action Debezium will take when it meets a DDL statement in binlog, that it cannot parse.By default the connector will stop operating but by changing the setting it can ignore the statements which it cannot parse. If skipping is enabled then Debezium can miss metadata changes.|false|boolean| +|schemaHistoryInternalStoreOnlyCapturedDatabasesDdl|Controls what DDL will Debezium store in database schema history. By default (true) only DDL that manipulates a table from captured schema/database will be stored. If set to false, then Debezium will store all incoming DDL statements.|false|boolean| +|schemaHistoryInternalStoreOnlyCapturedTablesDdl|Controls what DDL will Debezium store in database schema history. By default (false) Debezium will store all incoming DDL statements. If set to true, then only DDL that manipulates a captured table will be stored.|false|boolean| +|schemaNameAdjustmentMode|Specify how schema names should be adjusted for compatibility with the message converter used by the connector, including: 'avro' replaces the characters that cannot be used in the Avro type name with underscore; 'avro\_unicode' replaces the underscore or characters that cannot be used in the Avro type name with corresponding unicode like \_uxxxx. Note: \_ is an escape sequence like backslash in Java;'none' does not apply any adjustment (default)|none|string| +|signalDataCollection|The name of the data collection that is used to send signals/commands to Debezium. 
Signaling is disabled when not set.||string| +|signalEnabledChannels|List of channels names that are enabled. Source channel is enabled by default|source|string| +|signalPollIntervalMs|Interval for looking for new signals in registered channels, given in milliseconds. Defaults to 5 seconds.|5s|duration| +|skippedOperations|The comma-separated list of operations to skip during streaming, defined as: 'c' for inserts/create; 'u' for updates; 'd' for deletes, 't' for truncates, and 'none' to indicate nothing skipped. By default, only truncate operations will be skipped.|t|string| +|snapshotDatabaseErrorsMaxRetries|The number of attempts to retry database errors during snapshots before failing.|0|integer| +|snapshotDelayMs|A delay period before a snapshot will begin, given in milliseconds. Defaults to 0 ms.|0ms|duration| +|snapshotFetchSize|The maximum number of records that should be loaded into memory while performing a snapshot.||integer| +|snapshotIncludeCollectionList|This setting must be set to specify a list of tables/collections whose snapshot must be taken on creating or restarting the connector.||string| +|snapshotLockingMode|Controls how the connector holds locks on tables while performing the schema snapshot. The default is 'shared', which means the connector will hold a table lock that prevents exclusive table access for just the initial portion of the snapshot while the database schemas and other metadata are being read. The remaining work in a snapshot involves selecting all rows from each table, and this is done using a flashback query that requires no locks. However, in some cases it may be desirable to avoid locks entirely which can be done by specifying 'none'. This mode is only safe to use if no schema changes are happening while the snapshot is taken.|shared|string| +|snapshotLockTimeoutMs|The maximum number of millis to wait for table locks at the beginning of a snapshot. If locks cannot be acquired in this time frame, the snapshot will be aborted. 
Defaults to 10 seconds|10s|duration| +|snapshotMaxThreads|The maximum number of threads used to perform the snapshot. Defaults to 1.|1|integer| +|snapshotMode|The criteria for running a snapshot upon startup of the connector. Select one of the following snapshot options: 'always': The connector runs a snapshot every time that it starts. After the snapshot completes, the connector begins to stream changes from the redo logs.; 'initial' (default): If the connector does not detect any offsets for the logical server name, it runs a snapshot that captures the current full state of the configured tables. After the snapshot completes, the connector begins to stream changes from the redo logs. 'initial\_only': The connector performs a snapshot as it does for the 'initial' option, but after the connector completes the snapshot, it stops, and does not stream changes from the redo logs.; 'schema\_only': If the connector does not detect any offsets for the logical server name, it runs a snapshot that captures only the schema (table structures), but not any table data. After the snapshot completes, the connector begins to stream changes from the redo logs.; 'schema\_only\_recovery': The connector performs a snapshot that captures only the database schema history. The connector then transitions to streaming from the redo logs. Use this setting to restore a corrupted or lost database schema history topic. 
Do not use if the database schema was modified after the connector stopped.|initial|string| +|snapshotModeConfigurationBasedSnapshotData|When 'snapshot.mode' is set as configuration\_based, this setting permits to specify whenever the data should be snapshotted or not.|false|boolean| +|snapshotModeConfigurationBasedSnapshotOnDataError|When 'snapshot.mode' is set as configuration\_based, this setting permits to specify whenever the data should be snapshotted or not in case of error.|false|boolean| +|snapshotModeConfigurationBasedSnapshotOnSchemaError|When 'snapshot.mode' is set as configuration\_based, this setting permits to specify whenever the schema should be snapshotted or not in case of error.|false|boolean| +|snapshotModeConfigurationBasedSnapshotSchema|When 'snapshot.mode' is set as configuration\_based, this setting permits to specify whenever the schema should be snapshotted or not.|false|boolean| +|snapshotModeConfigurationBasedStartStream|When 'snapshot.mode' is set as configuration\_based, this setting permits to specify whenever the stream should start or not after snapshot.|false|boolean| +|snapshotModeCustomName|When 'snapshot.mode' is set as custom, this setting must be set to specify a the name of the custom implementation provided in the 'name()' method. The implementations must implement the 'Snapshotter' interface and is called on each app boot to determine whether to do a snapshot.||string| +|snapshotSelectStatementOverrides|This property contains a comma-separated list of fully-qualified tables (DB\_NAME.TABLE\_NAME) or (SCHEMA\_NAME.TABLE\_NAME), depending on the specific connectors. Select statements for the individual tables are specified in further configuration properties, one for each table, identified by the id 'snapshot.select.statement.overrides.DB\_NAME.TABLE\_NAME' or 'snapshot.select.statement.overrides.SCHEMA\_NAME.TABLE\_NAME', respectively. 
The value of those properties is the select statement to use when retrieving data from the specific table during snapshotting. A possible use case for large append-only tables is setting a specific point where to start (resume) snapshotting, in case a previous snapshotting was interrupted.||string| +|snapshotTablesOrderByRowCount|Controls the order in which tables are processed in the initial snapshot. A descending value will order the tables by row count descending. A ascending value will order the tables by row count ascending. A value of disabled (the default) will disable ordering by row count.|disabled|string| +|sourceinfoStructMaker|The name of the SourceInfoStructMaker class that returns SourceInfo schema and struct.|io.debezium.connector.oracle.OracleSourceInfoStructMaker|string| +|streamingDelayMs|A delay period after the snapshot is completed and the streaming begins, given in milliseconds. Defaults to 0 ms.|0ms|duration| +|tableExcludeList|A comma-separated list of regular expressions that match the fully-qualified names of tables to be excluded from monitoring||string| +|tableIncludeList|The tables for which changes are to be captured||string| +|timePrecisionMode|Time, date, and timestamps can be represented with different kinds of precisions, including: 'adaptive' (the default) bases the precision of time, date, and timestamp values on the database column's precision; 'adaptive\_time\_microseconds' like 'adaptive' mode, but TIME fields always use microseconds precision; 'connect' always represents time, date, and timestamp values using Kafka Connect's built-in representations for Time, Date, and Timestamp, which uses millisecond precision regardless of the database columns' precision.|adaptive|string| +|tombstonesOnDelete|Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a delete event (false). 
Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events pertaining to the given key once the source record got deleted.|false|boolean| +|topicNamingStrategy|The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, heartbeat event etc.|io.debezium.schema.SchemaTopicNamingStrategy|string| +|topicPrefix|Topic prefix that identifies and provides a namespace for the particular database server/cluster that is capturing changes. The topic prefix should be unique across all other connectors, since it is used as a prefix for all Kafka topic names that receive events emitted by this connector. Only alphanumeric characters, hyphens, dots and underscores are accepted.||string| +|transactionMetadataFactory|Class to make transaction context \& transaction struct/schemas|io.debezium.pipeline.txmetadata.DefaultTransactionMetadataFactory|string| +|unavailableValuePlaceholder|Specify the constant that will be provided by Debezium to indicate that the original value is unavailable and not provided by the database.|\_\_debezium\_unavailable\_value|string| diff --git a/camel-debezium-postgres.md b/camel-debezium-postgres.md new file mode 100644 index 0000000000000000000000000000000000000000..68cc03f43a1457737617d15e5a3005dea2c8ee44 --- /dev/null +++ b/camel-debezium-postgres.md @@ -0,0 +1,341 @@ +# Debezium-postgres + +**Since Camel 3.0** + +**Only consumer is supported** + +The Debezium PostgreSQL component is a wrapper around +[Debezium](https://debezium.io/) using the [Debezium +Engine](https://debezium.io/documentation/reference/1.9/development/engine.html), +which enables Change Data Capture from a PostgreSQL database using +Debezium without the need for Kafka or Kafka Connect. 
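Since the engine flushes offsets only periodically, delivery across an abrupt crash is effectively at-least-once, and a downstream deduplication step can be useful. The core of such a step is just a bounded seen-key check; the sketch below is plain Java, not part of this component — the cache size and the idea of keying on the Debezium event key plus its source offset are illustrative assumptions:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal bounded "seen keys" cache for dropping replayed change events.
// A dedup key could be built from the event key plus its source offset/LSN.
public class EventDeduplicator {
    private final Map<String, Boolean> seen;

    public EventDeduplicator(final int capacity) {
        // Access-ordered LinkedHashMap that evicts the eldest entry: a tiny LRU set.
        this.seen = new LinkedHashMap<String, Boolean>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                return size() > capacity;
            }
        };
    }

    /** Returns true the first time a key is offered, false for replays. */
    public boolean markIfNew(String eventKey) {
        return seen.put(eventKey, Boolean.TRUE) == null;
    }

    public static void main(String[] args) {
        EventDeduplicator dedup = new EventDeduplicator(1000);
        System.out.println(dedup.markIfNew("orders:42@lsn=100")); // true
        System.out.println(dedup.markIfNew("orders:42@lsn=100")); // false (duplicate)
        System.out.println(dedup.markIfNew("orders:43@lsn=101")); // true
    }
}
```

In a real route this kind of check would more idiomatically be expressed with Camel's Idempotent Consumer EIP backed by a persistent repository, so the seen-key state survives application restarts.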
+ +**Note on handling failures:** per the [Debezium Embedded +Engine](https://debezium.io/documentation/reference/1.9/development/engine.html#_handling_failures) +documentation, the engines actively record source offsets and +periodically flush these offsets to persistent storage. Therefore, +when the application is restarted or crashes, the engine will resume +from the last recorded offset. This means that, in normal operation, +your downstream routes will receive each event exactly once. However, in +case of an application crash (without a graceful shutdown), the +application will resume from the last recorded offset, which may result +in receiving duplicate events immediately after the restart. Therefore, +your downstream routes should be tolerant of such a case and +deduplicate events if needed. + +Maven users will need to add the following dependency to their `pom.xml` +for this component. + + <dependency> + <groupId>org.apache.camel</groupId> + <artifactId>camel-debezium-postgres</artifactId> + <version>x.x.x</version> + </dependency> + +# URI format + + debezium-postgres:name[?options] + +For more information about configuration: +[https://debezium.io/documentation/reference/0.10/operations/embedded.html#engine-properties](https://debezium.io/documentation/reference/0.10/operations/embedded.html#engine-properties) +[https://debezium.io/documentation/reference/0.10/connectors/postgresql.html#connector-properties](https://debezium.io/documentation/reference/0.10/connectors/postgresql.html#connector-properties) + +# Message body + +If the message body is not `null` (it is `null` in the case of +tombstones), it contains the state of the row after the event occurred +in `Struct` format, or `Map` format if you use the included Type +Converter from `Struct` to `Map`. + +Check below for more details. + +# Samples + +## Consuming events + +Here is a basic route that you can use to listen to Debezium events from +the PostgreSQL connector. 
+ + from("debezium-postgres:dbz-test-1?offsetStorageFileName=/usr/offset-file-1.dat&databaseHostname=localhost&databaseUser=debezium&databasePassword=dbz&databaseServerName=my-app-connector&databaseHistoryFileFilename=/usr/history-file-1.dat") + .log("Event received from Debezium : ${body}") + .log(" with this identifier ${headers.CamelDebeziumIdentifier}") + .log(" with these source metadata ${headers.CamelDebeziumSourceMetadata}") + .log(" the event occurred upon this operation '${headers.CamelDebeziumSourceOperation}'") + .log(" on this database '${headers.CamelDebeziumSourceMetadata[db]}' and this table '${headers.CamelDebeziumSourceMetadata[table]}'") + .log(" with the key ${headers.CamelDebeziumKey}") + .log(" the previous value is ${headers.CamelDebeziumBefore}"); + +By default, the component will emit the events in the body and the +`CamelDebeziumBefore` header as the +[`Struct`](https://kafka.apache.org/22/javadoc/org/apache/kafka/connect/data/Struct.html) +data type; the reasoning behind this is to preserve the schema +information in case it is needed. However, the component also contains a +[Type Converter](#manual::type-converter.adoc) that converts from the +default output type of +[`Struct`](https://kafka.apache.org/22/javadoc/org/apache/kafka/connect/data/Struct.html) +to `Map` in order to leverage Camel’s rich [Data +Format](#manual::data-format.adoc) types, many of which work out of the +box with the `Map` data type. To use it, you can either add the `Map.class` type +when you access the message (e.g., +`exchange.getIn().getBody(Map.class)`), or you can always convert the body +to `Map` from the route builder by adding +`.convertBodyTo(Map.class)` to your Camel Route DSL after the `from` +statement. + +We mentioned the schema above, which can be used when you need to +perform advanced data transformations for which the schema is needed. 
+If you choose not to convert your body to `Map`, you can obtain the +schema information as the +[`Schema`](https://kafka.apache.org/22/javadoc/org/apache/kafka/connect/data/Schema.html) +type from the `Struct` like this: + + from("debezium-postgres:[name]?[options]") + .process(exchange -> { + final Struct bodyValue = exchange.getIn().getBody(Struct.class); + final Schema schemaValue = bodyValue.schema(); + + log.info("Body value is : {}", bodyValue); + log.info("With Schema : {}", schemaValue); + log.info("And fields of : {}", schemaValue.fields()); + log.info("Field name has `{}` type", schemaValue.field("name").schema()); + }); + +This component is a thin wrapper around the Debezium Engine, as mentioned. +Therefore, before using this component in production, you need to +understand how Debezium works and how its configuration affects the +expected behavior. This is especially true in regard to [handling +failures](https://debezium.io/documentation/reference/1.9/operations/embedded.html#_handling_failures). + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|additionalProperties|Additional properties for debezium components in case they can't be set directly on the camel configurations (e.g: setting Kafka Connect properties needed by Debezium engine, for example setting KafkaOffsetBackingStore), the properties have to be prefixed with additionalProperties.. E.g: additionalProperties.transactional.id=12345\&additionalProperties.schema.registry.url=http://localhost:8811/avro||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. 
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|configuration|Allow pre-configured Configurations to be set.||object| +|internalKeyConverter|The Converter class that should be used to serialize and deserialize key data for offsets. The default is JSON converter.|org.apache.kafka.connect.json.JsonConverter|string| +|internalValueConverter|The Converter class that should be used to serialize and deserialize value data for offsets. The default is JSON converter.|org.apache.kafka.connect.json.JsonConverter|string| +|offsetCommitPolicy|The name of the Java class of the commit policy. It defines when offsets commit has to be triggered based on the number of events processed and the time elapsed since the last commit. This class must implement the interface 'OffsetCommitPolicy'. The default is a periodic commit policy based upon time intervals.||string| +|offsetCommitTimeoutMs|Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. The default is 5 seconds.|5000|duration| +|offsetFlushIntervalMs|Interval at which to try committing offsets. The default is 1 minute.|60000|duration| +|offsetStorage|The name of the Java class that is responsible for persistence of connector offsets.|org.apache.kafka.connect.storage.FileOffsetBackingStore|string| +|offsetStorageFileName|Path to file where offsets are to be stored. 
Required when offset.storage is set to the FileOffsetBackingStore.||string| +|offsetStoragePartitions|The number of partitions used when creating the offset storage topic. Required when offset.storage is set to the 'KafkaOffsetBackingStore'.||integer| +|offsetStorageReplicationFactor|Replication factor used when creating the offset storage topic. Required when offset.storage is set to the KafkaOffsetBackingStore||integer| +|offsetStorageTopic|The name of the Kafka topic where offsets are to be stored. Required when offset.storage is set to the KafkaOffsetBackingStore.||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|binaryHandlingMode|Specify how binary (blob, binary, etc.) columns should be represented in change events, including: 'bytes' represents binary data as byte array (default); 'base64' represents binary data as base64-encoded string; 'base64-url-safe' represents binary data as base64-url-safe-encoded string; 'hex' represents binary data as hex-encoded (base16) string|bytes|string| +|columnExcludeList|Regular expressions matching columns to exclude from change events||string| +|columnIncludeList|Regular expressions matching columns to include in change events||string| +|columnPropagateSourceType|A comma-separated list of regular expressions matching fully-qualified names of columns that adds the columns original type and original length as parameters to the corresponding field schemas in the emitted change records.||string| +|converters|Optional list of custom converters that would be used instead of default ones. 
The converters are defined using '.type' config option and configured using options '.'||string| 
+|customMetricTags|The custom metric tags will accept key-value pairs to customize the MBean object name, which are appended to the end of the regular name; each key represents a tag for the MBean object name, and the corresponding value is the value of that tag. For example: k1=v1,k2=v2||string| 
+|databaseDbname|The name of the database from which the connector should capture changes||string| 
+|databaseHostname|Resolvable hostname or IP address of the database server.||string| 
+|databaseInitialStatements|A semicolon-separated list of SQL statements to be executed when a JDBC connection to the database is established. Note that the connector may establish JDBC connections at its own discretion, so this should typically be used for configuration of session parameters only, but not for executing DML statements. Use doubled semicolon (';;') to use a semicolon as a character and not as a delimiter.||string| 
+|databasePassword|Password of the database user to be used when connecting to the database.||string| 
+|databasePort|Port of the database server.|5432|integer| 
+|databaseQueryTimeoutMs|Time to wait for a query to execute, given in milliseconds. Defaults to 600 seconds (600,000 ms); zero means there is no limit.|10m|duration| 
+|databaseSslcert|File containing the SSL Certificate for the client. See the Postgres SSL docs for further information||string| 
+|databaseSslfactory|The name of a class that creates SSL Sockets. Use org.postgresql.ssl.NonValidatingFactory to disable SSL validation in development environments||string| 
+|databaseSslkey|File containing the SSL private key for the client. See the Postgres SSL docs for further information||string| 
+|databaseSslmode|Whether to use an encrypted connection to Postgres. 
Options include: 'disable' to use an unencrypted connection; 'allow' to try and use an unencrypted connection first and, failing that, a secure (encrypted) connection; 'prefer' (the default) to try and use a secure (encrypted) connection first and, failing that, an unencrypted connection; 'require' to use a secure (encrypted) connection, and fail if one cannot be established; 'verify-ca' like 'require' but additionally verify the server TLS certificate against the configured Certificate Authority (CA) certificates, or fail if no valid matching CA certificates are found; or 'verify-full' like 'verify-ca' but additionally verify that the server certificate matches the host to which the connection is attempted.|prefer|string| 
+|databaseSslpassword|Password to access the client private key from the file specified by 'database.sslkey'. See the Postgres SSL docs for further information||string| 
+|databaseSslrootcert|File containing the root certificate(s) against which the server is validated. 
See the Postgres JDBC SSL docs for further information||string| 
+|databaseTcpkeepalive|Enable or disable TCP keep-alive probe to avoid dropping TCP connection|true|boolean| 
+|databaseUser|Name of the database user to be used when connecting to the database.||string| 
+|datatypePropagateSourceType|A comma-separated list of regular expressions matching the database-specific data type names that adds the data type's original type and original length as parameters to the corresponding field schemas in the emitted change records.||string| 
+|decimalHandlingMode|Specify how DECIMAL and NUMERIC columns should be represented in change events, including: 'precise' (the default) uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect's 'org.apache.kafka.connect.data.Decimal' type; 'string' uses string to represent values; 'double' represents values using Java's 'double', which may not offer the precision but will be far easier to use in consumers.|precise|string| 
+|errorsMaxRetries|The maximum number of retries on connection errors before failing (-1 = no limit, 0 = disabled, > 0 = number of retries).|-1|integer| 
+|eventProcessingFailureHandlingMode|Specify how failures during processing of events (i.e. when encountering a corrupted event) should be handled, including: 'fail' (the default) an exception indicating the problematic event and its position is raised, causing the connector to be stopped; 'warn' the problematic event and its position will be logged and the event will be skipped; 'ignore' the problematic event will be skipped.|fail|string| 
+|flushLsnSource|Boolean to determine if Debezium should flush LSN in the source postgres database. 
If set to false, the user will have to flush the LSN manually outside Debezium.|true|boolean| 
+|heartbeatActionQuery|The query executed with every heartbeat.||string| 
+|heartbeatIntervalMs|Length of an interval in milliseconds in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default.|0ms|duration| 
+|heartbeatTopicsPrefix|The prefix that is used to name heartbeat topics. Defaults to \_\_debezium-heartbeat.|\_\_debezium-heartbeat|string| 
+|hstoreHandlingMode|Specify how HSTORE columns should be represented in change events, including: 'json' represents values as string-ified JSON (default); 'map' represents values as a key/value map|json|string| 
+|includeSchemaComments|Whether the connector parses table and column comments into the metadata object. Note: enabling this option has implications for memory usage. The number and size of ColumnImpl objects is what largely impacts how much memory is consumed by the Debezium connectors, and adding a String to each of them can potentially be quite heavy. 
The default is 'false'.|false|boolean| 
+|includeUnknownDatatypes|Specify whether the fields of data type not supported by Debezium should be processed: 'false' (the default) omits the fields; 'true' converts the field into an implementation-dependent binary representation.|false|boolean| 
+|incrementalSnapshotChunkSize|The maximum size of chunk (number of documents/rows) for incremental snapshotting|1024|integer| 
+|incrementalSnapshotWatermarkingStrategy|Specify the strategy used for watermarking during an incremental snapshot: 'insert\_insert' both open and close signals are written into the signal data collection (default); 'insert\_delete' only the open signal is written to the signal data collection, and the close will delete the corresponding open signal;|INSERT\_INSERT|string| 
+|intervalHandlingMode|Specify how INTERVAL columns should be represented in change events, including: 'string' represents values as an exact ISO formatted string; 'numeric' (default) represents values using the inexact conversion into microseconds|numeric|string| 
+|maxBatchSize|Maximum size of each batch of source records. Defaults to 2048.|2048|integer| 
+|maxQueueSize|Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. Defaults to 8192, and should always be larger than the maximum batch size.|8192|integer| 
+|maxQueueSizeInBytes|Maximum size of the queue in bytes for change events read from the database log but not yet recorded or forwarded. Defaults to 0, meaning the feature is not enabled|0|integer| 
+|messageKeyColumns|A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as message key. Each expression must match the pattern ':', where the table names could be defined as (DB\_NAME.TABLE\_NAME) or (SCHEMA\_NAME.TABLE\_NAME), depending on the specific connector, and the key columns are a comma-separated list of columns representing the custom key. 
For any table without an explicit key configuration the table's primary key column(s) will be used as message key. Example: dbserver1.inventory.orderlines:orderId,orderLineId;dbserver1.inventory.orders:id||string| 
+|messagePrefixExcludeList|A comma-separated list of regular expressions that match the logical decoding message prefixes to be excluded from monitoring.||string| 
+|messagePrefixIncludeList|A comma-separated list of regular expressions that match the logical decoding message prefixes to be monitored. All prefixes are monitored by default.||string| 
+|notificationEnabledChannels|List of notification channel names that are enabled.||string| 
+|notificationSinkTopicName|The name of the topic for the notifications. This is required in case 'sink' is in the list of enabled channels||string| 
+|pluginName|The name of the Postgres logical decoding plugin installed on the server. Supported values are 'decoderbufs' and 'pgoutput'. Defaults to 'decoderbufs'.|decoderbufs|string| 
+|pollIntervalMs|Time to wait for new change events to appear after receiving no events, given in milliseconds. Defaults to 500 ms.|500ms|duration| 
+|postProcessors|Optional list of post processors. The processors are defined using '.type' config option and configured using options ''||string| 
+|provideTransactionMetadata|Enables transaction metadata extraction together with event counting|false|boolean| 
+|publicationAutocreateMode|Applies only when streaming changes using pgoutput. Determines how creation of a publication should work; the default is all\_tables. DISABLED - The connector will not attempt to create a publication at all. The expectation is that the user has created the publication up-front. If the publication isn't found to exist upon startup, the connector will throw an exception and stop. ALL\_TABLES - If no publication exists, the connector will create a new publication for all tables. Note this requires that the configured user has access. 
If the publication already exists, it will be used, i.e. CREATE PUBLICATION FOR ALL TABLES; FILTERED - If no publication exists, the connector will create a new publication for all those tables matching the current filter configuration (see table/database include/exclude list properties). If the publication already exists, it will be used, i.e. CREATE PUBLICATION FOR TABLE|all\_tables|string| 
+|publicationName|The name of the Postgres 10 publication used for streaming changes from a plugin. Defaults to 'dbz\_publication'|dbz\_publication|string| 
+|queryFetchSize|The maximum number of records that should be loaded into memory while streaming. A value of '0' uses the default JDBC fetch size.|0|integer| 
+|replicaIdentityAutosetValues|Applies only when streaming changes using pgoutput. Determines the value for Replica Identity at table level. This option will overwrite the existing value in the database. A comma-separated list of regular expressions that match fully-qualified tables and the Replica Identity value to be used in the table. Each expression must match the pattern ':', where the table names could be defined as (SCHEMA\_NAME.TABLE\_NAME), and the replica identity values are: DEFAULT - Records the old values of the columns of the primary key, if any. This is the default for non-system tables. INDEX index\_name - Records the old values of the columns covered by the named index, which must be unique, not partial, not deferrable, and include only columns marked NOT NULL. If this index is dropped, the behavior is the same as NOTHING. FULL - Records the old values of all columns in the row. NOTHING - Records no information about the old row. This is the default for system tables.||string| 
+|retriableRestartConnectorWaitMs|Time to wait before restarting connector after retriable exception occurs. 
Defaults to 10000ms.|10s|duration| +|schemaExcludeList|The schemas for which events must not be captured||string| +|schemaHistoryInternalFileFilename|The path to the file that will be used to record the database schema history||string| +|schemaIncludeList|The schemas for which events should be captured||string| +|schemaNameAdjustmentMode|Specify how schema names should be adjusted for compatibility with the message converter used by the connector, including: 'avro' replaces the characters that cannot be used in the Avro type name with underscore; 'avro\_unicode' replaces the underscore or characters that cannot be used in the Avro type name with corresponding unicode like \_uxxxx. Note: \_ is an escape sequence like backslash in Java;'none' does not apply any adjustment (default)|none|string| +|schemaRefreshMode|Specify the conditions that trigger a refresh of the in-memory schema for a table. 'columns\_diff' (the default) is the safest mode, ensuring the in-memory schema stays in-sync with the database table's schema at all times. 'columns\_diff\_exclude\_unchanged\_toast' instructs the connector to refresh the in-memory schema cache if there is a discrepancy between it and the schema derived from the incoming message, unless unchanged TOASTable data fully accounts for the discrepancy. This setting can improve connector performance significantly if there are frequently-updated tables that have TOASTed data that are rarely part of these updates. However, it is possible for the in-memory schema to become outdated if TOASTable columns are dropped from the table.|columns\_diff|string| +|signalDataCollection|The name of the data collection that is used to send signals/commands to Debezium. Signaling is disabled when not set.||string| +|signalEnabledChannels|List of channels names that are enabled. Source channel is enabled by default|source|string| +|signalPollIntervalMs|Interval for looking for new signals in registered channels, given in milliseconds. 
Defaults to 5 seconds.|5s|duration| 
+|skippedOperations|The comma-separated list of operations to skip during streaming, defined as: 'c' for inserts/create; 'u' for updates; 'd' for deletes, 't' for truncates, and 'none' to indicate nothing skipped. By default, only truncate operations will be skipped.|t|string| 
+|slotDropOnStop|Whether or not to drop the logical replication slot when the connector stops in an orderly fashion. By default the replication slot is kept so that on restart progress can resume from the last recorded location|false|boolean| 
+|slotMaxRetries|How many times to retry connecting to a replication slot when an attempt fails.|6|integer| 
+|slotName|The name of the Postgres logical decoding slot created for streaming changes from a plugin. Defaults to 'debezium'.|debezium|string| 
+|slotRetryDelayMs|Time to wait between retry attempts when the connector fails to connect to a replication slot, given in milliseconds. Defaults to 10 seconds (10,000 ms).|10s|duration| 
+|slotStreamParams|Any optional parameters used by the logical decoding plugin. Semi-colon separated. E.g. 'add-tables=public.table,public.table2;include-lsn=true'||string| 
+|snapshotDelayMs|A delay period before a snapshot will begin, given in milliseconds. Defaults to 0 ms.|0ms|duration| 
+|snapshotFetchSize|The maximum number of records that should be loaded into memory while performing a snapshot.||integer| 
+|snapshotIncludeCollectionList|This setting must be set to specify a list of tables/collections whose snapshot must be taken on creating or restarting the connector.||string| 
+|snapshotLockingMode|Controls how the connector holds locks on tables while performing the schema snapshot. The 'shared' mode means the connector will hold a table lock that prevents exclusive table access for just the initial portion of the snapshot while the database schemas and other metadata are being read. 
The remaining work in a snapshot involves selecting all rows from each table, and this is done using a flashback query that requires no locks. However, in some cases it may be desirable to avoid locks entirely, which can be done by specifying 'none'. This mode is only safe to use if no schema changes are happening while the snapshot is taken.|none|string| 
+|snapshotLockingModeCustomName|When 'snapshot.locking.mode' is set as custom, this setting must be set to specify the name of the custom implementation provided in the 'name()' method. The implementation must implement the 'SnapshotterLocking' interface and is called to determine how to lock tables during schema snapshot.||string| 
+|snapshotLockTimeoutMs|The maximum number of milliseconds to wait for table locks at the beginning of a snapshot. If locks cannot be acquired in this time frame, the snapshot will be aborted. Defaults to 10 seconds|10s|duration| 
+|snapshotMaxThreads|The maximum number of threads used to perform the snapshot. Defaults to 1.|1|integer| 
+|snapshotMode|The criteria for running a snapshot upon startup of the connector. Select one of the following snapshot options: 'always': The connector runs a snapshot every time that it starts. After the snapshot completes, the connector begins to stream changes from the transaction log.; 'initial' (default): If the connector does not detect any offsets for the logical server name, it runs a snapshot that captures the current full state of the configured tables. After the snapshot completes, the connector begins to stream changes from the transaction log. 'initial\_only': The connector performs a snapshot as it does for the 'initial' option, but after the connector completes the snapshot, it stops, and does not stream changes from the transaction log.; 'never': The connector does not run a snapshot. Upon first startup, the connector immediately begins reading from the beginning of the transaction log. 
'exported': This option is deprecated; use 'initial' instead.; 'custom': The connector loads a custom class to specify how the connector performs snapshots. For more information, see Custom snapshotter SPI in the PostgreSQL connector documentation.|initial|string| 
+|snapshotModeConfigurationBasedSnapshotData|When 'snapshot.mode' is set as configuration\_based, this setting specifies whether the data should be snapshotted or not.|false|boolean| 
+|snapshotModeConfigurationBasedSnapshotOnDataError|When 'snapshot.mode' is set as configuration\_based, this setting specifies whether the data should be snapshotted or not in case of error.|false|boolean| 
+|snapshotModeConfigurationBasedSnapshotOnSchemaError|When 'snapshot.mode' is set as configuration\_based, this setting specifies whether the schema should be snapshotted or not in case of error.|false|boolean| 
+|snapshotModeConfigurationBasedSnapshotSchema|When 'snapshot.mode' is set as configuration\_based, this setting specifies whether the schema should be snapshotted or not.|false|boolean| 
+|snapshotModeConfigurationBasedStartStream|When 'snapshot.mode' is set as configuration\_based, this setting specifies whether the stream should start or not after snapshot.|false|boolean| 
+|snapshotModeCustomName|When 'snapshot.mode' is set as custom, this setting must be set to specify the name of the custom implementation provided in the 'name()' method. The implementation must implement the 'Snapshotter' interface and is called on each app boot to determine whether to do a snapshot.||string| 
+|snapshotQueryMode|Controls the query used during the snapshot|select\_all|string| 
+|snapshotQueryModeCustomName|When 'snapshot.query.mode' is set as custom, this setting must be set to specify the name of the custom implementation provided in the 'name()' method. 
The implementation must implement the 'SnapshotterQuery' interface and is called to determine how to build queries during snapshot.||string| 
+|snapshotSelectStatementOverrides|This property contains a comma-separated list of fully-qualified tables (DB\_NAME.TABLE\_NAME) or (SCHEMA\_NAME.TABLE\_NAME), depending on the specific connectors. Select statements for the individual tables are specified in further configuration properties, one for each table, identified by the id 'snapshot.select.statement.overrides.DB\_NAME.TABLE\_NAME' or 'snapshot.select.statement.overrides.SCHEMA\_NAME.TABLE\_NAME', respectively. The value of those properties is the select statement to use when retrieving data from the specific table during snapshotting. A possible use case for large append-only tables is setting a specific point where to start (resume) snapshotting, in case a previous snapshotting was interrupted.||string| 
+|snapshotTablesOrderByRowCount|Controls the order in which tables are processed in the initial snapshot. A descending value will order the tables by row count descending. An ascending value will order the tables by row count ascending. A value of disabled (the default) will disable ordering by row count.|disabled|string| 
+|sourceinfoStructMaker|The name of the SourceInfoStructMaker class that returns SourceInfo schema and struct.|io.debezium.connector.postgresql.PostgresSourceInfoStructMaker|string| 
+|statusUpdateIntervalMs|Frequency for sending replication connection status updates to the server, given in milliseconds. Defaults to 10 seconds (10,000 ms).|10s|duration| 
+|streamingDelayMs|A delay period after the snapshot is completed and the streaming begins, given in milliseconds. 
Defaults to 0 ms.|0ms|duration| +|tableExcludeList|A comma-separated list of regular expressions that match the fully-qualified names of tables to be excluded from monitoring||string| +|tableIgnoreBuiltin|Flag specifying whether built-in tables should be ignored.|true|boolean| +|tableIncludeList|The tables for which changes are to be captured||string| +|timePrecisionMode|Time, date, and timestamps can be represented with different kinds of precisions, including: 'adaptive' (the default) bases the precision of time, date, and timestamp values on the database column's precision; 'adaptive\_time\_microseconds' like 'adaptive' mode, but TIME fields always use microseconds precision; 'connect' always represents time, date, and timestamp values using Kafka Connect's built-in representations for Time, Date, and Timestamp, which uses millisecond precision regardless of the database columns' precision.|adaptive|string| +|tombstonesOnDelete|Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a delete event (false). Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events pertaining to the given key once the source record got deleted.|false|boolean| +|topicNamingStrategy|The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, heartbeat event etc.|io.debezium.schema.SchemaTopicNamingStrategy|string| +|topicPrefix|Topic prefix that identifies and provides a namespace for the particular database server/cluster is capturing changes. The topic prefix should be unique across all other connectors, since it is used as a prefix for all Kafka topic names that receive events emitted by this connector. 
Only alphanumeric characters, hyphens, dots and underscores must be accepted.||string| +|transactionMetadataFactory|Class to make transaction context \& transaction struct/schemas|io.debezium.pipeline.txmetadata.DefaultTransactionMetadataFactory|string| +|unavailableValuePlaceholder|Specify the constant that will be provided by Debezium to indicate that the original value is a toasted value not provided by the database. If starts with 'hex:' prefix it is expected that the rest of the string represents hexadecimal encoded octets.|\_\_debezium\_unavailable\_value|string| +|xminFetchIntervalMs|Specify how often (in ms) the xmin will be fetched from the replication slot. This xmin value is exposed by the slot which gives a lower bound of where a new replication slot could start from. The lower the value, the more likely this value is to be the current 'true' value, but the bigger the performance cost. The bigger the value, the less likely this value is to be the current 'true' value, but the lower the performance penalty. The default is set to 0 ms, which disables tracking xmin.|0ms|duration| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|name|Unique name for the connector. Attempting to register again with the same name will fail.||string| +|additionalProperties|Additional properties for debezium components in case they can't be set directly on the camel configurations (e.g: setting Kafka Connect properties needed by Debezium engine, for example setting KafkaOffsetBackingStore), the properties have to be prefixed with additionalProperties.. E.g: additionalProperties.transactional.id=12345\&additionalProperties.schema.registry.url=http://localhost:8811/avro||object| +|internalKeyConverter|The Converter class that should be used to serialize and deserialize key data for offsets. 
The default is JSON converter.|org.apache.kafka.connect.json.JsonConverter|string| +|internalValueConverter|The Converter class that should be used to serialize and deserialize value data for offsets. The default is JSON converter.|org.apache.kafka.connect.json.JsonConverter|string| +|offsetCommitPolicy|The name of the Java class of the commit policy. It defines when offsets commit has to be triggered based on the number of events processed and the time elapsed since the last commit. This class must implement the interface 'OffsetCommitPolicy'. The default is a periodic commit policy based upon time intervals.||string| +|offsetCommitTimeoutMs|Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. The default is 5 seconds.|5000|duration| +|offsetFlushIntervalMs|Interval at which to try committing offsets. The default is 1 minute.|60000|duration| +|offsetStorage|The name of the Java class that is responsible for persistence of connector offsets.|org.apache.kafka.connect.storage.FileOffsetBackingStore|string| +|offsetStorageFileName|Path to file where offsets are to be stored. Required when offset.storage is set to the FileOffsetBackingStore.||string| +|offsetStoragePartitions|The number of partitions used when creating the offset storage topic. Required when offset.storage is set to the 'KafkaOffsetBackingStore'.||integer| +|offsetStorageReplicationFactor|Replication factor used when creating the offset storage topic. Required when offset.storage is set to the KafkaOffsetBackingStore||integer| +|offsetStorageTopic|The name of the Kafka topic where offsets are to be stored. 
Required when offset.storage is set to the KafkaOffsetBackingStore.||string| 
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions; these will be logged at WARN or ERROR level and ignored.|false|boolean| 
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions; these will be logged at WARN or ERROR level and ignored.||object| 
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| 
+|binaryHandlingMode|Specify how binary (blob, binary, etc.) 
columns should be represented in change events, including: 'bytes' represents binary data as byte array (default); 'base64' represents binary data as base64-encoded string; 'base64-url-safe' represents binary data as base64-url-safe-encoded string; 'hex' represents binary data as hex-encoded (base16) string|bytes|string| 
+|columnExcludeList|Regular expressions matching columns to exclude from change events||string| 
+|columnIncludeList|Regular expressions matching columns to include in change events||string| 
+|columnPropagateSourceType|A comma-separated list of regular expressions matching fully-qualified names of columns that adds the columns original type and original length as parameters to the corresponding field schemas in the emitted change records.||string| 
+|converters|Optional list of custom converters that would be used instead of default ones. The converters are defined using '.type' config option and configured using options '.'||string| 
+|customMetricTags|The custom metric tags will accept key-value pairs to customize the MBean object name, which are appended to the end of the regular name; each key represents a tag for the MBean object name, and the corresponding value is the value of that tag. For example: k1=v1,k2=v2||string| 
+|databaseDbname|The name of the database from which the connector should capture changes||string| 
+|databaseHostname|Resolvable hostname or IP address of the database server.||string| 
+|databaseInitialStatements|A semicolon-separated list of SQL statements to be executed when a JDBC connection to the database is established. Note that the connector may establish JDBC connections at its own discretion, so this should typically be used for configuration of session parameters only, but not for executing DML statements. 
Use doubled semicolon (';;') to use a semicolon as a character and not as a delimiter.||string| 
+|databasePassword|Password of the database user to be used when connecting to the database.||string| 
+|databasePort|Port of the database server.|5432|integer| 
+|databaseQueryTimeoutMs|Time to wait for a query to execute, given in milliseconds. Defaults to 600 seconds (600,000 ms); zero means there is no limit.|10m|duration| 
+|databaseSslcert|File containing the SSL Certificate for the client. See the Postgres SSL docs for further information||string| 
+|databaseSslfactory|The name of a class that creates SSL Sockets. Use org.postgresql.ssl.NonValidatingFactory to disable SSL validation in development environments||string| 
+|databaseSslkey|File containing the SSL private key for the client. See the Postgres SSL docs for further information||string| 
+|databaseSslmode|Whether to use an encrypted connection to Postgres. Options include: 'disable' to use an unencrypted connection; 'allow' to try and use an unencrypted connection first and, failing that, a secure (encrypted) connection; 'prefer' (the default) to try and use a secure (encrypted) connection first and, failing that, an unencrypted connection; 'require' to use a secure (encrypted) connection, and fail if one cannot be established; 'verify-ca' like 'require' but additionally verify the server TLS certificate against the configured Certificate Authority (CA) certificates, or fail if no valid matching CA certificates are found; or 'verify-full' like 'verify-ca' but additionally verify that the server certificate matches the host to which the connection is attempted.|prefer|string| 
+|databaseSslpassword|Password to access the client private key from the file specified by 'database.sslkey'. See the Postgres SSL docs for further information||string| 
+|databaseSslrootcert|File containing the root certificate(s) against which the server is validated. 
See the Postgres JDBC SSL docs for further information||string| 
+|databaseTcpkeepalive|Enable or disable TCP keep-alive probe to avoid dropping TCP connection|true|boolean| 
+|databaseUser|Name of the database user to be used when connecting to the database.||string| 
+|datatypePropagateSourceType|A comma-separated list of regular expressions matching the database-specific data type names that adds the data type's original type and original length as parameters to the corresponding field schemas in the emitted change records.||string| 
+|decimalHandlingMode|Specify how DECIMAL and NUMERIC columns should be represented in change events, including: 'precise' (the default) uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect's 'org.apache.kafka.connect.data.Decimal' type; 'string' uses string to represent values; 'double' represents values using Java's 'double', which may not offer the precision but will be far easier to use in consumers.|precise|string| 
+|errorsMaxRetries|The maximum number of retries on connection errors before failing (-1 = no limit, 0 = disabled, > 0 = number of retries).|-1|integer| 
+|eventProcessingFailureHandlingMode|Specify how failures during processing of events (i.e. when encountering a corrupted event) should be handled, including: 'fail' (the default) an exception indicating the problematic event and its position is raised, causing the connector to be stopped; 'warn' the problematic event and its position will be logged and the event will be skipped; 'ignore' the problematic event will be skipped.|fail|string| 
+|flushLsnSource|Boolean to determine if Debezium should flush LSN in the source postgres database. 
If set to false, the user will have to flush the LSN manually outside Debezium.|true|boolean| +|heartbeatActionQuery|The query executed with every heartbeat.||string| +|heartbeatIntervalMs|Length of the interval, in milliseconds, in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default.|0ms|duration| +|heartbeatTopicsPrefix|The prefix that is used to name heartbeat topics. Defaults to \_\_debezium-heartbeat.|\_\_debezium-heartbeat|string| +|hstoreHandlingMode|Specify how HSTORE columns should be represented in change events, including: 'json' represents values as string-ified JSON (default); 'map' represents values as a key/value map|json|string| +|includeSchemaComments|Whether the connector should parse table and column comments into metadata objects. Note: enabling this option will have implications on memory usage. The number and size of ColumnImpl objects is what largely impacts how much memory is consumed by the Debezium connectors, and adding a String to each of them can potentially be quite heavy. 
The default is 'false'.|false|boolean| +|includeUnknownDatatypes|Specify whether fields of data types not supported by Debezium should be processed: 'false' (the default) omits the fields; 'true' converts the field into an implementation-dependent binary representation.|false|boolean| +|incrementalSnapshotChunkSize|The maximum chunk size (number of documents/rows) for incremental snapshotting|1024|integer| +|incrementalSnapshotWatermarkingStrategy|Specify the strategy used for watermarking during an incremental snapshot: 'insert\_insert' both the open and close signals are written into the signal data collection (default); 'insert\_delete' only the open signal is written to the signal data collection, and the close deletes the corresponding open signal|INSERT\_INSERT|string| +|intervalHandlingMode|Specify how INTERVAL columns should be represented in change events, including: 'string' represents values as an exact ISO formatted string; 'numeric' (default) represents values using the inexact conversion into microseconds|numeric|string| +|maxBatchSize|Maximum size of each batch of source records. Defaults to 2048.|2048|integer| +|maxQueueSize|Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. Defaults to 8192, and should always be larger than the maximum batch size.|8192|integer| +|maxQueueSizeInBytes|Maximum size of the queue in bytes for change events read from the database log but not yet recorded or forwarded. Defaults to 0, meaning the feature is not enabled|0|integer| +|messageKeyColumns|A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as message key. Each expression must match the pattern ':', where the table names could be defined as (DB\_NAME.TABLE\_NAME) or (SCHEMA\_NAME.TABLE\_NAME), depending on the specific connector, and the key columns are a comma-separated list of columns representing the custom key. 
For any table without an explicit key configuration the table's primary key column(s) will be used as message key. Example: dbserver1.inventory.orderlines:orderId,orderLineId;dbserver1.inventory.orders:id||string| +|messagePrefixExcludeList|A comma-separated list of regular expressions that match the logical decoding message prefixes to be excluded from monitoring.||string| +|messagePrefixIncludeList|A comma-separated list of regular expressions that match the logical decoding message prefixes to be monitored. All prefixes are monitored by default.||string| +|notificationEnabledChannels|List of notification channel names that are enabled.||string| +|notificationSinkTopicName|The name of the topic for the notifications. This is required in case 'sink' is in the list of enabled channels||string| +|pluginName|The name of the Postgres logical decoding plugin installed on the server. Supported values are 'decoderbufs' and 'pgoutput'. Defaults to 'decoderbufs'.|decoderbufs|string| +|pollIntervalMs|Time to wait for new change events to appear after receiving no events, given in milliseconds. Defaults to 500 ms.|500ms|duration| +|postProcessors|Optional list of post processors. The processors are defined using '.type' config option and configured using options ''||string| +|provideTransactionMetadata|Enables transaction metadata extraction together with event counting|false|boolean| +|publicationAutocreateMode|Applies only when streaming changes using pgoutput. Determines how creation of a publication should work; the default is all\_tables. DISABLED - The connector will not attempt to create a publication at all. The expectation is that the user has created the publication up-front. If the publication isn't found to exist upon startup, the connector will throw an exception and stop. ALL\_TABLES - If no publication exists, the connector will create a new publication for all tables. Note this requires that the configured user has access. 
If the publication already exists, it will be used, i.e. CREATE PUBLICATION FOR ALL TABLES. FILTERED - If no publication exists, the connector will create a new publication for all those tables matching the current filter configuration (see table/database include/exclude list properties). If the publication already exists, it will be used, i.e. CREATE PUBLICATION FOR TABLE|all\_tables|string| +|publicationName|The name of the Postgres 10 publication used for streaming changes from a plugin. Defaults to 'dbz\_publication'|dbz\_publication|string| +|queryFetchSize|The maximum number of records that should be loaded into memory while streaming. A value of '0' uses the default JDBC fetch size.|0|integer| +|replicaIdentityAutosetValues|Applies only when streaming changes using pgoutput. Determines the value for Replica Identity at table level. This option will overwrite the existing value in the database. A comma-separated list of regular expressions that match fully-qualified tables and the Replica Identity value to be used in the table. Each expression must match the pattern ':', where the table names could be defined as (SCHEMA\_NAME.TABLE\_NAME), and the replica identity values are: DEFAULT - Records the old values of the columns of the primary key, if any. This is the default for non-system tables. INDEX index\_name - Records the old values of the columns covered by the named index, which must be unique, not partial, not deferrable, and include only columns marked NOT NULL. If this index is dropped, the behavior is the same as NOTHING. FULL - Records the old values of all columns in the row. NOTHING - Records no information about the old row. This is the default for system tables.||string| +|retriableRestartConnectorWaitMs|Time to wait before restarting the connector after a retriable exception occurs. 
Defaults to 10000ms.|10s|duration| +|schemaExcludeList|The schemas for which events must not be captured||string| +|schemaHistoryInternalFileFilename|The path to the file that will be used to record the database schema history||string| +|schemaIncludeList|The schemas for which events should be captured||string| +|schemaNameAdjustmentMode|Specify how schema names should be adjusted for compatibility with the message converter used by the connector, including: 'avro' replaces the characters that cannot be used in the Avro type name with underscore; 'avro\_unicode' replaces the underscore or characters that cannot be used in the Avro type name with corresponding unicode like \_uxxxx. Note: \_ is an escape sequence like backslash in Java; 'none' does not apply any adjustment (default)|none|string| +|schemaRefreshMode|Specify the conditions that trigger a refresh of the in-memory schema for a table. 'columns\_diff' (the default) is the safest mode, ensuring the in-memory schema stays in-sync with the database table's schema at all times. 'columns\_diff\_exclude\_unchanged\_toast' instructs the connector to refresh the in-memory schema cache if there is a discrepancy between it and the schema derived from the incoming message, unless unchanged TOASTable data fully accounts for the discrepancy. This setting can improve connector performance significantly if there are frequently-updated tables that have TOASTed data that are rarely part of these updates. However, it is possible for the in-memory schema to become outdated if TOASTable columns are dropped from the table.|columns\_diff|string| +|signalDataCollection|The name of the data collection that is used to send signals/commands to Debezium. Signaling is disabled when not set.||string| +|signalEnabledChannels|List of channel names that are enabled. The source channel is enabled by default|source|string| +|signalPollIntervalMs|Interval for looking for new signals in registered channels, given in milliseconds. 
Defaults to 5 seconds.|5s|duration| +|skippedOperations|The comma-separated list of operations to skip during streaming, defined as: 'c' for inserts/create; 'u' for updates; 'd' for deletes, 't' for truncates, and 'none' to indicate nothing skipped. By default, only truncate operations will be skipped.|t|string| +|slotDropOnStop|Whether or not to drop the logical replication slot when the connector finishes in an orderly manner. By default the replication slot is kept so that, on restart, progress can resume from the last recorded location|false|boolean| +|slotMaxRetries|How many times to retry connecting to a replication slot when an attempt fails.|6|integer| +|slotName|The name of the Postgres logical decoding slot created for streaming changes from a plugin. Defaults to 'debezium'|debezium|string| +|slotRetryDelayMs|Time to wait between retry attempts when the connector fails to connect to a replication slot, given in milliseconds. Defaults to 10 seconds (10,000 ms).|10s|duration| +|slotStreamParams|Any optional parameters used by the logical decoding plugin. Semicolon-separated. E.g. 'add-tables=public.table,public.table2;include-lsn=true'||string| +|snapshotDelayMs|A delay period before a snapshot will begin, given in milliseconds. Defaults to 0 ms.|0ms|duration| +|snapshotFetchSize|The maximum number of records that should be loaded into memory while performing a snapshot.||integer| +|snapshotIncludeCollectionList|This setting must be set to specify a list of tables/collections whose snapshot must be taken on creating or restarting the connector.||string| +|snapshotLockingMode|Controls how the connector holds locks on tables while performing the schema snapshot. 'shared' means the connector will hold a table lock that prevents exclusive table access for just the initial portion of the snapshot while the database schemas and other metadata are being read. 
The remaining work in a snapshot involves selecting all rows from each table, and this is done using a flashback query that requires no locks. However, in some cases it may be desirable to avoid locks entirely, which can be done by specifying 'none'. This mode is only safe to use if no schema changes are happening while the snapshot is taken.|none|string| +|snapshotLockingModeCustomName|When 'snapshot.locking.mode' is set as custom, this setting must be set to specify the name of the custom implementation provided in the 'name()' method. The implementation must implement the 'SnapshotterLocking' interface and is called to determine how to lock tables during the schema snapshot.||string| +|snapshotLockTimeoutMs|The maximum number of milliseconds to wait for table locks at the beginning of a snapshot. If locks cannot be acquired in this time frame, the snapshot will be aborted. Defaults to 10 seconds.|10s|duration| +|snapshotMaxThreads|The maximum number of threads used to perform the snapshot. Defaults to 1.|1|integer| +|snapshotMode|The criteria for running a snapshot upon startup of the connector. Select one of the following snapshot options: 'always': The connector runs a snapshot every time that it starts. After the snapshot completes, the connector begins to stream changes from the transaction log; 'initial' (default): If the connector does not detect any offsets for the logical server name, it runs a snapshot that captures the current full state of the configured tables. After the snapshot completes, the connector begins to stream changes from the transaction log; 'initial\_only': The connector performs a snapshot as it does for the 'initial' option, but after the connector completes the snapshot, it stops, and does not stream changes from the transaction log; 'never': The connector does not run a snapshot. Upon first startup, the connector immediately begins reading from the beginning of the transaction log. 
'exported': This option is deprecated; use 'initial' instead; 'custom': The connector loads a custom class to specify how the connector performs snapshots. For more information, see Custom snapshotter SPI in the PostgreSQL connector documentation.|initial|string| +|snapshotModeConfigurationBasedSnapshotData|When 'snapshot.mode' is set as configuration\_based, this setting allows specifying whether the data should be snapshotted or not.|false|boolean| +|snapshotModeConfigurationBasedSnapshotOnDataError|When 'snapshot.mode' is set as configuration\_based, this setting allows specifying whether the data should be snapshotted or not in case of error.|false|boolean| +|snapshotModeConfigurationBasedSnapshotOnSchemaError|When 'snapshot.mode' is set as configuration\_based, this setting allows specifying whether the schema should be snapshotted or not in case of error.|false|boolean| +|snapshotModeConfigurationBasedSnapshotSchema|When 'snapshot.mode' is set as configuration\_based, this setting allows specifying whether the schema should be snapshotted or not.|false|boolean| +|snapshotModeConfigurationBasedStartStream|When 'snapshot.mode' is set as configuration\_based, this setting allows specifying whether the stream should start or not after the snapshot.|false|boolean| +|snapshotModeCustomName|When 'snapshot.mode' is set as custom, this setting must be set to specify the name of the custom implementation provided in the 'name()' method. The implementation must implement the 'Snapshotter' interface and is called on each app boot to determine whether to do a snapshot.||string| +|snapshotQueryMode|Controls the query used during the snapshot|select\_all|string| +|snapshotQueryModeCustomName|When 'snapshot.query.mode' is set as custom, this setting must be set to specify the name of the custom implementation provided in the 'name()' method. 
The implementation must implement the 'SnapshotterQuery' interface and is called to determine how to build queries during the snapshot.||string| +|snapshotSelectStatementOverrides|This property contains a comma-separated list of fully-qualified tables (DB\_NAME.TABLE\_NAME) or (SCHEMA\_NAME.TABLE\_NAME), depending on the specific connector. Select statements for the individual tables are specified in further configuration properties, one for each table, identified by the id 'snapshot.select.statement.overrides.DB\_NAME.TABLE\_NAME' or 'snapshot.select.statement.overrides.SCHEMA\_NAME.TABLE\_NAME', respectively. The value of those properties is the select statement to use when retrieving data from the specific table during snapshotting. A possible use case for large append-only tables is setting a specific point at which to start (resume) snapshotting, in case a previous snapshotting was interrupted.||string| +|snapshotTablesOrderByRowCount|Controls the order in which tables are processed in the initial snapshot. A descending value will order the tables by row count descending. An ascending value will order the tables by row count ascending. A value of disabled (the default) will disable ordering by row count.|disabled|string| +|sourceinfoStructMaker|The name of the SourceInfoStructMaker class that returns the SourceInfo schema and struct.|io.debezium.connector.postgresql.PostgresSourceInfoStructMaker|string| +|statusUpdateIntervalMs|Frequency for sending replication connection status updates to the server, given in milliseconds. Defaults to 10 seconds (10,000 ms).|10s|duration| +|streamingDelayMs|A delay period after the snapshot is completed and before the streaming begins, given in milliseconds. 
Defaults to 0 ms.|0ms|duration| +|tableExcludeList|A comma-separated list of regular expressions that match the fully-qualified names of tables to be excluded from monitoring||string| +|tableIgnoreBuiltin|Flag specifying whether built-in tables should be ignored.|true|boolean| +|tableIncludeList|The tables for which changes are to be captured||string| +|timePrecisionMode|Time, date, and timestamps can be represented with different kinds of precision, including: 'adaptive' (the default) bases the precision of time, date, and timestamp values on the database column's precision; 'adaptive\_time\_microseconds' like 'adaptive' mode, but TIME fields always use microseconds precision; 'connect' always represents time, date, and timestamp values using Kafka Connect's built-in representations for Time, Date, and Timestamp, which use millisecond precision regardless of the database columns' precision.|adaptive|string| +|tombstonesOnDelete|Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a delete event (false). Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events pertaining to the given key once the source record was deleted.|false|boolean| +|topicNamingStrategy|The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, heartbeat events etc.|io.debezium.schema.SchemaTopicNamingStrategy|string| +|topicPrefix|Topic prefix that identifies and provides a namespace for the particular database server/cluster that is capturing changes. The topic prefix should be unique across all other connectors, since it is used as a prefix for all Kafka topic names that receive events emitted by this connector. 
Only alphanumeric characters, hyphens, dots and underscores are accepted.||string| +|transactionMetadataFactory|Class to make the transaction context \& transaction struct/schemas|io.debezium.pipeline.txmetadata.DefaultTransactionMetadataFactory|string| +|unavailableValuePlaceholder|Specify the constant that will be provided by Debezium to indicate that the original value is a toasted value not provided by the database. If it starts with the 'hex:' prefix, it is expected that the rest of the string represents hexadecimal encoded octets.|\_\_debezium\_unavailable\_value|string| +|xminFetchIntervalMs|Specify how often (in ms) the xmin will be fetched from the replication slot. This xmin value is exposed by the slot which gives a lower bound of where a new replication slot could start from. The lower the value, the more likely this value is to be the current 'true' value, but the bigger the performance cost. The bigger the value, the less likely this value is to be the current 'true' value, but the lower the performance penalty. The default is set to 0 ms, which disables tracking xmin.|0ms|duration| diff --git a/camel-debezium-sqlserver.md b/camel-debezium-sqlserver.md new file mode 100644 index 0000000000000000000000000000000000000000..2a4a1c39be305c0976c2f327686eba1fefd44c1c --- /dev/null +++ b/camel-debezium-sqlserver.md @@ -0,0 +1,295 @@ +# Debezium-sqlserver + +**Since Camel 3.0** + +**Only consumer is supported** + +The Debezium SQL Server component is a wrapper around +[Debezium](https://debezium.io/) using [Debezium +Engine](https://debezium.io/documentation/reference/0.10/operations/embedded.html), +which enables Change Data Capture from a SQL Server database using +Debezium without the need for Kafka or Kafka Connect. 
+ +**Note on handling failures:** per the [Debezium Embedded +Engine](https://debezium.io/documentation/reference/1.9/development/engine.html#_handling_failures) +documentation, the engines are actively recording source offsets and +periodically flush these offsets to persistent storage. Therefore, +when the application is restarted or crashes, the engine will resume +from the last recorded offset. This means that, in normal operation, +your downstream routes will receive each event exactly once. However, in +case of an application crash (without a graceful shutdown), the +application will resume from the last recorded offset, which may result +in receiving duplicate events immediately after the restart. Therefore, +your downstream routes should be tolerant enough of such a case and +deduplicate events if needed. + +Maven users will need to add the following dependency to their `pom.xml` +for this component. + +    <dependency> +        <groupId>org.apache.camel</groupId> +        <artifactId>camel-debezium-sqlserver</artifactId> +        <version>x.x.x</version> +    </dependency> + +# URI format + +    debezium-sqlserver:name[?options] + +For more information about configuration: +[https://debezium.io/documentation/reference/0.10/operations/embedded.html#engine-properties](https://debezium.io/documentation/reference/0.10/operations/embedded.html#engine-properties) +[https://debezium.io/documentation/reference/0.10/connectors/sqlserver.html#connector-properties](https://debezium.io/documentation/reference/0.10/connectors/sqlserver.html#connector-properties) + +# Message body + +If the message body is not `null` (it is `null` in case of tombstones), it contains +the state of the row after the event occurred, in `Struct` format, or in +`Map` format if you use the included Type Converter from `Struct` to +`Map`. + +Check below for more details. + +# Samples + +## Consuming events + +Here is a very simple route that you can use in order to listen to +Debezium events from the SQL Server connector. 
+ + from("debezium-sqlserver:dbz-test-1?offsetStorageFileName=/usr/offset-file-1.dat&databaseHostname=localhost&databaseUser=debezium&databasePassword=dbz&databaseServerName=my-app-connector&databaseHistoryFileFilename=/usr/history-file-1.dat") + .log("Event received from Debezium : ${body}") + .log(" with this identifier ${headers.CamelDebeziumIdentifier}") + .log(" with these source metadata ${headers.CamelDebeziumSourceMetadata}") + .log(" the event occurred upon this operation '${headers.CamelDebeziumSourceOperation}'") + .log(" on this database '${headers.CamelDebeziumSourceMetadata[db]}' and this table '${headers.CamelDebeziumSourceMetadata[table]}'") + .log(" with the key ${headers.CamelDebeziumKey}") + .log(" the previous value is ${headers.CamelDebeziumBefore}") + +By default, the component will emit the events in the body and +`CamelDebeziumBefore` header as the +[`Struct`](https://kafka.apache.org/22/javadoc/org/apache/kafka/connect/data/Struct.html) +data type; the reasoning behind this is to retain the schema +information in case it is needed. However, the component also contains a +[Type Converter](#manual::type-converter.adoc) that converts from the +default output type of +[`Struct`](https://kafka.apache.org/22/javadoc/org/apache/kafka/connect/data/Struct.html) +to `Map` in order to leverage Camel’s rich [Data +Format](#manual::data-format.adoc) types, many of which work out of the +box with the `Map` data type. To use it, you can either add the `Map.class` type +when you access the message (e.g., +`exchange.getIn().getBody(Map.class)`), or you can always convert the body +to `Map` from the route builder by adding +`.convertBodyTo(Map.class)` to your Camel Route DSL after the `from` +statement. + +We mentioned the schema above, which can be used in case you need to +perform advanced data transformations and the schema is needed for that. 
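For example, to apply the `Map` conversion described above directly in a route (a minimal sketch; the `[name]`/`[options]` placeholders follow the samples on this page, and the log endpoint is illustrative):

    from("debezium-sqlserver:[name]?[options]")
        // convert the Struct payload to a java.util.Map using the included Type Converter
        .convertBodyTo(Map.class)
        // downstream steps now see a plain Map, e.g. log it via the Log component
        .to("log:change-events?showBody=true");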
If you choose not to convert your body to `Map`, you can obtain the +schema information as the +[`Schema`](https://kafka.apache.org/22/javadoc/org/apache/kafka/connect/data/Schema.html) +type from `Struct` like this: + + from("debezium-sqlserver:[name]?[options]") + .process(exchange -> { + final Struct bodyValue = exchange.getIn().getBody(Struct.class); + final Schema schemaValue = bodyValue.schema(); + + log.info("Body value is : {}", bodyValue); + log.info("With Schema : {}", schemaValue); + log.info("And fields of : {}", schemaValue.fields()); + log.info("Field name has `{}` type", schemaValue.field("name").schema()); + }); + +This component is a thin wrapper around Debezium Engine as mentioned. +Therefore, before using this component in production, you need to +understand how Debezium works and how configurations can reflect the +expected behavior. This is especially true in regard to [handling +failures](https://debezium.io/documentation/reference/1.9/operations/embedded.html#_handling_failures). + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|additionalProperties|Additional properties for debezium components in case they can't be set directly on the camel configurations (e.g: setting Kafka Connect properties needed by Debezium engine, for example setting KafkaOffsetBackingStore), the properties have to be prefixed with additionalProperties.. E.g: additionalProperties.transactional.id=12345\&additionalProperties.schema.registry.url=http://localhost:8811/avro||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. 
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean| +|configuration|Allow pre-configured Configurations to be set.||object| +|internalKeyConverter|The Converter class that should be used to serialize and deserialize key data for offsets. The default is JSON converter.|org.apache.kafka.connect.json.JsonConverter|string| +|internalValueConverter|The Converter class that should be used to serialize and deserialize value data for offsets. The default is JSON converter.|org.apache.kafka.connect.json.JsonConverter|string| +|offsetCommitPolicy|The name of the Java class of the commit policy. It defines when an offset commit has to be triggered based on the number of events processed and the time elapsed since the last commit. This class must implement the interface 'OffsetCommitPolicy'. The default is a periodic commit policy based upon time intervals.||string| +|offsetCommitTimeoutMs|Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. The default is 5 seconds.|5000|duration| +|offsetFlushIntervalMs|Interval at which to try committing offsets. The default is 1 minute.|60000|duration| +|offsetStorage|The name of the Java class that is responsible for persistence of connector offsets.|org.apache.kafka.connect.storage.FileOffsetBackingStore|string| +|offsetStorageFileName|Path to file where offsets are to be stored. 
Required when offset.storage is set to the FileOffsetBackingStore.||string| +|offsetStoragePartitions|The number of partitions used when creating the offset storage topic. Required when offset.storage is set to the 'KafkaOffsetBackingStore'.||integer| +|offsetStorageReplicationFactor|Replication factor used when creating the offset storage topic. Required when offset.storage is set to the KafkaOffsetBackingStore||integer| +|offsetStorageTopic|The name of the Kafka topic where offsets are to be stored. Required when offset.storage is set to the KafkaOffsetBackingStore.||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|binaryHandlingMode|Specify how binary (blob, binary, etc.) columns should be represented in change events, including: 'bytes' represents binary data as byte array (default); 'base64' represents binary data as base64-encoded string; 'base64-url-safe' represents binary data as base64-url-safe-encoded string; 'hex' represents binary data as hex-encoded (base16) string|bytes|string| +|columnExcludeList|Regular expressions matching columns to exclude from change events||string| +|columnIncludeList|Regular expressions matching columns to include in change events||string| +|columnPropagateSourceType|A comma-separated list of regular expressions matching fully-qualified names of columns that adds the columns' original type and original length as parameters to the corresponding field schemas in the emitted change records.||string| +|converters|Optional list of custom converters that would be used instead of the default ones. 
The converters are defined using '.type' config option and configured using options '.'||string| +|customMetricTags|The custom metric tags accept key-value pairs used to customize the MBean object name, and are appended to the end of the regular name. Each key represents a tag for the MBean object name, and the corresponding value is the value of that tag. For example: k1=v1,k2=v2||string| +|databaseHostname|Resolvable hostname or IP address of the database server.||string| +|databaseInstance|The SQL Server instance name||string| +|databaseNames|The names of the databases from which the connector should capture changes||string| +|databasePassword|Password of the database user to be used when connecting to the database.||string| +|databasePort|Port of the database server.|1433|integer| +|databaseQueryTimeoutMs|Time to wait for a query to execute, given in milliseconds. Defaults to 600 seconds (600,000 ms); zero means there is no limit.|10m|duration| +|databaseUser|Name of the database user to be used when connecting to the database.||string| +|dataQueryMode|Controls how the connector queries CDC data. The default is 'function', which means the data is queried by means of calling the cdc.fn\_cdc\_get\_all\_changes\_# function. 
The value of 'direct' makes the connector query the change tables directly.|function|string| +|datatypePropagateSourceType|A comma-separated list of regular expressions matching the database-specific data type names that adds the data type's original type and original length as parameters to the corresponding field schemas in the emitted change records.||string| +|decimalHandlingMode|Specify how DECIMAL and NUMERIC columns should be represented in change events, including: 'precise' (the default) uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect's 'org.apache.kafka.connect.data.Decimal' type; 'string' uses string to represent values; 'double' represents values using Java's 'double', which may not offer the same precision but will be far easier to use in consumers.|precise|string| +|errorsMaxRetries|The maximum number of retries on connection errors before failing (-1 = no limit, 0 = disabled, > 0 = number of retries).|-1|integer| +|eventProcessingFailureHandlingMode|Specify how failures during processing of events (i.e. when encountering a corrupted event) should be handled, including: 'fail' (the default) an exception indicating the problematic event and its position is raised, causing the connector to be stopped; 'warn' the problematic event and its position will be logged and the event will be skipped; 'ignore' the problematic event will be skipped.|fail|string| +|heartbeatActionQuery|The query executed with every heartbeat.||string| +|heartbeatIntervalMs|Length of the interval, in milliseconds, in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. 
Disabled by default.|0ms|duration| +|heartbeatTopicsPrefix|The prefix that is used to name heartbeat topics. Defaults to \_\_debezium-heartbeat.|\_\_debezium-heartbeat|string| +|includeSchemaChanges|Whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID. Each schema change will be recorded using a key that contains the database name and whose value includes a logical description of the new schema and optionally the DDL statement(s). The default is 'true'. This is independent of how the connector internally records database schema history.|true|boolean| +|includeSchemaComments|Whether the connector should parse table and column comments into the metadata object. Note: enabling this option has implications on memory usage. The number and size of ColumnImpl objects is what largely impacts how much memory is consumed by the Debezium connectors, and adding a String to each of them can potentially be quite heavy. The default is 'false'.|false|boolean| +|incrementalSnapshotAllowSchemaChanges|Detect schema change during an incremental snapshot and re-select a current chunk to avoid locking DDLs. Note that changes to a primary key are not supported and can cause incorrect results if performed during an incremental snapshot. Another limitation is that if a schema change affects only columns' default values, then the change won't be detected until the DDL is processed from the binlog stream. This doesn't affect the snapshot events' values, but the schema of snapshot events may have outdated defaults.|false|boolean| +|incrementalSnapshotChunkSize|The maximum size of a chunk (number of documents/rows) for incremental snapshotting|1024|integer| +|incrementalSnapshotOptionRecompile|Add OPTION(RECOMPILE) on each SELECT statement during the incremental snapshot process. 
This prevents parameter sniffing but can cause CPU pressure on the source database.|false|boolean| +|incrementalSnapshotWatermarkingStrategy|Specify the strategy used for watermarking during an incremental snapshot: 'insert\_insert' both open and close signals are written into the signal data collection (default); 'insert\_delete' only the open signal is written to the signal data collection, and the close deletes the corresponding open signal.|INSERT\_INSERT|string| +|maxBatchSize|Maximum size of each batch of source records. Defaults to 2048.|2048|integer| +|maxIterationTransactions|This property can be used to reduce the connector memory usage footprint when changes are streamed from multiple tables per database.|500|integer| +|maxQueueSize|Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. Defaults to 8192, and should always be larger than the maximum batch size.|8192|integer| +|maxQueueSizeInBytes|Maximum size of the queue in bytes for change events read from the database log but not yet recorded or forwarded. Defaults to 0, meaning the feature is not enabled.|0|integer| +|messageKeyColumns|A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as message key. 
This is required in case 'sink' is in the list of enabled channels.||string| +|pollIntervalMs|Time to wait for new change events to appear after receiving no events, given in milliseconds. Defaults to 500 ms.|500ms|duration| +|postProcessors|Optional list of post processors. The processors are defined using '.type' config option and configured using options ''||string| +|provideTransactionMetadata|Enables transaction metadata extraction together with event counting|false|boolean| +|retriableRestartConnectorWaitMs|Time to wait before restarting the connector after a retriable exception occurs. Defaults to 10000ms.|10s|duration| +|schemaHistoryInternal|The name of the SchemaHistory class that should be used to store and recover database schema changes. The configuration properties for the history are prefixed with the 'schema.history.internal.' string.|io.debezium.storage.kafka.history.KafkaSchemaHistory|string| +|schemaHistoryInternalFileFilename|The path to the file that will be used to record the database schema history||string| +|schemaHistoryInternalSkipUnparseableDdl|Controls the action Debezium will take when it meets a DDL statement in the binlog that it cannot parse. By default the connector will stop operating, but by changing the setting it can ignore the statements which it cannot parse. If skipping is enabled then Debezium can miss metadata changes.|false|boolean| +|schemaHistoryInternalStoreOnlyCapturedDatabasesDdl|Controls what DDL Debezium will store in the database schema history. By default (true) only DDL that manipulates a table from a captured schema/database will be stored. If set to false, then Debezium will store all incoming DDL statements.|false|boolean| +|schemaHistoryInternalStoreOnlyCapturedTablesDdl|Controls what DDL Debezium will store in the database schema history. By default (false) Debezium will store all incoming DDL statements. 
If set to true, then only DDL that manipulates a captured table will be stored.|false|boolean| +|schemaNameAdjustmentMode|Specify how schema names should be adjusted for compatibility with the message converter used by the connector, including: 'avro' replaces the characters that cannot be used in the Avro type name with underscore; 'avro\_unicode' replaces the underscore or characters that cannot be used in the Avro type name with corresponding unicode like \_uxxxx. Note: \_ is an escape sequence like backslash in Java; 'none' does not apply any adjustment (default)|none|string| +|signalDataCollection|The name of the data collection that is used to send signals/commands to Debezium. Signaling is disabled when not set.||string| +|signalEnabledChannels|List of channel names that are enabled. The source channel is enabled by default|source|string| +|signalPollIntervalMs|Interval for looking for new signals in registered channels, given in milliseconds. Defaults to 5 seconds.|5s|duration| +|skippedOperations|The comma-separated list of operations to skip during streaming, defined as: 'c' for inserts/create; 'u' for updates; 'd' for deletes; 't' for truncates; and 'none' to indicate nothing skipped. By default, only truncate operations will be skipped.|t|string| +|snapshotDelayMs|A delay period before a snapshot will begin, given in milliseconds. Defaults to 0 ms.|0ms|duration| +|snapshotFetchSize|The maximum number of records that should be loaded into memory while performing a snapshot.||integer| +|snapshotIncludeCollectionList|This setting must be set to specify a list of tables/collections whose snapshot must be taken on creating or restarting the connector.||string| +|snapshotIsolationMode|Controls which transaction isolation level is used and how long the connector locks the captured tables. The default is 'repeatable\_read', which means that the repeatable read isolation level is used. 
In addition, the type of lock acquired during the schema snapshot depends on the snapshot.locking.mode property. Using a value of 'exclusive' ensures that the connector holds the type of lock specified with the snapshot.locking.mode property (and thus prevents any reads and updates) for all captured tables during the entire snapshot duration. When 'snapshot' is specified, the connector runs the initial snapshot in SNAPSHOT isolation level, which guarantees snapshot consistency. In addition, neither table nor row-level locks are held. When 'read\_committed' is specified, the connector runs the initial snapshot in READ COMMITTED isolation level. No long-running locks are taken, so that the initial snapshot does not prevent other transactions from updating table rows. Snapshot consistency is not guaranteed. In 'read\_uncommitted' mode neither table nor row-level locks are acquired, but the connector does not guarantee snapshot consistency.|repeatable\_read|string| +|snapshotLockTimeoutMs|The maximum number of milliseconds to wait for table locks at the beginning of a snapshot. If locks cannot be acquired in this time frame, the snapshot will be aborted. Defaults to 10 seconds.|10s|duration| +|snapshotMaxThreads|The maximum number of threads used to perform the snapshot. Defaults to 1.|1|integer| +|snapshotMode|The criteria for running a snapshot upon startup of the connector. 
After the snapshot completes, the connector begins to stream changes from the transaction log; 'initial\_only': The connector performs a snapshot as it does for the 'initial' option, but after the connector completes the snapshot, it stops, and does not stream changes from the transaction log; 'schema\_only': If the connector does not detect any offsets for the logical server name, it runs a snapshot that captures only the schema (table structures), but not any table data. After the snapshot completes, the connector begins to stream changes from the transaction log.|initial|string| +|snapshotModeConfigurationBasedSnapshotData|When 'snapshot.mode' is set to configuration\_based, this setting lets you specify whether the data should be snapshotted.|false|boolean| +|snapshotModeConfigurationBasedSnapshotOnDataError|When 'snapshot.mode' is set to configuration\_based, this setting lets you specify whether the data should be snapshotted in case of error.|false|boolean| +|snapshotModeConfigurationBasedSnapshotOnSchemaError|When 'snapshot.mode' is set to configuration\_based, this setting lets you specify whether the schema should be snapshotted in case of error.|false|boolean| +|snapshotModeConfigurationBasedSnapshotSchema|When 'snapshot.mode' is set to configuration\_based, this setting lets you specify whether the schema should be snapshotted.|false|boolean| +|snapshotModeConfigurationBasedStartStream|When 'snapshot.mode' is set to configuration\_based, this setting lets you specify whether the stream should start after the snapshot.|false|boolean| +|snapshotModeCustomName|When 'snapshot.mode' is set to custom, this setting must be set to specify the name of the custom implementation provided in the 'name()' method. 
The implementation must implement the 'Snapshotter' interface and is called on each app boot to determine whether to do a snapshot.||string| +|snapshotSelectStatementOverrides|This property contains a comma-separated list of fully-qualified tables (DB\_NAME.TABLE\_NAME) or (SCHEMA\_NAME.TABLE\_NAME), depending on the specific connectors. Select statements for the individual tables are specified in further configuration properties, one for each table, identified by the id 'snapshot.select.statement.overrides.DB\_NAME.TABLE\_NAME' or 'snapshot.select.statement.overrides.SCHEMA\_NAME.TABLE\_NAME', respectively. The value of those properties is the select statement to use when retrieving data from the specific table during snapshotting. A possible use case for large append-only tables is setting a specific point at which to start (resume) snapshotting, in case a previous snapshotting was interrupted.||string| +|snapshotTablesOrderByRowCount|Controls the order in which tables are processed in the initial snapshot. A descending value will order the tables by row count descending. An ascending value will order the tables by row count ascending. A value of disabled (the default) will disable ordering by row count.|disabled|string| +|sourceinfoStructMaker|The name of the SourceInfoStructMaker class that returns the SourceInfo schema and struct.|io.debezium.connector.sqlserver.SqlServerSourceInfoStructMaker|string| +|streamingDelayMs|A delay period after the snapshot is completed and the streaming begins, given in milliseconds. 
Defaults to 0 ms.|0ms|duration| +|tableExcludeList|A comma-separated list of regular expressions that match the fully-qualified names of tables to be excluded from monitoring||string| +|tableIgnoreBuiltin|Flag specifying whether built-in tables should be ignored.|true|boolean| +|tableIncludeList|The tables for which changes are to be captured||string| +|timePrecisionMode|Time, date, and timestamps can be represented with different kinds of precisions, including: 'adaptive' (the default) bases the precision of time, date, and timestamp values on the database column's precision; 'adaptive\_time\_microseconds' like 'adaptive' mode, but TIME fields always use microseconds precision; 'connect' always represents time, date, and timestamp values using Kafka Connect's built-in representations for Time, Date, and Timestamp, which uses millisecond precision regardless of the database columns' precision.|adaptive|string| +|tombstonesOnDelete|Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a delete event (false). Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events pertaining to the given key once the source record was deleted.|false|boolean| +|topicNamingStrategy|The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, heartbeat event etc.|io.debezium.schema.SchemaTopicNamingStrategy|string| +|topicPrefix|Topic prefix that identifies and provides a namespace for the particular database server/cluster that is capturing changes. The topic prefix should be unique across all other connectors, since it is used as a prefix for all Kafka topic names that receive events emitted by this connector. 
Only alphanumeric characters, hyphens, dots and underscores are accepted.||string| +|transactionMetadataFactory|Class to make transaction context \& transaction struct/schemas|io.debezium.pipeline.txmetadata.DefaultTransactionMetadataFactory|string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|name|Unique name for the connector. Attempting to register again with the same name will fail.||string| +|additionalProperties|Additional properties for Debezium components in case they can't be set directly on the Camel configurations (e.g: setting Kafka Connect properties needed by the Debezium engine, for example setting KafkaOffsetBackingStore), the properties have to be prefixed with additionalProperties.. E.g: additionalProperties.transactional.id=12345\&additionalProperties.schema.registry.url=http://localhost:8811/avro||object| +|internalKeyConverter|The Converter class that should be used to serialize and deserialize key data for offsets. The default is the JSON converter.|org.apache.kafka.connect.json.JsonConverter|string| +|internalValueConverter|The Converter class that should be used to serialize and deserialize value data for offsets. The default is the JSON converter.|org.apache.kafka.connect.json.JsonConverter|string| +|offsetCommitPolicy|The name of the Java class of the commit policy. It defines when an offset commit has to be triggered based on the number of events processed and the time elapsed since the last commit. This class must implement the interface 'OffsetCommitPolicy'. The default is a periodic commit policy based upon time intervals.||string| +|offsetCommitTimeoutMs|Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. The default is 5 seconds.|5000|duration| +|offsetFlushIntervalMs|Interval at which to try committing offsets. 
The default is 1 minute.|60000|duration| +|offsetStorage|The name of the Java class that is responsible for persistence of connector offsets.|org.apache.kafka.connect.storage.FileOffsetBackingStore|string| +|offsetStorageFileName|Path to the file where offsets are to be stored. Required when offset.storage is set to the FileOffsetBackingStore.||string| +|offsetStoragePartitions|The number of partitions used when creating the offset storage topic. Required when offset.storage is set to the 'KafkaOffsetBackingStore'.||integer| +|offsetStorageReplicationFactor|Replication factor used when creating the offset storage topic. Required when offset.storage is set to the KafkaOffsetBackingStore.||integer| +|offsetStorageTopic|The name of the Kafka topic where offsets are to be stored. Required when offset.storage is set to the KafkaOffsetBackingStore.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|binaryHandlingMode|Specify how binary (blob, binary, etc.) columns should be represented in change events, including: 'bytes' represents binary data as byte array (default); 'base64' represents binary data as base64-encoded string; 'base64-url-safe' represents binary data as base64-url-safe-encoded string; 'hex' represents binary data as hex-encoded (base16) string|bytes|string| +|columnExcludeList|Regular expressions matching columns to exclude from change events||string| +|columnIncludeList|Regular expressions matching columns to include in change events||string| +|columnPropagateSourceType|A comma-separated list of regular expressions matching fully-qualified names of columns that adds the columns' original type and original length as parameters to the corresponding field schemas in the emitted change records.||string| +|converters|Optional list of custom converters that would be used instead of default ones. The converters are defined using '.type' config option and configured using options '.'||string| +|customMetricTags|The custom metric tags accept key-value pairs used to customize the MBean object name; they are appended to the end of the regular name, each key representing a tag for the MBean object name and the corresponding value being that tag's value. 
For example: k1=v1,k2=v2||string| +|databaseHostname|Resolvable hostname or IP address of the database server.||string| +|databaseInstance|The SQL Server instance name||string| +|databaseNames|The names of the databases from which the connector should capture changes||string| +|databasePassword|Password of the database user to be used when connecting to the database.||string| +|databasePort|Port of the database server.|1433|integer| +|databaseQueryTimeoutMs|Time to wait for a query to execute, given in milliseconds. Defaults to 600 seconds (600,000 ms); zero means there is no limit.|10m|duration| +|databaseUser|Name of the database user to be used when connecting to the database.||string| +|dataQueryMode|Controls how the connector queries CDC data. The default is 'function', which means the data is queried by calling the cdc.fn\_cdc\_get\_all\_changes\_# function. The value of 'direct' makes the connector query the change tables directly.|function|string| +|datatypePropagateSourceType|A comma-separated list of regular expressions matching the database-specific data type names that adds the data type's original type and original length as parameters to the corresponding field schemas in the emitted change records.||string| +|decimalHandlingMode|Specify how DECIMAL and NUMERIC columns should be represented in change events, including: 'precise' (the default) uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect's 'org.apache.kafka.connect.data.Decimal' type; 'string' uses string to represent values; 'double' represents values using Java's 'double', which may not offer the precision but will be far easier to use in consumers.|precise|string| +|errorsMaxRetries|The maximum number of retries on connection errors before failing (-1 = no limit, 0 = disabled, > 0 = number of retries).|-1|integer| +|eventProcessingFailureHandlingMode|Specify how failures during processing of events (i.e. 
when encountering a corrupted event) should be handled, including: 'fail' (the default) an exception indicating the problematic event and its position is raised, causing the connector to be stopped; 'warn' the problematic event and its position will be logged and the event will be skipped; 'ignore' the problematic event will be skipped.|fail|string| +|heartbeatActionQuery|The query executed with every heartbeat.||string| +|heartbeatIntervalMs|Length of an interval in milliseconds in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default.|0ms|duration| +|heartbeatTopicsPrefix|The prefix that is used to name heartbeat topics. Defaults to \_\_debezium-heartbeat.|\_\_debezium-heartbeat|string| +|includeSchemaChanges|Whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID. Each schema change will be recorded using a key that contains the database name and whose value includes a logical description of the new schema and optionally the DDL statement(s). The default is 'true'. This is independent of how the connector internally records database schema history.|true|boolean| +|includeSchemaComments|Whether the connector should parse table and column comments into the metadata object. Note: enabling this option has implications on memory usage. The number and size of ColumnImpl objects is what largely impacts how much memory is consumed by the Debezium connectors, and adding a String to each of them can potentially be quite heavy. The default is 'false'.|false|boolean| +|incrementalSnapshotAllowSchemaChanges|Detect schema change during an incremental snapshot and re-select a current chunk to avoid locking DDLs. Note that changes to a primary key are not supported and can cause incorrect results if performed during an incremental snapshot. 
Another limitation is that if a schema change affects only columns' default values, then the change won't be detected until the DDL is processed from the binlog stream. This doesn't affect the snapshot events' values, but the schema of snapshot events may have outdated defaults.|false|boolean| +|incrementalSnapshotChunkSize|The maximum size of a chunk (number of documents/rows) for incremental snapshotting|1024|integer| +|incrementalSnapshotOptionRecompile|Add OPTION(RECOMPILE) on each SELECT statement during the incremental snapshot process. This prevents parameter sniffing but can cause CPU pressure on the source database.|false|boolean| +|incrementalSnapshotWatermarkingStrategy|Specify the strategy used for watermarking during an incremental snapshot: 'insert\_insert' both open and close signals are written into the signal data collection (default); 'insert\_delete' only the open signal is written to the signal data collection, and the close deletes the corresponding open signal.|INSERT\_INSERT|string| +|maxBatchSize|Maximum size of each batch of source records. Defaults to 2048.|2048|integer| +|maxIterationTransactions|This property can be used to reduce the connector memory usage footprint when changes are streamed from multiple tables per database.|500|integer| +|maxQueueSize|Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. Defaults to 8192, and should always be larger than the maximum batch size.|8192|integer| +|maxQueueSizeInBytes|Maximum size of the queue in bytes for change events read from the database log but not yet recorded or forwarded. Defaults to 0, meaning the feature is not enabled.|0|integer| +|messageKeyColumns|A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as message key. 
Each expression must match the pattern ':', where the table names could be defined as (DB\_NAME.TABLE\_NAME) or (SCHEMA\_NAME.TABLE\_NAME), depending on the specific connector, and the key columns are a comma-separated list of columns representing the custom key. For any table without an explicit key configuration the table's primary key column(s) will be used as message key. Example: dbserver1.inventory.orderlines:orderId,orderLineId;dbserver1.inventory.orders:id||string| +|notificationEnabledChannels|List of notification channel names that are enabled.||string| +|notificationSinkTopicName|The name of the topic for the notifications. This is required in case 'sink' is in the list of enabled channels.||string| +|pollIntervalMs|Time to wait for new change events to appear after receiving no events, given in milliseconds. Defaults to 500 ms.|500ms|duration| +|postProcessors|Optional list of post processors. The processors are defined using '.type' config option and configured using options ''||string| +|provideTransactionMetadata|Enables transaction metadata extraction together with event counting|false|boolean| +|retriableRestartConnectorWaitMs|Time to wait before restarting the connector after a retriable exception occurs. Defaults to 10000ms.|10s|duration| +|schemaHistoryInternal|The name of the SchemaHistory class that should be used to store and recover database schema changes. The configuration properties for the history are prefixed with the 'schema.history.internal.' string.|io.debezium.storage.kafka.history.KafkaSchemaHistory|string| +|schemaHistoryInternalFileFilename|The path to the file that will be used to record the database schema history||string| +|schemaHistoryInternalSkipUnparseableDdl|Controls the action Debezium will take when it meets a DDL statement in the binlog that it cannot parse. By default the connector will stop operating, but by changing the setting it can ignore the statements which it cannot parse. 
If skipping is enabled then Debezium can miss metadata changes.|false|boolean| +|schemaHistoryInternalStoreOnlyCapturedDatabasesDdl|Controls what DDL Debezium will store in the database schema history. By default (true) only DDL that manipulates a table from a captured schema/database will be stored. If set to false, then Debezium will store all incoming DDL statements.|false|boolean| +|schemaHistoryInternalStoreOnlyCapturedTablesDdl|Controls what DDL Debezium will store in the database schema history. By default (false) Debezium will store all incoming DDL statements. If set to true, then only DDL that manipulates a captured table will be stored.|false|boolean| +|schemaNameAdjustmentMode|Specify how schema names should be adjusted for compatibility with the message converter used by the connector, including: 'avro' replaces the characters that cannot be used in the Avro type name with underscore; 'avro\_unicode' replaces the underscore or characters that cannot be used in the Avro type name with corresponding unicode like \_uxxxx. Note: \_ is an escape sequence like backslash in Java; 'none' does not apply any adjustment (default)|none|string| +|signalDataCollection|The name of the data collection that is used to send signals/commands to Debezium. Signaling is disabled when not set.||string| +|signalEnabledChannels|List of channel names that are enabled. The source channel is enabled by default|source|string| +|signalPollIntervalMs|Interval for looking for new signals in registered channels, given in milliseconds. Defaults to 5 seconds.|5s|duration| +|skippedOperations|The comma-separated list of operations to skip during streaming, defined as: 'c' for inserts/create; 'u' for updates; 'd' for deletes; 't' for truncates; and 'none' to indicate nothing skipped. By default, only truncate operations will be skipped.|t|string| +|snapshotDelayMs|A delay period before a snapshot will begin, given in milliseconds. 
Defaults to 0 ms.|0ms|duration| +|snapshotFetchSize|The maximum number of records that should be loaded into memory while performing a snapshot.||integer| +|snapshotIncludeCollectionList|This setting must be set to specify a list of tables/collections whose snapshot must be taken on creating or restarting the connector.||string| +|snapshotIsolationMode|Controls which transaction isolation level is used and how long the connector locks the captured tables. The default is 'repeatable\_read', which means that the repeatable read isolation level is used. In addition, the type of lock acquired during the schema snapshot depends on the snapshot.locking.mode property. Using a value of 'exclusive' ensures that the connector holds the type of lock specified with the snapshot.locking.mode property (and thus prevents any reads and updates) for all captured tables during the entire snapshot duration. When 'snapshot' is specified, the connector runs the initial snapshot in SNAPSHOT isolation level, which guarantees snapshot consistency. In addition, neither table nor row-level locks are held. When 'read\_committed' is specified, the connector runs the initial snapshot in READ COMMITTED isolation level. No long-running locks are taken, so that the initial snapshot does not prevent other transactions from updating table rows. Snapshot consistency is not guaranteed. In 'read\_uncommitted' mode neither table nor row-level locks are acquired, but the connector does not guarantee snapshot consistency.|repeatable\_read|string| +|snapshotLockTimeoutMs|The maximum number of milliseconds to wait for table locks at the beginning of a snapshot. If locks cannot be acquired in this time frame, the snapshot will be aborted. Defaults to 10 seconds.|10s|duration| +|snapshotMaxThreads|The maximum number of threads used to perform the snapshot. Defaults to 1.|1|integer| +|snapshotMode|The criteria for running a snapshot upon startup of the connector. 
Select one of the following snapshot options: 'initial' (default): If the connector does not detect any offsets for the logical server name, it runs a snapshot that captures the current full state of the configured tables. After the snapshot completes, the connector begins to stream changes from the transaction log; 'initial\_only': The connector performs a snapshot as it does for the 'initial' option, but after the connector completes the snapshot, it stops, and does not stream changes from the transaction log; 'schema\_only': If the connector does not detect any offsets for the logical server name, it runs a snapshot that captures only the schema (table structures), but not any table data. After the snapshot completes, the connector begins to stream changes from the transaction log.|initial|string| +|snapshotModeConfigurationBasedSnapshotData|When 'snapshot.mode' is set to configuration\_based, this setting lets you specify whether the data should be snapshotted.|false|boolean| +|snapshotModeConfigurationBasedSnapshotOnDataError|When 'snapshot.mode' is set to configuration\_based, this setting lets you specify whether the data should be snapshotted in case of error.|false|boolean| +|snapshotModeConfigurationBasedSnapshotOnSchemaError|When 'snapshot.mode' is set to configuration\_based, this setting lets you specify whether the schema should be snapshotted in case of error.|false|boolean| +|snapshotModeConfigurationBasedSnapshotSchema|When 'snapshot.mode' is set to configuration\_based, this setting lets you specify whether the schema should be snapshotted.|false|boolean| +|snapshotModeConfigurationBasedStartStream|When 'snapshot.mode' is set to configuration\_based, this setting lets you specify whether the stream should start after the snapshot.|false|boolean| +|snapshotModeCustomName|When 'snapshot.mode' is set to custom, this setting must be set to specify the name of the custom implementation provided in the 
'name()' method. The implementation must implement the 'Snapshotter' interface and is called on each app boot to determine whether to do a snapshot.||string|
+|snapshotSelectStatementOverrides|This property contains a comma-separated list of fully-qualified tables (DB\_NAME.TABLE\_NAME) or (SCHEMA\_NAME.TABLE\_NAME), depending on the specific connector. Select statements for the individual tables are specified in further configuration properties, one for each table, identified by the id 'snapshot.select.statement.overrides.DB\_NAME.TABLE\_NAME' or 'snapshot.select.statement.overrides.SCHEMA\_NAME.TABLE\_NAME', respectively. The value of those properties is the select statement to use when retrieving data from the specific table during snapshotting. A possible use case for large append-only tables is setting a specific point where to start (resume) snapshotting, in case a previous snapshotting was interrupted.||string|
+|snapshotTablesOrderByRowCount|Controls the order in which tables are processed in the initial snapshot. A descending value will order the tables by row count descending. An ascending value will order the tables by row count ascending. A value of disabled (the default) will disable ordering by row count.|disabled|string|
+|sourceinfoStructMaker|The name of the SourceInfoStructMaker class that returns the SourceInfo schema and struct.|io.debezium.connector.sqlserver.SqlServerSourceInfoStructMaker|string|
+|streamingDelayMs|A delay period after the snapshot is completed and the streaming begins, given in milliseconds. 
Defaults to 0 ms.|0ms|duration|
+|tableExcludeList|A comma-separated list of regular expressions that match the fully-qualified names of tables to be excluded from monitoring.||string|
+|tableIgnoreBuiltin|Flag specifying whether built-in tables should be ignored.|true|boolean|
+|tableIncludeList|The tables for which changes are to be captured.||string|
+|timePrecisionMode|Time, date, and timestamps can be represented with different kinds of precision, including: 'adaptive' (the default) bases the precision of time, date, and timestamp values on the database column's precision; 'adaptive\_time\_microseconds' like 'adaptive' mode, but TIME fields always use microseconds precision; 'connect' always represents time, date, and timestamp values using Kafka Connect's built-in representations for Time, Date, and Timestamp, which use millisecond precision regardless of the database columns' precision.|adaptive|string|
+|tombstonesOnDelete|Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a delete event (false). Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events pertaining to the given key once the source record got deleted.|false|boolean|
+|topicNamingStrategy|The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, heartbeat events etc.|io.debezium.schema.SchemaTopicNamingStrategy|string|
+|topicPrefix|Topic prefix that identifies and provides a namespace for the particular database server/cluster that is capturing changes. The topic prefix should be unique across all other connectors, since it is used as a prefix for all Kafka topic names that receive events emitted by this connector. 
Only alphanumeric characters, hyphens, dots and underscores should be used.||string|
+|transactionMetadataFactory|Class to make transaction context & transaction struct/schemas|io.debezium.pipeline.txmetadata.DefaultTransactionMetadataFactory|string|
diff --git a/camel-dhis2.md b/camel-dhis2.md
new file mode 100644
index 0000000000000000000000000000000000000000..1fc7cd06d56c110fbbbb18b879273eaed73b9856
--- /dev/null
+++ b/camel-dhis2.md
@@ -0,0 +1,305 @@
+# Dhis2
+
+**Since Camel 4.0**
+
+**Both producer and consumer are supported**
+
+The Camel DHIS2 component leverages the [DHIS2 Java
+SDK](https://github.com/dhis2/dhis2-java-sdk) to integrate Apache Camel
+with [DHIS2](https://dhis2.org/). DHIS2 is a free, open-source, fully
+customizable platform for collecting, analyzing, visualizing, and
+sharing aggregate and individual data for district-level, national,
+regional, and international system and program management in health,
+education, and other domains.
+
+Maven users will need to add the following dependency to their
+`pom.xml`:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-dhis2</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI Format
+
+    dhis2://operation/method[?options]
+
+# Examples
+
+- Fetch an organisation unit by ID:
+
+      package org.camel.dhis2.example;
+
+      import org.apache.camel.builder.RouteBuilder;
+
+      public class MyRouteBuilder extends RouteBuilder {
+
+          public void configure() {
+              from("direct:getResource")
+                  .to("dhis2://get/resource?path=organisationUnits/O6uvpzGd5pu&username=admin&password=district&baseApiUrl=https://play.dhis2.org/40.2.2/api")
+                  .unmarshal()
+                  .json(org.hisp.dhis.api.model.v40_2_2.OrganisationUnit.class);
+          }
+      }
+
+- Fetch an organisation unit code by ID:
+
+      package org.camel.dhis2.example;
+
+      import org.apache.camel.builder.RouteBuilder;
+
+      public class MyRouteBuilder extends RouteBuilder {
+
+          public void configure() {
+              from("direct:getResource")
+                  .to("dhis2://get/resource?path=organisationUnits/O6uvpzGd5pu&fields=code&username=admin&password=district&baseApiUrl=https://play.dhis2.org/40.2.2/api")
+                  .unmarshal()
+                  .json(org.hisp.dhis.api.model.v40_2_2.OrganisationUnit.class);
+          }
+      }
+
+- Fetch all organisation units:
+
+      package org.camel.dhis2.example;
+
+      import org.apache.camel.builder.RouteBuilder;
+
+      public class MyRouteBuilder extends RouteBuilder {
+
+          public void configure() {
+              from("direct:getCollection")
+                  .to("dhis2://get/collection?path=organisationUnits&arrayName=organisationUnits&username=admin&password=district&baseApiUrl=https://play.dhis2.org/40.2.2/api")
+                  .split().body()
+                  .convertBodyTo(org.hisp.dhis.api.model.v40_2_2.OrganisationUnit.class)
+                  .log("${body}");
+          }
+      }
+
+- Fetch all organisation unit codes:
+
+      package org.camel.dhis2.example;
+
+      import org.apache.camel.builder.RouteBuilder;
+
+      public class MyRouteBuilder extends RouteBuilder {
+
+          public void configure() {
+              from("direct:getCollection")
.to("dhis2://get/collection?path=organisationUnits&fields=code&arrayName=organisationUnits&username=admin&password=district&baseApiUrl=https://play.dhis2.org/40.2.2/api") + .split().body() + .convertBodyTo(org.hisp.dhis.api.model.v40_2_2.OrganisationUnit.class) + .log("${body}"); + } + } + +- Fetch users with a phone number: + + package org.camel.dhis2.example; + + import org.apache.camel.builder.RouteBuilder; + + public class MyRouteBuilder extends RouteBuilder { + + public void configure() { + from("direct:getCollection") + .to("dhis2://get/collection?path=users&filter=phoneNumber:!null:&arrayName=users&username=admin&password=district&baseApiUrl=https://play.dhis2.org/40.2.2/api") + .split().body() + .convertBodyTo(org.hisp.dhis.api.model.v40_2_2.User.class) + .log("${body}"); + } + } + +- Save a data value set + + package org.camel.dhis2.example; + + import org.apache.camel.LoggingLevel; + import org.apache.camel.builder.RouteBuilder; + import org.hisp.dhis.api.model.v40_2_2.DataValueSet; + import org.hisp.dhis.api.model.v40_2_2.DataValue; + import org.hisp.dhis.api.model.v40_2_2.WebMessage; + import org.hisp.dhis.integration.sdk.support.period.PeriodBuilder; + + import java.time.ZoneOffset; + import java.time.ZonedDateTime; + import java.time.format.DateTimeFormatter; + import java.util.Date; + import java.util.List; + + public class MyRouteBuilder extends RouteBuilder { + + public void configure() { + from("direct:postResource") + .setBody(exchange -> new DataValueSet().withCompleteDate( + ZonedDateTime.now(ZoneOffset.UTC).format(DateTimeFormatter.ISO_INSTANT)) + .withOrgUnit("O6uvpzGd5pu") + .withDataSet("lyLU2wR22tC").withPeriod(PeriodBuilder.monthOf(new Date(), -1)) + .withDataValues( + List.of(new DataValue().withDataElement("aIJZ2d2QgVV").withValue("20")))) + .to("dhis2://post/resource?path=dataValueSets&username=admin&password=district&baseApiUrl=https://play.dhis2.org/40.2.2/api") + .unmarshal().json(WebMessage.class) + .choice() + .when(exchange -> 
!exchange.getMessage().getBody(WebMessage.class).getStatus().equals(WebMessage.StatusRef.OK)) + .log(LoggingLevel.ERROR, "Import error from DHIS2 while saving data value set => ${body}") + .end(); + } + } + +- Update an organisation unit + + package org.camel.dhis2.example; + + import org.apache.camel.LoggingLevel; + import org.apache.camel.builder.RouteBuilder; + import org.hisp.dhis.api.model.v40_2_2.OrganisationUnit; + import org.hisp.dhis.api.model.v40_2_2.WebMessage; + import org.hisp.dhis.integration.sdk.support.period.PeriodBuilder; + + import java.time.ZoneOffset; + import java.time.ZonedDateTime; + import java.time.format.DateTimeFormatter; + import java.util.Date; + import java.util.List; + + public class MyRouteBuilder extends RouteBuilder { + + public void configure() { + from("direct:putResource") + .setBody(exchange -> new OrganisationUnit().withName("Acme").withShortName("Acme").withOpeningDate(new Date())) + .to("dhis2://put/resource?path=organisationUnits/jUb8gELQApl&username=admin&password=district&baseApiUrl=https://play.dhis2.org/40.2.2/api") + .unmarshal().json(WebMessage.class) + .choice() + .when(exchange -> !exchange.getMessage().getBody(WebMessage.class).getStatus().equals(WebMessage.StatusRef.OK)) + .log(LoggingLevel.ERROR, "Import error from DHIS2 while updating org unit => ${body}") + .end(); + } + } + +- Delete an organisation unit + + package org.camel.dhis2.example; + + import org.apache.camel.LoggingLevel; + import org.apache.camel.builder.RouteBuilder; + import org.hisp.dhis.api.model.v40_2_2.WebMessage; + import org.hisp.dhis.integration.sdk.support.period.PeriodBuilder; + + import java.time.ZoneOffset; + import java.time.ZonedDateTime; + import java.time.format.DateTimeFormatter; + import java.util.Date; + import java.util.List; + + public class MyRouteBuilder extends RouteBuilder { + + public void configure() { + from("direct:deleteResource") + 
.to("dhis2://delete/resource?path=organisationUnits/jUb8gELQApl&username=admin&password=district&baseApiUrl=https://play.dhis2.org/40.2.2/api") + .unmarshal().json(WebMessage.class) + .choice() + .when(exchange -> !exchange.getMessage().getBody(WebMessage.class).getStatus().equals(WebMessage.StatusRef.OK)) + .log(LoggingLevel.ERROR, "Import error from DHIS2 while deleting org unit => ${body}") + .end(); + } + } + +- Run analytics + + package org.camel.dhis2.example; + + import org.apache.camel.builder.RouteBuilder; + + public class MyRouteBuilder extends RouteBuilder { + + public void configure() { + from("direct:resourceTablesAnalytics") + .to("dhis2://resourceTables/analytics?skipAggregate=false&skipEvents=true&lastYears=1&username=admin&password=district&baseApiUrl=https://play.dhis2.org/40.2.2/api"); + } + } + +- Reference DHIS2 client + + package org.camel.dhis2.example; + + import org.apache.camel.builder.RouteBuilder; + import org.hisp.dhis.integration.sdk.Dhis2ClientBuilder; + import org.hisp.dhis.integration.sdk.api.Dhis2Client; + + public class MyRouteBuilder extends RouteBuilder { + + public void configure() { + Dhis2Client dhis2Client = Dhis2ClientBuilder.newClient("https://play.dhis2.org/40.2.2/api", "admin", "district").build(); + getCamelContext().getRegistry().bind("dhis2Client", dhis2Client); + + from("direct:resourceTablesAnalytics") + .to("dhis2://resourceTables/analytics?skipAggregate=true&skipEvents=true&lastYears=1&client=#dhis2Client"); + } + } + +- Set custom query parameters + + package org.camel.dhis2.example; + + import org.apache.camel.builder.RouteBuilder; + + import java.util.List; + import java.util.Map; + + public class MyRouteBuilder extends RouteBuilder { + + public void configure() { + from("direct:postResource") + .setHeader("CamelDhis2.queryParams", constant(Map.of("cacheClear", List.of("true")))) + .to("dhis2://post/resource?path=maintenance&client=#dhis2Client"); + } + } + +## Component Configurations + + 
+|Name|Description|Default|Type|
+|---|---|---|---|
+|baseApiUrl|DHIS2 server base API URL (e.g., https://play.dhis2.org/2.39.1.1/api)||string|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazily (on the first message). By starting lazily, you can allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|client|References a user-defined org.hisp.dhis.integration.sdk.api.Dhis2Client. This option is mutually exclusive to the baseApiUrl, username, password, and personalAccessToken options||object|
+|configuration|To use the shared configuration||object|
+|password|Password of the DHIS2 username||string|
+|personalAccessToken|Personal access token to authenticate with DHIS2. This option is mutually exclusive to username and password||string|
+|username|Username of the DHIS2 user to operate as||string|
+
+## Endpoint Configurations
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|apiName|API operation (e.g., get)||object|
+|methodName|Subject of the API operation (e.g., resource)||string|
+|baseApiUrl|DHIS2 server base API URL (e.g., https://play.dhis2.org/2.39.1.1/api)||string|
+|inBody|Sets the name of a parameter to be passed in the exchange In Body||string|
+|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and routed in Camel.||object|
+|lazyStartProducer|Whether the producer should be started lazily (on the first message). By starting lazily, you can allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time.|false|boolean|
+|client|References a user-defined org.hisp.dhis.integration.sdk.api.Dhis2Client. 
This option is mutually exclusive to the baseApiUrl, username, password, and personalAccessToken options||object|
+|backoffErrorThreshold|The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in.||integer|
+|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier should kick in.||integer|
+|backoffMultiplier|To let the scheduled polling consumer back off if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt happens again. When this option is in use, then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer|
+|delay|Milliseconds before the next poll.|500|integer|
+|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean|
+|initialDelay|Milliseconds before the first poll starts.|1000|integer|
+|repeatCount|Specifies a maximum limit on the number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer|
+|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object|
+|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object|
+|scheduler|To use a cron scheduler from either the camel-spring or camel-quartz component. 
Use value spring or quartz for the built-in scheduler|none|object|
+|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz or Spring based schedulers.||object|
+|startScheduler|Whether the scheduler should be auto started.|true|boolean|
+|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object|
+|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean|
+|password|Password of the DHIS2 username||string|
+|personalAccessToken|Personal access token to authenticate with DHIS2. This option is mutually exclusive to username and password||string|
+|username|Username of the DHIS2 user to operate as||string|
diff --git a/camel-digitalocean.md b/camel-digitalocean.md
new file mode 100644
index 0000000000000000000000000000000000000000..98f1d0811b1503178b6030e038db0a33ab797e48
--- /dev/null
+++ b/camel-digitalocean.md
@@ -0,0 +1,1025 @@
+# Digitalocean
+
+**Since Camel 2.19**
+
+**Only producer is supported**
+
+The DigitalOcean component allows you to manage Droplets and resources
+within the DigitalOcean cloud with **Camel** by encapsulating
+[digitalocean-api-java](https://www.digitalocean.com/community/projects/api-client-in-java).
+All the functionality that you are familiar with in the DigitalOcean
+control panel is also available through this Camel component.
+
+# Prerequisites
+
+You must have a valid DigitalOcean account and a valid OAuth token. You
+can generate an OAuth token by visiting the Apps & API section of
+the DigitalOcean control panel for your account.
+
+# URI format
+
+The **DigitalOcean Component** uses the following URI format:
+
+    digitalocean://endpoint?[options]
+
+where `endpoint` is a DigitalOcean resource type.
+
+You have to provide an **operation** value for each endpoint, with the
+`operation` URI option or the `CamelDigitalOceanOperation` message
+header.
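For illustration, here is a minimal sketch of both styles, with the operation passed first as a URI option and then as a message header. The `account` endpoint and `get` operation are taken from the tables below; the `oAuthToken` value is a placeholder:

```java
import org.apache.camel.builder.RouteBuilder;

public class AccountRouteBuilder extends RouteBuilder {

    public void configure() {
        // operation passed as a URI option
        from("direct:getAccount")
            .to("digitalocean:account?operation=get&oAuthToken=XXXXXX")
            .log("${body}");

        // the same call, with the operation passed as a message header
        from("direct:getAccountViaHeader")
            .setHeader("CamelDigitalOceanOperation", constant("get"))
            .to("digitalocean:account?oAuthToken=XXXXXX")
            .log("${body}");
    }
}
```

Either style works; the header takes effect per message, which is handy when one route targets several operations.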
+
+All **operation** values are defined in the `DigitalOceanOperations`
+enumeration.
+
+All **header** names used by the component are defined in the
+`DigitalOceanHeaders` enumeration.
+
+# Message body result
+
+All returned message bodies use objects provided by the
+**digitalocean-api-java** library.
+
+# API Rate Limits
+
+The DigitalOcean REST API encapsulated by the camel-digitalocean component is
+subject to API rate limiting. You can find the per-method limits in the
+[API Rate Limits
+documentation](https://developers.digitalocean.com/documentation/v2/#rate-limit).
+
+# Account endpoint
+
+|operation|Description|Headers|Result|
+|---|---|---|---|
+|get|get account info||com.myjeeva.digitalocean.pojo.Account|
+
+# BlockStorages endpoint
+
+|operation|Description|Headers|Result|
+|---|---|---|---|
+|list|list all the Block Storage volumes available on your account||List<com.myjeeva.digitalocean.pojo.Volume>|
+|get|show information about a Block Storage volume|`CamelDigitalOceanId` _Integer_|com.myjeeva.digitalocean.pojo.Volume|
+|get|show information about a Block Storage volume by name|`CamelDigitalOceanName` _String_,
`CamelDigitalOceanRegion` _String_|com.myjeeva.digitalocean.pojo.Volume|
+|listSnapshots|retrieve the snapshots that have been created from a volume|`CamelDigitalOceanId` _Integer_|List<com.myjeeva.digitalocean.pojo.Snapshot>|
+|create|create a new volume|`CamelDigitalOceanVolumeSizeGigabytes` _Integer_,
`CamelDigitalOceanName` _String_,
`CamelDigitalOceanDescription`* _String_,
`CamelDigitalOceanRegion`* _String_|com.myjeeva.digitalocean.pojo.Volume|
+|delete|delete a Block Storage volume, destroying all data and removing it from your account|`CamelDigitalOceanId` _Integer_|com.myjeeva.digitalocean.pojo.Delete|
+|delete|delete a Block Storage volume by name|`CamelDigitalOceanName` _String_,
`CamelDigitalOceanRegion` _String_|com.myjeeva.digitalocean.pojo.Delete|
+|attach|attach a Block Storage volume to a Droplet|`CamelDigitalOceanId` _Integer_,
`CamelDigitalOceanDropletId` _Integer_,
`CamelDigitalOceanDropletRegion` _String_|com.myjeeva.digitalocean.pojo.Action|
+|attach|attach a Block Storage volume to a Droplet by name|`CamelDigitalOceanName` _String_,
`CamelDigitalOceanDropletId` _Integer_,
`CamelDigitalOceanDropletRegion` _String_|com.myjeeva.digitalocean.pojo.Action|
+|detach|detach a Block Storage volume from a Droplet|`CamelDigitalOceanId` _Integer_,
`CamelDigitalOceanDropletId` _Integer_,
`CamelDigitalOceanDropletRegion` _String_|com.myjeeva.digitalocean.pojo.Action|
+|detach|detach a Block Storage volume from a Droplet by name|`CamelDigitalOceanName` _String_,
`CamelDigitalOceanDropletId` _Integer_,
`CamelDigitalOceanDropletRegion` _String_|com.myjeeva.digitalocean.pojo.Action|
+|resize|resize a Block Storage volume|`CamelDigitalOceanVolumeSizeGigabytes` _Integer_,
`CamelDigitalOceanRegion` _String_|com.myjeeva.digitalocean.pojo.Action|
+|listActions|retrieve all actions that have been executed on a volume|`CamelDigitalOceanId` _Integer_|List<com.myjeeva.digitalocean.pojo.Action>|
+
+# Droplets endpoint
+
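As a sketch of how these operations are invoked from a route: the header names below come from the operations table, while the `droplets` endpoint name, droplet name, region, size, and image values are illustrative and the token is a placeholder:

```java
import org.apache.camel.builder.RouteBuilder;

public class DropletRouteBuilder extends RouteBuilder {

    public void configure() {
        // create a droplet by supplying the required headers
        from("direct:createDroplet")
            .setHeader("CamelDigitalOceanOperation", constant("create"))
            .setHeader("CamelDigitalOceanName", constant("myDroplet"))
            .setHeader("CamelDigitalOceanRegion", constant("fra1"))
            .setHeader("CamelDigitalOceanDropletSize", constant("512mb"))
            .setHeader("CamelDigitalOceanDropletImage", constant("ubuntu-14-04-x64"))
            .to("digitalocean:droplets?oAuthToken=XXXXXX")
            .log("created droplet ${body.id}");
    }
}
```

Single-header operations such as `reboot` or `powerOff` follow the same pattern with only `CamelDigitalOceanOperation` and `CamelDigitalOceanId` set.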
+|operation|Description|Headers|Result|
+|---|---|---|---|
+|list|list all Droplets in your account||List<com.myjeeva.digitalocean.pojo.Droplet>|
+|get|show an individual droplet|`CamelDigitalOceanId` _Integer_|com.myjeeva.digitalocean.pojo.Droplet|
+|create|create a new Droplet|`CamelDigitalOceanName` _String_,
`CamelDigitalOceanDropletImage` _String_,
`CamelDigitalOceanRegion` _String_,
`CamelDigitalOceanDropletSize` _String_,
`CamelDigitalOceanDropletSSHKeys`* _List<String>_,
`CamelDigitalOceanDropletEnableBackups`* _Boolean_,
`CamelDigitalOceanDropletEnableIpv6`* _Boolean_,
`CamelDigitalOceanDropletEnablePrivateNetworking`* _Boolean_,
`CamelDigitalOceanDropletUserData`* _String_,
`CamelDigitalOceanDropletVolumes`* _List<String>_,
`CamelDigitalOceanDropletTags` _List<String>_|com.myjeeva.digitalocean.pojo.Droplet|
+|create|create multiple Droplets|`CamelDigitalOceanNames` _List<String>_,
`CamelDigitalOceanDropletImage` _String_,
`CamelDigitalOceanRegion` _String_,
`CamelDigitalOceanDropletSize` _String_,
`CamelDigitalOceanDropletSSHKeys`* _List<String>_,
`CamelDigitalOceanDropletEnableBackups`* _Boolean_,
`CamelDigitalOceanDropletEnableIpv6`* _Boolean_,
`CamelDigitalOceanDropletEnablePrivateNetworking`* _Boolean_,
`CamelDigitalOceanDropletUserData`* _String_,
`CamelDigitalOceanDropletVolumes`* _List<String>_,
`CamelDigitalOceanDropletTags` _List<String>_|com.myjeeva.digitalocean.pojo.Droplet|
+|delete|delete a Droplet|`CamelDigitalOceanId` _Integer_|com.myjeeva.digitalocean.pojo.Delete|
+|enableBackups|enable backups on an existing Droplet|`CamelDigitalOceanId` _Integer_|com.myjeeva.digitalocean.pojo.Action|
+|disableBackups|disable backups on an existing Droplet|`CamelDigitalOceanId` _Integer_|com.myjeeva.digitalocean.pojo.Action|
+|enableIpv6|enable IPv6 networking on an existing Droplet|`CamelDigitalOceanId` _Integer_|com.myjeeva.digitalocean.pojo.Action|
+|enablePrivateNetworking|enable private networking on an existing Droplet|`CamelDigitalOceanId` _Integer_|com.myjeeva.digitalocean.pojo.Action|
+|reboot|reboot a Droplet|`CamelDigitalOceanId` _Integer_|com.myjeeva.digitalocean.pojo.Action|
+|powerCycle|power cycle a Droplet|`CamelDigitalOceanId` _Integer_|com.myjeeva.digitalocean.pojo.Action|
+|shutdown|shutdown a Droplet|`CamelDigitalOceanId` _Integer_|com.myjeeva.digitalocean.pojo.Action|
+|powerOff|power off a Droplet|`CamelDigitalOceanId` _Integer_|com.myjeeva.digitalocean.pojo.Action|
+|powerOn|power on a Droplet|`CamelDigitalOceanId` _Integer_|com.myjeeva.digitalocean.pojo.Action|
+|restore|restore a Droplet|`CamelDigitalOceanId` _Integer_,
`CamelDigitalOceanImageId` _Integer_|com.myjeeva.digitalocean.pojo.Action|
+|passwordReset|reset the password for a Droplet|`CamelDigitalOceanId` _Integer_|com.myjeeva.digitalocean.pojo.Action|
+|resize|resize a Droplet|`CamelDigitalOceanId` _Integer_,
`CamelDigitalOceanDropletSize` _String_|com.myjeeva.digitalocean.pojo.Action|
+|rebuild|rebuild a Droplet|`CamelDigitalOceanId` _Integer_,
`CamelDigitalOceanImageId` _Integer_|com.myjeeva.digitalocean.pojo.Action|
+|rename|rename a Droplet|`CamelDigitalOceanId` _Integer_,
`CamelDigitalOceanName` _String_|com.myjeeva.digitalocean.pojo.Action|
+|changeKernel|change the kernel of a Droplet|`CamelDigitalOceanId` _Integer_,
`CamelDigitalOceanKernelId` _Integer_|com.myjeeva.digitalocean.pojo.Action|
+|takeSnapshot|snapshot a Droplet|`CamelDigitalOceanId` _Integer_,
`CamelDigitalOceanName`* _String_|com.myjeeva.digitalocean.pojo.Action|
+|tag|tag a Droplet|`CamelDigitalOceanId` _Integer_,
`CamelDigitalOceanName` _String_|com.myjeeva.digitalocean.pojo.Response|
+|untag|untag a Droplet|`CamelDigitalOceanId` _Integer_,
`CamelDigitalOceanName` _String_|com.myjeeva.digitalocean.pojo.Response|
+|listKernels|retrieve a list of all kernels available to a Droplet|`CamelDigitalOceanId` _Integer_|List<com.myjeeva.digitalocean.pojo.Kernel>|
+|listSnapshots|retrieve the snapshots that have been created from a Droplet|`CamelDigitalOceanId` _Integer_|List<com.myjeeva.digitalocean.pojo.Snapshot>|
+|listBackups|retrieve any backups associated with a Droplet|`CamelDigitalOceanId` _Integer_|List<com.myjeeva.digitalocean.pojo.Backup>|
+|listActions|retrieve all actions that have been executed on a Droplet|`CamelDigitalOceanId` _Integer_|List<com.myjeeva.digitalocean.pojo.Action>|
+|listNeighbors|retrieve a list of droplets that are running on the same physical server|`CamelDigitalOceanId` _Integer_|List<com.myjeeva.digitalocean.pojo.Droplet>|
+|listAllNeighbors|retrieve a list of any droplets that are running on the same physical hardware||List<com.myjeeva.digitalocean.pojo.Droplet>|
+
+# Images endpoint
+
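A sketch of one of the image operations below; the `images` endpoint name, the image id, and the target region are illustrative values, and the token is a placeholder:

```java
import org.apache.camel.builder.RouteBuilder;

public class ImageRouteBuilder extends RouteBuilder {

    public void configure() {
        // transfer an image to another region
        from("direct:transferImage")
            .setHeader("CamelDigitalOceanId", constant(56789))
            .setHeader("CamelDigitalOceanRegion", constant("ams2"))
            .to("digitalocean:images?operation=transfer&oAuthToken=XXXXXX")
            .log("${body}");
    }
}
```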
+|operation|Description|Headers|Result|
+|---|---|---|---|
+|list|list images available on your account|`CamelDigitalOceanType`* _DigitalOceanImageTypes_|List<com.myjeeva.digitalocean.pojo.Image>|
+|ownList|retrieve only the private images of a user||List<com.myjeeva.digitalocean.pojo.Image>|
+|listActions|retrieve all actions that have been executed on an Image|`CamelDigitalOceanId` _Integer_|List<com.myjeeva.digitalocean.pojo.Action>|
+|get|retrieve information about an image (public or private) by id|`CamelDigitalOceanId` _Integer_|com.myjeeva.digitalocean.pojo.Image|
+|get|retrieve information about a public image by slug|`CamelDigitalOceanDropletImage` _String_|com.myjeeva.digitalocean.pojo.Image|
+|update|update an image|`CamelDigitalOceanId` _Integer_,
`CamelDigitalOceanName` _String_|com.myjeeva.digitalocean.pojo.Image|
+|delete|delete an image|`CamelDigitalOceanId` _Integer_|com.myjeeva.digitalocean.pojo.Delete|
+|transfer|transfer an image to another region|`CamelDigitalOceanId` _Integer_,
`CamelDigitalOceanRegion` _String_|com.myjeeva.digitalocean.pojo.Action|
+|convert|convert an image, for example, a backup to a snapshot|`CamelDigitalOceanId` _Integer_|com.myjeeva.digitalocean.pojo.Action|
+
+# Snapshots endpoint
+
+|operation|Description|Headers|Result|
+|---|---|---|---|
+|list|list all the snapshots available on your account|`CamelDigitalOceanType`* _DigitalOceanSnapshotTypes_|List<com.myjeeva.digitalocean.pojo.Snapshot>|
+|get|retrieve information about a snapshot|`CamelDigitalOceanId` _Integer_|com.myjeeva.digitalocean.pojo.Snapshot|
+|delete|delete a snapshot|`CamelDigitalOceanId` _Integer_|com.myjeeva.digitalocean.pojo.Delete|
+
+# Keys endpoint
+
+|operation|Description|Headers|Result|
+|---|---|---|---|
+|list|list all the keys in your account||List<com.myjeeva.digitalocean.pojo.Key>|
+|get|retrieve information about a key by id|`CamelDigitalOceanId` _Integer_|com.myjeeva.digitalocean.pojo.Key|
+|get|retrieve information about a key by fingerprint|`CamelDigitalOceanKeyFingerprint` _String_|com.myjeeva.digitalocean.pojo.Key|
+|update|update a key by id|`CamelDigitalOceanId` _Integer_,
`CamelDigitalOceanName` _String_|com.myjeeva.digitalocean.pojo.Key|
+|update|update a key by fingerprint|`CamelDigitalOceanKeyFingerprint` _String_,
`CamelDigitalOceanName` _String_|com.myjeeva.digitalocean.pojo.Key|
+|delete|delete a key by id|`CamelDigitalOceanId` _Integer_|com.myjeeva.digitalocean.pojo.Delete|
+|delete|delete a key by fingerprint|`CamelDigitalOceanKeyFingerprint` _String_|com.myjeeva.digitalocean.pojo.Delete|
+
+# Regions endpoint
+
operationDescriptionHeadersResult

list

list all the regions that are +available

List<com.myjeeva.digitalocean.pojo.Region>

+ +# Sizes endpoint + + ++++++ + + + + + + + + + + + + + + + + +
operationDescriptionHeadersResult

list

list all the sizes that are +available

List<com.myjeeva.digitalocean.pojo.Size>

+ +# Floating IPs endpoint + + ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
operationDescriptionHeadersResult

list

list all the Floating IPs available on +your account

List<com.myjeeva.digitalocean.pojo.FloatingIP>

create

create a new Floating IP assigned to a +Droplet

CamelDigitalOceanId +Integer

List<com.myjeeva.digitalocean.pojo.FloatingIP>

create

create a new Floating IP assigned to a +Region

CamelDigitalOceanRegion +String

List<com.myjeeva.digitalocean.pojo.FloatingIP>

get

retrieve information about a Floating +IP

CamelDigitalOceanFloatingIPAddress +String

com.myjeeva.digitalocean.pojo.Key

delete

delete a Floating IP and remove it from +your account

CamelDigitalOceanFloatingIPAddress +String

com.myjeeva.digitalocean.pojo.Delete

assign

assign a Floating IP to a +Droplet

CamelDigitalOceanFloatingIPAddress +String,
+CamelDigitalOceanDropletId Integer

com.myjeeva.digitalocean.pojo.Action

unassign

un-assign a Floating IP

CamelDigitalOceanFloatingIPAddress +String

com.myjeeva.digitalocean.pojo.Action

listActions

retrieve all actions that have been +executed on a Floating IP

CamelDigitalOceanFloatingIPAddress +String

List<com.myjeeva.digitalocean.pojo.Action>
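
A sketch of `assign`, which takes both headers listed above. The `DigitalOceanHeaders.FLOATING_IP_ADDRESS` and `DigitalOceanHeaders.DROPLET_ID` constant names and the `floatingIPs` resource name are assumptions; the address, droplet id, and token are placeholders:

```java
from("direct:assignFloatingIp")
    .setHeader(DigitalOceanConstants.OPERATION, constant("assign"))
    // CamelDigitalOceanFloatingIPAddress header (constant name assumed)
    .setHeader(DigitalOceanHeaders.FLOATING_IP_ADDRESS, constant("203.0.113.10"))
    // CamelDigitalOceanDropletId header (constant name assumed)
    .setHeader(DigitalOceanHeaders.DROPLET_ID, constant(34772987))
    .to("digitalocean:floatingIPs?oAuthToken=XXXXXX");
```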

+ +# Tags endpoint + + ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
operationDescriptionHeadersResult

list

list all of your tags

List<com.myjeeva.digitalocean.pojo.Tag>

create

create a Tag

CamelDigitalOceanName +String

com.myjeeva.digitalocean.pojo.Tag

get

retrieve an individual tag

CamelDigitalOceanName +String

com.myjeeva.digitalocean.pojo.Tag

delete

delete a tag

CamelDigitalOceanName +String

com.myjeeva.digitalocean.pojo.Delete

update

update a tag

CamelDigitalOceanName +String,
+CamelDigitalOceanNewName String

com.myjeeva.digitalocean.pojo.Tag

+ +# Examples + +Get your account info + + from("direct:getAccountInfo") + .setHeader(DigitalOceanConstants.OPERATION, constant(DigitalOceanOperations.get)) + .to("digitalocean:account?oAuthToken=XXXXXX") + +Create a droplet + + from("direct:createDroplet") + .setHeader(DigitalOceanConstants.OPERATION, constant("create")) + .setHeader(DigitalOceanHeaders.NAME, constant("myDroplet")) + .setHeader(DigitalOceanHeaders.REGION, constant("fra1")) + .setHeader(DigitalOceanHeaders.DROPLET_IMAGE, constant("ubuntu-14-04-x64")) + .setHeader(DigitalOceanHeaders.DROPLET_SIZE, constant("512mb")) + .to("digitalocean:droplet?oAuthToken=XXXXXX") + +List all your droplets + + from("direct:getDroplets") + .setHeader(DigitalOceanConstants.OPERATION, constant("list")) + .to("digitalocean:droplets?oAuthToken=XXXXXX") + +Retrieve information for the Droplet (dropletId = 34772987) + + from("direct:getDroplet") + .setHeader(DigitalOceanConstants.OPERATION, constant("get")) + .setHeader(DigitalOceanConstants.ID, 34772987) + .to("digitalocean:droplet?oAuthToken=XXXXXX") + +Shutdown information for the Droplet (dropletId = 34772987) + + from("direct:shutdown") + .setHeader(DigitalOceanConstants.ID, 34772987) + .to("digitalocean:droplet?operation=shutdown&oAuthToken=XXXXXX") + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|operation|The operation to perform to the given resource.||object| +|page|Use for pagination. Force the page number.|1|integer| +|perPage|Use for pagination. Set the number of item per request. The maximum number of results per page is 200.|25|integer| +|resource|The DigitalOcean resource type on which perform the operation.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|digitalOceanClient|To use a existing configured DigitalOceanClient as client||object| +|httpProxyHost|Set a proxy host if needed||string| +|httpProxyPassword|Set a proxy password if needed||string| +|httpProxyPort|Set a proxy port if needed||integer| +|httpProxyUser|Set a proxy host if needed||string| +|oAuthToken|DigitalOcean OAuth Token||string| diff --git a/camel-direct.md b/camel-direct.md new file mode 100644 index 0000000000000000000000000000000000000000..fc27d2ad0f1eeb4510c0c7edaab263c5607c2182 --- /dev/null +++ b/camel-direct.md @@ -0,0 +1,76 @@ +# Direct + +**Since Camel 1.0** + +**Both producer and consumer are supported** + +The Direct component provides direct, synchronous invocation of any +consumers when a producer sends a message exchange. This endpoint can be +used to connect existing routes in the **same** camel context. + +**Asynchronous** + +The [SEDA](#seda-component.adoc) component provides asynchronous +invocation of any consumers when a producer sends a message exchange. + +# URI format + + direct:someId[?options] + +Where *someId* can be any string to uniquely identify the endpoint. + +# Samples + +In the route below, we use the direct component to link the two routes +together: + +Java +from("activemq:queue:order.in") +.to("bean:orderServer?method=validate") +.to("direct:processOrder"); + + from("direct:processOrder") + .to("bean:orderService?method=process") + .to("activemq:queue:order.out"); + +Spring XML + + + + + + + + + + + + +See also samples from the [SEDA](#seda-component.adoc) component, how +they can be used together. 
+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|block|If sending a message to a direct endpoint which has no active consumer, then we can tell the producer to block and wait for the consumer to become active.|true|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|timeout|The timeout value to use if block is enabled.|30000|integer| +|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|name|Name of direct endpoint||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|block|If sending a message to a direct endpoint which has no active consumer, then we can tell the producer to block and wait for the consumer to become active.|true|boolean| +|failIfNoConsumers|Whether the producer should fail by throwing an exception, when sending to a DIRECT endpoint with no active consumers.|true|boolean| +|timeout|The timeout value to use if block is enabled.|30000|integer| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|synchronous|Whether synchronous processing is forced. If enabled then the producer thread, will be forced to wait until the message has been completed before the same thread will continue processing. 
If disabled (default) then the producer thread may be freed and can do other work while the message is continued processed by other threads (reactive).|false|boolean| diff --git a/camel-disruptor-vm.md b/camel-disruptor-vm.md new file mode 100644 index 0000000000000000000000000000000000000000..83ca571d3d908623709952f8cf0e85c5688b75bd --- /dev/null +++ b/camel-disruptor-vm.md @@ -0,0 +1,114 @@ +# Disruptor-vm + +**Since Camel 2.12** + +**Both producer and consumer are supported** + +The Disruptor component provides asynchronous +[SEDA](https://en.wikipedia.org/wiki/Staged_event-driven_architecture) +behavior similarly to the standard SEDA component. However, it uses a +[Disruptor](https://github.com/LMAX-Exchange/disruptor) instead of a +[BlockingQueue](http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/BlockingQueue.html) +used by the standard [SEDA](#seda-component.adoc). + +As with the SEDA component, buffers of the Disruptor endpoints are only +visible within a **single** CamelContext and no support is provided for +persistence or recovery. The buffers of the **disruptor-vm:** endpoints +also provide support for communication across CamelContexts instances, +so you can use this mechanism to communicate across web applications (as +long as **camel-disruptor.jar** is on the **system/boot** classpath). + +The main advantage of choosing to use the Disruptor component over the +SEDA is performance in use cases where there is high contention between +producer(s) and/or multicasted or concurrent consumers. In those cases, +significant increases of throughput and reduction of latency has been +observed. Performance in scenarios without contention is comparable to +the SEDA component. + +The Disruptor is implemented with the intention of mimicking the +behavior and options of the SEDA component as much as possible. The main +differences between them are the following: + +- The buffer used is always bounded in size (default 1024 exchanges). 
+ +- As the buffer is always bouded, the default behaviour for the + Disruptor is to block while the buffer is full instead of throwing + an exception. This default behavior may be configured on the + component (see options). + +- The Disruptor endpoints don’t implement the `BrowsableEndpoint` + interface. As such, the exchanges currently in the Disruptor can’t + be retrieved, only the number of exchanges. + +- The Disruptor requires its consumers (multicasted or otherwise) to + be statically configured. Adding or removing consumers on the fly + requires complete flushing of all pending exchanges in the + Disruptor. + +- As a result of the reconfiguration: Data sent over a Disruptor is + directly processed and *gone* if there is at least one consumer, + late joiners only get new exchanges published after they’ve joined. + +- The `pollTimeout` option is not supported by the Disruptor + component. + +- When a producer blocks on a full Disruptor, it does not respond to + thread interrupts. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-disruptor + x.x.x + + + +# URI format + + disruptor-vm:someId[?options] + +Where *someId* can be any string that uniquely identifies the endpoint +within the current CamelContext. + +# Options + +# More Documentation + +See the [Disruptor](#disruptor-component.adoc) component for more +information. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bufferSize|To configure the ring buffer size|1024|integer| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. 
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|defaultConcurrentConsumers|To configure the default number of concurrent consumers|1|integer| +|defaultMultipleConsumers|To configure the default value for multiple consumers|false|boolean| +|defaultWaitStrategy|To configure the default value for DisruptorWaitStrategy The default value is Blocking.|Blocking|object| +|defaultBlockWhenFull|To configure the default value for block when full The default value is true.|true|boolean| +|defaultProducerType|To configure the default value for DisruptorProducerType The default value is Multi.|Multi|object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|name|Name of queue||string| +|size|The maximum capacity of the Disruptors ringbuffer Will be effectively increased to the nearest power of two. Notice: Mind if you use this option, then its the first endpoint being created with the queue name, that determines the size. To make sure all endpoints use same size, then configure the size option on all of them, or the first endpoint being created.|1024|integer| +|concurrentConsumers|Number of concurrent threads processing exchanges.|1|integer| +|multipleConsumers|Specifies whether multiple consumers are allowed. If enabled, you can use Disruptor for Publish-Subscribe messaging. That is, you can send a message to the queue and have each consumer receive a copy of the message. When enabled, this option should be specified on every consumer endpoint.|false|boolean| +|waitStrategy|Defines the strategy used by consumer threads to wait on new exchanges to be published. The options allowed are:Blocking, Sleeping, BusySpin and Yielding.|Blocking|object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|blockWhenFull|Whether a thread that sends messages to a full Disruptor will block until the ringbuffer's capacity is no longer exhausted. By default, the calling thread will block and wait until the message can be accepted. By disabling this option, an exception will be thrown stating that the queue is full.|false|boolean| +|producerType|Defines the producers allowed on the Disruptor. The options allowed are: Multi to allow multiple producers and Single to enable certain optimizations only allowed when one concurrent producer (on one thread or otherwise synchronized) is active.|Multi|object| +|timeout|Timeout (in milliseconds) before a producer will stop waiting for an asynchronous task to complete. You can disable timeout by using 0 or a negative value.|30000|duration| +|waitForTaskToComplete|Option to specify whether the caller should wait for the async task to complete or not before continuing. The following three options are supported: Always, Never or IfReplyExpected. The first two values are self-explanatory. The last value, IfReplyExpected, will only wait if the message is Request Reply based.|IfReplyExpected|object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-disruptor.md b/camel-disruptor.md new file mode 100644 index 0000000000000000000000000000000000000000..51480a440ca9ac6ddcd0122685ec646a56711b82 --- /dev/null +++ b/camel-disruptor.md @@ -0,0 +1,273 @@ +# Disruptor + +**Since Camel 2.12** + +**Both producer and consumer are supported** + +The Disruptor component provides asynchronous +[SEDA](https://en.wikipedia.org/wiki/Staged_event-driven_architecture) +behavior similarly to the standard SEDA component. However, it uses a +[Disruptor](https://github.com/LMAX-Exchange/disruptor) instead of a +[BlockingQueue](http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/BlockingQueue.html) +used by the standard [SEDA](#seda-component.adoc). Alternatively, this +component supports a [disruptor-vm](#disruptor-vm-component.adoc) +endpoint. + +The main advantage of choosing to use the Disruptor component over the +SEDA is performance in use cases where there is high contention between +producer(s) and/or multicasted or concurrent Consumers. In those cases, +significant increases in throughput and reduction of latency has been +observed. Performance in scenarios without contention is comparable to +the SEDA component. + +The Disruptor is implemented with the intention of mimicking the +behavior and options of the SEDA component as much as possible. The main +differences between them are the following: + +- The buffer used is always bounded in size (default 1024 exchanges). + +- As the buffer is always bounded, the default behavior for the + Disruptor is to block while the buffer is full instead of throwing + an exception. 
This default behavior may be configured on the + component (see options). + +- The Disruptor endpoints don’t implement the `BrowsableEndpoint` + interface. As such, the exchanges currently in the Disruptor can’t + be retrieved, only the number of exchanges. + +- The Disruptor requires its consumers (multicasted or otherwise) to + be statically configured. Adding or removing consumers on the fly + requires complete flushing of all pending exchanges in the + Disruptor. + +- As a result of the reconfiguration: Data sent over a Disruptor is + directly processed and *gone* if there is at least one consumer, + late joiners only get new exchanges published after they’ve joined. + +- The `pollTimeout` option is not supported by the Disruptor + component. + +- When a producer blocks on a full Disruptor, it does not respond to + thread interrupts. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-disruptor + x.x.x + + + +# URI format + + disruptor:someId[?options] + +Where *someId* can be any string that uniquely identifies the endpoint +within the current CamelContext. + +# Options + +# Wait strategies + +The wait strategy effects the type of waiting performed by the consumer +threads that are currently waiting for the next exchange to be +published. The following strategies can be chosen: + + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|Name|Description|Advice|
|---|---|---|
|Blocking|Blocking strategy that uses a lock and condition variable for Consumers waiting on a barrier.|This strategy can be used when throughput and low latency are not as important as CPU resource.|
|Sleeping|Sleeping strategy that initially spins, then uses a Thread.yield(), and eventually sleeps for the minimum number of nanos the OS and JVM will allow while the Consumers are waiting on a barrier.|This strategy is a good compromise between performance and CPU resource. Latency spikes can occur after quiet periods.|
|BusySpin|Busy Spin strategy that uses a busy spin loop for Consumers waiting on a barrier.|This strategy will use CPU resource to avoid syscalls which can introduce latency jitter. It is best used when threads can be bound to specific CPU cores.|
|Yielding|Yielding strategy that uses a Thread.yield() for Consumers waiting on a barrier after initially spinning.|This strategy is a good compromise between performance and CPU resource without incurring significant latency spikes.|
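
The tradeoffs above come down to how a consumer thread waits for the next exchange to be published. The following stdlib-only sketch (not the Disruptor's actual implementation) illustrates the three underlying waiting styles: busy-spinning, yielding, and brief parking. Class and method names are illustrative:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;
import java.util.function.Consumer;

public class WaitStrategyDemo {

    /** Busy-spin: lowest latency, but burns a CPU core while waiting. */
    public static void busySpin(AtomicBoolean ready) {
        while (!ready.get()) { /* spin */ }
    }

    /** Yielding: spins, but offers the CPU to other runnable threads. */
    public static void yielding(AtomicBoolean ready) {
        while (!ready.get()) Thread.yield();
    }

    /** Sleeping: parks briefly, trading wake-up latency for CPU resource. */
    public static void sleeping(AtomicBoolean ready) {
        while (!ready.get()) LockSupport.parkNanos(1_000L);
    }

    /** Runs one waiting consumer, "publishes", and reports whether it woke up. */
    public static boolean wakesUp(Consumer<AtomicBoolean> strategy) {
        AtomicBoolean ready = new AtomicBoolean(false);
        Thread consumer = new Thread(() -> strategy.accept(ready));
        consumer.start();
        try {
            Thread.sleep(10);   // let the consumer reach its wait loop
            ready.set(true);    // "publish" an exchange
            consumer.join(2000);
        } catch (InterruptedException e) {
            return false;
        }
        return !consumer.isAlive();
    }

    public static void main(String[] args) {
        System.out.println("busySpin woke: " + wakesUp(WaitStrategyDemo::busySpin));
        System.out.println("yielding woke: " + wakesUp(WaitStrategyDemo::yielding));
        System.out.println("sleeping woke: " + wakesUp(WaitStrategyDemo::sleeping));
    }
}
```

BusySpin and Yielding wake fastest at the cost of CPU; Sleeping (and the lock-based Blocking strategy) free the CPU but add wake-up latency.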

+ +# Use of Request Reply + +The Disruptor component supports using [Request +Reply](#eips:requestReply-eip.adoc), where the caller will wait for the +Async route to complete. For instance: + + from("mina:tcp://0.0.0.0:9876?textline=true&sync=true").to("disruptor:input"); + from("disruptor:input").to("bean:processInput").to("bean:createResponse"); + +In the route above, we have a TCP listener on port 9876 that accepts +incoming requests. The request is routed to the *disruptor:input* +buffer. As it is a Request Reply message, we wait for the response. When +the consumer on the *disruptor:input* buffer is complete, it copies the +response to the original message response. + +# Concurrent consumers + +By default, the Disruptor endpoint uses a single consumer thread, but +you can configure it to use concurrent consumer threads. So instead of +thread pools you can use: + + from("disruptor:stageName?concurrentConsumers=5").process(...) + +As for the difference between the two, note that a thread pool can +increase/shrink dynamically at runtime depending on load. Whereas the +number of concurrent consumers is always fixed and supported by the +Disruptor internally, so performance will be higher. + +# Thread pools + +Be aware that adding a thread pool to a Disruptor endpoint by doing +something like: + + from("disruptor:stageName").thread(5).process(...) + +Can wind up with adding a normal +[BlockingQueue](http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/BlockingQueue.html) +to be used in conjunction with the Disruptor, effectively negating part +of the performance gains achieved by using the Disruptor. Instead, it is +advices to directly configure the number of threads that process +messages on a Disruptor endpoint using the concurrentConsumers option. + +# Sample + +In the route below, we use the Disruptor to send the request to this +async queue. 
As such, it is able to send a *fire-and-forget* message for +further processing in another thread, and return a constant reply in +this thread to the original caller. + + public void configure() { + from("direct:start") + // send it to the disruptor that is async + .to("disruptor:next") + // return a constant response + .transform(constant("OK")); + + from("disruptor:next").to("mock:result"); + } + +Here we send a *Hello World* message and expect the reply to be *OK*. + + Object out = template.requestBody("direct:start", "Hello World"); + assertEquals("OK", out); + +The "Hello World" message will be consumed from the Disruptor from +another thread for further processing. Since this is from a unit test, +it will be sent to a mock endpoint where we can do assertions in the +unit test. + +# Using multipleConsumers + +In this example, we have defined two consumers and registered them as +spring beans. + + + + + + + + + + + +Since we have specified multipleConsumers=true on the Disruptor foo +endpoint, we can have those two or more consumers receive their own copy +of the message as a kind of *publish/subscriber* style messaging. As the +beans are part of a unit test, they simply send the message to a mock +endpoint, but notice how we can use *@Consume* to consume from the +Disruptor. + + public class FooEventConsumer { + + @EndpointInject("mock:result") + private ProducerTemplate destination; + + @Consume(ref = "foo") + public void doSomething(String body) { + destination.sendBody("foo" + body); + } + + } + +# Extracting disruptor information + +If needed, information such as buffer size, etc. 
can be obtained without +using JMX in this fashion: + + DisruptorEndpoint disruptor = context.getEndpoint("disruptor:xxxx"); + int size = disruptor.getBufferSize(); + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bufferSize|To configure the ring buffer size|1024|integer| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|defaultConcurrentConsumers|To configure the default number of concurrent consumers|1|integer| +|defaultMultipleConsumers|To configure the default value for multiple consumers|false|boolean| +|defaultWaitStrategy|To configure the default value for DisruptorWaitStrategy The default value is Blocking.|Blocking|object| +|defaultBlockWhenFull|To configure the default value for block when full The default value is true.|true|boolean| +|defaultProducerType|To configure the default value for DisruptorProducerType The default value is Multi.|Multi|object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|name|Name of queue||string| +|size|The maximum capacity of the Disruptors ringbuffer Will be effectively increased to the nearest power of two. Notice: Mind if you use this option, then its the first endpoint being created with the queue name, that determines the size. To make sure all endpoints use same size, then configure the size option on all of them, or the first endpoint being created.|1024|integer| +|concurrentConsumers|Number of concurrent threads processing exchanges.|1|integer| +|multipleConsumers|Specifies whether multiple consumers are allowed. If enabled, you can use Disruptor for Publish-Subscribe messaging. That is, you can send a message to the queue and have each consumer receive a copy of the message. When enabled, this option should be specified on every consumer endpoint.|false|boolean| +|waitStrategy|Defines the strategy used by consumer threads to wait on new exchanges to be published. 
The options allowed are: Blocking, Sleeping, BusySpin and Yielding.|Blocking|object|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|blockWhenFull|Whether a thread that sends messages to a full Disruptor will block until the ringbuffer's capacity is no longer exhausted. By default, the calling thread will block and wait until the message can be accepted. By disabling this option, an exception will be thrown stating that the queue is full.|false|boolean|
+|producerType|Defines the producers allowed on the Disruptor. The options allowed are: Multi to allow multiple producers and Single to enable certain optimizations only allowed when one concurrent producer (on one thread or otherwise synchronized) is active.|Multi|object|
+|timeout|Timeout (in milliseconds) before a producer will stop waiting for an asynchronous task to complete.
You can disable timeout by using 0 or a negative value.|30000|duration| +|waitForTaskToComplete|Option to specify whether the caller should wait for the async task to complete or not before continuing. The following three options are supported: Always, Never or IfReplyExpected. The first two values are self-explanatory. The last value, IfReplyExpected, will only wait if the message is Request Reply based.|IfReplyExpected|object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-djl.md b/camel-djl.md new file mode 100644 index 0000000000000000000000000000000000000000..ebd69fce53a95c07b11d3957edd520b6ec903801 --- /dev/null +++ b/camel-djl.md @@ -0,0 +1,860 @@ +# Djl + +**Since Camel 3.3** + +**Only producer is supported** + +The **Deep Java Library** component is used to infer deep learning +models from message exchanges data. This component uses the [Deep Java +Library](https://djl.ai/) as the underlying library. + +To use the DJL component, Maven users will need to add the following +dependency to their `pom.xml`: + + + org.apache.camel + camel-djl + x.x.x + + + +# URI format + + djl:application + +Where `application` represents the +[application](https://javadoc.io/doc/ai.djl/api/latest/ai/djl/Application.html) +in the context of DJL, the common functional signature for a group of +deep learning models. + +## Supported applications + +Currently, the component supports the following applications. 
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+|Application|Input types|Output type|
+|---|---|---|
+|cv/image_classification|ai.djl.modality.cv.Image, byte[], InputStream, File|ai.djl.modality.Classifications|
+|cv/object_detection|ai.djl.modality.cv.Image, byte[], InputStream, File|ai.djl.modality.cv.output.DetectedObjects|
+|cv/semantic_segmentation|ai.djl.modality.cv.Image, byte[], InputStream, File|ai.djl.modality.cv.output.CategoryMask|
+|cv/instance_segmentation|ai.djl.modality.cv.Image, byte[], InputStream, File|ai.djl.modality.cv.output.DetectedObjects|
+|cv/pose_estimation|ai.djl.modality.cv.Image, byte[], InputStream, File|ai.djl.modality.cv.output.Joints|
+|cv/action_recognition|ai.djl.modality.cv.Image, byte[], InputStream, File|ai.djl.modality.Classifications|
+|cv/word_recognition|ai.djl.modality.cv.Image, byte[], InputStream, File|String|
+|cv/image_generation|int[]|ai.djl.modality.cv.Image[]|
+|cv/image_enhancement|ai.djl.modality.cv.Image, byte[], InputStream, File|ai.djl.modality.cv.Image|
+|nlp/fill_mask|String|String[]|
+|nlp/question_answer|ai.djl.modality.nlp.qa.QAInput, String[]|String|
+|nlp/text_classification|String|ai.djl.modality.Classifications|
+|nlp/sentiment_analysis|String|ai.djl.modality.Classifications|
+|nlp/token_classification|String|ai.djl.modality.Classifications|
+|nlp/text_generation|String|String|
+|nlp/machine_translation|String|String|
+|nlp/multiple_choice|String|String|
+|nlp/text_embedding|String|ai.djl.ndarray.NDArray|
+|audio|ai.djl.modality.audio.Audio, byte[], InputStream, File|String|
+|timeseries/forecasting|ai.djl.timeseries.TimeSeriesData|ai.djl.timeseries.Forecast|

+
+# Model Zoo
+
+The following tables list the supported models in the model zoos, per
+application.
+
+If an application has no table, it means that no pre-trained models
+were found for it in the basic, PyTorch, TensorFlow or MXNet DJL model
+zoos. You may still find more models for an application in other model
+zoos such as Hugging Face, ONNX, etc.
+
+## CV - Image Classification
+
+Application: `cv/image_classification`
+
+|Model family|Artifact ID|Options|
+|---|---|---|
+|MLP|ai.djl.zoo:mlp:0.0.3|mnist|
+|MLP|ai.djl.mxnet:mlp:0.0.1|mnist|
+|ResNet|ai.djl.zoo:resnet:0.0.2|50, flavor=v1, dataset=cifar10|
+|ResNet|ai.djl.pytorch:resnet:0.0.1|50, dataset=imagenet; 18, dataset=imagenet; 101, dataset=imagenet|
+|ResNet|ai.djl.tensorflow:resnet:0.0.1|v1, layers=50, dataset=imagenet|
+|ResNet|ai.djl.mxnet:resnet:0.0.1|18, flavor=v1, dataset=imagenet; 50, flavor=v2, dataset=imagenet; 101, dataset=imagenet; 152, flavor=v1d, dataset=imagenet; 50, flavor=v1, dataset=cifar10|
+|ResNet-18|ai.djl.pytorch:resnet18_embedding:0.0.1|{}|
+|SENet|ai.djl.mxnet:senet:0.0.1|154, dataset=imagenet|
+|SE-ResNeXt|ai.djl.mxnet:se_resnext:0.0.1|101, flavor=32x4d, dataset=imagenet; 101, flavor=64x4d, dataset=imagenet|
+|ResNeSt|ai.djl.mxnet:resnest:0.0.1|14, dataset=imagenet; 26, dataset=imagenet; 50, dataset=imagenet; 101, dataset=imagenet; 200, dataset=imagenet; 269, dataset=imagenet|
+|SqueezeNet|ai.djl.mxnet:squeezenet:0.0.1|1.0, dataset=imagenet|
+|MobileNet|ai.djl.tensorflow:mobilenet:0.0.1|v2, dataset=imagenet|
+|MobileNet|ai.djl.mxnet:mobilenet:0.0.1|v1, multiplier=0.25, dataset=imagenet; v1, multiplier=0.5, dataset=imagenet; v1, multiplier=0.75, dataset=imagenet; v1, multiplier=1.0, dataset=imagenet; v2, multiplier=0.25, dataset=imagenet; v2, multiplier=0.5, dataset=imagenet; v2, multiplier=0.75, dataset=imagenet; v2, multiplier=1.0, dataset=imagenet; v3_small, multiplier=1.0, dataset=imagenet; v3_large, multiplier=1.0, dataset=imagenet|
+|GoogLeNet|ai.djl.mxnet:googlenet:0.0.1|imagenet|
+|Darknet|ai.djl.mxnet:darknet:0.0.1|53, flavor=v3, dataset=imagenet|
+|Inception v3|ai.djl.mxnet:inceptionv3:0.0.1|imagenet|
+|AlexNet|ai.djl.mxnet:alexnet:0.0.1|imagenet|
+|VGGNet|ai.djl.mxnet:vgg:0.0.1|11, dataset=imagenet; 13, dataset=imagenet; 16, dataset=imagenet; 19, dataset=imagenet; batch_norm, layers=11, dataset=imagenet; batch_norm, layers=13, dataset=imagenet; batch_norm, layers=16, dataset=imagenet; batch_norm, layers=19, dataset=imagenet|
+|DenseNet|ai.djl.mxnet:densenet:0.0.1|121, dataset=imagenet; 161, dataset=imagenet; 169, dataset=imagenet; 201, dataset=imagenet|
+|Xception|ai.djl.mxnet:xception:0.0.1|65, dataset=imagenet|

+ +## CV - Object Detection + +Application: `cv/object_detection` + + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+|Model family|Artifact ID|Options|
+|---|---|---|
+|SSD|ai.djl.zoo:ssd:0.0.2|tiny, dataset=pikachu|
+|SSD|ai.djl.pytorch:ssd:0.0.1|300, backbone=resnet50, dataset=coco|
+|SSD|ai.djl.tensorflow:ssd:0.0.1|mobilenet_v2, dataset=openimages_v4|
+|SSD|ai.djl.mxnet:ssd:0.0.1|512, backbone=resnet50, flavor=v1, dataset=voc; 512, backbone=vgg16, flavor=atrous, dataset=coco; 512, backbone=mobilenet1.0, dataset=voc; 300, backbone=vgg16, flavor=atrous, dataset=voc|
+|YOLO|ai.djl.mxnet:yolo:0.0.1|voc, version=3, backbone=darknet53, imageSize=320; voc, version=3, backbone=darknet53, imageSize=416; voc, version=3, backbone=mobilenet1.0, imageSize=320; voc, version=3, backbone=mobilenet1.0, imageSize=416; coco, version=3, backbone=darknet53, imageSize=320; coco, version=3, backbone=darknet53, imageSize=416; coco, version=3, backbone=darknet53, imageSize=608; coco, version=3, backbone=mobilenet1.0, imageSize=320; coco, version=3, backbone=mobilenet1.0, imageSize=416; coco, version=3, backbone=mobilenet1.0, imageSize=608|
+|YOLOv5|ai.djl.pytorch:yolo5s:0.0.1|{}|
+|YOLOv8|ai.djl.pytorch:yolov8n:0.0.1|{}|

+ +## CV - Semantic Segmentation + +Application: `cv/semantic_segmentation` + + +++++ + + + + + + + + + + + + + + +
+|Model family|Artifact ID|Options|
+|---|---|---|
+|DeepLabV3|ai.djl.pytorch:deeplabv3:0.0.1|resnet50, flavor=v1b, dataset=coco|

+ +## CV - Instance Segmentation + +Application: `cv/instance_segmentation` + + +++++ + + + + + + + + + + + + + + +
+|Model family|Artifact ID|Options|
+|---|---|---|
+|Mask R-CNN|ai.djl.mxnet:mask_rcnn:0.0.1|resnet18, flavor=v1b, dataset=coco; resnet101, flavor=v1d, dataset=coco|

+ +## CV - Pose Estimation + +Application: `cv/pose_estimation` + + +++++ + + + + + + + + + + + + + + +
+|Model family|Artifact ID|Options|
+|---|---|---|
+|Simple Pose|ai.djl.mxnet:simple_pose:0.0.1|resnet18, flavor=v1b, dataset=imagenet; resnet50, flavor=v1b, dataset=imagenet; resnet101, flavor=v1d, dataset=imagenet; resnet152, flavor=v1b, dataset=imagenet; resnet152, flavor=v1d, dataset=imagenet|

+ +## CV - Action Recognition + +Application: `cv/action_recognition` + + +++++ + + + + + + + + + + + + + + +
+|Model family|Artifact ID|Options|
+|---|---|---|
+|Action Recognition|ai.djl.mxnet:action_recognition:0.0.1|vgg16, dataset=ucf101; inceptionv3, dataset=ucf101|

+ +## CV - Image Generation + +Application: `cv/image_generation` + + +++++ + + + + + + + + + + + + + + + + + + + +
+|Model family|Artifact ID|Options|
+|---|---|---|
+|CycleGAN|ai.djl.pytorch:cyclegan:0.0.1|cezanne; monet; ukiyoe; vangogh|
+|BigGAN|ai.djl.pytorch:biggan-deep:0.0.1|12, size=128, dataset=imagenet; 24, size=256, dataset=imagenet; 12, size=512, dataset=imagenet|

+ +## NLP - Question Answer + +Application: `nlp/question_answer` + + +++++ + + + + + + + + + + + + + + + + + + + +
+|Model family|Artifact ID|Options|
+|---|---|---|
+|BertQA|ai.djl.pytorch:bertqa:0.0.1|distilbert, size=base, cased=false, dataset=SQuAD; distilbert, size=base, cased=true, dataset=SQuAD; bert, cased=false, dataset=SQuAD; bert, cased=true, dataset=SQuAD; distilbert, cased=true, dataset=SQuAD|
+|BertQA|ai.djl.mxnet:bertqa:0.0.1|bert, dataset=book_corpus_wiki_en_uncased|

+ +## NLP - Sentiment Analysis + +Application: `nlp/sentiment_analysis` + + +++++ + + + + + + + + + + + + + + +
+|Model family|Artifact ID|Options|
+|---|---|---|
+|DistilBERT|ai.djl.pytorch:distilbert:0.0.1|distilbert, dataset=sst|

+ +## NLP - Word Embedding + +Application: `nlp/word_embedding` + + +++++ + + + + + + + + + + + + + + +
+|Model family|Artifact ID|Options|
+|---|---|---|
+|GloVe|ai.djl.mxnet:glove:0.0.2|50|

+ +## Time Series - Forecasting + +Application: `timeseries/forecasting` + + +++++ + + + + + + + + + + + + + + + + + + + +
+|Model family|Artifact ID|Options|
+|---|---|---|
+|DeepAR|ai.djl.pytorch:deepar:0.0.1|m5forecast|
+|DeepAR|ai.djl.mxnet:deepar:0.0.1|airpassengers; m5forecast|

+ +# DJL Engine implementation + +Because DJL is deep learning framework-agnostic, you don’t have to make +a choice between frameworks when creating your projects. You can switch +frameworks at any point. To ensure the best performance, DJL also +provides automatic CPU/GPU choice based on hardware configuration. + +## PyTorch engine + +You can pull the PyTorch engine from the central Maven repository by +including the following dependency: + + + ai.djl.pytorch + pytorch-engine + x.x.x + runtime + + +By default, DJL will download the PyTorch native libraries into the +[cache +folder](https://docs.djl.ai/docs/development/cache_management.html) the +first time you run DJL. It will automatically determine the appropriate +jars for your system based on the platform and GPU support. + +More information about [PyTorch engine +installation](https://docs.djl.ai/engines/pytorch/index.html) + +## TensorFlow engine + +You can pull the TensorFlow engine from the central Maven repository by +including the following dependency: + + + ai.djl.tensorflow + tensorflow-engine + x.x.x + runtime + + +By default, DJL will download the TensorFlow native libraries into +[cache +folder](https://docs.djl.ai/docs/development/cache_management.html) the +first time you run DJL. It will automatically determine the appropriate +jars for your system based on the platform and GPU support. + +More information about [TensorFlow engine +installation](https://docs.djl.ai/engines/tensorflow/index.html) + +## MXNet engine + +You can pull the MXNet engine from the central Maven repository by +including the following dependency: + + + ai.djl.mxnet + mxnet-engine + x.x.x + runtime + + +By default, DJL will download the Apache MXNet native libraries into +[cache +folder](https://docs.djl.ai/docs/development/cache_management.html) the +first time you run DJL. It will automatically determine the appropriate +jars for your system based on the platform and GPU support. 
+
+More information about [MXNet engine
+installation](https://docs.djl.ai/engines/mxnet/index.html)
+
+# Examples
+
+## MNIST image classification from file
+
+    from("file:/data/mnist/0/10.png")
+        .to("djl:cv/image_classification?artifactId=ai.djl.mxnet:mlp:0.0.1");
+
+## Object detection
+
+    from("file:/data/images")
+        .to("djl:cv/object_detection?artifactId=ai.djl.mxnet:ssd:0.0.1");
+
+## Custom deep learning model
+
+    // create a deep learning model
+    Model model = Model.newInstance();
+    model.setBlock(new Mlp(28 * 28, 10, new int[]{128, 64}));
+    model.load(Paths.get(MODEL_DIR), MODEL_NAME);
+
+    // create translator for pre-processing and post-processing
+    ImageClassificationTranslator.Builder builder = ImageClassificationTranslator.builder();
+    builder.setSynsetArtifactName("synset.txt");
+    builder.setPipeline(new Pipeline(new ToTensor()));
+    builder.optApplySoftmax(true);
+    ImageClassificationTranslator translator = builder.build();
+
+    // Bind model and translator beans
+    context.getRegistry().bind("MyModel", model);
+    context.getRegistry().bind("MyTranslator", translator);
+
+    from("file:/data/mnist/0/10.png")
+        .to("djl:cv/image_classification?model=MyModel&translator=MyTranslator");
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled.
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|application|Application name||string| +|artifactId|Model Artifact||string| +|model|Model||string| +|translator|Translator||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-dns.md b/camel-dns.md new file mode 100644 index 0000000000000000000000000000000000000000..a88b23372e35017fe2dcc391f3d05a1cd8e5ba36 --- /dev/null +++ b/camel-dns.md @@ -0,0 +1,119 @@ +# Dns + +**Since Camel 2.7** + +**Only producer is supported** + +This is an additional component for Camel to run DNS queries, using +DNSJava. The component is a thin layer on top of +[DNSJava](http://www.xbill.org/dnsjava/). The component offers the +following operations: + +- `ip`: to resolve a domain by its ip + +- `lookup`: to lookup information about the domain + +- `dig`: to run DNS queries + +**Requires SUN JVM** + +The DNSJava library requires running on the SUN JVM. 
If you use Apache
+ServiceMix or Apache Karaf, you’ll need to adjust the
+`etc/jre.properties` file, to add `sun.net.spi.nameservice` to the list
+of Java platform packages exported. The server will need restarting
+before this change takes effect.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-dns</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+The URI scheme for a DNS component is as follows:
+
+    dns://operation[?options]
+
+# Examples
+
+The routes below are minimal Java DSL sketches; each operation reads its
+input from a message header.
+
+## IP lookup
+
+    from("direct:start")
+        .to("dns://ip");
+
+This looks up a domain’s IP. For example, *www.example.com* resolves to
+192.0.32.10.
+
+The domain name to look up must be provided in the header with key
+`"dns.domain"`.
+
+## DNS lookup
+
+    from("direct:start")
+        .to("dns://lookup");
+
+This returns a set of DNS records associated with a domain.
+The name to look up must be provided in the header with key `"dns.name"`.
+
+## DNS Dig
+
+Dig is a Unix command-line utility to run DNS queries.
+
+    from("direct:start")
+        .to("dns://dig");
+
+The query must be provided in the header with key `"dns.query"`.
+
+# DNS Activation Policy
+
+The `DnsActivationPolicy` can be used to dynamically start and stop
+routes based on DNS state.
+
+If you have instances of the same component running in different
+regions, you can configure a route in each region to activate only if
+DNS is pointing to its region.
+
+For example, you may have an instance in NYC and an instance in SFO. You
+would configure a service CNAME service.example.com to point to
+nyc-service.example.com to bring the NYC instance up and the SFO
+instance down. When you change the CNAME service.example.com to point to
+sfo-service.example.com, the NYC instance stops its routes and the SFO
+instance brings its routes up. This allows you to switch regions without
+restarting the actual components.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message).
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|dnsType|The type of the lookup.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-docker.md b/camel-docker.md new file mode 100644 index 0000000000000000000000000000000000000000..9c27b3ebfb336094099561f223df4aaf0dbfb1b5 --- /dev/null +++ b/camel-docker.md @@ -0,0 +1,124 @@ +# Docker + +**Since Camel 2.15** + +**Both producer and consumer are supported** + +Camel component for communicating with Docker. 
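Every URI option can also be supplied per message as a header, as described in the Header Strategy section below: the header name is the URI option capitalized and prefixed with `CamelDocker`. A self-contained sketch of that naming rule (the class and method names here are illustrative, not part of the component):

```java
public class DockerHeaderName {

    // Maps a Docker URI option to its Camel header form by capitalizing
    // the option name and adding the CamelDocker prefix,
    // e.g. "containerId" becomes "CamelDockerContainerId".
    static String headerFor(String uriOption) {
        return "CamelDocker"
                + Character.toUpperCase(uriOption.charAt(0))
                + uriOption.substring(1);
    }

    public static void main(String[] args) {
        System.out.println(headerFor("containerId"));
        System.out.println(headerFor("requestTimeout"));
    }
}
```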
+ +The Docker Camel component leverages the +[docker-java](https://github.com/docker-java/docker-java) via the +[Docker Remote +API](https://docs.docker.com/reference/api/docker_remote_api). + +# URI format + + docker://[operation]?[options] + +Where **operation** is the specific action to perform on Docker. + +# Header Strategy + +All URI options can be passed as Header properties. Values found in a +message header take precedence over URI parameters. A header property +takes the form of a URI option prefixed with **CamelDocker** as shown +below + + ++++ + + + + + + + + + + + + +
+|URI Option|Header Property|
+|---|---|
+|containerId|CamelDockerContainerId|

+ +# Examples + +The following example consumes events from Docker: + + from("docker://events?host=192.168.59.103&port=2375").to("log:event"); + +The following example queries Docker for system-wide information + + from("docker://info?host=192.168.59.103&port=2375").to("log:info"); + +# Dependencies + +To use Docker in your Camel routes, you need to add a dependency on +**camel-docker**, which implements the component. + +If you use Maven, you can add the following to your pom.xml, +substituting the version number for the latest and greatest release (see +the download page for the latest versions). + + + org.apache.camel + camel-docker + x.x.x + + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|configuration|To use the shared docker configuration||object| +|email|Email address associated with the user||string| +|host|Docker host|localhost|string| +|port|Docker port|2375|integer| +|requestTimeout|Request timeout for response (in seconds)||integer| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|cmdExecFactory|The fully qualified class name of the DockerCmdExecFactory implementation to use|com.github.dockerjava.netty.NettyDockerCmdExecFactory|string| +|followRedirectFilter|Whether to follow redirect filter|false|boolean| +|loggingFilter|Whether to use logging filter|false|boolean| +|maxPerRouteConnections|Maximum route connections|100|integer| +|maxTotalConnections|Maximum total connections|100|integer| +|parameters|Additional configuration parameters as key/value pairs||object| +|serverAddress|Server address for docker registry.|https://index.docker.io/v1/|string| +|socket|Socket connection mode|true|boolean| +|certPath|Location containing the SSL certificate chain||string| +|password|Password to authenticate with||string| +|secure|Use HTTPS communication|false|boolean| +|tlsVerify|Check TLS|false|boolean| +|username|User name to authenticate with||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|operation|Which operation to use||object| +|email|Email address associated with the user||string| +|host|Docker host|localhost|string| 
+|port|Docker port|2375|integer| +|requestTimeout|Request timeout for response (in seconds)||integer| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|cmdExecFactory|The fully qualified class name of the DockerCmdExecFactory implementation to use|com.github.dockerjava.netty.NettyDockerCmdExecFactory|string| +|followRedirectFilter|Whether to follow redirect filter|false|boolean| +|loggingFilter|Whether to use logging filter|false|boolean| +|maxPerRouteConnections|Maximum route connections|100|integer| +|maxTotalConnections|Maximum total connections|100|integer| +|parameters|Additional configuration parameters as key/value pairs||object| +|serverAddress|Server address for docker registry.|https://index.docker.io/v1/|string| +|socket|Socket connection mode|true|boolean| +|certPath|Location containing the SSL certificate chain||string| +|password|Password to authenticate with||string| +|secure|Use HTTPS communication|false|boolean| +|tlsVerify|Check TLS|false|boolean| +|username|User name to authenticate with||string| diff --git a/camel-drill.md b/camel-drill.md new file mode 100644 index 0000000000000000000000000000000000000000..c4c0c9bfbb3fc8b4a4e79447fba50d96fedafcdb --- /dev/null +++ b/camel-drill.md @@ -0,0 +1,56 @@ +# Drill + +**Since Camel 2.19** + +**Only producer is supported** + +The Drill component gives you the ability to query the [Apache Drill +Cluster](https://drill.apache.org/). + +Drill is an Apache open-source SQL query engine for Big Data +exploration. 
Drill is designed from the ground up to support
+high-performance analysis on the semi-structured and rapidly evolving
+data coming from modern Big Data applications, while still providing the
+familiarity and ecosystem of ANSI SQL, the industry-standard query
+language.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-drill</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    drill://host[?options]
+
+# Options
+
+# Drill Producer
+
+The producer executes a query using the **CamelDrillQuery** header and
+puts the results into the body.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component.
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|host|Host name or IP address||string| +|clusterId|Cluster ID https://drill.apache.org/docs/using-the-jdbc-driver/#determining-the-cluster-id||string| +|directory|Drill directory||string| +|mode|Connection mode: zk: Zookeeper drillbit: Drillbit direct connection https://drill.apache.org/docs/using-the-jdbc-driver/|ZK|object| +|port|Port number|2181|integer| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-dropbox.md b/camel-dropbox.md new file mode 100644 index 0000000000000000000000000000000000000000..6837e062c521b4ac4d5755c1543bcb25e31facce --- /dev/null +++ b/camel-dropbox.md @@ -0,0 +1,520 @@ +# Dropbox + +**Since Camel 2.14** + +**Both producer and consumer are supported** + +The Dropbox component allows you to treat +[Dropbox](https://www.dropbox.com) remote folders as a producer or +consumer of messages. 
Using the [Dropbox Java Core
API](https://github.com/dropbox/dropbox-sdk-java), this Camel component
has the following features:

- As a consumer, download files and search files by queries

- As a producer, download files, move files between remote
  directories, delete files/dirs, upload files, and search files by
  queries

To work with the Dropbox API, you need to obtain an **accessToken**,
**expireIn**, **refreshToken**, **apiKey**, **apiSecret** and a
**clientIdentifier**.
You can refer to the [Dropbox
documentation](https://dropbox.tech/developers/migrating-app-permissions-and-access-tokens)
that explains how to get them.

Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-dropbox</artifactId>
        <version>x.x.x</version>
        <!-- use the same version as your Camel core version -->
    </dependency>

# URI format

    dropbox://[operation]?[options]

Where **operation** is the specific action (typically a CRUD action) to
perform on the remote Dropbox folder.

# Operations

|Operation|Description|
|---|---|
|del|deletes files or directories on Dropbox|
|get|downloads files from Dropbox|
|move|moves files between folders on Dropbox|
|put|uploads files to Dropbox|
|search|searches files on Dropbox based on string queries|
+ +**Operations** require additional options to work. Some are mandatory +for the specific operation. + +# Del operation + +Delete files on Dropbox. + +Works only as a Camel producer. + +Below are listed the options for this operation: + + +++++ + + + + + + + + + + + + + + +
|Property|Mandatory|Description|
|---|---|---|
|remotePath|true|Folder or file to delete on Dropbox|
+ +## Samples + + from("direct:start") + .to("dropbox://del?accessToken=XXX&clientIdentifier=XXX&expireIn=1000&refreshToken=XXXX" + +"&apiKey=XXXXX&apiSecret=XXXXXX&remotePath=/root/folder1") + .to("mock:result"); + + from("direct:start") + .to("dropbox://del?accessToken=XXX&clientIdentifier=XXX&expireIn=1000&refreshToken=XXXX" + +"&apiKey=XXXXX&apiSecret=XXXXXX&remotePath=/root/folder1/file1.tar.gz") + .to("mock:result"); + +## Result Message Body + +The following objects are set on message body result: + + ++++ + + + + + + + + + + + + +
|Object type|Description|
|---|---|
|`String`|name of the path deleted on Dropbox|
+ +# Get (download) operation + +Download files from Dropbox. + +Works as a Camel producer or Camel consumer. + +Below are listed the options for this operation: + + +++++ + + + + + + + + + + + + + + +
|Property|Mandatory|Description|
|---|---|---|
|remotePath|true|Folder or file to download from Dropbox|
+ +## Samples + + from("direct:start") + .to("dropbox://get?accessToken=XXX&clientIdentifier=XXX&expireIn=1000&refreshToken=XXXX" + +"&apiKey=XXXXX&apiSecret=XXXXXX&remotePath=/root/folder1/file1.tar.gz") + .to("file:///home/kermit/?fileName=file1.tar.gz"); + + from("direct:start") + .to("dropbox://get?accessToken=XXX&clientIdentifier=XXX&expireIn=1000&refreshToken=XXXX" + +"&apiKey=XXXXX&apiSecret=XXXXXX&remotePath=/root/folder1") + .to("mock:result"); + + from("dropbox://get?accessToken=XXX&clientIdentifier=XXX&expireIn=1000&refreshToken=XXXX" + +"&apiKey=XXXXX&apiSecret=XXXXXX&remotePath=/root/folder1") + .to("file:///home/kermit/"); + +## Result Message Body + +The following objects are set on message body result: + + ++++ + + + + + + + + + + + + + + + + +
|Object type|Description|
|---|---|
|`byte[]`, or `CachedOutputStream` if stream caching is enabled|in case of a single file download, the stream representing the downloaded file|
|`Map<String, byte[]>`, or `Map<String, CachedOutputStream>` if stream caching is enabled|in case of multiple files downloaded, a map whose keys are the paths of the downloaded remote files and whose values are the streams representing the downloaded files|
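Downstream processing typically has to handle both body shapes. A minimal plain-Java sketch (no Camel dependencies; the class and method names are illustrative, not part of camel-dropbox) that normalizes either shape into a map keyed by remote path:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative helper, not part of camel-dropbox: normalize the two body
// shapes a Dropbox "get" can produce into one map keyed by remote path.
public class DropboxGetResult {

    @SuppressWarnings("unchecked")
    public static Map<String, byte[]> asFileMap(Object body, String remotePath) {
        Map<String, byte[]> files = new LinkedHashMap<>();
        if (body instanceof byte[]) {
            // single file download: the body is the file content itself
            files.put(remotePath, (byte[]) body);
        } else if (body instanceof Map) {
            // multiple files: already a map of remote path -> content
            files.putAll((Map<String, byte[]>) body);
        } else {
            throw new IllegalArgumentException("Unexpected body type: " + body);
        }
        return files;
    }
}
```

With stream caching enabled, the values would be `CachedOutputStream` instead of `byte[]`, but the same normalization idea applies.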

+ +# Move operation + +Move files on Dropbox between one folder to another. + +Works only as a Camel producer. + +Below are listed the options for this operation: + + +++++ + + + + + + + + + + + + + + + + + + + +
|Property|Mandatory|Description|
|---|---|---|
|remotePath|true|Original file or folder to move|
|newRemotePath|true|Destination file or folder|
+ +## Samples + + from("direct:start") + .to("dropbox://move?accessToken=XXX&clientIdentifier=XXX&expireIn=1000&refreshToken=XXXX" + +"&apiKey=XXXXX&apiSecret=XXXXXX&remotePath=/root/folder1&newRemotePath=/root/folder2") + .to("mock:result"); + +## Result Message Body + +The following objects are set on message body result: + + ++++ + + + + + + + + + + + + +
|Object type|Description|
|---|---|
|`String`|name of the path moved on Dropbox|
+ +# Put (upload) operation + +Upload files on Dropbox. + +Works as a Camel producer. + +Below are listed the options for this operation: + + +++++ + + + + + + + + + + + + + + + + + + + + + + + + +
|Property|Mandatory|Description|
|---|---|---|
|uploadMode|true|`add` or `force`. This option specifies how a file is saved on Dropbox: with `add`, the new file is renamed if a file with the same name already exists on Dropbox; with `force`, an existing file with the same name is overwritten.|
|localPath|false|Folder or file to upload on Dropbox from the local filesystem. If this option is configured, it takes precedence over uploading a single file with content from the Camel message body (the message body is converted into a byte array).|
|remotePath|false|Folder destination on Dropbox. If the property is not set, the component uploads the file to a remote path equal to the local path. On Windows, or without an absolute localPath, you may run into an exception like: `Caused by: java.lang.IllegalArgumentException: path: bad path: must start with "/": "C:/My/File"` or `Caused by: java.lang.IllegalArgumentException: path: bad path: must start with "/": "MyFile"`|
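The `must start with "/"` requirement behind those errors can be met by normalizing local paths before setting `remotePath`. A hedged plain-Java sketch (the helper is illustrative, not part of camel-dropbox):

```java
// Illustrative sketch: turn a local path into a Dropbox-acceptable remote
// path, which must use '/' separators and start with "/". Drive letters
// like "C:" are stripped, mirroring the error cases shown above.
public class DropboxPathUtil {

    public static String toRemotePath(String localPath) {
        String p = localPath.replace('\\', '/');
        int colon = p.indexOf(':');
        if (colon >= 0) {
            p = p.substring(colon + 1); // drop a Windows drive prefix such as "C:"
        }
        return p.startsWith("/") ? p : "/" + p;
    }
}
```

For example, `C:\My\File` becomes `/My/File`, which Dropbox accepts.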
+ +## Samples + + from("direct:start").to("dropbox://put?accessToken=XXX&clientIdentifier=XXX&expireIn=1000&refreshToken=XXXX" + +"&apiKey=XXXXX&apiSecret=XXXXXX&uploadMode=add&localPath=/root/folder1") + .to("mock:result"); + + from("direct:start").to("dropbox://put?accessToken=XXX&clientIdentifier=XXX&expireIn=1000&refreshToken=XXXX" + +"&apiKey=XXXXX&apiSecret=XXXXXX&uploadMode=add&localPath=/root/folder1&remotePath=/root/folder2") + .to("mock:result"); + +And to upload a single file with content from the message body + + from("direct:start") + .setHeader(DropboxConstants.HEADER_PUT_FILE_NAME, constant("myfile.txt")) + .to("dropbox://put?accessToken=XXX&clientIdentifier=XXX&expireIn=1000&refreshToken=XXXX" + +"&apiKey=XXXXX&apiSecret=XXXXXX&uploadMode=add&remotePath=/root/folder2") + .to("mock:result"); + +The name of the file can be provided in the header +`DropboxConstants.HEADER_PUT_FILE_NAME` or `Exchange.FILE_NAME` in that +order of precedence. If no header has been provided then the message id +(uuid) is used as the file name. + +## Result Message Body + +The following objects are set on message body result: + + ++++ + + + + + + + + + + + + + + + + +
|Object type|Description|
|---|---|
|`String`|in case of a single file upload, the result of the upload operation: `OK` or `KO`|
|`Map<String, DropboxResultCode>`|in case of multiple files uploaded, a map whose keys are the paths of the uploaded remote files and whose values are the results of the upload operation: `OK` or `KO`|
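The file-name precedence described above for single-file uploads can be sketched in plain Java (illustrative helper; the header values are passed in directly so the sketch has no Camel dependency):

```java
// Illustrative sketch of the precedence described above: the
// DropboxConstants.HEADER_PUT_FILE_NAME header wins, then
// Exchange.FILE_NAME, and finally the message id (a uuid).
public class PutFileNameSketch {

    public static String resolveFileName(String putFileNameHeader,
                                         String fileNameHeader,
                                         String messageId) {
        if (putFileNameHeader != null) {
            return putFileNameHeader;
        }
        if (fileNameHeader != null) {
            return fileNameHeader;
        }
        return messageId; // fallback: the message id
    }
}
```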

+ +# Search operation + +Search inside a remote Dropbox folder including its subdirectories. + +Works as Camel producer and as Camel consumer. + +Below are listed the options for this operation: + + +++++ + + + + + + + + + + + + + + + + + + + +
|Property|Mandatory|Description|
|---|---|---|
|remotePath|true|Folder on Dropbox to search in|
|query|true|A space-separated list of sub-strings to search for. A file matches only if it contains all of the sub-strings. If this option is not set, all files are matched. The query must be provided either in the endpoint configuration or as the `CamelDropboxQuery` header on the Camel message.|
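The matching rule described for `query` can be sketched in plain Java (illustrative only; the real search is performed by the Dropbox API, and the case-insensitive comparison here is an assumption of the sketch):

```java
import java.util.Locale;

// Sketch of the rule above: the query is a space-separated list of
// sub-strings, and a file matches only if it contains all of them.
public class DropboxQueryMatcher {

    public static boolean matches(String fileName, String query) {
        if (query == null || query.isBlank()) {
            return true; // no query: every file matches
        }
        String name = fileName.toLowerCase(Locale.ROOT);
        for (String part : query.trim().split("\\s+")) {
            if (!name.contains(part.toLowerCase(Locale.ROOT))) {
                return false; // one missing sub-string fails the match
            }
        }
        return true;
    }
}
```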

+ +## Samples + + from("dropbox://search?accessToken=XXX&clientIdentifier=XXX&expireIn=1000&refreshToken=XXXX" + +"&apiKey=XXXXX&apiSecret=XXXXXX&remotePath=/XXX&query=XXX") + .to("mock:result"); + + from("direct:start") + .setHeader("CamelDropboxQuery", constant("XXX")) + .to("dropbox://search?accessToken=XXX&clientIdentifier=XXX&expireIn=1000&refreshToken=XXXX" + +"&apiKey=XXXXX&apiSecret=XXXXXX&remotePath=/XXX") + .to("mock:result"); + +## Result Message Body + +The following objects are set on message body result: + + ++++ + + + + + + + + + + + + +
|Object type|Description|
|---|---|
|`List<com.dropbox.core.v2.files.SearchMatchV2>`|list of the file paths found. For more information on this object, refer to the Dropbox documentation.|

+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|operation|The specific action (typically is a CRUD action) to perform on Dropbox remote folder.||object| +|clientIdentifier|Name of the app registered to make API requests||string| +|query|A space-separated list of sub-strings to search for. A file matches only if it contains all the sub-strings. If this option is not set, all files will be matched.||string| +|remotePath|Original file or folder to move||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|localPath|Optional folder or file to upload on Dropbox from the local filesystem. 
If this option has not been configured then the message body is used as the content to upload.||string| +|newRemotePath|Destination file or folder||string| +|uploadMode|Which mode to upload. in case of add the new file will be renamed if a file with the same name already exists on dropbox. in case of force if a file with the same name already exists on dropbox, this will be overwritten.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|client|To use an existing DbxClient instance as Dropbox client.||object| +|accessToken|The access token to make API requests for a specific Dropbox user||string| +|apiKey|The apiKey to make API requests for a specific Dropbox user||string| +|apiSecret|The apiSecret to make API requests for a specific Dropbox user||string| +|expireIn|The expire time to access token for a specific Dropbox user||integer| +|refreshToken|The refresh token to refresh the access token for a specific Dropbox user||string| diff --git a/camel-dynamic-router-control.md b/camel-dynamic-router-control.md new file mode 100644 index 0000000000000000000000000000000000000000..fd9b4de1b0155f04add6eff1a1999c9e53399a63 --- /dev/null +++ b/camel-dynamic-router-control.md @@ -0,0 +1,275 @@ +# Dynamic-router-control + +**Since Camel 4.4** + +**Only producer is supported** + +The Dynamic Router Control endpoint is a special type of endpoint in the +Dynamic Router component where routing participants can subscribe or +unsubscribe 
dynamically at runtime. By sending control messages to this +endpoint, participants can specify their own routing rules and alter the +dynamic rule base of the Dynamic Router component in real-time. +Participants can choose between using URI query parameters, and sending +a control message as the exchange message body. + +# URI format + + dynamic-router-control:controlAction[?options] + +# Subscribing + +Subscribing can be achieved by using query parameters in the control +endpoint URI, or by sending a `DynamicRouterControlMessage` to the +control endpoint URI. + +## URI examples + +**Example Java URI `RouteBuilder` Subscription** + + // Send a subscribe request to the dynamic router that will match every exchange and route messages to the URI: "direct:myDestination" + from("direct:subscribe").to("dynamic-router-control:subscribe?subscribeChannel=myChannel&subscriptionId=mySubId&destinationUri=direct:myDestination&priority=5&predicate=true&expressionLanguage=simple"); + +**Example Java URI `ProducerTemplate` Subscription** + + CamelContext context = new DefaultCamelContext(); + context.start(); + ProducerTemplate template = context.createProducerTemplate(); + RouteBuilder.addRoutes(context, rb -> { + // Route for subscriber destination + rb.from("direct:myDestination") + .to("log:dynamicRouterExample?showAll=true&multiline=true"); + // Route for subscribing + rb.from("direct://subscribe") + .toD("dynamic-router-control://subscribe" + + "?subscribeChannel=${header.subscribeChannel}" + + "&subscriptionId=${header.subscriptionId}" + + "&destinationUri=${header.destinationUri}" + + "&priority=${header.priority}" + + "&predicateBean=${header.predicateBean}"); + }); + Predicate predicate = PredicateBuilder.constant(true); + context.getRegistry().bind("predicate", predicate); + template.sendBodyAndHeaders("direct:subscribe", "", + Map.of("subscribeChannel", "test", + "subscriptionId", "testSubscription1", + "destinationUri", "direct:myDestination", + "priority", "1", + 
"predicateBean", "predicate")); + +Above, because the control URI is dynamic, and since a +`ProducerTemplate` does not have a built-in way to send to a dynamic +URI, we have to send subscription parameters from a `ProducerTemplate` +in a different way. The dynamic-aware endpoint uses headers "under the +hood", because the URI params are converted to headers, so we can set +the headers deliberately. + +## DynamicRouterControlMessage example + +**Example Java `DynamicRouterControlMessage` Subscription** + + DynamicRouterControlMessage controlMessage = DynamicRouterControlMessage.newBuilder() + .subscribeChannel("myChannel") + .subscriptionId("mySubId") + .destinationUri("direct:myDestination") + .priority(5) + .predicate("true") + .expressionLanguage("simple") + .build(); + producerTemplate.sendBody("dynamic-router-control:subscribe", controlMessage); + +# Unsubscribing + +Like subscribing, unsubscribing can also be achieved by using query +parameters in the control endpoint URI, or by sending a +`DynamicRouterControlMessage` to the control endpoint URI. The +difference is that unsubscribing can be achieved by using either one or +two parameters. 
+ +## URI examples + +**Example Java URI `RouteBuilder` Unsubscribe** + + from("direct:subscribe").to("dynamic-router-control:unsubscribe?subscribeChannel=myChannel&subscriptionId=mySubId"); + +**Example Java URI `ProducerTemplate` Unsubscribe** + + CamelContext context = new DefaultCamelContext(); + context.start(); + ProducerTemplate template = context.createProducerTemplate(); + RouteBuilder.addRoutes(context, rb -> { + // Route for unsubscribing + rb.from("direct://unsubscribe") + .toD("dynamic-router-control://unsubscribe" + + "?subscribeChannel=${header.subscribeChannel}" + + "&subscriptionId=${header.subscriptionId}"); + }); + template.sendBodyAndHeaders("direct:unsubscribe", "", + Map.of("subscribeChannel", "test", + "subscriptionId", "testSubscription1")); + +Above, because the control URI is dynamic, we have to send it from a +`ProducerTemplate` in a different way. The dynamic-aware endpoint uses +headers, rather than URI params, so we set the headers deliberately. + +## DynamicRouterControlMessage example + +**Example Java `DynamicRouterControlMessage` Unsubscribe** + + DynamicRouterControlMessage controlMessage = DynamicRouterControlMessage.newBuilder() + .subscribeChannel("myChannel") + .subscriptionId("mySubId") + .build(); + producerTemplate.sendBody("dynamic-router-control:unsubscribe", controlMessage); + +# The Dynamic Rule Base + +To determine if an exchange is suitable for any of the participants, all +predicates for the participants that are subscribed to the channel are +evaluated until the first result of "true" is found, by default. If the +Dynamic Router is configured with the `recipientMode` set to `allMatch`, +then all recipients with matching predicates will be selected. The +exchange will be routed to the corresponding endpoint(s). The rule base +contains a default filter registered at the least priority (which is the +highest integer number). 
Like the "default" case of a switch statement
in Java, any message that is not suitable for any registered
participant will be processed by this filter. By default, the filter
logs information about the dropped message at **debug** level. To turn
the level up to **warn**, include `warnDroppedMessage=true` in the
component URI.

Rules are registered in a channel, and they are logically separate from
rules in another channel. Subscription IDs must be unique within a
channel, although multiple subscriptions of the same name may coexist in
a dynamic router instance if they are in separate channels.

The Dynamic Router uses [Predicate](#manual::predicate.adoc) expressions
as rules. Any valid predicate, simple or compound, may be used to
determine the suitability of exchanges for a participating recipient.
The complete Predicate documentation is worth reading; an example simple
predicate might look like the following:

**Example simple predicate**

    // The "messageType" must be "payment"
    Predicate msgType = header("messageType").isEqualTo("payment");

# JMX Control and Monitoring Operations

The Dynamic Router Control component supports some JMX operations that
allow you to control and monitor the component. It is beyond the scope
of this document to go into detail about JMX, so this is a list of the
supported operations. For more information about JMX, see the
[JMX](#manual::jmx.adoc) documentation.

**Subscribing with a predicate expression**

    String subscribeWithPredicateExpression(String, String, String, int, String, String, boolean)

This operation provides the ability to subscribe to a channel with a
predicate expression.
The parameters, in order, are as follows:

- subscription ID

- channel name

- destination URI

- priority

- predicate expression

- expression language

- update the subscription (true), or add a new one (false)

**Subscribing with a predicate bean**

    String subscribeWithPredicateBean(String, String, String, int, String, boolean)

This operation provides the ability to subscribe to a channel with the
name of a Predicate that has been bound in the registry. The parameters,
in order, are as follows:

- subscription ID

- channel name

- destination URI

- priority

- predicate bean name

- update the subscription (true), or add a new one (false)

**Subscribing with a predicate instance**

    String subscribeWithPredicateInstance(String, String, String, int, Object, boolean)

This operation provides the ability to subscribe to a channel with an
instance of a Predicate. The parameters, in order, are as follows:

- subscription ID

- channel name

- destination URI

- priority

- predicate instance

- update the subscription (true), or add a new one (false)

**Unsubscribing**

    boolean removeSubscription(String, String)

This operation provides the ability to unsubscribe from a channel. The
parameters, in order, are as follows:

- subscription ID

- channel name

**Getting the subscriptions map**

    Map<String, Set<PrioritizedFilter>> getSubscriptionsMap()

This operation provides the ability to get the subscriptions map. The
map is keyed by channel name, and the values are a set of prioritized
filters.

**Getting the subscriptions statistics map**

    Map<String, List<PrioritizedFilterStatistics>> getSubscriptionsStatisticsMap()

This operation provides the ability to get the subscriptions statistics
map. The map is keyed by channel name, and the values are a list of
prioritized filter statistics, including the number of messages that
have matched the filter, and had the exchange sent to the destination
URI.
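The prioritized, first-match-by-default selection described in "The Dynamic Rule Base" can be sketched in plain Java (no Camel dependencies; all names here are illustrative, not the component's internal API):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.function.Predicate;

// Plain-Java sketch of the dynamic rule base: filters are kept in
// priority order (lower number wins), and recipients are chosen either
// first-match-only (the default) or all-match.
public class RuleBaseSketch {

    record Filter(String id, int priority, Predicate<String> predicate, String destinationUri) {}

    private final List<Filter> filters = new ArrayList<>();

    public void subscribe(Filter f) {
        filters.add(f);
        filters.sort(Comparator.comparingInt(Filter::priority));
    }

    /** Resolve destination URIs for a message body. */
    public List<String> route(String body, boolean allMatch) {
        List<String> destinations = new ArrayList<>();
        for (Filter f : filters) {
            if (f.predicate().test(body)) {
                destinations.add(f.destinationUri());
                if (!allMatch) {
                    break; // default mode: first matching filter only
                }
            }
        }
        return destinations;
    }
}
```

A message that matches no filter resolves to an empty list, which corresponds to the default drop-and-log filter described above.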
+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|controlAction|Control action||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|destinationUri|The destination URI for exchanges that match.||string| +|expressionLanguage|The subscription predicate language.|simple|string| +|predicate|The subscription predicate.||string| +|predicateBean|A Predicate instance in the registry.||object| +|priority|The subscription priority.||integer| +|subscribeChannel|The channel to subscribe to||string| +|subscriptionId|The subscription ID; if unspecified, one will be assigned and returned.||string| diff --git a/camel-dynamic-router.md b/camel-dynamic-router.md new file mode 100644 index 0000000000000000000000000000000000000000..d684018562a38733abbc2387c3e472d8bd5fa2b1 --- /dev/null +++ b/camel-dynamic-router.md @@ -0,0 +1,453 @@ +# Dynamic-router + +**Since Camel 3.15** + +**Only producer is supported** + +The Dynamic Router Component is an implementation of the Dynamic Router +EIP. Participants may send subscription messages over a special control +channel, at runtime, to specify the conditions under which messages are +routed to their endpoint (also provided in the control channel message). +In this way, the Dynamic Router is an extension of the content-based +router EIP. When a recipient wishes to remove itself, it can also send a +message to unsubscribe. + +Note that, while Camel Core contains an implementation of the Dynamic +Router EIP, this component is a completely separate implementation that +aims to be a closer reflection of the EIP description. The main +differences between the Core implementation and this component +implementation are as follows: + +*Control Channel* +A reserved communication channel by which routing participants can +subscribe or unsubscribe to receiving messages that meet their criteria. + +- **core**: does not have a communication channel for control + messages. 
Perhaps the "re-circulation" behavior, discussed below, is + the core Dynamic Router’s control channel interpretation. + +- **component**: provides a control channel for participants to + subscribe and unsubscribe with control messages that contain a + `Predicate` to determine `Exchange` suitability, and the `Endpoint` + URI that a matching `Exchange` will be sent to. + +*Dynamic Rule Base* +The Dynamic Router should have a list of routing recipients' criteria +that define the terms under which an exchange is suitable for them to +receive. + +- **core**: implements a dynamic version of a `Routing Slip` for this + purpose, but that is not inherently dynamic in terms of its content. + If the content of this slip is dynamic, it will be up to the user to + define and implement that capability. + +- **component**: builds the rule base at runtime, and maintains it as + participants subscribe or unsubscribe via the control channel. + +*Message Re-Circulation* +The Dynamic Router EIP description does not specify any message +re-circulation behavior. + +- **core**: provides a feature that continuously routes the exchange + to a recipient, then back through the dynamic router, until a + recipient returns `null` to signify routing termination. This may be + an interpretation of the control channel feature. + +- **component**: does not provide a re-circulation feature. If this is + the desired behavior, the user will have to define and implement + this behavior. E.g., create a simple route to send a response back + through the Dynamic Router under some condition(s). + +For some use cases, the core Dynamic Router will be more appropriate. In +other cases, the Dynamic Router Component will be a better fit. + +# URI format + + dynamic-router:channel[?options] + +The `channel` is the routing channel that allows messaging to be +logically separate from other channels. Any string that can be included +in a URI is a valid channel name. 
Each channel can have a set of
participant subscriptions, and can consume messages to be routed to
appropriate recipients. The only reserved channel is the `control`
channel. This is a single channel that handles control messages for
participants to subscribe or unsubscribe for messaging over a desired
channel.

These messages will be described in greater detail below, with examples.

# Usage

Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-dynamic-router</artifactId>
        <version>x.x.x</version>
        <!-- use the same version as your Camel core version -->
    </dependency>

Gradle users will need to add the following dependency to their
`build.gradle` for this component:

    implementation group: 'org.apache.camel', name: 'camel-dynamic-router', version: 'x.x.x'
    // use the same version as your Camel core version

The Dynamic Router component is used in the same way that other
components are used. Include the dynamic-router URI as a consumer in a
route, along with the channel name.

Java
**Example Java DSL Route Definition**

    // Send a message to the Dynamic Router channel named "orders"
    from("direct:start").to("dynamic-router:orders");

Spring XML
**Example XML Route Definition**

    <route>
        <from uri="direct:start"/>
        <to uri="dynamic-router:orders"/>
    </route>

# Dynamic Router EIP Component Use Cases

The benefit of the Dynamic Router EIP Component can best be seen,
perhaps, through looking at some use cases. These examples are not the
only possibilities with this component, but they show the basics of two
main usages: message routing within a single JVM, and message routing
across multiple JVMs.

## Dynamic Router within a single JVM or Application

The Dynamic Router EIP component can receive messages from a single
source and dispatch them to interested recipients. If we have a simple
point-of-sale application, we might have services that:

1. Process orders

2. Adjust inventory counts

3. Process returns

For this example, the exact steps that each service carries out matter
less than the fact that each service needs to be notified under the
right condition(s). So, each service will subscribe to participate in
routing:

**Orders processing service subscription**

    DynamicRouterControlMessage controlMessage = DynamicRouterControlMessage.newBuilder()
        .subscribeChannel("orders")
        .subscriptionId("orderProcessing")
        .destinationUri("direct:orders")
        .priority(5)
        .predicate("${headers.command} == 'processOrder'")
        .expressionLanguage("simple")
        .build();
    producerTemplate.sendBody("dynamic-router-control:subscribe", controlMessage);

**Inventory service subscription**

    DynamicRouterControlMessage controlMessage = DynamicRouterControlMessage.newBuilder()
        .subscribeChannel("orders")
        .subscriptionId("inventoryProcessing")
        .destinationUri("direct:inventory")
        .priority(5)
        .predicate("${headers.command} == 'processOrder' || ${headers.command} == 'processReturn'")
        .expressionLanguage("simple")
        .build();
    producerTemplate.sendBody("dynamic-router-control:subscribe", controlMessage);

**Returns processing service subscription**

    DynamicRouterControlMessage controlMessage = DynamicRouterControlMessage.newBuilder()
        .subscribeChannel("orders")
        .subscriptionId("returnsProcessing")
        .destinationUri("direct:returns")
        .priority(5)
        .predicate("${headers.command} == 'processReturn'")
        .expressionLanguage("simple")
        .build();
    producerTemplate.sendBody("dynamic-router-control:subscribe", controlMessage);

Above, we have the Orders service subscribing for all messages where the
`command` header is "processOrder", and the Returns service subscribing
for all messages where the `command` header is "processReturn".
The
+Inventory service is interested in **both** types of messages, since it
+must deduct from the inventory when an order request comes through, and
+it must add to inventory counts when a return request comes through. So,
+for either type of message, two services will be notified.
+
+The order messages get sent to the dynamic router:
+
+**Routing order/return request messages**
+
+    from("direct:start")
+        .process(myOrderProcessor)
+        .to("dynamic-router:orders");
+
+Note the `.process(myOrderProcessor)` step. If incoming messages need to
+be validated, enriched, transformed, or otherwise augmented, that can be
+done before the Dynamic Router receives the message. Then, when the
+Dynamic Router receives a message, it checks the `Exchange` against all
+subscriptions for the *orders* channel to determine if it is suitable
+for any of the recipients. Orders should have a header (`command` →
+`processOrder`), so the message will be routed to the *orders* service,
+and the inventory service. The system will process the order details,
+and then the inventory service will deduct from merchandise counts.
+Likewise, returns should have a header (`command` → `processReturn`), so
+the message will be routed to the returns service, where the return
+details will be processed, and the inventory service will increase the
+relevant merchandise counts.
+
+### Further learning: a complete Spring Boot example
+
+In the `camel-spring-boot-examples` project, the `dynamic-router-eip`
+module serves as a complete example in this category that you can run
+and/or experiment with to get a practical feel for how you might use
+this in your own single-JVM application.
+
+## Dynamic Router across multiple JVMs or Applications
+
+The Dynamic Router EIP component is particularly well-suited to serve as
+the primary orchestration mechanism between various applications and
+services that comprise an application stack. 
Note that the Dynamic
+Router cannot achieve this by itself, and some other transport is
+required to allow messages to pass between services that exist in
+separate JVMs. For example, a message transport like Kafka or Artemis
+could be used, with a serialization format such as Protocol Buffers for
+the message payloads.
+
+Let’s look at the point-of-sale example in a different context. In a
+microservice architecture, this system would have several separate
+application modules, with the orders service, inventory service, and
+returns service each contained within their own microservice (application).
+Similar to the single-JVM example, all services will subscribe, but they
+will need to send their subscriptions through a transport that can
+communicate to another JVM. Their subscriptions might look like:
+
+**Orders processing service subscription**
+
+    DynamicRouterControlMessage controlMessage = DynamicRouterControlMessage.newBuilder()
+        .subscribeChannel("orders")
+        .subscriptionId("orderProcessing")
+        .destinationUri("direct:orders")
+        .priority(5)
+        .predicate("{headers.command == 'processOrder'}")
+        .expressionLanguage("simple")
+        .build();
+    ObjectMapper mapper = new ObjectMapper(new JsonFactory());
+    producerTemplate.sendBody("kafka://subscriptions", mapper.writeValueAsString(controlMessage));
+
+**Inventory service subscription**
+
+    DynamicRouterControlMessage controlMessage = DynamicRouterControlMessage.newBuilder()
+        .subscribeChannel("orders")
+        .subscriptionId("inventoryProcessing")
+        .destinationUri("direct:inventory")
+        .priority(5)
+        .predicate("{headers.command == 'processOrder' or headers.command == 'processReturn'}")
+        .expressionLanguage("simple")
+        .build();
+    ObjectMapper mapper = new ObjectMapper(new JsonFactory());
+    producerTemplate.sendBody("kafka://subscriptions", mapper.writeValueAsString(controlMessage));
+
+**Returns processing service subscription**
+
+    DynamicRouterControlMessage controlMessage = DynamicRouterControlMessage.newBuilder()
+        .subscribeChannel("orders")
+        
.subscriptionId("returnsProcessing")
+        .destinationUri("direct:returns")
+        .priority(5)
+        .predicate("{headers.command == 'processReturn'}")
+        .expressionLanguage("simple")
+        .build();
+    ObjectMapper mapper = new ObjectMapper(new JsonFactory());
+    producerTemplate.sendBody("kafka://subscriptions", mapper.writeValueAsString(controlMessage));
+
+In another module, additional routing will serve as a bridge to get the
+message from Kafka to the control channel of the Dynamic Router:
+
+**Bridge from Kafka to the Dynamic Router control channel**
+
+    RouteBuilder subscriptionRouter() {
+        return new RouteBuilder(camelContext) {
+            @Override
+            public void configure() {
+                from("kafka:subscriptions")
+                    .unmarshal().json(DynamicRouterControlMessage.class)
+                    .to("dynamic-router-control:subscribe");
+            }
+        };
+    }
+
+Order requests or return requests might also arrive via Kafka. The route
+is essentially the same as the route in the single-JVM example. Instead
+of forwarding the incoming message, as-is, from the "direct" component
+to the router, the messages are deserialized from a String, and
+converted to an instance of the "order" object. Then, it can be sent to
+the Dynamic Router for evaluation and distribution to the appropriate
+subscribing recipients:
+
+**Routing order/return request messages from Kafka to the Dynamic
+Router**
+
+    from("kafka://orders")
+        .unmarshal().json(MyOrderMessage.class)
+        .process(myOrderProcessor)
+        .to("dynamic-router:orders");
+
+Note the `.process(myOrderProcessor)` step. If incoming messages need to
+be validated, enriched, transformed, or otherwise augmented, that can be
+done before the Dynamic Router receives the message. Then, when the
+Dynamic Router receives a message, it checks the `Exchange` against all
+subscriptions for the "orders" channel to determine if it is suitable
+for any of the recipients. 
Orders should have a header (`command` → +`processOrder`), so the message will be routed to the orders service, +and the inventory service. The system will process the order details, +and then the inventory service will deduct from merchandise counts. +Likewise, returns should have a header (`command` → `processReturn`), so +the message will be routed to the returns service, where the return +details will be processed, and the inventory service will increase the +relevant merchandise counts. + +### Further learning: a complete Spring Boot example + +In the `camel-spring-boot-examples` project, the +`dynamic-router-eip-multimodule` module serves as a complete example in +this category that you can run and/or experiment with to get a practical +feel for how you might use this in your own multi-JVM application stack. + +# JMX Control and Monitoring Operations + +The Dynamic Router Control component supports some JMX operations that +allow you to control and monitor the component. It is beyond the scope +of this document to go into detail about JMX, so this is a list of the +operations that are supported. For more information about JMX, see the +[JMX](#manual::jmx.adoc) documentation. + +**Subscribing with a predicate expression** + + String subscribeWithPredicateExpression(String, String, String, int, String, String, boolean) + +This operation provides the ability to subscribe to a channel with a +predicate expression. The parameters, in order, are as follows: + +- subscription ID + +- channel name + +- destination URI + +- priority + +- predicate expression + +- expression language + +- update the subscription (true), or add a new one (false) + +**Subscribing with a predicate bean** + + String subscribeWithPredicateBean(String, String, String, int, String, boolean) + +This operation provides the ability to subscribe to a channel with the +name of a Predicate that has been bound in the registry. 
The parameters, +in order, are as follows: + +- subscription ID + +- channel name + +- destination URI + +- priority + +- predicate bean name + +- update the subscription (true), or add a new one (false) + +**Subscribing with a predicate instance** + + String subscribeWithPredicateInstance(String, String, String, int, Object, boolean) + +This operation provides the ability to subscribe to a channel with an +instance of a Predicate. The parameters, in order, are as follows: + +- subscription ID + +- channel name + +- destination URI + +- priority + +- predicate instance + +- update the subscription (true), or add a new one (false) + +**Unsubscribing** + + boolean removeSubscription(String, String) + +This operation provides the ability to unsubscribe from a channel. The +parameters, in order, are as follows: + +- subscription ID + +- channel name + +**Getting the subscriptions map** + + Map> getSubscriptionsMap() + +This operation provides the ability to get the subscriptions map. The +map is keyed by channel name, and the values are a set of prioritized +filters. + +**Getting the subscriptions statistics map** + + Map> getSubscriptionsStatisticsMap() + +This operation provides the ability to get the subscriptions statistics +map. The map is keyed by channel name, and the values are a list of +prioritized filter statistics, including the number of messages that +have matched the filter, and had the exchange sent to the destination +URI. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|channel|Channel for the Dynamic Router. For example, if the Dynamic Router URI is dynamic-router://test, then the channel is test. Channels are a way of keeping routing participants, their rules, and exchanges logically separate from the participants, rules, and exchanges on other channels. This can be seen as analogous to VLANs in networking.||string|
+|aggregationStrategy|Refers to an AggregationStrategy to be used to assemble the replies from the multicasts, into a single outgoing message from the Multicast. By default, Camel will use the last reply as the outgoing message. You can also use a POJO as the AggregationStrategy.||string|
+|aggregationStrategyBean|Refers to an AggregationStrategy to be used to assemble the replies from the multicasts, into a single outgoing message from the Multicast. By default, Camel will use the last reply as the outgoing message. You can also use a POJO as the AggregationStrategy.||object|
+|aggregationStrategyMethodAllowNull|If this option is false then the aggregate method is not used if there was no data to enrich. If this option is true then null values are used as the oldExchange (when there is no data to enrich), when using POJOs as the AggregationStrategy|false|boolean|
+|aggregationStrategyMethodName|You can use a POJO as the AggregationStrategy. 
This refers to the name of the method that aggregates the exchanges.||string|
+|cacheSize|When caching producer endpoints, this is the size of the cache. Default is 100.|100|integer|
+|executorService|Refers to a custom Thread Pool to be used for parallel processing. Notice that, if you set this option, then parallel processing is automatically implied, and you do not have to enable that option in addition to this one.||string|
+|executorServiceBean|Refers to a custom Thread Pool to be used for parallel processing. Notice that, if you set this option, then parallel processing is automatically implied, and you do not have to enable that option in addition to this one.||object|
+|ignoreInvalidEndpoints|Ignore the invalid endpoint exception when attempting to create a producer with an invalid endpoint.|false|boolean|
+|onPrepare|Uses the Processor when preparing the org.apache.camel.Exchange to be sent. This can be used to deep-clone messages that should be sent, or to provide any custom logic that is needed before the exchange is sent. This is the name of a bean in the registry.||string|
+|onPrepareProcessor|Uses the Processor when preparing the org.apache.camel.Exchange to be sent. This can be used to deep-clone messages that should be sent, or to provide any custom logic that is needed before the exchange is sent. This is a Processor instance.||object|
+|parallelAggregate|If enabled then the aggregate method on AggregationStrategy can be called concurrently. Notice that this would require the implementation of AggregationStrategy to be implemented as thread-safe. By default, this is false, meaning that Camel synchronizes the call to the aggregate method. Though, in some use-cases, this can be used to achieve higher performance when the AggregationStrategy is implemented as thread-safe.|false|boolean|
+|parallelProcessing|If enabled, then sending via multicast occurs concurrently. 
Note that the caller thread will still wait until all messages have been fully processed before it continues. It is only the sending and processing of the replies from the multicast recipients that happens concurrently. When parallel processing is enabled, then the Camel routing engine will continue processing using the last used thread from the parallel thread pool. However, if you want to use the original thread that called the multicast, then make sure to enable the synchronous option as well.|false|boolean| +|recipientMode|Recipient mode: firstMatch or allMatch|firstMatch|string| +|shareUnitOfWork|Shares the org.apache.camel.spi.UnitOfWork with the parent and each of the sub messages. Multicast will, by default, not share a unit of work between the parent exchange and each multicasted exchange. This means each sub exchange has its own individual unit of work.|false|boolean| +|stopOnException|Will stop further processing if an exception or failure occurred during processing of an org.apache.camel.Exchange and the caused exception will be thrown. Will also stop if processing the exchange failed (has a fault message), or an exception was thrown and handled by the error handler (such as using onException). In all situations, the multicast will stop further processing. This is the same behavior as in the pipeline that is used by the routing engine. The default behavior is to not stop, but to continue processing until the end.|false|boolean| +|streaming|If enabled, then Camel will process replies out-of-order (e.g., in the order they come back). If disabled, Camel will process replies in the same order as defined by the multicast.|false|boolean| +|synchronous|Sets whether synchronous processing should be strictly used. When enabled then the same thread is used to continue routing after the multicast is complete, even if parallel processing is enabled.|false|boolean| +|timeout|Sets a total timeout specified in milliseconds, when using parallel processing. 
If the Multicast has not been able to send and process all replies within the given timeframe, then the timeout triggers and the Multicast breaks out and continues. Notice that, if you provide a TimeoutAwareAggregationStrategy, then the timeout method is invoked before breaking out. If the timeout is reached with running tasks still remaining, certain tasks (for which it is difficult for Camel to shut down in a graceful manner) may continue to run. So use this option with a bit of care.|-1|integer| +|warnDroppedMessage|Flag to log a warning if no predicates match for an exchange.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-ehcache.md b/camel-ehcache.md new file mode 100644 index 0000000000000000000000000000000000000000..29a93884244c8d9795800b801bbf2dc3f8cbf845 --- /dev/null +++ b/camel-ehcache.md @@ -0,0 +1,150 @@ +# Ehcache + +**Since Camel 2.18** + +**Both producer and consumer are supported** + +The Ehcache component enables you to perform caching operations using +Ehcache 3 as the Cache Implementation. + +The Cache consumer is an event based consumer and can be used to listen +and respond to specific cache activities. 
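+
+For example, an event-based consumer route might look like the
+following sketch. The cache name, registry binding, and event types are
+illustrative, not prescriptive (a `CacheManager` is assumed to be bound
+in the registry as `cacheManager`):
+
+    // React to entry creation and update events on the hypothetical
+    // "myCache" cache; the eventTypes option limits which cache events
+    // are delivered to this consumer
+    from("ehcache://myCache?cacheManager=#cacheManager&eventTypes=CREATED,UPDATED")
+        .log("Cache event for key ${header.CamelEhcacheKey}");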
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-ehcache</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    ehcache://cacheName[?options]
+
+# Ehcache based idempotent repository example:
+
+    CacheManager manager = CacheManagerBuilder.newCacheManager(new XmlConfiguration("ehcache.xml"));
+    EhcacheIdempotentRepository repo = new EhcacheIdempotentRepository(manager, "idempotent-cache");
+
+    from("direct:in")
+        .idempotentConsumer(header("messageId"), repo)
+        .to("mock:out");
+
+# Ehcache based aggregation repository example:
+
+    public class EhcacheAggregationRepositoryRoutesTest extends CamelTestSupport {
+        private static final String ENDPOINT_MOCK = "mock:result";
+        private static final String ENDPOINT_DIRECT = "direct:one";
+        private static final int[] VALUES = generateRandomArrayOfInt(10, 0, 30);
+        private static final int SUM = IntStream.of(VALUES).reduce(0, (a, b) -> a + b);
+        private static final String CORRELATOR = "CORRELATOR";
+
+        @EndpointInject(ENDPOINT_MOCK)
+        private MockEndpoint mock;
+
+        @Produce(uri = ENDPOINT_DIRECT)
+        private ProducerTemplate producer;
+
+        @Test
+        public void checkAggregationFromOneRoute() throws Exception {
+            mock.expectedMessageCount(VALUES.length);
+            mock.expectedBodiesReceived(SUM);
+
+            IntStream.of(VALUES).forEach(
+                i -> producer.sendBodyAndHeader(i, CORRELATOR, CORRELATOR)
+            );
+
+            mock.assertIsSatisfied();
+        }
+
+        private Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
+            if (oldExchange == null) {
+                return newExchange;
+            } else {
+                Integer n = newExchange.getIn().getBody(Integer.class);
+                Integer o = oldExchange.getIn().getBody(Integer.class);
+                Integer v = (o == null ? 
0 : n);
+
+                oldExchange.getIn().setBody(v, Integer.class);
+
+                return oldExchange;
+            }
+        }
+
+        @Override
+        protected RoutesBuilder createRouteBuilder() throws Exception {
+            return new RouteBuilder() {
+                @Override
+                public void configure() throws Exception {
+                    from(ENDPOINT_DIRECT)
+                        .routeId("AggregatingRouteOne")
+                        .aggregate(header(CORRELATOR))
+                        .aggregationRepository(createAggregateRepository())
+                        .aggregationStrategy(EhcacheAggregationRepositoryRoutesTest.this::aggregate)
+                        .completionSize(VALUES.length)
+                        .to("log:org.apache.camel.component.ehcache.processor.aggregate?level=INFO&showAll=true&multiline=true")
+                        .to(ENDPOINT_MOCK);
+                }
+            };
+        }
+
+        protected EhcacheAggregationRepository createAggregateRepository() throws Exception {
+            CacheManager cacheManager = CacheManagerBuilder.newCacheManager(new XmlConfiguration("ehcache.xml"));
+            cacheManager.init();
+
+            EhcacheAggregationRepository repository = new EhcacheAggregationRepository();
+            repository.setCacheManager(cacheManager);
+            repository.setCacheName("aggregate");
+
+            return repository;
+        }
+    }
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|cacheManager|The cache manager||object|
+|cacheManagerConfiguration|The cache manager configuration||object|
+|configurationUri|URI pointing to the Ehcache XML configuration file's location||string|
+|createCacheIfNotExist|Configure whether a cache needs to be created if it does not exist, or can't be pre-configured.|true|boolean|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. 
In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|eventFiring|Set the delivery mode (synchronous, asynchronous)|ASYNCHRONOUS|object| +|eventOrdering|Set the delivery mode (ordered, unordered)|ORDERED|object| +|eventTypes|Set the type of events to listen for (EVICTED,EXPIRED,REMOVED,CREATED,UPDATED). You can specify multiple entries separated by comma.||string| +|action|To configure the default cache action. If an action is set in the message header, then the operation from the header takes precedence.||string| +|key|To configure the default action key. If a key is set in the message header, then the key from the header takes precedence.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|configuration|The default cache configuration to be used to create caches.||object|
+|configurations|A map of cache configuration to be used to create caches.||object|
+|keyType|The cache key type, default java.lang.Object||string|
+|valueType|The cache value type, default java.lang.Object||string|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|cacheName|the cache name||string|
+|cacheManager|The cache manager||object|
+|cacheManagerConfiguration|The cache manager configuration||object|
+|configurationUri|URI pointing to the Ehcache XML configuration file's location||string|
+|createCacheIfNotExist|Configure whether a cache needs to be created if it does not exist, or can't be pre-configured.|true|boolean|
+|eventFiring|Set the delivery mode (synchronous, asynchronous)|ASYNCHRONOUS|object|
+|eventOrdering|Set the delivery mode (ordered, unordered)|ORDERED|object|
+|eventTypes|Set the type of events to listen for (EVICTED,EXPIRED,REMOVED,CREATED,UPDATED). You can specify multiple entries separated by comma.||string|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|action|To configure the default cache action. If an action is set in the message header, then the operation from the header takes precedence.||string| +|key|To configure the default action key. If a key is set in the message header, then the key from the header takes precedence.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|configuration|The default cache configuration to be used to create caches.||object|
+|configurations|A map of cache configuration to be used to create caches.||object|
+|keyType|The cache key type, default java.lang.Object||string|
+|valueType|The cache value type, default java.lang.Object||string|
diff --git a/camel-elasticsearch-rest-client.md b/camel-elasticsearch-rest-client.md
new file mode 100644
index 0000000000000000000000000000000000000000..70bd94528a6ff97cb955d4434cb1f23d7a4877ce
--- /dev/null
+++ b/camel-elasticsearch-rest-client.md
@@ -0,0 +1,223 @@
+# Elasticsearch-rest-client
+
+**Since Camel 4.3**
+
+**Only producer is supported**
+
+The ElasticSearch component allows you to interface with
+[ElasticSearch](https://www.elastic.co/products/elasticsearch) 8.x API
+or [OpenSearch](https://opensearch.org/) using the ElasticSearch Java
+Low level Rest Client.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-elasticsearch-rest-client</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    elasticsearch-rest-client://clusterName[?options]
+
+# Elasticsearch Low level Rest Client Operations
+
+The following operations are currently supported.
+
+|operation|message body|description|
+|---|---|---|
+|INDEX_OR_UPDATE|String, byte[], Reader or InputStream content to index or update|Adds or updates content to an index and returns the resulting id in the message body. You can set the name of the target index from the indexName URI parameter option, or by providing a message header with the key INDEX_NAME. When updating indexed content, you must provide its id via a message header with the key ID.|
+|GET_BY_ID|String id of content to retrieve|Retrieves a JSON String representation of the indexed document, corresponding to the given index id, and sets it as the message exchange body. You can set the name of the target index from the indexName URI parameter option, or by providing a message header with the key INDEX_NAME. You must provide the index id of the content to retrieve either in the message body, or via a message header with the key ID.|
+|DELETE|String id of content to delete|Deletes the indexed document corresponding to the given index id and returns a boolean value as the message exchange body, indicating whether the operation was successful. You can set the name of the target index from the indexName URI parameter option, or by providing a message header with the key INDEX_NAME. You must provide the index id of the content to delete either in the message body, or via a message header with the key ID.|
+|CREATE_INDEX||Creates the specified indexName and returns a boolean value as the message exchange body, indicating whether the operation was successful. You can set the name of the target index to create from the indexName URI parameter option, or by providing a message header with the key INDEX_NAME. You may also provide a header with the key INDEX_SETTINGS where the value is a JSON String representation of the index settings.|
+|DELETE_INDEX||Deletes the specified indexName and returns a boolean value as the message exchange body, indicating whether the operation was successful. You can set the name of the target index to delete from the indexName URI parameter option, or by providing a message header with the key INDEX_NAME.|
+|SEARCH|Map (optional)|Search for content with either a Map of String keys & values of query criteria, or a JSON string representation of the query. Matching documents are returned as a JSON string set on the message exchange body. You can set the JSON query String by providing a message header with the key SEARCH_QUERY. You can set the message exchange body to a Map of String keys & values for the query criteria.|

+ +# Index Content Example + +To index some content. + + from("direct:index") + .setBody().constant("{\"content\": \"ElasticSearch With Camel\"}") + .to("elasticsearch-rest-client://myCluster?operation=INDEX_OR_UPDATE&indexName=myIndex"); + +To update existing indexed content, provide the `ID` message header and +the message body with the updated content. + + from("direct:index") + .setHeader("ID").constant("1") + .setBody().constant("{\"content\": \"ElasticSearch REST Client With Camel\"}") + .to("elasticsearch-rest-client://myCluster?operation=INDEX_OR_UPDATE&indexName=myIndex"); + +# Get By ID Example + + from("direct:getById") + .setHeader("ID").constant("1") + .to("elasticsearch-rest-client://myCluster?operation=GET_BY_ID&indexName=myIndex"); + +# Delete Example + +To delete indexed content, provide the `ID` message header. + + from("direct:getById") + .setHeader("ID").constant("1") + .to("elasticsearch-rest-client://myCluster?operation=DELETE&indexName=myIndex"); + +# Create Index Example + +To create a new index. + + from("direct:createIndex") + .to("elasticsearch-rest-client://myCluster?operation=CREATE_INDEX&indexName=myIndex"); + +To create a new index with some custom settings. + + String indexSettings = "{\"settings\":{\"number_of_replicas\": 1,\"number_of_shards\": 3,\"analysis\": {},\"refresh_interval\": \"1s\"},\"mappings\":{\"dynamic\": false,\"properties\": {\"title\": {\"type\": \"text\", \"analyzer\": \"english\"}}}}"; + + from("direct:createIndex") + .setHeader("INDEX_SETTINGS").constant(indexSettings) + .to("elasticsearch-rest-client://myCluster?operation=CREATE_INDEX&indexName=myIndex"); + +# Delete Index Example + +To delete an index. + + from("direct:deleteIndex") + .to("elasticsearch-rest-client://myCluster?operation=DELETE_INDEX&indexName=myIndex"); + +# Search Example + +Search with a JSON query. 

    from("direct:search")
      .setHeader("SEARCH_QUERY").constant("{\"query\":{\"match\":{\"content\":\"ElasticSearch With Camel\"}}}")
      .to("elasticsearch-rest-client://myCluster?operation=SEARCH&indexName=myIndex");

Search on specific field(s) using a `Map`:

    Map<String, String> criteria = new HashMap<>();
    criteria.put("content", "Camel");

    from("direct:search")
      .setBody().constant(criteria)
      .to("elasticsearch-rest-client://myCluster?operation=SEARCH&indexName=myIndex");

## Component Configurations

|Name|Description|Default|Type|
|---|---|---|---|
|connectionTimeout|Connection timeout|30000|integer|
|hostAddressesList|List of host addresses; multiple hosts can be separated by comma.||string|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|socketTimeout|Socket timeout|30000|integer|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
|enableSniffer|Enable the sniffer|false|boolean|
|restClient|Rest Client of type org.elasticsearch.client.RestClient. This is only for advanced usage||object|
|sniffAfterFailureDelay|Sniffer after failure delay (in millis)|60000|integer|
|snifferInterval|Sniffer interval (in millis)|60000|integer|
|certificatePath|Certificate path||string|
|password|Password||string|
|user|Username||string|

## Endpoint Configurations

|Name|Description|Default|Type|
|---|---|---|---|
|clusterName|Cluster name||string|
|connectionTimeout|Connection timeout|30000|integer|
|hostAddressesList|List of host addresses; multiple hosts can be separated by comma.||string|
|indexName|Index name||string|
|operation|Operation||object|
|socketTimeout|Socket timeout|30000|integer|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|enableSniffer|Enable the sniffer|false|boolean|
|restClient|Rest Client of type org.elasticsearch.client.RestClient.
This is only for advanced usage||object|
|sniffAfterFailureDelay|Sniffer after failure delay (in millis)|60000|integer|
|snifferInterval|Sniffer interval (in millis)|60000|integer|
|certificatePath|Certificate path||string|
|password|Password||string|
|user|Username||string|
diff --git a/camel-elasticsearch.md b/camel-elasticsearch.md
new file mode 100644
index 0000000000000000000000000000000000000000..d3fcb071cf1d86bf6176c0f688ff24c08e43df3b
--- /dev/null
+++ b/camel-elasticsearch.md
@@ -0,0 +1,375 @@
# Elasticsearch

**Since Camel 3.19**

**Only producer is supported**

The ElasticSearch component allows you to interface with an
[ElasticSearch](https://www.elastic.co/products/elasticsearch) 8.x API
using the Java API Client library.

Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-elasticsearch</artifactId>
        <version>x.x.x</version>
    </dependency>

# URI format

    elasticsearch://clusterName[?options]

# Message Operations

The following ElasticSearch operations are currently supported. Set an
endpoint URI option or exchange header with a key of "operation" and a
value set to one of the following. Some operations also require other
parameters or the message body to be set.

|operation|message body|description|
|---|---|---|
|Index|`Map`, `String`, `byte[]`, `Reader`, `InputStream` or `IndexRequest.Builder` content to index|Adds content to an index and returns the content’s indexId in the body. You can set the name of the target index by setting the message header with the key "indexName". You can set the indexId by setting the message header with the key "indexId".|
|GetById|`String` or `GetRequest.Builder` index id of content to retrieve|Retrieves the document corresponding to the given index id and returns a `GetResponse` object in the body. You can set the name of the target index by setting the message header with the key "indexName". You can set the type of document by setting the message header with the key "documentClass".|
|Delete|`String` or `DeleteRequest.Builder` index id of content to delete|Deletes the document corresponding to the given index id and returns a `Result` object in the body. You can set the name of the target index by setting the message header with the key "indexName".|
|DeleteIndex|`String` or `DeleteIndexRequest.Builder` index name of the index to delete|Deletes the specified index and returns a status code in the body. You can set the name of the target index by setting the message header with the key "indexName".|
|Bulk|`Iterable` or `BulkRequest.Builder` of any type that is already accepted (`DeleteOperation.Builder` for delete operations, `UpdateOperation.Builder` for update operations, `CreateOperation.Builder` for create operations; `byte[]`, `InputStream`, `String`, `Reader`, `Map` or any document type for index operations)|Adds/updates/deletes content from/to an index and returns a `List<BulkResponseItem>` object in the body. You can set the name of the target index by setting the message header with the key "indexName".|
|Search|`Map`, `String` or `SearchRequest.Builder`|Searches the content with the map of query strings. You can set the name of the target index by setting the message header with the key "indexName". You can set the number of hits to return by setting the message header with the key "size". You can set the starting document offset by setting the message header with the key "from".|
|MultiSearch|`MsearchRequest.Builder`|Multiple searches in one request.|
|MultiGet|`Iterable<String>` or `MgetRequest.Builder` the ids of the documents to retrieve|Multiple gets in one request. You can set the name of the target index by setting the message header with the key "indexName".|
|Exists|None|Checks whether the index exists and returns a `Boolean` flag in the body. You must set the name of the target index by setting the message header with the key "indexName".|
|Update|`byte[]`, `InputStream`, `String`, `Reader`, `Map` or any document type content to update|Updates content in an index and returns the content’s indexId in the body. You can set the name of the target index by setting the message header with the key "indexName". You can set the indexId by setting the message header with the key "indexId". Be aware that, unlike the component camel-elasticsearch-rest, by default the expected content of an update request must be the same as what the Update API expects; consequently, if you want to update one part of an existing document, you need to embed the content to update into a "doc" object. To change the default behavior, it is possible to configure it globally at the component level with the option enableDocumentOnlyMode, or per request by setting the header ElasticsearchConstants.PARAM_DOCUMENT_MODE to true.|
|Ping|None|Pings the Elasticsearch cluster and returns true if the ping succeeded, false otherwise.|
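The operation can also be chosen per exchange via the "operation" header instead of being fixed in the endpoint URI. A minimal sketch (the route name `direct:dynamic`, the endpoint name `elasticsearch`, and the index `twitter` are illustrative assumptions, not from the original):

```java
import org.apache.camel.builder.RouteBuilder;

// Sketch: select the Elasticsearch operation per message via headers
// instead of hard-coding it in the endpoint URI.
public class DynamicOperationRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:dynamic")
            // any operation name from the table above works here
            .setHeader("operation", constant("Index"))
            .setHeader("indexName", constant("twitter"))
            .to("elasticsearch://elasticsearch");
    }
}
```

Because the headers take effect per message, one route can serve several operations when callers set the headers themselves.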

# Configure the component and enable basic authentication

To use the Elasticsearch component, it must be configured with a
minimal configuration:

    ElasticsearchComponent elasticsearchComponent = new ElasticsearchComponent();
    elasticsearchComponent.setHostAddresses("myelkhost:9200");
    camelContext.addComponent("elasticsearch", elasticsearchComponent);

For basic authentication with Elasticsearch, or when using a reverse
HTTP proxy in front of the Elasticsearch cluster, set up basic
authentication and SSL on the component as in the example below:

    ElasticsearchComponent elasticsearchComponent = new ElasticsearchComponent();
    elasticsearchComponent.setHostAddresses("myelkhost:9200");
    elasticsearchComponent.setUser("elkuser");
    elasticsearchComponent.setPassword("secure!!");
    elasticsearchComponent.setEnableSSL(true);
    elasticsearchComponent.setCertificatePath(certPath);

    camelContext.addComponent("elasticsearch", elasticsearchComponent);

# Index Example

Below is a simple INDEX example:

    from("direct:index")
      .to("elasticsearch://elasticsearch?operation=Index&indexName=twitter");

**For this operation, you’ll need to specify an `indexId` header.**

A client would simply need to pass a body message containing a `Map` to
the route. The result body contains the indexId created:

    Map<String, String> map = new HashMap<>();
    map.put("content", "test");
    String indexId = template.requestBody("direct:index", map, String.class);

# Search Example

To search on specific field(s) and values, use the `Search` operation.
Pass in the query as a JSON `String` or a `Map`:

    from("direct:search")
      .to("elasticsearch://elasticsearch?operation=Search&indexName=twitter");

    String query = "{\"query\":{\"match\":{\"content\":\"new release of ApacheCamel\"}}}";
    HitsMetadata<?> response = template.requestBody("direct:search", query, HitsMetadata.class);

Search on specific field(s) using a `Map`:

    Map<String, Object> actualQuery = new HashMap<>();
    actualQuery.put("content", "new release of ApacheCamel");

    Map<String, Object> match = new HashMap<>();
    match.put("match", actualQuery);

    Map<String, Object> query = new HashMap<>();
    query.put("query", match);
    HitsMetadata<?> response = template.requestBody("direct:search", query, HitsMetadata.class);

Search using the Elasticsearch scroll API to fetch all results:

    from("direct:search")
      .to("elasticsearch://elasticsearch?operation=Search&indexName=twitter&useScroll=true&scrollKeepAliveMs=30000");

    String query = "{\"query\":{\"match\":{\"content\":\"new release of ApacheCamel\"}}}";
    try (ElasticsearchScrollRequestIterator response = template.requestBody("direct:search", query, ElasticsearchScrollRequestIterator.class)) {
        // do something smart with results
    }

The [Split EIP](#eips:split-eip.adoc) can also be used:

    from("direct:search")
      .to("elasticsearch://elasticsearch?operation=Search&indexName=twitter&useScroll=true&scrollKeepAliveMs=30000")
      .split()
      .body()
      .streaming()
      .to("mock:output")
      .end();

# MultiSearch Example

Multi-searching on specific field(s) and values uses the `MultiSearch` operation.
Pass in an `MsearchRequest.Builder` instance:

    from("direct:multiSearch")
      .to("elasticsearch://elasticsearch?operation=MultiSearch");

MultiSearch on specific field(s):

    MsearchRequest.Builder builder = new MsearchRequest.Builder().index("twitter").searches(
            new RequestItem.Builder().header(new MultisearchHeader.Builder().build())
                    .body(new MultisearchBody.Builder().query(b -> b.matchAll(x -> x)).build()).build(),
            new RequestItem.Builder().header(new MultisearchHeader.Builder().build())
                    .body(new MultisearchBody.Builder().query(b -> b.matchAll(x -> x)).build()).build());
    List<?> response = template.requestBody("direct:multiSearch", builder, List.class);

# Document type

For all the search operations, it is possible to indicate the type of
document to retrieve so that the result is already unmarshalled to the
expected type.

The document type can be set using the header "documentClass" or via the
URI parameter of the same name.

# Using Camel Elasticsearch with Spring Boot

When you use `camel-elasticsearch-starter` with Spring Boot v2, you
must declare the following dependency in your own `pom.xml`:

    <dependency>
        <groupId>jakarta.json</groupId>
        <artifactId>jakarta.json-api</artifactId>
        <version>2.0.2</version>
    </dependency>

This is needed because Spring Boot v2 provides jakarta.json-api:1.1.6,
while Elasticsearch requires json-api v2.

## Use the RestClient provided by Spring Boot

By default, Spring Boot auto-configures an Elasticsearch `RestClient`
that will be used by Camel. The client can be customized with the
following basic properties:

    spring.elasticsearch.uris=myelkhost:9200
    spring.elasticsearch.username=elkuser
    spring.elasticsearch.password=secure!!
+ +More information can be found in +[https://docs.spring.io/spring-boot/docs/current/reference/html/application-properties.html#application-properties.data.spring.elasticsearch.connection-timeout](https://docs.spring.io/spring-boot/docs/current/reference/html/application-properties.html#application-properties.data.spring.elasticsearch.connection-timeout) + +## Disable Sniffer when using Spring Boot + +When Spring Boot is on the classpath, the Sniffer client for +Elasticsearch is enabled by default. This option can be disabled in the +Spring Boot Configuration: + + spring: + autoconfigure: + exclude: org.springframework.boot.autoconfigure.elasticsearch.ElasticsearchRestClientAutoConfiguration + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|connectionTimeout|The time in ms to wait before connection will timeout.|30000|integer| +|enableDocumentOnlyMode|Indicates whether the body of the message contains only documents. By default, it is set to false to be able to do the same requests as what the Document API supports (see https://www.elastic.co/guide/en/elasticsearch/reference/current/docs.html for more details). To ease the migration of routes based on the legacy component camel-elasticsearch-rest, you should consider enabling the mode especially if your routes do update operations.|false|boolean| +|hostAddresses|Comma separated list with ip:port formatted remote transport addresses to use. The ip and port options must be left blank for hostAddresses to be considered instead.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|maxRetryTimeout|The time in ms before retry|30000|integer| +|socketTimeout|The timeout in ms to wait before the socket will timeout.|30000|integer| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|client|To use an existing configured Elasticsearch client, instead of creating a client per endpoint. This allow to customize the client with specific settings.||object| +|enableSniffer|Enable automatically discover nodes from a running Elasticsearch cluster. If this option is used in conjunction with Spring Boot then it's managed by the Spring Boot configuration (see: Disable Sniffer in Spring Boot).|false|boolean| +|sniffAfterFailureDelay|The delay of a sniff execution scheduled after a failure (in milliseconds)|60000|integer| +|snifferInterval|The interval between consecutive ordinary sniff executions in milliseconds. 
Will be honoured when sniffOnFailure is disabled or when there are no failures between consecutive sniff executions|300000|integer| +|certificatePath|The path of the self-signed certificate to use to access to Elasticsearch.||string| +|enableSSL|Enable SSL|false|boolean| +|password|Password for authenticate||string| +|user|Basic authenticate user||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|clusterName|Name of the cluster||string| +|connectionTimeout|The time in ms to wait before connection will timeout.|30000|integer| +|disconnect|Disconnect after it finish calling the producer|false|boolean| +|enableDocumentOnlyMode|Indicates whether the body of the message contains only documents. By default, it is set to false to be able to do the same requests as what the Document API supports (see https://www.elastic.co/guide/en/elasticsearch/reference/current/docs.html for more details). To ease the migration of routes based on the legacy component camel-elasticsearch-rest, you should consider enabling the mode especially if your routes do update operations.|false|boolean| +|from|Starting index of the response.||integer| +|hostAddresses|Comma separated list with ip:port formatted remote transport addresses to use.||string| +|indexName|The name of the index to act against||string| +|maxRetryTimeout|The time in ms before retry|30000|integer| +|operation|What operation to perform||object| +|scrollKeepAliveMs|Time in ms during which elasticsearch will keep search context alive|60000|integer| +|size|Size of the response.||integer| +|socketTimeout|The timeout in ms to wait before the socket will timeout.|30000|integer| +|useScroll|Enable scroll usage|false|boolean| +|waitForActiveShards|Index creation waits for the write consistency number of shards to be available|1|integer| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|documentClass|The class to use when deserializing the documents.|ObjectNode|string| +|enableSniffer|Enable automatically discover nodes from a running Elasticsearch cluster. If this option is used in conjunction with Spring Boot then it's managed by the Spring Boot configuration (see: Disable Sniffer in Spring Boot).|false|boolean| +|sniffAfterFailureDelay|The delay of a sniff execution scheduled after a failure (in milliseconds)|60000|integer| +|snifferInterval|The interval between consecutive ordinary sniff executions in milliseconds. Will be honoured when sniffOnFailure is disabled or when there are no failures between consecutive sniff executions|300000|integer| +|certificatePath|The certificate that can be used to access the ES Cluster. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems.||string| +|enableSSL|Enable SSL|false|boolean| diff --git a/camel-etcd3.md b/camel-etcd3.md new file mode 100644 index 0000000000000000000000000000000000000000..e5508b4fbcbd19d0d21dd5d550642c456882b391 --- /dev/null +++ b/camel-etcd3.md @@ -0,0 +1,170 @@ +# Etcd3 + +**Since Camel 3.19** + +**Both producer and consumer are supported** + +The Etcd v3 component allows you to work with Etcd, a distributed +reliable key-value store. 

Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-etcd3</artifactId>
        <version>x.x.x</version>
    </dependency>

# URI Format

    etcd3:path[?options]

# Producer Operations (Since 3.20)

Apache Camel supports different etcd operations.

To define the operation, set the exchange header with a key of
`CamelEtcdAction` and a value set to one of the following:

|operation|input message body|output message body|description|
|---|---|---|---|
|set|`String` value of the key-value pair to put|`PutResponse` result of the put operation|Puts a new key-value pair into etcd, where the option `path` or the exchange header `CamelEtcdPath` is the key. You can set the key charset by setting the exchange header with the key `CamelEtcdKeyCharset`. You can set the value charset by setting the exchange header with the key `CamelEtcdValueCharset`.|
|get|None|`GetResponse` result of the get operation|Retrieves the key-value pair(s) that match the key corresponding to the option `path` or the exchange header `CamelEtcdPath`. You can set the key charset by setting the exchange header with the key `CamelEtcdKeyCharset`. You indicate that the key is a prefix by setting the exchange header with the key `CamelEtcdIsPrefix` to true.|
|delete|None|`DeleteResponse` result of the delete operation|Deletes the key-value pair(s) that match the key corresponding to the option `path` or the exchange header `CamelEtcdPath`. You can set the key charset by setting the exchange header with the key `CamelEtcdKeyCharset`. You indicate that the key is a prefix by setting the exchange header with the key `CamelEtcdIsPrefix` to true.|

# Consumer (Since 3.20)

The consumer of the etcd component allows watching changes on the
matching key-value pair(s). One exchange is created per event, with the
header `CamelEtcdPath` set to the path of the corresponding key-value
pair and a body of type `WatchEvent`.

You can set the key charset by setting the exchange header with the key
`CamelEtcdKeyCharset`. You indicate that the key is a prefix by setting
the exchange header with the key `CamelEtcdIsPrefix` to true.

By default, the consumer receives only the latest changes, but it is
also possible to start watching events from a specific revision by
setting the option `fromIndex` to the expected starting index.

# AggregationRepository

The Etcd v3 component provides an `AggregationRepository` to use etcd
as the backend datastore.

# RoutePolicy (Since 3.20)

The Etcd v3 component provides a `RoutePolicy` to use etcd as a
clustered lock.
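The producer operations and the consumer described above can be combined in routes; a minimal, illustrative sketch (the key `myKey` and the route names are assumptions, and an etcd server on the default endpoint `http://localhost:2379` is presumed):

```java
import org.apache.camel.builder.RouteBuilder;

// Sketch: etcd3 producer and consumer routes.
public class Etcd3Routes extends RouteBuilder {
    @Override
    public void configure() {
        // Producer: put the message body as the value of key "myKey".
        from("direct:put")
            .setHeader("CamelEtcdAction", constant("set"))
            .to("etcd3:myKey");

        // Producer: read the pair back; the body becomes a GetResponse.
        from("direct:get")
            .setHeader("CamelEtcdAction", constant("get"))
            .to("etcd3:myKey");

        // Consumer: watch every key under the "myKey" prefix and log
        // one WatchEvent per change.
        from("etcd3:myKey?prefix=true")
            .log("change on ${header.CamelEtcdPath}");
    }
}
```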
+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|configuration|Component configuration.||object| +|endpoints|Configure etcd server endpoints using the IPNameResolver. Multiple endpoints can be separated by comma.|http://localhost:2379|string| +|keyCharset|Configure the charset to use for the keys.|UTF-8|string| +|namespace|Configure the namespace of keys used. / will be treated as no namespace.||string| +|prefix|To apply an action on all the key-value pairs whose key that starts with the target path.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|fromIndex|The index to watch from|0|integer| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|valueCharset|Configure the charset to use for the values.|UTF-8|string| +|authHeaders|Configure the headers to be added to auth request headers.||object| +|authority|Configure the authority used to authenticate connections to servers.||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|connectionTimeout|Configure the connection timeout.||object| +|headers|Configure the headers to be added to http request headers.||object| +|keepAliveTime|Configure the interval for gRPC keepalives. 
The current minimum allowed by gRPC is 10 seconds.|30 seconds|object| +|keepAliveTimeout|Configure the timeout for gRPC keepalives.|10 seconds|object| +|loadBalancerPolicy|Configure etcd load balancer policy.||string| +|maxInboundMessageSize|Configure the maximum message size allowed for a single gRPC frame.||integer| +|retryDelay|Configure the delay between retries in milliseconds.|500|integer| +|retryMaxDelay|Configure the max backing off delay between retries in milliseconds.|2500|integer| +|retryMaxDuration|Configure the retries max duration.||object| +|servicePath|The path to look for service discovery.|/services/|string| +|password|Configure etcd auth password.||string| +|sslContext|Configure SSL/TLS context to use instead of the system default.||object| +|userName|Configure etcd auth user.||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|path|The path the endpoint refers to||string| +|endpoints|Configure etcd server endpoints using the IPNameResolver. Multiple endpoints can be separated by comma.|http://localhost:2379|string| +|keyCharset|Configure the charset to use for the keys.|UTF-8|string| +|namespace|Configure the namespace of keys used. / will be treated as no namespace.||string| +|prefix|To apply an action on all the key-value pairs whose key that starts with the target path.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. 
In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|fromIndex|The index to watch from|0|integer| +|valueCharset|Configure the charset to use for the values.|UTF-8|string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|authHeaders|Configure the headers to be added to auth request headers.||object| +|authority|Configure the authority used to authenticate connections to servers.||string| +|connectionTimeout|Configure the connection timeout.||object| +|headers|Configure the headers to be added to http request headers.||object| +|keepAliveTime|Configure the interval for gRPC keepalives. 
The current minimum allowed by gRPC is 10 seconds.|30 seconds|object|
|keepAliveTimeout|Configure the timeout for gRPC keepalives.|10 seconds|object|
|loadBalancerPolicy|Configure etcd load balancer policy.||string|
|maxInboundMessageSize|Configure the maximum message size allowed for a single gRPC frame.||integer|
|retryDelay|Configure the delay between retries in milliseconds.|500|integer|
|retryMaxDelay|Configure the max backing off delay between retries in milliseconds.|2500|integer|
|retryMaxDuration|Configure the retries max duration.||object|
|servicePath|The path to look for service discovery.|/services/|string|
|password|Configure etcd auth password.||string|
|sslContext|Configure SSL/TLS context to use instead of the system default.||object|
|userName|Configure etcd auth user.||string|
diff --git a/camel-exec.md b/camel-exec.md
new file mode 100644
index 0000000000000000000000000000000000000000..9970856ab78532f434c8b3ad6d187d1f65fbd243
--- /dev/null
+++ b/camel-exec.md
@@ -0,0 +1,176 @@
# Exec

**Since Camel 2.3**

**Only producer is supported**

The Exec component can be used to execute system commands.

# Dependencies

Maven users need to add the following dependency to their `pom.xml`:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-exec</artifactId>
        <version>${camel-version}</version>
    </dependency>

Where `${camel-version}` must be replaced by the actual version of
Camel.

# URI format

    exec://executable[?options]

Where `executable` is the name, or file path, of the system command that
will be executed. If an executable name is used (e.g. `exec:java`), the
executable must be in the system path.

# Message body

If the component receives an `in` message body that is convertible to
`java.io.InputStream`, it is used to feed input to the executable via
its standard input (`stdin`). After execution, [the message
body](http://camel.apache.org/exchange.html) is the result of the
execution.
That is, an `org.apache.camel.components.exec.ExecResult`
instance containing the `stdout`, `stderr`, *exit value*, and the *out
file*.

This component supports the following `ExecResult` [type
converters](http://camel.apache.org/type-converter.html) for
convenience:

|From|To|
|---|---|
|ExecResult|java.io.InputStream|
|ExecResult|String|
|ExecResult|byte[]|
|ExecResult|org.w3c.dom.Document|
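Since a body convertible to `java.io.InputStream` is piped to the command's standard input, a route can feed data into a command as well as read its output. A minimal sketch (assuming a Unix `cat` binary on the system path; the route name is illustrative):

```java
import org.apache.camel.builder.RouteBuilder;

// Sketch: feed the message body to a command's stdin.
public class ExecStdinRoute extends RouteBuilder {
    @Override
    public void configure() {
        // The String body is converted to an InputStream and written to
        // cat's stdin; cat echoes it to stdout, which lands in the
        // ExecResult and can be converted back to a String.
        from("direct:stdin")
            .to("exec:cat")
            .log("stdout: ${bodyAs(String)}");
    }
}
```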
+ +If an *out file* is specified (in the endpoint via `outFile` or the +message headers via `ExecBinding.EXEC_COMMAND_OUT_FILE`), the converters +will return the content of the *out file*. If no *out file* is used, +then this component will convert the `stdout` of the process to the +target type. For more details, please refer to the [usage +examples](#exec-component.adoc) below. + +# Usage examples + +## Executing word count (Linux) + +The example below executes `wc` (word count, Linux) to count the words +in file `/usr/share/dict/words`. The word count (*output*) is written to +the standard output stream of `wc`: + + from("direct:exec") + .to("exec:wc?args=--words /usr/share/dict/words") + .process(new Processor() { + public void process(Exchange exchange) throws Exception { + // By default, the body is ExecResult instance + assertIsInstanceOf(ExecResult.class, exchange.getIn().getBody()); + // Use the Camel Exec String type converter to convert the ExecResult to String + // In this case, the stdout is considered as output + String wordCountOutput = exchange.getIn().getBody(String.class); + // do something with the word count + } + }); + +## Executing `java` + +The example below executes `java` with two arguments: `-server` and +`-version`, if `java` is in the system path. + + from("direct:exec") + .to("exec:java?args=-server -version") + +The example below executes `java` in `c:\temp` with three arguments: +`-server`, `-version` and the system property `user.name`. + + from("direct:exec") + .to("exec:c:/program files/jdk/bin/java?args=-server -version -Duser.name=Camel&workingDir=c:/temp") + +## Executing Ant scripts + +The following example executes [Apache Ant](http://ant.apache.org/) +(Windows only) with the build file `CamelExecBuildFile.xml`, provided +that `ant.bat` is in the system path, and that `CamelExecBuildFile.xml` +is in the current directory. 
+ + from("direct:exec") + .to("exec:ant.bat?args=-f CamelExecBuildFile.xml") + +In the next example, the `ant.bat` command redirects its output to +`CamelExecOutFile.txt` with `-l`. The file `CamelExecOutFile.txt` is +used as the *out file* with `outFile=CamelExecOutFile.txt`. The example +assumes that `ant.bat` is in the system path, and that +`CamelExecBuildFile.xml` is in the current directory. + + from("direct:exec") + .to("exec:ant.bat?args=-f CamelExecBuildFile.xml -l CamelExecOutFile.txt&outFile=CamelExecOutFile.txt") + .process(new Processor() { + public void process(Exchange exchange) throws Exception { + InputStream outFile = exchange.getIn().getBody(InputStream.class); + assertIsInstanceOf(InputStream.class, outFile); + // do something with the out file here + } + }); + +## Executing `echo` (Windows) + +Commands such as `echo` and `dir` can be executed only with the command +interpreter of the operating system. This example shows how to execute +such a command - `echo` - in Windows. + + from("direct:exec").to("exec:cmd?args=/C echo echoString") + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|

## Endpoint Configurations

|Name|Description|Default|Type|
|---|---|---|---|
|executable|Sets the executable to be executed. The executable must not be empty or null.||string|
|args|The arguments may be one or many whitespace-separated tokens.||string|
|binding|A reference to an org.apache.camel.component.exec.ExecBinding in the Registry.||object|
|commandExecutor|A reference to an org.apache.camel.component.exec.ExecCommandExecutor in the Registry that customizes the command execution. The default command executor utilizes the commons-exec library, which adds a shutdown hook for every executed command.||object|
|commandLogLevel|Logging level to be used for commands during execution. The default value is DEBUG. Possible values are TRACE, DEBUG, INFO, WARN, ERROR or OFF (values of the ExecCommandLogLevelType enum).|DEBUG|object|
|exitValues|The exit values of successful executions. If the process exits with another value, an exception is raised. Comma-separated list of exit values. An empty list (the default) sets no expected exit values and disables the check.||string|
|outFile|The name of a file, created by the executable, that should be considered as its output. If no outFile is set, the standard output (stdout) of the executable will be used instead.||string|
|timeout|The timeout, in milliseconds, after which the executable should be terminated. If execution has not completed within the timeout, the component will send a termination request.||duration|
|useStderrOnEmptyStdout|A boolean indicating that when stdout is empty, this component will populate the Camel Message Body with stderr.
This behavior is disabled (false) by default.|false|boolean| +|workingDir|The directory in which the command should be executed. If null, the working directory of the current process will be used.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-fhir.md b/camel-fhir.md new file mode 100644 index 0000000000000000000000000000000000000000..9b14c5b3b82708c69d6801b5a7e1168d153a5948 --- /dev/null +++ b/camel-fhir.md @@ -0,0 +1,138 @@ +# Fhir + +**Since Camel 2.23** + +**Both producer and consumer are supported** + +The FHIR component integrates with the [HAPI-FHIR](http://hapifhir.io/) +library, which is an open-source implementation of the +[FHIR](http://hl7.org/implement/standards/fhir/) (Fast Healthcare +Interoperability Resources) specification in Java. 
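As a quick preview, a producer route that creates a resource on a FHIR server can look like the following sketch (the server URL and the `direct:input` entry point are illustrative):

```java
// "create" is the endpoint prefix and "resource" the method name;
// inBody maps the message body to the resourceAsString parameter.
from("direct:input")
    .to("fhir://create/resource?inBody=resourceAsString"
            + "&serverUrl=http://localhost:8080/fhir"
            + "&fhirVersion=R4");
```

The endpoint prefixes and options used here are described in the sections below.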
Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-fhir</artifactId>
        <version>${camel-version}</version>
    </dependency>

# URI Format

The FHIR Component uses the following URI format:

    fhir://endpoint-prefix/endpoint?[options]

Endpoint prefix can be one of:

- capabilities

- create

- delete

- history

- load-page

- meta

- operation

- patch

- read

- search

- transaction

- update

- validate

## Component Configurations

|Name|Description|Default|Type|
|---|---|---|---|
|encoding|Encoding to use for all requests||string|
|fhirVersion|The FHIR Version to use|R4|string|
|log|Will log every request and response|false|boolean|
|prettyPrint|Pretty print all requests|false|boolean|
|serverUrl|The FHIR server base URL||string|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started.
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|client|To use the custom client||object| +|clientFactory|To use the custom client factory||object| +|compress|Compresses outgoing (POST/PUT) contents to the GZIP format|false|boolean| +|configuration|To use the shared configuration||object| +|connectionTimeout|How long to try and establish the initial TCP connection (in ms)|10000|integer| +|deferModelScanning|When this option is set, model classes will not be scanned for children until the child list for the given type is actually accessed.|false|boolean| +|fhirContext|FhirContext is an expensive object to create. 
To avoid creating multiple instances, it can be set directly.||object|
|forceConformanceCheck|Force conformance check|false|boolean|
|sessionCookie|HTTP session cookie to add to every request||string|
|socketTimeout|How long to block for individual read/write operations (in ms)|10000|integer|
|summary|Request that the server modify the response using the `_summary` param||string|
|validationMode|When should Camel validate the FHIR Server's conformance statement|ONCE|string|
|proxyHost|The proxy host||string|
|proxyPassword|The proxy password||string|
|proxyPort|The proxy port||integer|
|proxyUser|The proxy username||string|
|accessToken|OAuth access token||string|
|password|Password to use for basic authentication||string|
|username|Username to use for basic authentication||string|

## Endpoint Configurations

|Name|Description|Default|Type|
|---|---|---|---|
|apiName|What kind of operation to perform||object|
|methodName|What sub operation to use for the selected operation||string|
|encoding|Encoding to use for all requests||string|
|fhirVersion|The FHIR Version to use|R4|string|
|inBody|Sets the name of a parameter to be passed in the exchange In Body||string|
|log|Will log every request and response|false|boolean|
|prettyPrint|Pretty print all requests|false|boolean|
|serverUrl|The FHIR server base URL||string|
|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown.
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|client|To use the custom client||object| +|clientFactory|To use the custom client factory||object| +|compress|Compresses outgoing (POST/PUT) contents to the GZIP format|false|boolean| +|connectionTimeout|How long to try and establish the initial TCP connection (in ms)|10000|integer| +|deferModelScanning|When this option is set, model classes will not be scanned for children until the child list for the given type is actually accessed.|false|boolean| +|fhirContext|FhirContext is an expensive object to create. To avoid creating multiple instances, it can be set directly.||object| +|forceConformanceCheck|Force conformance check|false|boolean| +|sessionCookie|HTTP session cookie to add to every request||string| +|socketTimeout|How long to block for individual read/write operations (in ms)|10000|integer| +|summary|Request that the server modify the response using the \_summary param||string| +|validationMode|When should Camel validate the FHIR Server's conformance statement|ONCE|string| +|proxyHost|The proxy host||string| +|proxyPassword|The proxy password||string| +|proxyPort|The proxy port||integer| +|proxyUser|The proxy username||string| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. 
When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. 
See ScheduledExecutorService in JDK for details.|true|boolean| +|accessToken|OAuth access token||string| +|password|Password to use for basic authentication||string| +|username|Username to use for basic authentication||string| diff --git a/camel-file-watch.md b/camel-file-watch.md new file mode 100644 index 0000000000000000000000000000000000000000..b891586f9bf17ac4450b946f2742424838364ed5 --- /dev/null +++ b/camel-file-watch.md @@ -0,0 +1,61 @@ +# File-watch + +**Since Camel 3.0** + +**Only consumer is supported** + +This component can be used to watch file modification events in the +folder. It is based on the project +[directory-watcher](https://github.com/gmethvin/directory-watcher). + +# URI Options + +# Examples: + +## Recursive watch all events (file creation, file deletion, file modification): + + from("file-watch://some-directory") + .log("File event: ${header.CamelFileEventType} occurred on file ${header.CamelFileName} at ${header.CamelFileLastModified}"); + +## Recursive watch for creation and deletion of txt files: + + from("file-watch://some-directory?events=DELETE,CREATE&antInclude=**/*.txt") + .log("File event: ${header.CamelFileEventType} occurred on file ${header.CamelFileName} at ${header.CamelFileLastModified}"); + +## Create a snapshot of file when modified: + + from("file-watch://some-directory?events=MODIFY&recursive=false") + .setHeader(Exchange.FILE_NAME, simple("${header.CamelFileName}.${header.CamelFileLastModified}")) + .to("file:some-directory/snapshots"); + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|useFileHashing|Enables or disables file hashing to detect duplicate events. If you disable this, you can get some events multiple times on some platforms and JDKs. Check java.nio.file.WatchService limitations for your target platform.|true|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|concurrentConsumers|The number of concurrent consumers. Increase this value, if your route is slow to prevent buffering in queue.|1|integer| +|fileHasher|Reference to io.methvin.watcher.hashing.FileHasher. This prevents emitting duplicate events on some platforms. For working with large files and if you dont need detect multiple modifications per second per file, use #lastModifiedTimeFileHasher. You can also provide custom implementation in registry.|#murmur3FFileHasher|object| +|pollThreads|The number of threads polling WatchService. Increase this value, if you see OVERFLOW messages in log.|1|integer| +|queueSize|Maximum size of queue between WatchService and consumer. 
Unbounded by default.|2147483647|integer| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|path|Path of directory to consume events from.||string| +|antInclude|ANT style pattern to match files. The file is matched against path relative to endpoint path. Pattern must be also relative (not starting with slash)|\*\*|string| +|autoCreate|Auto create directory if does not exist.|true|boolean| +|events|Comma separated list of events to watch. Possible values: CREATE,MODIFY,DELETE|CREATE,MODIFY,DELETE|string| +|recursive|Watch recursive in current and child directories (including newly created directories).|true|boolean| +|useFileHashing|Enables or disables file hashing to detect duplicate events. If you disable this, you can get some events multiple times on some platforms and JDKs. Check java.nio.file.WatchService limitations for your target platform.|true|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored.||object|
|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
|concurrentConsumers|The number of concurrent consumers. Increase this value if your route is slow, to prevent buffering in the queue.|1|integer|
|fileHasher|Reference to io.methvin.watcher.hashing.FileHasher. This prevents emitting duplicate events on some platforms. For working with large files, and if you don't need to detect multiple modifications per second per file, use #lastModifiedTimeFileHasher. You can also provide a custom implementation in the registry.|#murmur3FFileHasher|object|
|pollThreads|The number of threads polling WatchService. Increase this value if you see OVERFLOW messages in the log.|1|integer|
|queueSize|Maximum size of the queue between WatchService and consumer. Unbounded by default.|2147483647|integer|

# File

**Since Camel 1.0**

**Both producer and consumer are supported**

The File component provides access to file systems, allowing files to be
processed by any other Camel Components or messages from other
components to be saved to disk.

# URI format

    file:directoryName[?options]

Where `directoryName` represents the underlying file directory.

**Only directories**

Camel supports only endpoints configured with a starting directory. So
the `directoryName` **must be** a directory. If you want to consume a
single file only, you can use the `fileName` option, e.g., by setting
`fileName=thefilename`. Also, the starting directory must not contain
dynamic expressions with `${ }` placeholders. Again, use the `fileName`
option to specify the dynamic part of the filename.
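For example, to consume a single file whose name carries a dynamic date part, keep the path static and move the expression into `fileName` (the directory and naming pattern below are illustrative):

```java
// The path part must be a static directory; the fileName option may use
// Simple/File Language expressions for the dynamic part of the name.
from("file:/var/reports?fileName=report-${date:now:yyyyMMdd}.txt")
    .to("seda:processReport");
```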
+ +**Avoid reading files currently being written by another application** + +Beware the JDK File IO API is a bit limited in detecting whether another +application is currently writing/copying a file. And the implementation +can be different depending on the OS platform as well. This could lead +to that Camel thinks the file is not locked by another process and start +consuming it. Therefore, you have to do your own investigation what +suites your environment. To help with this Camel provides different +`readLock` options and `doneFileName` option that you can use. See also +[Consuming files from folders where others drop files +directly](#File2-Consumingfilesfromfolderswhereothersdropfilesdirectly). + +**Default behavior for file producer** + +By default, it will override any existing file if one exists with the +same name. + +# Move, Pre Move and Delete operations + +By default, Camel will move consumed files to the `.camel` subfolder +relative to the directory where the file was consumed. + +If you want to delete the file after processing, the route should be: + + from("file://inbox?delete=true").to("bean:handleOrder"); + +There is a sample [showing reading from a directory and the default move +operation](#File2-ReadingFromADirectoryAndTheDefaultMoveOperation) +below. + +## Move, Delete and the Routing process + +Any move or delete operations are executed after the routing has +completed. So, during processing of the `Exchange` the file is still +located in the inbox folder. + +Let’s illustrate this with an example: + + from("file://inbox?move=.done").to("bean:handleOrder"); + +When a file is dropped in the `inbox` folder, the file consumer notices +this and creates a new `FileExchange` that is routed to the +`handleOrder` bean. The bean then processes the `File` object. At this +point in time the file is still located in the `inbox` folder. 
After the +bean completes, and thus the route is completed, the file consumer will +perform the move operation and move the file to the `.done` sub-folder. + +The `move` and the `preMove` options are considered as a directory name. +Though, if you use an expression such as [File +Language](#languages:file-language.adoc), or +[Simple](#languages:simple-language.adoc), then the result of the +expression evaluation is the file name to be used. E.g., if you set: + + move=../backup/copy-of-${file:name} + +Then that’s using the [File Language](#languages:file-language.adoc) +which we use to return the file name to be used. This can be either +relative or absolute. If relative, the directory is created as a +subfolder from within the folder where the file was consumed. + +## Move and Pre Move operations + +We have introduced a `preMove` operation to move files **before** they +are processed. This allows you to mark which files have been scanned as +they are moved to this subfolder before being processed. + + from("file://inbox?preMove=inprogress").to("bean:handleOrder"); + +You can combine the `preMove` and the regular `move`: + + from("file://inbox?preMove=inprogress&move=.done").to("bean:handleOrder"); + +So in this situation, the file is in the `inprogress` folder when being +processed, and after it’s processed, it’s moved to the `.done` folder. + +## Fine-grained control over Move and PreMove option + +The `move` and `preMove` options are Expression-based, so we have the +full power of the [File Language](#languages:file-language.adoc) to do +advanced configuration of the directory and name pattern. Camel will, in +fact, internally convert the directory name you enter into a [File +Language](#languages:file-language.adoc) expression. So, when we use +`move=.done` Camel will convert this into: +`${file:parent}/.done/${file:onlyname}`. + +This is only done if Camel detects that you have not provided a `${ }` +in the option value yourself. 
So when you enter a `${ }` Camel will
**not** convert it, and thus you have the full power.

So if we want to move the file into a backup folder with today's date
as the pattern, we can do:

    move=backup/${date:now:yyyyMMdd}/${file:name}

## About moveFailed

The `moveFailed` option allows you to move files that **could not** be
processed successfully to another location such as an error folder of
your choice. For example, to move the files into an error folder with a
timestamp, you can use an expression such as:

    moveFailed=error/${date:now:yyyyMMddHHmmssSSS}/${file:name}

See more examples at [File Language](#languages:file-language.adoc)

# Exchange Properties, file consumer only

As the file consumer implements the `BatchConsumer` it supports batching
the files it polls. By batching, we mean that Camel will add the
following additional properties to the Exchange:

|Property|Description|
|---|---|
|CamelBatchSize|The total number of files that were polled in this batch.|
|CamelBatchIndex|The current index of the batch. Starts from 0.|
|CamelBatchComplete|A boolean value indicating the last Exchange in the batch. Only true for the last entry.|
This allows you, for instance, to know how many files exist in this
batch and, for instance, let the Aggregator aggregate this number of
files.

# Using charset

The `charset` option allows configuring the encoding of the files on
both the consumer and producer endpoints. For example, if you read utf-8
files and want to convert the files to iso-8859-1, you can do:

    from("file:inbox?charset=utf-8")
        .to("file:outbox?charset=iso-8859-1")

You can also use `convertBodyTo` in the route. In the example below,
we still have input files in utf-8 format, but we want to convert the
file content to a byte array in iso-8859-1 format, then let a bean
process the data, before writing the content to the outbox folder using
the current charset.

    from("file:inbox?charset=utf-8")
        .convertBodyTo(byte[].class, "iso-8859-1")
        .to("bean:myBean")
        .to("file:outbox");

If you omit the charset on the consumer endpoint, then Camel does not
know the charset of the file, and would by default use `UTF-8`. However,
you can configure a JVM system property to override and use a different
default encoding with the key `org.apache.camel.default.charset`.

In the example below, this could be a problem if the files are not in
UTF-8 encoding, which would be the default encoding for reading the files.
In this example when writing the files, the content has already been
converted to a byte array, and thus would write the content directly as
is (without any further encodings).

    from("file:inbox")
        .convertBodyTo(byte[].class, "iso-8859-1")
        .to("bean:myBean")
        .to("file:outbox");

You can also override and control the encoding dynamically when writing
files, by setting a property on the exchange with the key
`Exchange.CHARSET_NAME`. For example, in the route below, we set the
property with a value from a message header.
    from("file:inbox")
        .convertBodyTo(byte[].class, "iso-8859-1")
        .to("bean:myBean")
        .setProperty(Exchange.CHARSET_NAME, header("someCharsetHeader"))
        .to("file:outbox");

We suggest keeping things simpler, so if you pick up files with the same
encoding, and want to write the files in a specific encoding, then favor
using the `charset` option on the endpoints.

Notice that if you have explicitly configured a `charset` option on the
endpoint, then that configuration is used, regardless of the
`Exchange.CHARSET_NAME` property.

If you have some issues, then you can enable DEBUG logging on
`org.apache.camel.component.file`, and Camel logs when it reads/writes a
file using a specific charset. For example, the route below will log the
following:

    from("file:inbox?charset=utf-8")
        .to("file:outbox?charset=iso-8859-1")

And the logs:

    DEBUG GenericFileConverter - Read file /Users/davsclaus/workspace/camel/camel-core/target/charset/input/input.txt with charset utf-8
    DEBUG FileOperations - Using Reader to write file: target/charset/output.txt with charset: iso-8859-1

# Common gotchas with folder and filenames

When Camel is producing files (writing files), there are a few gotchas
affecting how to set a filename of your choice. By default, Camel will
use the message ID as the filename, and since the message ID is normally
a unique generated ID, you will end up with filenames such as:
`ID-MACHINENAME-2443-1211718892437-1-0`. If such a filename is not
desired, then you must provide a filename in the `CamelFileName` message
header. The constant, `Exchange.FILE_NAME`, can also be used.

The sample code below produces files using the message ID as the
filename:

    from("direct:report").to("file:target/reports");

To use `report.txt` as the filename you have to do:

    from("direct:report").setHeader(Exchange.FILE_NAME, constant("report.txt")).to("file:target/reports");
The same as above, but with the `CamelFileName` header name:

    from("direct:report").setHeader("CamelFileName", constant("report.txt")).to("file:target/reports");

You can also set the filename on the endpoint with the `fileName` URI
option:

    from("direct:report").to("file:target/reports/?fileName=report.txt");

# Filename Expression

Filename can be set either using the **expression** option or as a
string-based [File Language](#languages:file-language.adoc) expression
in the `CamelFileName` header. See the [File
Language](#languages:file-language.adoc) for syntax and samples.

# Consuming files from folders where others drop files directly

Beware if you consume files from a folder where other applications write
files too. Take a look at the different `readLock` options to see what
suits your use cases. The best approach is, however, to write to another
folder and, after writing, move the file into the drop folder. However, if
you write files directly to the drop folder, then the option `changed`
could better detect whether a file is currently being written/copied, as
it uses a file changed algorithm to see whether the file size /
modification changes over a period of time. The other `readLock` options
rely on the Java File API that, sadly, is not always very good at detecting
this. You may also want to look at the `doneFileName` option, which uses
a marker file (*done file*) to signal when a file is done and ready to
be consumed.

# Done files

## Using done files

See also section [*writing done files*](#File2-WritingDoneFiles) below.

If you only want to consume files when a *done file* exists, then you
can use the `doneFileName` option on the endpoint.

    from("file:bar?doneFileName=done");

It will only consume files from the *bar* folder if a *done file* exists
in the same directory as the target files. Camel will automatically
delete the *done file* when it's done consuming the files.
Camel does not automatically delete the *done file* if `noop=true` is
configured.

However, it is more common to have one *done file* per target file. This
means there is a 1:1 correlation. To do this, you must use dynamic
placeholders in the `doneFileName` option. Currently, Camel supports the
following two dynamic tokens: `file:name` and `file:name.noext`, which
must be enclosed in `${ }`. The consumer only supports the static part
of the *done file* name as either prefix or suffix (not both).

Suffix:

    from("file:bar?doneFileName=${file:name}.done");

In this example, the files will only be polled if there exists a *done
file* with the name `<file name>.done`. For example:

- `hello.txt`: is the file to be consumed

- `hello.txt.done`: is the associated *done file*

Prefix:

    from("file:bar?doneFileName=ready-${file:name}");

You can also use a prefix for the *done file*, such as:

- `hello.txt`: is the file to be consumed

- `ready-hello.txt`: is the associated *done file*

## Writing done files

After you have written a file, you may want to write an additional *done
file* as a kind of marker, to indicate to others that the file is
finished and has been written. To do that, you can use the
`doneFileName` option on the file producer endpoint.

    .to("file:bar?doneFileName=done");

This will create a file named `done` in the same directory as the target
file.

However, it is more common to have one *done file* per target file. This
means there is a 1:1 correlation. To do this, you must use dynamic
placeholders in the `doneFileName` option. Currently, Camel supports the
following two dynamic tokens: `file:name` and `file:name.noext`. They
must be enclosed in `${ }`:

Prefix and file name:

    .to("file:bar?doneFileName=done-${file:name}");

This will, for example, create a file named `done-foo.txt` if the target
file was `foo.txt`, in the same directory as the target file.
Suffix and file name:

    .to("file:bar?doneFileName=${file:name}.done");

This will, for example, create a file named `foo.txt.done` in the same
directory as the target file, if the target file was `foo.txt`.

File name without the extension:

    .to("file:bar?doneFileName=${file:name.noext}.done");

This will, for example, create a file named `foo.done` in the same
directory as the target file, if the target file was `foo.txt`.

# Using flatten

If you want to store all files directly in the `outputdir` directory,
disregarding the source directory layout (i.e., to flatten out the
path), you add the `flatten=true` option on the file producer side:

    from("file://inputdir/?recursive=true&delete=true").to("file://outputdir?flatten=true")

It will result in the following output layout:

    outputdir/foo.txt
    outputdir/bar.txt

# Writing to files

Camel is also able to write files, i.e., produce files. In the sample
below, we receive some reports on the SEDA queue that we process before
they are written to a directory.

## Write to subdirectory using `Exchange.FILE_NAME`

Using a single route, it is possible to write a file to any number of
subdirectories. If you have a route set up as such:

You can have `myBean` set the header `Exchange.FILE_NAME` to values such
as:

    Exchange.FILE_NAME = hello.txt => /rootDirectory/hello.txt
    Exchange.FILE_NAME = foo/bye.txt => /rootDirectory/foo/bye.txt

This allows you to have a single route to write files to multiple
destinations.

## Writing file through the temporary directory relative to the final destination

Sometimes you need to temporarily write the files to some directory
relative to the destination directory. Such a situation usually happens
when some external process with limited filtering capabilities is
reading from the directory you are writing to.
In the example below,
files will be written to the `/var/myapp/filesInProgress` directory and,
after the data transfer is done, they will be atomically moved to the
`/var/myapp/finalDirectory` directory.

    from("direct:start").
        to("file:///var/myapp/finalDirectory?tempPrefix=/../filesInProgress/");

# Avoiding reading the same file more than once (idempotent consumer)

Camel supports the Idempotent Consumer directly within the component, so it
will skip already processed files. This feature can be enabled by
setting the `idempotent=true` option.

    from("file://inbox?idempotent=true").to("...");

Camel uses the absolute file name as the idempotent key, to detect
duplicate files. You can customize this key by using an expression in
the `idempotentKey` option. For example, to use both the name and the file
size as the key:

By default, Camel uses an in-memory store for keeping track of consumed
files. It uses a least-recently-used cache holding up to 1000 entries.
You can plug in your own implementation of this store by using the
`idempotentRepository` option, using the `#` sign in the value to
indicate that it refers to a bean in the Registry with the specified
`id`.

Camel will log at `DEBUG` level if it skips a file because it has been
consumed before:

    DEBUG FileConsumer is idempotent and the file has been consumed before. Will skip this file: target\idempotent\report.txt

# Idempotent Repository

## Using a file-based idempotent repository

In this section, we will use the file-based idempotent repository
`org.apache.camel.processor.idempotent.FileIdempotentRepository` instead
of the in-memory one that is used by default. This repository uses a
first-level cache to avoid reading the file repository. It will only use
the file repository to store the content of the first-level cache.
Thereby, the repository can survive server restarts.
It will load the
content of the file into the first-level cache upon startup. The file
structure is basic, as it stores each key on a separate line in the file.
By default, the file store has a size limit of 1 MB. When the file grows
larger, Camel will truncate the file store, rebuilding the content by
flushing the first-level cache into a fresh empty file.

We configure our repository using Spring XML, creating our file
idempotent repository, and define our file consumer to use our repository
with the `idempotentRepository` option, using the `#` sign to indicate a
Registry lookup:

## Using a JPA based idempotent repository

In this section, we will use the JPA based idempotent repository instead
of the in-memory one that is used by default.

First, we need a persistence-unit in `META-INF/persistence.xml` where we
use the class
`org.apache.camel.processor.idempotent.jpa.MessageProcessed` as the
model.

    <class>org.apache.camel.processor.idempotent.jpa.MessageProcessed</class>

Next, we can create our JPA idempotent repository in the Spring XML file
as well:

Then we just need to refer to the **jpaStore** bean in the file
consumer endpoint using the `idempotentRepository` option with the `#`
syntax:

# Filtering Strategies

Camel supports pluggable filtering strategies. They are described below.

## Filter using the `GenericFileFilter`

The `filter` option allows you to implement a custom filter in Java code
by implementing the `org.apache.camel.component.file.GenericFileFilter`
interface.

### Implementing a GenericFileFilter

The interface has an `accept` method that returns a boolean. The meanings
of the return values are:

- `true` to include the file

- `false` to skip the file

There is also an `isDirectory` method on `GenericFile` that tells whether
the file is a directory. This allows you to filter out unwanted directories,
to avoid traversing down into them.
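The `accept` contract can be sketched in plain Java. This is a simplified stand-in, not Camel's real `GenericFileFilter` interface: directories are accepted so traversal continues, while files are matched against the filter rule; here, skipping names that start with `skip`.

```java
import java.util.function.BiPredicate;

// Simplified stand-in for the accept() contract of a file filter
// (not Camel's real GenericFileFilter interface): the first argument
// is the file name, the second tells whether it is a directory.
public class SkipFilter {

    // Accept directories unconditionally so traversal can continue
    // into them; skip files whose name starts with "skip".
    static final BiPredicate<String, Boolean> ACCEPT =
            (name, isDirectory) -> isDirectory || !name.startsWith("skip");

    public static void main(String[] args) {
        System.out.println(ACCEPT.test("report.txt", false));  // true: included
        System.out.println(ACCEPT.test("skip-me.txt", false)); // false: skipped
        System.out.println(ACCEPT.test("skipdir", true));      // true: directory
    }
}
```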
### Using the `GenericFileFilter`

You can then configure the endpoint with such a filter to skip certain
files from being processed.

In the sample, we have built our own filter that skips files starting
with `skip` in the filename:

And then we can configure our route using the `filter` attribute to
reference our filter (using `#` notation) that we have defined in the
Spring XML file:

## Filtering using ANT path matcher

The ANT path matcher is based on
[AntPathMatcher](http://static.springframework.org/spring/docs/2.5.x/api/org/springframework/util/AntPathMatcher.html).

The file paths are matched with the following rules:

- `?` matches one character

- `*` matches zero or more characters

- `**` matches zero or more directories in a path

The `antInclude` and `antExclude` options make it easy to specify ANT
style include/exclude patterns without having to define the filter. See the URI
options above for more information.

The sample below demonstrates how to use it:

    from("file://inbox?antInclude=**/*.txt").to("...");

# Sorting Strategies

Camel supports pluggable sorting strategies. They are described below.

## Sorting using Comparator

This strategy is to use the built-in `java.util.Comparator` from Java. You
can then configure the endpoint with such a comparator and have Camel
sort the files before they are processed.

In the sample, we have built our own comparator that just sorts by file
name:

And then we can configure our route using the **sorter** option to
reference our sorter (`mySorter`) that we have defined in the Spring XML
file:

**URI options can reference beans using the # syntax**

In the Spring DSL route above, notice that we can refer to beans in the
Registry by prefixing the id with `#`. So writing `sorter=#mySorter`
will instruct Camel to look in the Registry for a bean with the ID
`mySorter`.

## Sorting using sortBy

Camel supports pluggable sorting strategies.
This strategy uses the
[File Language](#languages:file-language.adoc) to configure the sorting.
The `sortBy` option is configured as follows:

    sortBy=group 1;group 2;group 3;...

Each group is separated by a semicolon. In simple situations
you just use one group, so a simple example could be:

    sortBy=file:name

This will sort by file name. You can reverse the order by prefixing
`reverse:` to the group, so the sorting is now Z to A:

    sortBy=reverse:file:name

As we have the full power of the [File
Language](#languages:file-language.adoc), we can use some of the other
parameters, so if we want to sort by file size, we do:

    sortBy=file:length

You can configure it to ignore the case, using `ignoreCase:` for string
comparison, so if you want file name sorting that ignores the
case, you do:

    sortBy=ignoreCase:file:name

You can combine ignore case and reverse. However, `reverse` must be
specified first:

    sortBy=reverse:ignoreCase:file:name

In the sample below, we want to sort by the last modified file, so we do:

    sortBy=file:modified

And then we want to group by name as a second option, so files with the
same modification time are sorted by name:

    sortBy=file:modified;file:name

Now there is an issue here, can you spot it? Well, the modified
timestamp of the file is too fine-grained, as it is in milliseconds, but
what if we want to sort by date only and then subgroup by name? Well, as
we have the true power of the [File
Language](#languages:file-language.adoc), we can use its date command
that supports patterns.
So this can be solved as:

    sortBy=date:file:yyyyMMdd;file:name

That is pretty powerful. By the way, you can also use `reverse`
per group, so we could reverse the file names:

    sortBy=date:file:yyyyMMdd;reverse:file:name

# Using GenericFileProcessStrategy

The option `processStrategy` can be used to plug in a custom
`GenericFileProcessStrategy` that allows you to implement your own
*begin*, *commit* and *rollback* logic. For instance, let’s assume a
system writes a file into a folder you should consume from. But you should not
start consuming the file before another *ready* file has been written as
well.

So by implementing our own `GenericFileProcessStrategy`, we can implement
this as:

- In the `begin()` method, we can test whether the special *ready* file
  exists. The `begin` method returns a `boolean` to indicate whether we can
  consume the file or not.

- In the `abort()` method, special logic can be executed in case the
  `begin` operation returned `false`, for example, to clean up
  resources etc.

- In the `commit()` method, we can move the actual file and also delete
  the *ready* file.

# Using bridgeErrorHandler

If you want to use the Camel Error Handler to deal with any exception
occurring in the file consumer, then you can enable the
`bridgeErrorHandler` option as shown below:

    // to handle any IOException being thrown
    onException(IOException.class)
        .handled(true)
        .log("IOException occurred due: ${exception.message}")
        .transform().simple("Error ${exception.message}")
        .to("mock:error");

    // this is the file route that picks up files.
    // Notice how we bridge the consumer to use the Camel routing error handler.
    // The exclusiveReadLockStrategy is only configured because this is from a unit test, so we use that to simulate exceptions
    from("file:target/nospace?bridgeErrorHandler=true")
        .convertBodyTo(String.class)
        .to("mock:result");

So all you have to do is to enable this option, and the error handler in
the route will take it from there.

When using `bridgeErrorHandler`, interceptors and `onCompletion` do
**not** apply. The Exchange is processed directly by the Camel Error
Handler, which does not allow prior actions such as interceptors or
`onCompletion` to take place.

# Debug logging

This component has log level **TRACE** that can be helpful if you have
problems.

# Samples

## Reading from a directory and the default move operation

By default, Camel will move any processed file into a `.camel`
subdirectory in the directory the file was consumed from.

    from("file://inputdir/?recursive=true").to("file://outputdir")

This affects the layout as follows:

**before**

    inputdir/foo.txt
    inputdir/sub/bar.txt

**after**

    inputdir/.camel/foo.txt
    inputdir/sub/.camel/bar.txt
    outputdir/foo.txt
    outputdir/sub/bar.txt

## Read from a directory and write to another directory

    from("file://inputdir/?delete=true").to("file://outputdir")

## Read from a directory and write to another directory using a dynamic name

    from("file://inputdir/?delete=true").to("file://outputdir?fileName=copy-of-${file:name}")

Listen to a directory and create a message for each file dropped there.
Copy the contents to the `outputdir` and delete the file in the
`inputdir`.

## Reading recursively from a directory and writing to another

    from("file://inputdir/?recursive=true&delete=true").to("file://outputdir")

Listen to a directory and create a message for each file dropped there.
Copy the contents to the `outputdir` and delete the file in the
`inputdir`.
Will scan recursively into subdirectories. Will lay out the +files in the same directory structure in the `outputdir` as the +`inputdir`, including any subdirectories. + + inputdir/foo.txt + inputdir/sub/bar.txt + +It will result in the following output layout: + + outputdir/foo.txt + outputdir/sub/bar.txt + +## Read from a directory and process the message in java + + from("file://inputdir/").process(new Processor() { + public void process(Exchange exchange) throws Exception { + Object body = exchange.getIn().getBody(); + // do some business logic with the input body + } + }); + +The body will be a `File` object that points to the file that was just +dropped into the `inputdir` directory. + +## Using expression for filenames + +In this sample, we want to move consumed files to a backup folder using +today’s date as a subfolder name: + + from("file://inbox?move=backup/${date:now:yyyyMMdd}/${file:name}").to("..."); + +See [File Language](#languages:file-language.adoc) for more samples. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|directoryName|The starting directory||string| +|charset|This option is used to specify the encoding of the file. You can use this on the consumer, to specify the encodings of the files, which allow Camel to know the charset it should load the file content in case the file content is being accessed. Likewise when writing a file, you can use this option to specify which charset to write the file as well. 
Do mind that when writing the file Camel may have to read the message content into memory to be able to convert the data into the configured charset, so do not use this if you have big messages.||string| +|doneFileName|Producer: If provided, then Camel will write a 2nd done file when the original file has been written. The done file will be empty. This option configures what file name to use. Either you can specify a fixed name. Or you can use dynamic placeholders. The done file will always be written in the same folder as the original file. Consumer: If provided, Camel will only consume files if a done file exists. This option configures what file name to use. Either you can specify a fixed name. Or you can use dynamic placeholders.The done file is always expected in the same folder as the original file. Only ${file.name} and ${file.name.next} is supported as dynamic placeholders.||string| +|fileName|Use Expression such as File Language to dynamically set the filename. For consumers, it's used as a filename filter. For producers, it's used to evaluate the filename to write. If an expression is set, it take precedence over the CamelFileName header. (Note: The header itself can also be an Expression). The expression options support both String and Expression types. If the expression is a String type, it is always evaluated using the File Language. If the expression is an Expression type, the specified Expression type is used - this allows you, for instance, to use OGNL expressions. For the consumer, you can use it to filter filenames, so you can for instance consume today's file using the File Language syntax: mydata-${date:now:yyyyMMdd}.txt. 
The producers support the CamelOverruleFileName header which takes precedence over any existing CamelFileName header; the CamelOverruleFileName is a header that is used only once, and makes it easier as this avoids to temporary store CamelFileName and have to restore it afterwards.||string| +|delete|If true, the file will be deleted after it is processed successfully.|false|boolean| +|moveFailed|Sets the move failure expression based on Simple language. For example, to move files into a .error subdirectory use: .error. Note: When moving the files to the fail location Camel will handle the error and will not pick up the file again.||string| +|noop|If true, the file is not moved or deleted in any way. This option is good for readonly data, or for ETL type requirements. If noop=true, Camel will set idempotent=true as well, to avoid consuming the same files over and over again.|false|boolean| +|preMove|Expression (such as File Language) used to dynamically set the filename when moving it before processing. For example to move in-progress files into the order directory set this value to order.||string| +|preSort|When pre-sort is enabled then the consumer will sort the file and directory names during polling, that was retrieved from the file system. You may want to do this in case you need to operate on the files in a sorted order. The pre-sort is executed before the consumer starts to filter, and accept files to process by Camel. 
This option is default=false meaning disabled.|false|boolean| +|recursive|If a directory, will look for files in all the sub-directories as well.|false|boolean| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|directoryMustExist|Similar to the startingDirectoryMustExist option but this applies during polling (after starting the consumer).|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|extendedAttributes|To define which file attributes of interest. Like posix:permissions,posix:owner,basic:lastAccessTime, it supports basic wildcard like posix:, basic:lastAccessTime||string| +|includeHiddenDirs|Whether to accept hidden directories. Directories which names starts with dot is regarded as a hidden directory, and by default not included. 
Set this option to true to include hidden directories in the file consumer.|false|boolean| +|includeHiddenFiles|Whether to accept hidden files. Files which names starts with dot is regarded as a hidden file, and by default not included. Set this option to true to include hidden files in the file consumer.|false|boolean| +|inProgressRepository|A pluggable in-progress repository org.apache.camel.spi.IdempotentRepository. The in-progress repository is used to account the current in progress files being consumed. By default a memory based repository is used.||object| +|localWorkDirectory|When consuming, a local work directory can be used to store the remote file content directly in local files, to avoid loading the content into memory. This is beneficial, if you consume a very big remote file and thus can conserve memory.||string| +|onCompletionExceptionHandler|To use a custom org.apache.camel.spi.ExceptionHandler to handle any thrown exceptions that happens during the file on completion process where the consumer does either a commit or rollback. The default implementation will log any exception at WARN level and ignore.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|probeContentType|Whether to enable probing of the content type. If enable then the consumer uses Files#probeContentType(java.nio.file.Path) to determine the content-type of the file, and store that as a header with key Exchange#FILE\_CONTENT\_TYPE on the Message.|false|boolean| +|processStrategy|A pluggable org.apache.camel.component.file.GenericFileProcessStrategy allowing you to implement your own readLock option or similar. Can also be used when special conditions must be met before a file can be consumed, such as a special ready file exists. 
If this option is set then the readLock option does not apply.||object|
|startingDirectoryMustExist|Whether the starting directory must exist. Mind that the autoCreate option is enabled by default, which means the starting directory is normally auto-created if it doesn't exist. You can disable autoCreate and enable this to ensure the starting directory must exist. Will throw an exception if the directory doesn't exist.|false|boolean|
|startingDirectoryMustHaveAccess|Whether the starting directory has access permissions. Mind that the startingDirectoryMustExist parameter must be set to true in order to verify that the directory exists. Will throw an exception if the directory doesn't have read and write permissions.|false|boolean|
|appendChars|Used to append characters (text) after writing files. This can for example be used to add new lines or other separators when writing and appending new files or existing files. To specify new-line (slash-n or slash-r) or tab (slash-t) characters then escape with an extra slash, eg slash-slash-n.||string|
|checksumFileAlgorithm|If provided, then Camel will write a checksum file when the original file has been written. The checksum file will contain the checksum created with the provided algorithm for the original file. The checksum file will always be written in the same folder as the original file.||string|
|fileExist|What to do if a file already exists with the same name. Override, which is the default, replaces the existing file. - Append - adds content to the existing file. - Fail - throws a GenericFileOperationException, indicating that there is already an existing file. - Ignore - silently ignores the problem and does not override the existing file, but assumes everything is okay. - Move - requires the moveExisting option to be configured as well.
The option eagerDeleteTargetFile can be used to control what to do when moving the file and an existing file already exists, which would otherwise cause the move operation to fail. The Move option will move any existing files, before writing the target file. - TryRename is only applicable if the tempFileName option is in use. This allows renaming the file from the temporary name to the actual name, without doing any exists check. This check may be faster on some file systems and especially FTP servers.|Override|object|
|flatten|Flatten is used to flatten the file name path to strip any leading paths, so it's just the file name. This allows you to consume recursively into sub-directories, but when you eg write the files to another directory they will be written in a single directory. Setting this to true on the producer enforces that any file name in the CamelFileName header will be stripped of any leading paths.|false|boolean|
|jailStartingDirectory|Used for jailing (restricting) writing files to the starting directory (and sub) only. This is enabled by default to not allow Camel to write files to outside directories (to be more secured out of the box). You can turn this off to allow writing files to directories outside the starting directory, such as parent or root folders.|true|boolean|
|moveExisting|Expression (such as File Language) used to compute file name to use when fileExist=Move is configured. To move files into a backup subdirectory just enter backup. This option only supports the following File Language tokens: file:name, file:name.ext, file:name.noext, file:onlyname, file:onlyname.noext, file:ext, and file:parent. Notice the file:parent is not supported by the FTP component, as the FTP component can only move any existing files to a relative directory based on current dir as base.||string|
|tempFileName|The same as the tempPrefix option but offering more fine-grained control on the naming of the temporary filename, as it uses the File Language.
The location for tempFilename is relative to the final file location in the option 'fileName', not the target directory in the base uri. For example if option fileName includes a directory prefix: dir/finalFilename then tempFileName is relative to that subdirectory dir.||string| +|tempPrefix|This option is used to write the file using a temporary name and then, after the write is complete, rename it to the real name. Can be used to identify files being written and also avoid consumers (not using exclusive read locks) reading in progress files. Is often used by FTP when uploading big files.||string| +|allowNullBody|Used to specify if a null body is allowed during file writing. If set to true then an empty file will be created, when set to false, and attempting to send a null body to the file component, a GenericFileWriteException of 'Cannot write null body to file.' will be thrown. If the fileExist option is set to 'Override', then the file will be truncated, and if set to append the file will remain unchanged.|false|boolean| +|chmod|Specify the file permissions which is sent by the producer, the chmod value must be between 000 and 777; If there is a leading digit like in 0755 we will ignore it.||string| +|chmodDirectory|Specify the directory permissions used when the producer creates missing directories, the chmod value must be between 000 and 777; If there is a leading digit like in 0755 we will ignore it.||string| +|eagerDeleteTargetFile|Whether or not to eagerly delete any existing target file. This option only applies when you use fileExists=Override and the tempFileName option as well. You can use this to disable (set it to false) deleting the target file before the temp file is written. For example you may write big files and want the target file to exists during the temp file is being written. This ensure the target file is only deleted until the very last moment, just before the temp file is being renamed to the target filename. 
This option is also used to control whether to delete any existing files when fileExist=Move is enabled, and an existing file exists. If the copyAndDeleteOnRenameFail option is false, then an exception will be thrown if an existing file exists; if it is true, then the existing file is deleted before the move operation.|true|boolean|
|forceWrites|Whether to force syncing writes to the file system. You can turn this off if you do not want this level of guarantee, for example if writing to logs / audit logs etc; this would yield better performance.|true|boolean|
|keepLastModified|Will keep the last modified timestamp from the source file (if any). Will use the FileConstants.FILE\_LAST\_MODIFIED header to locate the timestamp. This header can contain either a java.util.Date or long with the timestamp. If the timestamp exists and the option is enabled it will set this timestamp on the written file. Note: This option only applies to the file producer. You cannot use this option with any of the ftp producers.|false|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|moveExistingFileStrategy|Strategy (Custom Strategy) used to move file with special naming token to use when fileExist=Move is configured. By default, there is an implementation used if no custom strategy is provided.||object|
|autoCreate|Automatically create missing directories in the file's pathname.
For the file consumer, that means creating the starting directory. For the file producer, it means the directory the files should be written to.|true|boolean|
|bufferSize|Buffer size in bytes used for writing files (or in case of FTP for downloading and uploading files).|131072|integer|
|copyAndDeleteOnRenameFail|Whether to fallback and do a copy and delete file, in case the file could not be renamed directly. This option is not available for the FTP component.|true|boolean|
|renameUsingCopy|Perform rename operations using a copy and delete strategy. This is primarily used in environments where the regular rename operation is unreliable (e.g. across different file systems or networks). This option takes precedence over the copyAndDeleteOnRenameFail parameter that will automatically fall back to the copy and delete strategy, but only after additional delays.|false|boolean|
|synchronous|Sets whether synchronous processing should be strictly used.|false|boolean|
|antExclude|Ant style filter exclusion. If both antInclude and antExclude are used, antExclude takes precedence over antInclude. Multiple exclusions may be specified in comma-delimited format.||string|
|antFilterCaseSensitive|Sets the case sensitive flag on the ant filter.|true|boolean|
|antInclude|Ant style filter inclusion. Multiple inclusions may be specified in comma-delimited format.||string|
|eagerMaxMessagesPerPoll|Allows for controlling whether the limit from maxMessagesPerPoll is eager or not. If eager then the limit is applied during the scanning of files, whereas false would scan all files and then perform sorting. Setting this option to false allows for sorting all files first and then limiting the poll. Mind that this requires a higher memory usage as all file details are in memory to perform the sorting.|true|boolean|
|exclude|Is used to exclude files, if the filename matches the regex pattern (matching is case-insensitive).
Notice that if you use symbols such as the plus sign and others, you need to configure this using the RAW() syntax when configuring this as an endpoint URI. See more details at configuring endpoint URIs.||string|
|excludeExt|Used to exclude files matching a file extension name (case insensitive). For example, to exclude bak files, use excludeExt=bak. Multiple extensions can be separated by comma, for example to exclude bak and dat files, use excludeExt=bak,dat. Note that the file extension includes all parts, for example a file named mydata.tar.gz has the extension tar.gz. For more flexibility, use the include/exclude options.||string|
|filter|Pluggable filter as a org.apache.camel.component.file.GenericFileFilter class. Will skip files if the filter returns false in its accept() method.||object|
|filterDirectory|Filters the directory based on Simple language. For example, to filter on the current date, you can use a simple date pattern such as ${date:now:yyyyMMdd}||string|
|filterFile|Filters the file based on Simple language. For example, to filter on file size, you can use ${file:size} > 5000||string|
|idempotent|Option to use the Idempotent Consumer EIP pattern to let Camel skip already processed files. Will by default use a memory based LRUCache that holds 1000 entries. If noop=true then idempotent will be enabled as well to avoid consuming the same files over and over again.|false|boolean|
|idempotentEager|Option to use the Idempotent Consumer EIP pattern to let Camel skip already processed files. Will by default use a memory based LRUCache that holds 1000 entries. If noop=true then idempotent will be enabled as well to avoid consuming the same files over and over again.|false|boolean|
|idempotentKey|To use a custom idempotent key. By default the absolute path of the file is used.
You can use the File Language, for example to use the file name and file size, you can do: idempotentKey=${file:name}-${file:size}||string|
|idempotentRepository|A pluggable repository org.apache.camel.spi.IdempotentRepository which by default uses MemoryIdempotentRepository if none is specified and idempotent is true.||object|
|include|Used to include files if the filename matches the regex pattern (matching is case-insensitive). Notice that if you use symbols such as the plus sign and others, you need to configure this using the RAW() syntax when configuring this as an endpoint URI. See more details at configuring endpoint URIs.||string|
|includeExt|Used to include files matching a file extension name (case insensitive). For example, to include txt files, use includeExt=txt. Multiple extensions can be separated by comma, for example to include txt and xml files, use includeExt=txt,xml. Note that the file extension includes all parts, for example a file named mydata.tar.gz has the extension tar.gz. For more flexibility, use the include/exclude options.||string|
|maxDepth|The maximum depth to traverse when recursively processing a directory.|2147483647|integer|
|maxMessagesPerPoll|Defines the maximum number of messages to gather per poll. By default no maximum is set. Can be used to set a limit of e.g. 1000 to avoid picking up thousands of files when starting up the server. Set a value of 0 or negative to disable it. Notice: If this option is in use then the File and FTP components will limit before any sorting. For example if you have 100000 files and use maxMessagesPerPoll=500, then only the first 500 files will be picked up, and then sorted. You can use the eagerMaxMessagesPerPoll option and set this to false to scan all files first and then sort afterwards.||integer|
|minDepth|The minimum depth to start processing when recursively processing a directory. Using minDepth=1 means the base directory.
Using minDepth=2 means the first sub directory.||integer|
|move|Expression (such as Simple Language) used to dynamically set the filename when moving it after processing. To move files into a .done subdirectory just enter .done.||string|
|exclusiveReadLockStrategy|Pluggable read-lock as a org.apache.camel.component.file.GenericFileExclusiveReadLockStrategy implementation.||object|
|readLock|Used by the consumer to only poll files for which it holds an exclusive read-lock (i.e. the file is not in progress or being written). Camel will wait until the file lock is granted. This option provides the built-in strategies: - none - No read lock is in use. - markerFile - Camel creates a marker file (fileName.camelLock) and then holds a lock on it. This option is not available for the FTP component. - changed - Uses the file length/modification timestamp to detect whether the file is currently being copied. Will use at least 1 sec to determine this, so this option cannot consume files as fast as the others, but can be more reliable as the JDK IO API cannot always determine whether a file is currently being used by another process. The option readLockCheckInterval can be used to set the check frequency. - fileLock - Uses java.nio.channels.FileLock. This option is not available for Windows OS and the FTP component. This approach should be avoided when accessing a remote file system via a mount/share unless that file system supports distributed file locks. - rename - Tries to rename the file as a test of whether an exclusive read-lock can be obtained. - idempotent - (only for file component) Uses an idempotentRepository as the read-lock. This allows read locks that support clustering if the idempotent repository implementation supports that. - idempotent-changed - (only for file component) Uses an idempotentRepository and changed as the combined read-lock. This allows read locks that support clustering if the idempotent repository implementation supports that. - idempotent-rename - (only for file component) Uses an idempotentRepository and rename as the combined read-lock. This allows read locks that support clustering if the idempotent repository implementation supports that. Notice: Not all of the read locks are suited to working in clustered mode, where concurrent consumers on different nodes compete for the same files on a shared file system. The markerFile strategy uses a close-to-atomic operation to create the empty marker file, but it is not guaranteed to work in a cluster. The fileLock strategy may work better, but then the file system needs to support distributed file locks, and so on. Using the idempotent read lock can support clustering if the idempotent repository supports clustering, such as the Hazelcast Component or Infinispan.|none|string|
|readLockCheckInterval|Interval in millis for the read-lock, if supported by the read lock. This interval is used for sleeping between attempts to acquire the read lock. For example when using the changed read lock, you can set a higher interval period to cater for slow writes. The default of 1 sec may be too fast if the producer is very slow at writing the file. Notice: For FTP the default readLockCheckInterval is 5000. The readLockTimeout value must be higher than readLockCheckInterval, but a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that ample time is allowed for the read lock process to try to grab the lock before the timeout is hit.|1000|integer|
|readLockDeleteOrphanLockFiles|Whether read lock with marker files should, upon startup, delete any orphan read lock files which may have been left on the file system if Camel was not properly shut down (such as after a JVM crash).
If this option is set to false, any orphaned lock file will cause Camel to not attempt to pick up that file; this could also be because another node is concurrently reading files from the same shared directory.|true|boolean|
|readLockIdempotentReleaseAsync|Whether the delayed release task should be synchronous or asynchronous. See more details at the readLockIdempotentReleaseDelay option.|false|boolean|
|readLockIdempotentReleaseAsyncPoolSize|The number of threads in the scheduled thread pool when using asynchronous release tasks. The default of 1 core thread should be sufficient in almost all use-cases; only set this to a higher value if either updating the idempotent repository is slow, or there are a lot of files to process. This option is not in use if you use a shared thread pool by configuring the readLockIdempotentReleaseExecutorService option. See more details at the readLockIdempotentReleaseDelay option.||integer|
|readLockIdempotentReleaseDelay|Whether to delay the release task for a period of millis. This can be used to delay the release tasks to expand the window when a file is regarded as read-locked, in an active/active cluster scenario with a shared idempotent repository, to ensure other nodes cannot potentially scan and acquire the same file due to race conditions. Expanding the time window of the release tasks helps prevent these situations. Note that delaying is only needed if you have configured readLockRemoveOnCommit to true.||integer|
|readLockIdempotentReleaseExecutorService|To use a custom and shared thread pool for asynchronous release tasks. See more details at the readLockIdempotentReleaseDelay option.||object|
|readLockLoggingLevel|Logging level used when a read lock could not be acquired. By default a DEBUG is logged. You can change this level, for example to OFF to not have any logging.
This option is only applicable for readLock of types: changed, fileLock, idempotent, idempotent-changed, idempotent-rename, rename.|DEBUG|object|
|readLockMarkerFile|Whether to use a marker file with the changed, rename, or exclusive read lock types. By default a marker file is used as well to guard against other processes picking up the same files. This behavior can be turned off by setting this option to false, for example if you do not want the Camel application to write marker files to the file system.|true|boolean|
|readLockMinAge|This option is applied only for readLock=changed. It allows specifying a minimum age the file must have before attempting to acquire the read lock. For example, use readLockMinAge=300s to require the file to be at least 5 minutes old. This can speed up the changed read lock as it will only attempt to acquire files which are at least that given age.|0|integer|
|readLockMinLength|This option is applied only for readLock=changed. It allows you to configure a minimum file length. By default Camel expects the file to contain data, and thus the default value is 1. You can set this option to zero, to allow consuming zero-length files.|1|integer|
|readLockRemoveOnCommit|This option is applied only for readLock=idempotent. It allows specifying whether to remove the file name entry from the idempotent repository when processing the file has succeeded and a commit happens. By default the file is not removed, which ensures that no race condition can occur where another active node attempts to grab the file. Instead, the idempotent repository may support eviction strategies that you can configure to evict the file name entry after X minutes - this ensures no problems with race conditions. See more details at the readLockIdempotentReleaseDelay option.|false|boolean|
|readLockRemoveOnRollback|This option is applied only for readLock=idempotent.
It allows specifying whether to remove the file name entry from the idempotent repository when processing the file has failed and a rollback happens. If this option is false, then the file name entry is confirmed (as if the file did a commit).|true|boolean|
|readLockTimeout|Optional timeout in millis for the read-lock, if supported by the read-lock. If the read-lock cannot be granted and the timeout triggers, then Camel will skip the file. At the next poll, Camel will try the file again, and this time the read-lock might be granted. Use a value of 0 or lower to indicate forever. Currently fileLock, changed and rename support the timeout. Notice: For FTP the default readLockTimeout value is 20000 instead of 10000. The readLockTimeout value must be higher than readLockCheckInterval, but a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that ample time is allowed for the read lock process to try to grab the lock before the timeout is hit.|10000|integer|
|backoffErrorThreshold|The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier kicks in.||integer|
|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier kicks in.||integer|
|backoffMultiplier|To let the scheduled polling consumer back off if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt happens again.
When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| +|shuffle|To shuffle the list of files (sort in random order)|false|boolean| +|sortBy|Built-in sort by using the File Language. 
Supports nested sorts, so you can have a sort by file name and as a 2nd group sort by modified date.||string| +|sorter|Pluggable sorter as a java.util.Comparator class.||object| diff --git a/camel-flatpack.md b/camel-flatpack.md new file mode 100644 index 0000000000000000000000000000000000000000..48bc6f723922a9e7eb1d0acb541878dfb649db93 --- /dev/null +++ b/camel-flatpack.md @@ -0,0 +1,278 @@ +# Flatpack + +**Since Camel 1.4** + +**Both producer and consumer are supported** + +The Flatpack component supports fixed width and delimited file parsing +via the [FlatPack library](http://flatpack.sourceforge.net). +**Notice:** This component only supports consuming from flatpack files +to Object model. You can not (yet) write from Object model to flatpack +format. + +# URI format + + flatpack:[delim|fixed]:flatPackConfig.pzmap.xml[?options] + +Or for a delimited file handler with no configuration file just use + + flatpack:someName[?options] + +# Examples + +- `flatpack:fixed:foo.pzmap.xml` creates a fixed-width endpoint using + the `foo.pzmap.xml` file configuration. + +- `flatpack:delim:bar.pzmap.xml` creates a delimited endpoint using + the `bar.pzmap.xml` file configuration. + +- `flatpack:foo` creates a delimited endpoint called `foo` with no + file configuration. + +# Message Body + +The component delivers the data in the IN message as a +`org.apache.camel.component.flatpack.DataSetList` object that has +converters for `java.util.Map` or `java.util.List`. +Usually you want the `Map` if you process one row at a time +(`splitRows=true`). Use `List` for the entire content +(`splitRows=false`), where each element in the list is a `Map`. +Each `Map` contains the key for the column name and its corresponding +value. + +For example to get the firstname from the sample below: + + Map row = exchange.getIn().getBody(Map.class); + String firstName = row.get("FIRSTNAME"); + +However, you can also always get it as a `List` (even for +`splitRows=true`). 
The same example: + + List data = exchange.getIn().getBody(List.class); + Map row = (Map)data.get(0); + String firstName = row.get("FIRSTNAME"); + +# Header and Trailer records + +The header and trailer notions in Flatpack are supported. However, you +**must** use fixed record IDs: + +- `header` for the header record (must be lowercase) + +- `trailer` for the trailer record (must be lowercase) + +The example below illustrates this fact that we have a header and a +trailer. You can omit one or both of them if not needed. + + + + + + + + + + + + + + + + + + +# Using the endpoint + +A common use case is sending a file to this endpoint for further +processing in a separate route. For example: + + + + + + + + + + ... + + + +You can also convert the payload of each message created to a `Map` for +easy Bean Integration + +# Flatpack DataFormat + +The [Flatpack](#flatpack-component.adoc) component ships with the +Flatpack data format that can be used to format between fixed width or +delimited text messages to a `List` of rows as `Map`. + +- marshal = from `List>` to `OutputStream` (can be + converted to `String`) + +- unmarshal = from `java.io.InputStream` (such as a `File` or + `String`) to a `java.util.List` as an + `org.apache.camel.component.flatpack.DataSetList` instance. + The result of the operation will contain all the data. If you need + to process each row one by one you can split the exchange, using + Splitter. + +**Notice:** The Flatpack library does currently not support header and +trailers for the marshal operation. + +# Options + +The data format has the following options: + + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|Option|Default|Description|
|---|---|---|
|definition|null|The flatpack pzmap configuration file. Can be omitted in simpler situations, but it is preferred to use the pzmap.|
|fixed|false|Delimited or fixed.|
|ignoreFirstRecord|true|Whether the first line is ignored for delimited files (for the column headers).|
|textQualifier|"|If the text is qualified with a char such as ".|
|delimiter|,|The delimiter char (could be ; , or similar).|
|parserFactory|null|Uses the default Flatpack parser factory.|
|allowShortLines|false|Allows for lines to be shorter than expected and ignores the extra characters.|
|ignoreExtraColumns|false|Allows for lines to be longer than expected and ignores the extra characters.|
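When using the XML DSL, these options map onto attributes of the `<flatpack>` data format element. A hedged sketch (the attribute names are assumed to mirror the option names above, and the route endpoints are illustrative):

```xml
<route>
  <from uri="file:order/in"/>
  <unmarshal>
    <!-- a delimited Flatpack data format; 'definition' points at the pzmap file,
         the remaining attributes mirror the options in the table above -->
    <flatpack definition="INVENTORY-Delimited.pzmap.xml"
              fixed="false"
              ignoreFirstRecord="true"
              allowShortLines="false"
              ignoreExtraColumns="false"/>
  </unmarshal>
  <to uri="seda:queue:neworder"/>
</route>
```

The setters on `FlatpackDataFormat` configure the same options programmatically in Java.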

# Usage

To use the data format, simply instantiate an instance and invoke the
marshal or unmarshal operation in the route builder:

    FlatpackDataFormat df = new FlatpackDataFormat();
    df.setDefinition(new ClassPathResource("INVENTORY-Delimited.pzmap.xml"));
    ...
    from("file:order/in").unmarshal(df).to("seda:queue:neworder");

The sample above will read files from the `order/in` folder and
unmarshal the input using the Flatpack configuration file
`INVENTORY-Delimited.pzmap.xml` that configures the structure of the
files. The result is a `DataSetList` object we store on the SEDA queue.

    FlatpackDataFormat df = new FlatpackDataFormat();
    df.setDefinition(new ClassPathResource("PEOPLE-FixedLength.pzmap.xml"));
    df.setFixed(true);
    df.setIgnoreFirstRecord(false);

    from("seda:people").marshal(df).convertBodyTo(String.class).to("jms:queue:people");

In the code above we marshal the data from an Object representation as a
`List` of rows as `Maps`. Each row `Map` contains the column name as
the key and the corresponding value. This structure can be created in
Java code, e.g. from a processor. We marshal the data according to the
Flatpack format, convert the result to a `String` object, and store it
on a JMS queue.

# Dependencies

To use Flatpack in your Camel routes you need to add a dependency on
**camel-flatpack** which implements this data format.

If you use Maven you could just add the following to your pom.xml,
substituting the version number for the latest & greatest release (see
the download page for the latest versions).
+ + + org.apache.camel + camel-flatpack + x.x.x + + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|type|Whether to use fixed or delimiter|delim|object| +|resourceUri|URL for loading the flatpack mapping file from classpath or file system||string| +|allowShortLines|Allows for lines to be shorter than expected and ignores the extra characters|false|boolean| +|delimiter|The default character delimiter for delimited files.|,|string| +|ignoreExtraColumns|Allows for lines to be longer than expected and ignores the extra characters|false|boolean| +|ignoreFirstRecord|Whether the first line is ignored for delimited files (for the column headers).|true|boolean| +|splitRows|Sets the Component to send each row as a separate exchange once parsed|true|boolean| +|textQualifier|The text qualifier for delimited files.||string| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. 
Notice that if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored.||object|
|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and routed in Camel.||object|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|backoffErrorThreshold|The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier kicks in.||integer|
|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier kicks in.||integer|
|backoffMultiplier|To let the scheduled polling consumer back off if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt happens again.
When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| diff --git a/camel-flink.md b/camel-flink.md new file mode 100644 index 0000000000000000000000000000000000000000..054e7e83124c7444a3f55b92d978abb9f37eb518 --- /dev/null +++ b/camel-flink.md @@ -0,0 +1,99 @@ +# Flink + +**Since Camel 2.18** + +**Only producer is supported** + +This documentation page covers the [Apache +Flink](https://flink.apache.org) component for the Apache Camel. 
The
**camel-flink** component provides a bridge between Camel components and
Flink tasks. It provides a way to route a message from
various transports, dynamically choose a Flink task to execute, use an
incoming message as input data for the task, and finally deliver the
results back to the Camel pipeline.

Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-flink</artifactId>
        <version>x.x.x</version>
    </dependency>

# URI Format

Currently, the Flink component supports only producers. You can create
DataSet and DataStream jobs.

    flink:dataset?dataset=#myDataSet&dataSetCallback=#dataSetCallback
    flink:datastream?datastream=#myDataStream&dataStreamCallback=#dataStreamCallback

# Flink DataSet Callback

    @Bean
    public DataSetCallback dataSetCallback() {
        return new DataSetCallback() {
            public Long onDataSet(DataSet dataSet, Object... objects) {
                try {
                    dataSet.print();
                    return 0L;
                } catch (Exception e) {
                    return -1L;
                }
            }
        };
    }

# Flink DataStream Callback

    @Bean
    public VoidDataStreamCallback dataStreamCallback() {
        return new VoidDataStreamCallback() {
            @Override
            public void doOnDataStream(DataStream dataStream, Object...
objects) throws Exception { + dataStream.flatMap(new Splitter()).print(); + + environment.execute("data stream test"); + } + }; + } + +# Camel-Flink Producer call + + CamelContext camelContext = new SpringCamelContext(context); + + String pattern = "foo"; + + try { + ProducerTemplate template = camelContext.createProducerTemplate(); + camelContext.start(); + Long count = template.requestBody("flink:dataSet?dataSet=#myDataSet&dataSetCallback=#countLinesContaining", pattern, Long.class); + } finally { + camelContext.stop(); + } + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|dataSetCallback|Function performing action against a DataSet.||object| +|dataStream|DataStream to compute against.||object| +|dataStreamCallback|Function performing action against a DataStream.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|endpointType|Type of the endpoint (dataset, datastream).||object| +|collect|Indicates if results should be collected or counted.|true|boolean| +|dataSet|DataSet to compute against.||object| +|dataSetCallback|Function performing action against a DataSet.||object| +|dataStream|DataStream to compute against.||object| +|dataStreamCallback|Function performing action against a DataStream.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-fop.md b/camel-fop.md new file mode 100644 index 0000000000000000000000000000000000000000..8a368040f9ee940978fd8752225fc057f7eaecb0 --- /dev/null +++ b/camel-fop.md @@ -0,0 +1,235 @@ +# Fop + +**Since Camel 2.10** + +**Only producer is supported** + +The FOP component allows you to render a message into different output +formats using [Apache +FOP](http://xmlgraphics.apache.org/fop/index.html). 
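For orientation, a typical use is a route that picks up XSL-FO documents, renders the message body with Apache FOP, and writes the result out. A hedged sketch in the XML DSL (the file directories are illustrative, and the output format segment of the `fop:` URI is one of the outputFormat values documented below):

```xml
<route>
  <!-- pick up XSL-FO documents (illustrative input directory) -->
  <from uri="file:input/fo"/>
  <!-- render the message body to PDF via Apache FOP -->
  <to uri="fop:application/pdf"/>
  <!-- store the rendered output (illustrative output directory) -->
  <to uri="file:output/pdf"/>
</route>
```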
+ +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-fop + x.x.x + + + +# URI format + + fop://outputFormat?[options] + +# Output Formats + +The primary output format is PDF, but other output +[formats](http://xmlgraphics.apache.org/fop/0.95/output.html) are also +supported: + + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|name|outputFormat|description|
|---|---|---|
|PDF|application/pdf|Portable Document Format|
|PS|application/postscript|Adobe Postscript|
|PCL|application/x-pcl|Printer Control Language|
|PNG|image/png|PNG images|
|JPEG|image/jpeg|JPEG images|
|SVG|image/svg+xml|Scalable Vector Graphics|
|XML|application/X-fop-areatree|Area tree representation|
|MIF|application/mif|FrameMaker’s MIF|
|RTF|application/rtf|Rich Text Format|
|TXT|text/plain|Text|
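The output format from the table above goes directly in the endpoint URI. For example, a route that renders incoming XSL-FO documents as PostScript instead of PDF might be sketched like this (file paths are illustrative):

```java
// Illustrative route: select PostScript as the render target by using its
// MIME type from the table above as the FOP endpoint's output format.
from("file:source/data/fo")
    .to("fop:application/postscript")
    .to("file:target/print");
```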
+ +The complete list of valid output formats can be found in the +`MimeConstants.java` source file. + +# Configuration file + +The location of a configuration file with the following +[structure](http://xmlgraphics.apache.org/fop/1.0/configuration.html). +The file is loaded from the classpath by default. You can use `file:`, +or `classpath:` as prefix to load the resource from file or classpath. +In previous releases, the file is always loaded from the file system. + +# Message Operations + + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|name|default value|description|
|---|---|---|
|CamelFop.Output.Format||Overrides the output format for that message|
|CamelFop.Encrypt.userPassword||PDF user password|
|CamelFop.Encrypt.ownerPassword||PDF owner password|
|CamelFop.Encrypt.allowPrint|true|Allows printing the PDF|
|CamelFop.Encrypt.allowCopyContent|true|Allows copying content of the PDF|
|CamelFop.Encrypt.allowEditContent|true|Allows editing content of the PDF|
|CamelFop.Encrypt.allowEditAnnotations|true|Allows editing annotations of the PDF|
|CamelFop.Render.producer|Apache FOP|Metadata element for the system/software that produces the document|
|CamelFop.Render.creator||Metadata element for the user that created the document|
|CamelFop.Render.creationDate||Creation date of the document|
|CamelFop.Render.author||Author of the content of the document|
|CamelFop.Render.title||Title of the document|
|CamelFop.Render.subject||Subject of the document|
|CamelFop.Render.keywords||Set of keywords applicable to this document|
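These headers can be set on the message before it reaches the FOP endpoint. The sketch below (paths and values are illustrative) adds document metadata and password-protects the rendered PDF:

```java
// Illustrative route: set FOP message headers before rendering so the
// produced PDF carries custom metadata and requires a user password.
from("file:source/data/fo")
    .setHeader("CamelFop.Render.author", constant("Camel Riders"))
    .setHeader("CamelFop.Encrypt.userPassword", constant("secret"))
    .to("fop:application/pdf")
    .to("file:target/data");
```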
+ +# Example + +Below is an example route that renders PDFs from xml data and xslt +template and saves the PDF files in the target folder: + + from("file:source/data/xml") + .to("xslt:xslt/template.xsl") + .to("fop:application/pdf") + .to("file:target/data"); + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|outputType|The primary output format is PDF but other output formats are also supported.||object| +|fopFactory|Allows to use a custom configured or implementation of org.apache.fop.apps.FopFactory.||object| +|userConfigURL|The location of a configuration file which can be loaded from classpath or file system.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-freemarker.md b/camel-freemarker.md new file mode 100644 index 0000000000000000000000000000000000000000..fa44e7b1ad43288c9dd647844f939ee75fde6f81 --- /dev/null +++ b/camel-freemarker.md @@ -0,0 +1,192 @@ +# Freemarker + +**Since Camel 2.10** + +**Only producer is supported** + +The **freemarker:** component allows for processing a message using a +[FreeMarker](http://freemarker.org/) template. This can be ideal when +using Templating to generate responses for requests. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-freemarker + x.x.x + + +# URI format + + freemarker:templateName[?options] + +Where **templateName** is the classpath-local URI of the template to +invoke; or the complete URL of the remote template (e.g.: +`\file://folder/myfile.ftl`). + +# Headers + +Headers set during the FreeMarker evaluation are returned to the message +and added as headers. This provides a mechanism for the FreeMarker +component to return values to the Message. + +For example, set the header value of `fruit` in the FreeMarker template: + + ${request.setHeader('fruit', 'Apple')} + +The header, `fruit`, is now accessible from the `message.out.headers`. + +# FreeMarker Context + +Camel will provide exchange information in the FreeMarker context (just +a `Map`). The `Exchange` is transferred as: + + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|key|value|
|---|---|
|exchange|The Exchange itself.|
|exchange.properties|The Exchange properties.|
|variables|The variables|
|headers|The headers of the In message.|
|camelContext|The Camel Context.|
|request|The In message.|
|body|The In message body.|
|response|The Out message (only for InOut message exchange pattern).|
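A template can reference these entries directly. A small illustrative snippet (the `item` header is hypothetical):

```
Order of ${headers.item} received.
The message body was: ${body}
Processed by context: ${camelContext.name}
```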
+ +You can set up your custom FreeMarker context in the message header with +the key "**CamelFreemarkerDataModel**" just like this + + Map variableMap = new HashMap(); + variableMap.put("headers", headersMap); + variableMap.put("body", "Monday"); + variableMap.put("exchange", exchange); + exchange.getIn().setHeader("CamelFreemarkerDataModel", variableMap); + +# Hot reloading + +The FreeMarker template resource is by default **not** hot reloadable +for both file and classpath resources (expanded jar). If you set +`contentCache=false`, then Camel will not cache the resource and hot +reloading is thus enabled. This scenario can be used in development. + +# Dynamic templates + +Camel provides two headers by which you can define a different resource +location for a template or the template content itself. If any of these +headers is set, then Camel uses this over the endpoint configured +resource. This allows you to provide a dynamic template at runtime. + +# Samples + +For example, you could use something like: + + from("activemq:My.Queue"). + to("freemarker:com/acme/MyResponse.ftl"); + +To use a FreeMarker template to formulate a response for a message for +InOut message exchanges (where there is a `JMSReplyTo` header). + +If you want to use InOnly and consume the message and send it to another +destination, you could use: + + from("activemq:My.Queue"). + to("freemarker:com/acme/MyResponse.ftl"). + to("activemq:Another.Queue"); + +And to disable the content cache, e.g., for development usage where the +`.ftl` template should be hot reloaded: + + from("activemq:My.Queue"). + to("freemarker:com/acme/MyResponse.ftl?contentCache=false"). + to("activemq:Another.Queue"); + +And a file-based resource: + + from("activemq:My.Queue"). + to("freemarker:file://myfolder/MyResponse.ftl?contentCache=false"). + to("activemq:Another.Queue"); + +It’s possible to specify what template the component should use +dynamically via a header, so for example: + + from("direct:in"). 
+ setHeader(FreemarkerConstants.FREEMARKER_RESOURCE_URI).constant("path/to/my/template.ftl"). + to("freemarker:dummy?allowTemplateFromHeader=true"); + +# The Email Sample + +In this sample, we want to use FreeMarker templating for an order +confirmation email. The email template is laid out in FreeMarker as: + + Dear ${headers.lastName}, ${headers.firstName} + + Thanks for the order of ${headers.item}. + + Regards Camel Riders Bookstore + ${body} + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|allowContextMapAll|Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so impose a potential security risk as this opens access to the full power of CamelContext API.|false|boolean| +|allowTemplateFromHeader|Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|localizedLookup|Enables/disables localized template lookup. Disabled by default.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|configuration|To use an existing freemarker.template.Configuration instance as the configuration.||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|resourceUri|Path to the resource. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod.||string| +|allowContextMapAll|Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so impose a potential security risk as this opens access to the full power of CamelContext API.|false|boolean| +|allowTemplateFromHeader|Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. 
However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care.|false|boolean| +|configuration|Sets the Freemarker configuration to use||object| +|contentCache|Sets whether to use resource content cache or not|false|boolean| +|encoding|Sets the encoding to be used for loading the template file.||string| +|templateUpdateDelay|Number of seconds the loaded template resource will remain in the cache.||integer| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-ftp.md b/camel-ftp.md new file mode 100644 index 0000000000000000000000000000000000000000..96781ddf02f4d8944564efd35e9548c9052b829b --- /dev/null +++ b/camel-ftp.md @@ -0,0 +1,667 @@ +# Ftp + +**Since Camel 1.1** + +**Both producer and consumer are supported** + +This component provides access to remote file systems over the FTP and +SFTP protocols. + +When consuming from remote FTP server, make sure you read the section +[Default when consuming files](#FTP-DefaultWhenConsumingFiles) further +below for details related to consuming files. 
+ +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-ftp + x.x.x + + + +# URI format + + ftp://[username@]hostname[:port]/directoryname[?options] + sftp://[username@]hostname[:port]/directoryname[?options] + ftps://[username@]hostname[:port]/directoryname[?options] + +Where `directoryname` represents the underlying directory. The directory +name is a relative path. Absolute path is **not** supported. The +relative path can contain nested folders, such as /inbox/us. + +Camel translates the absolute path to relative by trimming all leading +slashes from `directoryname`. There’ll be a warning (`WARN` level) +message printed in the logs. + +The `autoCreate` option is supported. When consumer starts, before +polling is scheduled, there’s additional FTP operation performed to +create the directory configured for endpoint. The default value for +`autoCreate` is `true`. + +If no **username** is provided, then `anonymous` login is attempted +using no password. +If no **port** number is provided, Camel will provide default values +according to the protocol (ftp = 21, sftp = 22, ftps = 2222). + +You can append query options to the URI in the following format, +`?option=value&option=value&...` + +This component uses two different libraries for the actual FTP work. FTP +and FTPS use [Apache Commons Net](http://commons.apache.org/net/) while +SFTP uses [JCraft JSCH](http://www.jcraft.com/jsch/). + +FTPS, also known as FTP Secure, is an extension to FTP that adds support +for the Transport Layer Security (TLS) and the Secure Sockets Layer +(SSL) cryptographic protocols. + +# FTPS component default trust store + +When using the `ftpClient.` properties related to SSL with the FTPS +component, the trust store accepts all certificates. If you only want +trust selective certificates, you have to configure the trust store with +the `ftpClient.trustStore.xxx` options or by configuring a custom +`ftpClient`. 
+ +When using `sslContextParameters`, the trust store is managed by the +configuration of the provided SSLContextParameters instance. + +You can configure additional options on the `ftpClient` and +`ftpClientConfig` from the URI directly by using the `ftpClient.` or +`ftpClientConfig.` prefix. + +For example, to set the `setDataTimeout` on the `FTPClient` to 30 +seconds you can do: + + from("ftp://foo@myserver?password=secret&ftpClient.dataTimeout=30000").to("bean:foo"); + +You can mix and match and use both prefixes, for example, to configure +date format or timezones. + + from("ftp://foo@myserver?password=secret&ftpClient.dataTimeout=30000&ftpClientConfig.serverLanguageCode=fr").to("bean:foo"); + +You can have as many of these options as you like. + +See the documentation of the Apache Commons FTP FTPClientConfig for +possible options and more details. And as well for Apache Commons FTP +`FTPClient`. + +If you do not like having many and long configurations in the url, you +can refer to the `ftpClient` or `ftpClientConfig` to use by letting +Camel lookup in the Registry for it. + +For example: + + + + + + +And then let Camel look up this bean when you use the # notation in the +url. + + from("ftp://foo@myserver?password=secret&ftpClientConfig=#myConfig").to("bean:foo"); + +# Concurrency + +The FTP consumer (with the same endpoint) does not support concurrency +(the backing FTP client is not thread safe). You can use multiple FTP +consumers to poll from different endpoints. It is only a single endpoint +that does not support concurrent consumers. + +The FTP producer does **not** have this issue, it supports concurrency. + +# Default when consuming files + +The FTP consumer will by default leave the consumed files untouched on +the remote FTP server. You have to configure it explicitly if you want +it to delete the files or move them to another location. 
For example, +you can use `delete=true` to delete the files, or use `move=.done` to +move the files into a hidden done subdirectory. + +The regular File consumer is different as it will by default move files +to a `.camel` sub directory. The reason Camel does **not** do this by +default for the FTP consumer is that it may lack permissions by default +to be able to move or delete files. + +## limitations + +The option `readLock` can be used to force Camel **not** to consume +files that are currently in the progress of being written. However, this +option is turned off by default, as it requires that the user has write +access. See the options table for more details about read locks. There +are other solutions to avoid consuming files that are currently being +written over FTP; for instance, you can write to a temporary destination +and move the file after it has been written. + +When moving files using `move` or `preMove` option the files are +restricted to the FTP\_ROOT folder. That prevents you from moving files +outside the FTP area. If you want to move files to another area, you can +use soft links and move files into a soft linked folder. + +# Exchange Properties + +Camel sets the following exchange properties + + ++++ + + + + + + + + + + + + + + + + + + + + +
|Property|Description|
|---|---|
|CamelBatchIndex|The current index out of total number of files being consumed in this batch.|
|CamelBatchSize|The total number of files being consumed in this batch.|
|CamelBatchComplete|`true` if there are no more files in this batch.|
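For example, the batch properties can be logged from a route with the Simple language's `exchangeProperty` function (a sketch; host and credentials are placeholders):

```java
// Illustrative route: report polling-batch progress using the exchange
// properties set by the FTP consumer (CamelBatchIndex is zero-based).
from("ftp://foo@myserver?password=secret")
    .log("file ${exchangeProperty.CamelBatchIndex} of ${exchangeProperty.CamelBatchSize},"
            + " batch complete: ${exchangeProperty.CamelBatchComplete}")
    .to("file:inbox");
```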
+ +# About timeouts + +The two sets of libraries (see top) have different API for setting +timeout. You can use the `connectTimeout` option for both of them to set +a timeout in millis to establish a network connection. An individual +`soTimeout` can also be set on the FTP/FTPS, which corresponds to using +`ftpClient.soTimeout`. Notice SFTP will automatically use +`connectTimeout` as its `soTimeout`. The `timeout` option only applies +for FTP/FTPS as the data timeout, which corresponds to the +`ftpClient.dataTimeout` value. All timeout values are in millis. + +# Using Local Work Directory + +Camel supports consuming from remote FTP servers and downloading the +files directly into a local work directory. This avoids reading the +entire remote file content into memory as it is streamed directly into +the local file using `FileOutputStream`. + +Camel will store to a local file with the same name as the remote file, +though with `.inprogress` as the extension while the file is being +downloaded. Afterward, the file is renamed to remove the `.inprogress` +suffix. And finally, when the Exchange is complete, the local file is +deleted. + +So if you want to download files from a remote FTP server and store it +as files, then you need to route to a file endpoint such as: + + from("ftp://someone@someserver.com?password=secret&localWorkDirectory=/tmp").to("file://inbox"); + +The route above is ultra efficient as it avoids reading the entire file +content into memory. It will download the remote file directly to a +local file stream. The `java.io.File` handle is then used as the +Exchange body. The file producer leverages this fact and can work +directly on the work file `java.io.File` handle and perform a +`java.io.File.rename` to the target filename. As Camel knows it’s a +local work file, it can optimize and use a rename instead of a file +copy, as the work file is meant to be deleted anyway. 
+ +# Stepwise changing directories + +Camel FTP can operate in two modes in terms of traversing directories +when consuming files (e.g., downloading) or producing files (e.g., +uploading) + +- stepwise + +- not stepwise + +You may want to pick either one depending on your situation and security +issues. Some Camel end users can only download files if they use +stepwise, while others can only download if they do not. + +You can use the `stepwise` option to control the behavior. + +Note that stepwise changing of directory will in most cases only work +when the user is confined to its home directory and when the home +directory is reported as `"/"`. + +The difference between the two of them is best illustrated with an +example. Suppose we have the following directory structure on the remote +FTP server, we need to traverse and download files: + + / + /one + /one/two + /one/two/sub-a + /one/two/sub-b + +And that we have a file in each of `sub-a` (`a.txt`) and `sub-b` +(`b.txt`) folder. + +Default (Stepwise enabled) +**Using `stepwise=true` (default mode)** + + TYPE A + 200 Type set to A + PWD + 257 "/" is current directory. + CWD one + 250 CWD successful. "/one" is current directory. + CWD two + 250 CWD successful. "/one/two" is current directory. + SYST + 215 UNIX emulated by FileZilla + PORT 127,0,0,1,17,94 + 200 Port command successful + LIST + 150 Opening data channel for directory list. + 226 Transfer OK + CWD sub-a + 250 CWD successful. "/one/two/sub-a" is current directory. + PORT 127,0,0,1,17,95 + 200 Port command successful + LIST + 150 Opening data channel for directory list. + 226 Transfer OK + CDUP + 200 CDUP successful. "/one/two" is current directory. + CWD sub-b + 250 CWD successful. "/one/two/sub-b" is current directory. + PORT 127,0,0,1,17,96 + 200 Port command successful + LIST + 150 Opening data channel for directory list. + 226 Transfer OK + CDUP + 200 CDUP successful. "/one/two" is current directory. + CWD / + 250 CWD successful. 
"/" is current directory. + PWD + 257 "/" is current directory. + CWD one + 250 CWD successful. "/one" is current directory. + CWD two + 250 CWD successful. "/one/two" is current directory. + PORT 127,0,0,1,17,97 + 200 Port command successful + RETR foo.txt + 150 Opening data channel for file transfer. + 226 Transfer OK + CWD / + 250 CWD successful. "/" is current directory. + PWD + 257 "/" is current directory. + CWD one + 250 CWD successful. "/one" is current directory. + CWD two + 250 CWD successful. "/one/two" is current directory. + CWD sub-a + 250 CWD successful. "/one/two/sub-a" is current directory. + PORT 127,0,0,1,17,98 + 200 Port command successful + RETR a.txt + 150 Opening data channel for file transfer. + 226 Transfer OK + CWD / + 250 CWD successful. "/" is current directory. + PWD + 257 "/" is current directory. + CWD one + 250 CWD successful. "/one" is current directory. + CWD two + 250 CWD successful. "/one/two" is current directory. + CWD sub-b + 250 CWD successful. "/one/two/sub-b" is current directory. + PORT 127,0,0,1,17,99 + 200 Port command successful + RETR b.txt + 150 Opening data channel for file transfer. + 226 Transfer OK + CWD / + 250 CWD successful. "/" is current directory. + QUIT + 221 Goodbye + disconnected. + +As you can see when stepwise is enabled, it will traverse the directory +structure using CD xxx. 
+ +Stepwise Disabled +**Using `stepwise=false`** + + 230 Logged on + TYPE A + 200 Type set to A + SYST + 215 UNIX emulated by FileZilla + PORT 127,0,0,1,4,122 + 200 Port command successful + LIST one/two + 150 Opening data channel for directory list + 226 Transfer OK + PORT 127,0,0,1,4,123 + 200 Port command successful + LIST one/two/sub-a + 150 Opening data channel for directory list + 226 Transfer OK + PORT 127,0,0,1,4,124 + 200 Port command successful + LIST one/two/sub-b + 150 Opening data channel for directory list + 226 Transfer OK + PORT 127,0,0,1,4,125 + 200 Port command successful + RETR one/two/foo.txt + 150 Opening data channel for file transfer. + 226 Transfer OK + PORT 127,0,0,1,4,126 + 200 Port command successful + RETR one/two/sub-a/a.txt + 150 Opening data channel for file transfer. + 226 Transfer OK + PORT 127,0,0,1,4,127 + 200 Port command successful + RETR one/two/sub-b/b.txt + 150 Opening data channel for file transfer. + 226 Transfer OK + QUIT + 221 Goodbye + disconnected. + +As you can see when not using stepwise, there is no CD operation invoked +at all. + +# Filtering Strategies + +Camel supports pluggable filtering strategies. They are described below. + +See also the documentation for filtering strategies on the [File +component](#file-component.adoc). + +## Custom filtering + +Camel supports pluggable filtering strategies. This strategy it to use +the build in `org.apache.camel.component.file.GenericFileFilter` in +Java. You can then configure the endpoint with such a filter to skip +certain filters before being processed. + +In the sample, we have built our own filter that only accepts files +starting with `report` in the filename. + +And then we can configure our route using the **filter** attribute to +reference our filter (using `#` notation) that we have defined in the +spring XML file: + + + + + + + + + +## Filtering using ANT path matcher + +The ANT path matcher is a filter shipped out-of-the-box in the +**camel-spring** jar. 
So you need to depend on **camel-spring** if you +are using Maven. The reason is that we leverage Spring’s +[AntPathMatcher](http://static.springsource.org/spring/docs/3.0.x/api/org/springframework/util/AntPathMatcher.html) +to do the actual matching. + +The file paths are matched with the following rules: + +- `?` matches one character + +- `*` matches zero or more characters + +- `**` matches zero or more directories in a path + +The sample below demonstrates how to use it: + + from("ftp://admin@localhost:2222/public/camel?antInclude=**/*.txt").to("..."); + +# Using a proxy with SFTP + +To use an HTTP proxy to connect to your remote host, you can configure +your route in the following way: + + + + + + + + + + + + +You can also assign a username and password to the proxy, if necessary. +Please consult the documentation for `com.jcraft.jsch.Proxy` to discover +all options. + +# Setting preferred SFTP authentication method + +If you want to explicitly specify the list of authentication methods +that should be used by `sftp` component, use `preferredAuthentications` +option. If, for example, you would like Camel to attempt to authenticate +with a private/public SSH key and fallback to user/password +authentication in the case when no public key is available, use the +following route configuration: + + from("sftp://localhost:9999/root?username=admin&password=admin&preferredAuthentications=publickey,password"). + to("bean:processFile"); + +# Consuming a single file using a fixed name + +When you want to download a single file and knows the file name, you can +use `fileName=myFileName.txt` to tell Camel the name of the file to +download. By default, the consumer will still do an FTP LIST command to +do a directory listing and then filter these files based on the +`fileName` option. Though in this use-case it may be desirable to turn +off the directory listing by setting `useList=false`. 
For example, the +user account used to log in to the FTP server may not have permission to +do an FTP LIST command. So you can turn off this with `useList=false`, +and then provide the fixed name of the file to download with +`fileName=myFileName.txt`, then the FTP consumer can still download the +file. If the file for some reason does not exist, then Camel will by +default throw an exception, you can turn this off and ignore this by +setting `ignoreFileNotFoundOrPermissionError=true`. + +For example, to have a Camel route that picks up a single file, and +delete it after use you can do + + from("ftp://admin@localhost:21/nolist/?password=admin&stepwise=false&useList=false&ignoreFileNotFoundOrPermissionError=true&fileName=report.txt&delete=true") + .to("activemq:queue:report"); + +Notice that we have used all the options we talked above. + +You can also use this with `ConsumerTemplate`. For example, to download +a single file (if it exists) and grab the file content as a String type: + + String data = template.retrieveBodyNoWait("ftp://admin@localhost:21/nolist/?password=admin&stepwise=false&useList=false&ignoreFileNotFoundOrPermissionError=true&fileName=report.txt&delete=true", String.class); + +# Debug logging + +This component has log level **TRACE** that can be helpful if you have +problems. + +# Samples + +In the sample below, we set up Camel to download all the reports from +the FTP server once every hour (60 min) as BINARY content and store it +as files on the local file system. 
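In Java DSL, the sample just described could be sketched as follows (server name, credentials and directories are placeholders; `delay=3600000` polls once per hour and `binary=true` selects BINARY transfer mode):

```java
// Illustrative Java DSL variant of the sample: poll hourly, download the
// reports in BINARY mode and store them on the local file system.
from("ftp://scott@someserver/public/reports?password=secret&binary=true&delay=3600000")
    .to("file:target/reports");
```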
+ +And the route using XML DSL: + + + + + + +## Consuming a remote FTPS server (implicit SSL) and client authentication + + from("ftps://admin@localhost:2222/public/camel?password=admin&securityProtocol=SSL&implicit=true + &ftpClient.keyStore.file=./src/test/resources/server.jks + &ftpClient.keyStore.password=password&ftpClient.keyStore.keyPassword=password") + .to("bean:foo"); + +## Consuming a remote FTPS server (explicit TLS) and a custom trust store configuration + + from("ftps://admin@localhost:2222/public/camel?password=admin&ftpClient.trustStore.file=./src/test/resources/server.jks&ftpClient.trustStore.password=password") + .to("bean:foo"); + +## Examples + + ftp://someone@someftpserver.com/public/upload/images/holiday2008?password=secret&binary=true + + ftp://someoneelse@someotherftpserver.co.uk:12049/reports/2008/password=secret&binary=false + + ftp://publicftpserver.com/download + +# More information + +This component is an extension of the [File +component](#file-component.adoc). + +You can find additional samples and details on the File component page. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|host|Hostname of the FTP server||string| +|port|Port of the FTP server||integer| +|directoryName|The starting directory||string| +|binary|Specifies the file transfer mode, BINARY or ASCII. Default is ASCII (false).|false|boolean| +|charset|This option is used to specify the encoding of the file. 
You can use this on the consumer, to specify the encodings of the files, which allows Camel to know the charset it should load the file content in case the file content is being accessed. Likewise when writing a file, you can use this option to specify which charset to write the file as well. Do mind that when writing the file Camel may have to read the message content into memory to be able to convert the data into the configured charset, so do not use this if you have big messages.||string|
+|disconnect|Whether or not to disconnect from the remote FTP server right after use. Disconnect will only disconnect the current connection to the FTP server. If you have a consumer which you want to stop, then you need to stop the consumer/route instead.|false|boolean|
+|doneFileName|Producer: If provided, then Camel will write a 2nd done file when the original file has been written. The done file will be empty. This option configures what file name to use. Either you can specify a fixed name. Or you can use dynamic placeholders. The done file will always be written in the same folder as the original file. Consumer: If provided, Camel will only consume files if a done file exists. This option configures what file name to use. Either you can specify a fixed name. Or you can use dynamic placeholders. The done file is always expected in the same folder as the original file. Only ${file.name} and ${file.name.next} are supported as dynamic placeholders.||string|
+|fileName|Use Expression such as File Language to dynamically set the filename. For consumers, it's used as a filename filter. For producers, it's used to evaluate the filename to write. If an expression is set, it takes precedence over the CamelFileName header. (Note: The header itself can also be an Expression). The expression options support both String and Expression types. If the expression is a String type, it is always evaluated using the File Language. 
If the expression is an Expression type, the specified Expression type is used - this allows you, for instance, to use OGNL expressions. For the consumer, you can use it to filter filenames, so you can for instance consume today's file using the File Language syntax: mydata-${date:now:yyyyMMdd}.txt. The producers support the CamelOverruleFileName header which takes precedence over any existing CamelFileName header; the CamelOverruleFileName is a header that is used only once, and makes it easier as this avoids having to temporarily store CamelFileName and restore it afterwards.||string|
+|passiveMode|Sets passive mode connections. Default is active mode connections.|false|boolean|
+|separator|Sets the path separator to be used. UNIX = Uses unix style path separator Windows = Uses windows style path separator Auto = (is default) Use existing path separator in file name|UNIX|object|
+|transferLoggingIntervalSeconds|Configures the interval in seconds to use when logging the progress of upload and download operations that are in-flight. This is used for logging progress when operations take a long time.|5|integer|
+|transferLoggingLevel|Configure the logging level to use when logging the progress of upload and download operations.|DEBUG|object|
+|transferLoggingVerbose|Configures whether to perform verbose (fine grained) logging of the progress of upload and download operations.|false|boolean|
+|fastExistsCheck|If this option is set to true, camel-ftp will use the list file directly to check if the file exists. Since some FTP servers may not support listing the file directly, if the option is false, camel-ftp will use the old way to list the directory and check if the file exists. This option also influences readLock=changed to control whether it performs a fast check to update file information or not. 
This can be used to speed up the process if the FTP server has a lot of files.|false|boolean| +|delete|If true, the file will be deleted after it is processed successfully.|false|boolean| +|moveFailed|Sets the move failure expression based on Simple language. For example, to move files into a .error subdirectory use: .error. Note: When moving the files to the fail location Camel will handle the error and will not pick up the file again.||string| +|noop|If true, the file is not moved or deleted in any way. This option is good for readonly data, or for ETL type requirements. If noop=true, Camel will set idempotent=true as well, to avoid consuming the same files over and over again.|false|boolean| +|preMove|Expression (such as File Language) used to dynamically set the filename when moving it before processing. For example to move in-progress files into the order directory set this value to order.||string| +|preSort|When pre-sort is enabled then the consumer will sort the file and directory names during polling, that was retrieved from the file system. You may want to do this in case you need to operate on the files in a sorted order. The pre-sort is executed before the consumer starts to filter, and accept files to process by Camel. This option is default=false meaning disabled.|false|boolean| +|recursive|If a directory, will look for files in all the sub-directories as well.|false|boolean| +|resumeDownload|Configures whether resume download is enabled. This must be supported by the FTP server (almost all FTP servers support it). 
In addition, the option localWorkDirectory must be configured so downloaded files are stored in a local directory, and the option binary must be enabled, which is required to support resuming of downloads.|false|boolean|
+|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean|
+|streamDownload|Sets the download method to use when not using a local working directory. If set to true, the remote files are streamed to the route as they are read. When set to false, the remote files are loaded into memory before being sent into the route. If enabling this option then you must set stepwise=false as both cannot be enabled at the same time.|false|boolean|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|download|Whether the FTP consumer should download the file. If this option is set to false, then the message body will be null, but the consumer will still trigger a Camel Exchange that has details about the file such as file name, file size, etc. It's just that the file will not be downloaded.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. 
Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|handleDirectoryParserAbsoluteResult|Allows you to set how the consumer will handle subfolders and files in the path if the directory parser results in with absolute paths The reason for this is that some FTP servers may return file names with absolute paths, and if so then the FTP component needs to handle this by converting the returned path into a relative path.|false|boolean| +|ignoreFileNotFoundOrPermissionError|Whether to ignore when (trying to list files in directories or when downloading a file), which does not exist or due to permission error. By default when a directory or file does not exist or insufficient permission, then an exception is thrown. Setting this option to true allows to ignore that instead.|false|boolean| +|inProgressRepository|A pluggable in-progress repository org.apache.camel.spi.IdempotentRepository. The in-progress repository is used to account the current in progress files being consumed. By default a memory based repository is used.||object| +|localWorkDirectory|When consuming, a local work directory can be used to store the remote file content directly in local files, to avoid loading the content into memory. This is beneficial, if you consume a very big remote file and thus can conserve memory.||string| +|onCompletionExceptionHandler|To use a custom org.apache.camel.spi.ExceptionHandler to handle any thrown exceptions that happens during the file on completion process where the consumer does either a commit or rollback. 
The default implementation will log any exception at WARN level and ignore.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|processStrategy|A pluggable org.apache.camel.component.file.GenericFileProcessStrategy allowing you to implement your own readLock option or similar. Can also be used when special conditions must be met before a file can be consumed, such as a special ready file exists. If this option is set then the readLock option does not apply.||object| +|useList|Whether to allow using LIST command when downloading a file. Default is true. In some use cases you may want to download a specific file and are not allowed to use the LIST command, and therefore you can set this option to false. Notice when using this option, then the specific file to download does not include meta-data information such as file size, timestamp, permissions etc, because those information is only possible to retrieve when LIST command is in use.|true|boolean| +|checksumFileAlgorithm|If provided, then Camel will write a checksum file when the original file has been written. The checksum file will contain the checksum created with the provided algorithm for the original file. The checksum file will always be written in the same folder as the original file.||string| +|fileExist|What to do if a file already exists with the same name. Override, which is the default, replaces the existing file. - Append - adds content to the existing file. - Fail - throws a GenericFileOperationException, indicating that there is already an existing file. - Ignore - silently ignores the problem and does not override the existing file, but assumes everything is okay. - Move - option requires to use the moveExisting option to be configured as well. 
The option eagerDeleteTargetFile can be used to control what to do when moving the file and a file with the same name already exists, which would otherwise cause the move operation to fail. The Move option will move any existing files, before writing the target file. - TryRename is only applicable if tempFileName option is in use. This allows trying to rename the file from the temporary name to the actual name, without doing any exists check. This check may be faster on some file systems and especially FTP servers.|Override|object|
+|flatten|Flatten is used to flatten the file name path to strip any leading paths, so it's just the file name. This allows you to consume recursively into sub-directories, but when you e.g. write the files to another directory they will be written in a single directory. Setting this to true on the producer enforces that any file name in CamelFileName header will be stripped for any leading paths.|false|boolean|
+|jailStartingDirectory|Used for jailing (restricting) writing files to the starting directory (and sub) only. This is enabled by default to not allow Camel to write files to outside directories (to be more secure out of the box). You can turn this off to allow writing files to directories outside the starting directory, such as parent or root folders.|true|boolean|
+|moveExisting|Expression (such as File Language) used to compute file name to use when fileExist=Move is configured. To move files into a backup subdirectory just enter backup. This option only supports the following File Language tokens: file:name, file:name.ext, file:name.noext, file:onlyname, file:onlyname.noext, file:ext, and file:parent. Notice the file:parent is not supported by the FTP component, as the FTP component can only move any existing files to a relative directory based on current dir as base.||string|
+|tempFileName|The same as tempPrefix option but offering a more fine grained control on the naming of the temporary filename as it uses the File Language. 
The location for tempFilename is relative to the final file location in the option 'fileName', not the target directory in the base uri. For example if option fileName includes a directory prefix: dir/finalFilename then tempFileName is relative to that subdirectory dir.||string| +|tempPrefix|This option is used to write the file using a temporary name and then, after the write is complete, rename it to the real name. Can be used to identify files being written and also avoid consumers (not using exclusive read locks) reading in progress files. Is often used by FTP when uploading big files.||string| +|allowNullBody|Used to specify if a null body is allowed during file writing. If set to true then an empty file will be created, when set to false, and attempting to send a null body to the file component, a GenericFileWriteException of 'Cannot write null body to file.' will be thrown. If the fileExist option is set to 'Override', then the file will be truncated, and if set to append the file will remain unchanged.|false|boolean| +|chmod|Allows you to set chmod on the stored file. For example chmod=640.||string| +|disconnectOnBatchComplete|Whether or not to disconnect from remote FTP server right after a Batch upload is complete. disconnectOnBatchComplete will only disconnect the current connection to the FTP server.|false|boolean| +|eagerDeleteTargetFile|Whether or not to eagerly delete any existing target file. This option only applies when you use fileExists=Override and the tempFileName option as well. You can use this to disable (set it to false) deleting the target file before the temp file is written. For example you may write big files and want the target file to exists during the temp file is being written. This ensure the target file is only deleted until the very last moment, just before the temp file is being renamed to the target filename. 
This option is also used to control whether to delete any existing files when fileExist=Move is enabled, and an existing file exists. If the option copyAndDeleteOnRenameFails is false, then an exception will be thrown if an existing file exists; if it is true, then the existing file is deleted before the move operation.|true|boolean|
+|keepLastModified|Will keep the last modified timestamp from the source file (if any). Will use the FileConstants.FILE\_LAST\_MODIFIED header to locate the timestamp. This header can contain either a java.util.Date or long with the timestamp. If the timestamp exists and the option is enabled it will set this timestamp on the written file. Note: This option only applies to the file producer. You cannot use this option with any of the ftp producers.|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|moveExistingFileStrategy|Strategy (Custom Strategy) used to move file with special naming token to use when fileExist=Move is configured. By default, there is an implementation used if no custom strategy is provided||object|
+|sendNoop|Whether to send a noop command as a pre-write check before uploading files to the FTP server. This is enabled by default to validate that the connection is still valid, which allows Camel to silently re-connect to be able to upload the file. 
However if this causes problems, you can turn this option off.|true|boolean| +|activePortRange|Set the client side port range in active mode. The syntax is: minPort-maxPort Both port numbers are inclusive, eg 10000-19999 to include all 1xxxx ports.||string| +|autoCreate|Automatically create missing directories in the file's pathname. For the file consumer, that means creating the starting directory. For the file producer, it means the directory the files should be written to.|true|boolean| +|bufferSize|Buffer size in bytes used for writing files (or in case of FTP for downloading and uploading files).|131072|integer| +|connectTimeout|Sets the connect timeout for waiting for a connection to be established Used by both FTPClient and JSCH|10000|duration| +|ftpClient|To use a custom instance of FTPClient||object| +|ftpClientConfig|To use a custom instance of FTPClientConfig to configure the FTP client the endpoint should use.||object| +|ftpClientConfigParameters|Used by FtpComponent to provide additional parameters for the FTPClientConfig||object| +|ftpClientParameters|Used by FtpComponent to provide additional parameters for the FTPClient||object| +|maximumReconnectAttempts|Specifies the maximum reconnect attempts Camel performs when it tries to connect to the remote FTP server. Use 0 to disable this behavior.||integer| +|reconnectDelay|Delay in millis Camel will wait before performing a reconnect attempt.|1000|duration| +|siteCommand|Sets optional site command(s) to be executed after successful login. Multiple site commands can be separated using a new line character.||string| +|soTimeout|Sets the so timeout FTP and FTPS Is the SocketOptions.SO\_TIMEOUT value in millis. Recommended option is to set this to 300000 so as not have a hanged connection. 
On SFTP this option is set as timeout on the JSCH Session instance.|300000|duration|
+|stepwise|Sets whether we should stepwise change directories while traversing file structures when downloading files, or as well when uploading a file to a directory. You can disable this if you for example are in a situation where you cannot change directory on the FTP server due to security reasons. Stepwise cannot be used together with streamDownload.|true|boolean|
+|throwExceptionOnConnectFailed|Should an exception be thrown if connection failed (exhausted). By default an exception is not thrown and a WARN is logged. You can use this to enable exception being thrown and handle the thrown exception from the org.apache.camel.spi.PollingConsumerPollStrategy rollback method.|false|boolean|
+|timeout|Sets the data timeout for waiting for a reply. Used only by FTPClient.|30000|duration|
+|antExclude|Ant style filter exclusion. If both antInclude and antExclude are used, antExclude takes precedence over antInclude. Multiple exclusions may be specified in comma-delimited format.||string|
+|antFilterCaseSensitive|Sets case sensitive flag on ant filter.|true|boolean|
+|antInclude|Ant style filter inclusion. Multiple inclusions may be specified in comma-delimited format.||string|
+|eagerMaxMessagesPerPoll|Allows for controlling whether the limit from maxMessagesPerPoll is eager or not. If eager then the limit is during the scanning of files. Whereas false would scan all files, and then perform sorting. Setting this option to false allows for sorting all files first, and then limit the poll. Mind that this requires a higher memory usage as all file details are in memory to perform the sorting.|true|boolean|
+|exclude|Is used to exclude files, if filename matches the regex pattern (matching is case-insensitive). Notice if you use symbols such as plus sign and others you would need to configure this using the RAW() syntax if configuring this as an endpoint uri. 
See more details at configuring endpoint uris||string| +|excludeExt|Is used to exclude files matching file extension name (case insensitive). For example to exclude bak files, then use excludeExt=bak. Multiple extensions can be separated by comma, for example to exclude bak and dat files, use excludeExt=bak,dat. Note that the file extension includes all parts, for example having a file named mydata.tar.gz will have extension as tar.gz. For more flexibility then use the include/exclude options.||string| +|filter|Pluggable filter as a org.apache.camel.component.file.GenericFileFilter class. Will skip files if filter returns false in its accept() method.||object| +|filterDirectory|Filters the directory based on Simple language. For example to filter on current date, you can use a simple date pattern such as ${date:now:yyyMMdd}||string| +|filterFile|Filters the file based on Simple language. For example to filter on file size, you can use ${file:size} 5000||string| +|idempotent|Option to use the Idempotent Consumer EIP pattern to let Camel skip already processed files. Will by default use a memory based LRUCache that holds 1000 entries. If noop=true then idempotent will be enabled as well to avoid consuming the same files over and over again.|false|boolean| +|idempotentEager|Option to use the Idempotent Consumer EIP pattern to let Camel skip already processed files. Will by default use a memory based LRUCache that holds 1000 entries. If noop=true then idempotent will be enabled as well to avoid consuming the same files over and over again.|false|boolean| +|idempotentKey|To use a custom idempotent key. By default the absolute path of the file is used. 
You can use the File Language, for example to use the file name and file size, you can do: idempotentKey=${file:name}-${file:size}||string| +|idempotentRepository|A pluggable repository org.apache.camel.spi.IdempotentRepository which by default use MemoryIdempotentRepository if none is specified and idempotent is true.||object| +|include|Is used to include files, if filename matches the regex pattern (matching is case in-sensitive). Notice if you use symbols such as plus sign and others you would need to configure this using the RAW() syntax if configuring this as an endpoint uri. See more details at configuring endpoint uris||string| +|includeExt|Is used to include files matching file extension name (case insensitive). For example to include txt files, then use includeExt=txt. Multiple extensions can be separated by comma, for example to include txt and xml files, use includeExt=txt,xml. Note that the file extension includes all parts, for example having a file named mydata.tar.gz will have extension as tar.gz. For more flexibility then use the include/exclude options.||string| +|maxDepth|The maximum depth to traverse when recursively processing a directory.|2147483647|integer| +|maxMessagesPerPoll|To define a maximum messages to gather per poll. By default no maximum is set. Can be used to set a limit of e.g. 1000 to avoid when starting up the server that there are thousands of files. Set a value of 0 or negative to disabled it. Notice: If this option is in use then the File and FTP components will limit before any sorting. For example if you have 100000 files and use maxMessagesPerPoll=500, then only the first 500 files will be picked up, and then sorted. You can use the eagerMaxMessagesPerPoll option and set this to false to allow to scan all files first and then sort afterwards.||integer| +|minDepth|The minimum depth to start processing when recursively processing a directory. Using minDepth=1 means the base directory. 
Using minDepth=2 means the first sub directory.||integer| +|move|Expression (such as Simple Language) used to dynamically set the filename when moving it after processing. To move files into a .done subdirectory just enter .done.||string| +|exclusiveReadLockStrategy|Pluggable read-lock as a org.apache.camel.component.file.GenericFileExclusiveReadLockStrategy implementation.||object| +|readLock|Used by consumer, to only poll the files if it has exclusive read-lock on the file (i.e. the file is not in-progress or being written). Camel will wait until the file lock is granted. This option provides the build in strategies: - none - No read lock is in use - markerFile - Camel creates a marker file (fileName.camelLock) and then holds a lock on it. This option is not available for the FTP component - changed - Changed is using file length/modification timestamp to detect whether the file is currently being copied or not. Will at least use 1 sec to determine this, so this option cannot consume files as fast as the others, but can be more reliable as the JDK IO API cannot always determine whether a file is currently being used by another process. The option readLockCheckInterval can be used to set the check frequency. - fileLock - is for using java.nio.channels.FileLock. This option is not avail for Windows OS and the FTP component. This approach should be avoided when accessing a remote file system via a mount/share unless that file system supports distributed file locks. - rename - rename is for using a try to rename the file as a test if we can get exclusive read-lock. - idempotent - (only for file component) idempotent is for using a idempotentRepository as the read-lock. This allows to use read locks that supports clustering if the idempotent repository implementation supports that. - idempotent-changed - (only for file component) idempotent-changed is for using a idempotentRepository and changed as the combined read-lock. 
This allows using read locks that support clustering if the idempotent repository implementation supports that. - idempotent-rename - (only for file component) idempotent-rename is for using an idempotentRepository and rename as the combined read-lock. This allows using read locks that support clustering if the idempotent repository implementation supports that. Notice: The various read locks are not all suited to work in clustered mode, where concurrent consumers on different nodes are competing for the same files on a shared file system. The markerFile uses a close-to-atomic operation to create the empty marker file, but it is not guaranteed to work in a cluster. The fileLock may work better but then the file system needs to support distributed file locks, and so on. Using the idempotent read lock can support clustering if the idempotent repository supports clustering, such as Hazelcast Component or Infinispan.|none|string|
+|readLockCheckInterval|Interval in millis for the read-lock, if supported by the read lock. This interval is used for sleeping between attempts to acquire the read lock. For example when using the changed read lock, you can set a higher interval period to cater for slow writes. The default of 1 sec. may be too fast if the producer is very slow writing the file. Notice: For FTP the default readLockCheckInterval is 5000. The readLockTimeout value must be higher than readLockCheckInterval, but a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that ample time is allowed for the read lock process to try to grab the lock before the timeout is hit.|1000|integer|
+|readLockDeleteOrphanLockFiles|Whether or not read lock with marker files should upon startup delete any orphan read lock files, which may have been left on the file system, if Camel was not properly shutdown (such as a JVM crash). 
If this option is set to false, then any orphaned lock file will cause Camel to not attempt to pick up that file; this could also be because another node is concurrently reading files from the same shared directory.|true|boolean|
+|readLockLoggingLevel|Logging level used when a read lock could not be acquired. By default a DEBUG is logged. You can change this level, for example to OFF to not have any logging. This option is only applicable for readLock of types: changed, fileLock, idempotent, idempotent-changed, idempotent-rename, rename.|DEBUG|object|
+|readLockMarkerFile|Whether to use marker file with the changed, rename, or exclusive read lock types. By default a marker file is used as well to guard against other processes picking up the same files. This behavior can be turned off by setting this option to false. For example if you do not want to write marker files to the file systems by the Camel application.|true|boolean|
+|readLockMinAge|This option is applied only for readLock=changed. It allows to specify a minimum age the file must be before attempting to acquire the read lock. For example use readLockMinAge=300s to require the file is at least 5 minutes old. This can speed up the changed read lock as it will only attempt to acquire files which are at least that given age.|0|integer|
+|readLockMinLength|This option is applied only for readLock=changed. It allows you to configure a minimum file length. By default Camel expects the file to contain data, and thus the default value is 1. You can set this option to zero, to allow consuming zero-length files.|1|integer|
+|readLockRemoveOnCommit|This option is applied only for readLock=idempotent. It allows to specify whether to remove the file name entry from the idempotent repository when processing the file has succeeded and a commit happens. By default the file is not removed which ensures that race conditions do not occur so another active node may attempt to grab the file. 
Instead the idempotent repository may support eviction strategies that you can configure to evict the file name entry after X minutes - this ensures no problems with race conditions. See more details at the readLockIdempotentReleaseDelay option.|false|boolean| +|readLockRemoveOnRollback|This option is applied only for readLock=idempotent. It allows to specify whether to remove the file name entry from the idempotent repository when processing the file failed and a rollback happens. If this option is false, then the file name entry is confirmed (as if the file did a commit).|true|boolean| +|readLockTimeout|Optional timeout in millis for the read-lock, if supported by the read-lock. If the read-lock could not be granted and the timeout triggered, then Camel will skip the file. At next poll Camel, will try the file again, and this time maybe the read-lock could be granted. Use a value of 0 or lower to indicate forever. Currently fileLock, changed and rename support the timeout. Notice: For FTP the default readLockTimeout value is 20000 instead of 10000. The readLockTimeout value must be higher than readLockCheckInterval, but a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that ample time is allowed for the read lock process to try to grab the lock before the timeout was hit.|10000|integer| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. 
When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| +|account|Account to use for login||string| +|password|Password to use for login||string| +|username|Username to use for login||string| +|shuffle|To shuffle the list of files (sort in random order)|false|boolean| +|sortBy|Built-in sort by using the File Language. 
Supports nested sorts, so you can have a sort by file name and as a 2nd group sort by modified date.||string| +|sorter|Pluggable sorter as a java.util.Comparator class.||object| diff --git a/camel-ftps.md b/camel-ftps.md new file mode 100644 index 0000000000000000000000000000000000000000..c9a28227968d5cfac942717a91168d594901464b --- /dev/null +++ b/camel-ftps.md @@ -0,0 +1,164 @@ +# Ftps + +**Since Camel 2.2** + +**Both producer and consumer are supported** + +This component provides access to remote file systems over the FTP secure and SFTP protocols. + +Maven users will need to add the following dependency to their `pom.xml` for this component: + +<dependency> + <groupId>org.apache.camel</groupId> + <artifactId>camel-ftp</artifactId> + <version>x.x.x</version> +</dependency> + +# More Information + +For more information, see the [FTP component](#ftp-component.adoc). + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible in future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazily (on the first message). By starting lazily you can use this to allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail to be started. 
By deferring this startup to be lazy, the startup failure can be handled during the routing of messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|useGlobalSslContextParameters|Enable usage of global SSL context parameters.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|host|Hostname of the FTP server||string| +|port|Port of the FTP server||integer| +|directoryName|The starting directory||string| +|binary|Specifies the file transfer mode, BINARY or ASCII. Default is ASCII (false).|false|boolean| +|charset|This option is used to specify the encoding of the file. You can use this on the consumer to specify the encoding of the files, which allows Camel to know the charset it should use to load the file content in case the content is being accessed. Likewise when writing a file, you can use this option to specify which charset to write the file in as well. 
Keep in mind that when writing the file Camel may have to read the message content into memory to be able to convert the data into the configured charset, so do not use this if you have big messages.||string| +|disconnect|Whether or not to disconnect from the remote FTP server right after use. Disconnect will only disconnect the current connection to the FTP server. If you have a consumer which you want to stop, then you need to stop the consumer/route instead.|false|boolean| +|doneFileName|Producer: If provided, then Camel will write a 2nd done file when the original file has been written. The done file will be empty. This option configures what file name to use. You can either specify a fixed name or use dynamic placeholders. The done file will always be written in the same folder as the original file. Consumer: If provided, Camel will only consume files if a done file exists. This option configures what file name to use. You can either specify a fixed name or use dynamic placeholders. The done file is always expected in the same folder as the original file. Only ${file.name} and ${file.name.next} are supported as dynamic placeholders.||string| +|fileName|Use an Expression such as the File Language to dynamically set the filename. For consumers, it's used as a filename filter. For producers, it's used to evaluate the filename to write. If an expression is set, it takes precedence over the CamelFileName header. (Note: The header itself can also be an Expression). The expression options support both String and Expression types. If the expression is a String type, it is always evaluated using the File Language. If the expression is an Expression type, the specified Expression type is used - this allows you, for instance, to use OGNL expressions. For the consumer, you can use it to filter filenames, so you can for instance consume today's file using the File Language syntax: mydata-${date:now:yyyyMMdd}.txt. 
The producers support the CamelOverruleFileName header which takes precedence over any existing CamelFileName header; the CamelOverruleFileName is a header that is used only once, which avoids having to temporarily store and later restore the CamelFileName header.||string| +|passiveMode|Sets passive mode connections. Default is active mode connections.|false|boolean| +|separator|Sets the path separator to be used. UNIX = uses Unix-style path separators. Windows = uses Windows-style path separators. Auto (default) = uses the existing path separator in the file name.|UNIX|object| +|transferLoggingIntervalSeconds|Configures the interval in seconds to use when logging the progress of upload and download operations that are in-flight. This is used for logging progress when operations take a long time.|5|integer| +|transferLoggingLevel|Configure the logging level to use when logging the progress of upload and download operations.|DEBUG|object| +|transferLoggingVerbose|Configures whether to perform verbose (fine-grained) logging of the progress of upload and download operations.|false|boolean| +|fastExistsCheck|If this option is set to true, camel-ftp will list the file directly to check if it exists. Since some FTP servers may not support listing a file directly, when the option is false camel-ftp will use the old way of listing the directory and checking if the file exists. This option also influences readLock=changed to control whether it performs a fast check to update file information or not. This can be used to speed up the process if the FTP server has a lot of files.|false|boolean| +|delete|If true, the file will be deleted after it is processed successfully.|false|boolean| +|moveFailed|Sets the move failure expression based on Simple language. For example, to move files into a .error subdirectory use: .error. 
Note: When moving the files to the fail location, Camel will handle the error and will not pick up the file again.||string| +|noop|If true, the file is not moved or deleted in any way. This option is good for readonly data, or for ETL type requirements. If noop=true, Camel will set idempotent=true as well, to avoid consuming the same files over and over again.|false|boolean| +|preMove|Expression (such as File Language) used to dynamically set the filename when moving it before processing. For example, to move in-progress files into the order directory set this value to order.||string| +|preSort|When pre-sort is enabled, the consumer will sort the file and directory names retrieved from the file system during polling. You may want to do this in case you need to operate on the files in a sorted order. The pre-sort is executed before the consumer starts to filter and accept files to be processed by Camel. This option is disabled by default.|false|boolean| +|recursive|If a directory, will look for files in all the sub-directories as well.|false|boolean| +|resumeDownload|Configures whether resume download is enabled. This must be supported by the FTP server (almost all FTP servers support it). In addition, the option localWorkDirectory must be configured so downloaded files are stored in a local directory, and the option binary must be enabled, which is required to support resuming of downloads.|false|boolean| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|streamDownload|Sets the download method to use when not using a local working directory. If set to true, the remote files are streamed to the route as they are read. When set to false, the remote files are loaded into memory before being sent into the route. 
If enabling this option, you must set stepwise=false as both cannot be enabled at the same time.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible in future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean| +|download|Whether the FTP consumer should download the file. If this option is set to false, the message body will be null, but the consumer will still trigger a Camel Exchange that has details about the file such as file name, file size, etc. It's just that the file will not be downloaded.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice that if the option bridgeErrorHandler is enabled, this option is not in use. 
By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|handleDirectoryParserAbsoluteResult|Allows you to set how the consumer will handle subfolders and files in the path if the directory parser results in absolute paths. The reason for this is that some FTP servers may return file names with absolute paths, and if so the FTP component needs to handle this by converting the returned path into a relative path.|false|boolean| +|ignoreFileNotFoundOrPermissionError|Whether to ignore a directory or file that does not exist, or a permission error, when trying to list files in directories or when downloading a file. By default an exception is thrown when a directory or file does not exist or there is insufficient permission. Setting this option to true ignores these errors instead.|false|boolean| +|inProgressRepository|A pluggable in-progress repository org.apache.camel.spi.IdempotentRepository. The in-progress repository is used to track the current in-progress files being consumed. By default a memory based repository is used.||object| +|localWorkDirectory|When consuming, a local work directory can be used to store the remote file content directly in local files, to avoid loading the content into memory. This is beneficial if you consume very big remote files and thus can conserve memory.||string| +|onCompletionExceptionHandler|To use a custom org.apache.camel.spi.ExceptionHandler to handle any thrown exceptions that happen during the file on-completion process where the consumer does either a commit or rollback. 
The default implementation will log any exception at WARN level and ignore it.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and routed in Camel.||object| +|processStrategy|A pluggable org.apache.camel.component.file.GenericFileProcessStrategy allowing you to implement your own readLock option or similar. Can also be used when special conditions must be met before a file can be consumed, such as when a special ready file exists. If this option is set, the readLock option does not apply.||object| +|useList|Whether to allow using the LIST command when downloading a file. Default is true. In some use cases you may want to download a specific file and are not allowed to use the LIST command, and therefore you can set this option to false. Notice that when using this option, the specific file to download does not include meta-data information such as file size, timestamp, permissions etc, because that information can only be retrieved when the LIST command is in use.|true|boolean| +|checksumFileAlgorithm|If provided, then Camel will write a checksum file when the original file has been written. The checksum file will contain the checksum created with the provided algorithm for the original file. The checksum file will always be written in the same folder as the original file.||string| +|fileExist|What to do if a file already exists with the same name. Override, which is the default, replaces the existing file. - Append - adds content to the existing file. - Fail - throws a GenericFileOperationException, indicating that there is already an existing file. - Ignore - silently ignores the problem and does not override the existing file, but assumes everything is okay. - Move - requires the moveExisting option to be configured as well. 
The option eagerDeleteTargetFile can be used to control what to do when moving the file and a file with the target name already exists, which would otherwise cause the move operation to fail. The Move option will move any existing files before writing the target file. - TryRename is only applicable if the tempFileName option is in use. This attempts to rename the file from the temporary name to the actual name, without doing any exists check. This check may be faster on some file systems and especially FTP servers.|Override|object| +|flatten|Flatten is used to flatten the file name path to strip any leading paths, so it's just the file name. This allows you to consume recursively into sub-directories, but when you e.g. write the files to another directory they will be written in a single directory. Setting this to true on the producer enforces that any file name in the CamelFileName header will be stripped of any leading paths.|false|boolean| +|jailStartingDirectory|Used for jailing (restricting) writing files to the starting directory (and sub-directories) only. This is enabled by default so Camel cannot write files to outside directories (to be more secure out of the box). You can turn this off to allow writing files to directories outside the starting directory, such as parent or root folders.|true|boolean| +|moveExisting|Expression (such as File Language) used to compute the file name to use when fileExist=Move is configured. To move files into a backup subdirectory just enter backup. This option only supports the following File Language tokens: file:name, file:name.ext, file:name.noext, file:onlyname, file:onlyname.noext, file:ext, and file:parent. Notice that file:parent is not supported by the FTP component, as the FTP component can only move any existing files to a relative directory based on the current dir as base.||string| +|tempFileName|The same as the tempPrefix option but offering more fine-grained control on the naming of the temporary filename, as it uses the File Language. 
The location for tempFilename is relative to the final file location in the option 'fileName', not the target directory in the base uri. For example, if the option fileName includes a directory prefix: dir/finalFilename, then tempFileName is relative to that subdirectory dir.||string| +|tempPrefix|This option is used to write the file using a temporary name and then, after the write is complete, rename it to the real name. Can be used to identify files being written and also avoid consumers (not using exclusive read locks) reading in-progress files. Is often used by FTP when uploading big files.||string| +|allowNullBody|Used to specify if a null body is allowed during file writing. If set to true, an empty file will be created; when set to false, attempting to send a null body to the file component throws a GenericFileWriteException of 'Cannot write null body to file.'. If the fileExist option is set to 'Override', then the file will be truncated, and if set to 'Append' the file will remain unchanged.|false|boolean| +|chmod|Allows you to set chmod on the stored file. For example chmod=640.||string| +|disconnectOnBatchComplete|Whether or not to disconnect from the remote FTP server right after a batch upload is complete. disconnectOnBatchComplete will only disconnect the current connection to the FTP server.|false|boolean| +|eagerDeleteTargetFile|Whether or not to eagerly delete any existing target file. This option only applies when you use fileExist=Override and the tempFileName option as well. You can use this to disable (set it to false) deleting the target file before the temp file is written. For example, you may write big files and want the target file to exist while the temp file is being written. This ensures the target file is only deleted at the very last moment, just before the temp file is renamed to the target filename. 
This option is also used to control whether to delete any existing files when fileExist=Move is enabled and an existing file exists. If this option is false, an exception will be thrown if an existing file exists; if it is true, the existing file is deleted before the move operation.|true|boolean| +|keepLastModified|Will keep the last modified timestamp from the source file (if any). Will use the FileConstants.FILE\_LAST\_MODIFIED header to locate the timestamp. This header can contain either a java.util.Date or a long with the timestamp. If the timestamp exists and the option is enabled, it will set this timestamp on the written file. Note: This option only applies to the file producer. You cannot use this option with any of the ftp producers.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazily (on the first message). By starting lazily you can use this to allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail to be started. By deferring this startup to be lazy, the startup failure can be handled during the routing of messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time.|false|boolean| +|moveExistingFileStrategy|Strategy (custom strategy) used to move a file with a special naming token when fileExist=Move is configured. By default, a built-in implementation is used if no custom strategy is provided.||object| +|sendNoop|Whether to send a noop command as a pre-write check before uploading files to the FTP server. This is enabled by default to validate that the connection is still valid, which allows Camel to silently re-connect to be able to upload the file. 
However, if this causes problems, you can turn this option off.|true|boolean| +|activePortRange|Set the client side port range in active mode. The syntax is: minPort-maxPort. Both port numbers are inclusive, e.g. 10000-19999 to include all 1xxxx ports.||string| +|autoCreate|Automatically create missing directories in the file's pathname. For the file consumer, that means creating the starting directory. For the file producer, it means the directory the files should be written to.|true|boolean| +|bufferSize|Buffer size in bytes used for writing files (or in case of FTP for downloading and uploading files).|131072|integer| +|connectTimeout|Sets the connect timeout for waiting for a connection to be established. Used by both FTPClient and JSCH.|10000|duration| +|ftpClient|To use a custom instance of FTPClient||object| +|ftpClientConfig|To use a custom instance of FTPClientConfig to configure the FTP client the endpoint should use.||object| +|ftpClientConfigParameters|Used by FtpComponent to provide additional parameters for the FTPClientConfig||object| +|ftpClientParameters|Used by FtpComponent to provide additional parameters for the FTPClient||object| +|maximumReconnectAttempts|Specifies the maximum reconnect attempts Camel performs when it tries to connect to the remote FTP server. Use 0 to disable this behavior.||integer| +|reconnectDelay|Delay in millis Camel will wait before performing a reconnect attempt.|1000|duration| +|siteCommand|Sets optional site command(s) to be executed after successful login. Multiple site commands can be separated using a new line character.||string| +|soTimeout|Sets the socket timeout for FTP and FTPS. This is the SocketOptions.SO\_TIMEOUT value in millis. It is recommended to set this to 300000 so as not to have a hung connection. 
On SFTP this option is set as the timeout on the JSCH Session instance.|300000|duration| +|stepwise|Sets whether we should stepwise change directories while traversing file structures when downloading files, or as well when uploading a file to a directory. You can disable this if you, for example, are in a situation where you cannot change directory on the FTP server for security reasons. Stepwise cannot be used together with streamDownload.|true|boolean| +|throwExceptionOnConnectFailed|Should an exception be thrown if connection failed (exhausted). By default an exception is not thrown and a WARN is logged. You can use this to enable an exception being thrown and handle the thrown exception from the org.apache.camel.spi.PollingConsumerPollStrategy rollback method.|false|boolean| +|timeout|Sets the data timeout for waiting for a reply. Used only by FTPClient.|30000|duration| +|antExclude|Ant style filter exclusion. If both antInclude and antExclude are used, antExclude takes precedence over antInclude. Multiple exclusions may be specified in comma-delimited format.||string| +|antFilterCaseSensitive|Sets the case sensitive flag on the ant filter.|true|boolean| +|antInclude|Ant style filter inclusion. Multiple inclusions may be specified in comma-delimited format.||string| +|eagerMaxMessagesPerPoll|Allows for controlling whether the limit from maxMessagesPerPoll is eager or not. If eager, the limit is applied during the scanning of files, whereas false would scan all files and then perform sorting. Setting this option to false allows for sorting all files first, and then limiting the poll. Mind that this requires higher memory usage, as all file details are kept in memory to perform the sorting.|true|boolean| +|exclude|Is used to exclude files if the filename matches the regex pattern (matching is case-insensitive). Notice that if you use symbols such as the plus sign and others, you would need to configure this using the RAW() syntax when configuring this as an endpoint uri. 
See more details at configuring endpoint uris||string| +|excludeExt|Is used to exclude files matching a file extension name (case-insensitive). For example, to exclude bak files, use excludeExt=bak. Multiple extensions can be separated by comma; for example, to exclude bak and dat files, use excludeExt=bak,dat. Note that the file extension includes all parts; for example, a file named mydata.tar.gz has the extension tar.gz. For more flexibility, use the include/exclude options.||string| +|filter|Pluggable filter as an org.apache.camel.component.file.GenericFileFilter class. Will skip files if the filter returns false in its accept() method.||object| +|filterDirectory|Filters the directory based on Simple language. For example, to filter on the current date, you can use a simple date pattern such as ${date:now:yyyyMMdd}||string| +|filterFile|Filters the file based on Simple language. For example, to filter on file size, you can use ${file:size} > 5000||string| +|idempotent|Option to use the Idempotent Consumer EIP pattern to let Camel skip already processed files. Will by default use a memory based LRUCache that holds 1000 entries. If noop=true then idempotent will be enabled as well to avoid consuming the same files over and over again.|false|boolean| +|idempotentEager|Option to use the Idempotent Consumer EIP pattern to let Camel skip already processed files. Will by default use a memory based LRUCache that holds 1000 entries. If noop=true then idempotent will be enabled as well to avoid consuming the same files over and over again.|false|boolean| +|idempotentKey|To use a custom idempotent key. By default the absolute path of the file is used. 
You can use the File Language, for example, to use the file name and file size: idempotentKey=${file:name}-${file:size}||string| +|idempotentRepository|A pluggable repository org.apache.camel.spi.IdempotentRepository which by default uses MemoryIdempotentRepository if none is specified and idempotent is true.||object| +|include|Is used to include files if the filename matches the regex pattern (matching is case-insensitive). Notice that if you use symbols such as the plus sign and others, you would need to configure this using the RAW() syntax when configuring this as an endpoint uri. See more details at configuring endpoint uris||string| +|includeExt|Is used to include files matching a file extension name (case-insensitive). For example, to include txt files, use includeExt=txt. Multiple extensions can be separated by comma; for example, to include txt and xml files, use includeExt=txt,xml. Note that the file extension includes all parts; for example, a file named mydata.tar.gz has the extension tar.gz. For more flexibility, use the include/exclude options.||string| +|maxDepth|The maximum depth to traverse when recursively processing a directory.|2147483647|integer| +|maxMessagesPerPoll|To define the maximum number of messages to gather per poll. By default no maximum is set. Can be used to set a limit of e.g. 1000 to avoid thousands of files being picked up when starting up the server. Set a value of 0 or negative to disable it. Notice: If this option is in use, then the File and FTP components will limit before any sorting. For example, if you have 100000 files and use maxMessagesPerPoll=500, then only the first 500 files will be picked up, and then sorted. You can use the eagerMaxMessagesPerPoll option and set this to false to scan all files first and then sort afterwards.||integer| +|minDepth|The minimum depth to start processing when recursively processing a directory. Using minDepth=1 means the base directory. 
Using minDepth=2 means the first sub directory.||integer| +|move|Expression (such as Simple Language) used to dynamically set the filename when moving it after processing. To move files into a .done subdirectory just enter .done.||string| +|exclusiveReadLockStrategy|Pluggable read-lock as an org.apache.camel.component.file.GenericFileExclusiveReadLockStrategy implementation.||object| +|readLock|Used by the consumer to only poll the files if it has an exclusive read-lock on the file (i.e. the file is not in-progress or being written). Camel will wait until the file lock is granted. This option provides the built-in strategies: - none - No read lock is in use - markerFile - Camel creates a marker file (fileName.camelLock) and then holds a lock on it. This option is not available for the FTP component - changed - Changed uses the file length/modification timestamp to detect whether the file is currently being copied or not. Will at least use 1 sec to determine this, so this option cannot consume files as fast as the others, but can be more reliable as the JDK IO API cannot always determine whether a file is currently being used by another process. The option readLockCheckInterval can be used to set the check frequency. - fileLock - is for using java.nio.channels.FileLock. This option is not available for Windows OS and the FTP component. This approach should be avoided when accessing a remote file system via a mount/share unless that file system supports distributed file locks. - rename - rename attempts to rename the file as a test of whether we can get an exclusive read-lock. - idempotent - (only for the file component) idempotent is for using an idempotentRepository as the read-lock. This allows read locks that support clustering if the idempotent repository implementation supports that. - idempotent-changed - (only for the file component) idempotent-changed is for using an idempotentRepository and changed as the combined read-lock. 
This allows read locks that support clustering if the idempotent repository implementation supports that. - idempotent-rename - (only for the file component) idempotent-rename is for using an idempotentRepository and rename as the combined read-lock. This allows read locks that support clustering if the idempotent repository implementation supports that. Notice: The various read locks are not all suited to work in clustered mode, where concurrent consumers on different nodes are competing for the same files on a shared file system. The markerFile strategy uses a close-to-atomic operation to create the empty marker file, but it is not guaranteed to work in a cluster. The fileLock strategy may work better, but then the file system needs to support distributed file locks, and so on. Using the idempotent read lock can support clustering if the idempotent repository supports clustering, such as the Hazelcast Component or Infinispan.|none|string| +|readLockCheckInterval|Interval in millis for the read-lock, if supported by the read lock. This interval is used for sleeping between attempts to acquire the read lock. For example, when using the changed read lock, you can set a higher interval period to cater for slow writes. The default of 1 sec. may be too fast if the producer is very slow at writing the file. Notice: For FTP the default readLockCheckInterval is 5000. The readLockTimeout value must be higher than readLockCheckInterval, but a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that ample time is allowed for the read lock process to try to grab the lock before the timeout is hit.|1000|integer| +|readLockDeleteOrphanLockFiles|Whether or not a read lock with marker files should upon startup delete any orphan read lock files, which may have been left on the file system if Camel was not properly shut down (such as a JVM crash). 
If this option is set to false, any orphaned lock file will cause Camel to not attempt to pick up that file; this could also be because another node is concurrently reading files from the same shared directory.|true|boolean| +|readLockLoggingLevel|Logging level used when a read lock could not be acquired. By default a DEBUG is logged. You can change this level, for example to OFF to not have any logging. This option is only applicable for readLock of types: changed, fileLock, idempotent, idempotent-changed, idempotent-rename, rename.|DEBUG|object| +|readLockMarkerFile|Whether to use a marker file with the changed, rename, or exclusive read lock types. By default a marker file is used as well to guard against other processes picking up the same files. This behavior can be turned off by setting this option to false. For example, if you do not want the Camel application to write marker files to the file systems.|true|boolean| +|readLockMinAge|This option is applied only for readLock=changed. It allows you to specify a minimum age the file must be before attempting to acquire the read lock. For example, use readLockMinAge=300s to require that the file is at least 5 minutes old. This can speed up the changed read lock, as it will only attempt to acquire files which are at least that given age.|0|integer| +|readLockMinLength|This option is applied only for readLock=changed. It allows you to configure a minimum file length. By default Camel expects the file to contain data, and thus the default value is 1. You can set this option to zero, to allow consuming zero-length files.|1|integer| +|readLockRemoveOnCommit|This option is applied only for readLock=idempotent. It allows you to specify whether to remove the file name entry from the idempotent repository when processing the file succeeds and a commit happens. By default the file is not removed, which ensures that no race condition occurs where another active node may attempt to grab the file. 
Instead, the idempotent repository may support eviction strategies that you can configure to evict the file name entry after X minutes - this ensures no problems with race conditions. See more details at the readLockIdempotentReleaseDelay option.|false|boolean|
|readLockRemoveOnRollback|This option is applied only for readLock=idempotent. It allows you to specify whether to remove the file name entry from the idempotent repository when processing the file failed and a rollback happens. If this option is false, then the file name entry is confirmed (as if the file did a commit).|true|boolean|
|readLockTimeout|Optional timeout in millis for the read-lock, if supported by the read-lock. If the read-lock could not be granted and the timeout triggered, then Camel will skip the file. At the next poll, Camel will try the file again, and this time maybe the read-lock could be granted. Use a value of 0 or lower to indicate forever. Currently fileLock, changed and rename support the timeout. Notice: For FTP the default readLockTimeout value is 20000 instead of 10000. The readLockTimeout value must be higher than readLockCheckInterval, but a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that ample time is allowed for the read lock process to try to grab the lock before the timeout is hit.|10000|integer|
|backoffErrorThreshold|The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in.||integer|
|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier should kick in.||integer|
|backoffMultiplier|To let the scheduled polling consumer back off if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt happens again.
When this option is in use, then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer|
|delay|Milliseconds before the next poll.|500|integer|
|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean|
|initialDelay|Milliseconds before the first poll starts.|1000|integer|
|repeatCount|Specifies a maximum limit on the number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer|
|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object|
|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single-threaded thread pool.||object|
|scheduler|To use a cron scheduler from either the camel-spring or camel-quartz component. Use value spring or quartz for the built-in scheduler.|none|object|
|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz or Spring based schedulers.||object|
|startScheduler|Whether the scheduler should be auto started.|true|boolean|
|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object|
|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean|
|account|Account to use for login||string|
|disableSecureDataChannelDefaults|Use this option to disable default options when using a secure data channel. This allows you to be in full control of which execPbsz and execProt settings should be used. Default is false.|false|boolean|
|execPbsz|When using a secure data channel, you can set the exec protection buffer size||integer|
|execProt|The exec protection level PROT command.
C - Clear, S - Safe (SSL protocol only), E - Confidential (SSL protocol only), P - Private||string|
|ftpClientKeyStoreParameters|Set the key store parameters||object|
|ftpClientTrustStoreParameters|Set the trust store parameters||object|
|implicit|Set the security mode (Implicit/Explicit). true - Implicit Mode / false - Explicit Mode|false|boolean|
|password|Password to use for login||string|
|securityProtocol|Set the underlying security protocol.|TLSv1.3|string|
|sslContextParameters|Gets the JSSE configuration that overrides any settings in FtpsEndpoint#ftpClientKeyStoreParameters, ftpClientTrustStoreParameters, and FtpsConfiguration#getSecurityProtocol().||object|
|username|Username to use for login||string|
|shuffle|To shuffle the list of files (sort in random order)|false|boolean|
|sortBy|Built-in sort by using the File Language. Supports nested sorts, so you can have a sort by file name and as a 2nd group sort by modified date.||string|
|sorter|Pluggable sorter as a java.util.Comparator class.||object|

diff --git a/camel-geocoder.md b/camel-geocoder.md
new file mode 100644
index 0000000000000000000000000000000000000000..4813e618b4f700508872b3e9852f36fec507fe1a
--- /dev/null
+++ b/camel-geocoder.md
@@ -0,0 +1,108 @@

# Geocoder

**Since Camel 2.12**

**Only producer is supported**

The Geocoder component is used for looking up geocodes (latitude and
longitude) for a given address, or for reverse lookup.

The component uses either a hosted [Nominatim
server](https://github.com/openstreetmap/Nominatim) or the [Java API for
Google Geocoder](https://code.google.com/p/geocoder-java/) library.
Maven users will need to add the following dependency to their `pom.xml`
for this component:

    org.apache.camel
    camel-geocoder
    x.x.x

# URI format

    geocoder:address:name[?options]
    geocoder:latlng:latitude,longitude[?options]

# Exchange data format

Notice that not all headers may be provided, depending on available data
and the mode in use (`address` vs. `latlng`).

## Body using a Nominatim Server

Camel will deliver the body as a JSONv2 type.

## Body using a Google Server

Camel will deliver the body as a
`com.google.code.geocoder.model.GeocodeResponse` type.
And if the address is `"current"`, then the response is a String type
with a JSON representation of the current location.

If the option `headersOnly` is set to `true`, then the message body is
left as-is, and only headers will be added to the Exchange.

# Samples

In the example below, we get the latitude and longitude for Paris,
France:

    from("direct:start")
        .to("geocoder:address:Paris, France?type=NOMINATIM&serverUrl=https://nominatim.openstreetmap.org")

If you provide a `CamelGeoCoderAddress` header, then that overrides the
endpoint configuration, so to get the location of Copenhagen, Denmark,
we can send a message with a header as shown:

    template.sendBodyAndHeader("direct:start", "Hello", GeoCoderConstants.ADDRESS, "Copenhagen, Denmark");

To get the address for a latitude and longitude, we can do:

    from("direct:start")
        .to("geocoder:latlng:40.714224,-73.961452")
        .log("Location ${header.CamelGeocoderAddress} is at lat/lng: ${header.CamelGeocoderLatlng} and in country ${header.CamelGeoCoderCountryShort}")

Which will log:

    Location 285 Bedford Avenue, Brooklyn, NY 11211, USA is at lat/lng: 40.71412890,-73.96140740 and in country US

To get the current location using the Google GeoCoder, you can use
"current" as the address as shown:

    from("direct:start")
        .to("geocoder:address:current")

## Component Configurations
+|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|geoApiContext|Configuration for Google maps API||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|address|The geo address which should be prefixed with address:||string| +|latlng|The geo latitude and longitude which should be prefixed with latlng:||string| +|headersOnly|Whether to only enrich the Exchange with headers, and leave the body as-is.|false|boolean| +|language|The language to use.|en|string| +|serverUrl|URL to the geocoder server. Mandatory for Nominatim server.||string| +|type|Type of GeoCoding server. Supported Nominatim and Google.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|proxyAuthDomain|Proxy Authentication Domain to access Google GeoCoding server.||string| +|proxyAuthHost|Proxy Authentication Host to access Google GeoCoding server.||string| +|proxyAuthMethod|Authentication Method to Google GeoCoding server.||string| +|proxyAuthPassword|Proxy Password to access GeoCoding server.||string| +|proxyAuthUsername|Proxy Username to access GeoCoding server.||string| +|proxyHost|Proxy Host to access GeoCoding server.||string| +|proxyPort|Proxy Port to access GeoCoding server.||integer| +|apiKey|API Key to access Google. Mandatory for Google GeoCoding server.||string| +|clientId|Client ID to access Google GeoCoding server.||string| +|clientKey|Client Key to access Google GeoCoding server.||string| diff --git a/camel-git.md b/camel-git.md new file mode 100644 index 0000000000000000000000000000000000000000..83aca3fb8f83fea4fe16a74e04334b95baf2b650 --- /dev/null +++ b/camel-git.md @@ -0,0 +1,94 @@ +# Git + +**Since Camel 2.16** + +**Both producer and consumer are supported** + +The Git component allows you to work with a generic Git repository. + + + org.apache.camel + camel-git + x.x.x + + + +**URI Format** + + git://localRepositoryPath[?options] + +# URI Options + +The producer allows doing operations on a specific repository. The +consumer allows consuming commits, tags, and branches in a specific +repository. + +# Producer Example + +Below is an example route of a producer that adds a file test.java to a +local repository, commits it with a specific message on the `main` +branch and then pushes it to remote repository. 
    from("direct:start")
        .setHeader(GitConstants.GIT_FILE_NAME, constant("test.java"))
        .to("git:///tmp/testRepo?operation=add")
        .setHeader(GitConstants.GIT_COMMIT_MESSAGE, constant("first commit"))
        .to("git:///tmp/testRepo?operation=commit")
        .to("git:///tmp/testRepo?operation=push&remotePath=https://foo.com/test/test.git&username=xxx&password=xxx")
        .to("git:///tmp/testRepo?operation=createTag&tagName=myTag")
        .to("git:///tmp/testRepo?operation=pushTag&tagName=myTag&remoteName=origin");

# Consumer Example

Below is an example route of a consumer that consumes commits:

    from("git:///tmp/testRepo?type=commit")
        .to(....)

# Custom config file

By default, camel-git will load the `.gitconfig` file from the user home
folder. You can override this by providing your own `.gitconfig` file.

    from("git:///tmp/testRepo?type=commit&gitConfigFile=file:/tmp/configfile")
        .to(....); // will load from os dirs

    from("git:///tmp/testRepo?type=commit&gitConfigFile=classpath:configfile")
        .to(....); // will load from resources dir

    from("git:///tmp/testRepo?type=commit&gitConfigFile=http://somedomain.xyz/gitconfigfile")
        .to(....); // will load from http. You could also use https

## Component Configurations

|Name|Description|Default|Type|
|---|---|---|---|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases.
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. 
You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|localPath|Local repository path||string| +|branchName|The branch name to work on||string| +|type|The consumer type||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|allowEmpty|The flag to manage empty git commits|true|boolean| +|operation|The operation to do on the repository||string| +|password|Remote repository password||string| +|remoteName|The remote repository name to use in particular operation like pull||string| +|remotePath|The remote repository path||string| +|tagName|The tag name to work on||string| +|targetBranchName|Name of target branch in merge operation. If not supplied will try to use init.defaultBranch git configs. 
If not configured will use default value|master|string| +|username|Remote repository username||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|gitConfigFile|A String with path to a .gitconfig file||string| diff --git a/camel-github.md b/camel-github.md new file mode 100644 index 0000000000000000000000000000000000000000..3a5fb09e31f761c4a5cdb9386c0d6f000ffde6e9 --- /dev/null +++ b/camel-github.md @@ -0,0 +1,182 @@ +# Github + +**Since Camel 2.15** + +**Both producer and consumer are supported** + +The GitHub component interacts with the GitHub API by encapsulating +[egit-github](https://git.eclipse.org/c/egit/egit-github.git/). It +currently provides polling for new pull requests, pull request comments, +tags, and commits. It is also able to produce comments on pull requests, +as well as close the pull request entirely. + +Rather than webhooks, this endpoint relies on simple polling. Reasons +include: + +- Concern for reliability/stability + +- The types of payloads we’re polling aren’t typically large (plus, + paging is available in the API) + +- The need to support apps running somewhere not publicly accessible + where a webhook would fail + +Note that the GitHub API is fairly expansive. Therefore, this component +could be easily expanded to provide additional interactions. 
Maven users will need to add the following dependency to their pom.xml
for this component:

    org.apache.camel
    camel-github
    ${camel-version}

# URI format

    github://endpoint[?options]

# Configuring authentication

The GitHub component must be configured with an authentication token on
either the component or endpoint level.

For example, to set it on the component:

    GitHubComponent ghc = context.getComponent("github", GitHubComponent.class);
    ghc.setOauthToken("mytoken");

# Consumer Endpoints:

|Endpoint|Context|Body Type|
|---|---|---|
|pullRequest|polling|org.eclipse.egit.github.core.PullRequest|
|pullRequestComment|polling|org.eclipse.egit.github.core.Comment (comment on the general pull request discussion) or org.eclipse.egit.github.core.CommitComment (inline comment on a pull request diff)|
|tag|polling|org.eclipse.egit.github.core.RepositoryTag|
|commit|polling|org.eclipse.egit.github.core.RepositoryCommit|
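As an illustrative sketch of the polling consumers listed above (the `repoOwner`, `repoName`, and token values are placeholders), a route that consumes new pull requests could look like:

    from("github://pullRequest?repoOwner=someOwner&repoName=someRepo&oauthToken=mytoken")
        // the body is the org.eclipse.egit.github.core.PullRequest from the table,
        // so Simple OGNL such as ${body.title} reads its properties
        .log("New pull request: ${body.title}");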

# Producer Endpoints:
|Endpoint|Body|Message Headers|
|---|---|---|
|pullRequestComment|String (comment text)|GitHubPullRequest (integer) (REQUIRED): Pull request number. GitHubInResponseTo (integer): Required if responding to another inline comment on the pull request diff. If left off, a general comment on the pull request discussion is assumed.|
|closePullRequest|none|GitHubPullRequest (integer) (REQUIRED): Pull request number.|
|createIssue|String (issue body text)|GitHubIssueTitle (String) (REQUIRED): Issue Title.|
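As a sketch of the producer endpoints above (owner, repo, token, and the pull request number 42 are placeholder values), a route that posts a comment on a pull request could be:

    from("direct:comment")
        // the message body carries the comment text;
        // the GitHubPullRequest header carries the required PR number
        .setHeader("GitHubPullRequest", constant(42))
        .to("github://pullRequestComment?repoOwner=someOwner&repoName=someRepo&oauthToken=mytoken");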

+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|oauthToken|GitHub OAuth token. Must be configured on either component or endpoint.||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|type|What git operation to execute||object| +|branchName|Name of branch||string| +|repoName|GitHub repository name||string| +|repoOwner|GitHub repository owner (organization)||string| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|startingSha|The starting sha to use for polling commits with the commit consumer. The value can either be a sha for the sha to start from, or use beginning to start from the beginning, or last to start from the last commit.|last|string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|eventFetchStrategy|To specify a custom strategy that configures how the EventsConsumer fetches events.||object| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|encoding|To use the given encoding when getting a git commit file||string| +|state|To set git commit status state||string| +|targetUrl|To set git commit status target url||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed, then creating and starting the producer may take a little time and prolong the total processing time.|false|boolean|
|backoffErrorThreshold|The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in.||integer|
|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier should kick in.||integer|
|backoffMultiplier|To let the scheduled polling consumer back off if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt happens again. When this option is in use, then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer|
|delay|Milliseconds before the next poll.|500|integer|
|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean|
|initialDelay|Milliseconds before the first poll starts.|1000|integer|
|repeatCount|Specifies a maximum limit on the number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer|
|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object|
|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single-threaded thread pool.||object|
|scheduler|To use a cron scheduler from either the camel-spring or camel-quartz component.
Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| +|oauthToken|GitHub OAuth token. Must be configured on either component or endpoint.||string| diff --git a/camel-google-bigquery-sql.md b/camel-google-bigquery-sql.md new file mode 100644 index 0000000000000000000000000000000000000000..741ace6d7f9f26749016b5b71b0ec96c9f75e306 --- /dev/null +++ b/camel-google-bigquery-sql.md @@ -0,0 +1,104 @@ +# Google-bigquery-sql + +**Since Camel 2.23** + +**Only producer is supported** + +The Google BigQuery SQL component provides access to [Cloud BigQuery +Infrastructure](https://cloud.google.com/bigquery/) via the [Google +Client Services +API](https://developers.google.com/apis-explorer/#p/bigquery/v2/bigquery.jobs.query). + +The current implementation supports only standard SQL [DML +queries](https://cloud.google.com/bigquery/docs/reference/standard-sql/dml-syntax). + +Maven users will need to add the following dependency to their pom.xml +for this component: + + + org.apache.camel + camel-google-bigquery + + x.x.x + + +# Authentication Configuration + +Google BigQuery component authentication is targeted for use with the +GCP Service Accounts. For more information please refer to [Google Cloud +Platform Auth Guide](https://cloud.google.com/docs/authentication) + +Google security credentials can be set explicitly by providing the path +to the GCP credentials file location. 
+ +Or they are set implicitly, where the connection factory falls back on +[Application Default +Credentials](https://developers.google.com/identity/protocols/application-default-credentials#howtheywork). + +When you have the **service account key** you can provide authentication +credentials to your application code. Google security credentials can be +set through the component endpoint: + + String endpoint = "google-bigquery-sql://project-id:query?serviceAccountKey=/home/user/Downloads/my-key.json"; + +You can also use the base64 encoded content of the authentication +credentials file if you don’t want to set a file system path. + + String endpoint = "google-bigquery-sql://project-id:query?serviceAccountKey=base64:"; + +Or by setting the environment variable `GOOGLE_APPLICATION_CREDENTIALS` +: + + export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/my-key.json" + +# URI Format + + google-bigquery-sql://project-id:query?[options] + +Examples: + + google-bigquery-sql://project-17248459:delete * from test.table where id=@myId + + google-bigquery-sql://project-17248459:delete * from ${datasetId}.${tableId} where id=@myId + +where + +- parameters in form ${name} are extracted from message headers and + formed the translated query. + +- parameters in form @name are extracted from body or message headers + and sent to Google Bigquery. The + `com.google.cloud.bigquery.StandardSQLTypeName` of the parameter is + detected from the type of the parameter using + ` QueryParameterValue.of(T value, Class type)` + +You can externalize your SQL queries to files in the classpath or file +system as shown: + + google-bigquery-sql://project-17248459::classpath:delete.sql + +# Producer Endpoints + +Google BigQuery SQL endpoint expects the payload to be either empty or a +map of query parameters. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|connectionFactory|ConnectionFactory to obtain connection to Bigquery Service. 
If not provided the default one will be used||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|projectId|Google Cloud Project Id||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|projectId|Google Cloud Project Id||string| +|queryString|BigQuery standard SQL query||string| +|connectionFactory|ConnectionFactory to obtain connection to Bigquery Service. If not provided the default one will be used||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|serviceAccountKey|Service account key in json format to authenticate an application as a service account to google cloud platform||string|
diff --git a/camel-google-bigquery.md b/camel-google-bigquery.md
new file mode 100644
index 0000000000000000000000000000000000000000..9e47fbec5b52a6b8508b93fd41b93ab2e872deaa
--- /dev/null
+++ b/camel-google-bigquery.md
@@ -0,0 +1,137 @@
# Google-bigquery

**Since Camel 2.20**

**Only producer is supported**

The Google BigQuery component provides access to [Cloud BigQuery
Infrastructure](https://cloud.google.com/bigquery/) via the [Google
Client Services
API](https://developers.google.com/api-client-library/java/apis/bigquery/v2).

The current implementation does not use gRPC.

The current implementation does not support querying BigQuery, i.e., it
is a producer only.

Maven users will need to add the following dependency to their pom.xml
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-google-bigquery</artifactId>
        <!-- use the same version as your Camel core version -->
        <version>x.x.x</version>
    </dependency>

# Authentication Configuration

Google BigQuery component authentication is targeted for use with the
GCP Service Accounts. For more information, please refer to the [Google
Cloud Platform Auth Guide](https://cloud.google.com/docs/authentication).

Google security credentials can be set explicitly by providing the path
to the GCP credentials file location.

Or they are set implicitly, where the connection factory falls back on
[Application Default
Credentials](https://developers.google.com/identity/protocols/application-default-credentials#howtheywork).

When you have the **service account key**, you can provide
authentication credentials to your application code. 
Google security
credentials can be set through the component endpoint:

    String endpoint = "google-bigquery://project-id:datasetId[:tableId]?serviceAccountKey=/home/user/Downloads/my-key.json";

You can also use the base64 encoded content of the authentication
credentials file if you don’t want to set a file system path.

    String endpoint = "google-bigquery://project-id:datasetId[:tableId]?serviceAccountKey=base64:";

Or by setting the environment variable `GOOGLE_APPLICATION_CREDENTIALS`:

    export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/my-key.json"

# URI Format

    google-bigquery://project-id:datasetId[:tableId]?[options]

# Producer Endpoints

Producer endpoints can accept and deliver to BigQuery individual and
grouped exchanges alike. Grouped exchanges have the
`Exchange.GROUPED_EXCHANGE` property set.

The Google BigQuery producer will send a grouped exchange in a single
API call unless different table suffixes or partition decorators are
specified. In that case, it will break the exchange down to ensure data
is written with the correct suffix or partition decorator.

The Google BigQuery endpoint expects the payload to be either a map or a
list of maps. A payload containing a map will insert a single row, and a
payload containing a list of maps will insert a row for each entry in
the list.

# Template tables

Reference:
[https://cloud.google.com/bigquery/streaming-data-into-bigquery#template-tables](https://cloud.google.com/bigquery/streaming-data-into-bigquery#template-tables)

Templated tables can be specified using the
`GoogleBigQueryConstants.TABLE_SUFFIX` header. For example, the
following route will create tables and insert records sharded on a
per-day basis:

    from("direct:start")
        .setHeader(GoogleBigQueryConstants.TABLE_SUFFIX, simple("_${date:now:yyyyMMdd}"))
        .to("google-bigquery:sampleDataset:sampleTable");

Note: it is recommended to use partitioning for this use case. 
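For reference, the `_${date:now:yyyyMMdd}` table suffix above can be reproduced in plain Java. This is a minimal sketch (class name hypothetical; a fixed date is used in place of "now" so the output is deterministic):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class TableSuffixExample {
    public static void main(String[] args) {
        // Per-day table suffix, mirroring the _${date:now:yyyyMMdd} simple
        // expression; a fixed date stands in for the current date.
        LocalDate date = LocalDate.of(2024, 1, 31);
        String suffix = "_" + date.format(DateTimeFormatter.ofPattern("yyyyMMdd"));
        System.out.println(suffix);  // -> _20240131
    }
}
```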
# Partitioning

Reference:
[https://cloud.google.com/bigquery/docs/creating-partitioned-tables](https://cloud.google.com/bigquery/docs/creating-partitioned-tables)

Partitioning is specified when creating a table and, if set, data will
be automatically partitioned into separate tables. When inserting data,
a specific partition can be specified by setting the
`GoogleBigQueryConstants.PARTITION_DECORATOR` header on the exchange.

# Ensuring data consistency

Reference:
[https://cloud.google.com/bigquery/streaming-data-into-bigquery#dataconsistency](https://cloud.google.com/bigquery/streaming-data-into-bigquery#dataconsistency)

An insert id can be set on the exchange with the header
`GoogleBigQueryConstants.INSERT_ID` or by specifying the query parameter
`useAsInsertId`. As an insert id needs to be specified per inserted row,
the exchange header can’t be used when the payload is a list. If the
payload is a list, the `GoogleBigQueryConstants.INSERT_ID` header will
be ignored. In that case, use the query parameter `useAsInsertId`.

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|connectionFactory|ConnectionFactory to obtain connection to Bigquery Service. If not provided the default one will be used||object|
|datasetId|BigQuery Dataset Id||string|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|projectId|Google Cloud Project Id||string|
|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|projectId|Google Cloud Project Id||string| +|datasetId|BigQuery Dataset Id||string| +|tableId|BigQuery table id||string| +|connectionFactory|ConnectionFactory to obtain connection to Bigquery Service. If not provided the default one will be used||object| +|useAsInsertId|Field name to use as insert id||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|serviceAccountKey|Service account key in json format to authenticate an application as a service account to google cloud platform||string| diff --git a/camel-google-calendar-stream.md b/camel-google-calendar-stream.md new file mode 100644 index 0000000000000000000000000000000000000000..f45617c8456b98805533bbf95c3eb98ada7d9bf7 --- /dev/null +++ b/camel-google-calendar-stream.md @@ -0,0 +1,116 @@ +# Google-calendar-stream + +**Since Camel 2.23** + +**Only consumer is supported** + +The Google Calendar Stream component provides access to +[Calendar](https://calendar.google.com) via the [Google Calendar Web +APIs](https://developers.google.com/calendar/overview). This component +provides the streaming feature for Calendar events. + +Google Calendar uses the [OAuth 2.0 +protocol](https://developers.google.com/accounts/docs/OAuth2) for +authenticating a Google account and authorizing access to user data. +Before you can use this component, you will need to [create an account +and generate OAuth +credentials](https://developers.google.com/calendar/auth). Credentials +consist of a `clientId`, `clientSecret`, and a `refreshToken`. A handy +resource for generating a long-lived `refreshToken` is the [OAuth +playground](https://developers.google.com/oauthplayground). + +In the case of a [service +account](https://developers.google.com/identity/protocols/oauth2#serviceaccount), +credentials consist of a JSON-file (serviceAccountKey). You can also use +[delegation domain-wide +authority](https://developers.google.com/identity/protocols/oauth2/service-account#delegatingauthority) +(delegate) and one, several, or all possible [Calendar API Auth +Scopes](https://developers.google.com/calendar/api/guides/auth). 
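The `serviceAccountKey` option (listed in the tables below) also accepts the base64-encoded content of the key file with the prefix `base64:`. A minimal sketch of producing such a value (the key JSON content and class name are hypothetical placeholders):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ServiceAccountKeyEncoder {
    public static void main(String[] args) {
        // Placeholder for the contents of a real service account JSON key file.
        String keyJson = "{\"type\":\"service_account\",\"project_id\":\"demo\"}";
        String encoded = Base64.getEncoder()
                .encodeToString(keyJson.getBytes(StandardCharsets.UTF_8));
        // Pass the result as serviceAccountKey=base64:<encoded> in the endpoint URI.
        System.out.println("serviceAccountKey=base64:" + encoded);
    }
}
```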
Maven users will need to add the following dependency to their pom.xml
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-google-calendar</artifactId>
        <!-- use the same version as your Camel core version -->
        <version>x.x.x</version>
    </dependency>

# URI Format

The Google Calendar Stream component uses the following URI format:

    google-calendar-stream://index?[options]

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|applicationName|Google Calendar application name. Example would be camel-google-calendar/1.0||string|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|calendarId|The calendarId to be used|primary|string|
|clientId|Client ID of the calendar application||string|
|configuration|The configuration||object|
|considerLastUpdate|Take into account the lastUpdate of the last event polled as start date for the next poll|false|boolean|
|consumeFromNow|Consume events in the selected calendar from now on|true|boolean|
|delegate|Delegate for wide-domain service account||string|
|maxResults|Max results to be returned|10|integer|
|query|The query to execute on calendar||string|
|scopes|Specifies the level of permissions you want a calendar application to have to a user account. 
See https://developers.google.com/calendar/auth for more info.||array| +|syncFlow|Sync events, see https://developers.google.com/calendar/v3/sync Note: not compatible with: 'query' and 'considerLastUpdate' parameters|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|clientFactory|The client Factory||object| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|accessToken|OAuth 2 access token. This typically expires after an hour so refreshToken is recommended for long term usage.||string| +|clientSecret|Client secret of the calendar application||string| +|emailAddress|The emailAddress of the Google Service Account.||string| +|p12FileName|The name of the p12 file which has the private key to use with the Google Service Account.||string| +|refreshToken|OAuth 2 refresh token. Using this, the Google Calendar component can obtain a new accessToken whenever the current one expires - a necessity if the application is long-lived.||string| +|serviceAccountKey|Service account key in json format to authenticate an application as a service account. 
Accept base64 adding the prefix base64:||string| +|user|The email address of the user the application is trying to impersonate in the service account flow.||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|index|Specifies an index for the endpoint||string| +|applicationName|Google Calendar application name. Example would be camel-google-calendar/1.0||string| +|calendarId|The calendarId to be used|primary|string| +|clientId|Client ID of the calendar application||string| +|considerLastUpdate|Take into account the lastUpdate of the last event polled as start date for the next poll|false|boolean| +|consumeFromNow|Consume events in the selected calendar from now on|true|boolean| +|delegate|Delegate for wide-domain service account||string| +|maxResults|Max results to be returned|10|integer| +|query|The query to execute on calendar||string| +|scopes|Specifies the level of permissions you want a calendar application to have to a user account. See https://developers.google.com/calendar/auth for more info.||array| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|syncFlow|Sync events, see https://developers.google.com/calendar/v3/sync Note: not compatible with: 'query' and 'considerLastUpdate' parameters|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. 
In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. 
If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| +|accessToken|OAuth 2 access token. This typically expires after an hour so refreshToken is recommended for long term usage.||string| +|clientSecret|Client secret of the calendar application||string| +|emailAddress|The emailAddress of the Google Service Account.||string| +|p12FileName|The name of the p12 file which has the private key to use with the Google Service Account.||string| +|refreshToken|OAuth 2 refresh token. Using this, the Google Calendar component can obtain a new accessToken whenever the current one expires - a necessity if the application is long-lived.||string| +|serviceAccountKey|Service account key in json format to authenticate an application as a service account. 
Accept base64 adding the prefix base64:||string|
|user|The email address of the user the application is trying to impersonate in the service account flow.||string|
diff --git a/camel-google-calendar.md b/camel-google-calendar.md
new file mode 100644
index 0000000000000000000000000000000000000000..23141d901a96d61e162bc6302d0fc38ca0731716
--- /dev/null
+++ b/camel-google-calendar.md
@@ -0,0 +1,100 @@
# Google-calendar

**Since Camel 2.15**

**Both producer and consumer are supported**

The Google Calendar component provides access to [Google
Calendar](http://google.com/calendar) via the [Google Calendar Web
APIs](https://developers.google.com/google-apps/calendar/v3/reference/).

Google Calendar uses the [OAuth 2.0
protocol](https://developers.google.com/accounts/docs/OAuth2) for
authenticating a Google account and authorizing access to user data.
Before you can use this component, you will need to [create an account
and generate OAuth
credentials](https://developers.google.com/google-apps/calendar/auth).
Credentials consist of a `clientId`, `clientSecret`, and a
`refreshToken`. A handy resource for generating a long-lived
`refreshToken` is the [OAuth
playground](https://developers.google.com/oauthplayground).

In the case of a [service
account](https://developers.google.com/identity/protocols/oauth2#serviceaccount),
credentials consist of a JSON-file (serviceAccountKey). You can also use
[delegation domain-wide
authority](https://developers.google.com/identity/protocols/oauth2/service-account#delegatingauthority)
(delegate) and one, several, or all possible [Calendar API Auth
Scopes](https://developers.google.com/calendar/api/guides/auth).

Maven users will need to add the following dependency to their pom.xml
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-google-calendar</artifactId>
        <!-- use the same version as your Camel core version -->
        <version>x.x.x</version>
    </dependency>

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|applicationName|Google calendar application name. 
Example would be camel-google-calendar/1.0||string| +|clientId|Client ID of the calendar application||string| +|configuration|To use the shared configuration||object| +|delegate|Delegate for wide-domain service account||string| +|scopes|Specifies the level of permissions you want a calendar application to have to a user account. You can separate multiple scopes by comma. See https://developers.google.com/google-apps/calendar/auth for more info.|https://www.googleapis.com/auth/calendar|array| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|clientFactory|To use the GoogleCalendarClientFactory as factory for creating the client. Will by default use BatchGoogleCalendarClientFactory||object| +|accessToken|OAuth 2 access token. This typically expires after an hour so refreshToken is recommended for long term usage.||string| +|clientSecret|Client secret of the calendar application||string| +|emailAddress|The emailAddress of the Google Service Account.||string| +|p12FileName|The name of the p12 file which has the private key to use with the Google Service Account.||string| +|refreshToken|OAuth 2 refresh token. Using this, the Google Calendar component can obtain a new accessToken whenever the current one expires - a necessity if the application is long-lived.||string| +|serviceAccountKey|Service account key in json format to authenticate an application as a service account. Accept base64 adding the prefix base64:||string| +|user|The email address of the user the application is trying to impersonate in the service account flow||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|apiName|What kind of operation to perform||object| +|methodName|What sub operation to use for the selected operation||string| +|applicationName|Google calendar application name. Example would be camel-google-calendar/1.0||string| +|clientId|Client ID of the calendar application||string| +|delegate|Delegate for wide-domain service account||string| +|inBody|Sets the name of a parameter to be passed in the exchange In Body||string| +|scopes|Specifies the level of permissions you want a calendar application to have to a user account. 
You can separate multiple scopes by comma. See https://developers.google.com/google-apps/calendar/auth for more info.|https://www.googleapis.com/auth/calendar|array| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. 
By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| +|accessToken|OAuth 2 access token. This typically expires after an hour so refreshToken is recommended for long term usage.||string| +|clientSecret|Client secret of the calendar application||string| +|emailAddress|The emailAddress of the Google Service Account.||string| +|p12FileName|The name of the p12 file which has the private key to use with the Google Service Account.||string| +|refreshToken|OAuth 2 refresh token. Using this, the Google Calendar component can obtain a new accessToken whenever the current one expires - a necessity if the application is long-lived.||string| +|serviceAccountKey|Service account key in json format to authenticate an application as a service account. 
Accept base64 adding the prefix base64:||string| +|user|The email address of the user the application is trying to impersonate in the service account flow||string| diff --git a/camel-google-drive.md b/camel-google-drive.md new file mode 100644 index 0000000000000000000000000000000000000000..2a2c21b16e55c6fd6cdacf4f9260d1484fb1369b --- /dev/null +++ b/camel-google-drive.md @@ -0,0 +1,106 @@ +# Google-drive + +**Since Camel 2.14** + +**Both producer and consumer are supported** + +The Google Drive component provides access to the [Google Drive file +storage service](http://drive.google.com) via the [Google Drive Web +APIs](https://developers.google.com/drive/v2/reference). + +Google Drive uses the [OAuth 2.0 +protocol](https://developers.google.com/accounts/docs/OAuth2) for +authenticating a Google account and authorizing access to user data. +Before you can use this component, you will need to [create an account +and generate OAuth +credentials](https://developers.google.com/drive/web/auth/web-server). +Credentials consist of a `clientId`, `clientSecret`, and a +`refreshToken`. A handy resource for generating a long-lived +`refreshToken` is the [OAuth +playground](https://developers.google.com/oauthplayground). + +In the case of a [service +account](https://developers.google.com/identity/protocols/oauth2#serviceaccount), +credentials consist of a JSON-file (serviceAccountKey). You can also use +[delegation domain-wide +authority](https://developers.google.com/identity/protocols/oauth2/service-account#delegatingauthority) +(delegate) and one, several, or all possible [Drive API (V2) Auth +Scopes](https://developers.google.com/drive/api/v2/about-auth). 
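To tie the OAuth options together, here is a sketch of assembling a google-drive endpoint URI with the OAuth credentials passed as query options. All credential values are hypothetical placeholders, and `drive-files`/`list` are used as example endpoint-prefix and endpoint names:

```java
public class DriveEndpointUri {
    public static void main(String[] args) {
        // Hypothetical OAuth credentials (e.g. generated via the OAuth playground).
        String clientId = "my-client-id.apps.googleusercontent.com";
        String clientSecret = "my-client-secret";
        String refreshToken = "my-refresh-token";

        // endpoint-prefix/endpoint in the path, credentials as query options.
        String uri = "google-drive://drive-files/list"
                + "?clientId=" + clientId
                + "&clientSecret=" + clientSecret
                + "&refreshToken=" + refreshToken;
        System.out.println(uri);
    }
}
```

In a route, such a string would typically be passed to `.to(...)` rather than printed.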
Maven users will need to add the following dependency to their pom.xml
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-google-drive</artifactId>
        <!-- use the same version as your Camel core version -->
        <version>x.x.x</version>
    </dependency>

# URI Format

The Google Drive component uses the following URI format:

    google-drive://endpoint-prefix/endpoint?[options]

# More Information

For more information on the endpoints and options, see the API
documentation at:
[https://developers.google.com/drive/v2/reference/](https://developers.google.com/drive/v2/reference/)

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|applicationName|Google drive application name. Example would be camel-google-drive/1.0||string|
|clientId|Client ID of the drive application||string|
|configuration|To use the shared configuration||object|
|delegate|Delegate for wide-domain service account||string|
|scopes|Specifies the level of permissions you want a drive application to have to a user account. See https://developers.google.com/drive/web/scopes for more info.||array|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|clientFactory|To use the GoogleCalendarClientFactory as factory for creating the client. Will by default use BatchGoogleDriveClientFactory||object| +|accessToken|OAuth 2 access token. This typically expires after an hour so refreshToken is recommended for long term usage.||string| +|clientSecret|Client secret of the drive application||string| +|refreshToken|OAuth 2 refresh token. Using this, the Google Calendar component can obtain a new accessToken whenever the current one expires - a necessity if the application is long-lived.||string| +|serviceAccountKey|Service account key in json format to authenticate an application as a service account. Accept base64 adding the prefix base64:||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|apiName|What kind of operation to perform||object| +|methodName|What sub operation to use for the selected operation||string| +|applicationName|Google drive application name. 
Example would be camel-google-drive/1.0||string| +|clientFactory|To use the GoogleCalendarClientFactory as factory for creating the client. Will by default use BatchGoogleDriveClientFactory||object| +|clientId|Client ID of the drive application||string| +|delegate|Delegate for wide-domain service account||string| +|inBody|Sets the name of a parameter to be passed in the exchange In Body||string| +|scopes|Specifies the level of permissions you want a drive application to have to a user account. See https://developers.google.com/drive/web/scopes for more info.||array| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. 
When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| +|accessToken|OAuth 2 access token. This typically expires after an hour so refreshToken is recommended for long term usage.||string| +|clientSecret|Client secret of the drive application||string| +|refreshToken|OAuth 2 refresh token. 
Using this, the Google Calendar component can obtain a new accessToken whenever the current one expires - a necessity if the application is long-lived.||string| +|serviceAccountKey|Service account key in json format to authenticate an application as a service account. Accept base64 adding the prefix base64:||string| diff --git a/camel-google-functions.md b/camel-google-functions.md new file mode 100644 index 0000000000000000000000000000000000000000..83a3b6e5d1532e733916d1901bdae997c63fd05e --- /dev/null +++ b/camel-google-functions.md @@ -0,0 +1,210 @@ +# Google-functions + +**Since Camel 3.9** + +**Only producer is supported** + +The Google Functions component provides access to [Google Cloud +Functions](https://cloud.google.com/functions/) via the [Google Cloud +Functions Client for +Java](https://github.com/googleapis/java-functions). + +Maven users will need to add the following dependency to their pom.xml +for this component: + + + org.apache.camel + camel-google-functions + + x.x.x + + +# Authentication Configuration + +Google Functions component authentication is targeted for use with the +GCP Service Accounts. For more information, please refer to [Google +Cloud +Authentication](https://github.com/googleapis/google-cloud-java#authentication). + +When you have the **service account key**, you can provide +authentication credentials to your application code. 
Google security +credentials can be set through the component endpoint: + + String endpoint = "google-functions://myCamelFunction?serviceAccountKey=/home/user/Downloads/my-key.json"; + +Or by setting the environment variable `GOOGLE_APPLICATION_CREDENTIALS` +: + + export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/my-key.json" + +# URI Format + + google-functions://functionName[?options] + +You can append query options to the URI in the following format, +`?options=value&option2=value&...` + +For example, to call the function `myCamelFunction` from the project +`myProject` and location `us-central1`, use the following snippet: + + from("direct:start") + .to("google-functions://myCamelFunction?project=myProject&location=us-central1&operation=callFunction&serviceAccountKey=/home/user/Downloads/my-key.json"); + +# Usage + +## Google Functions Producer operations + +Google Functions component provides the following operation on the +producer side: + +- listFunctions + +- getFunction + +- callFunction + +- generateDownloadUrl + +- generateUploadUrl + +- createFunction + +- updateFunction + +- deleteFunction + +If you don’t specify an operation by default, the producer will use the +`callFunction` operation. 
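The defaulting rule above (no `operation` parameter means `callFunction`) can be sketched as a tiny helper; the class and method names are hypothetical, and the operation set is taken verbatim from the bullet list above:

```java
import java.util.Set;

// Illustrative sketch of the documented defaulting rule for the
// google-functions producer: when no operation is configured,
// callFunction is used. Names are hypothetical, not Camel internals.
public class FunctionsOperationSketch {

    private static final Set<String> OPERATIONS = Set.of(
            "listFunctions", "getFunction", "callFunction",
            "generateDownloadUrl", "generateUploadUrl",
            "createFunction", "updateFunction", "deleteFunction");

    public static String resolveOperation(String requested) {
        if (requested == null || requested.isBlank()) {
            return "callFunction"; // documented default
        }
        if (!OPERATIONS.contains(requested)) {
            throw new IllegalArgumentException("Unknown operation: " + requested);
        }
        return requested;
    }

    public static void main(String[] args) {
        System.out.println(resolveOperation(null));          // callFunction
        System.out.println(resolveOperation("getFunction")); // getFunction
    }
}
```

In route terms this means `google-functions://myCamelFunction?project=...&location=...` with no `operation` query parameter behaves like an explicit `operation=callFunction`.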
+ +## Advanced component configuration + +If you need to have more control over the `client` instance +configuration, you can create your own instance and refer to it in your +Camel google-functions component configuration: + + from("direct:start") + .to("google-functions://myCamelFunction?client=#myClient"); + +## Google Functions Producer Operation examples + +- `ListFunctions`: This operation invokes the Google Functions client + and gets the list of cloud Functions + + + + //list functions + from("direct:start") + .to("google-functions://myCamelFunction?serviceAccountKey=/home/user/Downloads/my-key.json&project=myProject&location=us-central1&operation=listFunctions") + .log("body:${body}") + +This operation will get the list of cloud functions for the project +`myProject` and location `us-central1`. + +- `GetFunction`: this operation gets the Cloud Functions object + + + + //get function + from("direct:start") + .to("google-functions://myCamelFunction?serviceAccountKey=/home/user/Downloads/my-key.json&project=myProject&location=us-central1&operation=getFunction") + .log("body:${body}") + .to("mock:result"); + +This operation will get the `CloudFunction` object for the project +`myProject`, location `us-central1` and functionName `myCamelFunction`. + +- `CallFunction`: this operation calls the function using an HTTP + request + + + + //call function + from("direct:start") + .process(exchange -> { + exchange.getIn().setBody("just a message"); + }) + .to("google-functions://myCamelFunction?serviceAccountKey=/home/user/Downloads/my-key.json&project=myProject&location=us-central1&operation=callFunction") + .log("body:${body}") + .to("mock:result"); + +- `GenerateDownloadUrl`: this operation generates the signed URL for + downloading deployed function source code. 
+ + + + //generate download url + from("direct:start") + .to("google-functions://myCamelFunction?serviceAccountKey=/home/user/Downloads/my-key.json&project=myProject&location=us-central1&operation=generateDownloadUrl") + .log("body:${body}") + .to("mock:result"); + +- `GenerateUploadUrl`: this operation generates a signed URL for + uploading a function source code. + + + + from("direct:start") + .to("google-functions://myCamelFunction?serviceAccountKey=/home/user/Downloads/my-key.json&project=myProject&location=us-central1&operation=generateUploadUrl") + .log("body:${body}") + .to("mock:result"); + +- `createFunction`: this operation creates a new function. + + + + from("direct:start") + .process(exchange -> { + exchange.getIn().setHeader(GoogleCloudFunctionsConstants.ENTRY_POINT, "com.example.Example"); + exchange.getIn().setHeader(GoogleCloudFunctionsConstants.RUNTIME, "java11"); + exchange.getIn().setHeader(GoogleCloudFunctionsConstants.SOURCE_ARCHIVE_URL, "gs://myBucket/source.zip"); + }) + .to("google-functions://myCamelFunction?serviceAccountKey=/home/user/Downloads/my-key.json&project=myProject&location=us-central1&operation=createFunction") + .log("body:${body}") + .to("mock:result"); + +- `updateFunction`: this operation updates existing function. + + + + from("direct:start") + .process(exchange -> { + UpdateFunctionRequest request = UpdateFunctionRequest.newBuilder() + .setFunction(CloudFunction.newBuilder().build()) + .setUpdateMask(FieldMask.newBuilder().build()).build(); + exchange.getIn().setBody(request); + }) + .to("google-functions://myCamelFunction?serviceAccountKey=/home/user/Downloads/my-key.json&project=myProject&location=us-central1&operation=updateFunction&pojoRequest=true") + .log("body:${body}") + .to("mock:result"); + +- `deleteFunction`: this operation Deletes a function with the given + name from the specified project. 
+ + + + from("direct:start") + .to("google-functions://myCamelFunction?serviceAccountKey=/home/user/Downloads/my-key.json&project=myProject&location=us-central1&operation=deleteFunction") + .log("body:${body}") + .to("mock:result"); + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|functionName|The user-defined name of the function||string| +|serviceAccountKey|Service account key to authenticate an application as a service account||string| +|location|The Google Cloud Location (Region) where the Function is located||string| +|operation|The operation to perform on the producer.||object| +|pojoRequest|Specifies if the request is a pojo request|false|boolean| +|project|The Google Cloud Project name where the Function is located||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|client|The client to use during service invocation.||object| diff --git a/camel-google-mail-stream.md b/camel-google-mail-stream.md new file mode 100644 index 0000000000000000000000000000000000000000..a54b6ae1a77a594156654bd3bb9bb53a27cfda72 --- /dev/null +++ b/camel-google-mail-stream.md @@ -0,0 +1,109 @@ +# Google-mail-stream + +**Since Camel 2.22** + +**Only consumer is supported** + +The Google Mail component provides access to [Gmail](http://gmail.com/) +via the [Google Mail Web +APIs](https://developers.google.com/gmail/api/v1/reference/). This +component provides the streaming feature for Messages. + +Google Mail uses the [OAuth 2.0 +protocol](https://developers.google.com/accounts/docs/OAuth2) for +authenticating a Google account and authorizing access to user data. +Before you can use this component, you will need to [create an account +and generate OAuth +credentials](https://developers.google.com/gmail/api/auth/web-server). +Credentials consist of a `clientId`, `clientSecret`, and a +`refreshToken`. A handy resource for generating a long-lived +`refreshToken` is the [OAuth +playground](https://developers.google.com/oauthplayground). + +In the case of a [service +account](https://developers.google.com/identity/protocols/oauth2#serviceaccount), +credentials consist of a JSON-file (serviceAccountKey). 
You can also use +[delegation domain-wide +authority](https://developers.google.com/identity/protocols/oauth2/service-account#delegatingauthority) +(delegate) and one, several, or all possible [GMail API Auth +Scopes](https://developers.google.com/gmail/api/auth/scopes). + +Maven users will need to add the following dependency to their pom.xml +for this component: + + + org.apache.camel + camel-google-mail + + x.y.z + + +# URI Format + +The GoogleMail Component uses the following URI format: + + google-mail-stream://index?[options] + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|applicationName|Google mail application name. Example would be camel-google-mail/1.0||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|clientId|Client ID of the mail application||string| +|delegate|Delegate for wide-domain service account||string| +|labels|Comma separated list of labels to take into account||string| +|markAsRead|Mark the message as read once it has been consumed|true|boolean| +|maxResults|Max results to be returned|10|integer| +|query|The query to execute on gmail box|is:unread|string| +|raw|Whether to store the entire email message in an RFC 2822 formatted and base64url encoded string (in JSon format), in the Camel message body.|false|boolean| +|scopes|GMail scopes||array| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|clientFactory|The client Factory||object| +|configuration|The configuration||object| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|accessToken|OAuth 2 access token. This typically expires after an hour so refreshToken is recommended for long term usage.||string| +|clientSecret|Client secret of the mail application||string| +|refreshToken|OAuth 2 refresh token. 
Using this, the Google Calendar component can obtain a new accessToken whenever the current one expires - a necessity if the application is long-lived.||string| +|serviceAccountKey|Sets .json file with credentials for Service account||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|index|Currently not in use||string| +|applicationName|Google mail application name. Example would be camel-google-mail/1.0||string| +|clientId|Client ID of the mail application||string| +|delegate|Delegate for wide-domain service account||string| +|labels|Comma separated list of labels to take into account||string| +|markAsRead|Mark the message as read once it has been consumed|true|boolean| +|maxResults|Max results to be returned|10|integer| +|query|The query to execute on gmail box|is:unread|string| +|raw|Whether to store the entire email message in an RFC 2822 formatted and base64url encoded string (in JSon format), in the Camel message body.|false|boolean| +|scopes|GMail scopes||array| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. 
A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| +|accessToken|OAuth 2 access token. This typically expires after an hour so refreshToken is recommended for long term usage.||string| +|clientSecret|Client secret of the mail application||string| +|refreshToken|OAuth 2 refresh token. Using this, the Google Calendar component can obtain a new accessToken whenever the current one expires - a necessity if the application is long-lived.||string| +|serviceAccountKey|Sets .json file with credentials for Service account||string| diff --git a/camel-google-mail.md b/camel-google-mail.md new file mode 100644 index 0000000000000000000000000000000000000000..a9a42323a57dd8ceaf1620f2468a073eb9ef0c1a --- /dev/null +++ b/camel-google-mail.md @@ -0,0 +1,105 @@ +# Google-mail + +**Since Camel 2.15** + +**Both producer and consumer are supported** + +The Google Mail component provides access to [Gmail](http://gmail.com/) +via the [Google Mail Web +APIs](https://developers.google.com/gmail/api/v1/reference/). 
+ +Google Mail uses the [OAuth 2.0 +protocol](https://developers.google.com/accounts/docs/OAuth2) for +authenticating a Google account and authorizing access to user data. +Before you can use this component, you will need to [create an account +and generate OAuth +credentials](https://developers.google.com/gmail/api/auth/web-server). +Credentials consist of a `clientId`, `clientSecret`, and a +`refreshToken`. A handy resource for generating a long-lived +`refreshToken` is the [OAuth +playground](https://developers.google.com/oauthplayground). + +In the case of a [service +account](https://developers.google.com/identity/protocols/oauth2#serviceaccount), +credentials consist of a JSON-file (serviceAccountKey). You can also use +[delegation domain-wide +authority](https://developers.google.com/identity/protocols/oauth2/service-account#delegatingauthority) +(delegate) and one, several, or all possible [GMail API Auth +Scopes](https://developers.google.com/gmail/api/auth/scopes). + +Maven users will need to add the following dependency to their pom.xml +for this component: + + + org.apache.camel + camel-google-mail + + x.y.z + + +# URI Format + +The GoogleMail Component uses the following URI format: + + google-mail://endpoint-prefix/endpoint?[options] + +# More Information + +For more information on the endpoints and options see API documentation +at: [https://developers.google.com/gmail/api/v1/reference/](https://developers.google.com/gmail/api/v1/reference/) + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|applicationName|Google mail application name. 
Example would be camel-google-mail/1.0||string| +|clientId|Client ID of the mail application||string| +|configuration|To use the shared configuration||object| +|delegate|Delegate for wide-domain service account||string| +|scopes|GMail scopes||array| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|clientFactory|To use the GoogleCalendarClientFactory as factory for creating the client. Will by default use BatchGoogleMailClientFactory||object| +|accessToken|OAuth 2 access token. This typically expires after an hour so refreshToken is recommended for long term usage.||string| +|clientSecret|Client secret of the mail application||string| +|refreshToken|OAuth 2 refresh token. Using this, the Google Calendar component can obtain a new accessToken whenever the current one expires - a necessity if the application is long-lived.||string| +|serviceAccountKey|Service account key in json format to authenticate an application as a service account. Accept base64 adding the prefix base64:||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|apiName|What kind of operation to perform||object| +|methodName|What sub operation to use for the selected operation||string| +|applicationName|Google mail application name. Example would be camel-google-mail/1.0||string| +|clientId|Client ID of the mail application||string| +|delegate|Delegate for wide-domain service account||string| +|inBody|Sets the name of a parameter to be passed in the exchange In Body||string| +|scopes|GMail scopes||array| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. 
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|backoffErrorThreshold|The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in.||integer|
+|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier should kick in.||integer|
+|backoffMultiplier|To let the scheduled polling consumer back off if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt happens again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer|
+|delay|Milliseconds before the next poll.|500|integer|
+|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again if the previous run polled one or more messages.|false|boolean|
+|initialDelay|Milliseconds before the first poll starts.|1000|integer|
+|repeatCount|Specifies a maximum number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer|
+|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object|
+|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single-threaded thread pool.||object|
+|scheduler|To use a cron scheduler from either the camel-spring or camel-quartz component.
Use value spring or quartz for the built-in scheduler|none|object|
+|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz or Spring based schedulers.||object|
+|startScheduler|Whether the scheduler should be auto-started.|true|boolean|
+|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object|
+|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean|
+|accessToken|OAuth 2 access token. This typically expires after an hour, so refreshToken is recommended for long-term usage.||string|
+|clientSecret|Client secret of the mail application||string|
+|refreshToken|OAuth 2 refresh token. Using this, the Google Mail component can obtain a new accessToken whenever the current one expires - a necessity if the application is long-lived.||string|
+|serviceAccountKey|Service account key in JSON format to authenticate an application as a service account. A base64-encoded key is accepted when given the prefix base64:||string|
diff --git a/camel-google-pubsub-lite.md b/camel-google-pubsub-lite.md
new file mode 100644
index 0000000000000000000000000000000000000000..08a89f21b5d790f44fe53f5c32393f211d38198b
--- /dev/null
+++ b/camel-google-pubsub-lite.md
@@ -0,0 +1,127 @@
+# Google-pubsub-lite
+
+**Since Camel 4.6**
+
+**Both producer and consumer are supported**
+
+The Google PubSub Lite component provides access to [Cloud Pub/Sub Lite
+Infrastructure](https://cloud.google.com/pubsub/) via the [Google Cloud
+Pub/Sub Lite Client for
+Java](https://github.com/googleapis/java-pubsublite).
+
+The standard [Google Pub/Sub component](#google-pubsub-component.adoc)
+isn't compatible with the Pub/Sub Lite service due to API and message
+model differences.
Please refer to the following links to learn more about
+these differences:
+
+- [Pub/Sub Lite
+  Overview](https://cloud.google.com/pubsub/docs/overview#lite)
+
+- [Choosing between Pub/Sub or Pub/Sub
+  Lite](https://cloud.google.com/pubsub/docs/choosing-pubsub-or-lite)
+
+Maven users will need to add the following dependency to their pom.xml
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-google-pubsub-lite</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI Format
+
+The Google PubSub Lite component uses the following URI format:
+
+    google-pubsub-lite://project-id:location:destinationName?[options]
+
+Destination Name can be either a topic or a subscription name.
+
+# Producer Endpoints
+
+Google PubSub Lite expects the payload to be a `byte[]` array. Producer
+endpoints will send:
+
+- String body as `byte[]` encoded as UTF-8
+
+- `byte[]` body as is
+
+- Everything else will be serialised into a `byte[]` array
+
+A Map set as message header `GooglePubsubConstants.ATTRIBUTES` will be
+sent as PubSub attributes.
+
+When producing messages, set the message header
+`GooglePubsubConstants.ORDERING_KEY`. This will be set as the PubSub
+Lite orderingKey for the message. You can find more information on
+[Using ordering
+keys](https://cloud.google.com/pubsub/lite/docs/publishing#using_ordering_keys).
+
+# Consumer Endpoints
+
+Google PubSub Lite will redeliver the message if it has not been
+acknowledged within the time period set as a configuration option on the
+subscription.
+
+The component will acknowledge the message once exchange processing has
+been completed.
+
+# Message Body
+
+The consumer endpoint returns the content of the message as `byte[]` -
+exactly as the underlying system sends it. It is up to the route to
+convert/unmarshal the contents.
+
+# Examples
+
+You'll need to provide a serviceAccountKey (or default GCP credentials)
+and a valid project and location to have the following examples working.
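+
+As a minimal configuration sketch (property names follow the standard
+camel-main `camel.component.<name>.<option>` convention; the key path is
+hypothetical), the credentials can be supplied as an application
+property:
+
+    camel.component.google-pubsub-lite.serviceAccountKey = file:/path/to/service-account.json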
+
+## Producer Example
+
+    from("timer://scheduler?fixedRate=true&period=5s")
+        .setBody(simple("Hello World ${date:now:HH:mm:ss.SSS}"))
+        .to("google-pubsub-lite:123456789012:europe-west3-a:my-topic-lite")
+        .log("Message sent: ${body}");
+
+## Consumer Example
+
+    from("google-pubsub-lite:123456789012:europe-west3-a:my-subscription-lite")
+        .log("Message received: ${body}");
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
+|consumerBytesOutstanding|The number of quota bytes that may be outstanding to the client. Must be greater than the allowed size of the largest message (1 MiB).|10485760|integer|
+|consumerMessagesOutstanding|The number of messages that may be outstanding to the client. Must be greater than 0.|1000|integer|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|publisherCacheSize|Maximum number of producers to cache. This could be increased if you have producers for lots of different topics.|100|integer|
+|publisherCacheTimeout|How many milliseconds each producer should stay alive in the cache.|180000|integer|
+|publisherTerminationTimeout|How many milliseconds a producer should be allowed to take to terminate.|60000|integer|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|serviceAccountKey|The Service account key that can be used as credentials for the PubSub Lite publisher/subscriber. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems.||string|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|projectId|The Google Cloud PubSub Lite Project Id||integer|
+|location|The Google Cloud PubSub Lite location||string|
+|destinationName|The Destination Name. For the consumer this will be the subscription name, while for the producer this will be the topic name.||string|
+|loggerId|Logger ID to use when a match to the parent route is required||string|
+|ackMode|AUTO = exchange gets ack'ed/nack'ed on completion. NONE = downstream process has to ack/nack explicitly|AUTO|object|
+|concurrentConsumers|The number of parallel streams consuming from the subscription|1|integer|
+|maxAckExtensionPeriod|Set the maximum period a message ack deadline will be extended.
Value in seconds|3600|integer| +|maxMessagesPerPoll|The max number of messages to receive from the server in a single API call|1|integer| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|pubsubEndpoint|Pub/Sub endpoint to use. 
Required when using message ordering, and ensures that messages are received in order even when multiple publishers are used||string|
+|serializer|A custom GooglePubsubLiteSerializer to use for serializing message payloads in the producer||object|
+|serviceAccountKey|The Service account key that can be used as credentials for the PubSub publisher/subscriber. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems.||string|
diff --git a/camel-google-pubsub.md b/camel-google-pubsub.md
new file mode 100644
index 0000000000000000000000000000000000000000..99ee7928b6d63444d9813bac904bb8ec8876c4b1
--- /dev/null
+++ b/camel-google-pubsub.md
@@ -0,0 +1,175 @@
+# Google-pubsub
+
+**Since Camel 2.19**
+
+**Both producer and consumer are supported**
+
+The Google Pubsub component provides access to [Cloud Pub/Sub
+Infrastructure](https://cloud.google.com/pubsub/) via the [Google Cloud
+Java Client for Google Cloud
+Pub/Sub](https://github.com/googleapis/java-pubsub).
+
+Maven users will need to add the following dependency to their pom.xml
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-google-pubsub</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI Format
+
+The Google Pubsub Component uses the following URI format:
+
+    google-pubsub://project-id:destinationName?[options]
+
+Destination Name can be either a topic or a subscription name.
+
+# Producer Endpoints
+
+Producer endpoints can accept and deliver to PubSub individual and
+grouped exchanges alike. Grouped exchanges have the
+`Exchange.GROUPED_EXCHANGE` property set.
+
+Google PubSub expects the payload to be a `byte[]` array. Producer
+endpoints will send:
+
+- String body as `byte[]` encoded as UTF-8
+
+- `byte[]` body as is
+
+- Everything else will be serialised into a `byte[]` array
+
+A Map set as message header `GooglePubsubConstants.ATTRIBUTES` will be
+sent as PubSub attributes.
+
+Google PubSub supports ordered message delivery.
+
+To enable this, set the option `messageOrderingEnabled` to true, and
+set `pubsubEndpoint` to a GCP region.
+
+When producing messages, set the message header
+`GooglePubsubConstants.ORDERING_KEY`. This will be set as the PubSub
+orderingKey for the message.
+
+For more information, see [Ordering
+messages](https://cloud.google.com/pubsub/docs/ordering).
+
+Once the exchange has been delivered to PubSub, the PubSub Message ID
+will be assigned to the header `GooglePubsubConstants.MESSAGE_ID`.
+
+# Consumer Endpoints
+
+Google PubSub will redeliver the message if it has not been acknowledged
+within the time period set as a configuration option on the
+subscription.
+
+The component will acknowledge the message once exchange processing has
+been completed.
+
+If the route throws an exception, the exchange is marked as failed, and
+the component will NACK the message. It will be redelivered immediately.
+
+To ack/nack the message, the component uses the Acknowledgement ID
+stored in the header `GooglePubsubConstants.ACK_ID`. If the header is
+removed or tampered with, the ack will fail and the message will be
+redelivered again after the ack deadline.
+
+# Message Body
+
+The consumer endpoint returns the content of the message as `byte[]`,
+exactly as the underlying system sends it. It is up to the route to
+convert/unmarshal the contents.
+
+# Authentication Configuration
+
+By default, this component acquires credentials using
+`GoogleCredentials.getApplicationDefault()`. This behavior can be
+disabled by setting the *authenticate* option to `false`, in which case
+requests to the Google API will be made without authentication details.
+This is only desirable when developing against an emulator. This
+behavior can be altered by supplying a path to a service account key
+file.
+
+# Rollback and Redelivery
+
+The rollback for Google PubSub relies on the idea of the Acknowledgement
+Deadline - the time period where Google PubSub expects to receive the
+acknowledgement.
If the acknowledgement has not been received, the
+message is redelivered.
+
+Google provides an API to extend the deadline for a message.
+
+More information is available in the [Google PubSub
+Documentation](https://cloud.google.com/pubsub/docs/subscriber#ack_deadline)
+
+So, rollback is effectively a deadline extension API call with a zero
+value - i.e., the deadline is reached now, and the message can be
+redelivered to the next consumer.
+
+It is possible to delay the message redelivery by setting the
+acknowledgement deadline explicitly for the rollback by setting the
+message header `GooglePubsubConstants.ACK_DEADLINE` to the value in
+seconds.
+
+# Manual Acknowledgement
+
+By default, the PubSub consumer will acknowledge messages once the
+exchange has been processed, or negative-acknowledge them if the
+exchange has failed.
+
+If the *ackMode* option is set to `NONE`, the component will not
+acknowledge messages, and it is up to the route to do so. In this case,
+a `GooglePubsubAcknowledge` object is stored in the header
+`GooglePubsubConstants.GOOGLE_PUBSUB_ACKNOWLEDGE` and can be used to
+acknowledge messages:
+
+    from("google-pubsub:{{project.name}}:{{subscription.name}}?ackMode=NONE")
+        .process(exchange -> {
+            GooglePubsubAcknowledge acknowledge = exchange.getIn().getHeader(GooglePubsubConstants.GOOGLE_PUBSUB_ACKNOWLEDGE, GooglePubsubAcknowledge.class);
+            acknowledge.ack(exchange); // or .nack(exchange)
+        });
+
+Manual acknowledgement works with both the asynchronous and synchronous
+consumers and will use the acknowledgement ID which is stored in
+`GooglePubsubConstants.ACK_ID`.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|authenticate|Use Credentials when interacting with the PubSub service (no authentication is required when using an emulator).|true|boolean|
+|endpoint|Endpoint to use with local Pub/Sub emulator.||string|
+|serviceAccountKey|The Service account key that can be used as credentials for the PubSub publisher/subscriber.
It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|synchronousPullRetryableCodes|Comma-separated list of additional retryable error codes for synchronous pull. By default the PubSub client library retries ABORTED, UNAVAILABLE, UNKNOWN||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|publisherCacheSize|Maximum number of producers to cache. 
This could be increased if you have producers for lots of different topics.||integer| +|publisherCacheTimeout|How many milliseconds should each producer stay alive in the cache.||integer| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|publisherTerminationTimeout|How many milliseconds should a producer be allowed to terminate.||integer| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|projectId|The Google Cloud PubSub Project Id||string| +|destinationName|The Destination Name. For the consumer this will be the subscription name, while for the producer this will be the topic name.||string| +|authenticate|Use Credentials when interacting with PubSub service (no authentication is required when using emulator).|true|boolean| +|loggerId|Logger ID to use when a match to the parent route required||string| +|serviceAccountKey|The Service account key that can be used as credentials for the PubSub publisher/subscriber. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems.||string| +|ackMode|AUTO = exchange gets ack'ed/nack'ed on completion. NONE = downstream process has to ack/nack explicitly|AUTO|object| +|concurrentConsumers|The number of parallel streams consuming from the subscription|1|integer| +|maxAckExtensionPeriod|Set the maximum period a message ack deadline will be extended. 
Value in seconds|3600|integer| +|maxMessagesPerPoll|The max number of messages to receive from the server in a single API call|1|integer| +|synchronousPull|Synchronously pull batches of messages|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|messageOrderingEnabled|Should message ordering be enabled|false|boolean|
+|pubsubEndpoint|Pub/Sub endpoint to use. Required when using message ordering, and ensures that messages are received in order even when multiple publishers are used||string|
+|serializer|A custom GooglePubsubSerializer to use for serializing message payloads in the producer||object|
diff --git a/camel-google-secret-manager.md b/camel-google-secret-manager.md
new file mode 100644
index 0000000000000000000000000000000000000000..aaad199e36e28441246051e42b8b280138f050c1
--- /dev/null
+++ b/camel-google-secret-manager.md
@@ -0,0 +1,313 @@
+# Google-secret-manager
+
+**Since Camel 3.16**
+
+**Only producer is supported**
+
+The Google Secret Manager component provides access to [Google Cloud
+Secret Manager](https://cloud.google.com/secret-manager/).
+
+Maven users will need to add the following dependency to their pom.xml
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-google-secret-manager</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# Authentication Configuration
+
+Google Secret Manager component authentication is targeted for use with
+the GCP Service Accounts. For more information, please refer to [Google
+Cloud
+Authentication](https://github.com/googleapis/google-cloud-java#authentication).
+
+When you have the **service account key**, you can provide
+authentication credentials to your application code.
Google security
+credentials can be set through the component endpoint:
+
+    String endpoint = "google-secret-manager://myProject?serviceAccountKey=/home/user/Downloads/my-key.json";
+
+Or by setting the environment variable `GOOGLE_APPLICATION_CREDENTIALS`:
+
+    export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/my-key.json"
+
+# URI Format
+
+    google-secret-manager://projectId[?options]
+
+You can append query options to the URI in the following format,
+`?option=value&option2=value&...`
+
+For example, to invoke the `createSecret` operation on the project
+`myProject`, use the following snippet:
+
+    from("google-secret-manager://myProject?serviceAccountKey=/home/user/Downloads/my-key.json&operation=createSecret")
+        .to("direct:test");
+
+## Using GCP Secret Manager Properties Source
+
+To use GCP Secret Manager, you need to provide the `serviceAccountKey`
+file and the GCP `projectId`. This can be done using environment
+variables before starting the application:
+
+    export CAMEL_VAULT_GCP_SERVICE_ACCOUNT_KEY=file:////path/to/service.accountkey
+    export CAMEL_VAULT_GCP_PROJECT_ID=projectId
+
+You can also configure the credentials in the `application.properties`
+file such as:
+
+    camel.vault.gcp.serviceAccountKey = serviceAccountKey
+    camel.vault.gcp.projectId = projectId
+
+If you want instead to use the [GCP default client
+instance](https://cloud.google.com/docs/authentication/production),
+you'll need to provide the following env variables:
+
+    export CAMEL_VAULT_GCP_USE_DEFAULT_INSTANCE=true
+    export CAMEL_VAULT_GCP_PROJECT_ID=projectId
+
+You can also configure the credentials in the `application.properties`
+file such as:
+
+    camel.vault.gcp.useDefaultInstance = true
+    camel.vault.gcp.projectId = projectId
+
+At this point you'll be able to reference a property in the following
+way by using `gcp:` as prefix in the `{{ }}` syntax:
+
+    <camelContext>
+        <route>
+            <from uri="direct:start"/>
+            <to uri="{{gcp:route}}"/>
+        </route>
+    </camelContext>
+
+Where `route` will be the name of the secret stored in
the GCP Secret
+Manager Service.
+
+You could specify a default value in case the secret is not present on
+GCP Secret Manager:
+
+    <to uri="{{gcp:route:default}}"/>
+
+In this case, if the secret doesn't exist, the property will fall back
+to `default` as value.
+
+Also, you are able to get a particular field of the secret if you have,
+for example, a secret named database of this form:
+
+    {
+      "username": "admin",
+      "password": "password123",
+      "engine": "postgres",
+      "host": "127.0.0.1",
+      "port": "3128",
+      "dbname": "db"
+    }
+
+You're able to get a single secret value in your route, for example:
+
+    <to uri="{{gcp:database#username}}"/>
+
+Or re-use the property as part of an endpoint.
+
+You could specify a default value in case the particular field of the
+secret is not present on GCP Secret Manager:
+
+    <to uri="{{gcp:database#username:admin}}"/>
+
+In this case, if the secret doesn't exist, or the secret exists but the
+username field is not part of the secret, the property will fall back to
+"admin" as value.
+
+There is also a syntax to get a particular version of the secret for
+both approaches, with field/default value specified or with the secret
+only:
+
+    <to uri="{{gcp:route@1}}"/>
+
+This approach will return the RAW route secret with version *1*.
+
+    <to uri="{{gcp:route:default@1}}"/>
+
+This approach will return the route secret value with version *1* or the
+default value in case the secret doesn't exist or the version doesn't
+exist.
+
+    <to uri="{{gcp:database#username:admin@1}}"/>
+
+This approach will return the username field of the database secret with
+version *1* or admin in case the secret doesn't exist or the version
+doesn't exist.
+
+There are only two requirements:
+
+- Adding the `camel-google-secret-manager` JAR to your Camel application.
- Give the service account used the permissions to do operations at the
+secret management level (for example, accessing the secret payload, or
+being an admin of the Secret Manager service).
+
+## Automatic `CamelContext` reloading on Secret Refresh
+
+Reloading the Camel context on a secret refresh can be enabled by
+specifying the usual credentials (the same ones used for the Google
+Secret Manager property function).
+
+With environment variables:
+
+    export CAMEL_VAULT_GCP_USE_DEFAULT_INSTANCE=true
+    export CAMEL_VAULT_GCP_PROJECT_ID=projectId
+
+or as plain Camel main properties:
+
+    camel.vault.gcp.useDefaultInstance = true
+    camel.vault.gcp.projectId = projectId
+
+Or by specifying a path to a service account key file, instead of using
+the default instance.
+
+To enable the automatic refresh, you'll need additional properties to
+set:
+
+    camel.vault.gcp.projectId=projectId
+    camel.vault.gcp.refreshEnabled=true
+    camel.vault.gcp.refreshPeriod=60000
+    camel.vault.gcp.secrets=hello*
+    camel.vault.gcp.subscriptionName=subscriptionName
+    camel.main.context-reload-enabled = true
+
+where `camel.vault.gcp.refreshEnabled` will enable the automatic context
+reload, `camel.vault.gcp.refreshPeriod` is the interval of time between
+two different checks for update events, and `camel.vault.gcp.secrets` is
+a regex representing the secrets we want to track for updates.
+
+Note that `camel.vault.gcp.secrets` is not mandatory: if not specified,
+the task responsible for checking update events will take into account
+all the properties with the `gcp:` prefix.
+
+The `camel.vault.gcp.subscriptionName` is the subscription name created
+in relation to the Google PubSub topic associated with the tracked
+secrets.
+
+This mechanism makes use of the notification system of Google Secret
+Manager: through this feature, every secret can be associated with from
+one up to ten Google PubSub topics. These topics will receive events
+related to the life cycle of the secret.
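+
+As an untested provisioning sketch (the topic, subscription, secret, and
+project names below are hypothetical), the PubSub side of this wiring
+could be created with the gcloud CLI:
+
+    gcloud pubsub topics create secret-events
+    gcloud pubsub subscriptions create subscriptionName --topic=secret-events
+    gcloud secrets update hello-secret --add-topics=projects/my-project/topics/secret-events
+
+Note that `--add-topics` expects the fully qualified topic resource
+name, and the Secret Manager service agent must be allowed to publish to
+the topic.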
+
+There are only two requirements:
+
+- Adding the `camel-google-secret-manager` JAR to your Camel application.
+- Giving the service account used the permissions to do operations at
+the secret management level (for example, accessing the secret payload,
+or being an admin of the Secret Manager service, and also having
+permissions over the PubSub service).
+
+## Google Secret Manager Producer operations
+
+The Google Secret Manager component provides the following operations on
+the producer side:
+
+- `createSecret`
+
+- `getSecretVersion`
+
+- `deleteSecret`
+
+- `listSecrets`
+
+If you don't specify an operation, the producer will use the
+`createSecret` operation by default.
+
+## Google Secret Manager Producer Operation examples
+
+- `createSecret`: This operation will create a secret in the Secret
+  Manager service
+
+
+
+    from("direct:start")
+        .setHeader(GoogleSecretManagerConstants.SECRET_ID, constant("test"))
+        .setBody(constant("hello"))
+        .to("google-secret-manager://myProject?serviceAccountKey=/home/user/Downloads/my-key.json&operation=createSecret")
+        .log("body:${body}");
+
+- `getSecretVersion`: This operation will retrieve a secret value with
+  the latest version in the Secret Manager service
+
+
+
+    from("direct:start")
+        .setHeader(GoogleSecretManagerConstants.SECRET_ID, constant("test"))
+        .to("google-secret-manager://myProject?serviceAccountKey=/home/user/Downloads/my-key.json&operation=getSecretVersion")
+        .log("body:${body}");
+
+This will log the value of the secret "test".
+
+- `deleteSecret`: This operation will delete a secret
+
+
+
+    from("direct:start")
+        .setHeader(GoogleSecretManagerConstants.SECRET_ID, constant("test"))
+        .to("google-secret-manager://myProject?serviceAccountKey=/home/user/Downloads/my-key.json&operation=deleteSecret");
+
+- `listSecrets`: This operation will return the list of secrets for the
+  project myProject
+
+
+
+    from("direct:start")
+        .setHeader(GoogleSecretManagerConstants.SECRET_ID, constant("test"))
+        .to("google-secret-manager://myProject?serviceAccountKey=/home/user/Downloads/my-key.json&operation=listSecrets");
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component.
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|project|The Google Cloud Project Id name related to the Secret Manager||string| +|serviceAccountKey|Service account key to authenticate an application as a service account||string| +|operation|The operation to perform on the producer.||object| +|pojoRequest|Specifies if the request is a pojo request|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|client|The client to use during service invocation.||object| diff --git a/camel-google-sheets-stream.md b/camel-google-sheets-stream.md new file mode 100644 index 0000000000000000000000000000000000000000..74d334e166eda1665467e995d1e7634f86da8428 --- /dev/null +++ b/camel-google-sheets-stream.md @@ -0,0 +1,152 @@ +# Google-sheets-stream + +**Since Camel 2.23** + +**Only consumer is supported** + +The Google Sheets component provides access to +[Sheets](https://sheets.google.com/) via the [Google Sheets Web +APIs](https://developers.google.com/sheets/api/reference/rest/). + +Google Sheets uses the [OAuth 2.0 +protocol](https://developers.google.com/accounts/docs/OAuth2) for +authenticating a Google account and authorizing access to user data. 
+Before you can use this component, you will need to [create an account
+and generate OAuth
+credentials](https://developers.google.com/google-apps/sheets/auth).
+Credentials consist of a `clientId`, `clientSecret`, and a
+`refreshToken`. A handy resource for generating a long-lived
+`refreshToken` is the [OAuth
+playground](https://developers.google.com/oauthplayground).
+
+In the case of a [service
+account](https://developers.google.com/identity/protocols/oauth2#serviceaccount),
+credentials consist of a JSON file (serviceAccountKey). You can also use
+[domain-wide delegation of
+authority](https://developers.google.com/identity/protocols/oauth2/service-account#delegatingauthority)
+(delegate) and one, several, or all possible [Sheets API Auth
+Scopes](https://developers.google.com/sheets/api/guides/authorizing).
+
+Maven users will need to add the following dependency to their pom.xml
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-google-sheets</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI Format
+
+The Google Sheets Stream component uses the following URI format:
+
+    google-sheets-stream://apiName?[options]
+
+# ValueInputOption
+
+Many of the APIs with Google Sheets require including the following
+header, with one of the enum values:
+
+|Header|Enum|Description|
+|---|---|---|
+|CamelGoogleSheets.ValueInputOption|RAW|The values the user has entered will not be parsed and will be stored as-is.|
+|CamelGoogleSheets.ValueInputOption|USER_ENTERED|The values will be parsed as if the user typed them into the UI. Numbers will stay as numbers, but strings may be converted to numbers, dates, etc. following the same rules that are applied when entering text into a cell via the Google Sheets UI.|
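The consumer options documented below include `splitResults`. The following plain-Java sketch (no Camel or Sheets API involved, names are illustrative only) models what splitting a consumed value range into per-row exchanges means:

```java
import java.util.List;

public class SplitResultsSketch {
    // Returns the message bodies the consumer would emit for one value range:
    // one body per row when split is true (majorDimension=ROWS), otherwise a
    // single body carrying the whole range.
    public static List<?> bodies(List<? extends List<?>> valueRange, boolean split) {
        if (!split) {
            return List.of(valueRange); // one exchange with the whole range
        }
        return List.copyOf(valueRange); // one exchange per row
    }

    public static void main(String[] args) {
        List<List<String>> range = List.of(
                List.of("a1", "b1"),
                List.of("a2", "b2"));
        System.out.println(bodies(range, false).size()); // 1
        System.out.println(bodies(range, true).size());  // 2
    }
}
```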
+
+# More information
+
+For more information on the endpoints and options see API documentation
+at: [https://developers.google.com/sheets/api/reference/rest/](https://developers.google.com/sheets/api/reference/rest/)
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|applicationName|Google Sheets application name. Example would be camel-google-sheets/1.0||string|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|clientId|Client ID of the sheets application||string|
+|configuration|To use the shared configuration||object|
+|delegate|Delegate for wide-domain service account||string|
+|includeGridData|True if grid data should be returned.|false|boolean|
+|majorDimension|Specifies the major dimension that results should use.|ROWS|string|
+|maxResults|Specify the maximum number of returned results. This will limit the number of rows in a returned value range data set or the number of returned value ranges in a batch request.||integer|
+|range|Specifies the range of rows and columns in a sheet to get data from.||string|
+|scopes|Specifies the level of permissions you want a sheets application to have to a user account.
See https://developers.google.com/identity/protocols/googlescopes for more info.||array|
+|splitResults|True if value range result should be split into rows or columns to process each of them individually. When true each row or column is represented with a separate exchange in batch processing. Otherwise the whole value range object is used as a single exchange.|false|boolean|
+|valueRenderOption|Determines how values should be rendered in the output.|FORMATTED\_VALUE|string|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|clientFactory|To use the GoogleSheetsClientFactory as factory for creating the client. Will by default use BatchGoogleSheetsClientFactory||object|
+|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean|
+|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean|
+|accessToken|OAuth 2 access token. This typically expires after an hour so refreshToken is recommended for long term usage.||string|
+|clientSecret|Client secret of the sheets application||string|
+|refreshToken|OAuth 2 refresh token.
Using this, the Google Sheets component can obtain a new accessToken whenever the current one expires - a necessity if the application is long-lived.||string|
+|serviceAccountKey|Sets the .json file with credentials for the service account||string|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|spreadsheetId|Specifies the spreadsheet identifier that is used to identify the target to obtain.||string|
+|applicationName|Google Sheets application name. Example would be camel-google-sheets/1.0||string|
+|clientId|Client ID of the sheets application||string|
+|delegate|Delegate for wide-domain service account||string|
+|includeGridData|True if grid data should be returned.|false|boolean|
+|majorDimension|Specifies the major dimension that results should use.|ROWS|string|
+|maxResults|Specify the maximum number of returned results. This will limit the number of rows in a returned value range data set or the number of returned value ranges in a batch request.||integer|
+|range|Specifies the range of rows and columns in a sheet to get data from.||string|
+|scopes|Specifies the level of permissions you want a sheets application to have to a user account. See https://developers.google.com/identity/protocols/googlescopes for more info.||array|
+|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean|
+|splitResults|True if value range result should be split into rows or columns to process each of them individually. When true each row or column is represented with a separate exchange in batch processing.
Otherwise the whole value range object is used as a single exchange.|false|boolean|
+|valueRenderOption|Determines how values should be rendered in the output.|FORMATTED\_VALUE|string|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use.
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and routed in Camel.||object|
+|backoffErrorThreshold|The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick-in.||integer|
+|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier should kick-in.||integer|
+|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer|
+|delay|Milliseconds before the next poll.|500|integer|
+|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean|
+|initialDelay|Milliseconds before the first poll starts.|1000|integer|
+|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer|
+|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object|
+|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer.
By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| +|accessToken|OAuth 2 access token. This typically expires after an hour so refreshToken is recommended for long term usage.||string| +|clientSecret|Client secret of the sheets application||string| +|refreshToken|OAuth 2 refresh token. Using this, the Google Sheets component can obtain a new accessToken whenever the current one expires - a necessity if the application is long-lived.||string| +|serviceAccountKey|Sets .json file with credentials for Service account||string| diff --git a/camel-google-sheets.md b/camel-google-sheets.md new file mode 100644 index 0000000000000000000000000000000000000000..dc70e2dcdc0d60c8119677ba7048d861d7d6edfa --- /dev/null +++ b/camel-google-sheets.md @@ -0,0 +1,150 @@ +# Google-sheets + +**Since Camel 2.23** + +**Both producer and consumer are supported** + +The Google Sheets component provides access to [Google +Sheets](http://google.com/sheets) via the [Google Sheets Web +APIs](https://developers.google.com/sheets/api/reference/rest/). + +Google Sheets uses the [OAuth 2.0 +protocol](https://developers.google.com/accounts/docs/OAuth2) for +authenticating a Google account and authorizing access to user data. +Before you can use this component, you will need to [create an account +and generate OAuth +credentials](https://developers.google.com/google-apps/sheets/auth). 
+Credentials consist of a `clientId`, `clientSecret`, and a
+`refreshToken`. A handy resource for generating a long-lived
+`refreshToken` is the [OAuth
+playground](https://developers.google.com/oauthplayground).
+
+In the case of a [service
+account](https://developers.google.com/identity/protocols/oauth2#serviceaccount),
+credentials consist of a JSON file (serviceAccountKey). You can also use
+[domain-wide delegation of
+authority](https://developers.google.com/identity/protocols/oauth2/service-account#delegatingauthority)
+(delegate) and one, several, or all possible [Sheets API Auth
+Scopes](https://developers.google.com/sheets/api/guides/authorizing).
+
+Maven users will need to add the following dependency to their pom.xml
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-google-sheets</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI Format
+
+The Google Sheets component uses the following URI format:
+
+    google-sheets://endpoint-prefix/endpoint?[options]
+
+The endpoint prefix can be one of:
+
+- spreadsheets
+
+- data
+
+# ValueInputOption
+
+Many of the APIs with Google Sheets require including the following
+header, with one of the enum values:
+
+|Header|Enum|Description|
+|---|---|---|
+|CamelGoogleSheets.ValueInputOption|RAW|The values the user has entered will not be parsed and will be stored as-is.|
+|CamelGoogleSheets.ValueInputOption|USER_ENTERED|The values will be parsed as if the user typed them into the UI. Numbers will stay as numbers, but strings may be converted to numbers, dates, etc. following the same rules that are applied when entering text into a cell via the Google Sheets UI.|
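The difference between the two enum values can be pictured with a small simulation. This is plain Java that only models the parsing rule described in the table; it does not call the Sheets API:

```java
public class ValueInputOptionSketch {
    public enum ValueInputOption { RAW, USER_ENTERED }

    // Models how a typed cell value would be stored under each option:
    // RAW keeps the string as-is, USER_ENTERED parses numeric input.
    public static Object store(String typed, ValueInputOption option) {
        if (option == ValueInputOption.USER_ENTERED) {
            try {
                return Double.parseDouble(typed); // "42" becomes a number
            } catch (NumberFormatException e) {
                // non-numeric input stays a string
            }
        }
        return typed;
    }

    public static void main(String[] args) {
        System.out.println(store("42", ValueInputOption.RAW).getClass().getSimpleName());          // String
        System.out.println(store("42", ValueInputOption.USER_ENTERED).getClass().getSimpleName()); // Double
    }
}
```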
+
+# More information
+
+For more information on the endpoints and options see API documentation
+at: [https://developers.google.com/sheets/api/reference/rest/](https://developers.google.com/sheets/api/reference/rest/)
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|applicationName|Google Sheets application name. Example would be camel-google-sheets/1.0||string|
+|clientId|Client ID of the sheets application||string|
+|configuration|To use the shared configuration||object|
+|delegate|Delegate for wide-domain service account||string|
+|scopes|Specifies the level of permissions you want a sheets application to have to a user account. See https://developers.google.com/identity/protocols/googlescopes for more info. Multiple scopes can be separated by comma.||string|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|splitResult|When the consumer returns an array or collection, this will generate one exchange per element, and their routes will be executed once for each exchange. Set this value to false to use a single exchange for the entire list or array.|true|boolean|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message).
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|clientFactory|To use the GoogleSheetsClientFactory as factory for creating the client. Will by default use BatchGoogleSheetsClientFactory||object| +|accessToken|OAuth 2 access token. This typically expires after an hour so refreshToken is recommended for long term usage.||string| +|clientSecret|Client secret of the sheets application||string| +|refreshToken|OAuth 2 refresh token. Using this, the Google Sheets component can obtain a new accessToken whenever the current one expires - a necessity if the application is long-lived.||string| +|serviceAccountKey|Sets .json file with credentials for Service account||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|apiName|What kind of operation to perform||object| +|methodName|What sub operation to use for the selected operation||string| +|applicationName|Google Sheets application name. 
Example would be camel-google-sheets/1.0||string|
+|clientId|Client ID of the sheets application||string|
+|delegate|Delegate for wide-domain service account||string|
+|inBody|Sets the name of a parameter to be passed in the exchange In Body||string|
+|scopes|Specifies the level of permissions you want a sheets application to have to a user account. See https://developers.google.com/identity/protocols/googlescopes for more info. Multiple scopes can be separated by comma.||string|
+|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean|
+|splitResult|When the consumer returns an array or collection, this will generate one exchange per element, and their routes will be executed once for each exchange. Set this value to false to use a single exchange for the entire list or array.|true|boolean|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use.
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and routed in Camel.||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|backoffErrorThreshold|The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick-in.||integer|
+|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier should kick-in.||integer|
+|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again.
When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| +|accessToken|OAuth 2 access token. This typically expires after an hour so refreshToken is recommended for long term usage.||string| +|clientSecret|Client secret of the sheets application||string| +|refreshToken|OAuth 2 refresh token. 
Using this, the Google Sheets component can obtain a new accessToken whenever the current one expires - a necessity if the application is long-lived.||string|
+|serviceAccountKey|Sets the .json file with credentials for the service account||string|
diff --git a/camel-google-storage.md b/camel-google-storage.md
new file mode 100644
index 0000000000000000000000000000000000000000..9466423ca66c5e96fb48070b940bb5f4797e8478
--- /dev/null
+++ b/camel-google-storage.md
@@ -0,0 +1,297 @@
+# Google-storage
+
+**Since Camel 3.9**
+
+**Both producer and consumer are supported**
+
+The Google Storage component provides access to [Google Cloud
+Storage](https://cloud.google.com/storage/) via the [Google java storage
+library](https://github.com/googleapis/java-storage).
+
+Maven users will need to add the following dependency to their pom.xml
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-google-storage</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# Authentication Configuration
+
+Google Storage component authentication is targeted for use with the GCP
+Service Accounts. For more information, please refer to [Google Storage
+Auth
+Guide](https://cloud.google.com/storage/docs/reference/libraries#setting_up_authentication).
+
+When you have the **service account key**, you can provide
+authentication credentials to your application code. Google security
+credentials can be set through the component endpoint:
+
+    String endpoint = "google-storage://myCamelBucket?serviceAccountKey=/home/user/Downloads/my-key.json";
+
+Or by providing the path to the GCP credentials file location, by
+setting the environment variable `GOOGLE_APPLICATION_CREDENTIALS`:
+
+    export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/my-key.json"
+
+# URI Format
+
+    google-storage://bucketNameOrArn?[options]
+
+By default, the bucket will be created if it doesn’t already exist.
You
+can append query options to the URI in the following format:
+`?option=value&option2=value&...`
+
+For example, to read file `hello.txt` from bucket `myCamelBucket`, use
+the following snippet:
+
+    from("google-storage://myCamelBucket?serviceAccountKey=/home/user/Downloads/my-key.json&objectName=hello.txt")
+    .to("file:/var/downloaded");
+
+# Usage
+
+## Google Storage Producer operations
+
+Google Storage component provides the following operations on the
+producer side:
+
+- `copyObject`
+
+- `listObjects`
+
+- `deleteObject`
+
+- `deleteBucket`
+
+- `listBuckets`
+
+- `getObject`
+
+- `createDownloadLink`
+
+If you don’t specify an operation explicitly, the producer will perform
+a file upload.
+
+## Advanced component configuration
+
+If you need to have more control over the `storageClient` instance
+configuration, you can create your own instance and refer to it in your
+Camel google-storage component configuration:
+
+    from("google-storage://myCamelBucket?storageClient=#client")
+    .to("mock:result");
+
+## Google Storage Producer Operation examples
+
+- File Upload: This operation will upload a file to the Google Storage
+  based on the body content
+
+
+
+    //upload a file
+    byte[] payload = "Camel rocks!".getBytes();
+    ByteArrayInputStream bais = new ByteArrayInputStream(payload);
+    from("direct:start")
+        .process( exchange -> {
+            exchange.getIn().setHeader(GoogleCloudStorageConstants.OBJECT_NAME, "camel.txt");
+            exchange.getIn().setBody(bais);
+        })
+        .to("google-storage://myCamelBucket?serviceAccountKey=/home/user/Downloads/my-key.json")
+        .log("uploaded file object:${header.CamelGoogleCloudStorageObjectName}, body:${body}");
+
+This operation will upload the file `camel.txt` with the content
+`"Camel rocks!"` into the myCamelBucket bucket.
+
+- `CopyObject`: this operation copies an object from one bucket to a
+  different one
+
+
+
+    from("direct:start").process( exchange -> {
+        exchange.getIn().setHeader(GoogleCloudStorageConstants.OPERATION,
GoogleCloudStorageOperations.copyObject);
+        exchange.getIn().setHeader(GoogleCloudStorageConstants.OBJECT_NAME, "camel.txt" );
+        exchange.getIn().setHeader(GoogleCloudStorageConstants.DESTINATION_BUCKET_NAME, "myCamelBucket_dest");
+        exchange.getIn().setHeader(GoogleCloudStorageConstants.DESTINATION_OBJECT_NAME, "camel_copy.txt");
+    })
+    .to("google-storage://myCamelBucket?serviceAccountKey=/home/user/Downloads/my-key.json")
+    .to("mock:result");
+
+This operation will copy the object camel.txt from the bucket
+myCamelBucket to the bucket given in the DESTINATION\_BUCKET\_NAME
+header, storing it under the name given in the
+DESTINATION\_OBJECT\_NAME header.
+
+- `DeleteObject`: this operation deletes an object from a bucket
+
+
+
+    from("direct:start").process( exchange -> {
+        exchange.getIn().setHeader(GoogleCloudStorageConstants.OPERATION, GoogleCloudStorageOperations.deleteObject);
+        exchange.getIn().setHeader(GoogleCloudStorageConstants.OBJECT_NAME, "camel.txt" );
+    })
+    .to("google-storage://myCamelBucket?serviceAccountKey=/home/user/Downloads/my-key.json")
+    .to("mock:result");
+
+This operation will delete the object camel.txt from the bucket
+myCamelBucket.
+
+- `ListBuckets`: this operation lists the buckets for this account in
+  this region
+
+
+
+    from("direct:start")
+        .to("google-storage://myCamelBucket?serviceAccountKey=/home/user/Downloads/my-key.json&operation=listBuckets")
+        .to("mock:result");
+
+This operation will list the buckets for this account.
+
+- `DeleteBucket`: this operation deletes the bucket specified as URI
+  parameter or header
+
+
+
+    from("direct:start")
+        .to("google-storage://myCamelBucket?serviceAccountKey=/home/user/Downloads/my-key.json&operation=deleteBucket")
+        .to("mock:result");
+
+This operation will delete the bucket myCamelBucket.
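Notice that the examples select the producer operation in two ways: per message, via the GoogleCloudStorageConstants.OPERATION header (copyObject, deleteObject), or statically, via the endpoint URI (`&operation=...`). A common Camel convention, assumed here rather than confirmed from the component source, is that a header set on the message overrides the endpoint option. A plain-Java sketch of that resolution order (the header name string below is a placeholder for illustration only):

```java
import java.util.Map;

public class OperationResolutionSketch {
    // Placeholder header name; the real constant lives in
    // GoogleCloudStorageConstants.OPERATION.
    public static final String OPERATION_HEADER = "CamelGoogleCloudStorageOperation";

    // Header wins over the URI option; with neither set, the producer
    // falls back to its default behaviour (a file upload).
    public static String resolve(String uriOperation, Map<String, ?> headers) {
        Object fromHeader = headers.get(OPERATION_HEADER);
        if (fromHeader != null) {
            return fromHeader.toString();
        }
        return uriOperation != null ? uriOperation : "fileUpload";
    }

    public static void main(String[] args) {
        System.out.println(resolve("listBuckets", Map.of()));                                 // listBuckets
        System.out.println(resolve("listBuckets", Map.of(OPERATION_HEADER, "deleteBucket"))); // deleteBucket
        System.out.println(resolve(null, Map.of()));                                          // fileUpload
    }
}
```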
+
+- `ListObjects`: this operation lists the objects in a specific bucket
+
+
+
+    from("direct:start")
+        .to("google-storage://myCamelBucket?serviceAccountKey=/home/user/Downloads/my-key.json&operation=listObjects")
+        .to("mock:result");
+
+This operation will list the objects in the myCamelBucket bucket.
+
+- `GetObject`: this operation gets a single object in a specific
+  bucket
+
+
+
+    from("direct:start")
+        .process( exchange -> {
+            exchange.getIn().setHeader(GoogleCloudStorageConstants.OBJECT_NAME, "camel.txt");
+        })
+        .to("google-storage://myCamelBucket?serviceAccountKey=/home/user/Downloads/my-key.json&operation=getObject")
+        .to("mock:result");
+
+This operation will return a Blob object instance related to the
+`OBJECT_NAME` object in the `myCamelBucket` bucket.
+
+- `CreateDownloadLink`: this operation will return a download link
+
+
+
+    from("direct:start")
+        .process( exchange -> {
+            exchange.getIn().setHeader(GoogleCloudStorageConstants.OBJECT_NAME, "camel.txt" );
+            exchange.getIn().setHeader(GoogleCloudStorageConstants.DOWNLOAD_LINK_EXPIRATION_TIME, 86400000L); //1 day
+        })
+        .to("google-storage://myCamelBucket?serviceAccountKey=/home/user/Downloads/my-key.json&operation=createDownloadLink")
+        .to("mock:result");
+
+This operation will return a download link URL for the file OBJECT\_NAME
+in the bucket myCamelBucket. It’s possible to specify the expiration
+time for the created link through the header
+DOWNLOAD\_LINK\_EXPIRATION\_TIME. If not specified, it defaults to 5
+minutes.
+
+# Bucket Auto creation
+
+The option `autoCreateBucket` controls the automatic creation of the
+bucket in case it doesn’t exist. The default for this option is `true`.
+If set to false, any operation on a non-existent bucket won’t be
+successful and an error will be returned.
+
+# MoveAfterRead consumer option
+
+In addition to `deleteAfterRead`, another option has been added:
+`moveAfterRead`.
With this option enabled the consumed object will be +moved to a target `destinationBucket` instead of being only deleted. +This will require specifying the destinationBucket option. As example: + + from("google-storage://myCamelBucket?serviceAccountKey=/home/user/Downloads/my-key.json" + + "&autoCreateBucket=true" + + "&destinationBucket=myCamelProcessedBucket" + + "&moveAfterRead=true" + + "&deleteAfterRead=true" + + "&includeBody=true" + ) + .to("mock:result"); + +In this case, the objects consumed will be moved to +myCamelProcessedBucket bucket and deleted from the original one (because +of deleteAfterRead). + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|autoCreateBucket|Setting the autocreation of the bucket bucketName.|true|boolean| +|configuration|The component configuration||object| +|serviceAccountKey|The Service account key that can be used as credentials for the Storage client. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems.||string| +|storageClass|The Cloud Storage class to use when creating the new buckets|STANDARD|object| +|storageClient|The storage client||object| +|storageLocation|The Cloud Storage location to use when creating the new buckets|US-EAST1|string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|deleteAfterRead|Delete objects from the bucket after they have been retrieved. The delete is only performed if the Exchange is committed. If a rollback occurs, the object is not deleted. If this option is false, then the same objects will be retrieved over and over again on the polls.|true|boolean|
+|destinationBucket|Define the destination bucket where an object must be moved when moveAfterRead is set to true.||string|
+|downloadFileName|The folder or filename to use when downloading the blob. By default, this specifies the folder name, and the name of the file is the blob name. For example, setting this to mydownload will be the same as setting mydownload/${file:name}. You can use dynamic expressions for fine-grained control. For example, you can specify ${date:now:yyyyMMdd}/${file:name} to store the blob in sub folders based on today's day. Only ${file:name} and ${file:name.noext} are supported as dynamic tokens for the blob name.||string|
+|filter|A regular expression to include only blobs with a name matching it.||string|
+|includeBody|If it is true, the Object exchange will be consumed and put into the body. If false, the Object stream will be put raw into the body and the headers will be set with the object metadata.|true|boolean|
+|includeFolders|If it is true, the folders/directories will be consumed. If it is false, they will be ignored, and Exchanges will not be created for those.|true|boolean|
+|moveAfterRead|Move objects from the origin bucket to a different bucket after they have been retrieved. To accomplish the operation, the destinationBucket option must be set. The copy bucket operation is only performed if the Exchange is committed. If a rollback occurs, the object is not moved.|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message).
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|objectName|The Object name inside the bucket||string| +|operation|Set the operation for the producer||object| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bucketName|Bucket name or ARN||string| +|autoCreateBucket|Setting the autocreation of the bucket bucketName.|true|boolean| +|serviceAccountKey|The Service account key that can be used as credentials for the Storage client. 
It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems.||string|
+|storageClass|The Cloud Storage class to use when creating the new buckets|STANDARD|object|
+|storageClient|The storage client||object|
+|storageLocation|The Cloud Storage location to use when creating the new buckets|US-EAST1|string|
+|deleteAfterRead|Delete objects from the bucket after they have been retrieved. The delete is only performed if the Exchange is committed. If a rollback occurs, the object is not deleted. If this option is false, then the same objects will be retrieved over and over again on the polls.|true|boolean|
+|destinationBucket|Define the destination bucket where an object must be moved when moveAfterRead is set to true.||string|
+|downloadFileName|The folder or filename to use when downloading the blob. By default, this specifies the folder name, and the name of the file is the blob name. For example, setting this to mydownload will be the same as setting mydownload/${file:name}. You can use dynamic expressions for fine-grained control. For example, you can specify ${date:now:yyyyMMdd}/${file:name} to store the blob in sub folders based on today's day. Only ${file:name} and ${file:name.noext} are supported as dynamic tokens for the blob name.||string|
+|filter|A regular expression to include only blobs with a name matching it.||string|
+|includeBody|If it is true, the Object exchange will be consumed and put into the body. If false, the Object stream will be put raw into the body and the headers will be set with the object metadata.|true|boolean|
+|includeFolders|If it is true, the folders/directories will be consumed. If it is false, they will be ignored, and Exchanges will not be created for those.|true|boolean|
+|moveAfterRead|Move objects from the origin bucket to a different bucket after they have been retrieved. To accomplish the operation, the destinationBucket option must be set.
The copy bucket operation is only performed if the Exchange is committed. If a rollback occurs, the object is not moved.|false|boolean| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|objectName|The Object name inside the bucket||string| +|operation|Set the operation for the producer||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. 
By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| diff --git a/camel-grape.md b/camel-grape.md new file mode 100644 index 0000000000000000000000000000000000000000..bcd63c2854fc857a8d08b1beb17292c90b50935c --- /dev/null +++ b/camel-grape.md @@ -0,0 +1,148 @@ +# Grape + +**Since Camel 2.16** + +**Only producer is supported** + +[Grape](http://docs.groovy-lang.org/latest/html/documentation/grape.html) +component allows you to fetch, load and manage additional jars when +`CamelContext` is running. In practice with the Camel Grape component +you can add new components, data formats and beans to your +`CamelContext` without the restart of the router. + +# Grape options + +# Setting up class loader + +Grape requires using Groovy class loader with the `CamelContext`. You +can enable Groovy class loading on the existing Camel Context using the +`GrapeComponent#grapeCamelContext()` method: + + import static org.apache.camel.component.grape.GrapeComponent.grapeCamelContext; + ... + CamelContext camelContext = grapeCamelContext(new DefaultCamelContext()); + +You can also set up the Groovy class loader used by the Camel context by +yourself: + + camelContext.setApplicationContextClassLoader(new GroovyClassLoader(myClassLoader)); + +For example, the following snippet loads Camel FTP component: + + from("direct:loadCamelFTP"). 
+      to("grape:org.apache.camel/camel-ftp/2.15.2");
+
+You can also specify the Maven coordinates by sending them to the
+endpoint as the exchange body:
+
+    from("direct:loadCamelFTP").
+      setBody().constant("org.apache.camel/camel-ftp/2.15.2").
+      to("grape:defaultMavenCoordinates");
+
+# Adding the Grape component to the project
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+      <groupId>org.apache.camel</groupId>
+      <artifactId>camel-grape</artifactId>
+      <version>x.y.z</version>
+    </dependency>
+
+# Default payload type
+
+By default, the Camel Grape component operates on String payloads:
+
+    producerTemplate.sendBody("grape:defaultMavenCoordinates", "org.apache.camel/camel-ftp/2.15.2");
+
+Of course, Camel's built-in [type conversion
+API](#manual::type-converter.adoc) can perform the automatic data type
+transformations for you. In the example below, Camel automatically
+converts a binary payload into a String:
+
+    producerTemplate.sendBody("grape:defaultMavenCoordinates", "org.apache.camel/camel-ftp/2.15.2".getBytes());
+
+# Loading components at runtime
+
+To load a new component at router runtime, just grab the jar
+containing the given component:
+
+    ProducerTemplate template = camelContext.createProducerTemplate();
+    template.sendBody("grape:grape", "org.apache.camel/camel-stream/2.15.2");
+    template.sendBody("stream:out", "msg");
+
+# Loading processor beans at runtime
+
+To load a new processor bean with your custom business logic at
+router runtime, just grab the jar containing the required bean:
+
+    ProducerTemplate template = camelContext.createProducerTemplate();
+    template.sendBody("grape:grape", "com.example/my-business-processors/1.0");
+    int productId = 1;
+    int price = template.requestBody("bean:com.example.PricingBean?method=currentProductPrice", productId, int.class);
+
+# Loading deployed jars after Camel context restart
+
+After you download a new jar, you usually would like to have it loaded by
+Camel again after the restart of the `CamelContext`.
It is certainly
possible, as the Grape component keeps track of the jar files you have
installed. To load the installed jars again on context startup, use
the `GrapeEndpoint.loadPatches()` method in your route:
+
+    import static org.apache.camel.component.grape.GrapeEndpoint.loadPatches;
+
+    ...
+    camelContext.addRoutes(
+        new RouteBuilder() {
+            @Override
+            public void configure() throws Exception {
+                loadPatches(camelContext);
+
+                from("direct:loadCamelFTP").
+                  to("grape:org.apache.camel/camel-ftp/2.15.2");
+            }
+        });
+
+# Managing the installed jars
+
+If you would like to check what jars have been installed into the given
+`CamelContext`, send a message to the grape endpoint with the
+`CamelGrapeCommand` header set to `GrapeCommand.listPatches`:
+
+    from("netty-http:http://0.0.0.0:80/patches").
+      setHeader(GrapeConstants.GRAPE_COMMAND, constant(GrapeCommand.listPatches)).
+      to("grape:list");
+
+Connecting to the route defined above using the HTTP client returns the
+list of the jars installed by the Grape component:
+
+    $ curl http://my-router.com/patches
+    grape:org.apache.camel/camel-ftp/2.15.2
+    grape:org.apache.camel/camel-jms/2.15.2
+
+If you would like to remove the installed jars, so these won't be loaded
+again after the context restart, use the `GrapeCommand.clearPatches`
+command:
+
+    from("netty-http:http://0.0.0.0:80/patches").
+      setHeader(GrapeConstants.GRAPE_COMMAND, constant(GrapeCommand.clearPatches)).
+      setBody().constant("Installed patches have been deleted.");
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started.
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|patchesRepository|Implementation of org.apache.camel.component.grape.PatchesRepository, by default: FilePatchesRepository||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|defaultCoordinates|Maven coordinates to use as default to grab if the message body is empty.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-graphql.md b/camel-graphql.md
new file mode 100644
index 0000000000000000000000000000000000000000..8778dbb03ea9d4db433685f37cb96afd5b98a6fc
--- /dev/null
+++ b/camel-graphql.md
@@ -0,0 +1,163 @@
+# Graphql
+
+**Since Camel 3.0**
+
+**Only producer is supported**
+
+The GraphQL component is a GraphQL client that communicates over HTTP
+and supports queries and mutations, but not subscriptions. It uses the
+[Apache
+HttpClient](https://hc.apache.org/httpcomponents-client-4.5.x/index.html)
+library.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+      <groupId>org.apache.camel</groupId>
+      <artifactId>camel-graphql</artifactId>
+      <version>x.x.x</version>
+    </dependency>
+
+# Message Body
+
+If the `variables` and `variablesHeader` parameters are not set and the
+IN body is a JsonObject instance, Camel will use it for the operation's
+variables. If the `query` and `queryFile` parameters are not set and the
+IN body is a String, Camel will use it as the query. Camel will store
+the GraphQL response from the external server on the OUT message body.
+All headers from the IN message will be copied to the OUT message, so
+headers are preserved during routing. Additionally, Camel will add the
+HTTP response headers as well to the OUT message headers.
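+
+The body Camel selects ends up in a standard GraphQL-over-HTTP JSON
+payload (a `query` field plus optional `variables`). The sketch below is
+a stdlib-only illustration of that wire shape, not the component's
+internal code; it assumes its inputs are already JSON-escaped:
+
```java
public class GraphQLPayloadSketch {
    // Builds the JSON body of a GraphQL HTTP POST: the query string plus
    // optional variables. Inputs are assumed to be pre-escaped JSON fragments.
    static String buildPayload(String query, String variablesJson) {
        StringBuilder sb = new StringBuilder("{\"query\":\"").append(query).append("\"");
        if (variablesJson != null) {
            sb.append(",\"variables\":").append(variablesJson);
        }
        return sb.append("}").toString();
    }

    public static void main(String[] args) {
        // Query only
        System.out.println(buildPayload("{books{id name}}", null));
        // Query plus variables
        System.out.println(buildPayload("query BookById($id: Int!){bookById(id:$id){name}}",
                "{\"id\":\"book-1\"}"));
    }
}
```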
+ +# Examples + +## Queries + +Simple queries can be defined directly in the URI: + + from("direct:start") + .to("graphql://http://example.com/graphql?query={books{id name}}") + +The body can also be used for the query: + + from("direct:start") + .setBody(constant("{books{id name}}")) + .to("graphql://http://example.com/graphql") + +The query can come from a header also: + + from("direct:start") + .setHeader("myQuery", constant("{books{id name}}")) + .to("graphql://http://example.com/graphql?queryHeader=myQuery") + +More complex queries can be stored in a file and referenced in the URI: + +booksQuery.graphql file: + + query Books { + books { + id + name + } + } + + from("direct:start") + .to("graphql://http://example.com/graphql?queryFile=booksQuery.graphql") + +When the query file defines multiple operations, it’s required to +specify which one should be executed: + + from("direct:start") + .to("graphql://http://example.com/graphql?queryFile=multipleQueries.graphql&operationName=Books") + +Queries with variables need to reference a JsonObject instance from the +registry: + +bookByIdQuery.graphql file: + + query BookById($id: Int!) { + bookById(id: $id) { + id + name + author + } + } + + @BindToRegistry("bookByIdQueryVariables") + public JsonObject bookByIdQueryVariables() { + JsonObject variables = new JsonObject(); + variables.put("id", "book-1"); + return variables; + } + + from("direct:start") + .to("graphql://http://example.com/graphql?queryFile=bookByIdQuery.graphql&variables=#bookByIdQueryVariables") + +A query that accesses variables via the variablesHeader parameter: + + from("direct:start") + .setHeader("myVariables", () -> { + JsonObject variables = new JsonObject(); + variables.put("id", "book-1"); + return variables; + }) + .to("graphql://http://example.com/graphql?queryFile=bookByIdQuery.graphql&variablesHeader=myVariables") + +## Mutations + +Mutations are like queries with variables. 
They specify a query and a +reference to a variables' bean: + +addBookMutation.graphql file: + + mutation AddBook($bookInput: BookInput) { + addBook(bookInput: $bookInput) { + id + name + author { + name + } + } + } + + @BindToRegistry("addBookMutationVariables") + public JsonObject addBookMutationVariables() { + JsonObject bookInput = new JsonObject(); + bookInput.put("name", "Typee"); + bookInput.put("authorId", "author-2"); + JsonObject variables = new JsonObject(); + variables.put("bookInput", bookInput); + return variables; + } + + from("direct:start") + .to("graphql://http://example.com/graphql?queryFile=addBookMutation.graphql&variables=#addBookMutationVariables") + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|httpUri|The GraphQL server URI.||string| +|operationName|The query or mutation name.||string| +|proxyHost|The proxy host in the format hostname:port.||string| +|query|The query text.||string| +|queryFile|The query file name located in the classpath.||string| +|queryHeader|The name of a header containing the GraphQL query.||string| +|variables|The JsonObject instance containing the operation variables.||object| +|variablesHeader|The name of a header containing a JsonObject instance containing the operation variables.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|accessToken|The access token sent in the Authorization header.||string| +|jwtAuthorizationType|The JWT Authorization type. 
Default is Bearer.|Bearer|string|
+|password|The password for Basic authentication.||string|
+|username|The username for Basic authentication.||string|
diff --git a/camel-grpc.md b/camel-grpc.md
new file mode 100644
index 0000000000000000000000000000000000000000..a11ca086be9b8f1b589250a2462f3eaa375c1510
--- /dev/null
+++ b/camel-grpc.md
@@ -0,0 +1,511 @@
+# Grpc
+
+**Since Camel 2.19**
+
+**Both producer and consumer are supported**
+
+The gRPC component allows you to call or expose Remote Procedure Call
+(RPC) services using the [Protocol Buffers
+(protobuf)](https://developers.google.com/protocol-buffers/docs/overview)
+exchange format over HTTP/2 transport.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+      <groupId>org.apache.camel</groupId>
+      <artifactId>camel-grpc</artifactId>
+      <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    grpc:host:port/service[?options]
+
+# Transport security and authentication support
+
+The following [authentication](https://grpc.io/docs/guides/auth.html)
+mechanisms are built into gRPC and available in this component:
+
+- **SSL/TLS:** gRPC has SSL/TLS integration and promotes the use of
+  SSL/TLS to authenticate the server, and to encrypt all the data
+  exchanged between the client and the server. Optional mechanisms are
+  available for clients to provide certificates for mutual
+  authentication.
+
+- **Token-based authentication with Google:** gRPC provides a generic
+  mechanism to attach metadata-based credentials to requests and
+  responses. Additional support for acquiring access tokens while
+  accessing Google APIs through gRPC is provided. In general, this
+  mechanism must be used as well as SSL/TLS on the channel.
+
+To enable these features, the following combinations of component
+properties must be configured:
+
+|Num.|Option|Parameter|Value|Required/Optional|
+|---|---|---|---|---|
+|1|SSL/TLS|negotiationType|TLS|Required|
+|||keyCertChainResource||Required|
+|||keyResource||Required|
+|||keyPassword||Optional|
+|||trustCertCollectionResource||Optional|
+|2|Token-based authentication with Google API|authenticationType|GOOGLE|Required|
+|||negotiationType|TLS|Required|
+|||serviceAccountResource||Required|
+|3|Custom JSON Web Token implementation authentication|authenticationType|JWT|Required|
+|||negotiationType|NONE or TLS|Optional. TLS/SSL is not checked for this type, but is strongly recommended.|
+|||jwtAlgorithm|HMAC256 (default), HMAC384 or HMAC512|Optional|
+|||jwtSecret||Required|
+|||jwtIssuer||Optional|
+|||jwtSubject||Optional|
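+
+For row 3, the `jwtSecret` and `jwtAlgorithm` options imply an
+HMAC-signed JSON Web Token. The stdlib-only sketch below shows what an
+HMAC256 (HS256) token looks like; it is illustrative only, the component
+builds and attaches the token itself, and the claim values here are made
+up:
+
```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class JwtSketch {
    // Produces header.payload.signature, signed with HMAC-SHA256
    // (the "HMAC256"/HS256 algorithm named in the option table).
    static String sign(String payloadJson, String secret) {
        try {
            Base64.Encoder b64 = Base64.getUrlEncoder().withoutPadding();
            String header = b64.encodeToString(
                    "{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
            String payload = b64.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8));
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            String signature = b64.encodeToString(
                    mac.doFinal((header + "." + payload).getBytes(StandardCharsets.UTF_8)));
            return header + "." + payload + "." + signature;
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // jwtIssuer/jwtSubject correspond to the iss/sub claims (made-up values)
        System.out.println(sign("{\"iss\":\"my-issuer\",\"sub\":\"my-subject\"}", "supersecuredsecret"));
    }
}
```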
+
+# gRPC producer resource type mapping
+
+The table below shows the types of objects in the message body,
+depending on the types (simple or stream) of incoming and outgoing
+parameters, as well as the invocation style (synchronous or
+asynchronous). Please note that invocation of procedures with an
+incoming stream parameter in synchronous style is not allowed.
+
+|Invocation style|Request type|Response type|Request Body Type|Result Body Type|
+|---|---|---|---|---|
+|synchronous|simple|simple|Object|Object|
+|synchronous|simple|stream|Object|List<Object>|
+|synchronous|stream|simple|not allowed|not allowed|
+|synchronous|stream|stream|not allowed|not allowed|
+|asynchronous|simple|simple|Object|List<Object>|
+|asynchronous|simple|stream|Object|List<Object>|
+|asynchronous|stream|simple|Object or List<Object>|List<Object>|
+|asynchronous|stream|stream|Object or List<Object>|List<Object>|
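+
+The reason asynchronous invocations and stream responses surface as
+`List<Object>` is that streamed replies arrive through an observer
+callback and are buffered until the stream completes. Below is a rough
+stdlib-only model of that aggregation; the `CollectingObserver` type is
+a made-up stand-in, not the gRPC `StreamObserver` API:
+
```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class ReplyAggregationSketch {
    // Minimal stand-in for a stream observer: collects every reply and
    // completes the future with the full list once the stream ends.
    static class CollectingObserver<T> {
        private final List<T> replies = new ArrayList<>();
        final CompletableFuture<List<T>> result = new CompletableFuture<>();
        void onNext(T value) { replies.add(value); }
        void onCompleted() { result.complete(replies); }
    }

    public static void main(String[] args) {
        CollectingObserver<String> observer = new CollectingObserver<>();
        observer.onNext("pong-1");   // each streamed reply is buffered
        observer.onNext("pong-2");
        observer.onCompleted();      // stream end releases the aggregated list
        System.out.println(observer.result.getNow(null)); // [pong-1, pong-2]
    }
}
```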
+ +# gRPC Proxy + +It is not possible to create a universal proxy-route for all methods, so +you need to divide your gRPC service into several services by method’s +type: unary, server streaming, client streaming and bidirectional +streaming. + +## Unary + +For unary requests, it is enough to write the following code: + + from("grpc://localhost:1101" + + "/org.apache.camel.component.grpc.PingPong" + ) + .toD("grpc://remotehost:1101" + + "/org.apache.camel.component.grpc.PingPong" + + "?method=${header.CamelGrpcMethodName}" + ) + +## Server streaming + +Server streaming may be done by the same approach as unary, but in that +configuration Camel route will wait stream for completion and will +aggregate all responses to a list before sending that data as response +stream. If this behavior is unacceptable, you need to apply a number of +options: + +1. Set `routeControlledStreamObserver=true` for consumer. Later it will + be used to publish responses; + +2. Set `streamRepliesTo` option for producer to handle streaming nature + of responses; + +3. Set forwarding of `onError` and `onCompleted` for producer; + +4. Set `inheritExchangePropertiesForReplies=true` to inherit + `StreamObserver` obtained on the first step; + +5. Create another route to process streamed data. That route must + contain gRPC-producer step with the only parameter + `toRouteControlledStreamObserver=true` which will publish incoming + exchanges as response stream elements. 
+
+Example:
+
+    from("grpc://localhost:1101" +
+        "/org.apache.camel.component.grpc.PingPong" +
+        "?routeControlledStreamObserver=true"
+    )
+    .toD("grpc://remotehost:1101" +
+        "/org.apache.camel.component.grpc.PingPong" +
+        "?method=${header.CamelGrpcMethodName}" +
+        "&streamRepliesTo=direct:next" +
+        "&forwardOnError=true" +
+        "&forwardOnCompleted=true" +
+        "&inheritExchangePropertiesForReplies=true"
+    );
+
+    from("direct:next")
+    .to("grpc://dummy:0/?toRouteControlledStreamObserver=true");
+
+## Client streaming and bidirectional streaming
+
+Both client streaming and bidirectional streaming gRPC methods expose a
+`StreamObserver` as the responses' handler. Therefore, you need the same
+technique as described in the server streaming section (all five steps).
+
+But there is another thing: requests also come in streaming mode. So you
+need the following:
+
+1. Set the consumer strategy to DELEGATION — this differs from the
+   default PROPAGATION option in that the consumer will not produce
+   responses at all. If you set PROPAGATION, then you will receive more
+   responses than you expected;
+
+2. Forward `onError` and `onCompleted` on the consumer;
+
+3. Set the producer strategy to STREAMING.
+ +Example: + + from("grpc://localhost:1101" + + "/org.apache.camel.component.grpc.PingPong" + + "?routeControlledStreamObserver=true" + + "&consumerStrategy=DELEGATION" + + "&forwardOnError=true" + + "&forwardOnCompleted=true" + ) + .toD("grpc://remotehost:1101" + + "/org.apache.camel.component.grpc.PingPong" + + "?method=${header.CamelGrpcMethodName}" + + "&producerStrategy=STREAMING" + + "&streamRepliesTo=direct:next" + + "&forwardOnError=true" + + "&forwardOnCompleted=true" + + "&inheritExchangePropertiesForReplies=true" + ); + + from("direct:next") + .to("grpc://dummy:0/?toRouteControlledStreamObserver=true"); + +# Examples + +Below is a simple synchronous method invoke with host and port +parameters + + from("direct:grpc-sync") + .to("grpc://remotehost:1101/org.apache.camel.component.grpc.PingPong?method=sendPing&synchronous=true"); + + + + + + +An asynchronous method invoke + + from("direct:grpc-async") + .to("grpc://remotehost:1101/org.apache.camel.component.grpc.PingPong?method=pingAsyncResponse"); + +gRPC service consumer with propagation consumer strategy + + from("grpc://localhost:1101/org.apache.camel.component.grpc.PingPong?consumerStrategy=PROPAGATION") + .to("direct:grpc-service"); + +gRPC service producer with streaming producer strategy (requires a +service that uses "stream" mode as input and output) + + from("direct:grpc-request-stream") + .to("grpc://remotehost:1101/org.apache.camel.component.grpc.PingPong?method=PingAsyncAsync&producerStrategy=STREAMING&streamRepliesTo=direct:grpc-response-stream"); + + from("direct:grpc-response-stream") + .log("Response received: ${body}"); + +gRPC service consumer TLS/SSL security negotiation enabled + + from("grpc://localhost:1101/org.apache.camel.component.grpc.PingPong?consumerStrategy=PROPAGATION&negotiationType=TLS&keyCertChainResource=file:src/test/resources/certs/server.pem&keyResource=file:src/test/resources/certs/server.key&trustCertCollectionResource=file:src/test/resources/certs/ca.pem") + 
        .to("direct:tls-enable")
+
+gRPC service producer with custom JSON Web Token (JWT) implementation
+authentication:
+
+    from("direct:grpc-jwt")
+        .to("grpc://localhost:1101/org.apache.camel.component.grpc.PingPong?method=pingSyncSync&synchronous=true&authenticationType=JWT&jwtSecret=supersecuredsecret");
+
+# Configuration
+
+It is recommended to use the `protobuf-maven-plugin`, which calls the
+Protocol Buffer Compiler (protoc) to generate Java source files from
+.proto (protocol buffer definition) files. This plugin will generate
+the procedure request and response classes, their builders, and the
+gRPC procedure stub classes as well.
+
+The following steps are required:
+
+Insert the operating system and CPU architecture detection extension
+inside the **`<build>`** tag of the project `pom.xml`, or set the
+`${os.detected.classifier}` parameter manually:
+
+    <build>
+      <extensions>
+        <extension>
+          <groupId>kr.motd.maven</groupId>
+          <artifactId>os-maven-plugin</artifactId>
+          <version>1.7.1</version>
+        </extension>
+      </extensions>
+    </build>
+
+Insert the gRPC and protobuf Java code generator plugins into the
+**`<plugins>`** tag of the project `pom.xml`:
+
+    <plugin>
+      <groupId>org.xolstice.maven.plugins</groupId>
+      <artifactId>protobuf-maven-plugin</artifactId>
+      <version>0.6.1</version>
+      <configuration>
+        <protocArtifact>com.google.protobuf:protoc:${protobuf-version}:exe:${os.detected.classifier}</protocArtifact>
+        <pluginId>grpc-java</pluginId>
+        <pluginArtifact>io.grpc:protoc-gen-grpc-java:${grpc-version}:exe:${os.detected.classifier}</pluginArtifact>
+      </configuration>
+      <executions>
+        <execution>
+          <goals>
+            <goal>compile</goal>
+            <goal>compile-custom</goal>
+            <goal>test-compile</goal>
+            <goal>test-compile-custom</goal>
+          </goals>
+        </execution>
+      </executions>
+    </plugin>
+
+# More information
+
+See these resources:
+
+- [gRPC project site](https://www.grpc.io/)
+
+- [Maven Protocol Buffers
+  Plugin](https://www.xolstice.org/protobuf-maven-plugin)
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown.
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|host|The gRPC server host name. 
This is localhost or 0.0.0.0 when being a consumer or remote server host name when using producer.||string| +|port|The gRPC local or remote server port||integer| +|service|Fully qualified service name from the protocol buffer descriptor file (package dot service definition name)||string| +|flowControlWindow|The HTTP/2 flow control window size (MiB)|1048576|integer| +|maxMessageSize|The maximum message size allowed to be received/sent (MiB)|4194304|integer| +|autoDiscoverServerInterceptors|Setting the autoDiscoverServerInterceptors mechanism, if true, the component will look for a ServerInterceptor instance in the registry automatically otherwise it will skip that checking.|true|boolean| +|consumerStrategy|This option specifies the top-level strategy for processing service requests and responses in streaming mode. If an aggregation strategy is selected, all requests will be accumulated in the list, then transferred to the flow, and the accumulated responses will be sent to the sender. If a propagation strategy is selected, request is sent to the stream, and the response will be immediately sent back to the sender. If a delegation strategy is selected, request is sent to the stream, but no response generated under the assumption that all necessary responses will be sent at another part of route. Delegation strategy always comes with routeControlledStreamObserver=true to be able to achieve the assumption.|PROPAGATION|object| +|forwardOnCompleted|Determines if onCompleted events should be pushed to the Camel route.|false|boolean| +|forwardOnError|Determines if onError events should be pushed to the Camel route. Exceptions will be set as message body.|false|boolean| +|initialFlowControlWindow|Sets the initial flow control window in bytes.|1048576|integer| +|keepAliveTime|Sets a custom keepalive time in milliseconds, the delay time for sending next keepalive ping. 
A value of Long.MAX\_VALUE or a value greater or equal to NettyServerBuilder.AS\_LARGE\_AS\_INFINITE will disable keepalive.|7200000|integer|
+|keepAliveTimeout|Sets a custom keepalive timeout in milliseconds, the timeout for keepalive ping requests.|20000|integer|
+|maxConcurrentCallsPerConnection|The maximum number of concurrent calls permitted for each incoming server connection. Defaults to no limit.|2147483647|integer|
+|maxConnectionAge|Sets a custom max connection age in milliseconds. Connections lasting longer than this will be gracefully terminated. A random jitter of +/-10% will be added to the value. A value of Long.MAX\_VALUE (the default) or a value greater or equal to NettyServerBuilder.AS\_LARGE\_AS\_INFINITE will disable max connection age.|9223372036854775807|integer|
+|maxConnectionAgeGrace|Sets a custom grace time in milliseconds for the graceful connection termination. A value of Long.MAX\_VALUE (the default) or a value greater or equal to NettyServerBuilder.AS\_LARGE\_AS\_INFINITE is considered infinite.|9223372036854775807|integer|
+|maxConnectionIdle|Sets a custom max connection idle time in milliseconds. Connections being idle for longer than this will be gracefully terminated. A value of Long.MAX\_VALUE (the default) or a value greater or equal to NettyServerBuilder.AS\_LARGE\_AS\_INFINITE will disable max connection idle.|9223372036854775807|integer|
+|maxInboundMetadataSize|Sets the maximum size of metadata allowed to be received. The default is 8 KiB.|8192|integer|
+|maxRstFramesPerWindow|Limits the rate of incoming RST\_STREAM frames per connection to maxRstFramesPerWindow per maxRstPeriodSeconds. This option MUST be used in conjunction with maxRstPeriodSeconds for it to be effective.|0|integer|
+|maxRstPeriodSeconds|Limits the rate of incoming RST\_STREAM frames per maxRstPeriodSeconds.
This option MUST be used in conjunction with maxRstFramesPerWindow for it to be effective.|0|integer|
+|permitKeepAliveTime|Sets the most aggressive keep-alive time in milliseconds that clients are permitted to configure. The server will try to detect clients exceeding this rate and will forcefully close the connection.|300000|integer|
+|permitKeepAliveWithoutCalls|Sets whether to allow clients to send keep-alive HTTP/2 PINGs even if there are no outstanding RPCs on the connection.|false|boolean|
+|routeControlledStreamObserver|Lets the route take control over the stream observer. If this value is set to true, then the response observer of the gRPC call will be set with the name GrpcConstants.GRPC\_RESPONSE\_OBSERVER in the Exchange object. Please note that the stream observer's onNext(), onError(), onCompleted() methods should be called in the route.|false|boolean|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use.
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|autoDiscoverClientInterceptors|Setting the autoDiscoverClientInterceptors mechanism, if true, the component will look for a ClientInterceptor instance in the registry automatically otherwise it will skip that checking.|true|boolean| +|inheritExchangePropertiesForReplies|Copies exchange properties from original exchange to all exchanges created for route defined by streamRepliesTo.|false|boolean| +|method|gRPC method name||string| +|producerStrategy|The mode used to communicate with a remote gRPC server. In SIMPLE mode a single exchange is translated into a remote procedure call. In STREAMING mode all exchanges will be sent within the same request (input and output of the recipient gRPC service must be of type 'stream').|SIMPLE|object| +|streamRepliesTo|When using STREAMING client mode, it indicates the endpoint where responses should be forwarded.||string| +|toRouteControlledStreamObserver|Expects that exchange property GrpcConstants.GRPC\_RESPONSE\_OBSERVER is set. Takes its value and calls onNext, onError and onComplete on that StreamObserver. All other gRPC parameters are ignored.|false|boolean| +|userAgent|The user agent header passed to the server||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|synchronous|Sets whether synchronous processing should be strictly used|false|boolean| +|authenticationType|Authentication method type in advance to the SSL/TLS negotiation|NONE|object| +|jwtAlgorithm|JSON Web Token sign algorithm|HMAC256|object| +|jwtIssuer|JSON Web Token issuer||string| +|jwtSecret|JSON Web Token secret||string| +|jwtSubject|JSON Web Token subject||string| +|keyCertChainResource|The X.509 certificate chain file resource in PEM format link||string| +|keyPassword|The PKCS#8 private key file password||string| +|keyResource|The PKCS#8 private key file resource in PEM format link||string| +|negotiationType|Identifies the security negotiation type used for HTTP/2 communication|PLAINTEXT|object| +|serviceAccountResource|Service Account key file in JSON format resource link supported by the Google Cloud SDK||string| +|trustCertCollectionResource|The trusted certificates collection file resource in PEM format for verifying the remote endpoint's certificate||string| diff --git a/camel-guava-eventbus.md b/camel-guava-eventbus.md new file mode 100644 index 0000000000000000000000000000000000000000..5cb67f169e5e37a63001b4c2ebe0d8f225c2b1f5 --- /dev/null +++ b/camel-guava-eventbus.md @@ -0,0 +1,158 @@ +# Guava-eventbus + +**Since Camel 2.10** + +**Both producer and consumer are supported** + +The [Google Guava +EventBus](https://google.github.io/guava/releases/19.0/api/docs/com/google/common/eventbus/EventBus.html) +allows publish-subscribe-style communication between components without +requiring the components to explicitly register with one another (and +thus be aware of each other). 
The **guava-eventbus:** component provides an
+integration bridge between Camel and the [Google Guava
+EventBus](https://google.github.io/guava/releases/19.0/api/docs/com/google/common/eventbus/EventBus.html)
+infrastructure. With this component, messages exchanged with the
+Guava `EventBus` can be transparently forwarded to Camel routes. The
+EventBus component also allows routing the body of Camel exchanges to
+the Guava `EventBus`.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-guava-eventbus</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    guava-eventbus:busName[?options]
+
+Where **busName** represents the name of the
+`com.google.common.eventbus.EventBus` instance located in the Camel
+registry.
+
+# Usage
+
+Using the `guava-eventbus` component on the consumer side of the route
+will capture messages sent to the Guava `EventBus` and forward them to
+the Camel route. The Guava EventBus consumer processes incoming messages
+[asynchronously](http://camel.apache.org/asynchronous-routing-engine.html).
+
+    SimpleRegistry registry = new SimpleRegistry();
+    EventBus eventBus = new EventBus();
+    registry.put("busName", eventBus);
+    CamelContext camel = new DefaultCamelContext(registry);
+
+    from("guava-eventbus:busName").to("seda:queue");
+
+    eventBus.post("Send me to the SEDA queue.");
+
+Using the `guava-eventbus` component on the producer side of the route
+will forward the body of the Camel exchanges to the Guava `EventBus` instance.
+
+    SimpleRegistry registry = new SimpleRegistry();
+    EventBus eventBus = new EventBus();
+    registry.put("busName", eventBus);
+    CamelContext camel = new DefaultCamelContext(registry);
+
+    from("direct:start").to("guava-eventbus:busName");
+
+    ProducerTemplate producerTemplate = camel.createProducerTemplate();
+    producerTemplate.sendBody("direct:start", "Send me to the Guava EventBus.");
+
+    eventBus.register(new Object() {
+        @Subscribe
+        public void messageHandler(String message) {
+            System.out.println("Message received from Camel: " + message);
+        }
+    });
+
+# DeadEvent considerations
+
+Keep in mind that, due to the design of the Guava EventBus, you cannot
+specify the event class to be received by a listener without creating a
+class with a method annotated with `@Subscribe`. This limitation
+implies that an endpoint with the `eventClass` option specified
+actually listens to all possible events (`java.lang.Object`) and
+filters the appropriate messages programmatically at runtime. The
+snippet below shows the relevant excerpt from the Camel code base.
+
+    @Subscribe
+    public void eventReceived(Object event) {
+        if (eventClass == null || eventClass.isAssignableFrom(event.getClass())) {
+            doEventReceived(event);
+    ...
+
+The drawback of this approach is that the `EventBus` instance used by
+Camel will never generate `com.google.common.eventbus.DeadEvent`
+notifications. If you want Camel to listen only to a precisely
+specified event (and therefore enable `DeadEvent` support), use the
+`listenerInterface` endpoint option. Camel will create a dynamic proxy
+over the interface you specify and listen only to the messages declared
+by the interface's handler methods. An example of a listener interface
+with a single method handling only `SpecificEvent` instances is shown
+below.
+ + package com.example; + + public interface CustomListener { + + @Subscribe + void eventReceived(SpecificEvent event); + + } + +The listener presented above could be used in the endpoint definition as +follows. + + from("guava-eventbus:busName?listenerInterface=com.example.CustomListener").to("seda:queue"); + +# Consuming multiple types of events + +To define multiple types of events to be consumed by Guava EventBus +consumer use `listenerInterface` endpoint option, as listener interface +could provide multiple methods marked with the `@Subscribe` annotation. + + package com.example; + + public interface MultipleEventsListener { + + @Subscribe + void someEventReceived(SomeEvent event); + + @Subscribe + void anotherEventReceived(AnotherEvent event); + + } + +The listener presented above could be used in the endpoint definition as +follows. + + from("guava-eventbus:busName?listenerInterface=com.example.MultipleEventsListener").to("seda:queue"); + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|eventBus|To use the given Guava EventBus instance||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|listenerInterface|The interface with method(s) marked with the Subscribe annotation. Dynamic proxy will be created over the interface so it could be registered as the EventBus listener. Particularly useful when creating multi-event listeners and for handling DeadEvent properly. This option cannot be used together with eventClass option.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|eventBusRef|To lookup the Guava EventBus from the registry with the given name||string| +|eventClass|If used on the consumer side of the route, will filter events received from the EventBus to the instances of the class and superclasses of eventClass. Null value of this option is equal to setting it to the java.lang.Object i.e. the consumer will capture all messages incoming to the event bus. 
This option cannot be used together with listenerInterface option.||string| +|listenerInterface|The interface with method(s) marked with the Subscribe annotation. Dynamic proxy will be created over the interface so it could be registered as the EventBus listener. Particularly useful when creating multi-event listeners and for handling DeadEvent properly. This option cannot be used together with eventClass option.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-hashicorp-vault.md b/camel-hashicorp-vault.md
new file mode 100644
index 0000000000000000000000000000000000000000..23b0cab57195ca205e32d7d29d82d6f96bab5658
--- /dev/null
+++ b/camel-hashicorp-vault.md
@@ -0,0 +1,161 @@
+# Hashicorp-vault
+
+**Since Camel 3.18**
+
+**Only producer is supported**
+
+The hashicorp-vault component integrates [Hashicorp
+Vault](https://www.vaultproject.io/).
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-hashicorp-vault</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI Format
+
+    hashicorp-vault:secretsEngine[?options]
+
+## Using Hashicorp Vault Property Function
+
+To use this function, you’ll need to provide credentials for Hashicorp
+Vault as environment variables:
+
+    export CAMEL_VAULT_HASHICORP_TOKEN=token
+    export CAMEL_VAULT_HASHICORP_HOST=host
+    export CAMEL_VAULT_HASHICORP_PORT=port
+    export CAMEL_VAULT_HASHICORP_SCHEME=http/https
+
+You can also configure the credentials in the `application.properties`
+file such as:
+
+    camel.vault.hashicorp.token = token
+    camel.vault.hashicorp.host = host
+    camel.vault.hashicorp.port = port
+    camel.vault.hashicorp.scheme = scheme
+
+At this point, you’ll be able to reference a property in the following
+way:
+
+    <camelContext>
+        <route>
+            <from uri="direct:start"/>
+            <to uri="{{hashicorp:secret:route}}"/>
+        </route>
+    </camelContext>
+
+Where `route` will be the name of the secret stored in the Hashicorp
+Vault instance, in the *secret* engine.
+
+You could specify a default value in case the secret is not present on
+the Hashicorp Vault instance:
+
+    <camelContext>
+        <route>
+            <from uri="direct:start"/>
+            <to uri="{{hashicorp:secret:route:default}}"/>
+        </route>
+    </camelContext>
+
+In this case, if the secret doesn’t exist in the *secret* engine, the
+property will fall back to "default" as value.
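To make the placeholder shape concrete, the `hashicorp:<engine>:<secret>[:<default>]` reference can be decomposed with a small standalone sketch. The class and method names here are hypothetical, and this is not Camel's actual property resolver (which also supports field and version selectors, covered below); it only illustrates the engine/secret/default structure:

```java
// Illustrative only: decompose a "hashicorp:<engine>:<secret>[:<default>]"
// property reference. NOT Camel's implementation; names are hypothetical.
public class VaultRefDemo {

    /** Returns {engine, secretName, defaultValue-or-null}. */
    public static String[] parse(String ref) {
        // limit 4 keeps any ':' inside the default value intact
        String[] parts = ref.split(":", 4);
        String fallback = parts.length > 3 ? parts[3] : null;
        return new String[] { parts[1], parts[2], fallback };
    }

    public static void main(String[] args) {
        String[] r = parse("hashicorp:secret:route:default");
        // engine, secret name, fallback value
        System.out.println(r[0] + " " + r[1] + " " + r[2]);
    }
}
```

With `{{hashicorp:secret:route}}` the fallback component is simply absent, and resolution fails (rather than falling back) when the secret is missing.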
+
+Also, you are able to get a particular field of the secret, if you
+have, for example, a secret named `database` of this form:
+
+    {
+      "username": "admin",
+      "password": "password123",
+      "engine": "postgres",
+      "host": "127.0.0.1",
+      "port": "3128",
+      "dbname": "db"
+    }
+
+You are able to get a single secret field in your route, from the
+*secret* engine, for example:
+
+    <camelContext>
+        <route>
+            <from uri="direct:start"/>
+            <to uri="{{hashicorp:secret:database#username}}"/>
+        </route>
+    </camelContext>
+
+Or re-use the property as part of an endpoint.
+
+You could specify a default value in case the particular field of the
+secret is not present on the Hashicorp Vault instance, in the *secret*
+engine:
+
+    <camelContext>
+        <route>
+            <from uri="direct:start"/>
+            <to uri="{{hashicorp:secret:database#username:admin}}"/>
+        </route>
+    </camelContext>
+
+In this case, if the secret doesn’t exist, or the secret exists (in the
+*secret* engine) but the username field is not part of the secret, the
+property will fall back to "admin" as value.
+
+There is also a syntax to get a particular version of the secret for
+both approaches, with a field/default value specified or with the
+secret only:
+
+    <camelContext>
+        <route>
+            <from uri="direct:start"/>
+            <to uri="{{hashicorp:secret:route@2}}"/>
+        </route>
+    </camelContext>
+
+This approach will return the RAW route secret with version *2*, in the
+*secret* engine.
+
+    <camelContext>
+        <route>
+            <from uri="direct:start"/>
+            <to uri="{{hashicorp:secret:route:default@2}}"/>
+        </route>
+    </camelContext>
+
+This approach will return the route secret value with version *2*, or
+the default value in case the secret doesn’t exist or the version
+doesn’t exist (in the *secret* engine).
+
+    <camelContext>
+        <route>
+            <from uri="direct:start"/>
+            <to uri="{{hashicorp:secret:database#username:admin@2}}"/>
+        </route>
+    </camelContext>
+
+This approach will return the username field of the database secret with
+version *2*, or admin in case the secret doesn’t exist or the version
+doesn’t exist (in the *secret* engine).
+
+The only requirement is adding the camel-hashicorp-vault jar to your
+Camel application.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started.
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|secretsEngine|Vault Name to be used||string| +|host|Hashicorp Vault instance host to be used||string| +|operation|Operation to be performed||object| +|port|Hashicorp Vault instance port to be used|8200|string| +|scheme|Hashicorp Vault instance scheme to be used|https|string| +|secretPath|Hashicorp Vault instance secret Path to be used||string| +|vaultTemplate|Instance of Vault template||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|token|Token to be used||string|
diff --git a/camel-hazelcast-atomicvalue.md b/camel-hazelcast-atomicvalue.md
new file mode 100644
index 0000000000000000000000000000000000000000..a397e71ec74cd78553c9fe3b150831cb8e0c1262
--- /dev/null
+++ b/camel-hazelcast-atomicvalue.md
@@ -0,0 +1,142 @@
+# Hazelcast-atomicvalue
+
+**Since Camel 2.7**
+
+**Only producer is supported**
+
+The [Hazelcast](http://www.hazelcast.com/) atomic number component is
+one of the Camel Hazelcast Components and allows you to access a
+Hazelcast atomic number. An atomic number is an object that simply
+provides a grid-wide number (long).
+
+# atomic number producer - to("hazelcast-atomicvalue:foo")
+
+The operations for this producer are:
+
+- setvalue (set the number with a given value)
+
+- get
+
+- increment (+1)
+
+- decrement (-1)
+
+- destroy
+
+- compareAndSet
+
+- getAndAdd
+
+## Sample for **set**:
+
+Java DSL
+
+    from("direct:set")
+        .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.SET_VALUE))
+        .toF("hazelcast-%sfoo", HazelcastConstants.ATOMICNUMBER_PREFIX);
+
+Spring XML
+
+    <route>
+        <from uri="direct:set"/>
+        <setHeader name="CamelHazelcastOperationType">
+            <constant>setvalue</constant>
+        </setHeader>
+        <to uri="hazelcast-atomicvalue:foo"/>
+    </route>
+
+Provide the value to set inside the message body (here the value is 10):
+`template.sendBody("direct:set", 10);`
+
+## Sample for **get**:
+
+Java DSL
+
+    from("direct:get")
+        .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.GET))
+        .toF("hazelcast-%sfoo", HazelcastConstants.ATOMICNUMBER_PREFIX);
+
+Spring XML
+
+    <route>
+        <from uri="direct:get"/>
+        <setHeader name="CamelHazelcastOperationType">
+            <constant>get</constant>
+        </setHeader>
+        <to uri="hazelcast-atomicvalue:foo"/>
+    </route>
+
+You can get the number with
+`long body = template.requestBody("direct:get", null, Long.class);`.
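The producer operations listed above follow the familiar atomic-long contract. As a local, single-JVM illustration only (a plain `java.util.concurrent.atomic.AtomicLong`, no Hazelcast or Camel involved), the semantics are:

```java
import java.util.concurrent.atomic.AtomicLong;

// Local sketch of the atomic-number operation semantics
// (setvalue/get/increment/decrement/compareAndSet/getAndAdd).
// Hazelcast offers the same contract cluster-wide; this uses a plain
// AtomicLong purely for illustration.
public class AtomicValueSemantics {
    public static void main(String[] args) {
        AtomicLong number = new AtomicLong();

        number.set(10);                                 // setvalue
        long afterIncrement = number.incrementAndGet(); // increment (+1) -> 11
        long afterDecrement = number.decrementAndGet(); // decrement (-1) -> 10
        boolean swapped = number.compareAndSet(10, 42); // compareAndSet -> true
        long previous = number.getAndAdd(8);            // getAndAdd returns old value

        System.out.println(afterIncrement + " " + afterDecrement + " "
                + swapped + " " + previous + " " + number.get());
        // prints: 11 10 true 42 50
    }
}
```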
+
+## Sample for **increment**:
+
+Java DSL
+
+    from("direct:increment")
+        .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.INCREMENT))
+        .toF("hazelcast-%sfoo", HazelcastConstants.ATOMICNUMBER_PREFIX);
+
+Spring XML
+
+    <route>
+        <from uri="direct:increment"/>
+        <setHeader name="CamelHazelcastOperationType">
+            <constant>increment</constant>
+        </setHeader>
+        <to uri="hazelcast-atomicvalue:foo"/>
+    </route>
+
+The actual value (after increment) will be provided inside the message
+body.
+
+## Sample for **decrement**:
+
+Java DSL
+
+    from("direct:decrement")
+        .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.DECREMENT))
+        .toF("hazelcast-%sfoo", HazelcastConstants.ATOMICNUMBER_PREFIX);
+
+Spring XML
+
+    <route>
+        <from uri="direct:decrement"/>
+        <setHeader name="CamelHazelcastOperationType">
+            <constant>decrement</constant>
+        </setHeader>
+        <to uri="hazelcast-atomicvalue:foo"/>
+    </route>
+
+The actual value (after decrement) will be provided inside the message
+body.
+
+## Sample for **destroy**
+
+Java DSL
+
+    from("direct:destroy")
+        .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.DESTROY))
+        .toF("hazelcast-%sfoo", HazelcastConstants.ATOMICNUMBER_PREFIX);
+
+Spring XML
+
+    <route>
+        <from uri="direct:destroy"/>
+        <setHeader name="CamelHazelcastOperationType">
+            <constant>destroy</constant>
+        </setHeader>
+        <to uri="hazelcast-atomicvalue:foo"/>
+    </route>
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component.
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|hazelcastInstance|The hazelcast instance reference which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance.||object| +|hazelcastMode|The hazelcast mode reference which kind of instance should be used. If you don't specify the mode, then the node mode will be the default.|node|string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|cacheName|The name of the cache||string| +|defaultOperation|To specify a default operation to use, if no operation header has been provided.||object| +|hazelcastConfigUri|Hazelcast configuration file.||string| +|hazelcastInstance|The hazelcast instance reference which can be used for hazelcast endpoint.||object| +|hazelcastInstanceName|The hazelcast instance reference name which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-hazelcast-instance.md b/camel-hazelcast-instance.md new file mode 100644 index 0000000000000000000000000000000000000000..8f8599e32fdddac3fd24b69496f21769b068fc8b --- /dev/null +++ b/camel-hazelcast-instance.md @@ -0,0 +1,52 @@ +# Hazelcast-instance + +**Since Camel 2.7** + +**Only consumer is supported** + +The [Hazelcast](http://www.hazelcast.com/) instance component is one of +Camel Hazelcast Components which allows you to consume join/leave events +of the cache instance in the cluster. Hazelcast makes sense in one +single "server node", but it’s extremely powerful in a clustered +environment. + +# instance consumer - from("hazelcast-instance:foo") + +The instance consumer fires if a new cache instance joins or leaves the +cluster. + +Here’s a sample: + + fromF("hazelcast-%sfoo", HazelcastConstants.INSTANCE_PREFIX) + .log("instance...") + .choice() + .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.ADDED)) + .log("...added") + .to("mock:added") + .otherwise() + .log("...removed") + .to("mock:removed"); + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|hazelcastInstance|The hazelcast instance reference which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance.||object| +|hazelcastMode|The hazelcast mode reference which kind of instance should be used. If you don't specify the mode, then the node mode will be the default.|node|string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|cacheName|The name of the cache||string| +|defaultOperation|To specify a default operation to use, if no operation header has been provided.||object| +|hazelcastConfigUri|Hazelcast configuration file.||string| +|hazelcastInstance|The hazelcast instance reference which can be used for hazelcast endpoint.||object| +|hazelcastInstanceName|The hazelcast instance reference name which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. 
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| diff --git a/camel-hazelcast-list.md b/camel-hazelcast-list.md new file mode 100644 index 0000000000000000000000000000000000000000..0c6c0fd9766c855e92aef2e0090e051de29cdfcb --- /dev/null +++ b/camel-hazelcast-list.md @@ -0,0 +1,101 @@ +# Hazelcast-list + +**Since Camel 2.7** + +**Both producer and consumer are supported** + +The [Hazelcast](http://www.hazelcast.com/) List component is one of +Camel Hazelcast Components which allows you to access a Hazelcast +distributed list. 
+
+# Options
+
+# List producer – to(“hazelcast-list:foo”)
+
+The list producer provides eight operations:
+
+- add
+
+- addAll
+
+- setvalue
+
+- get
+
+- removevalue
+
+- removeAll
+
+- clear
+
+- retainAll
+
+## Sample for **add**:
+
+    from("direct:add")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.ADD))
+    .toF("hazelcast-%sbar", HazelcastConstants.LIST_PREFIX);
+
+## Sample for **get**:
+
+    from("direct:get")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.GET))
+    .toF("hazelcast-%sbar", HazelcastConstants.LIST_PREFIX)
+    .to("seda:out");
+
+## Sample for **setvalue**:
+
+    from("direct:set")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.SET_VALUE))
+    .toF("hazelcast-%sbar", HazelcastConstants.LIST_PREFIX);
+
+## Sample for **removevalue**:
+
+    from("direct:removevalue")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.REMOVE_VALUE))
+    .toF("hazelcast-%sbar", HazelcastConstants.LIST_PREFIX);
+
+Note that the **CamelHazelcastObjectIndex** header is used for indexing
+purposes.
+
+# List consumer – from(“hazelcast-list:foo”)
+
+The list consumer provides two operations:
+
+- add
+
+- remove
+
+Here is a sample:
+
+    fromF("hazelcast-%smm", HazelcastConstants.LIST_PREFIX)
+    .log("object...")
+    .choice()
+        .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.ADDED))
+            .log("...added")
+            .to("mock:added")
+        .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.REMOVED))
+            .log("...removed")
+            .to("mock:removed")
+        .otherwise()
+            .log("fail!");
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|hazelcastInstance|The hazelcast instance reference which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance.||object| +|hazelcastMode|The hazelcast mode reference which kind of instance should be used. 
If you don't specify the mode, then the node mode will be the default.|node|string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|cacheName|The name of the cache||string| +|defaultOperation|To specify a default operation to use, if no operation header has been provided.||object| +|hazelcastConfigUri|Hazelcast configuration file.||string| +|hazelcastInstance|The hazelcast instance reference which can be used for hazelcast endpoint.||object| +|hazelcastInstanceName|The hazelcast instance reference name which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-hazelcast-map.md b/camel-hazelcast-map.md new file mode 100644 index 0000000000000000000000000000000000000000..fefc8815af2f0f28aafbaa568f046aafb2421ef7 --- /dev/null +++ b/camel-hazelcast-map.md @@ -0,0 +1,281 @@ +# Hazelcast-map + +**Since Camel 2.7** + +**Both producer and consumer are supported** + +The [Hazelcast](http://www.hazelcast.com/) Map component is one of Camel +Hazelcast Components which allows you to access a Hazelcast distributed +map. + +# Options + +# Map cache producer - to("hazelcast-map:foo") + +If you want to store a value in a map, you can use the map cache +producer. 
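Conceptually, the producer reads the operation named in the **CamelHazelcastOperationType** header and applies it to the backing map. The plain-Java sketch below illustrates that header-driven dispatch only; the class, the `dispatch` method, and the in-memory `HashMap` are made up for illustration and are not Camel or Hazelcast API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: an operation name (as carried in the
// CamelHazelcastOperationType header) selects the action applied to the cache,
// keyed by the object id (as carried in the CamelHazelcastObjectId header).
public class MapOperationSketch {
    private final Map<String, Object> cache = new HashMap<>();

    public Object dispatch(String operation, String objectId, Object body) {
        switch (operation) {
            case "put":
                return cache.put(objectId, body);   // store body under the oid
            case "get":
                return cache.get(objectId);         // read the value for the oid
            case "update":
                cache.put(objectId, body);          // overwrite the existing value
                return body;
            case "delete":
                return cache.remove(objectId);      // drop the entry
            default:
                throw new IllegalArgumentException("unsupported operation: " + operation);
        }
    }
}
```

In the real component the map is a distributed Hazelcast `IMap` shared across the cluster rather than a local `HashMap`, but the header-to-operation mapping works the same way.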
+
+The map cache producer provides the following operations, specified by the
+**CamelHazelcastOperationType** header:
+
+- put
+
+- putIfAbsent
+
+- get
+
+- getAll
+
+- keySet
+
+- containsKey
+
+- containsValue
+
+- delete
+
+- update
+
+- query
+
+- clear
+
+- evict
+
+- evictAll
+
+You can call the samples with:
+
+    template.sendBodyAndHeader("direct:[put|get|update|delete|query|evict]", "my-foo", HazelcastConstants.OBJECT_ID, "4711");
+
+## Sample for **put**:
+
+Java DSL
+
+    from("direct:put")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.PUT))
+    .toF("hazelcast-%sfoo", HazelcastConstants.MAP_PREFIX);
+
+Spring XML
+
+    <route>
+        <from uri="direct:put"/>
+        <setHeader name="CamelHazelcastOperationType">
+            <constant>put</constant>
+        </setHeader>
+        <to uri="hazelcast-map:foo"/>
+    </route>
+
+Sample for **put** with eviction:
+
+Java DSL
+
+    from("direct:put")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.PUT))
+    .setHeader(HazelcastConstants.TTL_VALUE, constant(Long.valueOf(1)))
+    .setHeader(HazelcastConstants.TTL_UNIT, constant(TimeUnit.MINUTES))
+    .toF("hazelcast-%sfoo", HazelcastConstants.MAP_PREFIX);
+
+Spring XML
+
+    <route>
+        <from uri="direct:put"/>
+        <setHeader name="CamelHazelcastOperationType">
+            <constant>put</constant>
+        </setHeader>
+        <setHeader name="CamelHazelcastTtlValue">
+            <constant>1</constant>
+        </setHeader>
+        <setHeader name="CamelHazelcastTtlUnit">
+            <constant>TimeUnit.MINUTES</constant>
+        </setHeader>
+        <to uri="hazelcast-map:foo"/>
+    </route>
+
+## Sample for **get**:
+
+Java DSL
+
+    from("direct:get")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.GET))
+    .toF("hazelcast-%sfoo", HazelcastConstants.MAP_PREFIX)
+    .to("seda:out");
+
+Spring XML
+
+    <route>
+        <from uri="direct:get"/>
+        <setHeader name="CamelHazelcastOperationType">
+            <constant>get</constant>
+        </setHeader>
+        <to uri="hazelcast-map:foo"/>
+        <to uri="seda:out"/>
+    </route>
+
+## Sample for **update**:
+
+Java DSL
+
+    from("direct:update")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.UPDATE))
+    .toF("hazelcast-%sfoo", HazelcastConstants.MAP_PREFIX);
+
+Spring XML
+
+    <route>
+        <from uri="direct:update"/>
+        <setHeader name="CamelHazelcastOperationType">
+            <constant>update</constant>
+        </setHeader>
+        <to uri="hazelcast-map:foo"/>
+    </route>
+
+## Sample for **delete**:
+
+Java DSL
+
+    from("direct:delete")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.DELETE))
+    .toF("hazelcast-%sfoo", HazelcastConstants.MAP_PREFIX);
+
+Spring XML
+
+    <route>
+        <from uri="direct:delete"/>
+        <setHeader name="CamelHazelcastOperationType">
+            <constant>delete</constant>
+        </setHeader>
+        <to uri="hazelcast-map:foo"/>
+    </route>
+
+## Sample for **query**:
+
+Java DSL
+
+    from("direct:query")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.QUERY))
+    .toF("hazelcast-%sfoo", HazelcastConstants.MAP_PREFIX)
+    .to("seda:out");
+
+Spring XML
+
+    <route>
+        <from uri="direct:query"/>
+        <setHeader name="CamelHazelcastOperationType">
+            <constant>query</constant>
+        </setHeader>
+        <to uri="hazelcast-map:foo"/>
+        <to uri="seda:out"/>
+    </route>
+
+For the query operation, Hazelcast offers an SQL-like syntax to query
+your distributed map.
+
+    String q1 = "bar > 1000";
+    template.sendBodyAndHeader("direct:query", null, HazelcastConstants.QUERY, q1);
+
+# Map cache consumer - from("hazelcast-map:foo")
+
+Hazelcast provides event listeners on its data grid. If you want to be
+notified when a cache is manipulated, you can use the map consumer. There
+are four events: **put**, **update**, **delete** and **envict**. The
+event type will be stored in the "**hazelcast.listener.action**" header
+variable. The map consumer provides some additional information inside
+these variables:
+
+Header Variables inside the response message:
+
+|Name|Type|Description|
+|---|---|---|
+|CamelHazelcastListenerTime|Long|time of the event in millis|
+|CamelHazelcastListenerType|String|the map consumer sets here "cachelistener"|
+|CamelHazelcastListenerAction|String|type of event (added, updated, envicted and removed)|
+|CamelHazelcastObjectId|String|the oid of the object|
+|CamelHazelcastCacheName|String|the name of the cache (e.g., "foo")|
+|CamelHazelcastCacheType|String|the type of the cache (e.g., map)|
+ +The object value will be stored within **put** and **update** actions +inside the message body. + +Here’s a sample: + + fromF("hazelcast-%sfoo", HazelcastConstants.MAP_PREFIX) + .log("object...") + .choice() + .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.ADDED)) + .log("...added") + .to("mock:added") + .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.ENVICTED)) + .log("...envicted") + .to("mock:envicted") + .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.UPDATED)) + .log("...updated") + .to("mock:updated") + .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.REMOVED)) + .log("...removed") + .to("mock:removed") + .otherwise() + .log("fail!"); + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|hazelcastInstance|The hazelcast instance reference which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance.||object| +|hazelcastMode|The hazelcast mode reference which kind of instance should be used. If you don't specify the mode, then the node mode will be the default.|node|string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|cacheName|The name of the cache||string| +|defaultOperation|To specify a default operation to use, if no operation header has been provided.||object| +|hazelcastConfigUri|Hazelcast configuration file.||string| +|hazelcastInstance|The hazelcast instance reference which can be used for hazelcast endpoint.||object| +|hazelcastInstanceName|The hazelcast instance reference name which can be used for hazelcast endpoint. 
If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-hazelcast-multimap.md b/camel-hazelcast-multimap.md new file mode 100644 index 0000000000000000000000000000000000000000..9535afe265c6d12778fee501e058ca036ade9bd3 --- /dev/null +++ b/camel-hazelcast-multimap.md @@ -0,0 +1,160 @@
+# Hazelcast-multimap
+
+**Since Camel 2.7**
+
+**Both producer and consumer are supported**
+
+The [Hazelcast](http://www.hazelcast.com/) Multimap component is one of
+Camel Hazelcast Components which allows you to access a Hazelcast
+distributed multimap.
+
+# Options
+
+# multimap cache producer - to("hazelcast-multimap:foo")
+
+A multimap is a cache where you can store n values for one key.
+
+The multimap producer provides eight operations:
+
+- put
+
+- get
+
+- removevalue
+
+- delete
+
+- containsKey
+
+- containsValue
+
+- clear
+
+- valueCount
+
+## Sample for **put**:
+
+Java DSL
+
+    from("direct:put")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.PUT))
+    .to(String.format("hazelcast-%sbar", HazelcastConstants.MULTIMAP_PREFIX));
+
+Spring XML
+
+    <route>
+        <from uri="direct:put"/>
+        <setHeader name="CamelHazelcastOperationType">
+            <constant>put</constant>
+        </setHeader>
+        <to uri="hazelcast-multimap:bar"/>
+    </route>
+
+## Sample for **removevalue**:
+
+Java DSL
+
+    from("direct:removevalue")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.REMOVE_VALUE))
+    .toF("hazelcast-%sbar", HazelcastConstants.MULTIMAP_PREFIX);
+
+Spring XML
+
+    <route>
+        <from uri="direct:removevalue"/>
+        <setHeader name="CamelHazelcastOperationType">
+            <constant>removevalue</constant>
+        </setHeader>
+        <to uri="hazelcast-multimap:bar"/>
+    </route>
+
+To remove a value, you have to provide the value you want to remove
+inside the message body. If you have a multimap object
+`{key: "4711", values: {"my-foo", "my-bar"}}`, you have to put
+`my-foo` inside the message body to remove the `my-foo` value. 
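The remove-by-value semantics described above can be sketched in plain Java, without Hazelcast, as a key-to-list map. This is an illustrative analogy only; the class and method names are made up and are not part of the Camel or Hazelcast API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of multimap removevalue semantics: one key holds
// n values, and removal is driven by the value (the message body), not the key.
public class MultimapRemoveValueSketch {
    private final Map<String, List<String>> multimap = new HashMap<>();

    public void put(String key, String value) {
        // a multimap stores n values for one key
        multimap.computeIfAbsent(key, k -> new ArrayList<>()).add(value);
    }

    // removevalue: drop the first occurrence of the value under every key
    public void removeValue(String value) {
        for (List<String> values : multimap.values()) {
            values.remove(value);
        }
    }

    public List<String> get(String key) {
        return multimap.getOrDefault(key, new ArrayList<>());
    }
}
```

With `{key: "4711", values: {"my-foo", "my-bar"}}`, calling `removeValue("my-foo")` leaves only `my-bar` under `4711`, mirroring what sending `my-foo` as the message body does against the real multimap.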
+
+## Sample for **get**:
+
+Java DSL
+
+    from("direct:get")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.GET))
+    .toF("hazelcast-%sbar", HazelcastConstants.MULTIMAP_PREFIX)
+    .to("seda:out");
+
+Spring XML
+
+    <route>
+        <from uri="direct:get"/>
+        <setHeader name="CamelHazelcastOperationType">
+            <constant>get</constant>
+        </setHeader>
+        <to uri="hazelcast-multimap:bar"/>
+        <to uri="seda:out"/>
+    </route>
+
+## Sample for **delete**:
+
+Java DSL
+
+    from("direct:delete")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.DELETE))
+    .toF("hazelcast-%sbar", HazelcastConstants.MULTIMAP_PREFIX);
+
+Spring XML
+
+    <route>
+        <from uri="direct:delete"/>
+        <setHeader name="CamelHazelcastOperationType">
+            <constant>delete</constant>
+        </setHeader>
+        <to uri="hazelcast-multimap:bar"/>
+    </route>
+
+You can call them in your test class with:
+
+    template.sendBodyAndHeader("direct:[put|get|removevalue|delete]", "my-foo", HazelcastConstants.OBJECT_ID, "4711");
+
+# multimap cache consumer - from("hazelcast-multimap:foo")
+
+For the multimap cache this component provides the same listeners /
+variables as for the map cache consumer (except the update and eviction
+listener). The only difference is the **multimap** prefix inside the
+URI. Here is a sample:
+
+    fromF("hazelcast-%sbar", HazelcastConstants.MULTIMAP_PREFIX)
+    .log("object...")
+    .choice()
+        .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.ADDED))
+            .log("...added")
+            .to("mock:added")
+        //.when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.ENVICTED))
+        //    .log("...envicted")
+        //    .to("mock:envicted")
+        .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.REMOVED))
+            .log("...removed")
+            .to("mock:removed")
+        .otherwise()
+            .log("fail!");
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. 
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|hazelcastInstance|The hazelcast instance reference which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance.||object| +|hazelcastMode|The hazelcast mode reference which kind of instance should be used. 
If you don't specify the mode, then the node mode will be the default.|node|string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|cacheName|The name of the cache||string| +|defaultOperation|To specify a default operation to use, if no operation header has been provided.||object| +|hazelcastConfigUri|Hazelcast configuration file.||string| +|hazelcastInstance|The hazelcast instance reference which can be used for hazelcast endpoint.||object| +|hazelcastInstanceName|The hazelcast instance reference name which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-hazelcast-queue.md b/camel-hazelcast-queue.md new file mode 100644 index 0000000000000000000000000000000000000000..406cd8cd1e91df584f5cc1ab59655bde9114d84c --- /dev/null +++ b/camel-hazelcast-queue.md @@ -0,0 +1,166 @@
+# Hazelcast-queue
+
+**Since Camel 2.7**
+
+**Both producer and consumer are supported**
+
+The [Hazelcast](http://www.hazelcast.com/) Queue component is one of
+Camel Hazelcast Components which allows you to access a Hazelcast
+distributed queue.
+
+# Queue producer – to(“hazelcast-queue:foo”)
+
+The queue producer provides 12 operations:
+
+- add
+
+- put
+
+- poll
+
+- peek
+
+- offer
+
+- removevalue
+
+- remainingCapacity
+
+- removeAll
+
+- removeIf
+
+- drainTo
+
+- take
+
+- retainAll
+
+## Sample for **add**:
+
+    from("direct:add")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.ADD))
+    .toF("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX);
+
+## Sample for **put**:
+
+    from("direct:put")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.PUT))
+    .toF("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX);
+
+## Sample for **poll**:
+
+    from("direct:poll")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.POLL))
+    .toF("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX);
+
+## Sample for **peek**:
+
+    from("direct:peek")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.PEEK))
+    .toF("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX);
+
+## 
Sample for **offer**:
+
+    from("direct:offer")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.OFFER))
+    .toF("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX);
+
+## Sample for **removevalue**:
+
+    from("direct:removevalue")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.REMOVE_VALUE))
+    .toF("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX);
+
+## Sample for **remaining capacity**:
+
+    from("direct:remaining-capacity")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.REMAINING_CAPACITY))
+    .toF("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX);
+
+## Sample for **remove all**:
+
+    from("direct:removeAll")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.REMOVE_ALL))
+    .toF("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX);
+
+## Sample for **remove if**:
+
+    from("direct:removeIf")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.REMOVE_IF))
+    .toF("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX);
+
+## Sample for **drain to**:
+
+    from("direct:drainTo")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.DRAIN_TO))
+    .toF("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX);
+
+## Sample for **take**:
+
+    from("direct:take")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.TAKE))
+    .toF("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX);
+
+## Sample for **retain all**:
+
+    from("direct:retainAll")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.RETAIN_ALL))
+    .toF("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX);
+
+# Queue consumer – from(“hazelcast-queue:foo”)
+
+The queue consumer provides two different modes:
+
+- Poll
+
+- Listen
+
+Sample for **Poll** mode:
+
+    fromF("hazelcast-%sfoo?queueConsumerMode=Poll", HazelcastConstants.QUEUE_PREFIX).to("mock:result");
+
+In this way, the consumer will poll the 
queue and return the head of the
+queue or null after a timeout.
+
+In Listen mode, instead, the consumer listens for events on the queue.
+
+The queue consumer in Listen mode provides two operations:
+
+- add
+
+- remove
+
+Sample for **Listen** mode
+
+    fromF("hazelcast-%smm", HazelcastConstants.QUEUE_PREFIX)
+    .log("object...")
+    .choice()
+        .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.ADDED))
+            .log("...added")
+            .to("mock:added")
+        .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.REMOVED))
+            .log("...removed")
+            .to("mock:removed")
+        .otherwise()
+            .log("fail!");
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|hazelcastInstance|The hazelcast instance reference which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance.||object| +|hazelcastMode|The hazelcast mode reference which kind of instance should be used. If you don't specify the mode, then the node mode will be the default.|node|string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|cacheName|The name of the cache||string| +|defaultOperation|To specify a default operation to use, if no operation header has been provided.||object| +|hazelcastConfigUri|Hazelcast configuration file.||string| +|hazelcastInstance|The hazelcast instance reference which can be used for hazelcast endpoint.||object| +|hazelcastInstanceName|The hazelcast instance reference name which can be used for hazelcast endpoint. 
If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance.||string| +|pollingTimeout|Define the polling timeout of the Queue consumer in Poll mode|10000|integer| +|poolSize|Define the Pool size for Queue Consumer Executor|1|integer| +|queueConsumerMode|Define the Queue Consumer mode: Listen or Poll|Listen|object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-hazelcast-replicatedmap.md b/camel-hazelcast-replicatedmap.md
new file mode 100644
index 0000000000000000000000000000000000000000..c4f59de3fead7ae4df19489f2c64f51d40a4dfd1
--- /dev/null
+++ b/camel-hazelcast-replicatedmap.md
@@ -0,0 +1,191 @@
+# Hazelcast-replicatedmap
+
+**Since Camel 2.16**
+
+**Both producer and consumer are supported**
+
+The [Hazelcast](http://www.hazelcast.com/) replicatedmap component is
+one of Camel Hazelcast Components which allows you to access a Hazelcast
+replicated map. A replicated map is a weakly consistent, distributed
+key-value data structure with no data partition.
+
+# replicatedmap cache producer
+
+The replicatedmap producer provides 6 operations:
+
+- put
+
+- get
+
+- delete
+
+- clear
+
+- containsKey
+
+- containsValue
+
+## Sample for **put**:
+
+Java DSL:
+
+    from("direct:put")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.PUT))
+    .to(String.format("hazelcast-%sbar", HazelcastConstants.REPLICATEDMAP_PREFIX));
+
+Spring XML:
+
+    <route>
+        <from uri="direct:put" />
+        <setHeader name="CamelHazelcastOperationType">
+            <constant>put</constant>
+        </setHeader>
+        <to uri="hazelcast-replicatedmap:bar" />
+    </route>
+
+## Sample for **get**:
+
+Java DSL:
+
+    from("direct:get")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.GET))
+    .toF("hazelcast-%sbar", HazelcastConstants.REPLICATEDMAP_PREFIX)
+    .to("seda:out");
+
+Spring XML:
+
+    <route>
+        <from uri="direct:get" />
+        <setHeader name="CamelHazelcastOperationType">
+            <constant>get</constant>
+        </setHeader>
+        <to uri="hazelcast-replicatedmap:bar" />
+        <to uri="seda:out" />
+    </route>
+
+## Sample for **delete**:
+
+Java DSL:
+
+    from("direct:delete")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.DELETE))
+    .toF("hazelcast-%sbar", HazelcastConstants.REPLICATEDMAP_PREFIX);
+
+Spring XML:
+
+    <route>
+        <from uri="direct:delete" />
+        <setHeader name="CamelHazelcastOperationType">
+            <constant>delete</constant>
+        </setHeader>
+        <to uri="hazelcast-replicatedmap:bar" />
+    </route>
+
+You can call them in your test class with:
+
+    template.sendBodyAndHeader("direct:[put|get|delete|clear]", "my-foo", HazelcastConstants.OBJECT_ID, "4711");
+
+# replicatedmap cache consumer
+
+For the replicatedmap cache, this component provides the 
same listeners /
+variables as for the map cache consumer (except the update and eviction
+listeners). The only difference is the **replicatedmap** prefix inside
+the URI. Here is a sample:
+
+    fromF("hazelcast-%sbar", HazelcastConstants.REPLICATEDMAP_PREFIX)
+    .log("object...")
+    .choice()
+        .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.ADDED))
+            .log("...added")
+            .to("mock:added")
+        //.when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.ENVICTED))
+        //    .log("...envicted")
+        //    .to("mock:envicted")
+        .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.REMOVED))
+            .log("...removed")
+            .to("mock:removed")
+        .otherwise()
+            .log("fail!");
+
+Header variables inside the response message:
+
+|Name|Type|Description|
+|---|---|---|
+|CamelHazelcastListenerTime|Long|time of the event in millis|
+|CamelHazelcastListenerType|String|the map consumer sets here "cachelistener"|
+|CamelHazelcastListenerAction|String|type of event - here added and removed (and soon envicted)|
+|CamelHazelcastObjectId|String|the oid of the object|
+|CamelHazelcastCacheName|String|the name of the cache (e.g., "foo")|
+|CamelHazelcastCacheType|String|the type of the cache (e.g., replicatedmap)|
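The `toF`/`String.format` prefix pattern used throughout these samples simply splices the cache-type prefix into the endpoint URI. A plain-Java illustration; the value `"replicatedmap:"` is the assumed value of `HazelcastConstants.REPLICATEDMAP_PREFIX`:

```java
public class EndpointUriDemo {
    public static void main(String[] args) {
        // Assumed value of HazelcastConstants.REPLICATEDMAP_PREFIX
        String prefix = "replicatedmap:";
        // toF("hazelcast-%sbar", prefix) builds the same URI as:
        String uri = String.format("hazelcast-%sbar", prefix);
        System.out.println(uri); // hazelcast-replicatedmap:bar
    }
}
```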
+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|hazelcastInstance|The hazelcast instance reference which can be used for hazelcast endpoint. 
If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance.||object| +|hazelcastMode|The hazelcast mode reference which kind of instance should be used. If you don't specify the mode, then the node mode will be the default.|node|string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|cacheName|The name of the cache||string| +|defaultOperation|To specify a default operation to use, if no operation header has been provided.||object| +|hazelcastConfigUri|Hazelcast configuration file.||string| +|hazelcastInstance|The hazelcast instance reference which can be used for hazelcast endpoint.||object| +|hazelcastInstanceName|The hazelcast instance reference name which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-hazelcast-ringbuffer.md b/camel-hazelcast-ringbuffer.md new file mode 100644 index 0000000000000000000000000000000000000000..27d95435f10ea2c7ce46180eb79c26ec99d07295 --- /dev/null +++ b/camel-hazelcast-ringbuffer.md @@ -0,0 +1,73 @@ +# Hazelcast-ringbuffer + +**Since Camel 2.16** + +**Only producer is supported** + +The [Hazelcast](http://www.hazelcast.com/) ringbuffer component is one +of Camel Hazelcast Components which allows you to access Hazelcast +ringbuffer. Ringbuffer is a distributed data structure where the data is +stored in a ring-like structure. You can think of it as a circular array +with a certain capacity. 
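To make the "circular array with a certain capacity" picture concrete, here is a minimal plain-Java sketch of the idea. It is a conceptual model only, not Hazelcast's implementation; the method names merely echo the producer operations.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Conceptual model: a fixed-capacity buffer where the oldest entry is
// overwritten once the capacity is reached. NOT Hazelcast's implementation.
class ToyRingbuffer<T> {
    private final Deque<T> items = new ArrayDeque<>();
    private final int capacity;

    ToyRingbuffer(int capacity) { this.capacity = capacity; }

    void add(T item) {                       // "add" operation
        if (items.size() == capacity) {
            items.removeFirst();             // overwrite the oldest entry when full
        }
        items.addLast(item);
    }

    T readOnceHead() { return items.pollFirst(); }   // "readOnceHead"
    T readOnceTail() { return items.pollLast(); }    // "readOnceTail"
    int capacity() { return capacity; }              // "capacity"
    int remainingCapacity() { return capacity - items.size(); } // "remainingCapacity"

    public static void main(String[] args) {
        ToyRingbuffer<String> rb = new ToyRingbuffer<>(2);
        rb.add("a");
        rb.add("b");
        rb.add("c");                              // overwrites "a"
        System.out.println(rb.readOnceHead());    // b
        System.out.println(rb.remainingCapacity()); // 1
    }
}
```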
+
+# ringbuffer cache producer
+
+The ringbuffer producer provides 5 operations:
+
+- add
+
+- readOnceHead
+
+- readOnceTail
+
+- remainingCapacity
+
+- capacity
+
+## Sample for **add**:
+
+Java DSL:
+
+    from("direct:put")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.ADD))
+    .to(String.format("hazelcast-%sbar", HazelcastConstants.RINGBUFFER_PREFIX));
+
+Spring XML:
+
+    <route>
+        <from uri="direct:put" />
+        <setHeader name="CamelHazelcastOperationType">
+            <constant>add</constant>
+        </setHeader>
+        <to uri="hazelcast-ringbuffer:bar" />
+    </route>
+
+## Sample for **readonce from head**:
+
+Java DSL:
+
+    from("direct:get")
+    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.READ_ONCE_HEAD))
+    .toF("hazelcast-%sbar", HazelcastConstants.RINGBUFFER_PREFIX)
+    .to("seda:out");
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|hazelcastInstance|The hazelcast instance reference which can be used for hazelcast endpoint. 
If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance.||object| +|hazelcastMode|The hazelcast mode reference which kind of instance should be used. If you don't specify the mode, then the node mode will be the default.|node|string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|cacheName|The name of the cache||string| +|defaultOperation|To specify a default operation to use, if no operation header has been provided.||object| +|hazelcastConfigUri|Hazelcast configuration file.||string| +|hazelcastInstance|The hazelcast instance reference which can be used for hazelcast endpoint.||object| +|hazelcastInstanceName|The hazelcast instance reference name which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-hazelcast-seda.md b/camel-hazelcast-seda.md
new file mode 100644
index 0000000000000000000000000000000000000000..3ef9ea7994242910c75adb02c6d37181475cb3aa
--- /dev/null
+++ b/camel-hazelcast-seda.md
@@ -0,0 +1,72 @@
+# Hazelcast-seda
+
+**Since Camel 2.7**
+
+**Both producer and consumer are supported**
+
+The [Hazelcast](http://www.hazelcast.com/) SEDA component is one of
+Camel Hazelcast Components which allows you to access a Hazelcast
+BlockingQueue. The SEDA component differs from the other components
+provided: it implements a work queue in order to support asynchronous
+SEDA architectures, similar to the core "SEDA" component.
+
+# SEDA producer – to(“hazelcast-seda:foo”)
+
+The SEDA producer provides no operations. You only send data to the
+specified queue.
+
+Java DSL:
+
+    from("direct:foo")
+    .to("hazelcast-seda:foo");
+
+Spring XML:
+
+    <route>
+        <from uri="direct:foo" />
+        <to uri="hazelcast-seda:foo" />
+    </route>
+
+# SEDA consumer – from(“hazelcast-seda:foo”)
+
+The SEDA consumer provides no operations. You only retrieve data from
+the specified queue.
+
+Java DSL:
+
+    from("hazelcast-seda:foo")
+    .to("mock:result");
+
+Spring XML:
+
+    <route>
+        <from uri="hazelcast-seda:foo" />
+        <to uri="mock:result" />
+    </route>
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. 
In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|hazelcastInstance|The hazelcast instance reference which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance.||object| +|hazelcastMode|The hazelcast mode reference which kind of instance should be used. 
If you don't specify the mode, then the node mode will be the default.|node|string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|cacheName|The name of the cache||string| +|defaultOperation|To specify a default operation to use, if no operation header has been provided.||object| +|hazelcastConfigUri|Hazelcast configuration file.||string| +|hazelcastInstance|The hazelcast instance reference which can be used for hazelcast endpoint.||object| +|hazelcastInstanceName|The hazelcast instance reference name which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|concurrentConsumers|To use concurrent consumers polling from the SEDA queue.|1|integer| +|onErrorDelay|Milliseconds before consumer continues polling after an error has occurred.|1000|integer| +|pollTimeout|The timeout used when consuming from the SEDA queue. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown.|1000|integer| +|transacted|If set to true then the consumer runs in transaction mode, where the messages in the seda queue will only be removed if the transaction commits, which happens when the processing is complete.|false|boolean| +|transferExchange|If set to true the whole Exchange will be transfered. If header or body contains not serializable objects, they will be skipped.|false|boolean| diff --git a/camel-hazelcast-set.md b/camel-hazelcast-set.md new file mode 100644 index 0000000000000000000000000000000000000000..6f4c95f9605d99e8275ddf395bf00b48191ae958 --- /dev/null +++ b/camel-hazelcast-set.md @@ -0,0 +1,53 @@ +# Hazelcast-set + +**Since Camel 2.7** + +**Both producer and consumer are supported** + +The [Hazelcast](http://www.hazelcast.com/) Set component is one of Camel +Hazelcast Components which allows you to access a Hazelcast distributed +set. 
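A Hazelcast distributed set follows the usual `java.util.Set` contract (no duplicates, bulk operations). A plain-Java sketch of the semantics behind the producer's bulk operations such as addAll, removeAll, and retainAll; this illustrates the contract only, not the Hazelcast client API:

```java
import java.util.HashSet;
import java.util.Set;

// Plain-Java illustration of set semantics: add, remove, addAll,
// removeAll, retainAll. This models the contract, not Hazelcast's API.
public class SetOpsDemo {
    public static void main(String[] args) {
        Set<String> set = new HashSet<>(Set.of("a", "b", "c"));
        set.add("d");                              // add an element
        set.remove("a");                           // remove a value
        set.addAll(Set.of("e", "f"));              // bulk add
        set.removeAll(Set.of("b", "e"));           // bulk remove
        set.retainAll(Set.of("c", "d", "f", "z")); // keep only the intersection
        System.out.println(set);                   // contains c, d, f (order not guaranteed)
    }
}
```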
+
+# set cache producer
+
+The set producer provides seven operations:
+
+- add
+
+- removeValue
+
+- clear
+
+- addAll
+
+- removeAll
+
+- retainAll
+
+- getAll
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|hazelcastInstance|The hazelcast instance reference which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance.||object| +|hazelcastMode|The hazelcast mode reference which kind of instance should be used. If you don't specify the mode, then the node mode will be the default.|node|string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|cacheName|The name of the cache||string| +|defaultOperation|To specify a default operation to use, if no operation header has been provided.||object| +|hazelcastConfigUri|Hazelcast configuration file.||string| +|hazelcastInstance|The hazelcast instance reference which can be used for hazelcast endpoint.||object| +|hazelcastInstanceName|The hazelcast instance reference name which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-hazelcast-topic.md b/camel-hazelcast-topic.md
new file mode 100644
index 0000000000000000000000000000000000000000..ca97414db298d0a2487fd5e3c41e13648c9522ba
--- /dev/null
+++ b/camel-hazelcast-topic.md
@@ -0,0 +1,62 @@
+# Hazelcast-topic
+
+**Since Camel 2.15**
+
+**Both producer and consumer are supported**
+
+The [Hazelcast](http://www.hazelcast.com/) Topic component is one of
+Camel Hazelcast Components which allows you to access a Hazelcast
+distributed topic.
+
+# Topic producer – to(“hazelcast-topic:foo”)
+
+ +## Sample for **publish**: + + from("direct:add") + .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.PUBLISH)) + .toF("hazelcast-%sbar", HazelcastConstants.PUBLISH_OPERATION); + +# Topic consumer – from(“hazelcast-topic:foo”) + +The topic consumer provides only one operation (received). This +component is supposed to support multiple consumption as it’s expected +when it comes to topics, so you are free to have as many consumers as +you need on the same hazelcast topic. + + fromF("hazelcast-%sfoo", HazelcastConstants.TOPIC_PREFIX) + .choice() + .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.RECEIVED)) + .log("...message received") + .otherwise() + .log("...this should never have happened") + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|hazelcastInstance|The hazelcast instance reference which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance.||object| +|hazelcastMode|The hazelcast mode reference which kind of instance should be used. If you don't specify the mode, then the node mode will be the default.|node|string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|cacheName|The name of the cache||string| +|defaultOperation|To specify a default operation to use, if no operation header has been provided.||object| +|hazelcastConfigUri|Hazelcast configuration file.||string| +|hazelcastInstance|The hazelcast instance reference which can be used for hazelcast endpoint.||object| +|hazelcastInstanceName|The hazelcast instance reference name which can be used for hazelcast endpoint. 
If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance.||string| +|reliable|Define if the endpoint will use a reliable Topic struct or not.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-http.md b/camel-http.md
new file mode 100644
index 0000000000000000000000000000000000000000..2923b0f42aff9b9f87df954d7d05126706f11745
--- /dev/null
+++ b/camel-http.md
@@ -0,0 +1,583 @@
+# Http
+
+**Since Camel 2.3**
+
+**Only producer is supported**
+
+The HTTP component provides HTTP-based endpoints for calling external
+HTTP resources (as a client to call external servers using HTTP).
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-http</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    http:hostname[:port][/resourceUri][?options]
+
+By default, port 80 is used for HTTP and 443 for HTTPS.
+
+# Message Body
+
+Camel will store the HTTP response from the external server on the *OUT*
+body. All headers from the *IN* message will be copied to the *OUT*
+message, so headers are preserved during routing. Additionally, Camel
+will add the HTTP response headers as well to the *OUT* message headers.
+
+# Using System Properties
+
+When setting useSystemProperties to true, the HTTP Client will look for
+the following System Properties and use them:
+
+- `ssl.TrustManagerFactory.algorithm`
+
+- `javax.net.ssl.trustStoreType`
+
+- `javax.net.ssl.trustStore`
+
+- `javax.net.ssl.trustStoreProvider`
+
+- `javax.net.ssl.trustStorePassword`
+
+- `java.home`
+
+- `ssl.KeyManagerFactory.algorithm`
+
+- `javax.net.ssl.keyStoreType`
+
+- `javax.net.ssl.keyStore`
+
+- `javax.net.ssl.keyStoreProvider`
+
+- `javax.net.ssl.keyStorePassword`
+
+- `http.proxyHost`
+
+- `http.proxyPort`
+
+- `http.nonProxyHosts`
+
+- `http.keepAlive`
+
+- `http.maxConnections`
+
+# Response code
+
+Camel will handle the response according to the HTTP response code:
+
+- Response code is in the range 100..299, Camel regards it as a
+  success response.
+
+- Response code is in the range 300..399, Camel regards it as a
+  redirection response and will throw a `HttpOperationFailedException`
+  with the information.
+
+- Response code is 400+, Camel regards it as an external server
+  failure and will throw a `HttpOperationFailedException` with the
+  information.
+
+**throwExceptionOnFailure**
+
+The option `throwExceptionOnFailure` can be set to `false` to prevent
+the `HttpOperationFailedException` from being thrown for failed response
+codes. This allows you to get any response from the remote server.
+
+# Exceptions
+
+The `HttpOperationFailedException` exception contains the following
+information:
+
+- The HTTP status code
+
+- The HTTP status line (text of the status code)
+
+- Redirect location, if the server returned a redirect
+
+- Response body as a `java.lang.String`, if the server provided a body
+  in the response
+
+# Which HTTP method will be used
+
+The following algorithm is used to determine what HTTP method should be
+used:
+
+1. Use method provided as endpoint configuration (`httpMethod`).
+2. Use method provided in header (`Exchange.HTTP_METHOD`).
+3. `GET` if query string is provided in header.
+4. `GET` if endpoint is configured with a query string.
+5. `POST` if there is data to send (body is not `null`).
+6. `GET` otherwise.
+
+# Configuring URI to call
+
+You can set the HTTP producer's URI directly from the endpoint URI. In
+the route below, Camel will call out to the external server, `oldhost`,
+using HTTP.
+
+    from("direct:start")
+        .to("http://oldhost");
+
+And the equivalent XML DSL:
+
+    <route>
+        <from uri="direct:start"/>
+        <to uri="http://oldhost"/>
+    </route>
+
+You can override the HTTP endpoint URI by adding a header with the key
+`Exchange.HTTP_URI` on the message.
+
+    from("direct:start")
+        .setHeader(Exchange.HTTP_URI, constant("http://newhost"))
+        .to("http://oldhost");
+
+In the sample above, Camel will call [http://newhost](http://newhost) even though the
+endpoint is configured with [http://oldhost](http://oldhost).
+If the HTTP endpoint is working in bridge mode, it will ignore the
+`Exchange.HTTP_URI` message header.
+
+# Configuring URI Parameters
+
+The **http** producer supports URI parameters to be sent to the HTTP
+server. The URI parameters can either be set directly on the endpoint
+URI or as a header with the key `Exchange.HTTP_QUERY` on the message.
+
+    from("direct:start")
+        .to("http://oldhost?order=123&detail=short");
+
+Or options provided in a header:
+
+    from("direct:start")
+        .setHeader(Exchange.HTTP_QUERY, constant("order=123&detail=short"))
+        .to("http://oldhost");
+
+# How to set the http method (GET/PATCH/POST/PUT/DELETE/HEAD/OPTIONS/TRACE) to the HTTP producer
+
+The HTTP component provides a way to set the HTTP request method by
+setting the message header. Here is an example:
+
+    from("direct:start")
+        .setHeader(Exchange.HTTP_METHOD, constant(org.apache.camel.component.http.HttpMethods.POST))
+        .to("http://www.google.com")
+        .to("mock:results");
+
+The method can be written a bit shorter using the string constants:
+
+    .setHeader("CamelHttpMethod", constant("POST"))
+
+And the equivalent XML DSL:
+
+    <route>
+        <from uri="direct:start"/>
+        <setHeader name="CamelHttpMethod">
+            <constant>POST</constant>
+        </setHeader>
+        <to uri="http://www.google.com"/>
+        <to uri="mock:results"/>
+    </route>
+
+# Using client timeout - SO\_TIMEOUT
+
+See the
+[HttpSOTimeoutTest](https://github.com/apache/camel/blob/main/components/camel-http/src/test/java/org/apache/camel/component/http/HttpSOTimeoutTest.java)
+unit test.
+
+# Configuring a Proxy
+
+The HTTP component provides a way to configure a proxy.
+
+    from("direct:start")
+        .to("http://oldhost?proxyAuthHost=www.myproxy.com&proxyAuthPort=80");
+
+There is also support for proxy authentication via the
+`proxyAuthUsername` and `proxyAuthPassword` options.
+
+## Using proxy settings outside of URI
+
+To avoid System properties conflicts, you can set proxy configuration
+only from the CamelContext or URI.
+Java DSL:
+
+    context.getGlobalOptions().put("http.proxyHost", "172.168.18.9");
+    context.getGlobalOptions().put("http.proxyPort", "8080");
+
+Spring XML:
+
+    <camelContext>
+        <properties>
+            <property key="http.proxyHost" value="172.168.18.9"/>
+            <property key="http.proxyPort" value="8080"/>
+        </properties>
+    </camelContext>
+
+Camel will first set the settings from Java System or CamelContext
+Properties and then the endpoint proxy options if provided. So you can
+override the system properties with the endpoint options.
+
+There is also a `http.proxyScheme` property you can set to explicitly
+configure the scheme to use.
+
+# Configuring charset
+
+If you are using `POST` to send data, you can configure the `charset`
+using the `Exchange` property:
+
+    exchange.setProperty(Exchange.CHARSET_NAME, "ISO-8859-1");
+
+## Sample with scheduled poll
+
+This sample polls the Google homepage every 10 seconds and writes the
+page to the file `message.html`:
+
+    from("timer://foo?fixedRate=true&delay=0&period=10000")
+        .to("http://www.google.com")
+        .setHeader(FileComponent.HEADER_FILE_NAME, "message.html")
+        .to("file:target/google");
+
+## URI Parameters from the endpoint URI
+
+In this sample, we have the complete URI endpoint that is just what you
+would have typed in a web browser. Multiple URI parameters can of course
+be set using the `&` character as separator, just as you would in the
+web browser. Camel does no tricks here.
+
+    // we query for Camel at the Google page
+    template.sendBody("http://www.google.com/search?q=Camel", null);
+
+## URI Parameters from the Message
+
+    Map<String, Object> headers = new HashMap<>();
+    headers.put(Exchange.HTTP_QUERY, "q=Camel&lr=lang_en");
+    // we query for Camel and English language at Google
+    template.sendBody("http://www.google.com/search", null, headers);
+
+In the header value above, notice that it should **not** be prefixed with
+`?` and you can separate parameters as usual with the `&` char.
+
+## Getting the Response Code
+
+You can get the HTTP response code from the HTTP component by getting
+the value from the Out message header with
+`Exchange.HTTP_RESPONSE_CODE`.
+
+    Exchange exchange = template.send("http://www.google.com/search", new Processor() {
+        public void process(Exchange exchange) throws Exception {
+            exchange.getIn().setHeader(Exchange.HTTP_QUERY, "hl=en&q=activemq");
+        }
+    });
+    Message out = exchange.getOut();
+    int responseCode = out.getHeader(Exchange.HTTP_RESPONSE_CODE, Integer.class);
+
+# Disabling Cookies
+
+To disable cookies in the CookieStore, you can set the HTTP Client to
+ignore cookies by adding this URI option:
+`httpClient.cookieSpec=ignore`. This does not affect cookies manually set
+in the `Cookie` header.
+
+# Basic auth with the streaming message body
+
+To avoid the `NonRepeatableRequestException`, you need to use
+preemptive basic authentication by adding the option:
+`authenticationPreemptive=true`
+
+# OAuth2 Support
+
+To get an access token from an Authorization Server and pass it in the
+`Authorization` header of requests to protected services, use the
+`oauth2ClientId`, `oauth2ClientSecret` and `oauth2TokenEndpoint`
+properties, which should be defined as specified in RFC 6749 and
+provided by your Authorization Server.
+
+In the example below, Camel makes an underlying request to
+`https://localhost:8080/realms/master/protocol/openid-connect/token`
+using the provided credentials (client id and client secret), extracts
+the `access_token` from the response, and finally sets it in the
+`Authorization` header of the request sent to `https://localhost:9090`.
+
+    String clientId = "my-client-id";
+    String clientSecret = "my-client-secret";
+    String tokenEndpoint = "https://localhost:8080/realms/master/protocol/openid-connect/token";
+    String scope = "my-scope"; // optional scope
+
+    from("direct:start")
+        .to("https://localhost:9090/?oauth2ClientId=" + clientId + "&oauth2ClientSecret=" + clientSecret + "&oauth2TokenEndpoint=" + tokenEndpoint + "&oauth2Scope=" + scope);
+
+Camel only supports the OAuth2 client credentials flow.
+
+Camel does not perform any validation of the access token. It is up to
+the underlying service to validate it.
+
+# Advanced Usage
+
+If you need more control over the HTTP producer, you should use the
+`HttpComponent`, where you can set various classes to give you custom
+behavior.
+
+## Setting up SSL for HTTP Client
+
+Using the JSSE Configuration Utility
+
+The HTTP component supports SSL/TLS configuration through the [Camel
+JSSE Configuration
+Utility](#manual::camel-configuration-utilities.adoc). This utility
+greatly decreases the amount of component-specific code you need to
+write and is configurable at the endpoint and component levels. The
+following examples demonstrate how to use the utility with the HTTP
+component.
+
+Programmatic configuration of the component
+
+    KeyStoreParameters ksp = new KeyStoreParameters();
+    ksp.setResource("file:/users/home/server/keystore.jks");
+    ksp.setPassword("keystorePassword");
+
+    KeyManagersParameters kmp = new KeyManagersParameters();
+    kmp.setKeyStore(ksp);
+    kmp.setKeyPassword("keyPassword");
+
+    SSLContextParameters scp = new SSLContextParameters();
+    scp.setKeyManagers(kmp);
+
+    HttpComponent httpComponent = getContext().getComponent("https", HttpComponent.class);
+    httpComponent.setSslContextParameters(scp);
+
+Spring DSL based configuration of endpoint
+
+    <camel:sslContextParameters id="sslContextParameters">
+        <camel:keyManagers keyPassword="keyPassword">
+            <camel:keyStore resource="file:/users/home/server/keystore.jks" password="keystorePassword"/>
+        </camel:keyManagers>
+    </camel:sslContextParameters>
+
+    <to uri="https://oldhost?sslContextParameters=#sslContextParameters"/>
+
+Configuring Apache HTTP Client Directly
+
+Basically, the camel-http component is built on top of [Apache
+HttpClient](https://hc.apache.org/httpcomponents-client-5.1.x/). Please
+refer to [SSL/TLS
+customization](https://hc.apache.org/httpcomponents-client-4.5.x/current/tutorial/html/connmgmt.html)
+(although the link refers to an article about version 4, it is still
+largely relevant, and there is no equivalent for version 5) for details,
+or have a look into the
+`org.apache.camel.component.http.HttpsServerTestSupport` unit test base
+class.
+You can also implement a custom
+`org.apache.camel.component.http.HttpClientConfigurer` to do some
+configuration on the http client if you need full control of it.
+
+However, if you *just* want to specify the keystore and truststore, you
+can do this with Apache HTTP `HttpClientConfigurer`, for example:
+
+    KeyStore keystore = ...;
+    KeyStore truststore = ...;
+
+    SchemeRegistry registry = new SchemeRegistry();
+    registry.register(new Scheme("https", 443, new SSLSocketFactory(keystore, "mypassword", truststore)));
+
+And then you need to create a class that implements
+`HttpClientConfigurer`, and registers the https protocol providing a
+keystore or truststore per the example above.
Then, from your Camel
+route builder class, you can hook it up like so:
+
+    HttpComponent httpComponent = getContext().getComponent("http", HttpComponent.class);
+    httpComponent.setHttpClientConfigurer(new MyHttpClientConfigurer());
+
+If you are doing this using the Spring DSL, you can specify your
+`HttpClientConfigurer` using the URI. For example:
+
+    <bean id="myHttpClientConfigurer" class="my.https.HttpClientConfigurer"/>
+
+    <to uri="https://myhostname.com:443/myURL?httpClientConfigurer=#myHttpClientConfigurer"/>
+
+As long as you implement the `HttpClientConfigurer` and configure your
+keystore and truststore as described above, it will work fine.
+
+Using HTTPS to authenticate gotchas
+
+An end user reported that he had a problem with authenticating with
+HTTPS. The problem was eventually resolved by providing a custom
+configured `org.apache.hc.core5.http.protocol.HttpContext`:
+
+1. Create a (Spring) factory for `HttpContext`s:
+
+    public class HttpContextFactory {
+
+        private String httpHost = "localhost";
+        private int httpPort = 9001;
+        private String user = "some-user";
+        private String password = "my-secret";
+
+        private HttpClientContext context = HttpClientContext.create();
+        private BasicAuthCache authCache = new BasicAuthCache();
+        private BasicScheme basicAuth = new BasicScheme();
+
+        public HttpContext getObject() {
+            UsernamePasswordCredentials credentials = new UsernamePasswordCredentials(user, password.toCharArray());
+            BasicCredentialsProvider provider = new BasicCredentialsProvider();
+            HttpHost host = new HttpHost(httpHost, httpPort);
+            provider.setCredentials(host, credentials);
+
+            authCache.put(host, basicAuth);
+
+            context.setAuthCache(authCache);
+            context.setCredentialsProvider(provider);
+
+            return context;
+        }
+
+        // getter and setter
+    }
+
+2. Declare an `HttpContext` in the Spring application context file:
+
+    <bean id="myHttpContext" factory-bean="httpContextFactory" factory-method="getObject"/>
+
+3. Reference the context in the http URL:
+
+    <to uri="https://myhostname.com:443/myURL?httpContext=#myHttpContext"/>
+
+Using different SSLContextParameters
+
+The [HTTP](#http-component.adoc) component only supports one instance of
+`org.apache.camel.support.jsse.SSLContextParameters` per component.
If
+you need to use two or more different instances, then you need to set up
+multiple [HTTP](#http-component.adoc) components as shown below, where
+we have two components, each using their own instance of the
+`sslContextParameters` property.
+
+    <bean id="http-foo" class="org.apache.camel.component.http.HttpComponent">
+        <property name="sslContextParameters" ref="sslContextParams1"/>
+    </bean>
+
+    <bean id="http-bar" class="org.apache.camel.component.http.HttpComponent">
+        <property name="sslContextParameters" ref="sslContextParams2"/>
+    </bean>
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|skipRequestHeaders|Whether to skip mapping all the Camel headers as HTTP request headers. If there are no data from Camel headers needed to be included in the HTTP request then this can avoid parsing overhead with many object allocations for the JVM garbage collector.|false|boolean|
+|skipResponseHeaders|Whether to skip mapping all the HTTP response headers to Camel headers. If there are no data needed from HTTP headers then this can avoid parsing overhead with many object allocations for the JVM garbage collector.|false|boolean|
+|cookieStore|To use a custom org.apache.hc.client5.http.cookie.CookieStore. By default the org.apache.hc.client5.http.cookie.BasicCookieStore is used which is an in-memory only cookie store. Notice if bridgeEndpoint=true then the cookie store is forced to be a noop cookie store as cookies shouldn't be stored as we are just bridging (eg acting as a proxy).||object|
+|copyHeaders|If this option is true then IN exchange headers will be copied to OUT exchange headers according to copy strategy.
Setting this to false allows only the headers from the HTTP response to be included (not propagating IN headers).|true|boolean|
+|followRedirects|Whether the HTTP request should follow redirects. By default the HTTP request does not follow redirects.|false|boolean|
+|responsePayloadStreamingThreshold|This threshold in bytes controls whether the response payload should be stored in memory as a byte array or be streaming based. Set this to -1 to always use streaming mode.|8192|integer|
+|allowJavaSerializedObject|Whether to allow java serialization when a request uses content-type=application/x-java-serialized-object. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk.|false|boolean|
+|authCachingDisabled|Disables authentication scheme caching|false|boolean|
+|automaticRetriesDisabled|Disables automatic request recovery and re-execution|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|clientConnectionManager|To use a custom and shared HttpClientConnectionManager to manage connections.
If this has been configured then this is always used for all endpoints created by this component.||object| +|connectionsPerRoute|The maximum number of connections per route.|20|integer| +|connectionStateDisabled|Disables connection state tracking|false|boolean| +|connectionTimeToLive|The time for connection to live, the time unit is millisecond, the default value is always keep alive.||integer| +|contentCompressionDisabled|Disables automatic content decompression|false|boolean| +|cookieManagementDisabled|Disables state (cookie) management|false|boolean| +|defaultUserAgentDisabled|Disables the default user agent set by this builder if none has been provided by the user|false|boolean| +|httpBinding|To use a custom HttpBinding to control the mapping between Camel message and HttpClient.||object| +|httpClientConfigurer|To use the custom HttpClientConfigurer to perform configuration of the HttpClient that will be used.||object| +|httpConfiguration|To use the shared HttpConfiguration as base configuration.||object| +|httpContext|To use a custom org.apache.hc.core5.http.protocol.HttpContext when executing requests.||object| +|maxTotalConnections|The maximum number of connections.|200|integer| +|redirectHandlingDisabled|Disables automatic redirect handling|false|boolean| +|headerFilterStrategy|To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message.||object| +|proxyAuthDomain|Proxy authentication domain to use||string| +|proxyAuthHost|Proxy authentication host||string| +|proxyAuthMethod|Proxy authentication method to use||string| +|proxyAuthNtHost|Proxy authentication domain (workstation name) to use with NTML||string| +|proxyAuthPassword|Proxy authentication password||string| +|proxyAuthPort|Proxy authentication port||integer| +|proxyAuthScheme|Proxy authentication protocol scheme||string| +|proxyAuthUsername|Proxy authentication username||string| +|sslContextParameters|To configure security using SSLContextParameters. 
Important: Only one instance of org.apache.camel.support.jsse.SSLContextParameters is supported per HttpComponent. If you need to use 2 or more different instances, you need to define a new HttpComponent per instance you need.||object| +|useGlobalSslContextParameters|Enable usage of global SSL context parameters.|false|boolean| +|x509HostnameVerifier|To use a custom X509HostnameVerifier such as DefaultHostnameVerifier or NoopHostnameVerifier.||object| +|connectionRequestTimeout|Returns the connection lease request timeout used when requesting a connection from the connection manager. A timeout value of zero is interpreted as a disabled timeout.|3 minutes|object| +|connectTimeout|Determines the timeout until a new connection is fully established. A timeout value of zero is interpreted as an infinite timeout.|3 minutes|object| +|responseTimeout|Determines the timeout until arrival of a response from the opposite endpoint. A timeout value of zero is interpreted as an infinite timeout. Please note that response timeout may be unsupported by HTTP transports with message multiplexing.|0|object| +|soTimeout|Determines the default socket timeout value for blocking I/O operations.|3 minutes|object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|httpUri|The url of the HTTP endpoint to call.||string| +|disableStreamCache|Determines whether or not the raw input stream from Servlet is cached or not (Camel will read the stream into a in memory/overflow to file, Stream caching) cache. By default Camel will cache the Servlet input stream to support reading it multiple times to ensure it Camel can retrieve all data from the stream. However you can set this option to true when you for example need to access the raw stream, such as streaming it directly to a file or other persistent store. 
DefaultHttpBinding will copy the request input stream into a stream cache and put it into message body if this option is false to support reading the stream multiple times. If you use Servlet to bridge/proxy an endpoint then consider enabling this option to improve performance, in case you do not need to read the message payload multiple times. The http producer will by default cache the response body stream. If setting this option to true, then the producers will not cache the response body stream but use the response stream as-is as the message body.|false|boolean| +|headerFilterStrategy|To use a custom HeaderFilterStrategy to filter header to and from Camel message.||object| +|bridgeEndpoint|If the option is true, HttpProducer will ignore the Exchange.HTTP\_URI header, and use the endpoint's URI for request. You may also set the option throwExceptionOnFailure to be false to let the HttpProducer send all the fault response back.|false|boolean| +|connectionClose|Specifies whether a Connection Close header must be added to HTTP Request. By default connectionClose is false.|false|boolean| +|httpMethod|Configure the HTTP method to use. The HttpMethod header cannot override this option if set.||object| +|skipRequestHeaders|Whether to skip mapping all the Camel headers as HTTP request headers. If there are no data from Camel headers needed to be included in the HTTP request then this can avoid parsing overhead with many object allocations for the JVM garbage collector.|false|boolean| +|skipResponseHeaders|Whether to skip mapping all the HTTP response headers to Camel headers. If there are no data needed from HTTP headers then this can avoid parsing overhead with many object allocations for the JVM garbage collector.|false|boolean| +|throwExceptionOnFailure|Option to disable throwing the HttpOperationFailedException in case of failed responses from the remote server. 
This allows you to get all responses regardless of the HTTP status code.|true|boolean|
+|clearExpiredCookies|Whether to clear expired cookies before sending the HTTP request. This ensures the cookie store does not keep growing with cookies that are never removed after they expire. If the component has disabled cookie management then this option is disabled too.|true|boolean|
+|cookieHandler|Configure a cookie handler to maintain a HTTP session||object|
+|cookieStore|To use a custom CookieStore. By default the BasicCookieStore is used which is an in-memory only cookie store. Notice if bridgeEndpoint=true then the cookie store is forced to be a noop cookie store as cookies shouldn't be stored as we are just bridging (eg acting as a proxy). If a cookieHandler is set then the cookie store is also forced to be a noop cookie store as cookie handling is then performed by the cookieHandler.||object|
+|copyHeaders|If this option is true then IN exchange headers will be copied to OUT exchange headers according to copy strategy. Setting this to false allows only the headers from the HTTP response to be included (not propagating IN headers).|true|boolean|
+|customHostHeader|To use a custom host header for the producer. When not set, the option is ignored. When set, it overrides the host header derived from the url.||string|
+|deleteWithBody|Whether the HTTP DELETE should include the message body or not. By default HTTP DELETE does not include any HTTP body. However in some rare cases users may need to be able to include the message body.|false|boolean|
+|followRedirects|Whether the HTTP request should follow redirects. By default the HTTP request does not follow redirects.|false|boolean|
+|getWithBody|Whether the HTTP GET should include the message body or not. By default HTTP GET does not include any HTTP body.
However in some rare cases users may need to be able to include the message body.|false|boolean| +|ignoreResponseBody|If this option is true, The http producer won't read response body and cache the input stream|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|okStatusCodeRange|The status codes which are considered a success response. The values are inclusive. Multiple ranges can be defined, separated by comma, e.g. 200-204,209,301-304. 
Each range must be a single number or from-to with the dash included.|200-299|string|
+|preserveHostHeader|If the option is true, HttpProducer will set the Host header to the value contained in the current exchange Host header. This is useful in reverse proxy applications where you want the Host header received by the downstream server to reflect the URL called by the upstream client, and it allows applications which use the Host header to generate accurate URLs for a proxied service.|false|boolean|
+|userAgent|To set a custom HTTP User-Agent request header||string|
+|clientBuilder|Provide access to the http client request parameters used on new RequestConfig instances used by producers or consumers of this endpoint.||object|
+|clientConnectionManager|To use a custom HttpClientConnectionManager to manage connections||object|
+|connectionsPerRoute|The maximum number of connections per route.|20|integer|
+|httpClient|Sets a custom HttpClient to be used by the producer||object|
+|httpClientConfigurer|Register a custom configuration strategy for new HttpClient instances created by producers or consumers such as to configure authentication mechanisms etc.||object|
+|httpClientOptions|To configure the HttpClient using the key/values from the Map.||object|
+|httpConnectionOptions|To configure the connection and the socket using the key/values from the Map.||object|
+|httpContext|To use a custom HttpContext instance||object|
+|maxTotalConnections|The maximum number of connections.|200|integer|
+|useSystemProperties|To use System Properties as fallback for configuration|false|boolean|
+|proxyAuthDomain|Proxy authentication domain to use with NTLM||string|
+|proxyAuthHost|Proxy authentication host||string|
+|proxyAuthMethod|Proxy authentication method to use||string|
+|proxyAuthNtHost|Proxy authentication domain (workstation name) to use with NTLM||string|
+|proxyAuthPassword|Proxy authentication password||string|
+|proxyAuthPort|Proxy authentication port||integer|
+|proxyAuthScheme|Proxy authentication scheme to use||string|
+|proxyAuthUsername|Proxy authentication username||string|
+|proxyHost|Proxy hostname to use||string|
+|proxyPort|Proxy port to use||integer|
+|authDomain|Authentication domain to use with NTLM||string|
+|authenticationPreemptive|If this option is true, camel-http sends preemptive basic authentication to the server.|false|boolean|
+|authHost|Authentication host to use with NTLM||string|
+|authMethod|Authentication methods allowed to use as a comma separated list of values Basic, Digest or NTLM.||string|
+|authMethodPriority|Which authentication method to prioritize to use, either as Basic, Digest or NTLM.||string|
+|authPassword|Authentication password||string|
+|authUsername|Authentication username||string|
+|oauth2ClientId|OAuth2 client id||string|
+|oauth2ClientSecret|OAuth2 client secret||string|
+|oauth2TokenEndpoint|OAuth2 Token endpoint||string|
+|sslContextParameters|To configure security using SSLContextParameters. Important: Only one instance of org.apache.camel.util.jsse.SSLContextParameters is supported per HttpComponent. If you need to use 2 or more different instances, you need to define a new HttpComponent per instance you need.||object|
+|x509HostnameVerifier|To use a custom X509HostnameVerifier such as DefaultHostnameVerifier or NoopHostnameVerifier||object|
diff --git a/camel-hwcloud-dms.md b/camel-hwcloud-dms.md
new file mode 100644
index 0000000000000000000000000000000000000000..7e8ed14245140ffb974869555cf0390127eeb7f8
--- /dev/null
+++ b/camel-hwcloud-dms.md
@@ -0,0 +1,317 @@
+# Hwcloud-dms
+
+**Since Camel 3.12**
+
+**Only producer is supported**
+
+The Huawei Cloud Distributed Message Service (DMS) component allows you
+to integrate with
+[DMS](https://www.huaweicloud.com/intl/en-us/product/dms.html) provided
+by Huawei Cloud.
+ +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-huaweicloud-dms + x.x.x + + + +# URI Format + + hwcloud-dms:operation[?options] + +# Usage + +## Message properties evaluated by the DMS producer + + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+|Header|Type|Description|
+|---|---|---|
+|CamelHwCloudDmsOperation|String|Name of operation to invoke|
+|CamelHwCloudDmsEngine|String|The message engine. Either kafka or rabbitmq|
+|CamelHwCloudDmsInstanceId|String|Instance ID to invoke operation on|
+|CamelHwCloudDmsName|String|The name of the instance for creating and updating an instance|
+|CamelHwCloudDmsEngineVersion|String|The version of the message engine|
+|CamelHwCloudDmsSpecification|String|The baseline bandwidth of a Kafka instance|
+|CamelHwCloudDmsStorageSpace|int|The message storage space|
+|CamelHwCloudDmsPartitionNum|int|The maximum number of partitions in a Kafka instance|
+|CamelHwCloudDmsAccessUser|String|The username of a RabbitMQ instance|
+|CamelHwCloudDmsPassword|String|The password of a RabbitMQ instance|
+|CamelHwCloudDmsVpcId|String|The VPC ID|
+|CamelHwCloudDmsSecurityGroupId|String|The security group which the instance belongs to|
+|CamelHwCloudDmsSubnetId|String|The subnet ID|
+|CamelHwCloudDmsAvailableZones|List<String>|The ID of an available zone|
+|CamelHwCloudDmsProductId|String|The product ID|
+|CamelHwCloudDmsKafkaManagerUser|String|The username for logging in to the Kafka Manager|
+|CamelHwCloudDmsKafkaManagerPassword|String|The password for logging in to the Kafka Manager|
+|CamelHwCloudDmsStorageSpecCode|String|The storage I/O specification|
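For example, these exchange properties can be set in a route to override the matching endpoint options. The snippet below is only a sketch: the instance name, region, and the `myServiceKeyConfig` bean are placeholder values.

```java
from("direct:createInstance")
    // Exchange properties take precedence over the endpoint query parameters
    .setProperty("CamelHwCloudDmsOperation", constant("createInstance"))
    .setProperty("CamelHwCloudDmsEngine", constant("kafka"))
    .setProperty("CamelHwCloudDmsName", constant("my-kafka-instance"))
    .to("hwcloud-dms:createInstance?region=cn-north-4&serviceKeys=#myServiceKeyConfig");
```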

+ +If any of the above properties are set, they will override their +corresponding query parameter. + +## Message properties set by the DMS producer + + +++++ + + + + + + + + + + + + + + + + + + + +
+|Header|Type|Description|
+|---|---|---|
+|CamelHwCloudDmsInstanceDeleted|boolean|Set as true when the deleteInstance operation is successful|
+|CamelHwCloudDmsInstanceUpdated|boolean|Set as true when the updateInstance operation is successful|
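As a sketch (the instance ID and the `myServiceKeyConfig` bean are placeholders), the result property can be inspected after calling the producer:

```java
from("direct:deleteInstance")
    .to("hwcloud-dms:deleteInstance?instanceId=******&region=cn-north-4&serviceKeys=#myServiceKeyConfig")
    // The producer sets an exchange property, not a message header
    .log("Instance deleted: ${exchangeProperty.CamelHwCloudDmsInstanceDeleted}");
```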

+ +# List of Supported DMS Operations + +- createInstance + +- deleteInstance + +- listInstances + +- queryInstance + +- updateInstance + +## Create Instance + +To create an instance, you can pass the parameters through the endpoint, +the exchange properties, and the exchange body as a +CreateInstanceRequestBody object or a valid JSON String representation +of it. Refer to this for the [Kafka +parameters](https://support.huaweicloud.com/en-us/api-kafka/kafka-api-180514002.html) +and the [RabbitMQ +parameters](https://support.huaweicloud.com/en-us/api-rabbitmq/rabbitmq-api-180514002.html). +If you choose to pass these parameters through the endpoint or through +exchange properties, you can only input the mandatory parameters shown +in those links. If you would like to have access to all the parameters, +you must pass a CreateInstanceRequestBody object or a valid JSON String +representation of it through the exchange body, as shown below: + + from("direct:triggerRoute") + .setBody(new CreateInstanceRequestBody().withName("new-instance").withDescription("description").with*) // add remaining options + .to("hwcloud-dms:createInstance?region=cn-north-4&accessKey=********&secretKey=********&projectId=*******") + + from("direct:triggerRoute") + .setBody("{\"name\":\"new-instance\",\"description\":\"description\"}") // add remaining options + .to("hwcloud-dms:createInstance?region=cn-north-4&accessKey=********&secretKey=********&projectId=*******") + +## Update Instance + +To update an instance, you must pass the parameters through the exchange +body as an UpdateInstanceRequestBody or a valid JSON String +representation of it. Refer to this for the [Kafka +parameters](https://support.huaweicloud.com/en-us/api-kafka/kafka-api-180514004.html) +and the [RabbitMQ +parameters](https://support.huaweicloud.com/en-us/api-rabbitmq/rabbitmq-api-180514004.html). 
+An example of how to do this is shown below: + + from("direct:triggerRoute") + .setBody(new UpdateInstanceRequestBody().withName("new-instance").withDescription("description").with*) // add remaining options + .to("hwcloud-dms:updateInstance?instanceId=******®ion=cn-north-4&accessKey=********&secretKey=********&projectId=*******") + + from("direct:triggerRoute") + .setBody("{\"name\":\"new-instance\",\"description\":\"description\"}") // add remaining options + .to("hwcloud-dms:updateInstance?instanceId=******®ion=cn-north-4&accessKey=********&secretKey=********&projectId=*******") + +# Using ServiceKey Configuration Bean + +Access key and secret keys are required to authenticate against cloud +DMS service. You can avoid having them being exposed and scattered over +in your endpoint uri by wrapping them inside a bean of class +`org.apache.camel.component.huaweicloud.common.models.ServiceKeys`. Add +it to the registry and let Camel look it up by referring the object via +endpoint query parameter `serviceKeys`. + +Check the following code snippets: + + + + + + + from("direct:triggerRoute") + .to("hwcloud-dms:listInstances?region=cn-north-4&serviceKeys=#myServiceKeyConfig") + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|operation|Operation to be performed||string| +|accessKey|Access key for the cloud user||string| +|accessUser|The username of a RabbitMQ instance. This option is mandatory when creating a RabbitMQ instance.||string| +|availableZones|The ID of an available zone. This option is mandatory when creating an instance and it cannot be an empty array.||array| +|endpoint|DMS url. Carries higher precedence than region parameter based client initialization||string| +|engine|The message engine. Either kafka or rabbitmq. If the parameter is not specified, all instances will be queried||string| +|engineVersion|The version of the message engine. This option is mandatory when creating an instance.||string| +|ignoreSslVerification|Ignore SSL verification|false|boolean| +|instanceId|The id of the instance. This option is mandatory when deleting or querying an instance||string| +|kafkaManagerPassword|The password for logging in to the Kafka Manager. This option is mandatory when creating a Kafka instance.||string| +|kafkaManagerUser|The username for logging in to the Kafka Manager. This option is mandatory when creating a Kafka instance.||string| +|name|The name of the instance for creating and updating an instance. This option is mandatory when creating an instance||string| +|partitionNum|The maximum number of partitions in a Kafka instance. This option is mandatory when creating a Kafka instance.||integer| +|password|The password of a RabbitMQ instance. This option is mandatory when creating a RabbitMQ instance.||string| +|productId|The product ID. 
This option is mandatory when creating an instance.||string| +|projectId|Cloud project ID||string| +|proxyHost|Proxy server ip/hostname||string| +|proxyPassword|Proxy authentication password||string| +|proxyPort|Proxy server port||integer| +|proxyUser|Proxy authentication user||string| +|region|DMS service region||string| +|secretKey|Secret key for the cloud user||string| +|securityGroupId|The security group which the instance belongs to. This option is mandatory when creating an instance.||string| +|serviceKeys|Configuration object for cloud service authentication||object| +|specification|The baseline bandwidth of a Kafka instance. This option is mandatory when creating a Kafka instance.||string| +|storageSpace|The message storage space. This option is mandatory when creating an instance.||integer| +|storageSpecCode|The storage I/O specification. This option is mandatory when creating an instance.||string| +|subnetId|The subnet ID. This option is mandatory when creating an instance.||string| +|vpcId|The VPC ID. This option is mandatory when creating an instance.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-hwcloud-frs.md b/camel-hwcloud-frs.md new file mode 100644 index 0000000000000000000000000000000000000000..09ec5435ae8532802be0223f666ee0f49ce98a6b --- /dev/null +++ b/camel-hwcloud-frs.md @@ -0,0 +1,245 @@ +# Hwcloud-frs + +**Since Camel 3.15** + +**Only producer is supported** + +Huawei Cloud Face Recognition Service component allows you to integrate +with [Face Recognition +Service](https://support.huaweicloud.com/intl/en-us/productdesc-face/face_01_0001.html) +provided by Huawei Cloud. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-huaweicloud-frs + x.x.x + + + +# URI format + + hwcloud-frs:operation[?options] + +When using imageBase64 or videoBase64 option, we suggest you use +RAW(base64\_value) to avoid encoding issue. + +# Usage + +## Message properties evaluated by the Face Recognition Service producer + + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+|Header|Type|Description|
+|---|---|---|
+|CamelHwCloudFrsImageBase64|String|The Base64 character string converted from an image. This property can be used when the operation is faceDetection or faceVerification.|
+|CamelHwCloudFrsImageUrl|String|The URL of an image. This property can be used when the operation is faceDetection or faceVerification.|
+|CamelHwCloudFrsImageFilePath|String|The local file path of an image. This property can be used when the operation is faceDetection or faceVerification.|
+|CamelHwCloudFrsAnotherImageBase64|String|The Base64 character string converted from another image. This property can be used when the operation is faceVerification.|
+|CamelHwCloudFrsAnotherImageUrl|String|The URL of another image. This property can be used when the operation is faceVerification.|
+|CamelHwCloudFrsAnotherImageFilePath|String|The local file path of another image. This property can be used when the operation is faceVerification.|
+|CamelHwCloudFrsVideoBase64|String|The Base64 character string converted from a video. This property can be used when the operation is faceLiveDetection.|
+|CamelHwCloudFrsVideoUrl|String|The URL of a video. This property can be used when the operation is faceLiveDetection.|
+|CamelHwCloudFrsVideoFilePath|String|The local file path of a video. This property can be used when the operation is faceLiveDetection.|
+|CamelHwCloudFrsVideoActions|String|The action code sequence list. This property can be used when the operation is faceLiveDetection.|
+|CamelHwCloudFrsVideoActionTimes|String|The action time array. This property is used when the operation is faceLiveDetection.|
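The Base64 properties above must carry properly encoded content. As an illustrative sketch (the helper class is not part of the component), a local file can be converted for the imageBase64 or videoBase64 option like this; remember to wrap the value in RAW(...) when it is placed directly in an endpoint URI, as suggested earlier:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

public class FrsBase64 {
    // Read a local image or video file and return the Base64 string
    // expected by the imageBase64 / videoBase64 options.
    public static String toBase64(Path file) throws Exception {
        return Base64.getEncoder().encodeToString(Files.readAllBytes(file));
    }
}
```

The resulting string can then be set as the CamelHwCloudFrsImageBase64 (or CamelHwCloudFrsVideoBase64) exchange property instead of being embedded in the URI.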

+ +# List of Supported Operations + +- faceDetection - detect, locate, and analyze the face in an input + image, and output the key facial points and attributes. + +- faceVerification - compare two faces to verify whether they belong + to the same person and return the confidence level + +- faceLiveDetection - determine whether a person in a video is alive + by checking whether the person’s actions in the video are consistent + with those in the input action list + +# Inline Configuration of route + +## faceDetection + +Java DSL + + from("direct:triggerRoute") + .setProperty(FaceRecognitionProperties.FACE_IMAGE_URL, constant("https://xxxx")) + .to("hwcloud-frs:faceDetection?accessKey=*********&secretKey=********&projectId=9071a38e7f6a4ba7b7bcbeb7d4ea6efc®ion=cn-north-4") + +XML DSL + + + + + https://xxxx + + + + +## faceVerification + +Java DSL + + from("direct:triggerRoute") + .setProperty(FaceRecognitionProperties.FACE_IMAGE_BASE64, constant("/9j/4AAQSkZJRgABAQEASABIAAD/2wBDAA0JCgsKCA0LCgsODg0PEyAVExISEyccHhcgLikxMC4pLSwzOko+MzZGNywtQFdBRkxOUlNSMj5aYVpQYEpRUk...")) + .setProperty(FaceRecognitionProperties.ANOTHER_FACE_IMAGE_BASE64, constant("/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgFBgcGBQgHBgcJCAgJDBMMDAsLDBgREg4THBgdHRsYGxofIywlHyEqIRobJjQnKi4vMTIxHiU2Os...")) + .to("hwcloud-frs:faceVerification?accessKey=*********&secretKey=********&projectId=9071a38e7f6a4ba7b7bcbeb7d4ea6efc®ion=cn-north-4") + +XML DSL + + + + + /9j/4AAQSkZJRgABAQEASABIAAD/2wBDAA0JCgsKCA0LCgsODg0PEyAVExISEyccHhcgLikxMC4pLSwzOko+MzZGNywtQFdBRkxOUlNSMj5aYVpQYEpRUk... + + + /9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgFBgcGBQgHBgcJCAgJDBMMDAsLDBgREg4THBgdHRsYGxofIywlHyEqIRobJjQnKi4vMTIxHiU2Os... 
+ + + + +## faceLiveDetection + +Java DSL + + from("direct:triggerRoute") + .setProperty(FaceRecognitionProperties.FACE_VIDEO_FILE_PATH, constant("/tmp/video.mp4")) + .setProperty(FaceRecognitionProperties.FACE_VIDEO_ACTIONS, constant("1,3,2")) + .to("hwcloud-frs:faceLiveDetection?accessKey=*********&secretKey=********&projectId=9071a38e7f6a4ba7b7bcbeb7d4ea6efc®ion=cn-north-4") + +XML DSL + + + + + /tmp/video.mp4 + + + 1,3,2 + + + + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|operation|Name of Face Recognition operation to perform, including faceDetection, faceVerification and faceLiveDetection||string| +|accessKey|Access key for the cloud user||string| +|actions|This param is mandatory when the operation is faceLiveDetection, indicating the action code sequence list. Actions are separated by commas (,). 
Currently, the following actions are supported: 1: Shake the head to the left. 2: Shake the head to the right. 3: Nod the head. 4: Mouth movement.||string| +|actionTimes|This param can be used when the operation is faceLiveDetection, indicating the action time array. The length of the array is the same as the number of actions. Each item contains the start time and end time of the action in the corresponding sequence. The unit is the milliseconds from the video start time.||string| +|anotherImageBase64|This param can be used when operation is faceVerification, indicating the Base64 character string converted from the other image. It needs to be configured if imageBase64 is set. The image size cannot exceed 10 MB. The image resolution of the narrow sides must be greater than 15 pixels, and that of the wide sides cannot exceed 4096 pixels. The supported image formats include JPG, PNG, and BMP.||string| +|anotherImageFilePath|This param can be used when operation is faceVerification, indicating the local file path of the other image. It needs to be configured if imageFilePath is set. Image size cannot exceed 8 MB, and it is recommended that the image size be less than 1 MB.||string| +|anotherImageUrl|This param can be used when operation is faceVerification, indicating the URL of the other image. It needs to be configured if imageUrl is set. The options are as follows: 1.HTTP/HTTPS URLs on the public network 2.OBS URLs. To use OBS data, authorization is required, including service authorization, temporary authorization, and anonymous public authorization. For details, see Configuring the Access Permission of OBS.||string| +|endpoint|Fully qualified Face Recognition service url. Carries higher precedence than region based configuration.||string| +|imageBase64|This param can be used when operation is faceDetection or faceVerification, indicating the Base64 character string converted from an image. 
Any one of imageBase64, imageUrl and imageFilePath needs to be set, and the priority is imageBase64 imageUrl imageFilePath. The Image size cannot exceed 10 MB. The image resolution of the narrow sides must be greater than 15 pixels, and that of the wide sides cannot exceed 4096 pixels. The supported image formats include JPG, PNG, and BMP.||string| +|imageFilePath|This param can be used when operation is faceDetection or faceVerification, indicating the local image file path. Any one of imageBase64, imageUrl and imageFilePath needs to be set, and the priority is imageBase64 imageUrl imageFilePath. Image size cannot exceed 8 MB, and it is recommended that the image size be less than 1 MB.||string| +|imageUrl|This param can be used when operation is faceDetection or faceVerification, indicating the URL of an image. Any one of imageBase64, imageUrl and imageFilePath needs to be set, and the priority is imageBase64 imageUrl imageFilePath. The options are as follows: 1.HTTP/HTTPS URLs on the public network 2.OBS URLs. To use OBS data, authorization is required, including service authorization, temporary authorization, and anonymous public authorization. For details, see Configuring the Access Permission of OBS.||string| +|projectId|Cloud project ID||string| +|proxyHost|Proxy server ip/hostname||string| +|proxyPassword|Proxy authentication password||string| +|proxyPort|Proxy server port||integer| +|proxyUser|Proxy authentication user||string| +|region|Face Recognition service region. Currently only cn-north-1 and cn-north-4 are supported. This is lower precedence than endpoint based configuration.||string| +|secretKey|Secret key for the cloud user||string| +|serviceKeys|Configuration object for cloud service authentication||object| +|videoBase64|This param can be used when operation is faceLiveDetection, indicating the Base64 character string converted from a video. 
Any one of videoBase64, videoUrl and videoFilePath needs to be set, and the priority is videoBase64 videoUrl videoFilePath. Requirements are as follows: 1.The video size after Base64 encoding cannot exceed 8 MB. It is recommended that the video file be compressed to 200 KB to 2 MB on the client. 2.The video duration must be 1 to 15 seconds. 3.The recommended frame rate is 10 fps to 30 fps. 4.The encapsulation format can be MP4, AVI, FLV, WEBM, ASF, or MOV. 5.The video encoding format can be H.261, H.263, H.264, HEVC, VC-1, VP8, VP9, or WMV3.||string| +|videoFilePath|This param can be used when operation is faceLiveDetection, indicating the local video file path. Any one of videoBase64, videoUrl and videoFilePath needs to be set, and the priority is videoBase64 videoUrl videoFilePath. The video requirements are as follows: 1.The size of a video file cannot exceed 8 MB. It is recommended that the video file be compressed to 200 KB to 2 MB on the client. 2.The video duration must be 1 to 15 seconds. 3.The recommended frame rate is 10 fps to 30 fps. 4.The encapsulation format can be MP4, AVI, FLV, WEBM, ASF, or MOV. 5.The video encoding format can be H.261, H.263, H.264, HEVC, VC-1, VP8, VP9, or WMV3.||string| +|videoUrl|This param can be used when operation is faceLiveDetection, indicating the URL of a video. Any one of videoBase64, videoUrl and videoFilePath needs to be set, and the priority is videoBase64 videoUrl videoFilePath. Currently, only the URL of an OBS bucket on HUAWEI CLOUD is supported and FRS must have the permission to read data in the OBS bucket. For details about how to enable the read permission, see Service Authorization. The video requirements are as follows: 1.The video size after Base64 encoding cannot exceed 8 MB. 2.The video duration must be 1 to 15 seconds. 3.The recommended frame rate is 10 fps to 30 fps. 4.The encapsulation format can be MP4, AVI, FLV, WEBM, ASF, or MOV. 
5.The video encoding format can be H.261, H.263, H.264, HEVC, VC-1, VP8, VP9, or WMV3.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|ignoreSslVerification|Ignore SSL verification|false|boolean| diff --git a/camel-hwcloud-functiongraph.md b/camel-hwcloud-functiongraph.md new file mode 100644 index 0000000000000000000000000000000000000000..c4eb1ff2db35df146ce0f893c0319f0024eb9753 --- /dev/null +++ b/camel-hwcloud-functiongraph.md @@ -0,0 +1,156 @@ +# Hwcloud-functiongraph + +**Since Camel 3.11** + +**Only producer is supported** + +Huawei Cloud FunctionGraph component allows you to integrate with +[FunctionGraph](https://www.huaweicloud.com/intl/en-us/product/functiongraph.html) +provided by Huawei Cloud. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-huaweicloud-functiongraph + x.x.x + + + +# URI Format + + hwcloud-functiongraph:operation[?options] + +# Usage + +## Message properties evaluated by the FunctionGraph producer + + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+|Header|Type|Description|
+|---|---|---|
+|CamelHwCloudFgOperation|String|Name of operation to invoke|
+|CamelHwCloudFgFunction|String|Name of function to invoke operation on|
+|CamelHwCloudFgPackage|String|Name of the function package|
+|CamelHwCloudFgXCffLogType|String|Type of log to be returned by FunctionGraph operation|
+ +If the operation, function name, or function package are set, they will +override their corresponding query parameter. + +## Message properties set by the FunctionGraph producer + + +++++ + + + + + + + + + + + + + + +
+|Header|Type|Description|
+|---|---|---|
+|CamelHwCloudFgXCffLogs|String|Unique log returned by FunctionGraph after processing the request if CamelHwCloudFgXCffLogType is set|
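For example (a sketch: the function name, project ID, and serviceKeys bean are placeholders, and `tail` is assumed here to be a log type accepted by FunctionGraph), execution logs can be requested and printed after the invocation:

```java
from("direct:invokeWithLogs")
    // Asking for logs makes the producer populate CamelHwCloudFgXCffLogs
    .setProperty("CamelHwCloudFgXCffLogType", constant("tail"))
    .to("hwcloud-functiongraph:invokeFunction?functionName=your_function_name&projectId=*******&region=cn-north-4&serviceKeys=#myServiceKeyConfig")
    .log("Function logs: ${exchangeProperty.CamelHwCloudFgXCffLogs}");
```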

+ +# List of Supported FunctionGraph Operations + +- invokeFunction - to invoke a serverless function + +# Using ServiceKey Configuration Bean + +Access key and secret keys are required to authenticate against cloud +FunctionGraph service. You can avoid having them being exposed and +scattered over in your endpoint uri by wrapping them inside a bean of +class +`org.apache.camel.component.huaweicloud.functiongraph.models.ServiceKeys`. +Add it to the registry and let Camel look it up by referring the object +via endpoint query parameter `serviceKeys`. + +Check the following code snippets: + + + + + + + from("direct:triggerRoute") + .setProperty(FunctionGraphProperties.OPERATION, constant("invokeFunction")) + .setProperty(FunctionGraphProperties.FUNCTION_NAME ,constant("your_function_name")) + .setProperty(FunctionGraphProperties.FUNCTION_PACKAGE, constant("your_function_package")) + .to("hwcloud-functiongraph:invokeFunction?projectId=9071a38e7f6a4ba7b7bcbeb7d4ea6efc®ion=cn-north-4&serviceKeys=#myServiceKeyConfig") + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|operation|Operation to be performed||string| +|endpoint|FunctionGraph url. Carries higher precedence than region parameter based client initialization||string| +|functionName|Name of the function to invoke||string| +|functionPackage|Functions that can be logically grouped together|default|string| +|projectId|Cloud project ID||string| +|region|FunctionGraph service region. This is lower precedence than endpoint based configuration||string| +|serviceKeys|Configuration object for cloud service authentication||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|proxyHost|Proxy server ip/hostname||string| +|proxyPassword|Proxy authentication password||string| +|proxyPort|Proxy server port||integer| +|proxyUser|Proxy authentication user||string| +|accessKey|Access key for the cloud user||string| +|ignoreSslVerification|Ignore SSL verification|false|boolean| +|secretKey|Secret key for the cloud user||string| diff --git a/camel-hwcloud-iam.md b/camel-hwcloud-iam.md new file mode 100644 index 0000000000000000000000000000000000000000..3b07f9bf860901ee00ec945a3e13a4d205349b84 --- /dev/null +++ b/camel-hwcloud-iam.md @@ -0,0 +1,161 @@ +# Hwcloud-iam + +**Since Camel 3.11** + +**Only producer is supported** + +Huawei Cloud Identity and Access Management (IAM) component allows you +to integrate with +[IAM](https://www.huaweicloud.com/intl/en-us/product/iam.html) provided +by Huawei Cloud. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-huaweicloud-iam + x.x.x + + + +# URI Format + + hwcloud-iam:operation[?options] + +# Usage + +## Message properties evaluated by the IAM producer + + +++++ + + + + + + + + + + + + + + + + + + + + + + + + +
+|Header|Type|Description|
+|---|---|---|
+|CamelHwCloudIamOperation|String|Name of operation to invoke|
+|CamelHwCloudIamUserId|String|User ID to invoke operation on|
+|CamelHwCloudIamGroupId|String|Group ID to invoke operation on|
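The update operations described below take their options from the exchange body, either as a typed object or as a JSON string. As a self-contained sketch (the helper class is illustrative, not part of the component), such a JSON body can be assembled by hand:

```java
public class IamJsonBodies {
    // Build a minimal JSON body for the updateUser operation; field names
    // follow the IAM update-user API, and the values are caller-supplied.
    public static String updateUserBody(String name, String description, String email) {
        return String.format(
                "{\"name\":\"%s\",\"description\":\"%s\",\"email\":\"%s\"}",
                name, description, email);
    }
}
```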

+ +If any of the above properties are set, they will override their +corresponding query parameter. + +# List of Supported IAM Operations + +- listUsers + +- getUser - `userId` parameter is **required** + +- updateUser - `userId` parameter is **required** + +- listGroups + +- getGroupUsers - `groupId` is **required** + +- updateGroup - `groupId` is **required** + +## Passing Options Through Exchange Body + +There are many options that can be submitted to [update a +user](https://support.huaweicloud.com/en-us/api-iam/iam_08_0011.html) +(Table 4) or to [update a +group](https://support.huaweicloud.com/en-us/api-iam/iam_09_0004.html) +(Table 4). Since there are multiple user/group options, they must be +passed through the exchange body. + +For the `updateUser` operation, you can pass the user options as an +UpdateUserOption object or a Json string: + + from("direct:triggerRoute") + .setBody(new UpdateUserOption().withName("user").withDescription("employee").withEmail("user@email.com")) + .to("hwcloud-iam:updateUser?userId=********®ion=cn-north-4&accessKey=********&secretKey=********") + + from("direct:triggerRoute") + .setBody("{\"name\":\"user\",\"description\":\"employee\",\"email\":\"user@email.com\"}") + .to("hwcloud-iam:updateUser?userId=********®ion=cn-north-4&accessKey=********&secretKey=********") + +For the `updateGroup` operation, you can pass the group options as a +KeystoneUpdateGroupOption object or a Json string: + + from("direct:triggerRoute") + .setBody(new KeystoneUpdateGroupOption().withName("group").withDescription("employees").withDomainId("1234")) + .to("hwcloud-iam:updateUser?groupId=********®ion=cn-north-4&accessKey=********&secretKey=********") + + from("direct:triggerRoute") + .setBody("{\"name\":\"group\",\"description\":\"employees\",\"domain_id\":\"1234\"}") + .to("hwcloud-iam:updateUser?groupId=********®ion=cn-north-4&accessKey=********&secretKey=********") + +# Using ServiceKey Configuration Bean + +Access key and secret keys are required 
to authenticate against cloud
+IAM service. You can avoid having them being exposed and scattered over
+in your endpoint uri by wrapping them inside a bean of class
+`org.apache.camel.component.huaweicloud.iam.models.ServiceKeys`. Add it
+to the registry and let Camel look it up by referring the object via
+endpoint query parameter `serviceKeys`.
+
+Check the following code snippets:
+
+    from("direct:triggerRoute")
+    .setProperty(IAMPropeties.OPERATION, constant("listUsers"))
+    .setProperty(IAMPropeties.USER_ID, constant("your_user_id"))
+    .setProperty(IAMPropeties.GROUP_ID, constant("your_group_id"))
+    .to("hwcloud-iam:listUsers?region=cn-north-4&serviceKeys=#myServiceKeyConfig")
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|operation|Operation to be performed||string| +|accessKey|Access key for the cloud user||string| +|groupId|Group ID to perform operation with||string| +|ignoreSslVerification|Ignore SSL verification|false|boolean| +|proxyHost|Proxy server ip/hostname||string| +|proxyPassword|Proxy authentication password||string| +|proxyPort|Proxy server port||integer| +|proxyUser|Proxy authentication user||string| +|region|IAM service region||string| +|secretKey|Secret key for the cloud user||string| +|serviceKeys|Configuration object for cloud service authentication||object| +|userId|User ID to perform operation with||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-hwcloud-imagerecognition.md b/camel-hwcloud-imagerecognition.md new file mode 100644 index 0000000000000000000000000000000000000000..5b4b2275ad2c7ec7ec19849c2dd21f982bf40f74 --- /dev/null +++ b/camel-hwcloud-imagerecognition.md @@ -0,0 +1,176 @@ +# Hwcloud-imagerecognition + +**Since Camel 3.12** + +**Only producer is supported** + +Huawei Cloud Image Recognition component allows you to integrate with +[Image +Recognition](https://www.huaweicloud.com/intl/en-us/product/image.html) +provided by Huawei Cloud. 
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-huaweicloud-imagerecognition</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    hwcloud-imagerecognition:operation[?options]
+
+When using the imageContent option, we suggest you use
+RAW(image\_base64\_value) to avoid encoding issues.
+
+# Usage
+
+## Message properties evaluated by the Image Recognition producer
+
+|Header|Type|Description|
+|---|---|---|
+|CamelHwCloudImageContent|String|The Base64 character string converted from the image|
+|CamelHwCloudImageUrl|String|The URL of an image|
+|CamelHwCloudImageTagLimit|Integer|The maximum number of the returned tags when the operation is tagRecognition|
+|CamelHwCloudImageTagLanguage|String|The language of the returned tags when the operation is tagRecognition|
+|CamelHwCloudImageThreshold|Integer|The threshold of confidence|
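As the URI-format note above suggests, imageContent is a Base64 string that is best wrapped in RAW() so the value is not re-interpreted inside the endpoint URI. A minimal JDK-only sketch (the byte array stands in for real image bytes; this is illustrative, not the component's code):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch: produce a RAW(<base64>) value suitable for the imageContent option.
public class ImageContentExample {
    static String toRawBase64(byte[] imageBytes) {
        String encoded = Base64.getEncoder().encodeToString(imageBytes);
        return "RAW(" + encoded + ")";
    }

    public static void main(String[] args) {
        byte[] fakeImage = "not-a-real-image".getBytes(StandardCharsets.UTF_8);
        System.out.println(toRawBase64(fakeImage));
    }
}
```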

+ +# List of Supported Image Recognition Operations + +- celebrityRecognition - to analyze and identify the political + figures, stars and online celebrities contained in the picture, and + return the person information and face coordinates + +- tagRecognition - to recognize hundreds of scenes and thousands of + objects and their properties in natural images + +# Inline Configuration of route + +## celebrityRecognition + +Java DSL + + from("direct:triggerRoute") + .setProperty(ImageRecognitionProperties.IMAGE_URL, constant("https://xxxx")) + .setProperty(ImageRecognitionProperties.THRESHOLD,constant(0.5)) + .to("hwcloud-imagerecognition:celebrityRecognition?accessKey=*********&secretKey=********&projectId=9071a38e7f6a4ba7b7bcbeb7d4ea6efc®ion=cn-north-4") + +XML DSL + + + + + https://xxxx + + + 0.5 + + + + +## tagRecognition + +Java DSL + + from("direct:triggerRoute") + .setProperty(ImageRecognitionProperties.IMAGE_CONTENT, constant("/9j/4AAQSkZJRgABAQEASABIAAD/2wBDAA0JCgsKCA0LCgsODg0PEyAVExISEyccHhcgLikxMC4pLSwzOko+MzZGNywtQFdBRkxOUlNSMj5aYVpQYEpRUk//...")) + .setProperty(ImageRecognitionProperties.THRESHOLD,constant(60)) + .setProperty(ImageRecognitionProperties.TAG_LANGUAGE,constant("en")) + .setProperty(ImageRecognitionProperties.TAG_LIMIT,constant(50)) + .to("hwcloud-imagerecognition:tagRecognition?accessKey=*********&secretKey=********&projectId=9071a38e7f6a4ba7b7bcbeb7d4ea6efc®ion=cn-north-4") + +XML DSL + + + + + /9j/4AAQSkZJRgABAQEASABIAAD/2wBDAA0JCgsKCA0LCgsODg0PEyAVExISEyccHhcgLikxMC4pLSwzOko+MzZGNywtQFdBRkxOUlNSMj5aYVpQYEpRUk//... + + + 60 + + + en + + + 50 + + + + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|operation|Name of Image Recognition operation to perform, including celebrityRecognition and tagRecognition||string| +|accessKey|Access key for the cloud user||string| +|endpoint|Fully qualified Image Recognition service url. Carries higher precedence than region based configuration.||string| +|imageContent|Indicates the Base64 character string converted from the image. The size cannot exceed 10 MB. The image resolution of the narrow sides must be greater than 15 pixels, and that of the wide sides cannot exceed 4096 pixels.The supported image formats include JPG, PNG, and BMP. Configure either this parameter or imageUrl, and this one carries higher precedence than imageUrl.||string| +|imageUrl|Indicates the URL of an image. The options are as follows: HTTP/HTTPS URLs on the public network OBS URLs. To use OBS data, authorization is required, including service authorization, temporary authorization, and anonymous public authorization. For details, see Configuring the Access Permission of OBS. 
Configure either this parameter or imageContent, and this one carries lower precedence than imageContent.||string| +|projectId|Cloud project ID||string| +|proxyHost|Proxy server ip/hostname||string| +|proxyPassword|Proxy authentication password||string| +|proxyPort|Proxy server port||integer| +|proxyUser|Proxy authentication user||string| +|region|Image Recognition service region. Currently only cn-north-1 and cn-north-4 are supported. This is lower precedence than endpoint based configuration.||string| +|secretKey|Secret key for the cloud user||string| +|serviceKeys|Configuration object for cloud service authentication||object| +|tagLanguage|Indicates the language of the returned tags when the operation is tagRecognition, including zh and en.|zh|string| +|tagLimit|Indicates the maximum number of the returned tags when the operation is tagRecognition.|50|integer| +|threshold|Indicates the threshold of confidence. When the operation is tagRecognition, this parameter ranges from 0 to 100. Tags whose confidence score is lower than the threshold will not be returned. The default value is 60. When the operation is celebrityRecognition, this parameter ranges from 0 to 1. Labels whose confidence score is lower than the threshold will not be returned. The default value is 0.48.||number| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|ignoreSslVerification|Ignore SSL verification|false|boolean| diff --git a/camel-hwcloud-obs.md b/camel-hwcloud-obs.md new file mode 100644 index 0000000000000000000000000000000000000000..686cc619b6c3862bfee25d6a3469073d864822af --- /dev/null +++ b/camel-hwcloud-obs.md @@ -0,0 +1,243 @@ +# Hwcloud-obs + +**Since Camel 3.12** + +**Both producer and consumer are supported** + +Huawei Cloud Object Storage Service (OBS) component allows you to +integrate with +[OBS](https://www.huaweicloud.com/intl/en-us/product/obs.html) provided +by Huawei Cloud. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-huaweicloud-obs + x.x.x + + + +# URI Format + + hwcloud-obs:operation[?options] + +# Usage + +## Message properties evaluated by the OBS producer + + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+|Header|Type|Description|
+|---|---|---|
+|CamelHwCloudObsOperation|String|Name of operation to invoke|
+|CamelHwCloudObsBucketName|String|Bucket name to invoke operation on|
+|CamelHwCloudObsBucketLocation|String|Bucket location when creating a new bucket|
+|CamelHwCloudObsObjectName|String|Name of the object to be used in operation. You can also configure the name of the object using this property while performing putObject operation|

+ +If any of the above properties are set, they will override their +corresponding query parameter. + +## Message properties set by the OBS producer + + +++++ + + + + + + + + + + + + + + +
+|Header|Type|Description|
+|---|---|---|
+|CamelHwCloudObsBucketExists|boolean|Return value when invoking the checkBucketExists operation|
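The override rule noted earlier (a message property, when set, wins over the corresponding endpoint query parameter) amounts to a null check. A sketch with hypothetical values, not the component's actual resolution code:

```java
// Sketch of the documented precedence: exchange property over query parameter.
public class ObsOptionPrecedence {
    static String resolve(String exchangeProperty, String queryParameter) {
        // A property set on the exchange overrides the endpoint URI option.
        return exchangeProperty != null ? exchangeProperty : queryParameter;
    }

    public static void main(String[] args) {
        System.out.println(resolve("bucket-from-property", "bucket-from-uri"));
        System.out.println(resolve(null, "bucket-from-uri"));
    }
}
```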

+ +# List of Supported OBS Operations + +- listBuckets + +- createBucket - `bucketName` parameter is **required**, + `bucketLocation` parameter is optional + +- deleteBucket - `bucketName` parameter is **required** + +- checkBucketExists - `bucketName` parameter is **required** + +- getBucketMetadata - `bucketName` parameter is **required** + +- listObjects - `bucketName` parameter is **required** + +- getObject - `bucketName` and `objectName` parameters are + **required** + +- putObject - `bucketName` parameter is **required**. If exchange body + contains File, then file name is used as default object name unless + over-ridden via exchange property CamelHwCloudObsObjectName + +## Passing Options Through Exchange Body + +There are many options that can be submitted to the `createBucket` and +`listObjects` operations, so they can be passed through the exchange +body. + +If you would like to configure all the +[parameters](https://support.huaweicloud.com/intl/en-us/api-obs/obs_04_0021.html) +when creating a bucket, you can pass a +[CreateBucketRequest](https://obssdk-intl.obs.ap-southeast-1.myhuaweicloud.com/apidoc/en/java/com/obs/services/model/CreateBucketRequest.html) +object or a Json string into the exchange body. If the exchange body is +empty, a new bucket will be created using the bucketName and +bucketLocation (if provided) passed through the endpoint uri. 
+
+    from("direct:triggerRoute")
+        .setBody(new CreateBucketRequest("Bucket name", "Bucket location"))
+        .to("hwcloud-obs:createBucket?region=cn-north-4&accessKey=********&secretKey=********")
+
+    from("direct:triggerRoute")
+        .setBody("{\"bucketName\":\"Bucket name\",\"location\":\"Bucket location\"}")
+        .to("hwcloud-obs:createBucket?region=cn-north-4&accessKey=********&secretKey=********")
+
+If you would like to configure all the
+[parameters](https://support.huaweicloud.com/intl/en-us/api-obs/obs_04_0022.html)
+when listing objects, you can pass a
+[ListObjectsRequest](https://obssdk-intl.obs.ap-southeast-1.myhuaweicloud.com/apidoc/en/java/com/obs/services/model/ListObjectsRequest.html)
+object or a JSON string into the exchange body. If the exchange body is
+empty, objects will be listed based on the bucketName passed through the
+endpoint URI.
+
+    from("direct:triggerRoute")
+        .setBody(new ListObjectsRequest("Bucket name", 1000))
+        .to("hwcloud-obs:listObjects?region=cn-north-4&accessKey=********&secretKey=********")
+
+    from("direct:triggerRoute")
+        .setBody("{\"bucketName\":\"Bucket name\",\"maxKeys\":1000}")
+        .to("hwcloud-obs:listObjects?region=cn-north-4&accessKey=********&secretKey=********")
+
+# Using ServiceKey Configuration Bean
+
+Access key and secret keys are required to authenticate against the OBS
+cloud. You can avoid exposing and scattering them across your endpoint
+URIs by wrapping them inside a bean of class
+`org.apache.camel.component.huaweicloud.obs.models.ServiceKeys`. Add it
+to the registry and let Camel look it up by referencing the object via
+the endpoint query parameter `serviceKeys`.
+
+Check the following code snippet:
+
+    from("direct:triggerRoute")
+        .setProperty(OBSProperties.OPERATION, constant("createBucket"))
+        .setProperty(OBSProperties.BUCKET_NAME, constant("your_bucket_name"))
+        .setProperty(OBSProperties.BUCKET_LOCATION, constant("your_bucket_location"))
+        .to("hwcloud-obs:createBucket?region=cn-north-4&serviceKeys=#myServiceKeyConfig")
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|operation|Operation to be performed||string| +|bucketName|Name of bucket to perform operation on||string| +|endpoint|OBS url. Carries higher precedence than region parameter based client initialization||string| +|objectName|Name of object to perform operation with||string| +|region|OBS service region. This is lower precedence than endpoint based configuration||string| +|deleteAfterRead|Determines if objects should be deleted after it has been retrieved|false|boolean| +|delimiter|The character used for grouping object names||string| +|destinationBucket|Name of destination bucket where objects will be moved when moveAfterRead is set to true||string| +|fileName|Get the object from the bucket with the given file name||string| +|includeFolders|If true, objects in folders will be consumed. Otherwise, they will be ignored and no Exchanges will be created for them|true|boolean| +|maxMessagesPerPoll|The maximum number of messages to poll at each polling|10|integer| +|moveAfterRead|Determines whether objects should be moved to a different bucket after they have been retrieved. 
The destinationBucket option must also be set for this option to work.|false|boolean| +|prefix|The object name prefix used for filtering objects to be listed||string| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|bucketLocation|Location of bucket when creating a new bucket||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|proxyHost|Proxy server ip/hostname||string| +|proxyPassword|Proxy authentication password||string| +|proxyPort|Proxy server port||integer| +|proxyUser|Proxy authentication user||string| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. 
This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| +|accessKey|Access key for the cloud user||string| +|ignoreSslVerification|Ignore SSL verification|false|boolean| +|secretKey|Secret key for the cloud user||string| +|serviceKeys|Configuration object for cloud service authentication||object| diff --git a/camel-hwcloud-smn.md b/camel-hwcloud-smn.md new file mode 100644 index 0000000000000000000000000000000000000000..4e5e8bc623c4337992eead2d03674b34589fe4fc --- /dev/null +++ b/camel-hwcloud-smn.md @@ -0,0 +1,230 @@ +# Hwcloud-smn + +**Since Camel 3.8** + +**Only producer is supported** + +Huawei Cloud Simple Message Notification (SMN) component allows you to +integrate with +[SMN](https://www.huaweicloud.com/intl/en-us/product/smn.html) provided +by Huawei Cloud. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-huaweicloud-smn + x.x.x + + + +# URI format + +To send a notification. + + hwcloud-smn:service[?options] + +# Usage + +## Message properties evaluated by the SMN producer + + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+|Header|Type|Description|
+|---|---|---|
+|CamelHwCloudSmnSubject|String|Subject tag for the outgoing notification|
+|CamelHwCloudSmnTopic|String|Smn topic into which the message is to be posted|
+|CamelHwCloudSmnMessageTtl|Integer|Validity of the posted notification message|
+|CamelHwCloudSmnTemplateTags|Map<String, String>|Contains K,V pairs of tags and values when using operation publishAsTemplatedMessage|
+|CamelHwCloudSmnTemplateName|String|Name of the template to use while using operation publishAsTemplatedMessage|
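CamelHwCloudSmnMessageTtl is a validity window in seconds (the endpoint default is 3600). Computing when a posted notification expires is plain JDK time arithmetic — a sketch for illustration, not part of the component:

```java
import java.time.Instant;

// Sketch: derive the expiry instant of a notification from its TTL in seconds.
public class SmnTtlExample {
    static String expiry(String publishedAtIso, long ttlSeconds) {
        return Instant.parse(publishedAtIso).plusSeconds(ttlSeconds).toString();
    }

    public static void main(String[] args) {
        // With the endpoint default TTL of 3600 seconds (one hour):
        System.out.println(expiry("2024-01-01T00:00:00Z", 3600));
    }
}
```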

+ +## Message properties set by the SMN producer + + +++++ + + + + + + + + + + + + + + + + + + + +
+|Header|Type|Description|
+|---|---|---|
+|CamelHwCloudSmnMesssageId|String|Unique message id returned by Simple Message Notification server after processing the request|
+|CamelHwCloudSmnRequestId|String|Unique request id returned by Simple Message Notification server after processing the request|
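The publishAsTemplatedMessage Java DSL example further below passes a `tags` map via constant(tags) without showing its construction. It is a plain Map<String, String> whose keys must match the placeholders of the SMN template (the tag names here are made up):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: build the template-tag map passed as CamelHwCloudSmnTemplateTags.
public class SmnTemplateTagsExample {
    static Map<String, String> buildTags() {
        Map<String, String> tags = new LinkedHashMap<>();
        tags.put("name", "Alice");      // hypothetical template placeholder
        tags.put("orderNo", "12345");   // hypothetical template placeholder
        return tags;
    }

    public static void main(String[] args) {
        System.out.println(buildTags());
    }
}
```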

+ +# Supported list of smn services and corresponding operations + + ++++ + + + + + + + + + + + + +
+|Service|Operations|
+|---|---|
+|publishMessageService|publishAsTextMessage, publishAsTemplatedMessage|
+
+# Inline Configuration of route
+
+## publishAsTextMessage
+
+Java DSL
+
+    from("direct:triggerRoute")
+        .setProperty(SmnProperties.NOTIFICATION_SUBJECT, constant("Notification Subject"))
+        .setProperty(SmnProperties.NOTIFICATION_TOPIC_NAME, constant(testConfiguration.getProperty("topic")))
+        .setProperty(SmnProperties.NOTIFICATION_TTL, constant(60))
+        .to("hwcloud-smn:publishMessageService?operation=publishAsTextMessage&accessKey=*********&secretKey=********&projectId=9071a38e7f6a4ba7b7bcbeb7d4ea6efc&region=cn-north-4")
+
+XML DSL
+
+    <route>
+        <from uri="direct:triggerRoute"/>
+        <setProperty name="CamelHwCloudSmnSubject">
+            <constant>this is my subjectline</constant>
+        </setProperty>
+        <setProperty name="CamelHwCloudSmnTopic">
+            <constant>reji-test</constant>
+        </setProperty>
+        <setProperty name="CamelHwCloudSmnMessageTtl">
+            <constant>60</constant>
+        </setProperty>
+        <to uri="hwcloud-smn:publishMessageService?operation=publishAsTextMessage&amp;accessKey=*********&amp;secretKey=********&amp;projectId=9071a38e7f6a4ba7b7bcbeb7d4ea6efc&amp;region=cn-north-4"/>
+    </route>
+
+## publishAsTemplatedMessage
+
+Java DSL
+
+    from("direct:triggerRoute")
+        .setProperty("CamelHwCloudSmnSubject", constant("This is my subjectline"))
+        .setProperty("CamelHwCloudSmnTopic", constant("reji-test"))
+        .setProperty("CamelHwCloudSmnMessageTtl", constant(60))
+        .setProperty("CamelHwCloudSmnTemplateTags", constant(tags))
+        .setProperty("CamelHwCloudSmnTemplateName", constant("hello-template"))
+        .to("hwcloud-smn:publishMessageService?operation=publishAsTemplatedMessage&accessKey=*********&secretKey=********&projectId=9071a38e7f6a4ba7b7bcbeb7d4ea6efc&region=cn-north-4")
+
+# Using ServiceKey configuration Bean
+
+Access key and secret keys are required to authenticate against the
+cloud SMN service. You can avoid exposing and scattering them across
+your endpoint URIs by wrapping them inside a bean of class
+`org.apache.camel.component.huaweicloud.smn.models.ServiceKeys`.
+Add it to the registry and let Camel look it up by referencing the
+object via the endpoint query parameter `serviceKeys`.
Check the
+following code snippet:
+
+    from("direct:triggerRoute")
+        .setProperty(SmnProperties.NOTIFICATION_SUBJECT, constant("Notification Subject"))
+        .setProperty(SmnProperties.NOTIFICATION_TOPIC_NAME, constant(testConfiguration.getProperty("topic")))
+        .setProperty(SmnProperties.NOTIFICATION_TTL, constant(60))
+        .to("hwcloud-smn:publishMessageService?operation=publishAsTextMessage&projectId=9071a38e7f6a4ba7b7bcbeb7d4ea6efc&region=cn-north-4&serviceKeys=#myServiceKeyConfig")
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|smnService|Name of SMN service to invoke||string|
+|accessKey|Access key for the cloud user||string|
+|endpoint|Fully qualified smn service url. 
Carries higher precedence than region parameter based client initialization||string| +|ignoreSslVerification|Ignore SSL verification|false|boolean| +|messageTtl|TTL for published message|3600|integer| +|operation|Name of operation to perform||string| +|projectId|Cloud project ID||string| +|proxyHost|Proxy server ip/hostname||string| +|proxyPassword|Proxy authentication password||string| +|proxyPort|Proxy server port||integer| +|proxyUser|Proxy authentication user||string| +|region|SMN service region. This is lower precedence than endpoint based configuration||string| +|secretKey|Secret key for the cloud user||string| +|serviceKeys|Configuration object for cloud service authentication||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-iec60870-client.md b/camel-iec60870-client.md new file mode 100644 index 0000000000000000000000000000000000000000..eb3b5ac7b903529c09ddc38502c5d8dc62d6eb54 --- /dev/null +++ b/camel-iec60870-client.md @@ -0,0 +1,88 @@ +# Iec60870-client + +**Since Camel 2.20** + +**Both producer and consumer are supported** + +The IEC 60870-5-104 Client component provides access to IEC 60870 +servers using the [Eclipse NeoSCADA](http://eclipse.org/eclipsescada) +implementation. 
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-iec60870</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+The URI syntax of the endpoint is:
+
+    iec60870-client:host:port/00-01-02-03-04
+
+The information object address is encoded in the path in the syntax
+above. Please note that the full, 5-octet address format is always
+used. Unused octets have to be filled with zero.
+
+A connection instance is identified by the host and port part of the
+URI, plus all parameters in the *"id"* group. If a new connection id is
+encountered, the connection options will be evaluated and the connection
+instance is created with those options.
+
+If two URIs specify the same connection (host, port, …) but different
+connection options, then it is undefined which of those connection
+options will be used.
+
+The final connection options will be evaluated in the following order:
+
+- If present, the `connectionOptions` parameter will be used
+
+- Otherwise, the `defaultConnectionOptions` instance is copied and
+  customized in the following steps
+
+- Apply `protocolOptions` if present
+
+- Apply `dataModuleOptions` if present
+
+- Apply all explicit connection parameters (e.g. `timeZone`)
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|defaultConnectionOptions|Default connection options||object|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. 
In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|uriPath|The object information address||object| +|dataModuleOptions|Data module options||object| +|protocolOptions|Protocol options||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. 
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|acknowledgeWindow|Parameter W - Acknowledgment window.|10|integer| +|adsuAddressType|The common ASDU address size. May be either SIZE\_1 or SIZE\_2.||object| +|causeOfTransmissionType|The cause of transmission type. May be either SIZE\_1 or SIZE\_2.||object| +|informationObjectAddressType|The information address size. 
May be either SIZE\_1, SIZE\_2 or SIZE\_3.||object| +|maxUnacknowledged|Parameter K - Maximum number of un-acknowledged messages.|15|integer| +|timeout1|Timeout T1 in milliseconds.|15000|integer| +|timeout2|Timeout T2 in milliseconds.|10000|integer| +|timeout3|Timeout T3 in milliseconds.|20000|integer| +|causeSourceAddress|Whether to include the source address||integer| +|connectionTimeout|Timeout in millis to wait for client to establish a connected connection.|10000|integer| +|ignoreBackgroundScan|Whether background scan transmissions should be ignored.|true|boolean| +|ignoreDaylightSavingTime|Whether to ignore or respect DST|false|boolean| +|timeZone|The timezone to use. May be any Java time zone string|UTC|object| +|connectionId|An identifier grouping connection instances||string| diff --git a/camel-iec60870-server.md b/camel-iec60870-server.md new file mode 100644 index 0000000000000000000000000000000000000000..5a3f59f73be4d9ec821bc9bceffdd6936a326c75 --- /dev/null +++ b/camel-iec60870-server.md @@ -0,0 +1,67 @@ +# Iec60870-server + +**Since Camel 2.20** + +**Both producer and consumer are supported** + +The **IEC 60870-5-104 Server** component provides access to IEC 60870 +servers using the [Eclipse NeoSCADA](http://eclipse.org/eclipsescada) +implementation. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-iec60870 + x.x.x + + + +# URI format + +The URI syntax of the endpoint is: + + iec60870-server:host:port/00-01-02-03-04 + +The information object address is encoded in the path in the syntax +above. Please note that always the full, 5-octet address format is being +used. Unused octets have to be filled with zero. 
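For instance, a pair of routes that consume commands for one information object address and publish values on another might look like the following sketch. The host, port, and 5-octet addresses are illustrative, and the snippet assumes the component's default configuration with `camel-iec60870` on the classpath:

```java
// Sketch only: host, port and the 5-octet information object addresses
// are illustrative values, not defaults of the component.
from("iec60870-server:0.0.0.0:2404/00-00-00-00-01")
    .log("Received command: ${body}");

from("timer:measurement?period=10000")
    .setBody(constant(42.0f))
    .to("iec60870-server:0.0.0.0:2404/00-00-00-00-02");
```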
+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|defaultConnectionOptions|Default connection options||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|uriPath|The object information address||object| +|dataModuleOptions|Data module options||object| +|filterNonExecute|Filter out all requests which don't have the execute bit set|true|boolean| +|protocolOptions|Protocol options||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|acknowledgeWindow|Parameter W - Acknowledgment window.|10|integer| +|adsuAddressType|The common ASDU address size. May be either SIZE\_1 or SIZE\_2.||object| +|causeOfTransmissionType|The cause of transmission type. May be either SIZE\_1 or SIZE\_2.||object| +|informationObjectAddressType|The information address size. May be either SIZE\_1, SIZE\_2 or SIZE\_3.||object| +|maxUnacknowledged|Parameter K - Maximum number of un-acknowledged messages.|15|integer| +|timeout1|Timeout T1 in milliseconds.|15000|integer| +|timeout2|Timeout T2 in milliseconds.|10000|integer| +|timeout3|Timeout T3 in milliseconds.|20000|integer| +|causeSourceAddress|Whether to include the source address||integer| +|connectionTimeout|Timeout in millis to wait for client to establish a connected connection.|10000|integer| +|ignoreBackgroundScan|Whether background scan transmissions should be ignored.|true|boolean| +|ignoreDaylightSavingTime|Whether to ignore or respect DST|false|boolean| +|timeZone|The timezone to use. May be any Java time zone string|UTC|object| +|connectionId|An identifier grouping connection instances||string| diff --git a/camel-ignite-cache.md b/camel-ignite-cache.md new file mode 100644 index 0000000000000000000000000000000000000000..dd8c42cbb059d8664461f40b97d3b11658a46ac2 --- /dev/null +++ b/camel-ignite-cache.md @@ -0,0 +1,54 @@ +# Ignite-cache + +**Since Camel 2.17** + +**Both producer and consumer are supported** + +The Ignite Cache endpoint is one of camel-ignite endpoints that allow +you to interact with an [Ignite +Cache](https://apacheignite.readme.io/docs/data-grid). 
This offers both +a Producer (to invoke cache operations on an Ignite cache) and a +Consumer (to consume changes from a continuous query). + +The cache value is always the body of the message, whereas the cache key +is always stored in the `IgniteConstants.IGNITE_CACHE_KEY` message +header. + +Even if you configure a fixed operation in the endpoint URI, you can +vary it per-exchange by setting the +`IgniteConstants.IGNITE_CACHE_OPERATION` message header. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|configurationResource|The resource from where to load the configuration. It can be a: URL, String or InputStream type.||object| +|ignite|To use an existing Ignite instance.||object| +|igniteConfiguration|Allows the user to set a programmatic ignite configuration.||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|cacheName|The cache name.||string| +|propagateIncomingBodyIfNoReturnValue|Sets whether to propagate the incoming body if the return type of the underlying Ignite operation is void.|true|boolean| +|treatCollectionsAsCacheObjects|Sets whether to treat Collections as cache objects or as Collections of items to insert/update/compute, etc.|false|boolean| +|autoUnsubscribe|Whether auto unsubscribe is enabled in the Continuous Query Consumer. Default value notice: ContinuousQuery.DFLT\_AUTO\_UNSUBSCRIBE|true|boolean| +|fireExistingQueryResults|Whether to process existing results that match the query. Used on initialization of the Continuous Query Consumer.|false|boolean| +|oneExchangePerUpdate|Whether to pack each update in an individual Exchange, even if multiple updates are received in one batch. Only used by the Continuous Query Consumer.|true|boolean| +|pageSize|The page size. Only used by the Continuous Query Consumer. 
Default value notice: ContinuousQuery.DFLT\_PAGE\_SIZE|1|integer| +|query|The Query to execute, only needed for operations that require it, and for the Continuous Query Consumer.||object| +|remoteFilter|The remote filter, only used by the Continuous Query Consumer.||object| +|timeInterval|The time interval for the Continuous Query Consumer. Default value notice: ContinuousQuery.DFLT\_TIME\_INTERVAL|0|integer| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|cachePeekMode|The CachePeekMode, only needed for operations that require it (IgniteCacheOperation#SIZE).|ALL|object| +|failIfInexistentCache|Whether to fail the initialization if the cache doesn't exist.|false|boolean| +|operation|The cache operation to invoke. 
Possible values: GET, PUT, REMOVE, SIZE, REBALANCE, QUERY, CLEAR.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-ignite-compute.md b/camel-ignite-compute.md new file mode 100644 index 0000000000000000000000000000000000000000..a477a89323d4fcd597805057cf69985684c7709d --- /dev/null +++ b/camel-ignite-compute.md @@ -0,0 +1,97 @@ +# Ignite-compute + +**Since Camel 2.17** + +**Only producer is supported** + +The Ignite Compute endpoint is one of camel-ignite endpoints which +allows you to run [compute +operations](https://apacheignite.readme.io/docs/compute-grid) on the +cluster by passing in an IgniteCallable, an IgniteRunnable, an +IgniteClosure, or collections of them, along with their parameters if +necessary. + +The host part of the endpoint URI is a symbolic endpoint ID, it is not +used for any purposes. + +The endpoint tries to run the object passed in the body of the IN +message as the compute job. It expects different payload types depending +on the execution type. + +# Expected payload types + +Each operation expects the indicated types: + + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|Operation|Expected payloads|
|---|---|
|CALL|Collection of IgniteCallable, or a single IgniteCallable.|
|BROADCAST|IgniteCallable, IgniteRunnable, IgniteClosure.|
|APPLY|IgniteClosure.|
|EXECUTE|ComputeTask, Class<? extends ComputeTask> or an object representing parameters if the taskName option is not null.|
|RUN|A Collection of IgniteRunnables, or a single IgniteRunnable.|
|AFFINITY\_CALL|IgniteCallable.|
|AFFINITY\_RUN|IgniteRunnable.|
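As a sketch of the CALL case, a route can hand a single IgniteCallable to the endpoint in the message body. The endpoint ID `compute1` is an arbitrary label (it is not interpreted), and the callable itself is illustrative; assumes `camel-ignite` on the classpath:

```java
// Sketch only: "compute1" is an arbitrary endpoint ID and the callable
// body is illustrative. The CALL execution type expects an IgniteCallable
// (or a Collection of them) as the message body.
from("direct:invoke")
    .setBody(constant((IgniteCallable<String>) () -> "hello from the grid"))
    .to("ignite-compute:compute1?executionType=CALL");
```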
+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|configurationResource|The resource from where to load the configuration. It can be a: URL, String or InputStream type.||object| +|ignite|To use an existing Ignite instance.||object| +|igniteConfiguration|Allows the user to set a programmatic ignite configuration.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|endpointId|The endpoint ID (not used).||string| +|clusterGroupExpression|An expression that returns the Cluster Group for the IgniteCompute instance.||object| +|computeName|The name of the compute job, which will be set via IgniteCompute#withName(String).||string| +|executionType|The compute operation to perform. Possible values: CALL, BROADCAST, APPLY, EXECUTE, RUN, AFFINITY\_CALL, AFFINITY\_RUN. 
The component expects different payload types depending on the operation.||object| +|propagateIncomingBodyIfNoReturnValue|Sets whether to propagate the incoming body if the return type of the underlying Ignite operation is void.|true|boolean| +|taskName|The task name, only applicable if using the IgniteComputeExecutionType#EXECUTE execution type.||string| +|timeoutMillis|The timeout interval for triggered jobs, in milliseconds, which will be set via IgniteCompute#withTimeout(long).||integer| +|treatCollectionsAsCacheObjects|Sets whether to treat Collections as cache objects or as Collections of items to insert/update/compute, etc.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-ignite-events.md b/camel-ignite-events.md new file mode 100644 index 0000000000000000000000000000000000000000..85c1c74979fa087cf115d4ff43733faf1a9d7bf9 --- /dev/null +++ b/camel-ignite-events.md @@ -0,0 +1,37 @@ +# Ignite-events + +**Since Camel 2.17** + +**Only consumer is supported** + +The Ignite Events endpoint is one of camel-ignite endpoints which allows +you to [receive events](https://apacheignite.readme.io/docs/events) from +the Ignite cluster by creating a local event listener. + +The Exchanges created by this consumer put the received Event object +into the body of the *IN* message. 
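A minimal consumer sketch follows. The endpoint ID `events1` is an arbitrary label and the event filter is illustrative; when the `events` option is omitted, the consumer subscribes to EVTS\_ALL:

```java
// Sketch only: "events1" is an arbitrary endpoint ID; the event filter
// is illustrative. The received Event object becomes the message body.
from("ignite-events:events1?events=EVT_CACHE_ENTRY_CREATED,EVT_CACHE_OBJECT_REMOVED")
    .log("Ignite event: ${body}");
```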
+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|configurationResource|The resource from where to load the configuration. It can be a: URL, String or InputStream type.||object| +|ignite|To use an existing Ignite instance.||object| +|igniteConfiguration|Allows the user to set a programmatic ignite configuration.||object| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|endpointId|The endpoint ID (not used).||string| +|clusterGroupExpression|The cluster group expression.||object| +|events|The event types to subscribe to as a comma-separated string of event constants as defined in EventType. 
For example: EVT\_CACHE\_ENTRY\_CREATED,EVT\_CACHE\_OBJECT\_REMOVED,EVT\_IGFS\_DIR\_CREATED.|EVTS\_ALL|string| +|propagateIncomingBodyIfNoReturnValue|Sets whether to propagate the incoming body if the return type of the underlying Ignite operation is void.|true|boolean| +|treatCollectionsAsCacheObjects|Sets whether to treat Collections as cache objects or as Collections of items to insert/update/compute, etc.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| diff --git a/camel-ignite-idgen.md b/camel-ignite-idgen.md new file mode 100644 index 0000000000000000000000000000000000000000..7a034f5e4f59a09cc1a108c1b696c490b7f420ef --- /dev/null +++ b/camel-ignite-idgen.md @@ -0,0 +1,33 @@ +# Ignite-idgen + +**Since Camel 2.17** + +**Only producer is supported** + +The Ignite ID Generator endpoint is one of camel-ignite endpoints that +allow you to interact with [Ignite Atomic Sequences and ID +Generators](https://apacheignite.readme.io/docs/id-generator). + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|configurationResource|The resource from where to load the configuration. It can be a: URL, String or InputStream type.||object| +|ignite|To use an existing Ignite instance.||object| +|igniteConfiguration|Allows the user to set a programmatic ignite configuration.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|name|The sequence name.||string| +|batchSize|The batch size.||integer| +|initialValue|The initial value.|0|integer| +|operation|The operation to invoke on the Ignite ID Generator. Superseded by the IgniteConstants.IGNITE\_IDGEN\_OPERATION header in the IN message. Possible values: ADD\_AND\_GET, GET, GET\_AND\_ADD, GET\_AND\_INCREMENT, INCREMENT\_AND\_GET.||object| +|propagateIncomingBodyIfNoReturnValue|Sets whether to propagate the incoming body if the return type of the underlying Ignite operation is void.|true|boolean| +|treatCollectionsAsCacheObjects|Sets whether to treat Collections as cache objects or as Collections of items to insert/update/compute, etc.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-ignite-messaging.md b/camel-ignite-messaging.md new file mode 100644 index 0000000000000000000000000000000000000000..7c0a569c6c9ffd13a50ff9adf3696dc517324d8d --- /dev/null +++ b/camel-ignite-messaging.md @@ -0,0 +1,37 @@ +# Ignite-messaging + +**Since Camel 2.17** + +**Both producer and consumer are supported** + +The Ignite Messaging endpoint is one of camel-ignite endpoints that +allow you to send and consume messages from an [Ignite +topic](https://apacheignite.readme.io/docs/messaging). + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|configurationResource|The resource from where to load the configuration. It can be a: URL, String or InputStream type.||object| +|ignite|To use an existing Ignite instance.||object| +|igniteConfiguration|Allows the user to set a programmatic ignite configuration.||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|topic|The topic name.||string| +|propagateIncomingBodyIfNoReturnValue|Sets whether to propagate the incoming body if the return type of the underlying Ignite operation is void.|true|boolean| +|treatCollectionsAsCacheObjects|Sets whether to treat Collections as cache objects or as Collections of items to insert/update/compute, etc.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|clusterGroupExpression|The cluster group expression.||object| +|sendMode|The send mode to use. Possible values: UNORDERED, ORDERED.|UNORDERED|object| +|timeout|The timeout for the send operation when using ordered messages.||integer| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-ignite-queue.md b/camel-ignite-queue.md new file mode 100644 index 0000000000000000000000000000000000000000..e49370b605af71f813704ffcf09d0464ec9f87fa --- /dev/null +++ b/camel-ignite-queue.md @@ -0,0 +1,34 @@ +# Ignite-queue + +**Since Camel 2.17** + +**Only producer is supported** + +The Ignite Queue endpoint is one of camel-ignite endpoints that allow +you to interact with [Ignite Queue data +structures](https://apacheignite.readme.io/docs/queue-and-set). 
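For example, a producer route that adds the incoming message body to a queue might be sketched as follows. The queue name `myQueue` is illustrative, and the snippet assumes `camel-ignite` on the classpath:

```java
// Sketch only: the queue name "myQueue" is illustrative. The ADD
// operation inserts the message body into the named Ignite queue.
from("direct:enqueue")
    .to("ignite-queue:myQueue?operation=ADD");
```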
+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|configurationResource|The resource from where to load the configuration. It can be a: URL, String or InputStream type.||object| +|ignite|To use an existing Ignite instance.||object| +|igniteConfiguration|Allows the user to set a programmatic ignite configuration.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|name|The queue name.||string| +|capacity|The queue capacity. Default: non-bounded.||integer| +|configuration|The collection configuration. Default: empty configuration. You can also conveniently set inner properties by using configuration.xyz=123 options.||object| +|operation|The operation to invoke on the Ignite Queue. Superseded by the IgniteConstants.IGNITE\_QUEUE\_OPERATION header in the IN message. 
Possible values: CONTAINS, ADD, SIZE, REMOVE, ITERATOR, CLEAR, RETAIN\_ALL, ARRAY, DRAIN, ELEMENT, PEEK, OFFER, POLL, TAKE, PUT.||object| +|propagateIncomingBodyIfNoReturnValue|Sets whether to propagate the incoming body if the return type of the underlying Ignite operation is void.|true|boolean| +|timeoutMillis|The queue timeout in milliseconds. Default: no timeout.||integer| +|treatCollectionsAsCacheObjects|Sets whether to treat Collections as cache objects or as Collections of items to insert/update/compute, etc.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-ignite-set.md b/camel-ignite-set.md new file mode 100644 index 0000000000000000000000000000000000000000..12e37da4fdaee657e179f2f8a3ffe7b28e037222 --- /dev/null +++ b/camel-ignite-set.md @@ -0,0 +1,32 @@ +# Ignite-set + +**Since Camel 2.17** + +**Only producer is supported** + +The Ignite Sets endpoint is one of camel-ignite endpoints that allows +you to interact with [Ignite Set data +structures](https://apacheignite.readme.io/docs/queue-and-set). + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|configurationResource|The resource from where to load the configuration. 
It can be a: URL, String or InputStream type.||object| +|ignite|To use an existing Ignite instance.||object| +|igniteConfiguration|Allows the user to set a programmatic ignite configuration.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|name|The set name.||string| +|configuration|The collection configuration. Default: empty configuration. You can also conveniently set inner properties by using configuration.xyz=123 options.||object| +|operation|The operation to invoke on the Ignite Set. Superseded by the IgniteConstants.IGNITE\_SETS\_OPERATION header in the IN message. 
Possible values: CONTAINS, ADD, SIZE, REMOVE, ITERATOR, CLEAR, RETAIN\_ALL, ARRAY.||object|
+|propagateIncomingBodyIfNoReturnValue|Sets whether to propagate the incoming body if the return type of the underlying Ignite operation is void.|true|boolean|
+|treatCollectionsAsCacheObjects|Sets whether to treat Collections as cache objects or as Collections of items to insert/update/compute, etc.|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-influxdb.md b/camel-influxdb.md
new file mode 100644
index 0000000000000000000000000000000000000000..e0587f9838c6bceeb6f912228e4458906094e6c5
--- /dev/null
+++ b/camel-influxdb.md
@@ -0,0 +1,74 @@
+# Influxdb
+
+**Since Camel 2.18**
+
+**Only producer is supported**
+
+This component allows you to interact with
+[InfluxDB](https://influxdata.com/time-series-platform/influxdb/) v1, a
+time series database.
+
+The native body type for this component is `Point` (the native InfluxDB
+class). However, it can also accept `Map<String, Object>` as message
+body, and it will get converted to `Point.class`. Please note that the
+map must contain an element with `InfluxDbConstants.MEASUREMENT_NAME` as
+key.
+
+Additionally, you may register your own Converters from your data type to
+`Point`, or use the (un)marshalling tools provided by Camel.
+
+For InfluxDB v2, check the [InfluxDB2
+component](#influxdb2-component.adoc).
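As a sketch of the `Map` body conversion described above, a route can build the map in a processor. This is a hedged example: `InfluxDbConstants.MEASUREMENT_NAME` comes from the component, but the field names (`cpu`, `host`, `value`) and the `connectionBean` reference are illustrative only:

```java
from("direct:write")
    .process(exchange -> {
        // the map is converted to a Point by the component;
        // MEASUREMENT_NAME is the mandatory key
        Map<String, Object> body = new HashMap<>();
        body.put(InfluxDbConstants.MEASUREMENT_NAME, "cpu");
        body.put("host", "server01");
        body.put("value", 0.64);
        exchange.getMessage().setBody(body);
    })
    .to("influxdb://connectionBean?databaseName=myTimeSeriesDB");
```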
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-influxdb</artifactId>
+        <version>x.x.x</version>
+        <!-- use the same version as your Camel core version -->
+    </dependency>
+
+# URI format
+
+    influxdb://beanName?[options]
+
+The producer allows sending messages to an InfluxDB configured in the
+registry, using the native Java driver.
+
+# Example
+
+Below is an example route that stores a point in the database, taking the
+database name either from a message header or from the endpoint URI:
+
+    from("direct:start")
+        .setHeader(InfluxDbConstants.DBNAME_HEADER, constant("myTimeSeriesDB"))
+        .to("influxdb://connectionBean");
+
+    from("direct:start")
+        .to("influxdb://connectionBean?databaseName=myTimeSeriesDB");
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|influxDB|The shared Influx DB to use for all endpoints||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component.
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|connectionBean|Connection to the influx database, of class InfluxDB.class||string| +|autoCreateDatabase|Define if we want to auto create the database if it's not present|false|boolean| +|batch|Define if this operation is a batch operation or not|false|boolean| +|checkDatabaseExistence|Define if we want to check the database existence while starting the endpoint|false|boolean| +|databaseName|The name of the database where the time series will be stored||string| +|operation|Define if this operation is an insert or a query|insert|string| +|query|Define the query in case of operation query||string| +|retentionPolicy|The string that defines the retention policy to the data created by the endpoint|default|string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-influxdb2.md b/camel-influxdb2.md new file mode 100644 index 0000000000000000000000000000000000000000..9cdb07986b715d61727c2f0279194216bd3f5378 --- /dev/null +++ b/camel-influxdb2.md @@ -0,0 +1,70 @@ +# Influxdb2 + +**Since Camel 3.20** + +**Only producer is supported** + +This component allows you to interact with +[InfluxDB](https://influxdata.com/time-series-platform/influxdb/) 2.x, a +time series database. 
The native body type for this component is `Point`
+(the native InfluxDB class). However, it can also accept
+`Map<String, Object>` as message body, and it will get converted to
+`Point.class`. Please note that the map must contain an element with
+`InfluxDbConstants.MEASUREMENT_NAME` as key.
+
+Additionally, you may register your own Converters from your data type to
+`Point`, or use the (un)marshalling tools provided by Camel.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-influxdb2</artifactId>
+        <version>x.x.x</version>
+        <!-- use the same version as your Camel core version -->
+    </dependency>
+
+# URI format
+
+    influxdb2://connectionBean?[options]
+
+The producer allows sending messages to an InfluxDB configured in the
+registry, using the native Java driver.
+
+# Example
+
+Below is an example route that stores a point in the database, taking the
+organization and bucket either from the endpoint URI or from message
+headers:
+
+    from("direct:start")
+        .to("influxdb2://connectionBean?org=&bucket=");
+
+    from("direct:start")
+        .setHeader(InfluxDbConstants.ORG, "myTestOrg")
+        .setHeader(InfluxDbConstants.BUCKET, "myTestBucket")
+        .to("influxdb2://connectionBean");
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|influxDBClient|The shared Influx DB to use for all endpoints||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled.
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|connectionBean|Connection to the Influx database, of class com.influxdb.client.InfluxDBClient.class.||string|
+|autoCreateBucket|Define if we want to auto create the bucket if it's not present.|true|boolean|
+|autoCreateOrg|Define if we want to auto create the organization if it's not present.|true|boolean|
+|bucket|The name of the bucket where the time series will be stored.||string|
+|operation|Define if this operation is an insert or a ping.|INSERT|object|
+|org|The name of the organization where the time series will be stored.||string|
+|retentionPolicy|Define the retention policy to the data created by the endpoint.|default|string|
+|writePrecision|The format or precision of time series timestamps.|ms|object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-irc.md b/camel-irc.md
new file mode 100644
index 0000000000000000000000000000000000000000..94793f5f68fedfeba88eeb4a920a0a8e74fa25e4
--- /dev/null
+++ b/camel-irc.md
@@ -0,0 +1,180 @@
+# Irc
+
+**Since Camel 1.1**
+
+**Both producer and consumer are supported**
+
+The IRC component implements an
+[IRC](http://en.wikipedia.org/wiki/Internet_Relay_Chat) (Internet Relay
+Chat) transport.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-irc</artifactId>
+        <version>x.x.x</version>
+        <!-- use the same version as your Camel core version -->
+    </dependency>
+
+# SSL Support
+
+## Using the JSSE Configuration Utility
+
+The IRC component supports SSL/TLS configuration through the [Camel JSSE
+Configuration Utility](#manual::camel-configuration-utilities.adoc).
+This utility greatly decreases the amount of component-specific code you
+need to write and is configurable at the endpoint and component levels.
+The following examples demonstrate how to use the utility with the IRC
+component.
+
+Programmatic configuration of the endpoint
+
+    KeyStoreParameters ksp = new KeyStoreParameters();
+    ksp.setResource("/users/home/server/truststore.jks");
+    ksp.setPassword("keystorePassword");
+
+    TrustManagersParameters tmp = new TrustManagersParameters();
+    tmp.setKeyStore(ksp);
+
+    SSLContextParameters scp = new SSLContextParameters();
+    scp.setTrustManagers(tmp);
+
+    Registry registry = ...
+    registry.bind("sslContextParameters", scp);
+
+    ...
+
+    from(...)
+        .to("ircs://camel-prd-user@server:6669/#camel-test?nickname=camel-prd&password=password&sslContextParameters=#sslContextParameters");
+
+Spring DSL based configuration of endpoint
+
+    ...
+    <camel:sslContextParameters id="sslContextParameters">
+        <camel:trustManagers>
+            <camel:keyStore
+                resource="/users/home/server/truststore.jks"
+                password="keystorePassword"/>
+        </camel:trustManagers>
+    </camel:sslContextParameters>
+    ...
+    <to uri="ircs://camel-prd-user@server:6669/#camel-test?nickname=camel-prd&amp;password=password&amp;sslContextParameters=#sslContextParameters"/>
+    ...
+
+## Using the legacy basic configuration options
+
+You can also connect to an SSL enabled IRC server, as follows:
+
+    ircs:host[:port]/#room?username=user&password=pass
+
+By default, the IRC transport uses
+[SSLDefaultTrustManager](http://moepii.sourceforge.net/irclib/javadoc/org/schwering/irc/lib/ssl/SSLDefaultTrustManager.html).
+If you need to provide your own custom trust manager, use the
+`trustManager` parameter as follows:
+
+    ircs:host[:port]/#room?username=user&password=pass&trustManager=#referenceToMyTrustManagerBean
+
+# Using keys
+
+Some IRC rooms require you to provide a key to be able to join that
+channel. The key is just a secret word.
+
+For example, we join three channels where only channels 1 and 3 use a
+key:
+
+    irc:nick@irc.server.org?channels=#chan1,#chan2,#chan3&keys=chan1Key,,chan3key
+
+# Getting a list of channel users
+
+Using the `namesOnJoin` option one can invoke the IRC-`NAMES` command
+after the component has joined a channel. The server will reply with
+`irc.num = 353`. So to process the result the property `onReply` has to
+be `true`. Furthermore, one has to filter the `onReply` exchanges to get
+the names.
+
+For example, we want to get all exchanges that contain the usernames of
+the channel:
+
+    from("ircs:nick@myserver:1234/#mychannelname?namesOnJoin=true&onReply=true")
+        .choice()
+            .when(header("irc.messageType").isEqualToIgnoreCase("REPLY"))
+                .filter(header("irc.num").isEqualTo("353"))
+                .to("mock:result").stop();
+
+# Sending to a different channel or a person
+
+If you need to send messages to a different channel (or a person) that
+is not defined on the IRC endpooint, you can specify a different
+destination in a message header.
+
+You can specify the destination in the following header:
+
+|Header|Type|Description|
+|---|---|---|
+|irc.sendTo|String|The channel (or the person) name.|
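For example, a producer route can redirect a single message to another user by setting that header. This is a hedged sketch (the nick, server, and channel are placeholders):

```java
from("direct:announce")
    // override the endpoint's channel: deliver this message to user "bob"
    .setHeader("irc.sendTo", constant("bob"))
    .to("irc:camelbot@irc.example.org?channels=#camel-test");
```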
+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|useGlobalSslContextParameters|Enable usage of global SSL context parameters.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|hostname|Hostname for the IRC chat server||string| +|port|Port number for the IRC chat server. If no port is configured then a default port of either 6667, 6668 or 6669 is used.||integer| +|autoRejoin|Whether to auto re-join when being kicked|true|boolean| +|channels|Comma separated list of IRC channels.||string| +|commandTimeout|Delay in milliseconds before sending commands after the connection is established.|5000|integer| +|keys|Comma separated list of keys for channels.||string| +|namesOnJoin|Sends NAMES command to channel after joining it. onReply has to be true in order to process the result which will have the header value irc.num = '353'.|false|boolean| +|nickname|The nickname used in chat.||string| +|persistent|Use persistent messages.|true|boolean| +|realname|The IRC user's actual name.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. 
Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|colors|Whether or not the server supports color codes.|true|boolean| +|onJoin|Handle user join events.|true|boolean| +|onKick|Handle kick events.|true|boolean| +|onMode|Handle mode change events.|true|boolean| +|onNick|Handle nickname change events.|true|boolean| +|onPart|Handle user part events.|true|boolean| +|onPrivmsg|Handle private message events.|true|boolean| +|onQuit|Handle user quit events.|true|boolean| +|onReply|Whether or not to handle general responses to commands or informational messages.|false|boolean| +|onTopic|Handle topic change events.|true|boolean| +|nickPassword|Your IRC server nickname password.||string| +|password|The IRC server password.||string| +|sslContextParameters|Used for configuring security using SSL. Reference to a org.apache.camel.support.jsse.SSLContextParameters in the Registry. This reference overrides any configured SSLContextParameters at the component level. 
Note that this setting overrides the trustManager option.||object|
+|trustManager|The trust manager used to verify the SSL server's certificate.||object|
+|username|The IRC server user name.||string|
diff --git a/camel-ironmq.md b/camel-ironmq.md
new file mode 100644
index 0000000000000000000000000000000000000000..0e1da3c819374beeb2f96f5ba6f8d25aef72bc65
--- /dev/null
+++ b/camel-ironmq.md
@@ -0,0 +1,101 @@
+# Ironmq
+
+**Since Camel 2.17**
+
+**Both producer and consumer are supported**
+
+The IronMQ component provides integration with
+[IronMQ](http://www.iron.io/products/mq), an elastic and durable hosted
+message queue as a service.
+
+The component uses the [IronMQ java
+client](https://github.com/iron-io/iron_mq_java) library.
+
+Running it requires an IronMQ account, a project id and a token.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-ironmq</artifactId>
+        <version>x.x.x</version>
+        <!-- use the same version as your Camel core version -->
+    </dependency>
+
+# URI format
+
+    ironmq:queueName[?options]
+
+Where `queueName` identifies the IronMQ queue you want to publish or
+consume messages from.
+
+# Message Body
+
+The body should be either a String or an array of Strings. In the latter
+case, the batch of strings will be sent to IronMQ as one request,
+creating one message per element in the array.
+
+# Consumer example
+
+Consume 50 messages per poll from the queue `testqueue` on AWS EU, and
+save the messages to files:
+
+    from("ironmq:testqueue?ironMQCloud=https://mq-aws-eu-west-1-1.iron.io&projectId=myIronMQProjectid&token=myIronMQToken&maxMessagesPerPoll=50")
+        .to("file:somefolder");
+
+# Producer example
+
+Dequeue from an ActiveMQ JMS queue and enqueue the messages on IronMQ:
+ + from("activemq:foo") + .to("ironmq:testqueue?projectId=myIronMQProjectid&token=myIronMQToken"); + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|queueName|The name of the IronMQ queue||string| +|ironMQCloud|IronMq Cloud url. Urls for public clusters: https://mq-aws-us-east-1-1.iron.io (US) and https://mq-aws-eu-west-1-1.iron.io (EU)|https://mq-aws-us-east-1-1.iron.io|string| +|preserveHeaders|Should message headers be preserved when publishing messages. This will add the Camel headers to the Iron MQ message as a json payload with a header list, and a message body. Useful when Camel is both consumer and producer.|false|boolean| +|projectId|IronMQ projectId||string| +|batchDelete|Should messages be deleted in one batch. This will limit the number of api requests since messages are deleted in one request, instead of one pr. exchange. If enabled care should be taken that the consumer is idempotent when processing exchanges.|false|boolean| +|concurrentConsumers|The number of concurrent consumers.|1|integer| +|maxMessagesPerPoll|Number of messages to poll pr. call. Maximum is 100.|1|integer| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|timeout|After timeout (in seconds), item will be placed back onto the queue.|60|integer| +|wait|Time in seconds to wait for a message to become available. This enables long polling. 
Default is 0 (does not wait), maximum is 30.||integer| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|visibilityDelay|The item will not be available on the queue until this many seconds have passed. Default is 0 seconds.||integer| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|client|Reference to a io.iron.ironmq.Client in the Registry.||object| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. 
Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| +|token|IronMQ token||string| diff --git a/camel-jcache.md b/camel-jcache.md new file mode 100644 index 0000000000000000000000000000000000000000..ec0ff4b20f85cbcc831ab95c83859bc81f418ff4 --- /dev/null +++ b/camel-jcache.md @@ -0,0 +1,359 @@ +# Jcache + +**Since Camel 2.17** + +**Both producer and consumer are supported** + +The JCache component enables you to perform caching operations using +JSR107/JCache as cache implementation. + +# URI Format + + jcache:cacheName[?options] + +# JCache Policy + +The JCachePolicy is an interceptor around a route that caches the +"result of the route" (the message body) after the route is completed. +If the next time the route is called with a "similar" Exchange, the +cached value is used on the Exchange instead of executing the route. The +policy uses the JSR107/JCache API of a cache implementation, so it’s +required to add one (e.g., Hazelcast, Ehcache) to the classpath. + +The policy takes a *key* value from the received Exchange to get or +store values in the cache. By default, the *key* is the message body. +For example, if the route - having a JCachePolicy - receives an Exchange +with a String body *"fruit"* and the body at the end of the route is +"apple", it stores a *key/value* pair *"fruit=apple"* in the cache. If +next time another Exchange arrives with a body *"fruit"*, the value +*"apple"* is taken from the cache instead of letting the route process +the Exchange. 
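The get-or-store behaviour described above can be modelled in plain Java. This is a simplified sketch only: a `HashMap` stands in for the JCache `Cache` and a `Function` stands in for the route; it is not the actual `JCachePolicy` implementation.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class Main {
    // body at the start of the "route" is the key,
    // body at the end is the stored value
    static final Map<Object, Object> CACHE = new HashMap<>();

    static Object callRoute(Object body, Function<Object, Object> route) {
        if (CACHE.containsKey(body)) {
            return CACHE.get(body); // cache hit: the route is not executed
        }
        Object result = route.apply(body);
        CACHE.put(body, result);    // store key/value after the route completes
        return result;
    }

    public static void main(String[] args) {
        int[] executions = {0};
        Function<Object, Object> route = b -> { executions[0]++; return "apple"; };

        System.out.println(callRoute("fruit", route)); // runs the route
        System.out.println(callRoute("fruit", route)); // served from the cache
        System.out.println(executions[0]);             // the route ran only once
    }
}
```

Running it prints `apple` twice but shows a single route execution, which is exactly the short-circuit the policy provides.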
So by default, the message body at the beginning of the route is the cache *key* and the body at the end is the stored *value*. It's possible to use something else as a *key* by setting a Camel Expression via `.setKeyExpression()` that will be used to determine the key.

The policy needs a JCache Cache. It can be set directly by `.setCache()`, or the policy will try to get or create the Cache based on the other parameters set.

A similar caching solution is available, for example, in Spring using the `@Cacheable` annotation.

# JCachePolicy Fields
|Name|Description|Default|Type|
|---|---|---|---|
|cache|The Cache to use to store the cached values. If this value is set, cacheManager, cacheName and cacheConfiguration are ignored.||Cache|
|cacheManager|The CacheManager to use to look up or create the Cache. Used only if cache is not set.|Try to find a CacheManager in the CamelContext registry, or call the standard JCache Caching.getCachingProvider().getCacheManager()|CacheManager|
|cacheName|Name of the cache. Get the Cache from cacheManager, or create a new one if it doesn't exist.|RouteId of the route|String|
|cacheConfiguration|JCache cache configuration to use if a new Cache is created.|New MutableConfiguration object|CacheConfiguration|
|keyExpression|An Expression to evaluate to determine the cache key.|Exchange body|Expression|
|enabled|If the policy is not enabled, no wrapper processor is added to the route. It has an impact only during startup, not during runtime. For example, it can be used to disable caching from properties.|true|boolean|

# How to determine cache to use?

# Set cache

The cache used by the policy can be set directly. This means you have to configure the cache yourself and get a JCache Cache object, but this gives the most flexibility. For example, it can be set up in the config xml of the cache provider (Hazelcast, EhCache, …) and used here. Or it's possible to use the standard Caching API as below:

    MutableConfiguration<String, Object> configuration = new MutableConfiguration<>();
    configuration.setTypes(String.class, Object.class);
    configuration.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 60)));
    CacheManager cacheManager = Caching.getCachingProvider().getCacheManager();
    Cache<String, Object> cache = cacheManager.createCache("orders", configuration);

    JCachePolicy jcachePolicy = new JCachePolicy();
    jcachePolicy.setCache(cache);

    from("direct:get-orders")
        .policy(jcachePolicy)
        .log("Getting order with id: ${body}")
        .bean(OrderService.class, "findOrderById(${body})");

# Set cacheManager

If the `cache` is not set, the policy will try to look up or create the cache automatically. If the `cacheManager` is set on the policy, it will try to get the cache with the set `cacheName` (routeId by default) from the CacheManager. If the cache does not exist, it will create a new one using the `cacheConfiguration` (new MutableConfiguration by default).

    //In a Spring environment, for example, the CacheManager may already exist as a bean
    @Autowired
    CacheManager cacheManager;
    ...

    //Cache "items" is used, or created if it does not exist
    JCachePolicy jcachePolicy = new JCachePolicy();
    jcachePolicy.setCacheManager(cacheManager);
    jcachePolicy.setCacheName("items");

# Find cacheManager

If `cacheManager` (and the `cache`) is not set, the policy will try to find a JCache CacheManager object:

- Lookup a CacheManager in Camel registry.
That falls back on JNDI or Spring context based on the environment.

- Use the standard API `Caching.getCachingProvider().getCacheManager()`

    //A Cache "getorders" will be used (or created) from the found CacheManager
    from("direct:get-orders").routeId("getorders")
        .policy(new JCachePolicy())
        .log("Getting order with id: ${body}")
        .bean(OrderService.class, "findOrderById(${body})");

# Partially wrapped route

In the examples above, the whole route was executed or skipped. A policy can be used to wrap only a segment of the route instead of all processors.

    from("direct:get-orders")
        .log("Order requested: ${body}")
        .policy(new JCachePolicy())
        .log("Getting order with id: ${body}")
        .bean(OrderService.class, "findOrderById(${body})")
        .end()
        .log("Order found: ${body}");

The `.log()` at the beginning and at the end of the route is always called, but the section between `.policy()` and `.end()` is executed based on the cache.

# KeyExpression

By default, the policy uses the received Exchange body as the *key*, so the default expression is like `simple("${body}")`. We can set a different Camel Expression as `keyExpression`, which will be evaluated to determine the key. For example, if we try to find an `order` by an `orderId` which is in the message headers, set `header("orderId")` (or `simple("${header.orderId}")`) as `keyExpression`.

The expression is evaluated only once at the beginning of the route to determine the *key*. If nothing was found in cache, this *key* is used to store the *value* in cache at the end of the route.
    MutableConfiguration<String, Order> configuration = new MutableConfiguration<>();
    configuration.setTypes(String.class, Order.class);
    configuration.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 10)));

    JCachePolicy jcachePolicy = new JCachePolicy();
    jcachePolicy.setCacheConfiguration(configuration);
    jcachePolicy.setCacheName("orders");
    jcachePolicy.setKeyExpression(simple("${header.orderId}"));

    //The cache key is taken from the "orderId" header.
    from("direct:get-orders")
        .policy(jcachePolicy)
        .log("Getting order with id: ${header.orderId}")
        .bean(OrderService.class, "findOrderById(${header.orderId})");

# BypassExpression

The `JCachePolicy` can be configured with an `Expression` that determines, per `Exchange`, whether to look up the value in the cache or bypass it. If the expression is evaluated to `false`, then the route is executed as normal, and the returned value is inserted into the cache for future lookup.

# Camel XML DSL examples

# Use JCachePolicy in an XML route

In Camel XML DSL, we need a named reference to the JCachePolicy instance (registered in the CamelContext or simply in Spring). We have to wrap the route between `<policy>...</policy>` tags after the `<from>` tag.

    <!-- representative sketch; the jcachePolicy bean is defined further below -->
    <camelContext xmlns="http://camel.apache.org/schema/spring">
        <route>
            <from uri="direct:get-orders"/>
            <policy ref="jcachePolicy">
                <log message="Getting order with id: ${body}"/>
                <bean beanType="com.example.OrderService" method="findOrderById(${body})"/>
            </policy>
        </route>
    </camelContext>

See this example when only a part of the route is wrapped:

    <!-- representative sketch -->
    <camelContext xmlns="http://camel.apache.org/schema/spring">
        <route>
            <from uri="direct:get-orders"/>
            <log message="Order requested: ${body}"/>
            <policy ref="jcachePolicy">
                <log message="Getting order with id: ${body}"/>
                <bean beanType="com.example.OrderService" method="findOrderById(${body})"/>
            </policy>
            <log message="Order found: ${body}"/>
        </route>
    </camelContext>

# Define CachePolicy in Spring

It's more convenient to create a JCachePolicy in Java, especially within a RouteBuilder using the Camel DSL expressions, but see this example to define it in a Spring XML:

    <!-- representative sketch -->
    <bean id="jcachePolicy" class="org.apache.camel.component.jcache.policy.JCachePolicy">
        <property name="cacheName" value="spring"/>
    </bean>

# Create Cache from XML

It's not strictly speaking related to Camel XML DSL, but JCache providers usually have a way to configure the cache in an XML file. For example, with Hazelcast you can add a `hazelcast.xml` to the classpath to configure the cache "spring" used in the example above.
    <!-- representative hazelcast.xml sketch; the expiry settings are illustrative -->
    <hazelcast xmlns="http://www.hazelcast.com/schema/config">
        <cache name="spring">
            <expiry-policy-factory>
                <timed-expiry-policy-factory expiry-policy-type="CREATED"
                                             duration-amount="60"
                                             time-unit="MINUTES"/>
            </expiry-policy-factory>
        </cache>
    </hazelcast>

# Special scenarios and error handling

If the Cache used by the policy is closed (which can be done dynamically), the whole caching functionality is skipped and the route will be executed every time.

If the determined *key* is *null*, nothing is looked up or stored in the cache.

In case of an exception during the route, the error handler is called as usual. If the exception gets `handled()`, the policy stores the Exchange body. Otherwise, nothing is added to the cache. If an exception happens while evaluating the keyExpression, the routing fails and the error handler is called as normal.

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|cacheConfiguration|A Configuration for the Cache||object|
|cacheConfigurationProperties|Properties to configure jcache||object|
|cacheConfigurationPropertiesRef|References to an existing Properties or Map to lookup in the registry to use for configuring jcache.||string|
|cachingProvider|The fully qualified class name of the javax.cache.spi.CachingProvider||string|
|configurationUri|An implementation specific URI for the CacheManager||string|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases.
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|cacheName|The name of the cache||string| +|cacheConfigurationProperties|The Properties for the javax.cache.spi.CachingProvider to create the CacheManager||object| +|cachingProvider|The fully qualified class name of the javax.cache.spi.CachingProvider||string| +|configurationUri|An implementation specific URI for the CacheManager||string| +|managementEnabled|Whether management gathering is enabled|false|boolean| +|readThrough|If read-through caching should be used|false|boolean| +|statisticsEnabled|Whether statistics gathering is enabled|false|boolean| +|storeByValue|If cache should use store-by-value or store-by-reference semantics|true|boolean| +|writeThrough|If write-through caching should be used|false|boolean| +|filteredEvents|Events a consumer should filter (multiple events can be separated by comma). If using filteredEvents option, then eventFilters one will be ignored||string| +|oldValueRequired|if the old value is required for events|false|boolean| +|synchronous|if the event listener should block the thread causing the event|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|eventFilters|The CacheEntryEventFilter. If using eventFilters option, then filteredEvents one will be ignored||array| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|action|To configure using a cache operation by default. If an operation in the message header, then the operation from the header takes precedence.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|cacheConfiguration|A Configuration for the Cache||object|
|cacheLoaderFactory|The CacheLoader factory||object|
|cacheWriterFactory|The CacheWriter factory||object|
|createCacheIfNotExists|Configure if a cache needs to be created if it does not exist, or can't be pre-configured.|true|boolean|
|expiryPolicyFactory|The ExpiryPolicy factory||object|
|lookupProviders|Configure if a camel-cache should try to find implementations of the jcache api in runtimes like OSGi.|false|boolean|
diff --git a/camel-jcr.md b/camel-jcr.md
new file mode 100644
index 0000000000000000000000000000000000000000..0ef0d00b42848f205771e853b5af551fd88bb541
--- /dev/null
+++ b/camel-jcr.md
@@ -0,0 +1,82 @@
# Jcr

**Since Camel 1.3**

**Both producer and consumer are supported**

The JCR component allows you to add/read nodes to/from a JCR compliant content repository, for example, [Apache Jackrabbit](http://jackrabbit.apache.org/), with its producer, or register an EventListener with the consumer.

You can use the consumer as an EventListener in JCR, or use a producer to read a node by identifier.

Maven users will need to add the following dependency to their `pom.xml` for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-jcr</artifactId>
        <version>x.x.x</version>
        <!-- use the same version as your Camel core version -->
    </dependency>

# URI format

    jcr://user:password@repository/path/to/node

The `repository` element of the URI is used to look up the JCR `Repository` object in the Camel context registry.

# Example

The snippet below creates a node named `node` under the `/home/test` node in the content repository. One additional property is added to the node as well: `my.contents.property`, which will contain the body of the message being sent.
    from("direct:a").setHeader(JcrConstants.JCR_NODE_NAME, constant("node"))
        .setHeader("my.contents.property", body())
        .to("jcr://user:pass@repository/home/test");

The following code will register an EventListener under the path import-application/inbox for `Event.NODE_ADDED` and `Event.NODE_REMOVED` events (event types 1 and 2, both masked as 3), listening deep for all the children.

    <!-- representative sketch of the consumer registration -->
    <route>
        <from uri="jcr://user:pass@repository/import-application/inbox?eventTypes=3&amp;deep=true"/>
        <to uri="direct:logger"/>
    </route>

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled.
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|host|Name of the javax.jcr.Repository to lookup from the Camel registry to be used.||string| +|base|Get the base node when accessing the repository||string| +|deep|When isDeep is true, events whose associated parent node is at absPath or within its subgraph are received.|false|boolean| +|eventTypes|eventTypes (a combination of one or more event types encoded as a bit mask value such as javax.jcr.observation.Event.NODE\_ADDED, javax.jcr.observation.Event.NODE\_REMOVED, etc.).||integer| +|nodeTypeNames|When a comma separated nodeTypeName list string is set, only events whose associated parent node has one of the node types (or a subtype of one of the node types) in this list will be received.||string| +|noLocal|If noLocal is true, then events generated by the session through which the listener was registered are ignored. Otherwise, they are not ignored.|false|boolean| +|password|Password for login||string| +|sessionLiveCheckInterval|Interval in milliseconds to wait before each session live checking The default value is 60000 ms.|60000|duration| +|sessionLiveCheckIntervalOnStart|Interval in milliseconds to wait before the first session live checking. The default value is 3000 ms.|3000|duration| +|username|Username for login||string| +|uuids|When a comma separated uuid list string is set, only events whose associated parent node has one of the identifiers in the comma separated uuid list will be received.||string| +|workspaceName|The workspace to access. 
If it's not specified then the default one will be used||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-jdbc.md b/camel-jdbc.md
new file mode 100644
index 0000000000000000000000000000000000000000..f1a908766019de558ba4cd38c1b76aee07b7b4cc
--- /dev/null
+++ b/camel-jdbc.md
@@ -0,0 +1,171 @@
# Jdbc

**Since Camel 1.2**

**Only producer is supported**

The JDBC component enables you to access databases through JDBC, where SQL queries (SELECT) and operations (INSERT, UPDATE, etc.) are sent in the message body. This component uses the standard JDBC API, unlike the [SQL Component](#sql-component.adoc), which uses spring-jdbc.

When you use Spring and need to support Spring Transactions, use the [Spring JDBC Component](#spring-jdbc-component.adoc) instead of this one.

Maven users will need to add the following dependency to their `pom.xml` for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-jdbc</artifactId>
        <version>x.x.x</version>
        <!-- use the same version as your Camel core version -->
    </dependency>

This component can only be used to define producer endpoints, which means that you cannot use the JDBC component in a `from()` statement.

# URI format

    jdbc:dataSourceName[?options]

# Result

By default, the result is returned in the OUT body as an `ArrayList<HashMap<String, Object>>`. The `List` object contains the list of rows and the `Map` objects contain each row with the `String` key as the column name. You can use the option `outputType` to control the result.

**Note:** This component fetches `ResultSetMetaData` to be able to return the column name as the key in the `Map`.

# Generated keys

If you insert data using SQL INSERT, then the RDBMS may support auto generated keys. You can instruct the [JDBC](#jdbc-component.adoc) producer to return the generated keys in headers. To do that, set the header `CamelRetrieveGeneratedKeys=true`. Then the generated keys will be provided as headers with the keys listed in the table above.
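Returning generated keys builds on the standard JDBC API. A minimal plain-JDBC sketch of that underlying mechanism, using `Statement.RETURN_GENERATED_KEYS` and `getGeneratedKeys()` (the table, column, and method names here are illustrative, not camel-jdbc API):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

// Sketch: execute an INSERT and read back the keys the RDBMS generated.
class GeneratedKeysSketch {
    static List<Object> insertAndReturnKeys(Connection con, String name) throws SQLException {
        List<Object> keys = new ArrayList<>();
        try (PreparedStatement ps = con.prepareStatement(
                "insert into customer (name) values (?)", Statement.RETURN_GENERATED_KEYS)) {
            ps.setString(1, name);
            ps.executeUpdate();
            try (ResultSet rs = ps.getGeneratedKeys()) { // keys auto-generated by the RDBMS
                while (rs.next()) {
                    keys.add(rs.getObject(1));
                }
            }
        }
        return keys;
    }
}
```

Running it requires a JDBC driver and an open `Connection`; the sketch only shows the call pattern the component relies on.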
Using generated keys does not work together with named parameters.

# Using named parameters

In the given route below, we want to get all the projects from the `projects` table. Notice the SQL query has two named parameters, `:?lic` and `:?min`. Camel will then look up these parameters from the message headers. Notice in the example below we set two headers with constant values for the named parameters:

    from("direct:projects")
        .setHeader("lic", constant("ASF"))
        .setHeader("min", constant(123))
        .setBody(constant("select * from projects where license = :?lic and id > :?min order by id"))
        .to("jdbc:myDataSource?useHeadersAsParameters=true");

You can also store the header values in a `java.util.Map` and store the map on the headers with the key `CamelJdbcParameters`.

# Samples

In the following example, we set up the DataSource that camel-jdbc requires. First we register our datasource in the Camel registry as `testdb`:

    EmbeddedDatabase db = new EmbeddedDatabaseBuilder()
        .setType(EmbeddedDatabaseType.DERBY).addScript("sql/init.sql").build();

    CamelContext context = ...
    context.getRegistry().bind("testdb", db);

Then we configure a route that routes to the JDBC component, so the SQL will be executed. Note how we refer to the `testdb` datasource that was bound in the previous step:

    from("direct:hello")
        .to("jdbc:testdb");

We create an endpoint, add the SQL query to the body of the IN message, and then send the exchange.
The result of the query is returned in the +*OUT* body: + + Endpoint endpoint = context.getEndpoint("direct:hello"); + Exchange exchange = endpoint.createExchange(); + // then we set the SQL on the in body + exchange.getMessage().setBody("select * from customer order by ID"); + // now we send the exchange to the endpoint, and receive the response from Camel + Exchange out = template.send(endpoint, exchange); + +If you want to work on the rows one by one instead of the entire +ResultSet at once, you need to use the Splitter EIP such as: + + from("direct:hello") + // here we split the data from the testdb into new messages one by one, + // so the mock endpoint will receive a message per row in the table + // the StreamList option allows streaming the result of the query without creating a List of rows + // and notice we also enable streaming mode on the splitter + .to("jdbc:testdb?outputType=StreamList") + .split(body()).streaming() + .to("mock:result"); + +## Polling the database every minute + +If we want to poll a database using the JDBC component, we need to +combine it with a polling scheduler such as the +[Timer](#timer-component.adoc) or [Quartz](#quartz-component.adoc) etc. +In the following example, we retrieve data from the database every 60 +seconds: + + from("timer://foo?period=60000") + .setBody(constant("select * from customer")) + .to("jdbc:testdb") + .to("activemq:queue:customers"); + +## Move Data Between Data Sources + +A common use case is to query for data, process it and move it to +another data source (ETL operations). 
In the following example, we +retrieve new customer records from the source table every hour, +filter/transform them and move them to a destination table: + + from("timer://MoveNewCustomersEveryHour?period=3600000") + .setBody(constant("select * from customer where create_time > (sysdate-1/24)")) + .to("jdbc:testdb") + .split(body()) + .process(new MyCustomerProcessor()) //filter/transform results as needed + .setBody(simple("insert into processed_customer values('${body[ID]}','${body[NAME]}')")) + .to("jdbc:testdb"); + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|dataSource|To use the DataSource instance instead of looking up the data source by name from the registry.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|connectionStrategy|To use a custom strategy for working with connections. 
Do not use a custom strategy when using the spring-jdbc component because a special Spring ConnectionStrategy is used by default to support Spring Transactions.||object|

## Endpoint Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|dataSourceName|Name of DataSource to lookup in the Registry. If the name is dataSource or default, then Camel will attempt to lookup a default DataSource from the registry, meaning if there is only one instance of DataSource found, then this DataSource will be used.||string|
|allowNamedParameters|Whether to allow using named parameters in the queries.|true|boolean|
|outputClass|Specify the full package and class name to use as conversion when outputType=SelectOne or SelectList.||string|
|outputType|Determines the output the producer should use.|SelectList|object|
|parameters|Optional parameters to the java.sql.Statement. For example to set maxRows, fetchSize etc.||object|
|readSize|The default maximum number of rows that can be read by a polling query. The default value is 0.||integer|
|resetAutoCommit|If resetAutoCommit is true, Camel will set the autoCommit flag on the JDBC connection to false, commit the change after executing the statement, and reset the autoCommit flag of the connection at the end. If the JDBC connection doesn't support resetting the autoCommit flag, you can set the resetAutoCommit flag to false, and Camel will not try to reset the autoCommit flag. When used with XA transactions, you most likely need to set it to false so that the transaction manager is in charge of committing this tx.|true|boolean|
|transacted|Whether transactions are in use.|false|boolean|
|useGetBytesForBlob|To read BLOB columns as bytes instead of string data. This may be needed for certain databases such as Oracle where you must read BLOB columns as bytes.|false|boolean|
|useHeadersAsParameters|Set this option to true to use the prepareStatementStrategy with named parameters.
This allows defining queries with named placeholders and using headers with the dynamic values for the query placeholders.|false|boolean|
|useJDBC4ColumnNameAndLabelSemantics|Sets whether to use JDBC 4.0 or JDBC 3.0 (or older) semantics when retrieving the column name. JDBC 4.0 uses columnLabel to get the column name, whereas JDBC 3.0 uses both columnName and columnLabel. Unfortunately, JDBC drivers behave differently, so you can use this option to work around issues with your JDBC driver if you get problems using this component. This option is true by default.|true|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|beanRowMapper|To use a custom org.apache.camel.component.jdbc.BeanRowMapper when using outputClass. The default implementation will lower case the row names and skip underscores and dashes. For example CUST\_ID is mapped as custId.||object|
|connectionStrategy|To use a custom strategy for working with connections.
Do not use a custom strategy when using the spring-jdbc component because a special Spring ConnectionStrategy is used by default to support Spring Transactions.||object| +|prepareStatementStrategy|Allows the plugin to use a custom org.apache.camel.component.jdbc.JdbcPrepareStatementStrategy to control preparation of the query and prepared statement.||object| diff --git a/camel-jetty.md b/camel-jetty.md new file mode 100644 index 0000000000000000000000000000000000000000..bd67f848c513ac17e20a95c1ac88f193d4e86d1c --- /dev/null +++ b/camel-jetty.md @@ -0,0 +1,593 @@ +# Jetty + +**Since Camel 1.2** + +**Only consumer is supported** + +The Jetty component provides HTTP-based endpoints for consuming and +producing HTTP requests. That is, the Jetty component behaves as a +simple Web server. + +**Stream** + +Jetty is stream-based, which means the input it receives is submitted to +Camel as a stream. That means you will only be able to read the content +of the stream **once**. If you find a situation where the message body +appears to be empty, or you need to access the +Exchange.HTTP\_RESPONSE\_CODE data multiple times (e.g.: doing +multicasting, or redelivery error handling), you should use Stream +caching or convert the message body to a `String` which is safe to be +re-read multiple times. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-jetty + x.x.x + + + +# URI format + + jetty:http://hostname[:port][/resourceUri][?options] + +# Message Headers + +Camel uses the same message headers as the [HTTP](#http-component.adoc) +component. It also uses a header (`Exchange.HTTP_CHUNKED`, +`CamelHttpChunked`) to turn on or turn off the chunked encoding on the +camel-jetty consumer. + +Camel also populates **all** `request.parameter` and `request.headers`. 
+For example, given a client request with the URL
+`http://myserver/myserver?orderid=123`, the exchange will contain a
+header named `orderid` with the value `123`.
+
+You can get the `request.parameter` from the message header not only
+from GET requests, but also from other HTTP methods.
+
+# Usage
+
+The Jetty component supports consumer endpoints.
+
+# Consumer Example
+
+In this sample we define a route that exposes an HTTP service at
+`http://localhost:8080/myapp/myservice`:
+
+**Usage of localhost**
+
+When you specify `localhost` in a URL, Camel exposes the endpoint only
+on the local TCP/IP network interface, so it cannot be accessed from
+outside the machine it operates on.
+
+If you need to expose a Jetty endpoint on a specific network interface,
+the numerical IP address of this interface should be used as the host.
+If you need to expose a Jetty endpoint on all network interfaces, the
+`0.0.0.0` address should be used.
+
+To listen across an entire URI prefix, see [How do I let Jetty match
+wildcards](#manual:faq:how-do-i-let-jetty-match-wildcards.adoc).
+
+# Servlets
+
+If you actually want to expose routes by HTTP and already have a
+Servlet, you should instead refer to the [Servlet
+Transport](#servlet-component.adoc).
+
+# HTTP Request Parameters
+
+If a client sends the HTTP request `http://serverUri?one=hello`, the
+Jetty component will copy the HTTP request parameter `one` to the
+exchange’s `in.header`. We can then use the `simple` language to route
+exchanges that contain this header to a specific endpoint and all others
+to another. If we used a language more powerful than
+[Simple](#languages:simple-language.adoc) (such as
+[OGNL](#languages:ognl-language.adoc)), we could also test for the
+parameter value and do routing based on the header value as well.
+
+# Session Support
+
+The session support option, `sessionSupport`, can be used to enable an
+`HttpSession` object and access the session object while processing the
+exchange.
For example, the following route enables sessions:
+
+
+
+
+
+
+The `myCode` Processor can be instantiated by a Spring `bean` element:
+
+
+
+Where the processor implementation can access the `HttpSession` as
+follows:
+
+    public void process(Exchange exchange) throws Exception {
+        HttpSession session = exchange.getIn(HttpMessage.class).getRequest().getSession();
+        ...
+    }
+
+# SSL Support (HTTPS)
+
+Using the JSSE Configuration Utility
+
+The Jetty component supports SSL/TLS configuration through the [Camel
+JSSE Configuration
+Utility](#manual::camel-configuration-utilities.adoc). This utility
+greatly decreases the amount of component-specific code you need to
+write and is configurable at the endpoint and component levels. The
+following examples demonstrate how to use the utility with the Jetty
+component.
+
+Programmatic configuration of the component
+
+    KeyStoreParameters ksp = new KeyStoreParameters();
+    ksp.setResource("/users/home/server/keystore.jks");
+    ksp.setPassword("keystorePassword");
+
+    KeyManagersParameters kmp = new KeyManagersParameters();
+    kmp.setKeyStore(ksp);
+    kmp.setKeyPassword("keyPassword");
+
+    SSLContextParameters scp = new SSLContextParameters();
+    scp.setKeyManagers(kmp);
+
+    JettyComponent jettyComponent = getContext().getComponent("jetty", JettyComponent.class);
+    jettyComponent.setSslContextParameters(scp);
+
+Spring DSL based configuration of endpoint
+
+
+
+
+
+
+
+
+
+Blueprint based configuration of endpoint
+
+Global configuration of sslContextParameters in a dedicated Blueprint
+XML file
+
+
+
+
+
+
+
+
+
+
+
+
+Use of the global configuration in other Blueprint XML files with route
+definitions
+
+    ...
+
+
+
+
+
+    ...
+
+Configuring Jetty Directly
+
+Jetty provides SSL support out of the box. To enable Jetty to run in SSL
+mode, format the URI with the `https://` prefix, for example:
+
+
+
+Jetty also needs to know where to load your keystore from and what
+passwords to use to load the correct SSL certificate.
Set the following
+JVM System Properties:
+
+- `org.eclipse.jetty.ssl.keystore` specifies the location of the Java
+  keystore file, which contains the Jetty server’s own X.509
+  certificate in a *key entry*. A key entry stores the X.509
+  certificate (effectively, the *public key*) and also its associated
+  private key.
+
+- `org.eclipse.jetty.ssl.password` the store password, which is
+  required to access the keystore file (this is the same password that
+  is supplied to the `keytool` command’s `-storepass` option).
+
+- `org.eclipse.jetty.ssl.keypassword` the key password, which is used
+  to access the certificate’s key entry in the keystore (this is the
+  same password that is supplied to the `keytool` command’s
+  `-keypass` option).
+
+For details of how to configure SSL on a Jetty endpoint, read the
+following documentation at the Jetty Site:
+[http://docs.codehaus.org/display/JETTY/How+to+configure+SSL](http://docs.codehaus.org/display/JETTY/How+to+configure+SSL)
+
+Camel doesn’t expose some SSL properties directly. However, Camel does
+expose the underlying SslSocketConnector, which will allow you to set
+properties like needClientAuth for mutual authentication requiring a
+client certificate or wantClientAuth for mutual authentication where a
+client doesn’t need a certificate but can have one.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+The keys you use in the above map are the port numbers you configure
+Jetty to listen on.
+
+## Configuring general SSL properties
+
+Instead of a per-port number specific SSL socket connector (as shown
+above), you can now configure general properties that apply for all SSL
+socket connectors (that are not explicitly configured as above with the
+port number as entry).
+ + + + + + + + + + + + + +## How to obtain reference to the X509Certificate + +Jetty stores a reference to the certificate in the HttpServletRequest +which you can access from code as follows: + + HttpServletRequest req = exchange.getIn().getBody(HttpServletRequest.class); + X509Certificate cert = (X509Certificate) req.getAttribute("javax.servlet.request.X509Certificate") + +## Configuring general HTTP properties + +Instead of a per-port number specific HTTP socket connector (as shown +above), you can now configure general properties that apply for all HTTP +socket connectors (that are not explicitly configured as above with the +port number as entry). + + + + + + + + + + +## Obtaining X-Forwarded-For header with HttpServletRequest.getRemoteAddr() + +If the HTTP requests are handled by an Apache server and forwarded to +jetty with mod\_proxy, the original client IP address is in the +X-Forwarded-For header and the HttpServletRequest.getRemoteAddr() will +return the address of the Apache proxy. + +Jetty has a forwarded property which takes the value from +X-Forwarded-For and places it in the HttpServletRequest remoteAddr +property. This property is not available directly through the endpoint +configuration, but it can be easily added using the socketConnectors +property: + + + + + + + + + + + + + +This is particularly useful when an existing Apache server handles TLS +connections for a domain and proxies them to application servers +internally. + +# Default behavior for returning HTTP status codes + +The default behavior of HTTP status codes is defined by the +`org.apache.camel.component.http.DefaultHttpBinding` class, which +handles how a response is written and also sets the HTTP status code. + +If the exchange was processed successfully, the 200 HTTP status code is +returned. +If the exchange failed with an exception, the 500 HTTP status code is +returned, and the stacktrace is returned in the body. 
If you want to +specify which HTTP status code to return, set the code in the +`Exchange.HTTP_RESPONSE_CODE` header of the OUT message. + +# Customizing HttpBinding + +By default, Camel uses the +`org.apache.camel.component.http.DefaultHttpBinding` to handle how a +response is written. If you like, you can customize this behavior either +by implementing your own `HttpBinding` class or by extending +`DefaultHttpBinding` and overriding the appropriate methods. + +The following example shows how to customize the `DefaultHttpBinding` in +order to change how exceptions are returned: + +We can then create an instance of our binding and register it in the +Spring registry as follows: + + + +And then we can reference this binding when we define the route: + + + + + + +# Jetty handlers and security configuration + +You can configure a list of Jetty handlers on the endpoint, which can be +useful for enabling advanced Jetty security features. These handlers are +configured in Spring XML as follows: + + + + + + + + + + + + + + + + + + + + + + +You can configure a list of Jetty handlers as follows: + + + + + + + + + + + + + + + + + + + + + + + +You can then define the endpoint as: + + from("jetty:http://0.0.0.0:9080/myservice?handlers=securityHandler") + +If you need more handlers, set the `handlers` option equal to a +comma-separated list of bean IDs. + +Blueprint-based definition of basic authentication (based on Jetty 12): + + + + + + + + + rolename1 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ... + +The `roles.properties` files contain + + username1=password1,rolename1 + username2=password2,rolename1 + +This file is located in the `etc` folder and will be reloaded when +changed. The endpoint: + + http://0.0.0.0/path + +It is now secured with basic authentication. Only `username1` with +`password1` and `username2` with `password2` are able to access the +endpoint. 
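The `roles.properties` entries above follow a `username=password,role1[,role2...]` layout. A minimal parser for that layout (a hypothetical helper for illustration only; Jetty's own login service performs the real parsing of the realm file):

```java
import java.util.ArrayList;
import java.util.List;

public class RolesLine {
    final String user;
    final String password;
    final List<String> roles = new ArrayList<>();

    // Parse one "username=password,role1,role2" entry as shown above.
    // Illustrative only -- this is not Jetty's actual loader.
    RolesLine(String line) {
        String[] kv = line.split("=", 2);
        user = kv[0].trim();
        String[] fields = kv[1].split(",");
        password = fields[0].trim();            // first field is the password
        for (int i = 1; i < fields.length; i++) {
            roles.add(fields[i].trim());        // remaining fields are role names
        }
    }

    public static void main(String[] args) {
        RolesLine l = new RolesLine("username1=password1,rolename1");
        System.out.println(l.user + " / " + l.password + " / " + l.roles);
    }
}
```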
+
+# How to return a custom HTTP 500 reply message
+
+You may want to return a custom reply message when something goes wrong,
+instead of the default reply message that Camel
+[Jetty](#jetty-component.adoc) replies with. You could use a custom
+`HttpBinding` to be in control of the message mapping, but often it may
+be easier to use Camel’s Exception Clause to construct the custom reply
+message. For example, as shown here, where we return
+`Dude something went wrong` with HTTP error code 500:
+
+# Multipart Form support
+
+The camel-jetty component supports multipart form post out of the box.
+The submitted form data is mapped into the message headers. Camel Jetty
+creates an attachment for each uploaded file. The file name is mapped to
+the name of the attachment, and the content type of the attachment is
+taken from the uploaded file. You can find the example here.
+
+# Jetty JMX support
+
+The camel-jetty component supports the enabling of Jetty’s JMX
+capabilities at the component and endpoint level with the endpoint
+configuration taking priority. Note that JMX must be enabled within the
+Camel context to enable JMX support in this component as the component
+provides Jetty with a reference to the MBeanServer registered with the
+Camel context. Because the camel-jetty component caches and reuses Jetty
+resources for a given protocol/host/port pairing, this configuration
+option will only be evaluated during the creation of the first endpoint
+to use a protocol/host/port pairing. For example, given two routes
+created from the following XML fragments, JMX support would remain
+enabled for all endpoints listening on `https://0.0.0.0`.
+
+
+
+
+
+The camel-jetty component also provides for direct configuration of the
+Jetty MBeanContainer. Jetty creates MBean names dynamically.
If you are +running another instance of Jetty outside of the Camel context and +sharing the same MBeanServer between the instances, you can provide both +instances with a reference to the same MBeanContainer to avoid name +collisions when registering Jetty MBeans. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|continuationTimeout|Allows to set a timeout in millis when using Jetty as consumer (server). By default Jetty uses 30000. You can use a value of = 0 to never expire. If a timeout occurs then the request will be expired and Jetty will return back a http error 503 to the client. This option is only in use when using Jetty with the Asynchronous Routing Engine.|30000|integer| +|enableJmx|If this option is true, Jetty JMX support will be enabled for this endpoint.|false|boolean| +|maxThreads|To set a value for maximum number of threads in server thread pool. Notice that both a min and max size must be configured.||integer| +|minThreads|To set a value for minimum number of threads in server thread pool. 
Notice that both a min and max size must be configured.||integer| +|muteException|If enabled and an Exchange failed processing on the consumer side the response's body won't contain the exception's stack trace.|true|boolean| +|requestBufferSize|Allows to configure a custom value of the request buffer size on the Jetty connectors.||integer| +|requestHeaderSize|Allows to configure a custom value of the request header size on the Jetty connectors.||integer| +|responseBufferSize|Allows to configure a custom value of the response buffer size on the Jetty connectors.||integer| +|responseHeaderSize|Allows to configure a custom value of the response header size on the Jetty connectors.||integer| +|sendServerVersion|If the option is true, jetty will send the server header with the jetty version information to the client which sends the request. NOTE please make sure there is no any other camel-jetty endpoint is share the same port, otherwise this option may not work as expected.|true|boolean| +|useContinuation|Whether or not to use Jetty continuations for the Jetty Server.|true|boolean| +|useXForwardedForHeader|To use the X-Forwarded-For header in HttpServletRequest.getRemoteAddr.|false|boolean| +|fileSizeThreshold|The size threshold after which files will be written to disk for multipart/form-data requests. By default the files are not written to disk|0|integer| +|filesLocation|The directory location where files will be store for multipart/form-data requests. By default the files are written in the system temporary folder||string| +|maxFileSize|The maximum size allowed for uploaded files. -1 means no limit|-1|integer| +|maxRequestSize|The maximum size allowed for multipart/form-data requests. -1 means no limit|-1|integer| +|threadPool|To use a custom thread pool for the server. 
This option should only be used in special circumstances.||object|
+|allowJavaSerializedObject|Whether to allow Java serialization when a request uses content-type=application/x-java-serialized-object. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|errorHandler|This option is used to set the ErrorHandler that Jetty server uses.||object|
+|httpBinding|Not to be used - use JettyHttpBinding instead.||object|
+|httpConfiguration|Jetty component does not use HttpConfiguration.||object|
+|mbContainer|To use an existing configured org.eclipse.jetty.jmx.MBeanContainer if JMX is enabled that Jetty uses for registering mbeans.||object|
+|headerFilterStrategy|To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message.||object|
+|proxyHost|To use an HTTP proxy to configure the hostname.||string|
+|proxyPort|To use an HTTP proxy to configure the port number.||integer|
+|keystore|Specifies the location of the Java keystore file, which contains the Jetty server's own X.509 certificate in a key entry.||string|
+|socketConnectorProperties|A map which contains general HTTP connector properties. Uses the same principle as sslSocketConnectorProperties.||object|
+|socketConnectors|A map which contains per port number specific HTTP connectors.
Uses the same principle as sslSocketConnectors.||object| +|sslContextParameters|To configure security using SSLContextParameters||object| +|sslKeyPassword|The key password, which is used to access the certificate's key entry in the keystore (this is the same password that is supplied to the keystore command's -keypass option).||string| +|sslPassword|The ssl password, which is required to access the keystore file (this is the same password that is supplied to the keystore command's -storepass option).||string| +|sslSocketConnectorProperties|A map which contains general SSL connector properties.||object| +|sslSocketConnectors|A map which contains per port number specific SSL connectors.||object| +|useGlobalSslContextParameters|Enable usage of global SSL context parameters|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|httpUri|The url of the HTTP endpoint to call.||string| +|headerFilterStrategy|To use a custom HeaderFilterStrategy to filter header to and from Camel message.||object| +|httpBinding|To use a custom HttpBinding to control the mapping between Camel message and HttpClient.||object| +|chunked|If this option is false the Servlet will disable the HTTP streaming and set the content-length header on the response|true|boolean| +|disableStreamCache|Determines whether or not the raw input stream from Servlet is cached or not (Camel will read the stream into a in memory/overflow to file, Stream caching) cache. By default Camel will cache the Servlet input stream to support reading it multiple times to ensure it Camel can retrieve all data from the stream. However you can set this option to true when you for example need to access the raw stream, such as streaming it directly to a file or other persistent store. DefaultHttpBinding will copy the request input stream into a stream cache and put it into message body if this option is false to support reading the stream multiple times. 
If you use Servlet to bridge/proxy an endpoint then consider enabling this option to improve performance, in case you do not need to read the message payload multiple times. The http producer will by default cache the response body stream. If setting this option to true, then the producers will not cache the response body stream but use the response stream as-is as the message body.|false|boolean| +|transferException|If enabled and an Exchange failed processing on the consumer side, and if the caused Exception was send back serialized in the response as a application/x-java-serialized-object content type. On the producer side the exception will be deserialized and thrown as is, instead of the HttpOperationFailedException. The caused exception is required to be serialized. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk.|false|boolean| +|async|Configure the consumer to work in async mode|false|boolean| +|continuationTimeout|Allows to set a timeout in millis when using Jetty as consumer (server). By default Jetty uses 30000. You can use a value of = 0 to never expire. If a timeout occurs then the request will be expired and Jetty will return back a http error 503 to the client. This option is only in use when using Jetty with the Asynchronous Routing Engine.|30000|integer| +|enableCORS|If the option is true, Jetty server will setup the CrossOriginFilter which supports the CORS out of box.|false|boolean| +|enableJmx|If this option is true, Jetty JMX support will be enabled for this endpoint. See Jetty JMX support for more details.|false|boolean| +|enableMultipartFilter|Whether org.apache.camel.component.jetty.MultiPartFilter is enabled or not. 
You should set this value to false when bridging endpoints, to ensure multipart requests is proxied/bridged as well.|false|boolean| +|httpMethodRestrict|Used to only allow consuming if the HttpMethod matches, such as GET/POST/PUT etc. Multiple methods can be specified separated by comma.||string| +|logException|If enabled and an Exchange failed processing on the consumer side the exception's stack trace will be logged when the exception stack trace is not sent in the response's body.|false|boolean| +|matchOnUriPrefix|Whether or not the consumer should try to find a target consumer by matching the URI prefix if no exact match is found.|false|boolean| +|muteException|If enabled and an Exchange failed processing on the consumer side the response's body won't contain the exception's stack trace.|false|boolean| +|responseBufferSize|To use a custom buffer size on the jakarta.servlet.ServletResponse.||integer| +|sendDateHeader|If the option is true, jetty server will send the date header to the client which sends the request. NOTE please make sure there is no any other camel-jetty endpoint is share the same port, otherwise this option may not work as expected.|false|boolean| +|sendServerVersion|If the option is true, jetty will send the server header with the jetty version information to the client which sends the request. NOTE please make sure there is no any other camel-jetty endpoint is share the same port, otherwise this option may not work as expected.|true|boolean| +|sessionSupport|Specifies whether to enable the session manager on the server side of Jetty.|false|boolean| +|useContinuation|Whether or not to use Jetty continuations for the Jetty Server.||boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|eagerCheckContentAvailable|Whether to eager check whether the HTTP requests has content if the content-length header is 0 or not present. This can be turned on in case HTTP clients do not send streamed data.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|fileSizeThreshold|The size threshold after which files will be written to disk for multipart/form-data requests. By default the files are not written to disk||integer| +|filesLocation|The directory location where files will be store for multipart/form-data requests. By default the files are written in the system temporary folder||string| +|filterInitParameters|Configuration of the filter init parameters. These parameters will be applied to the filter list before starting the jetty server.||object| +|filters|Allows using a custom filters which is putted into a list and can be find in the Registry. Multiple values can be separated by comma.||array| +|handlers|Specifies a comma-delimited set of Handler instances to lookup in your Registry. These handlers are added to the Jetty servlet context (for example, to add security). 
Important: You can not use different handlers with different Jetty endpoints using the same port number. The handlers is associated to the port number. If you need different handlers, then use different port numbers.||array| +|idleTimeout|The max idle time (in milli seconds) is applied to an HTTP request for IO operations and delayed dispatch. Idle time 0 implies an infinite timeout, -1 (default) implies no HTTP channel timeout and the connection timeout is used instead.|-1|integer| +|mapHttpMessageBody|If this option is true then IN exchange Body of the exchange will be mapped to HTTP body. Setting this to false will avoid the HTTP mapping.|true|boolean| +|mapHttpMessageFormUrlEncodedBody|If this option is true then IN exchange Form Encoded body of the exchange will be mapped to HTTP. Setting this to false will avoid the HTTP Form Encoded body mapping.|true|boolean| +|mapHttpMessageHeaders|If this option is true then IN exchange Headers of the exchange will be mapped to HTTP headers. Setting this to false will avoid the HTTP Headers mapping.|true|boolean| +|maxFileSize|The maximum size allowed for uploaded files. -1 means no limit||integer| +|maxRequestSize|The maximum size allowed for multipart/form-data requests. -1 means no limit||integer| +|multipartFilter|Allows using a custom multipart filter. Note: setting multipartFilterRef forces the value of enableMultipartFilter to true.||object| +|optionsEnabled|Specifies whether to enable HTTP OPTIONS for this Servlet consumer. By default OPTIONS is turned off.|false|boolean| +|traceEnabled|Specifies whether to enable HTTP TRACE for this Servlet consumer. 
By default TRACE is turned off.|false|boolean|
+|sslContextParameters|To configure security using SSLContextParameters||object|
diff --git a/camel-jgroups-raft.md b/camel-jgroups-raft.md
new file mode 100644
index 0000000000000000000000000000000000000000..0c6b402f5cb7a9384c8c5e339403e3a5d299d9f9
--- /dev/null
+++ b/camel-jgroups-raft.md
@@ -0,0 +1,102 @@
+# Jgroups-raft
+
+**Since Camel 2.24**
+
+**Both producer and consumer are supported**
+
+[JGroups-raft](http://belaban.github.io/jgroups-raft/) is a
+[Raft](https://raftconsensus.github.io/) implementation in
+[JGroups](http://www.jgroups.org/). The **jgroups-raft:** component
+provides interoperability between Camel and JGroups-raft clusters.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component.
+
+
+    org.apache.camel
+    camel-jgroups-raft
+    
+    x.y.z
+
+
+# URI format
+
+    jgroups-raft:clusterName[?options]
+
+Where **clusterName** represents the name of the JGroups-raft cluster
+the component should connect to.
+
+# Options
+
+# Usage
+
+Using the `jgroups-raft` component with `enableRoleChangeEvents=true` on
+the consumer side of the route will capture changes in the JGroups-raft
+role and forward them to the Camel route. The JGroups-raft consumer
+processes incoming messages
+[asynchronously](http://camel.apache.org/asynchronous-routing-engine.html).
+
+    // Capture raft role changes from cluster named
+    // 'clusterName' and send them to Camel route.
+    from("jgroups-raft:clusterName?enableRoleChangeEvents=true").to("seda:queue");
+
+Using the `jgroups-raft` component on the producer side of the route
+will use the body of the Camel exchange (which must be a `byte[]`) to
+perform a setX() operation on the raftHandle associated with the
+endpoint.
+
+    // perform a setX() operation on the shared state machine of the cluster named 'clusterName'
+    from("direct:start").to("jgroups-raft:clusterName");
+
+# Examples
+
+## Receive cluster view change notifications
+
+The snippet below demonstrates how to create the consumer endpoint
+listening to role change events. By default, this option is off.
+
+    ...
+    from("jgroups-raft:clusterName?enableRoleChangeEvents=true").to("mock:mockEndpoint");
+    ...
+
+## Keeping singleton route within the cluster
+
+The snippet below demonstrates how to keep the singleton consumer route
+in the cluster of Camel Contexts. As soon as the master node dies, one
+of the slaves will be elected as a new master and started. In this
+particular example, we want to keep a singleton
+[jetty](#jetty-component.adoc) instance listening for requests on the
+address [http://localhost:8080/orders](http://localhost:8080/orders).
+
+    JGroupsRaftClusterService service = new JGroupsRaftClusterService();
+    service.setId("raftId");
+    service.setRaftId("raftId");
+    service.setJgroupsClusterName("clusterName");
+    ...
+    context.addService(service);
+
+    from("master:mycluster:jetty:http://localhost:8080/orders").to("jms:orders");
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|channelProperties|Specifies configuration properties of the RaftHandle JChannel used by the endpoint (ignored if raftHandle ref is provided).|raft.xml|string|
+|raftHandle|RaftHandle to use.||object|
+|raftId|Unique raftId to use.||string|
+|stateMachine|StateMachine to use.|NopStateMachine|object|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown.
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|clusterName|The name of the JGroupsraft cluster the component should connect to.||string| +|enableRoleChangeEvents|If set to true, the consumer endpoint will receive roleChange event as well (not just connecting and/or using the state machine). 
By default it is set to false.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-jgroups.md b/camel-jgroups.md
new file mode 100644
index 0000000000000000000000000000000000000000..1649f59604df63122c197439626d7953dcd6cb79
--- /dev/null
+++ b/camel-jgroups.md
@@ -0,0 +1,176 @@
+# Jgroups
+
+**Since Camel 2.13**
+
+**Both producer and consumer are supported**
+
+[JGroups](http://www.jgroups.org) is a toolkit for reliable multicast
+communication. The **jgroups:** component provides exchange of messages
+between Camel infrastructure and [JGroups](http://jgroups.org) clusters.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component.
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-jgroups</artifactId>
+        <!-- use the same version as your Camel core version -->
+        <version>x.y.z</version>
+    </dependency>
+
+# URI format
+
+    jgroups:clusterName[?options]
+
+Where **clusterName** is the name of the JGroups cluster the component
+should connect to.
+
+# Usage
+
+Using the `jgroups` component on the consumer side of the route will
+capture messages received by the `JChannel` associated with the endpoint
+and forward them to the Camel route. The JGroups consumer processes
+incoming messages
+[asynchronously](http://camel.apache.org/asynchronous-routing-engine.html).
+
+    // Capture messages from cluster named
+    // 'clusterName' and send them to Camel route.
+    from("jgroups:clusterName").to("seda:queue");
+
+Using the `jgroups` component on the producer side of the route will
+forward the body of the Camel exchanges to the `JChannel` instance
+managed by the endpoint.
+
+    // Send a message to the cluster named 'clusterName'
+    from("direct:start").to("jgroups:clusterName");
+
+# Predefined filters
+
+The JGroups component comes with a predefined filters factory class
+named `JGroupsFilters`.
+
+If you would like to consume only view-change notifications sent to the
+coordinator of the cluster (and ignore those sent to the "slave" nodes),
+use the `JGroupsFilters.dropNonCoordinatorViews()` filter. This filter
+is particularly useful when you want a single Camel node to become the
+master in the cluster, because messages passing this filter notify you
+when a given node has become the coordinator of the cluster. The snippet
+below demonstrates how to collect only messages received by the master
+node.
+
+    import static org.apache.camel.component.jgroups.JGroupsFilters.dropNonCoordinatorViews;
+    ...
+    from("jgroups:clusterName?enableViewMessages=true").
+    filter(dropNonCoordinatorViews()).
+    to("seda:masterNodeEventsQueue");
+
+# Predefined expressions
+
+The JGroups component comes with a predefined expressions factory class
+named `JGroupsExpressions`.
+
+If you would like to create a delayer that affects the route only if the
+Camel context has not been started yet, use the
+`JGroupsExpressions.delayIfContextNotStarted(long delay)` factory
+method. The expression created by this factory method will return the
+given delay value only if the Camel context is in a state different from
+`started`. This expression is particularly useful if you would like to
+use the JGroups component for keeping a singleton (master) route within
+the cluster. The [Control Bus](#controlbus-component.adoc) `start`
+command won't initialize the singleton route if the Camel Context hasn't
+been started yet. So you need to delay the startup of the master route,
+to be sure that it has been initialized after the Camel Context startup.
Because +such a scenario can happen only during the initialization of the +cluster, we don’t want to delay startup of the slave node becoming the +new master - that’s why we need a conditional delay expression. + +The snippet below demonstrates how to use conditional delaying with the +JGroups component to delay the initial startup of master node in the +cluster. + + import static java.util.concurrent.TimeUnit.SECONDS; + import static org.apache.camel.component.jgroups.JGroupsExpressions.delayIfContextNotStarted; + import static org.apache.camel.component.jgroups.JGroupsFilters.dropNonCoordinatorViews; + ... + from("jgroups:clusterName?enableViewMessages=true"). + filter(dropNonCoordinatorViews()). + threads().delay(delayIfContextNotStarted(SECONDS.toMillis(5))). // run in separated and delayed thread. Delay only if the context hasn't been started already. + to("controlbus:route?routeId=masterRoute&action=start&async=true"); + + from("timer://master?repeatCount=1").routeId("masterRoute").autoStartup(false).to(masterMockUri); + +# Examples + +## Sending (receiving) messages to (from) the JGroups cluster + +To send a message to the JGroups cluster, use producer endpoint, just as +demonstrated in the snippet below. + + from("direct:start").to("jgroups:myCluster"); + ... + producerTemplate.sendBody("direct:start", "msg") + +To receive the message from the snippet above (on the same, or the other +physical machine), listen to the messages coming from the given cluster, +just as demonstrated on the code fragment below. + + mockEndpoint.setExpectedMessageCount(1); + mockEndpoint.message(0).body().isEqualTo("msg"); + ... + from("jgroups:myCluster").to("mock:messagesFromTheCluster"); + ... + mockEndpoint.assertIsSatisfied(); + +## Receive cluster view change notifications + +The snippet below demonstrates how to create the consumer endpoint +listening to the notifications regarding cluster membership changes. By +default, the endpoint consumes only regular messages. 
+ + mockEndpoint.setExpectedMessageCount(1); + mockEndpoint.message(0).body().isInstanceOf(org.jgroups.View.class); + ... + from("jgroups:clusterName?enableViewMessages=true").to(mockEndpoint); + ... + mockEndpoint.assertIsSatisfied(); + +## Keeping singleton route within the cluster + +The snippet below demonstrates how to keep the singleton consumer route +in the cluster of Camel Contexts. As soon as the master node dies, one +of the slaves will be elected as a new master and started. In this +particular example, we want to keep singleton +[jetty](#jetty-component.adoc) instance listening for the requests on +address\` [http://localhost:8080/orders](http://localhost:8080/orders)\`. + + JGroupsLockClusterService service = new JGroupsLockClusterService(); + service.setId("uniqueNodeId"); + ... + context.addService(service); + + from("master:mycluster:jetty:http://localhost:8080/orders").to("jms:orders"); + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|channel|Channel to use||object| +|channelProperties|Specifies configuration properties of the JChannel used by the endpoint.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|enableViewMessages|If set to true, the consumer endpoint will receive org.jgroups.View messages as well (not only org.jgroups.Message instances). By default only regular messages are consumed by the endpoint.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|clusterName|The name of the JGroups cluster the component should connect to.||string| +|channelProperties|Specifies configuration properties of the JChannel used by the endpoint.||string| +|enableViewMessages|If set to true, the consumer endpoint will receive org.jgroups.View messages as well (not only org.jgroups.Message instances). 
By default only regular messages are consumed by the endpoint.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-jira.md b/camel-jira.md
new file mode 100644
index 0000000000000000000000000000000000000000..87078a5510c9bd596c724c6e98af3d8d1b8396b0
--- /dev/null
+++ b/camel-jira.md
@@ -0,0 +1,329 @@
+# Jira
+
+**Since Camel 3.0**
+
+**Both producer and consumer are supported**
+
+The JIRA component interacts with the JIRA API by encapsulating
+Atlassian's [REST Java Client for
+JIRA](https://bitbucket.org/atlassian/jira-rest-java-client/src/master/).
+It currently provides polling for new issues and new comments. It is
+also able to create new issues, add comments, change issues, add/remove
+watchers, add attachments, and transition the state of an issue.
+
+Rather than webhooks, this endpoint relies on simple polling. Reasons
+include:
+
+- Concern for reliability/stability
+
+- The types of payloads we're polling aren't typically large (plus,
+  paging is available in the API)
+
+- The need to support apps running somewhere not publicly accessible
+  where a webhook would fail
+
+Note that the JIRA API is fairly expansive. Therefore, this component
+could be easily expanded to provide additional interactions.
+
+Maven users will need to add the following dependency to their pom.xml
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-jira</artifactId>
+        <version>${camel-version}</version>
+    </dependency>
+
+Atlassian does not release their JIRA Java client to Maven Central.
+Therefore, when you use `camel-jira`, its `pom.xml` includes a Maven
+repository with the following URL:
+`https://packages.atlassian.com/maven-external`.
+
+Keep this in mind, as Maven will then use this repository to download
+the JIRA client (and potentially other JARs), which allows Atlassian to
+track these downloads on their servers.
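If you need to declare that repository yourself (for example, because your build goes through a repository manager), the declaration looks roughly like the sketch below. Only the URL comes from the text above; the `<id>` value is arbitrary and chosen here for illustration:

```xml
<!-- Illustrative repository declaration; only the URL is prescribed. -->
<repositories>
    <repository>
        <id>atlassian-external</id>
        <url>https://packages.atlassian.com/maven-external</url>
    </repository>
</repositories>
```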
+ +# URI format + + jira://type[?options] + +The Jira type accepts the following operations: + +For consumers: + +- newIssues: retrieve only new issues after the route is started + +- newComments: retrieve only new comments after the route is started + +- watchUpdates: retrieve only updated fields/issues based on provided + jql + +For producers: + +- addIssue: add an issue + +- addComment: add a comment on a given issue + +- attach: add an attachment on a given issue + +- deleteIssue: delete a given issue + +- updateIssue: update fields of a given issue + +- transitionIssue: transition a status of a given issue + +- watchers: add/remove watchers of a given issue + +As Jira is fully customizable, you must ensure the field IDs exist for +the project and workflow, as they can change between different Jira +servers. + +# Client Factory + +You can bind the `JiraRestClientFactory` with name +**JiraRestClientFactory** in the registry to have it automatically set +in the Jira endpoint. + +# Authentication + +Camel-jira supports the following forms of authentication: + +- [Basic + Authentication](https://developer.atlassian.com/cloud/jira/platform/jira-rest-api-basic-authentication/) + +- [OAuth 3 legged + authentication](https://developer.atlassian.com/cloud/jira/platform/jira-rest-api-oauth-authentication/) + +- [Personal + Token](https://confluence.atlassian.com/enterprise/using-personal-access-tokens-1026032365.html)\* + +We recommend using OAuth or Personal token whenever possible, as it +provides the best security for your users and system. + +## Basic authentication requirements: + +- A username and a password. + +## OAuth authentication requirements: + +Follow the tutorial in [Jira OAuth +documentation](https://developer.atlassian.com/cloud/jira/platform/jira-rest-api-oauth-authentication/) +to generate the client private key, consumer key, verification code and +access token. + +- a private key, generated locally on your system. 
+ +- A verification code, generated by Jira server. + +- The consumer key, set in the Jira server settings. + +- An access token, generated by Jira server. + +## Personal access token authentication requirements: + +Follow the tutorial to generate the [Personal +Token](https://confluence.atlassian.com/enterprise/using-personal-access-tokens-1026032365.html). + +- You have to set only the personal token in the `access-token` + parameter. + +# JQL: + +The JQL URI option is used by both consumer endpoints. Theoretically, +items like the "project key", etc. could be URI options themselves. +However, by requiring the use of JQL, the consumers become much more +flexible and powerful. + +At the bare minimum, the consumers will require the following: + + jira://[type]?[required options]&jql=project=[project key] + +One important thing to note is that the newIssues consumer will +automatically set the JQL as: + +- append `ORDER BY key desc` to your JQL + +- prepend `id > latestIssueId` to retrieve issues added after the + camel route was started. + +This is in order to optimize startup processing, rather than having to +index every single issue in the project. + +Another note is that, similarly, the newComments consumer will have to +index every single issue **and** comment on the project. Therefore, for +large projects, it’s **vital** to optimize the JQL expression as much as +possible. For example, the JIRA Toolkit Plugin includes a "Number of +comments" custom field — use *"Number of comments" \> 0* in your +query. Also try to minimize based on state (status=Open), increase the +polling delay, etc. Example: + + jira://[type]?[required options]&jql=RAW(project=[project key] AND status in (Open, \"Coding In Progress\") AND \"Number of comments\">0)" + +# Operations + +See a list of required headers to set when using the Jira operations. +The author field for the producers is automatically set to the +authenticated user on the Jira side. 
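The JQL rewriting applied by the newIssues consumer (described in the JQL section above: prepend an `id` filter, append `ORDER BY key desc`) can be sketched in plain Java. The class name, method name, and exact query formatting are illustrative, not camel-jira internals:

```java
// Illustrative sketch of the effective query built by the newIssues
// consumer: it prepends an id filter and appends a descending key order.
public class NewIssuesJqlSketch {

    public static String effectiveJql(String userJql, long latestIssueId) {
        // only issues created after the latest one seen at route start
        return "id > " + latestIssueId + " AND " + userJql + " ORDER BY key desc";
    }
}
```

For example, `effectiveJql("project=CAMEL", 100)` yields `id > 100 AND project=CAMEL ORDER BY key desc`.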
+
+If any required field is not set, then an `IllegalArgumentException` is
+thrown.
+
+Some operations require an `id` for fields such as the issue type,
+priority, or transition. Check the valid `id` values on your Jira
+project, as they may differ between Jira installations and project
+workflows.
+
+# AddIssue
+
+Required:
+
+- `ProjectKey`: The project key, example: CAMEL, HHH, MYP.
+
+- `IssueTypeId` or `IssueTypeName`: The `id` of the issue type or the
+  name of the issue type; you can see the valid list in
+  `\http://jira_server/rest/api/2/issue/createmeta?projectKeys=SAMPLE_KEY`.
+
+- `IssueSummary`: The summary of the issue.
+
+Optional:
+
+- `IssueAssignee`: the assignee user
+
+- `IssueAssigneeId`: the assignee user id
+
+- `IssuePriorityId` or `IssuePriorityName`: The priority of the issue;
+  you can see the valid list in
+  `\http://jira_server/rest/api/2/priority`.
+
+- `IssueComponents`: A list of strings with the valid component names.
+
+- `IssueWatchersAdd`: A list of strings with the usernames (or ids) to
+  add to the watcher list.
+
+- `IssueDescription`: The description of the issue.
+
+# AddComment
+
+Required:
+
+- `IssueKey`: The issue key identifier.
+
+- The body of the exchange is the comment text.
+
+# Attach
+
+Only one file can be attached per invocation.
+
+Required:
+
+- `IssueKey`: The issue key identifier.
+
+- The body of the exchange should be of type `File`.
+
+# DeleteIssue
+
+Required:
+
+- `IssueKey`: The issue key identifier.
+
+# TransitionIssue
+
+Required:
+
+- `IssueKey`: The issue key identifier.
+
+- `IssueTransitionId`: The issue transition `id`.
+
+- The body of the exchange is the description.
+
+# UpdateIssue
+
+- `IssueKey`: The issue key identifier.
+
+- `IssueTypeId` or `IssueTypeName`: The `id` of the issue type or the
+  name of the issue type; you can see the valid list in
+  `\http://jira_server/rest/api/2/issue/createmeta?projectKeys=SAMPLE_KEY`.
+
+- `IssueSummary`: The summary of the issue.
+ +- `IssueAssignee`: the assignee user + +- `IssueAssigneeId`: the assignee user id + +- `IssuePriorityId` or `IssuePriorityName`: The priority of the issue, + you can see the valid list in + `\http://jira_server/rest/api/2/priority`. + +- `IssueComponents`: A list of string with the valid component names. + +- `IssueDescription`: The description of the issue. + +# Watcher + +- `IssueKey`: The issue key identifier. + +- `IssueWatchersAdd`: A list of strings with the usernames (or id) to + add to the watcher list. + +- `IssueWatchersRemove`: A list of strings with the usernames to + remove from the watcher list. + +# WatchUpdates (consumer) + +- `watchedFields` Comma separated list of fields to watch for changes + i.e. `Status,Priority,Assignee,Components` etc. + +- `sendOnlyUpdatedField` By default, only the changed field is sent as + the body. + +All messages also contain the following headers that add additional info +about the change: + +- `issueKey`: Key of the updated issue + +- `changed`: name of the updated field (i.e., Status) + +- `watchedIssues`: list of all issue keys that are watched in the time + of update + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|delay|Time in milliseconds to elapse for the next poll.|6000|integer| +|jiraUrl|The Jira server url, example: http://my\_jira.com:8081||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. 
In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|configuration|To use a shared base jira configuration.||object| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. 
You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean|
+|accessToken|(OAuth or Personal Access Token authentication) The access token generated by the Jira server.||string|
+|consumerKey|(OAuth only) The consumer key from Jira settings.||string|
+|password|(Basic authentication only) The password or the API Token to authenticate to the Jira server. Use only if username basic authentication is used.||string|
+|privateKey|(OAuth only) The private key generated by the client to encrypt the conversation to the server.||string|
+|username|(Basic authentication only) The username to authenticate to the Jira server. Use only if OAuth is not enabled on the Jira server. Do not set both the username and the OAuth token parameter; if both are set, the username basic authentication takes precedence.||string|
+|verificationCode|(OAuth only) The verification code from Jira generated in the first step of the authorization process.||string|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|type|Operation to perform. Consumers: NewIssues, NewComments. Producers: AddIssue, AttachFile, DeleteIssue, TransitionIssue, UpdateIssue, Watchers. See this class javadoc description for more information.||object|
+|delay|Time in milliseconds to elapse for the next poll.|6000|integer|
+|jiraUrl|The Jira server URL, example: http://my\_jira.com:8081||string|
+|jql|JQL is the query language from JIRA which allows you to retrieve the data you want. For example, jql=project=MyProject, where MyProject is the project key in Jira. It is important to use RAW() and set the JQL inside it to prevent Camel from parsing it, example: RAW(project in (MYP, COM) AND resolution = Unresolved)||string|
+|maxResults|Max number of issues to search for|50|integer|
+|sendOnlyUpdatedField|Indicator for sending only changed fields in exchange body or issue object.
By default consumer sends only changed fields.|true|boolean| +|watchedFields|Comma separated list of fields to watch for changes. Status,Priority are the defaults.|Status,Priority|string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|accessToken|(OAuth or Personal Access Token authentication) The access token generated by the Jira server.||string|
+|consumerKey|(OAuth only) The consumer key from Jira settings.||string|
+|password|(Basic authentication only) The password or the API Token to authenticate to the Jira server. Use only if username basic authentication is used.||string|
+|privateKey|(OAuth only) The private key generated by the client to encrypt the conversation to the server.||string|
+|username|(Basic authentication only) The username to authenticate to the Jira server. Use only if OAuth is not enabled on the Jira server. Do not set both the username and the OAuth token parameter; if both are set, the username basic authentication takes precedence.||string|
+|verificationCode|(OAuth only) The verification code from Jira generated in the first step of the authorization process.||string|
diff --git a/camel-jms.md b/camel-jms.md
new file mode 100644
index 0000000000000000000000000000000000000000..b3a755ffef2397f9be871eca8cb28f02ca853955
--- /dev/null
+++ b/camel-jms.md
@@ -0,0 +1,1465 @@
+# Jms
+
+**Since Camel 1.0**
+
+**Both producer and consumer are supported**
+
+This component allows messages to be sent to (or consumed from) a
+[JMS](http://java.sun.com/products/jms/) Queue or Topic. It uses
+Spring's JMS support for declarative transactions, including Spring's
+`JmsTemplate` for sending and a `MessageListenerContainer` for
+consuming.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-jms</artifactId>
+        <!-- use the same version as your Camel core version -->
+        <version>x.x.x</version>
+    </dependency>
+
+**Using ActiveMQ**
+
+If you are using [Apache ActiveMQ](http://activemq.apache.org/), you
+should prefer the ActiveMQ component as it has been optimized for
+ActiveMQ.
All the options and samples on this page are also valid for +the ActiveMQ component. + +**Transacted and caching** + +See section *Transactions and Cache Levels* below if you are using +transactions with [JMS](#jms-component.adoc) as it can impact +performance. + +**Request/Reply over JMS** + +Make sure to read the section *Request-reply over JMS* further below on +this page for important notes about request/reply, as Camel offers a +number of options to configure for performance, and clustered +environments. + +# URI format + + jms:[queue:|topic:]destinationName[?options] + +Where `destinationName` is a JMS queue or topic name. By default, the +`destinationName` is interpreted as a queue name. For example, to +connect to the queue, `FOO.BAR` use: + + jms:FOO.BAR + +You can include the optional `queue:` prefix, if you prefer: + + jms:queue:FOO.BAR + +To connect to a topic, you *must* include the `topic:` prefix. For +example, to +connect to the topic, `Stocks.Prices`, use: + + jms:topic:Stocks.Prices + +You append query options to the URI by using the following format, +`?option=value&option=value&...` + +# Notes + +## Using ActiveMQ + +The JMS component reuses Spring 2’s `JmsTemplate` for sending messages. +This is not ideal for use in a non-J2EE container and typically requires +some caching in the JMS provider to avoid [poor +performance](http://activemq.apache.org/jmstemplate-gotchas.html). + +If you intend to use [Apache ActiveMQ](http://activemq.apache.org/) as +your message broker, the recommendation is that you do one of the +following: + +- Use the ActiveMQ component, which is already optimized to use + ActiveMQ efficiently + +- Use the `PoolingConnectionFactory` in ActiveMQ. + +## Transactions and Cache Levels + +If you are consuming messages and using transactions (`transacted=true`) +then the default settings for cache level can impact performance. 
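For example, a transacted consumer endpoint that explicitly opts into consumer caching (the queue name is illustrative; per the advice in this section, do this only for local, non-XA transactions):

```
jms:queue:orders?transacted=true&cacheLevelName=CACHE_CONSUMER
```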
+ +If you are using XA transactions, then you cannot cache as it can cause +the XA transaction to not work properly. + +If you are **not** using XA, then you should consider caching as it +speeds up performance, such as setting `cacheLevelName=CACHE_CONSUMER`. + +The default setting for `cacheLevelName` is `CACHE_AUTO`. This default +auto-detects the mode and sets the cache level accordingly to: + +- `CACHE_CONSUMER` if `transacted=false` + +- `CACHE_NONE` if `transacted=true` + +So you can say the default setting is conservative. Consider using +`cacheLevelName=CACHE_CONSUMER` if you are using non-XA transactions. + +## Durable Subscriptions + +### Durable Subscriptions with JMS 2.0 + +If you wish to use durable topic subscriptions, you need to specify the +`durableSubscriptionName`. + +### Durable Subscriptions with JMS 1.1 + +If you wish to use durable topic subscriptions, you need to specify both +`clientId` and `durableSubscriptionName`. The value of the `clientId` +must be unique and can only be used by a single JMS connection instance +in your entire network. + +If you are using the [Apache ActiveMQ +Classic](https://activemq.apache.org/components/classic/) or [Apache +ActiveMQ Artemis](https://activemq.apache.org/components/artemis/), you +may prefer to use a feature called Virtual Topic. This should remove the +necessity of having a unique `clientId`. + +You can consult the specific documentation for +[Artemis](https://activemq.apache.org/components/artemis/migration-documentation/VirtualTopics.html) +or for [ActiveMQ +Classic](https://activemq.apache.org/virtual-destinations.html) for +details about how to leverage this feature. + +You can find more details about durable messaging for ActiveMQ Classic +[here](http://activemq.apache.org/how-do-durable-queues-and-topics-work.html). + +## Message Header Mapping + +When using message headers, the JMS specification states that header +names must be valid Java identifiers. 
So try to name your headers to be +valid Java identifiers. One benefit of doing this is that you can then +use your headers inside a JMS Selector (whose SQL92 syntax mandates Java +identifier syntax for headers). + +A simple strategy for mapping header names is used by default. The +strategy is to replace any dots and hyphens in the header name as shown +below and to reverse the replacement when the header name is restored +from a JMS message sent over the wire. What does this mean? No more +losing method names to invoke on a bean component, no more losing the +filename header for the File Component, and so on. + +The current header name strategy for accepting header names in Camel is +as follows: + +- Dots are replaced by `\_DOT_` and the replacement is reversed when + Camel consume the message + +- Hyphen is replaced by `\_HYPHEN_` and the replacement is reversed + when Camel consumes the message + +You can configure many different properties on the JMS endpoint, which +map to properties on the `JMSConfiguration` object. + +**Mapping to Spring JMS** + +Many of these properties map to properties on Spring JMS, which Camel +uses for sending and receiving messages. So you can get more information +about these properties by consulting the relevant Spring documentation. + +# Samples + +JMS is used in many examples for other components as well. But we +provide a few samples below to get started. + +## Receiving from JMS + +In the following sample, we configure a route that receives JMS messages +and routes the message to a POJO: + + from("jms:queue:foo"). + to("bean:myBusinessLogic"); + +You can use any of the EIP patterns so the route can be context based. +For example, here’s how to filter an order topic for the big spenders: + + from("jms:topic:OrdersTopic"). + filter().method("myBean", "isGoldCustomer"). + to("jms:queue:BigSpendersQueue"); + +## Sending to JMS + +In the sample below, we poll a file folder and send the file content to +a JMS topic. 
As we want the content of the file as a `TextMessage` instead of a `BytesMessage`, we need to convert the body to a `String`:

    from("file://orders").
    convertBodyTo(String.class).
    to("jms:topic:OrdersTopic");

## Using Annotations

Camel also has annotations, so you can use [POJO Consuming](#manual::pojo-consuming.adoc) and [POJO Producing](#manual::pojo-producing.adoc).

## Spring DSL sample

The preceding examples use the Java DSL. Camel also supports Spring XML DSL. Here is the big spender sample using Spring DSL:

    <route>
      <from uri="jms:topic:OrdersTopic"/>
      <filter>
        <method ref="myBean" method="isGoldCustomer"/>
        <to uri="jms:queue:BigSpendersQueue"/>
      </filter>
    </route>

## Other samples

JMS appears in many of the examples for other components and EIP patterns, as well as elsewhere in this Camel documentation. So feel free to browse the documentation.

## Using JMS as a Dead Letter Queue storing Exchange

Normally, when using [JMS](#jms-component.adoc) as the transport, it only transfers the body and headers as the payload. If you want to use [JMS](#jms-component.adoc) with a [Dead Letter Channel](#eips:dead-letter-channel.adoc), using a JMS queue as the Dead Letter Queue, then normally the caused Exception is not stored in the JMS message. You can, however, use the `transferExchange` option on the JMS dead letter queue to instruct Camel to store the entire Exchange in the queue as a `javax.jms.ObjectMessage` that holds a `org.apache.camel.support.DefaultExchangeHolder`. This allows you to consume from the Dead Letter Queue and retrieve the caused exception from the Exchange property with the key `Exchange.EXCEPTION_CAUGHT`.
The demo below illustrates this:

    // setup error handler to use JMS as queue and store the entire Exchange
    errorHandler(deadLetterChannel("jms:queue:dead?transferExchange=true"));

Then you can consume from the JMS queue and analyze the problem:

    from("jms:queue:dead").to("bean:myErrorAnalyzer");

    // and in our bean
    String body = exchange.getIn().getBody(String.class);
    Exception cause = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class);
    // the cause message is
    String problem = cause.getMessage();

## Using JMS as a Dead Letter Channel storing error only

You can use JMS to store the cause error message or to store a custom body, which you can initialize yourself. The following example uses the Message Translator EIP to do a transformation on the failed exchange before it is moved to the [JMS](#jms-component.adoc) dead letter queue:

    // we send it to a seda dead queue first
    errorHandler(deadLetterChannel("seda:dead"));

    // and on the seda dead queue we can do the custom transformation before it is sent to the JMS queue
    from("seda:dead").transform(exceptionMessage()).to("jms:queue:dead");

Here we only store the original cause error message in the transform. You can, however, use any Expression to send whatever you like. For example, you can invoke a method on a Bean or use a custom processor.

# Message Mapping between JMS and Camel

Camel automatically maps messages between `javax.jms.Message` and `org.apache.camel.Message`.

When sending a JMS message, Camel converts the message body to the following JMS message types:

| Body Type | JMS Message | Comment |
|---|---|---|
| `String` | `javax.jms.TextMessage` | |
| `org.w3c.dom.Node` | `javax.jms.TextMessage` | The DOM will be converted to String. |
| `Map` | `javax.jms.MapMessage` | |
| `java.io.Serializable` | `javax.jms.ObjectMessage` | |
| `byte[]` | `javax.jms.BytesMessage` | |
| `java.io.File` | `javax.jms.BytesMessage` | |
| `java.io.Reader` | `javax.jms.BytesMessage` | |
| `java.io.InputStream` | `javax.jms.BytesMessage` | |
| `java.nio.ByteBuffer` | `javax.jms.BytesMessage` | |

When receiving a JMS message, Camel converts the JMS message to the following body type:
| JMS Message | Body Type |
|---|---|
| `javax.jms.TextMessage` | `String` |
| `javax.jms.BytesMessage` | `byte[]` |
| `javax.jms.MapMessage` | `Map<String, Object>` |
| `javax.jms.ObjectMessage` | `Object` |

## Disabling auto-mapping of JMS messages

You can use the `mapJmsMessage` option to disable the auto-mapping above. If disabled, Camel does not try to map the received JMS message, but instead uses it directly as the payload. This allows you to avoid the overhead of mapping and lets Camel just pass the JMS message through. For instance, it even allows you to route `javax.jms.ObjectMessage` JMS messages with classes you do **not** have on the classpath.

## Using a custom MessageConverter

You can use the `messageConverter` option to do the mapping yourself in a Spring `org.springframework.jms.support.converter.MessageConverter` class.

For example, in the route below, we use a custom message converter when sending a message to the JMS order queue:

    from("file://inbox/order").to("jms:queue:order?messageConverter=#myMessageConverter");

You can also use a custom message converter when consuming from a JMS destination.

## Controlling the mapping strategy selected

You can use the `jmsMessageType` option on the endpoint URL to force a specific message type for all messages.

In the route below, we poll files from a folder and send them as `javax.jms.TextMessage`, as we have forced the JMS producer endpoint to use text messages:

    from("file://inbox/order").to("jms:queue:order?jmsMessageType=Text");

You can also specify the message type to use for each message by setting the header with the key `CamelJmsMessageType`. For example:

    from("file://inbox/order").setHeader("CamelJmsMessageType", JmsMessageType.Text).to("jms:queue:order");

The possible values are defined in the `enum` class, `org.apache.camel.jms.JmsMessageType`.

# Message format when sending

The exchange sent over the JMS wire must conform to the [JMS Message spec](http://java.sun.com/j2ee/1.4/docs/api/javax/jms/Message.html).
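The dot and hyphen escaping used for header keys (described in the header-mapping rules) is a simple, reversible string substitution. A minimal plain-Java sketch of the idea, not the actual Camel `JmsKeyFormatStrategy` implementation:

```java
// Sketch of the default key-format idea: escape dots/hyphens for the wire, restore on consumption.
class HeaderKeySketch {

    // applied before sending, so the key is a valid Java identifier on the wire
    static String encode(String key) {
        return key.replace(".", "_DOT_").replace("-", "_HYPHEN_");
    }

    // applied when Camel consumes the message, restoring the original key
    static String decode(String key) {
        return key.replace("_DOT_", ".").replace("_HYPHEN_", "-");
    }

    public static void main(String[] args) {
        String original = "CamelFileName-backup.txt";
        String onWire = encode(original);
        System.out.println(onWire);                           // CamelFileName_HYPHEN_backup_DOT_txt
        System.out.println(decode(onWire).equals(original));  // true
    }
}
```

The round trip is lossless as long as header names do not themselves contain the literal escape tokens.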
For the `exchange.in.header`, the following rules apply for the header **keys**:

- Keys starting with `JMS` or `JMSX` are reserved.

- `exchange.in.headers` keys must be literals and all be valid Java identifiers (do not use dots in the key name).

- Camel replaces dots and hyphens, and reverses the replacement when consuming JMS messages:
  `.` is replaced by `_DOT_` and the replacement is reversed when Camel consumes the message.
  `-` is replaced by `_HYPHEN_` and the replacement is reversed when Camel consumes the message.

- See also the option `jmsKeyFormatStrategy`, which allows use of your own custom strategy for formatting keys.

For the `exchange.in.header`, the following rules apply for the header **values**:

- The values must be primitives or their counter-objects (such as `Integer`, `Long`, `Character`). The types `String`, `CharSequence`, `Date`, `BigDecimal` and `BigInteger` are all converted to their `toString()` representation. All other types are dropped.

Camel will log with category `org.apache.camel.component.jms.JmsBinding` at **DEBUG** level if it drops a given header value. For example:

    2008-07-09 06:43:04,046 [main ] DEBUG JmsBinding
    - Ignoring non primitive header: order of class: org.apache.camel.component.jms.issues.DummyOrder with value: DummyOrder{orderId=333, itemId=4444, quantity=2}

# Message format when receiving

Camel adds the following properties to the `Exchange` when it receives a message:

| Property | Type | Description |
|---|---|---|
| `org.apache.camel.jms.replyDestination` | `javax.jms.Destination` | The reply destination. |
Camel adds the following JMS properties to the In message headers when it receives a JMS message:

| Header | Type | Description |
|---|---|---|
| `JMSCorrelationID` | `String` | The JMS correlation ID. |
| `JMSDeliveryMode` | `int` | The JMS delivery mode. |
| `JMSDestination` | `javax.jms.Destination` | The JMS destination. |
| `JMSExpiration` | `long` | The JMS expiration. |
| `JMSMessageID` | `String` | The JMS unique message ID. |
| `JMSPriority` | `int` | The JMS priority (with 0 as the lowest priority and 9 as the highest). |
| `JMSRedelivered` | `boolean` | Whether the JMS message is redelivered. |
| `JMSReplyTo` | `javax.jms.Destination` | The JMS reply-to destination. |
| `JMSTimestamp` | `long` | The JMS timestamp. |
| `JMSType` | `String` | The JMS type. |
| `JMSXGroupID` | `String` | The JMS group ID. |
As all the above information is standard JMS, you can check the [JMS documentation](http://java.sun.com/javaee/5/docs/api/javax/jms/Message.html) for further details.

# About using Camel to send and receive messages and JMSReplyTo

The JMS component is complex, and you have to pay close attention to how it works in some cases. So this is a short summary of some areas/pitfalls to look for.

When Camel sends a message using its `JMSProducer`, it checks the following conditions:

- The message exchange pattern.

- Whether a `JMSReplyTo` was set in the endpoint or in the message headers.

- Whether any of the following options have been set on the JMS endpoint: `disableReplyTo`, `preserveMessageQos`, `explicitQosEnabled`.

All this can be a tad complex to understand and configure to support your use case.

## JmsProducer

The `JmsProducer` behaves as follows, depending on configuration:
| Exchange Pattern | Other options | Description |
|---|---|---|
| `InOut` | - | Camel will expect a reply, set a temporary `JMSReplyTo`, and after sending the message, it will start to listen for the reply message on the temporary queue. |
| `InOut` | `JMSReplyTo` is set | Camel will expect a reply and, after sending the message, it will start to listen for the reply message on the specified `JMSReplyTo` queue. |
| `InOnly` | - | Camel will send the message and not expect a reply. |
| `InOnly` | `JMSReplyTo` is set | By default, Camel discards the `JMSReplyTo` destination and clears the `JMSReplyTo` header before sending the message. Camel then sends the message and does not expect a reply. Camel logs this at `WARN` level (changed to `DEBUG` level from Camel 2.6 onwards). You can use `preserveMessageQos=true` to instruct Camel to keep the `JMSReplyTo`. In all situations the `JmsProducer` does not expect any reply and thus continues after sending the message. |
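The producer rules above can be condensed into a small decision function. This is an illustrative simplification only (it ignores options such as `preserveMessageQos` and `explicitQosEnabled`), not Camel code:

```java
// Illustrative sketch of the JmsProducer decision table; not the Camel API.
class JmsProducerBehavior {

    // mep is "InOut" or "InOnly"; jmsReplyToSet says whether a JMSReplyTo was provided
    static String describe(String mep, boolean jmsReplyToSet) {
        if ("InOut".equals(mep)) {
            return jmsReplyToSet
                ? "expect reply; listen on the specified JMSReplyTo queue"
                : "expect reply; listen on a temporary JMSReplyTo queue";
        }
        // InOnly: by default the JMSReplyTo header is cleared before sending
        return jmsReplyToSet
            ? "clear JMSReplyTo; send and do not expect a reply"
            : "send and do not expect a reply";
    }

    public static void main(String[] args) {
        System.out.println(describe("InOut", false));
        System.out.println(describe("InOnly", true));
    }
}
```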
## JmsConsumer

The `JmsConsumer` behaves as follows, depending on configuration:

| Exchange Pattern | Other options | Description |
|---|---|---|
| `InOut` | - | Camel will send the reply back to the `JMSReplyTo` queue. |
| `InOnly` | - | Camel will not send a reply back, as the pattern is `InOnly`. |
| - | `disableReplyTo=true` | This option suppresses replies. |
So pay attention to the message exchange pattern set on your exchanges.

If you send a message to a JMS destination in the middle of your route, you can specify the exchange pattern to use; see more at the Request Reply EIP. This is useful if you want to send an `InOnly` message to a JMS topic:

    from("activemq:queue:in")
    .to("bean:validateOrder")
    .to(ExchangePattern.InOnly, "activemq:topic:order")
    .to("bean:handleOrder");

# Reuse endpoint and send to different destinations computed at runtime

If you need to send messages to many different JMS destinations, it makes sense to reuse a JMS endpoint and specify the real destination in a message header. This allows Camel to reuse the same endpoint but send to different destinations. This greatly reduces the number of endpoints created and economizes on memory and thread resources.

You can specify the destination in the following headers:

| Header | Type | Description |
|---|---|---|
| `CamelJmsDestination` | `javax.jms.Destination` | A destination object. |
| `CamelJmsDestinationName` | `String` | The destination name. |
For example, the following route shows how you can compute a destination at run time and use it to override the destination appearing in the JMS URL:

    from("file://inbox")
    .to("bean:computeDestination")
    .to("activemq:queue:dummy");

The queue name, `dummy`, is just a placeholder. It must be provided as part of the JMS endpoint URL, but it will be ignored in this example.

In the `computeDestination` bean, specify the real destination by setting the `CamelJmsDestinationName` header as follows:

    public void setJmsHeader(Exchange exchange) {
        String id = ....
        exchange.getIn().setHeader("CamelJmsDestinationName", "order:" + id);
    }

Then Camel will read this header and use it as the destination instead of the one configured on the endpoint. So, in this example Camel sends the message to `activemq:queue:order:2`, assuming the `id` value was 2.

If both the `CamelJmsDestination` and the `CamelJmsDestinationName` headers are set, `CamelJmsDestination` takes priority. Keep in mind that the JMS producer removes both `CamelJmsDestination` and `CamelJmsDestinationName` headers from the exchange and does not propagate them to the created JMS message, to avoid accidental loops in the routes (in scenarios where the message is forwarded to another JMS endpoint).

# Configuring different JMS providers

You can configure your JMS provider in Spring XML, for example:

    <bean id="activemq" class="org.apache.camel.component.activemq.ActiveMQComponent">
        <property name="connectionFactory">
            <bean class="org.apache.activemq.ActiveMQConnectionFactory">
                <property name="brokerURL" value="tcp://localhost:61616"/>
            </bean>
        </property>
    </bean>

You can configure as many JMS component instances as you wish and give them **a unique name using the** `id` **attribute**. The preceding example configures an `activemq` component. You could do the same to configure MQSeries, TibCo, BEA, Sonic and so on.

Once you have a named JMS component, you can then refer to endpoints within that component using URIs. For example, for the component name, `activemq`, you can then refer to destinations using the URI format, `activemq:[queue:|topic:]destinationName`. You can use the same approach for all other JMS providers.
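For illustration, the `activemq:[queue:|topic:]destinationName` naming convention can be modeled with a tiny parser. This is a sketch only, not Camel's actual URI parsing, and it ignores any `?options` suffix:

```java
// Illustrative parser for the [queue:|topic:]destinationName convention described above.
class DestinationUri {

    // Splits "activemq:topic:foo" into {component, type, name}; the type defaults to queue.
    static String[] parse(String uri) {
        String[] parts = uri.split(":", 3);
        if (parts.length == 3 && (parts[1].equals("queue") || parts[1].equals("topic"))) {
            return parts;
        }
        // no explicit prefix: everything after the scheme is the destination name, queue by default
        return new String[] { parts[0], "queue", uri.substring(parts[0].length() + 1) };
    }

    public static void main(String[] args) {
        System.out.println(String.join("/", parse("activemq:topic:OrdersTopic"))); // activemq/topic/OrdersTopic
        System.out.println(String.join("/", parse("activemq:foo")));               // activemq/queue/foo
    }
}
```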
This works by the SpringCamelContext lazily fetching components from the Spring context for the scheme name you use for Endpoint URIs and having the Component resolve the endpoint URIs.

## Using JNDI to find the ConnectionFactory

If you are using a J2EE container, you might need to look up JNDI to find the JMS `ConnectionFactory` rather than use the usual `<bean>` mechanism in Spring. You can do this using Spring’s factory bean or the new Spring XML namespace. For example:

    <bean id="activemq" class="org.apache.camel.component.jms.JmsComponent">
        <property name="connectionFactory" ref="myConnectionFactory"/>
    </bean>

    <jee:jndi-lookup id="myConnectionFactory" jndi-name="jms/connectionFactory"/>

See [The jee schema](http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/xsd-config.html#xsd-config-body-schemas-jee) in the Spring reference documentation for more details about JNDI lookup.

# Concurrent Consuming

A common requirement with JMS is to consume messages concurrently in multiple threads to make an application more responsive. You can set the `concurrentConsumers` option to specify the number of threads servicing the JMS endpoint, as follows:

    from("jms:SomeQueue?concurrentConsumers=20").
    bean(MyClass.class);

You can configure this option in one of the following ways:

- On the `JmsComponent`,

- On the endpoint URI, or

- By invoking `setConcurrentConsumers()` directly on the `JmsEndpoint`.

## Concurrent Consuming with async consumer

Notice that each concurrent consumer will only pick up the next available message from the JMS broker when the current message has been fully processed. You can set the option `asyncConsumer=true` to let the consumer pick up the next message from the JMS queue while the previous message is being processed asynchronously (by the Asynchronous Routing Engine). See more details in the table on top of the page about the `asyncConsumer` option.

    from("jms:SomeQueue?concurrentConsumers=20&asyncConsumer=true").
    bean(MyClass.class);

# Request-reply over JMS

Camel supports Request Reply over JMS.
In essence, the MEP of the Exchange should be `InOut` when you send a message to a JMS queue.

Camel offers a number of options to configure request/reply over JMS that influence performance and clustered environments. The table below summarizes the options.

| Option | Performance | Cluster | Description |
|---|---|---|---|
| `Temporary` | Fast | Yes | A temporary queue is used as the reply queue and is created automatically by Camel. To use this, do not specify a `replyTo` queue name. You can optionally configure `replyToType=Temporary` to make it explicit that temporary queues are in use. |
| `Shared` | Slow | Yes | A shared persistent queue is used as the reply queue. The queue must be created beforehand, although some brokers, such as Apache ActiveMQ, can create it on the fly. To use this, you must specify the `replyTo` queue name, and you can optionally configure `replyToType=Shared` to make it explicit that shared queues are in use. A shared queue can be used in a clustered environment with multiple nodes running this Camel application at the same time, all using the same shared reply queue. This is possible because JMS message selectors are used to correlate expected reply messages. This impacts performance, though: JMS message selectors are slower, and therefore not as fast as `Temporary` or `Exclusive` queues. See further below how to tweak this for better performance. |
| `Exclusive` | Fast | No (*Yes) | An exclusive persistent queue is used as the reply queue. The queue must be created beforehand, although some brokers, such as Apache ActiveMQ, can create it on the fly. To use this, you must specify the `replyTo` queue name, and you must configure `replyToType=Exclusive` to instruct Camel to use exclusive queues, as `Shared` is the default when a `replyTo` queue name is configured. When using exclusive reply queues, JMS message selectors are not in use, and therefore other applications must not use this queue. An exclusive queue cannot be used in a clustered environment with multiple nodes running this Camel application at the same time, as there is no control over whether the reply comes back to the same node that sent the request; that is why shared queues use JMS message selectors to ensure this. However, if you configure each `Exclusive` reply queue with a unique name per node, then you can run this in a clustered environment, as the reply message is then sent back to the queue for the node awaiting it. |
| `concurrentConsumers` | Fast | Yes | Allows processing reply messages concurrently using concurrent message listeners. You can specify a range using the `concurrentConsumers` and `maxConcurrentConsumers` options. Notice: `Shared` reply queues may not work as well with concurrent listeners, so use this option with care. |
| `maxConcurrentConsumers` | Fast | Yes | Allows processing reply messages concurrently using concurrent message listeners. You can specify a range using the `concurrentConsumers` and `maxConcurrentConsumers` options. Notice: `Shared` reply queues may not work as well with concurrent listeners, so use this option with care. |
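The choice between these strategies boils down to a simple rule. The sketch below is for illustration only, assuming only the `replyTo` and `replyToType` options are considered:

```java
// Illustrative selection rule for the request/reply strategies described above.
class ReplyQueueStrategy {

    static String select(String replyTo, String replyToType) {
        if (replyTo == null) {
            return "Temporary"; // no fixed reply queue name configured
        }
        if ("Exclusive".equals(replyToType)) {
            return "Exclusive"; // must be requested explicitly
        }
        return "Shared"; // the default when a replyTo queue name is configured
    }

    public static void main(String[] args) {
        System.out.println(select(null, null));        // Temporary
        System.out.println(select("bar", null));       // Shared
        System.out.println(select("bar", "Exclusive")); // Exclusive
    }
}
```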
The `JmsProducer` detects the `InOut` MEP and provides a `JMSReplyTo` header with the reply destination to be used. By default, Camel uses a temporary queue, but you can use the `replyTo` option on the endpoint to specify a fixed reply queue (see more below about fixed reply queues).

Camel will automatically set up a consumer that listens on the reply queue, so you do not need to do anything. This consumer is a Spring `DefaultMessageListenerContainer`, which listens for replies. However, it is fixed to one concurrent consumer. That means replies will be processed in sequence, as there is only one thread to process them. You can configure the listener to use concurrent threads using the `concurrentConsumers` and `maxConcurrentConsumers` options, as shown below:

    from(xxx)
    .inOut().to("activemq:queue:foo?concurrentConsumers=5")
    .to(yyy)
    .to(zzz);

In this route, we instruct Camel to route replies asynchronously using a thread pool with five threads.

## Request-reply over JMS and using a shared fixed reply queue

If you use a fixed reply queue when doing Request Reply over JMS as shown in the example below, then pay attention.

    from(xxx)
    .inOut().to("activemq:queue:foo?replyTo=bar")
    .to(yyy)

In this example, the fixed reply queue named "bar" is used. By default, Camel assumes the queue is shared when using fixed reply queues, and therefore it uses a `JMSSelector` to pick up only the expected reply messages (e.g., based on the `JMSCorrelationID`). See the next section for exclusive fixed reply queues. This means it is not as fast as temporary queues. You can control how often Camel polls for reply messages using the `receiveTimeout` option. By default, it is 1000 milliseconds.
So to make it faster, you can set it to 250 milliseconds to poll four times per second, as shown:

    from(xxx)
    .inOut().to("activemq:queue:foo?replyTo=bar&receiveTimeout=250")
    .to(yyy)

Notice this will cause Camel to send pull requests to the message broker more frequently, and thus require more network traffic. It is generally recommended to use temporary queues if possible.

## Request-reply over JMS and using an exclusive fixed reply queue

In the previous example, Camel assumed the fixed reply queue named "bar" was shared, and thus used a `JMSSelector` to consume only the reply messages it expects. However, there is a drawback to doing this, as JMS selectors are slower. The consumer on the reply queue is also slower to update with new JMS selector IDs. In fact, it only updates when the `receiveTimeout` option times out, which by default is 1 second. So in theory, a reply message could take up to about 1 second to be detected. On the other hand, if the fixed reply queue is exclusive to the Camel reply consumer, then we can avoid using JMS selectors and thus be more performant; in fact, as fast as using temporary queues. You can configure the `replyToType` option to `Exclusive` to tell Camel that the reply queue is exclusive, as shown in the example below:

    from(xxx)
    .inOut().to("activemq:queue:foo?replyTo=bar&replyToType=Exclusive")
    .to(yyy)

Mind that the queue must be exclusive to each endpoint. So if you have two routes, each needs a unique reply queue, as shown in the next example:

    from(xxx)
    .inOut().to("activemq:queue:foo?replyTo=bar&replyToType=Exclusive")
    .to(yyy)

    from(aaa)
    .inOut().to("activemq:queue:order?replyTo=order.reply&replyToType=Exclusive")
    .to(bbb)

The same applies if you run in a clustered environment. Then each node in the cluster must use a unique reply queue name.
Otherwise, each node in the cluster may pick up messages intended as a reply on another node. For clustered environments, it is recommended to use shared reply queues instead.

# Synchronizing clocks between senders and receivers

When doing messaging between systems, it is desirable that the systems have synchronized clocks. For example, when sending a [JMS](#jms-component.adoc) message, you can set a time-to-live value on the message. The receiver can then inspect this value and determine whether the message has already expired, and thus drop it instead of consuming and processing it. However, this requires that both sender and receiver have synchronized clocks.

If you are using [ActiveMQ](http://activemq.apache.org/), then you can use the [timestamp plugin](http://activemq.apache.org/timestampplugin.html) to synchronize clocks.

# About time to live

Read the section above about synchronized clocks first.

When you do request/reply (InOut) over [JMS](#jms-component.adoc) with Camel, Camel uses a timeout on the sender side, which defaults to 20 seconds (the `requestTimeout` option). You can control this by setting a higher or lower value. However, the time-to-live value is still set on the [JMS](#jms-component.adoc) message being sent, which requires the clocks to be synchronized between the systems. If they are not, then you may want to disable the time-to-live value being set. This is possible using the `disableTimeToLive` option from **Camel 2.8** onwards. If you set this option to `disableTimeToLive=true`, then Camel does **not** set any time-to-live value when sending [JMS](#jms-component.adoc) messages. **But** the request timeout is still active. So, for example, if you do request/reply over [JMS](#jms-component.adoc) and have disabled time to live, Camel will still use a timeout of 20 seconds (the `requestTimeout` option). That option can also be configured.
So the two options `requestTimeout` and `disableTimeToLive` give you fine-grained control when doing request/reply.

You can provide a header in the message to override and use as the request timeout value instead of the endpoint-configured value. For example:

    from("direct:someWhere")
    .to("jms:queue:foo?replyTo=bar&requestTimeout=30s")
    .to("bean:processReply");

In the route above, we have an endpoint-configured `requestTimeout` of 30 seconds. So Camel will wait up to 30 seconds for the reply message to come back on the bar queue. If no reply message is received, an `org.apache.camel.ExchangeTimedOutException` is set on the Exchange, and Camel continues routing the message, which would then fail due to the exception, and Camel’s error handler reacts.

If you want to use a per-message timeout value, you can set the header with key `org.apache.camel.component.jms.JmsConstants#JMS_REQUEST_TIMEOUT`, which has the constant value `"CamelJmsRequestTimeout"`, with a timeout value as a long type.

For example, we can use a bean to compute the timeout value per individual message, such as calling the `"whatIsTheTimeout"` method on the service bean as shown below:

    from("direct:someWhere")
    .setHeader("CamelJmsRequestTimeout", method(ServiceBean.class, "whatIsTheTimeout"))
    .to("jms:queue:foo?replyTo=bar&requestTimeout=30s")
    .to("bean:processReply");

When you do fire and forget (InOnly) over [JMS](#jms-component.adoc) with Camel, Camel by default does **not** set any time-to-live value on the message. You can configure a value by using the `timeToLive` option. For example, to indicate 5 seconds, you set `timeToLive=5000`. The option `disableTimeToLive` can be used to force disabling the time to live, also for InOnly messaging. The `requestTimeout` option is not used for InOnly messaging.
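The precedence between the per-message header and the endpoint option can be sketched as a plain function. This is illustrative only; `CamelJmsRequestTimeout` is the header key described above:

```java
import java.util.Map;

// Sketch of the timeout precedence rule: a per-message CamelJmsRequestTimeout header
// (a long, in milliseconds) overrides the endpoint's configured requestTimeout.
class RequestTimeoutRule {

    static long effectiveTimeout(Map<String, Object> headers, long endpointTimeoutMillis) {
        Object h = headers.get("CamelJmsRequestTimeout");
        return (h instanceof Long) ? (Long) h : endpointTimeoutMillis;
    }

    public static void main(String[] args) {
        Map<String, Object> withHeader = Map.of("CamelJmsRequestTimeout", 5000L);
        System.out.println(effectiveTimeout(withHeader, 30000L)); // 5000
        System.out.println(effectiveTimeout(Map.of(), 30000L));   // 30000
    }
}
```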
# Enabling Transacted Consumption

A common requirement is to consume from a queue in a transaction and then process the message using the Camel route. To do this, just ensure that you set the following properties on the component/endpoint:

- `transacted` = true

- `transactionManager` = a *Transaction Manager*, typically the `JmsTransactionManager`

See the Transactional Client EIP pattern for further details.

**Transactions and Request Reply over JMS**

When using Request Reply over JMS, you cannot use a single transaction; JMS will not send any messages until a commit is performed, so the server side won’t receive anything at all until the transaction commits. Therefore, to use [Request Reply](#eips:requestReply-eip.adoc), you must commit a transaction after sending the request and then use a separate transaction for receiving the response.

To address this issue, the JMS component uses different properties to specify transaction use for one-way messaging and request/reply messaging:

The `transacted` property applies **only** to the InOnly message Exchange Pattern (MEP).

You can leverage the DMLC transacted session API using the following properties on the component/endpoint:

- `transacted` = true

- `lazyCreateTransactionManager` = false

The benefit of doing so is that the `cacheLevel` setting will be honored when using local transactions without a configured `TransactionManager`. When a `TransactionManager` is configured, no caching happens at the DMLC level, and it is necessary to rely on a pooled connection factory. For more details about this kind of setup, see [here](http://tmielke.blogspot.com/2012/03/camel-jms-with-transactions-lessons.html) and [here](http://forum.springsource.org/showthread.php?123631-JMS-DMLC-not-caching%20connection-when-using-TX-despite-cacheLevel-CACHE_CONSUMER&p=403530&posted=1#post403530).
# Using JMSReplyTo for late replies

When using Camel as a JMS listener, it sets an Exchange property with the value of the ReplyTo `javax.jms.Destination` object. You can obtain this `Destination` as follows:

    Destination replyDestination = exchange.getIn().getHeader(JmsConstants.JMS_REPLY_DESTINATION, Destination.class);

And then later use it to send a reply using regular JMS or Camel:

    // we need to pass in the JMS component, and in this sample we use ActiveMQ
    JmsEndpoint endpoint = JmsEndpoint.newInstance(replyDestination, activeMQComponent);
    // now we have the endpoint, we can use regular Camel API to send a message to it
    template.sendBody(endpoint, "Here is the late reply.");

A different solution to sending a reply is to provide the `replyDestination` object in the same Exchange property when sending. Camel will then pick up this property and use it for the real destination. The endpoint URI must include a dummy destination, however. For example:

    // we pretend to send it to some non-existing dummy queue
    template.send("activemq:queue:dummy", new Processor() {
        public void process(Exchange exchange) throws Exception {
            // and here we override the destination with the ReplyTo destination object,
            // so the message is sent there instead of the dummy queue
            exchange.getIn().setHeader(JmsConstants.JMS_DESTINATION, replyDestination);
            exchange.getIn().setBody("Here is the late reply.");
        }
    });

# Using a request timeout

In the sample below, we send a Request Reply style message Exchange (we use the `requestBody` method = `InOut`) to the slow queue for further processing in Camel, and we wait for a return reply:

    // send a request to the slow queue and wait for the reply
    Object out = template.requestBody("activemq:queue:slow", "Hello World");

# Sending an InOnly message and keeping the JMSReplyTo header

When sending to a [JMS](#jms-component.adoc) destination using **camel-jms**, the producer uses the MEP to detect whether it is *InOnly* or *InOut* messaging.
However, there can be times when you want to send an *InOnly* message but keep the `JMSReplyTo` header. To do so, you have to instruct Camel to keep it; otherwise, the `JMSReplyTo` header will be dropped.

For example, to send an *InOnly* message to the foo queue, but with a `JMSReplyTo` header set to the bar queue, you can do as follows:

    template.send("activemq:queue:foo?preserveMessageQos=true", new Processor() {
        public void process(Exchange exchange) throws Exception {
            exchange.getIn().setBody("World");
            exchange.getIn().setHeader("JMSReplyTo", "bar");
        }
    });

Notice we use `preserveMessageQos=true` to instruct Camel to keep the `JMSReplyTo` header.

# Setting JMS provider options on the destination

Some JMS providers, like IBM’s WebSphere MQ, need options to be set on the JMS destination. For example, you may need to specify the `targetClient` option. Since `targetClient` is a WebSphere MQ option and not a Camel URI option, you need to set that on the JMS destination name like so:

    // ...
+ .setHeader("CamelJmsDestinationName", constant("queue:///MY_QUEUE?targetClient=1")) + .to("wmq:queue:MY_QUEUE?useMessageIDAsCorrelationID=true"); + +Some versions of WMQ won’t accept this option on the destination name, +and you will get an exception like: + + com.ibm.msg.client.jms.DetailedJMSException: JMSCC0005: The specified + value 'MY_QUEUE?targetClient=1' is not allowed for + 'XMSC_DESTINATION_NAME' + +A workaround is to use a custom DestinationResolver: + + JmsComponent wmq = new JmsComponent(connectionFactory); + + wmq.setDestinationResolver(new DestinationResolver() { + public Destination resolveDestinationName(Session session, String destinationName, boolean pubSubDomain) throws JMSException { + MQQueueSession wmqSession = (MQQueueSession) session; + return wmqSession.createQueue("queue:///" + destinationName + "?targetClient=1"); + } + }); + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|clientId|Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions with JMS 1.1.||string| +|connectionFactory|The connection factory to be use. A connection factory must be configured either on the component or endpoint.||object| +|disableReplyTo|Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another.|false|boolean| +|durableSubscriptionName|The durable subscriber name for specifying durable topic subscriptions. 
The clientId option must be configured as well.||string| +|jmsMessageType|Allows you to force the use of a specific jakarta.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it.||object| +|replyTo|Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer).||string| +|testConnectionOnStartup|Specifies whether to test the connection on startup. This ensures that, when Camel starts, all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers are tested as well.|false|boolean| +|acknowledgementModeName|The JMS acknowledgement name, which is one of: SESSION\_TRANSACTED, CLIENT\_ACKNOWLEDGE, AUTO\_ACKNOWLEDGE, DUPS\_OK\_ACKNOWLEDGE|AUTO\_ACKNOWLEDGE|string| +|artemisConsumerPriority|Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only go to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer).||integer| +|asyncConsumer|Whether the JmsConsumer processes the Exchange asynchronously. 
If enabled then the JmsConsumer may pick up the next message from the JMS queue, while the previous message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pick up the next message from the JMS queue. Note that if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as the transaction must be executed synchronously (Camel 3.0 may support async transactions).|false|boolean| +|autoStartup|Specifies whether the consumer container should auto-startup.|true|boolean| +|cacheLevel|Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details.||integer| +|cacheLevelName|Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE\_AUTO, CACHE\_CONNECTION, CACHE\_CONSUMER, CACHE\_NONE, and CACHE\_SESSION. The default setting is CACHE\_AUTO. See the Spring documentation and Transactions Cache Levels for more information.|CACHE\_AUTO|string| +|concurrentConsumers|Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control the number of concurrent consumers on the reply message listener.|1|integer| +|maxConcurrentConsumers|Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 
When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener.||integer| +|replyToDeliveryPersistent|Specifies whether to use persistent delivery by default for replies.|true|boolean| +|selector|Sets the JMS selector to use||string| +|subscriptionDurable|Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well.|false|boolean| +|subscriptionName|Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0).||string| +|subscriptionShared|Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. 
Requires a JMS 2.0 compatible message broker.|false|boolean| +|acceptMessagesWhileStopping|Specifies whether the consumer accepts messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved to a dead letter queue on the JMS broker. To avoid this, it is recommended to enable this option.|false|boolean| +|allowReplyManagerQuickStop|Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allows the DefaultMessageListenerContainer.runningAllowed flag to quickly stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers, but to enable it for reply managers you must enable this flag.|false|boolean| +|consumerType|The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use.|Default|object| +|defaultTaskExecutorType|Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. 
Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached thread-pool-like). If not set, it defaults to the previous behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers.||object| +|eagerLoadingOfProperties|Enables eager loading of JMS properties and payload as soon as a message is loaded, which generally is inefficient as the JMS properties may not be required, but it can sometimes catch issues early with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody.|false|boolean| +|eagerPoisonBody|If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison is already stored as an exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties.|Poison JMS message due to ${exception.message}|string| +|exposeListenerSession|Specifies whether the listener session should be exposed when consuming messages.|false|boolean| +|replyToConsumerType|The consumer type of the reply consumer (when doing request/reply), which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. 
When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use.|Default|object| +|replyToSameDestinationAllowed|Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself.|false|boolean| +|taskExecutor|Allows you to specify a custom task executor for consuming messages.||object| +|deliveryDelay|Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker.|-1|integer| +|deliveryMode|Specifies the delivery mode to be used. Possible values are those defined by jakarta.jms.DeliveryMode. NON\_PERSISTENT = 1 and PERSISTENT = 2.||integer| +|deliveryPersistent|Specifies whether persistent delivery is used by default.|true|boolean| +|explicitQosEnabled|Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers.|false|boolean| +|formatDateHeadersToIso8601|Sets whether JMS date properties should be formatted according to the ISO 8601 standard.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time.|false|boolean| +|preserveMessageQos|Set to true, if you want to send messages using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered: JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header.|false|boolean| +|priority|Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect.|4|integer| +|replyToConcurrentConsumers|Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads.|1|integer| +|replyToMaxConcurrentConsumers|Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads.||integer| +|replyToOnTimeoutMaxConcurrentConsumers|Specifies the maximum number of concurrent consumers used to continue routing when a timeout occurs when using request/reply over JMS.|1|integer| +|replyToOverride|Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. 
It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination.||string| +|replyToType|Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However, if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues have lower performance than the alternatives Temporary and Exclusive.||object| +|requestTimeout|The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option.|20000|duration| +|timeToLive|When sending messages, specifies the time-to-live of the message (in milliseconds).|-1|integer| +|allowAdditionalHeaders|This option is used to allow additional headers which may have values that are invalid according to the JMS specification. For example, some message systems, such as WMQ, do this with header names using the prefix JMS\_IBM\_MQMD\_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use \* as suffix for wildcard matching.||string| +|allowNullBody|Whether to allow sending messages with no body. If this option is false and the message body is null, then a JMSException is thrown.|true|boolean| +|alwaysCopyMessage|If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. 
Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set)|false|boolean| +|correlationProperty|When using the InOut exchange pattern, use this JMS property instead of the JMSCorrelationID JMS property to correlate messages. If set, messages will be correlated solely on the value of this property; the JMSCorrelationID property will be ignored and not set by Camel.||string| +|disableTimeToLive|Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to achieve. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details.|false|boolean| +|forceSendOriginalMessage|When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received.|false|boolean| +|includeSentJMSMessageID|Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination.|false|boolean| +|replyToCacheLevelName|Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE\_CONSUMER for exclusive or shared w/ replyToSelectorName. 
And CACHE\_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require setting replyToCacheLevelName=CACHE\_NONE to work. Note: If using temporary queues then CACHE\_NONE is not allowed, and you must use a higher value such as CACHE\_CONSUMER or CACHE\_SESSION.||string| +|replyToDestinationSelectorName|Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue).||string| +|streamMessageTypeEnabled|Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either be sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until there is no more data.|false|boolean| +|allowAutoWiredConnectionFactory|Whether to auto-discover ConnectionFactory from the registry, if no connection factory has been configured. If only one instance of ConnectionFactory is found then it will be used. This is enabled by default.|true|boolean| +|allowAutoWiredDestinationResolver|Whether to auto-discover DestinationResolver from the registry, if no destination resolver has been configured. If only one instance of DestinationResolver is found then it will be used. This is enabled by default.|true|boolean| +|allowSerializedHeaders|Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level.|false|boolean| +|artemisStreamingEnabled|Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. 
This option must only be enabled if Apache Artemis is being used.|false|boolean| +|asyncStartListener|Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or fail-over. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry.|false|boolean| +|asyncStopListener|Whether to stop the JmsConsumer message listener asynchronously, when stopping a route.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|configuration|To use a shared JMS configuration||object| +|destinationResolver|A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry).||object| +|errorHandler|Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. 
This makes it much easier to configure, than having to code a custom errorHandler.||object| +|exceptionListener|Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions.||object| +|idleConsumerLimit|Specify the limit for the number of consumers that are allowed to be idle at any given time.|1|integer| +|idleTaskExecutionLimit|Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring.|1|integer| +|includeAllJMSXProperties|Whether to include all JMSX prefixed properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply.|false|boolean| +|includeCorrelationIDAsBytes|Whether the JMS consumer should include JMSCorrelationIDAsBytes as a header on the Camel Message.|true|boolean| +|jmsKeyFormatStrategy|Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation.||object| +|mapJmsMessage|Specifies whether Camel should auto map the received JMS message to a suited payload type, such as jakarta.jms.TextMessage to a String etc.|true|boolean| +|maxMessagesPerTask|The number of messages per task. -1 is unlimited. 
If you use a range for concurrent consumers (e.g. min/max), then this option can be used to set a value to e.g. 100 to control how fast the consumers will shrink when less work is required.|-1|integer| +|messageConverter|To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control of how to map to/from a jakarta.jms.Message.||object| +|messageCreatedStrategy|To use the given MessageCreatedStrategy which is invoked when Camel creates new instances of jakarta.jms.Message objects when Camel is sending a JMS message.||object| +|messageIdEnabled|When sending, specifies whether message IDs should be added. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value.|true|boolean| +|messageListenerContainerFactory|Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom.||object| +|messageTimestampEnabled|Specifies whether timestamps should be enabled by default on sending messages. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint, the timestamp must be set to its normal value.|true|boolean| +|pubSubNoLocal|Specifies whether to inhibit the delivery of messages published by its own connection.|false|boolean| +|queueBrowseStrategy|To use a custom QueueBrowseStrategy when browsing queues||object| +|receiveTimeout|The timeout for receiving messages (in milliseconds).|1000|duration| +|recoveryInterval|Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. 
The default is 5000 ms, that is, 5 seconds.|5000|duration| +|requestTimeoutCheckerInterval|Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout.|1000|duration| +|serviceLocationEnabled|Whether to detect the network address location of the JMS broker on startup. This information is gathered via reflection on the ConnectionFactory, and is vendor specific. This option can be used to turn this off.|true|boolean| +|synchronous|Sets whether synchronous processing should be strictly used|false|boolean| +|temporaryQueueResolver|A pluggable TemporaryQueueResolver that allows you to use your own resolver for creating temporary queues (some messaging systems have special requirements for creating temporary queues).||object| +|transferException|If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be sent back in response as a jakarta.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. 
Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers!|false|boolean| +|transferExchange|You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payload is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions!|false|boolean| +|useMessageIDAsCorrelationID|Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages.|false|boolean| +|waitForProvisionCorrelationToBeUpdatedCounter|Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled.|50|integer| +|waitForProvisionCorrelationToBeUpdatedThreadSleepingTime|Interval in millis to sleep each time while waiting for provisional correlation id to be updated.|100|duration| +|waitForTemporaryReplyToToBeUpdatedCounter|Number of times to wait for temporary replyTo queue to be created and ready when doing request/reply over JMS.|200|integer| +|waitForTemporaryReplyToToBeUpdatedThreadSleepingTime|Interval in millis to sleep each time while waiting for temporary replyTo queue to be ready.|100|duration| +|headerFilterStrategy|To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter headers 
to and from the Camel message.||object| +|errorHandlerLoggingLevel|Allows configuring the default errorHandler logging level for logging uncaught exceptions.|WARN|object| +|errorHandlerLogStackTrace|Allows controlling whether stack-traces should be logged or not, by the default errorHandler.|true|boolean| +|password|Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory.||string| +|username|Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory.||string| +|transacted|Specifies whether to use transacted mode|false|boolean| +|transactedInOut|Specifies whether InOut operations (request reply) default to using transacted mode. If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. 
This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction.|false|boolean| +|lazyCreateTransactionManager|If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true.|true|boolean| +|transactionManager|The Spring transaction manager to use.||object| +|transactionName|The name of the transaction to use.||string| +|transactionTimeout|The timeout value of the transaction (in seconds), if using transacted mode.|-1|integer| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|destinationType|The kind of destination to use|queue|string| +|destinationName|Name of the queue or topic to use as destination||string| +|clientId|Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions with JMS 1.1.||string| +|connectionFactory|The connection factory to be used. A connection factory must be configured either on the component or endpoint.||object| +|disableReplyTo|Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route messages from one system to another.|false|boolean| +|durableSubscriptionName|The durable subscriber name for specifying durable topic subscriptions. 
The clientId option must be configured as well.||string| +|jmsMessageType|Allows you to force the use of a specific jakarta.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it.||object| +|replyTo|Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer).||string| +|testConnectionOnStartup|Specifies whether to test the connection on startup. This ensures that, when Camel starts, all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers are tested as well.|false|boolean| +|acknowledgementModeName|The JMS acknowledgement name, which is one of: SESSION\_TRANSACTED, CLIENT\_ACKNOWLEDGE, AUTO\_ACKNOWLEDGE, DUPS\_OK\_ACKNOWLEDGE|AUTO\_ACKNOWLEDGE|string| +|artemisConsumerPriority|Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only go to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer).||integer| +|asyncConsumer|Whether the JmsConsumer processes the Exchange asynchronously. 
If enabled then the JmsConsumer may pick up the next message from the JMS queue, while the previous message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may not be processed 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pick up the next message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as the transaction must be executed synchronously (Camel 3.0 may support async transactions).|false|boolean| +|autoStartup|Specifies whether the consumer container should auto-startup.|true|boolean| +|cacheLevel|Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details.||integer| +|cacheLevelName|Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE\_AUTO, CACHE\_CONNECTION, CACHE\_CONSUMER, CACHE\_NONE, and CACHE\_SESSION. The default setting is CACHE\_AUTO. See the Spring documentation and Transactions Cache Levels for more information.|CACHE\_AUTO|string| +|concurrentConsumers|Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control the number of concurrent consumers on the reply message listener.|1|integer| +|maxConcurrentConsumers|Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 
When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control the number of concurrent consumers on the reply message listener.||integer| +|replyToDeliveryPersistent|Specifies whether to use persistent delivery by default for replies.|true|boolean| +|selector|Sets the JMS selector to use.||string| +|subscriptionDurable|Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well.|false|boolean| +|subscriptionName|Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0).||string| +|subscriptionShared|Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. 
Requires a JMS 2.0 compatible message broker.|false|boolean| +|acceptMessagesWhileStopping|Specifies whether the consumer accepts messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved to a dead letter queue on the JMS broker. To avoid this, it is recommended to enable this option.|false|boolean| +|allowReplyManagerQuickStop|Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allows the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable it for reply managers you must enable this flag.|false|boolean| +|consumerType|The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use.|Default|object| +|defaultTaskExecutorType|Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. 
Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached thread-pool-like). If not set, it defaults to the previous behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread churn in elastic configurations with dynamically increasing and decreasing concurrent consumers.||object| +|eagerLoadingOfProperties|Enables eager loading of JMS properties and payload as soon as a message is loaded, which generally is inefficient, as the JMS properties may not be required, but sometimes this can catch issues early with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody.|false|boolean| +|eagerPoisonBody|If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison is already stored as an exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties.|Poison JMS message due to ${exception.message}|string| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|exposeListenerSession|Specifies whether the listener session should be exposed when consuming messages.|false|boolean| +|replyToConsumerType|The consumer type of the reply consumer (when doing request/reply), which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. 
Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use.|Default|object| +|replyToSameDestinationAllowed|Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself.|false|boolean| +|taskExecutor|Allows you to specify a custom task executor for consuming messages.||object| +|deliveryDelay|Sets delivery delay to use for send calls for JMS. This option requires a JMS 2.0 compliant broker.|-1|integer| +|deliveryMode|Specifies the delivery mode to be used. Possible values are those defined by jakarta.jms.DeliveryMode. NON\_PERSISTENT = 1 and PERSISTENT = 2.||integer| +|deliveryPersistent|Specifies whether persistent delivery is used by default.|true|boolean| +|explicitQosEnabled|Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers.|false|boolean| +|formatDateHeadersToIso8601|Sets whether JMS date properties should be formatted according to the ISO 8601 standard.|false|boolean| +|preserveMessageQos|Set to true, if you want to send messages using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered: JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. 
If not provided, Camel will fall back to using the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header.|false|boolean| +|priority|Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect.|4|integer| +|replyToConcurrentConsumers|Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads.|1|integer| +|replyToMaxConcurrentConsumers|Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads.||integer| +|replyToOnTimeoutMaxConcurrentConsumers|Specifies the maximum number of concurrent consumers to continue routing when a timeout occurs when using request/reply over JMS.|1|integer| +|replyToOverride|Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination.||string| +|replyToType|Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However, if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. 
See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues have lower performance than the alternatives Temporary and Exclusive.||object| +|requestTimeout|The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option.|20000|duration| +|timeToLive|When sending messages, specifies the time-to-live of the message (in milliseconds).|-1|integer| +|allowAdditionalHeaders|This option is used to allow additional headers which may have values that are invalid according to the JMS specification. For example, some message systems, such as WMQ, do this with header names using prefix JMS\_IBM\_MQMD\_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use \* as suffix for wildcard matching.||string| +|allowNullBody|Whether to allow sending messages with no body. If this option is false and the message body is null, then a JMSException is thrown.|true|boolean| +|alwaysCopyMessage|If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set)|false|boolean| +|correlationProperty|When using the InOut exchange pattern, use this JMS property instead of JMSCorrelationID JMS property to correlate messages. 
If set, messages will be correlated solely on the value of this property; the JMSCorrelationID property will be ignored and not set by Camel.||string| +|disableTimeToLive|Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to achieve. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details.|false|boolean| +|forceSendOriginalMessage|When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received.|false|boolean| +|includeSentJMSMessageID|Only applicable when sending to a JMS destination using InOnly (e.g., fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time.|false|boolean| +|replyToCacheLevelName|Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE\_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE\_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require setting replyToCacheLevelName=CACHE\_NONE to work. Note: If using temporary queues then CACHE\_NONE is not allowed, and you must use a higher value such as CACHE\_CONSUMER or CACHE\_SESSION.||string| +|replyToDestinationSelectorName|Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue).||string| +|streamMessageTypeEnabled|Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either be sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until there is no more data.|false|boolean| +|allowSerializedHeaders|Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level.|false|boolean| +|artemisStreamingEnabled|Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. 
This option must only be enabled if Apache Artemis is being used.|false|boolean| +|asyncStartListener|Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or fail-over. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; you can then restart the route to retry.|false|boolean| +|asyncStopListener|Whether to stop the JmsConsumer message listener asynchronously, when stopping a route.|false|boolean| +|destinationResolver|A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry).||object| +|errorHandler|Specifies an org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. 
This makes it much easier to configure than having to code a custom errorHandler.||object| +|exceptionListener|Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions.||object| +|headerFilterStrategy|To use a custom HeaderFilterStrategy to filter headers to and from the Camel message.||object| +|idleConsumerLimit|Specify the limit for the number of consumers that are allowed to be idle at any given time.|1|integer| +|idleTaskExecutionLimit|Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring.|1|integer| +|includeAllJMSXProperties|Whether to include all JMSX prefixed properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply.|false|boolean| +|jmsKeyFormatStrategy|Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation.||object| +|mapJmsMessage|Specifies whether Camel should auto map the received JMS message to a suited payload type, such as jakarta.jms.TextMessage to a String etc.|true|boolean| +|maxMessagesPerTask|The number of messages per task. -1 is unlimited. 
If you use a range for concurrent consumers (e.g., min/max), then this option can be used to set a value to e.g. 100 to control how fast the consumers will shrink when less work is required.|-1|integer| +|messageConverter|To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control of how to map to/from a jakarta.jms.Message.||object| +|messageCreatedStrategy|To use the given MessageCreatedStrategy which is invoked when Camel creates new instances of jakarta.jms.Message objects when Camel is sending a JMS message.||object| +|messageIdEnabled|When sending, specifies whether message IDs should be added. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value.|true|boolean| +|messageListenerContainerFactory|Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom.||object| +|messageTimestampEnabled|Specifies whether timestamps should be enabled by default on sending messages. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint the timestamp must be set to its normal value.|true|boolean| +|pubSubNoLocal|Specifies whether to inhibit the delivery of messages published by its own connection.|false|boolean| +|receiveTimeout|The timeout for receiving messages (in milliseconds).|1000|duration| +|recoveryInterval|Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. 
The default is 5000 ms, that is, 5 seconds.|5000|duration| +|requestTimeoutCheckerInterval|Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout.|1000|duration| +|synchronous|Sets whether synchronous processing should be strictly used.|false|boolean| +|temporaryQueueResolver|A pluggable TemporaryQueueResolver that allows you to use your own resolver for creating temporary queues (some messaging systems have special requirements for creating temporary queues).||object| +|transferException|If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be sent back in response as a jakarta.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers!|false|boolean| +|transferExchange|You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. 
Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payload is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions!|false|boolean| +|useMessageIDAsCorrelationID|Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages.|false|boolean| +|waitForProvisionCorrelationToBeUpdatedCounter|Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled.|50|integer| +|waitForProvisionCorrelationToBeUpdatedThreadSleepingTime|Interval in millis to sleep each time while waiting for provisional correlation id to be updated.|100|duration| +|waitForTemporaryReplyToToBeUpdatedCounter|Number of times to wait for temporary replyTo queue to be created and ready when doing request/reply over JMS.|200|integer| +|waitForTemporaryReplyToToBeUpdatedThreadSleepingTime|Interval in millis to sleep each time while waiting for temporary replyTo queue to be ready.|100|duration| +|errorHandlerLoggingLevel|Allows configuring the default errorHandler logging level for logging uncaught exceptions.|WARN|object| +|errorHandlerLogStackTrace|Allows controlling whether stack-traces should be logged or not, by the default errorHandler.|true|boolean| +|password|Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory.||string| +|username|Username to use with the ConnectionFactory. 
You can also configure username/password directly on the ConnectionFactory.||string| +|transacted|Specifies whether to use transacted mode.|false|boolean| +|transactedInOut|Specifies whether InOut operations (request reply) default to using transacted mode. If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. 
This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction.|false|boolean| +|lazyCreateTransactionManager|If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true.|true|boolean| +|transactionManager|The Spring transaction manager to use.||object| +|transactionName|The name of the transaction to use.||string| +|transactionTimeout|The timeout value of the transaction (in seconds), if using transacted mode.|-1|integer| diff --git a/camel-jmx.md b/camel-jmx.md new file mode 100644 index 0000000000000000000000000000000000000000..e7af104798811f80ddaa216c3deb9c5003646f7d --- /dev/null +++ b/camel-jmx.md @@ -0,0 +1,60 @@ +# Jmx + +**Since Camel 2.6** + +**Only consumer is supported** + +Apache Camel has extensive support for JMX to allow you to monitor and +control the Camel managed objects with a JMX client. + +Camel also provides a [JMX](#jmx-component.adoc) component that allows +you to subscribe to MBean notifications. This page is about how to +manage and monitor Camel using JMX. + +If you run Camel standalone with just `camel-core` as a dependency, and +you want JMX enabled out of the box, then you need to add +`camel-management` as a dependency. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. 
In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|serverURL|The server URL comes from the remaining endpoint. Use platform to connect to the local JVM.||string| +|format|Format for the message body. Either xml or raw. If xml, the notification is serialized to xml. If raw, then the raw java object is set as the body.|xml|string| +|granularityPeriod|The frequency to poll the bean to check the monitor (monitor types only).|10000|duration| +|monitorType|The type of monitor to create. One of string, gauge, counter (monitor types only).||string| +|objectDomain|The domain for the mbean you're connecting to.||string| +|objectName|The name key for the mbean you're connecting to. This value is mutually exclusive with the object properties that get passed.||string| +|observedAttribute|The attribute to observe for the monitor bean or consumer.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. 
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|executorService|To use a custom shared thread pool for the consumers. By default each consumer has its own thread pool to process and route notifications.||object| +|handback|Value to handback to the listener when a notification is received. This value will be put in the message header with the key JMXConstants#JMX\_HANDBACK.||object| +|notificationFilter|Reference to a bean that implements the NotificationFilter.||object| +|objectProperties|Properties for the object name. These values will be used if the objectName param is not set.||object| +|reconnectDelay|The number of seconds to wait before attempting to retry establishment of the initial connection or attempt to reconnect a lost connection.|10|integer| +|reconnectOnConnectionFailure|If true the consumer will attempt to reconnect to the JMX server when any connection failure occurs. The consumer will attempt to re-establish the JMX connection every 'x' seconds until the connection is made -- where 'x' is the configured reconnectDelay.|false|boolean| +|testConnectionOnStartup|If true the consumer will throw an exception if unable to establish the JMX connection upon startup. 
If false, the consumer will attempt to establish the JMX connection every 'x' seconds until the connection is made, where 'x' is the configured reconnectDelay.|true|boolean|
+|initThreshold|Initial threshold for the monitor. The value must exceed this before notifications are fired (counter monitor only).||integer|
+|modulus|The value at which the counter is reset to zero (counter monitor only).||integer|
+|offset|The amount to increment the threshold after it's been exceeded (counter monitor only).||integer|
+|differenceMode|If true, then the value reported in the notification is the difference from the threshold as opposed to the value itself (counter and gauge monitor only).|false|boolean|
+|notifyHigh|If true, the gauge will fire a notification when the high threshold is exceeded (gauge monitor only).|false|boolean|
+|notifyLow|If true, the gauge will fire a notification when the low threshold is exceeded (gauge monitor only).|false|boolean|
+|thresholdHigh|Value for the gauge's high threshold (gauge monitor only).||number|
+|thresholdLow|Value for the gauge's low threshold (gauge monitor only).||number|
+|password|Credentials for making a remote connection.||string|
+|user|Credentials for making a remote connection.||string|
+|notifyDiffer|If true, will fire a notification when the string attribute differs from the string to compare (string monitor or consumer). By default the consumer will notify match if the observed attribute and string to compare have been configured.|false|boolean|
+|notifyMatch|If true, will fire a notification when the string attribute matches the string to compare (string monitor or consumer). By default the consumer will notify match if the observed attribute and string to compare have been configured.|false|boolean|
+|stringToCompare|Value for attribute to compare (string monitor or consumer). 
By default the consumer will notify match if the observed attribute and string to compare have been configured.||string|
diff --git a/camel-jolt.md b/camel-jolt.md
new file mode 100644
index 0000000000000000000000000000000000000000..9a6c808149e1b6b3eb57652339b2369be33dc1ca
--- /dev/null
+++ b/camel-jolt.md
@@ -0,0 +1,70 @@
+# Jolt
+
+**Since Camel 2.16**
+
+**Only producer is supported**
+
+The Jolt component allows you to process JSON messages using a
+[JOLT](https://github.com/bazaarvoice/jolt) specification. This can be
+ideal when doing JSON to JSON transformation.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-jolt</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    jolt:specName[?options]
+
+Where `specName` is the classpath-local URI of the specification to
+invoke, or the complete URL of the remote specification (e.g.:
+`file://folder/myfile.json`).
+
+# Samples
+
+For example, you could use something like
+
+    from("activemq:My.Queue").
+      to("jolt:com/acme/MyResponse.json");
+
+And a file-based resource:
+
+    from("activemq:My.Queue").
+      to("jolt:file://myfolder/MyResponse.json?contentCache=true").
+      to("activemq:Another.Queue");
+
+You can also specify what specification the component should use
+dynamically via a header, so, for example:
+
+    from("direct:in").
+      setHeader("CamelJoltResourceUri").constant("path/to/my/spec.json").
+      to("jolt:dummy?allowTemplateFromHeader=true");
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|allowTemplateFromHeader|Whether to allow using a resource template from a header or not (default false). Enabling this allows specifying dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care.|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|transform|Explicitly sets the Transform to use. If not set a Transform specified by the transformDsl will be created||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|resourceUri|Path to the resource. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod.||string| +|allowTemplateFromHeader|Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. 
However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care.|false|boolean|
+|contentCache|Sets whether to use resource content cache or not.|false|boolean|
+|inputType|Specifies if the input is hydrated JSON or a JSON String.|Hydrated|object|
+|outputType|Specifies if the output should be hydrated JSON or a JSON String.|Hydrated|object|
+|transformDsl|Specifies the Transform DSL of the endpoint resource. If none is specified, Chainr will be used.|Chainr|object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-jooq.md b/camel-jooq.md
new file mode 100644
index 0000000000000000000000000000000000000000..0ca833f6d8c385d4684bc46ab99917dbaeef2e8e
--- /dev/null
+++ b/camel-jooq.md
@@ -0,0 +1,266 @@
+# Jooq
+
+**Since Camel 3.0**
+
+**Both producer and consumer are supported**
+
+The JOOQ component enables you to store and retrieve Java objects from
+persistent storage using the JOOQ library.
+
+JOOQ provides a DSL to create queries. There are two types of queries:
+
+1. org.jooq.Query - can be executed
+
+2. 
org.jooq.ResultQuery - can return results
+
+For example:
+
+    // Create a Query object and execute it:
+    Query query = create.query("DELETE FROM BOOK");
+    query.execute();
+
+    // Create a ResultQuery object and execute it, fetching results:
+    ResultQuery resultQuery = create.resultQuery("SELECT * FROM BOOK");
+    Result result = resultQuery.fetch();
+
+# Plain SQL
+
+SQL can be executed using JOOQ's Query or ResultQuery objects.
+Also, the SQL query can be specified inside the URI:
+
+    from("jooq://org.apache.camel.component.jooq.db.tables.records.BookStoreRecord?query=select * from book_store x where x.name = 'test'").to("bean:myBusinessLogic");
+
+See the examples below.
+
+# Consuming from endpoint
+
+Consuming messages from a JOOQ consumer endpoint removes (or updates)
+entity beans in the database. This allows you to use a database table as
+a logical queue: consumers take messages from the queue and then
+delete/update them to logically remove them from the queue. If you do
+not wish to delete the entity bean when it has been processed, you can
+specify consumeDelete=false on the URI.
+
+## Operations
+
+When using jooq as a producer you can use any of the following
+`JooqOperation` operations:
+
+|Operation|Description|
+|---|---|
+|none|Execute a query (default)|
+|execute|Execute a query with no expected results|
+|fetch|Execute a query and the result of the query is stored as the new message body|
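+
+For instance, routes using the fetch and execute operations can be sketched as follows (a sketch only; the endpoint names are illustrative, and the entity class is the `BookStoreRecord` used elsewhere in this page):
+
+    // fetch: run the query and store its result as the new message body
+    from("direct:fetch")
+        .to("jooq://org.apache.camel.component.jooq.db.tables.records.BookStoreRecord?operation=fetch");
+
+    // execute: run a query with no expected results (e.g. a bulk delete)
+    from("direct:delete")
+        .to("jooq://org.apache.camel.component.jooq.db.tables.records.BookStoreRecord?operation=execute");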
+
+## Example
+
+JOOQ configuration (Spring XML, not preserved in this conversion; the SQL dialect is set from the `${jooq.sql.dialect}` property):
+
+Camel context configuration (Spring XML, not preserved in this conversion):
+
+Sample bean:
+
+    @Component
+    public class BookStoreRecordBean {
+        private String name = "test";
+
+        public BookStoreRecord generate() {
+            return new BookStoreRecord(name);
+        }
+
+        public ResultQuery select() {
+            return DSL.selectFrom(BOOK_STORE).where(BOOK_STORE.NAME.eq(name));
+        }
+
+        public Query delete() {
+            return DSL.delete(BOOK_STORE).where(BOOK_STORE.NAME.eq(name));
+        }
+    }
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|configuration|Component configuration (database connection, database entity type, etc.)||object|
+|databaseConfiguration|To use a specific database configuration||object|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
+|consumeDelete|Delete entity after it is consumed|true|boolean|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|operation|Type of operation to execute on query|NONE|object| +|query|To execute plain SQL query||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. 
You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|entityType|JOOQ entity class||string|
+|databaseConfiguration|To use a specific database configuration||object|
+|consumeDelete|Delete entity after it is consumed|true|boolean|
+|sendEmptyMessageWhenIdle|If the polling consumer did not poll any messages, you can enable this option to send an empty message (no body) instead.|false|boolean|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and routed in Camel.||object|
+|operation|Type of operation to execute on query|NONE|object|
+|query|To execute plain SQL query||string|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|backoffErrorThreshold|The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in.||integer|
+|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier should kick in.||integer|
+|backoffMultiplier|To let the scheduled polling consumer back off if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt happens again. 
When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| diff --git a/camel-jpa.md b/camel-jpa.md new file mode 100644 index 0000000000000000000000000000000000000000..042d2b380afec3dc1482b733f93434ddd65f20ed --- /dev/null +++ b/camel-jpa.md @@ -0,0 +1,340 @@ +# Jpa + +**Since Camel 1.0** + +**Both producer and consumer are supported** + +The JPA component enables you to store and retrieve Java objects from +persistent storage using EJB 3’s Java Persistence Architecture (JPA). 
+JPA is a standard interface layer that wraps Object/Relational Mapping
+(ORM) products such as OpenJPA, Hibernate, TopLink, and so on.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-jpa</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# Sending to the endpoint
+
+You can store a Java entity bean in a database by sending it to a JPA
+producer endpoint. The body of the *In* message is assumed to be an
+entity bean (that is, a POJO with an
+[@Entity](https://jakarta.ee/specifications/persistence/2.2/apidocs/javax/persistence/entity)
+annotation on it) or a collection or array of entity beans.
+
+If the body is a List of entities, make sure to use
+**entityType=java.util.List** as a configuration passed to the producer
+endpoint.
+
+If the body does not contain one of the previously listed types, put a
+Message Translator in front of the endpoint to perform the necessary
+conversion first.
+
+You can use `query`, `namedQuery` or `nativeQuery` for the producer as
+well. Also, in the value of the `parameters` option, you can use a Simple
+expression, which allows you to retrieve parameter values from the Message
+body, headers, etc. These queries can be used for retrieving a set of data
+with a `SELECT` JPQL/SQL statement, as well as for executing a bulk
+update/delete with an `UPDATE`/`DELETE` JPQL/SQL statement. Please
+note that you need to set `useExecuteUpdate` to `true` if you
+execute `UPDATE`/`DELETE` with `namedQuery`, as Camel doesn't look into
+the named query, unlike with `query` and `nativeQuery`.
+
+# Consuming from the endpoint
+
+Consuming messages from a JPA consumer endpoint removes (or updates)
+entity beans in the database. This allows you to use a database table as
+a logical queue: consumers take messages from the queue and then
+delete/update them to logically remove them from the queue. 
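+
+As a sketch of this table-as-queue pattern (the entity class and bean name here are illustrative; `maximumResults` and `delay` are regular endpoint options described below):
+
+    // poll up to 100 entities every 5 seconds; each entity is deleted
+    // after routing because consumeDelete defaults to true
+    from("jpa://org.apache.camel.examples.SendEmail?maximumResults=100&delay=5000")
+        .to("bean:emailService");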
+
+If you do not wish to delete the entity bean when it has been processed
+(and when routing is done), you can specify `consumeDelete=false` on the
+URI. This will result in the entity being processed in each poll.
+
+If you would rather perform some update on the entity to mark it as
+processed (such as to exclude it from a future query), then you can
+annotate a method with
+[@Consumed](https://www.javadoc.io/doc/org.apache.camel/camel-jpa/current/org/apache/camel/component/jpa/Consumed.html).
+It will be invoked on your entity bean when the entity bean has been
+processed (and when routing is done).
+
+You can use
+[@PreConsumed](https://www.javadoc.io/doc/org.apache.camel/camel-jpa/current/org/apache/camel/component/jpa/PreConsumed.html),
+which will be invoked on your entity bean before it has been processed
+(before routing).
+
+If you are consuming a lot of rows (100K+) and experience `OutOfMemory`
+problems, you should set `maximumResults` to a sensible value.
+
+# URI format
+
+    jpa:entityClassName[?options]
+
+For sending to the endpoint, the *entityClassName* is optional. If
+specified, it helps the [Type
+Converter](http://camel.apache.org/type-converter.html) to ensure the
+body is of the correct type.
+
+For consuming, the *entityClassName* is mandatory.
+
+# Configuring EntityManagerFactory
+
+It's strongly advised to configure the JPA component to use a specific
+`EntityManagerFactory` instance. If you fail to do so, each `JpaEndpoint`
+will auto-create its own instance of `EntityManagerFactory`, which most
+often is not what you want.
+
+For example, you can instantiate a JPA component that references the
+`myEMFactory` entity manager factory, as follows:
+
+    <bean id="jpa" class="org.apache.camel.component.jpa.JpaComponent">
+        <property name="entityManagerFactory" ref="myEMFactory"/>
+    </bean>
+
+The `JpaComponent` automatically looks up the `EntityManagerFactory`
+from the Registry, which means you do not need to configure this on the
+`JpaComponent` as shown above. You only need to do so if there is
+ambiguity, in which case Camel will log a WARN. 
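+
+The same wiring can be sketched in Java as well (assuming an `EntityManagerFactory` instance named `myEMFactory` is already available; the variable names are illustrative):
+
+    // register a JPA component bound to a specific EntityManagerFactory
+    JpaComponent jpa = new JpaComponent();
+    jpa.setEntityManagerFactory(myEMFactory);
+    camelContext.addComponent("jpa", jpa);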
+
+# Configuring TransactionStrategy
+
+The `TransactionStrategy` is a vendor-neutral abstraction that allows
+`camel-jpa` to easily plug in and work with the Spring `TransactionManager`
+or the Quarkus Transaction API.
+
+The `JpaComponent` automatically looks up the `TransactionStrategy` from
+the Registry. If Camel cannot find any `TransactionStrategy` instance
+registered, it will also look up the `TransactionTemplate` and try
+to extract a `TransactionStrategy` from it.
+
+If no `TransactionTemplate` is available in the registry,
+`JpaEndpoint` will auto-create a default instance
+(`org.apache.camel.component.jpa.DefaultTransactionStrategy`) of
+`TransactionStrategy`, which most often is not what you want.
+
+If more than a single instance of the `TransactionStrategy` is found,
+Camel will log a WARN. In such cases, you might want to instantiate and
+explicitly configure a JPA component that references the
+`myTransactionManager` transaction manager, as follows:
+
+    <bean id="jpa" class="org.apache.camel.component.jpa.JpaComponent">
+        <property name="transactionStrategy" ref="myTransactionManager"/>
+    </bean>
+
+# Using a consumer with a named query
+
+For consuming only selected entities, you can use the `namedQuery` URI
+query option. First, you have to define the named query in the JPA
+Entity class:
+
+    @Entity
+    @NamedQuery(name = "step1", query = "select x from MultiSteps x where x.step = 1")
+    public class MultiSteps {
+        ...
+    }
+
+After that, you can define a consumer uri like this one:
+
+    from("jpa://org.apache.camel.examples.MultiSteps?namedQuery=step1")
+        .to("bean:myBusinessLogic");
+
+# Using a consumer with a query
+
+For consuming only selected entities, you can use the `query` URI query
+option. You only have to define the query option:
+
+    from("jpa://org.apache.camel.examples.MultiSteps?query=select o from org.apache.camel.examples.MultiSteps o where o.step = 1")
+        .to("bean:myBusinessLogic");
+
+# Using a consumer with a native query
+
+For consuming only selected entities, you can use the `nativeQuery` URI
+query option. 
You only have to define the native query option: + + from("jpa://org.apache.camel.examples.MultiSteps?nativeQuery=select * from MultiSteps where step = 1") + .to("bean:myBusinessLogic"); + +If you use the native query option, you will receive an object array in +the message body. + +# Using a producer with a named query + +For retrieving selected entities or execute bulk update/delete, you can +use the `namedQuery` URI query option. First, you have to define the +named query in the JPA Entity class: + + @Entity + @NamedQuery(name = "step1", query = "select x from MultiSteps x where x.step = 1") + public class MultiSteps { + ... + } + +After that, you can define a producer uri like this one: + + from("direct:namedQuery") + .to("jpa://org.apache.camel.examples.MultiSteps?namedQuery=step1"); + +Note that you need to specify `useExecuteUpdate` option to `true` to +execute `UPDATE`/`DELETE` statement as a named query. + +# Using a producer with a query + +For retrieving selected entities or execute bulk update/delete, you can +use the `query` URI query option. You only have to define the query +option: + + from("direct:query") + .to("jpa://org.apache.camel.examples.MultiSteps?query=select o from org.apache.camel.examples.MultiSteps o where o.step = 1"); + +# Using a producer with a native query + +For retrieving selected entities or execute bulk update/delete, you can +use the `nativeQuery` URI query option. You only have to define the +native query option: + + from("direct:nativeQuery") + .to("jpa://org.apache.camel.examples.MultiSteps?resultClass=org.apache.camel.examples.MultiSteps&nativeQuery=select * from MultiSteps where step = 1"); + +If you use the native query option without specifying `resultClass`, you +will receive an object array in the message body. + +# Using the JPA-Based Idempotent Repository + +The Idempotent Consumer from the [EIP +patterns](http://camel.apache.org/enterprise-integration-patterns.html) +is used to filter out duplicate messages. 
A JPA-based idempotent repository is provided.
+
+To use the JPA-based idempotent repository:
+
+1. Set up a `persistence-unit` in the persistence.xml file.
+
+2. Set up a `org.springframework.orm.jpa.JpaTemplate` which is used by
+   the
+   `org.apache.camel.processor.idempotent.jpa.JpaMessageIdRepository`.
+
+3. Configure the idempotent repository:
+   `org.apache.camel.processor.idempotent.jpa.JpaMessageIdRepository`.
+
+4. Create the JPA idempotent repository in the Spring XML file:
+
+    <bean id="messageIdRepository" class="org.apache.camel.processor.idempotent.jpa.JpaMessageIdRepository">
+        <constructor-arg ref="jpaTemplate"/>
+        <constructor-arg value="messageId"/>
+    </bean>
+
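+A route using a JPA-based message ID repository with the Idempotent Consumer EIP might then be sketched as follows (the `messageIdRepository` bean reference and endpoint names are illustrative):
+
+    from("direct:start")
+        // skip any message whose "messageId" header was seen before
+        .idempotentConsumer(header("messageId"), messageIdRepository)
+        .to("mock:result");
+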
+ +**When running this Camel component tests inside your IDE** + +If you run the [tests of this +component](https://svn.apache.org/repos/asf/camel/trunk/components/camel-jpa/src/test) +directly inside your IDE, and not through Maven, then you could see +exceptions like these: + + org.springframework.transaction.CannotCreateTransactionException: Could not open JPA EntityManager for transaction; nested exception is + org.apache.openjpa.persistence.ArgumentException: This configuration disallows runtime optimization, + but the following listed types were not enhanced at build time or at class load time with a javaagent: "org.apache.camel.examples.SendEmail". + at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:427) + at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:371) + at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:127) + at org.apache.camel.processor.jpa.JpaRouteTest.cleanupRepository(JpaRouteTest.java:96) + at org.apache.camel.processor.jpa.JpaRouteTest.createCamelContext(JpaRouteTest.java:67) + at org.apache.camel.test.junit5.CamelTestSupport.doSetUp(CamelTestSupport.java:238) + at org.apache.camel.test.junit5.CamelTestSupport.setUp(CamelTestSupport.java:208) + +The problem here is that the source has been compiled or recompiled +through your IDE and not through Maven, which would [enhance the +byte-code at build +time](https://svn.apache.org/repos/asf/camel/trunk/components/camel-jpa/pom.xml). +To overcome this, you need to enable [dynamic byte-code enhancement of +OpenJPA](http://openjpa.apache.org/entity-enhancement.html#dynamic-enhancement). 
+For example, assuming the current OpenJPA version being used in Camel is
+2.2.1, to run the tests inside your IDE, you would need to pass the
+following argument to the JVM:
+
+    -javaagent:<path_to_your_local_m2_cache>/org/apache/openjpa/openjpa/2.2.1/openjpa-2.2.1.jar
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|aliases|Maps an alias to a JPA entity class. The alias can then be used in the endpoint URI (instead of the fully qualified class name).||object|
+|entityManagerFactory|To use the EntityManagerFactory. This is strongly recommended to configure.||object|
+|joinTransaction|The camel-jpa component will join transaction by default. You can use this option to turn this off, for example if you use LOCAL\_RESOURCE and join transaction doesn't work with your JPA provider. This option can also be set globally on the JpaComponent, instead of having to set it on all endpoints.|true|boolean|
+|sharedEntityManager|Whether to use Spring's SharedEntityManager for the consumer/producer. Note in most cases joinTransaction should be set to false as this is not an EXTENDED EntityManager.|false|boolean|
+|transactionStrategy|To use the TransactionStrategy for running the operations in a transaction.||object|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|entityType|Entity class name||string| +|joinTransaction|The camel-jpa component will join transaction by default. You can use this option to turn this off, for example if you use LOCAL\_RESOURCE and join transaction doesn't work with your JPA provider. 
This option can also be set globally on the JpaComponent, instead of having to set it on all endpoints.|true|boolean|
+|maximumResults|Set the maximum number of results to retrieve on the Query.|-1|integer|
+|namedQuery|To use a named query.||string|
+|nativeQuery|To use a custom native query. You may want to use the option resultClass also when using native queries.||string|
+|persistenceUnit|The JPA persistence unit used by default.|camel|string|
+|query|To use a custom query.||string|
+|resultClass|Defines the type of the returned payload (we will call entityManager.createNativeQuery(nativeQuery, resultClass) instead of entityManager.createNativeQuery(nativeQuery)). Without this option, we will return an object array. Only has an effect when used in conjunction with a native query when consuming data.||string|
+|consumeDelete|If true, the entity is deleted after it is consumed; if false, the entity is not deleted.|true|boolean|
+|consumeLockEntity|Specifies whether or not to set an exclusive lock on each entity bean while processing the results from polling.|true|boolean|
+|deleteHandler|To use a custom DeleteHandler to delete the row after the consumer is done processing the exchange||object|
+|lockModeType|To configure the lock mode on the consumer.|PESSIMISTIC\_WRITE|object|
+|maxMessagesPerPoll|An integer value to define the maximum number of messages to gather per poll. By default, no maximum is set. Can be used to avoid polling many thousands of messages when starting up the server. 
Set a value of 0 or negative to disable.||integer|
+|preDeleteHandler|To use a custom Pre-DeleteHandler to delete the row after the consumer has read the entity.||object|
+|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean|
+|skipLockedEntity|To configure whether to use NOWAIT on lock and silently skip the entity.|false|boolean|
+|transacted|Whether to run the consumer in transacted mode, by which all messages will either commit or rollback, when the entire batch has been processed. The default behavior (false) is to commit all the previously successfully processed messages, and only rollback the last failed message.|false|boolean|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|parameters|This key/value mapping is used for building the query parameters.
It is expected to be of the generic type java.util.Map where the keys are the named parameters of a given JPA query and the values are their corresponding effective values you want to select for. When used on a producer, a Simple expression can be used as a parameter value, which allows you to retrieve parameter values from the message body, headers, etc.||object|
+|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that typically occurs during the poll operation, before an Exchange has been created and routed in Camel.||object|
+|findEntity|If enabled then the producer will find a single entity by using the message body as key and entityType as the class type. This can be used instead of a query to find a single entity.|false|boolean|
+|firstResult|Set the position of the first result to retrieve.|-1|integer|
+|flushOnSend|Flushes the EntityManager after the entity bean has been persisted.|true|boolean|
+|outputTarget|To put the query (or find) result in a header or property instead of the body. If the value starts with the prefix property:, put the result into the so named property, otherwise into the header.||string|
+|remove|Indicates to use entityManager.remove(entity).|false|boolean|
+|singleResult|If enabled, a query or a find which would return no results or more than one result will throw an exception instead.|false|boolean|
+|useExecuteUpdate|To configure whether to use executeUpdate() when the producer executes a query. When you use an INSERT, UPDATE or DELETE statement as a named query, you need to set this option to 'true'.||boolean|
+|usePersist|Indicates to use entityManager.persist(entity) instead of entityManager.merge(entity).
Note: entityManager.persist(entity) doesn't work for detached entities (where the EntityManager has to execute an UPDATE instead of an INSERT query)!|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|usePassedInEntityManager|If set to true, then Camel will use the EntityManager from the header JpaConstants.ENTITY\_MANAGER instead of the configured entity manager on the component/endpoint. This allows end users to control which entity manager will be in use.|false|boolean|
+|entityManagerProperties|Additional properties for the entity manager to use.||object|
+|sharedEntityManager|Whether to use Spring's SharedEntityManager for the consumer/producer. Note in most cases joinTransaction should be set to false as this is not an EXTENDED EntityManager.|false|boolean|
+|backoffErrorThreshold|The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier kicks in.||integer|
+|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier kicks in.||integer|
+|backoffMultiplier|To let the scheduled polling consumer back off if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt happens again.
When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer|
+|delay|Milliseconds before the next poll.|500|integer|
+|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean|
+|initialDelay|Milliseconds before the first poll starts.|1000|integer|
+|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer|
+|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object|
+|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object|
+|scheduler|To use a cron scheduler from either the camel-spring or camel-quartz component. Use value spring or quartz for the built-in scheduler.|none|object|
+|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based schedulers.||object|
+|startScheduler|Whether the scheduler should be auto started.|true|boolean|
+|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object|
+|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean|
diff --git a/camel-jslt.md b/camel-jslt.md
new file mode 100644
index 0000000000000000000000000000000000000000..d644b801c05c0032271b07658eeb0649ce892968
--- /dev/null
+++ b/camel-jslt.md
@@ -0,0 +1,132 @@
+# Jslt
+
+**Since Camel 3.1**
+
+**Only producer is supported**
+
+The JSLT component allows you to process JSON messages using a
+[JSLT](https://github.com/schibsted/jslt) expression. This can be ideal
+when doing JSON to JSON transformation or querying data.
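+
+For a feel of the language, a JSLT specification is itself a JSON-shaped
+expression. The sketch below is illustrative only (the input shape and
+field names are hypothetical, not taken from the Camel docs); it
+reshapes an incoming order document:
+
+    {
+      "id": .order.id,
+      "customer": .order.customer.name,
+      "lines": [for (.order.lines) {"sku": .sku, "qty": .quantity}]
+    }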
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-jslt</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    jslt:specName[?options]
+
+Where **specName** is the classpath-local URI of the specification to
+invoke, or the complete URL of the remote specification (e.g.
+`file://folder/myfile.vm`).
+
+# Passing values to JSLT
+
+Camel can supply exchange information as variables when applying a JSLT
+expression on the body. The available variables from the **Exchange**
+are:
+
+|name|value|
+|---|---|
+|headers|The headers of the In message as a JSON object|
+|variables|The variables|
+|exchange.properties|The Exchange properties as a JSON object. `exchange` is the name of the variable and `properties` is the path to the exchange properties. Available if the `allowContextMapAll` option is true.|
+
+All the values that cannot be converted to JSON with Jackson are denied
+and will not be available in the JSLT expression.
+
+For example, the header named `type` and the exchange property
+`instance` can be accessed like this:
+
+    {
+      "type": $headers.type,
+      "instance": $exchange.properties.instance
+    }
+
+# Samples
+
+For example, you could use something like:
+
+    from("activemq:My.Queue").
+      to("jslt:com/acme/MyResponse.json");
+
+And a file-based resource:
+
+    from("activemq:My.Queue").
+      to("jslt:file://myfolder/MyResponse.json?contentCache=true").
+      to("activemq:Another.Queue");
+
+You can also specify which JSLT expression the component should use
+dynamically via a header, for example:
+
+    from("direct:in").
+      setHeader("CamelJsltResourceUri").constant("path/to/my/spec.json").
+      to("jslt:dummy?allowTemplateFromHeader=true");
+
+Or send the whole JSLT expression via a header (suitable for querying):
+
+    from("direct:in").
+      setHeader("CamelJsltString").constant(".published").
+      to("jslt:dummy?allowTemplateFromHeader=true");
+
+Passing exchange properties to the JSLT expression can be done like this:
+
+    from("direct:in").
+      to("jslt:com/acme/MyResponse.json?allowContextMapAll=true");
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|allowTemplateFromHeader|Whether to allow using a resource template from a header or not (default false). Enabling this allows specifying dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care.|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started.
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|functions|JSLT can be extended by plugging in functions written in Java.||array|
+|objectFilter|JSLT can be extended by plugging in a custom JSLT object filter.||object|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|resourceUri|Path to the resource. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod.||string|
+|allowContextMapAll|Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so imposes a potential security risk as this opens access to the full power of the CamelContext API.|false|boolean|
+|allowTemplateFromHeader|Whether to allow using a resource template from a header or not (default false). Enabling this allows specifying dynamic templates via message header.
However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care.|false|boolean|
+|contentCache|Sets whether to use resource content cache or not.|false|boolean|
+|mapBigDecimalAsFloats|If true, the mapper will use the USE\_BIG\_DECIMAL\_FOR\_FLOATS in serialization features.|false|boolean|
+|objectMapper|Setting a custom JSON Object Mapper to be used.||object|
+|prettyPrint|If true, JSON in output message is pretty printed.|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-json-patch.md b/camel-json-patch.md
new file mode 100644
index 0000000000000000000000000000000000000000..14e4721b694cf10379ebacc4a31c060806d778d3
--- /dev/null
+++ b/camel-json-patch.md
@@ -0,0 +1,45 @@
+# Json-patch
+
+**Since Camel 3.12**
+
+**Only producer is supported**
+
+The JsonPatch component allows you to process JSON messages using
+[JSON Patch](https://github.com/java-json-tools/json-patch) ([RFC
+6902](https://datatracker.ietf.org/doc/html/rfc6902)).
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-json-patch</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    json-patch:resourceUri[?options]
+
+Where **resourceUri** is the classpath-local URI of the patch document
+to apply, or the complete URL of a remote patch document (e.g.
+`file://folder/myfile.json`).
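+
+The docs ship no sample route, so here is a minimal sketch; the patch
+file name and the fields it touches are hypothetical. An RFC 6902 patch
+document such as
+
+    [
+      { "op": "replace", "path": "/status", "value": "shipped" },
+      { "op": "remove", "path": "/internalNotes" }
+    ]
+
+could be stored as `patch.json` on the classpath and applied to each
+message body with a route like
+
+    from("direct:start")
+        .to("json-patch:patch.json");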
+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|resourceUri|Path to the resource. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod.||string| +|allowContextMapAll|Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. 
Doing so imposes a potential security risk as this opens access to the full power of the CamelContext API.|false|boolean|
+|contentCache|Sets whether to use resource content cache or not.|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-json-validator.md b/camel-json-validator.md
new file mode 100644
index 0000000000000000000000000000000000000000..96ba748f7e15602c79619f1c8b19d02456b4e1e9
--- /dev/null
+++ b/camel-json-validator.md
@@ -0,0 +1,118 @@
+# Json-validator
+
+**Since Camel 2.20**
+
+**Only producer is supported**
+
+The JSON Schema Validator component performs validation of the
+message body against JSON Schemas v4, v6, v7, v2019-09 draft and
+v2020-12 (partial) using the NetworkNT JSON Schema library
+([https://github.com/networknt/json-schema-validator](https://github.com/networknt/json-schema-validator)).
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-json-validator</artifactId>
+        <version>x.y.z</version>
+    </dependency>
+
+# URI format
+
+    json-validator:resourceUri[?options]
+
+Where **resourceUri** is some URL to a local resource on the classpath,
+a full URL to a remote resource, or a resource on the file system which
+contains the JSON Schema to validate against.
+ +# Example + +Assuming we have the following JSON Schema: + +**myschema.json** + + { + "$schema": "http://json-schema.org/draft-04/schema#", + "definitions": {}, + "id": "my-schema", + "properties": { + "id": { + "default": 1, + "description": "An explanation about the purpose of this instance.", + "id": "/properties/id", + "title": "The id schema", + "type": "integer" + }, + "name": { + "default": "A green door", + "description": "An explanation about the purpose of this instance.", + "id": "/properties/name", + "title": "The name schema", + "type": "string" + }, + "price": { + "default": 12.5, + "description": "An explanation about the purpose of this instance.", + "id": "/properties/price", + "title": "The price schema", + "type": "number" + } + }, + "required": [ + "name", + "id", + "price" + ], + "type": "object" + } + +We can validate incoming JSON with the following Camel route, where +`myschema.json` is loaded from the classpath. + + from("direct:start") + .to("json-validator:myschema.json") + .to("mock:end") + +If you use the default schema loader, it will try to determine the +schema version from the $schema property and instruct the +[validator](https://github.com/networknt) appropriately. If it can’t +find (or doesn’t recognize) the $schema property, it will assume your +schema is version +[2019-09](https://json-schema.org/specification-links.html#draft-2019-09-formerly-known-as-draft-8). + +If your schema is local to your application (e.g. a classpath location +as opposed to URL), your schema can also contain `$ref` links to a +relative subschema in the classpath. Per the JSON schema spec, your +schema must not have an $id identifier property for this to work +properly. 
See the [unit +test](https://github.com/apache/camel/blob/main/components/camel-json-validator/src/test/java/org/apache/camel/component/jsonvalidator/LocalRefSchemaTest.java) +and +[schema](https://github.com/apache/camel/blob/main/components/camel-json-validator/src/test/resources/org/apache/camel/component/jsonvalidator/Order.json) +for an example. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|resourceUri|Path to the resource. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod.||string| +|allowContextMapAll|Sets whether the context map should allow access to all details. 
By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so imposes a potential security risk as this opens access to the full power of the CamelContext API.|false|boolean|
+|contentCache|Sets whether to use resource content cache or not.|false|boolean|
+|failOnNullBody|Whether to fail if no body exists.|true|boolean|
+|failOnNullHeader|Whether to fail if no header exists when validating against a header.|true|boolean|
+|headerName|To validate against a header instead of the message body.||string|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|disabledDeserializationFeatures|Comma-separated list of Jackson DeserializationFeature enum values which will be disabled for parsing the exchange body.||string|
+|enabledDeserializationFeatures|Comma-separated list of Jackson DeserializationFeature enum values which will be enabled for parsing the exchange body.||string|
+|errorHandler|To use a custom ValidatorErrorHandler. The default error handler captures the errors and throws an exception.||object|
+|uriSchemaLoader|To use a custom schema loader allowing for adding custom format validation.
The default implementation will create a schema loader that tries to determine the schema version from the $schema property of the specified schema.||object|
diff --git a/camel-jsonata.md b/camel-jsonata.md
new file mode 100644
index 0000000000000000000000000000000000000000..44e8eb94daeed5e27014e5a943537d1ee3c709b0
--- /dev/null
+++ b/camel-jsonata.md
@@ -0,0 +1,60 @@
+# Jsonata
+
+**Since Camel 3.5**
+
+**Only producer is supported**
+
+The Jsonata component allows you to process JSON messages using the
+[JSONATA](https://jsonata.org/) specification. This can be ideal when
+doing JSON to JSON transformation and other transformations from JSON.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-jsonata</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    jsonata:specName[?options]
+
+Where **specName** is the classpath-local URI of the specification to
+invoke, or the complete URL of the remote specification (e.g.
+`file://folder/myfile.vm`).
+
+# Samples
+
+For example, you could use something like:
+
+    from("activemq:My.Queue").
+      to("jsonata:com/acme/MyResponse.json");
+
+And a file-based resource:
+
+    from("activemq:My.Queue").
+      to("jsonata:file://myfolder/MyResponse.json?contentCache=true").
+      to("activemq:Another.Queue");
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|resourceUri|Path to the resource. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod.||string|
+|allowContextMapAll|Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so imposes a potential security risk as this opens access to the full power of the CamelContext API.|false|boolean|
+|contentCache|Sets whether to use resource content cache or not.|false|boolean|
+|inputType|Specifies if the input should be Jackson JsonNode or a JSON String.|Jackson|object|
+|outputType|Specifies if the output should be Jackson JsonNode or a JSON String.|Jackson|object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started.
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-jt400.md b/camel-jt400.md
new file mode 100644
index 0000000000000000000000000000000000000000..12be3556a2d4e520e424e41ac8a8ed25de6cad35
--- /dev/null
+++ b/camel-jt400.md
@@ -0,0 +1,217 @@
+# Jt400
+
+**Since Camel 1.5**
+
+**Both producer and consumer are supported**
+
+The JT400 component allows you to exchange messages with an IBM i system
+using data queues, message queues, or program calls. IBM i is the
+replacement for AS/400 and iSeries servers.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-jt400</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+To send or receive data from a data queue:
+
+    jt400://user:password@system/QSYS.LIB/library.LIB/queue.DTAQ[?options]
+
+To send or receive messages from a message queue:
+
+    jt400://user:password@system/QSYS.LIB/library.LIB/queue.MSGQ[?options]
+
+To call a program:
+
+    jt400://user:password@system/QSYS.LIB/library.LIB/program.PGM[?options]
+
+# Usage
+
+When configured as a data queue consumer endpoint, the endpoint will
+poll a data queue on an IBM i system. For every entry on the data queue,
+a new `Exchange` is sent with the entry’s data in the *In* message’s
+body, formatted either as a `String` or a `byte[]`, depending on the
+format.
+
+For a data queue provider endpoint, the *In* message body contents will
+be put on the data queue as either raw bytes or text.
+
+When configured as a message queue consumer endpoint, the endpoint will
+poll a message queue on an IBM i system. For every entry on the queue, a
+new `Exchange` is sent with the entry’s data in the *In* message’s body.
+The data is always formatted as a `String`. Note that only new messages
+will be processed. That is, this endpoint will not process any existing
+messages on the queue that have already been handled by another program.
+
+For a message queue provider endpoint, the *In* message body contents
+are presumed to be text and sent to the queue as an informational
+message. Inquiry messages or messages requiring a message ID are not
+supported.
+
+# Connection pool
+
+You can explicitly configure a connection pool on the Jt400Component, or
+as a URI option on the endpoint.
+
+# Program call
+
+This endpoint expects the input to be an `Object[]`, whose object types
+are `int`, `long`, `CharSequence` (such as `String`), or `byte[]`. All
+other data types in the input array will be converted to `String`. For
+character inputs, CCSID handling is performed through the native jt400
+library mechanisms. A parameter can be *omitted* by passing null as the
+value in its position (the program has to support it). After the program
+execution, the endpoint returns an `Object[]` in the message body.
+Depending on *format*, the returned array will be populated with
+`byte[]` or `String` objects representing the values as they were
+returned by the program. Input-only parameters will contain the same
+data as at the beginning of the invocation. This endpoint does not
+implement a provider endpoint!
+
+# Example
+
+In the snippet below, the data for an exchange sent to the
+`direct:george` endpoint will be put in the data queue `PENNYLANE` in
+library `BEATLES` on a system named `LIVERPOOL`.
+Another user connects to the same data queue to receive the information
+from the data queue and forward it to the `mock:ringo` endpoint.
+
+    public class Jt400RouteBuilder extends RouteBuilder {
+        @Override
+        public void configure() throws Exception {
+            from("direct:george").to("jt400://GEORGE:EGROEG@LIVERPOOL/QSYS.LIB/BEATLES.LIB/PENNYLANE.DTAQ");
+            from("jt400://RINGO:OGNIR@LIVERPOOL/QSYS.LIB/BEATLES.LIB/PENNYLANE.DTAQ").to("mock:ringo");
+        }
+    }
+
+## Program call examples
+
+In the snippet below, the Exchange sent to the direct:work endpoint
+will contain three strings that will be used as the arguments for the
+program “compute” in the library “assets”. This program will write the
+output values in the second and third parameters. All the parameters
+will be sent to the direct:play endpoint.
+
+    public class Jt400RouteBuilder extends RouteBuilder {
+        @Override
+        public void configure() throws Exception {
+            from("direct:work").to("jt400://GRUPO:ATWORK@server/QSYS.LIB/assets.LIB/compute.PGM?fieldsLength=10,10,512&outputFieldsIdx=2,3").to("direct:play");
+        }
+    }
+
+In this example, the Camel route will call the QUSRTVUS API to retrieve
+16 bytes from the user space "MYUSRSPACE" in the "MYLIB" library.
+
+    public class Jt400RouteBuilder extends RouteBuilder {
+        @Override
+        public void configure() throws Exception {
+            from("timer://foo?period=60000")
+                .process(exchange -> {
+                    String usrSpc = "MYUSRSPACEMYLIB ";
+                    Object[] parms = new Object[] {
+                        usrSpc, // Qualified user space name
+                        1,      // starting position
+                        16,     // length of data
+                        ""      // output
+                    };
+                    exchange.getIn().setBody(parms);
+                })
+                .to("jt400://*CURRENT:*CURRENT@localhost/qsys.lib/QUSRTVUS.PGM?fieldsLength=20,4,4,16&outputFieldsIdx=3")
+                .setBody(simple("${body[3]}"))
+                .to("direct:foo");
+        }
+    }
+
+## Writing to keyed data queues
+
+    from("jms:queue:input")
+        .to("jt400://username:password@system/lib.lib/MSGINDQ.DTAQ?keyed=true");
+
+## Reading from keyed data queues
+
+    from("jt400://username:password@system/lib.lib/MSGOUTDQ.DTAQ?keyed=true&searchKey=MYKEY&searchType=GE")
+        .to("jms:queue:output");
+
+## Writing to message queues
+
+    from("jms:queue:input")
+        .to("jt400://username:password@system/lib.lib/MSGINQ.MSGQ");
+
+## Reading from a message queue
+
+    from("jt400://username:password@system/lib.lib/MSGOUTQ.MSGQ")
+        .to("jms:queue:output");
+
+## Replying to an inquiry message on a message queue
+
+    from("jt400://username:password@localhost/qsys.lib/qusrsys.lib/myq.msgq?sendingReply=true")
+        .choice()
+            .when(header(Jt400Constants.MESSAGE_TYPE).isEqualTo(AS400Message.INQUIRY))
+                .process((exchange) -> {
+                    String reply = // insert reply logic here
+                    exchange.getIn().setBody(reply);
+                })
+                .to("jt400://username:password@localhost/qsys.lib/qusrsys.lib/myq.msgq");
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages will now be processed as a message and handled by the routing Error Handler.
Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|connectionPool|Default connection pool used by the component. Note that this pool is lazily initialized. This is because in a scenario where the user always provides a pool, it would be wasteful for Camel to initialize and keep an idle pool.||object| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. 
Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|userID|Returns the ID of the IBM i user.||string| +|password|Returns the password of the IBM i user.||string| +|systemName|Returns the name of the IBM i system.||string| +|objectPath|Returns the fully qualified integrated file system path name of the target object of this endpoint.||string| +|type|Whether to work with data queues or remote program call||object| +|ccsid|Sets the CCSID to use for the connection with the IBM i system.||integer| +|format|Sets the data format for sending messages.|text|object| +|guiAvailable|Sets whether IBM i prompting is enabled in the environment running Camel.|false|boolean| +|keyed|Whether to use keyed or non-keyed data queues.|false|boolean| +|searchKey|Search key for keyed data queues.||string| +|messageAction|Action to be taken on messages when read from a message queue. Messages can be marked as old (OLD), removed from the queue (REMOVE), or neither (SAME).|OLD|object| +|readTimeout|Timeout in millis the consumer will wait while trying to read a new message of the data queue.|30000|integer| +|searchType|Search type such as EQ for equal etc.|EQ|object| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|sendingReply|If true, the consumer endpoint will set the Jt400Constants.MESSAGE\_REPLYTO\_KEY header of the camel message for any IBM i inquiry messages received. 
If that message is then routed to a producer endpoint, the action will not be processed as sending a message to the queue, but rather a reply to the specific inquiry message.|true|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that typically occurs during the poll operation, before an Exchange has been created and routed in Camel.||object|
|outputFieldsIdxArray|Specifies which fields (program parameters) are output parameters.||object|
|outputFieldsLengthArray|Specifies the fields (program parameters) length as in the IBM i program definition.||object|
|procedureName|Procedure name from a service program to call.||string|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|backoffErrorThreshold|The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in.||integer|
|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier should kick in.||integer|
|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. &#10;
When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| +|secured|Whether connections to IBM i are secured with SSL.|false|boolean| diff --git a/camel-jte.md b/camel-jte.md new file mode 100644 index 0000000000000000000000000000000000000000..6ecbe361922e614e02aa873c00b875b16f4e275c --- /dev/null +++ b/camel-jte.md @@ -0,0 +1,130 @@ +# Jte + +**Since Camel 4.4** + +**Only producer is supported** + +The **jte:** component allows for processing a message using a +[JTE](https://jte.gg/) template. 
JTE is a Java Template Engine: you write templates that resemble Java
code, which are transformed into .java source files and compiled, giving
very fast performance.

Only use this component if you are familiar with Java programming.

Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-jte</artifactId>
        <version>x.x.x</version>
    </dependency>

# URI format

    jte:templateName[?options]

Where **templateName** is the classpath-local URI of the template to
invoke; or the complete URL of the remote template (e.g.:
`file://folder/myfile.jte`).

# JTE Context

Camel will provide exchange information in the JTE context, as an
`org.apache.camel.component.jte.Model` class with the following
information:

|key|value|
|---|---|
|exchange|The Exchange itself (only if allowContextMapAll=true).|
|headers|The headers of the message as java.util.Map.|
|body|The message body as Object.|
|strBody()|The message body converted to a String.|
|header("key")|Message header with the given key converted to a String value.|
|exchangeProperty("key")|Exchange property with the given key converted to a String value (only if allowContextMapAll=true).|

You can set up your custom JTE data model in the message header with the
key "**CamelJteDataModel**".

# Dynamic templates

Camel provides two headers by which you can define a different resource
location for a template or the template content itself. If any of these
headers is set, then Camel uses this over the endpoint configured
resource. This allows you to provide a dynamic template at runtime.

# Samples

For example, you could use something like:

    from("rest:get:item/{id}")
        .to("jte:com/acme/response.jte");

To use a JTE template to formulate a response to the REST get call:

    @import org.apache.camel.component.jte.Model
    @param Model model

    The item ${model.header("id")} is being processed.

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|allowContextMapAll|Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so imposes a potential security risk as this opens access to the full power of CamelContext API.|false|boolean|
|allowTemplateFromHeader|Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care.|false|boolean|
|contentType|Content type the JTE engine should use.|Plain|object|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. &#10;
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|preCompile|To speed up startup and rendering on your production server, it is possible to precompile all templates during the build. This way, the template engine can load each template's .class file directly without first compiling it.|false|boolean| +|workDir|Work directory where JTE will store compiled templates.|jte-classes|string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|resourceUri|Path to the resource. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod.||string| +|allowContextMapAll|Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so impose a potential security risk as this opens access to the full power of CamelContext API.|false|boolean| +|allowTemplateFromHeader|Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. 
However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care.|false|boolean|
|contentCache|Sets whether to use resource content cache or not.|false|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-kafka.md b/camel-kafka.md
new file mode 100644
index 0000000000000000000000000000000000000000..1c51bf29cdd378f03c881fc4917afe5e187e9583
--- /dev/null
+++ b/camel-kafka.md
@@ -0,0 +1,1019 @@
# Kafka

**Since Camel 2.13**

**Both producer and consumer are supported**

The Kafka component is used for communicating with the [Apache
Kafka](http://kafka.apache.org/) message broker.

Maven users will need to add the following dependency to their `pom.xml`
for this component.

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-kafka</artifactId>
        <version>x.x.x</version>
    </dependency>

# URI format

    kafka:topic[?options]

For more information about Producer/Consumer configuration, see:

[http://kafka.apache.org/documentation.html#newconsumerconfigs](http://kafka.apache.org/documentation.html#newconsumerconfigs)
[http://kafka.apache.org/documentation.html#producerconfigs](http://kafka.apache.org/documentation.html#producerconfigs)

If you want to send a message to a dynamic topic, use
`KafkaConstants.OVERRIDE_TOPIC`: it is a one-time header that
is not sent along with the message, and is in fact removed in the producer.
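The dynamic-topic header described above can be used like this (a minimal sketch; the `direct:start` endpoint and the `target` header are illustrative assumptions, while `KafkaConstants.OVERRIDE_TOPIC` is the header named above):

```java
// Illustrative route fragment: send each message to the topic named in a
// hypothetical "target" header. The OVERRIDE_TOPIC header is read (and
// removed) by the Kafka producer, so it is never sent to the broker.
from("direct:start")
    .setHeader(KafkaConstants.OVERRIDE_TOPIC, header("target"))
    .to("kafka:default_topic?brokers=localhost:9092");
```

If the header is absent, the message goes to the endpoint's configured topic (`default_topic` here).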
# Consumer error handling

While the Kafka consumer is polling messages from the Kafka broker,
errors can happen. This section describes what happens and what you can
configure.

The consumer may throw an exception when invoking the Kafka `poll` API,
for example, if a message cannot be deserialized due to invalid data,
among many other kinds of errors. Those errors are in the form of
`KafkaException`, which are either *retriable* or not. The exceptions
which can be retried (`RetriableException`) will be retried again (with
a poll timeout in between). All other kinds of exceptions are handled
according to the *pollOnError* configuration. This configuration has the
following values:

- DISCARD will discard the message and continue to poll the next
  message.

- ERROR\_HANDLER will use Camel’s error handler to process the
  exception, and afterwards continue to poll the next message.

- RECONNECT will re-connect the consumer and try to poll the message
  again.

- RETRY will let the consumer retry polling the same message again.

- STOP will stop the consumer (it has to be manually started/restarted
  if the consumer should be able to consume messages again).

The default is **ERROR\_HANDLER**, which will let Camel’s error handler
(if any is configured) process the caused exception, and afterwards
continue to poll the next message. This behavior is similar to the
*bridgeErrorHandler* option that Camel components have.

For advanced control, a custom implementation of
`org.apache.camel.component.kafka.PollExceptionStrategy` can be
configured on the component level, which allows controlling which of the
strategies to use for each exception.

# Consumer error handling (advanced)

By default, Camel will poll using the **ERROR\_HANDLER** to process
exceptions. How Camel handles a message that results in an exception can
be altered using the `breakOnFirstError` attribute in the configuration.
+Instead of continuing to poll the next message, Camel will instead +commit the offset so that the message that caused the exception will be +retried. This is similar to the **RETRY** polling strategy above. + + KafkaComponent kafka = new KafkaComponent(); + kafka.setBreakOnFirstError(true); + ... + camelContext.addComponent("kafka", kafka); + +It is recommended that you read the section below "Using manual commit +with Kafka consumer" to understand how `breakOnFirstError` will work +based on the `CommitManager` that is configured. + +# Samples + +## Consuming messages from Kafka + +Here is the minimal route you need to read messages from Kafka. + + from("kafka:test?brokers=localhost:9092") + .log("Message received from Kafka : ${body}") + .log(" on the topic ${headers[kafka.TOPIC]}") + .log(" on the partition ${headers[kafka.PARTITION]}") + .log(" with the offset ${headers[kafka.OFFSET]}") + .log(" with the key ${headers[kafka.KEY]}") + +If you need to consume messages from multiple topics, you can use a +comma separated list of topic names. + + from("kafka:test,test1,test2?brokers=localhost:9092") + .log("Message received from Kafka : ${body}") + .log(" on the topic ${headers[kafka.TOPIC]}") + .log(" on the partition ${headers[kafka.PARTITION]}") + .log(" with the offset ${headers[kafka.OFFSET]}") + .log(" with the key ${headers[kafka.KEY]}") + +It’s also possible to subscribe to multiple topics giving a pattern as +the topic name and using the `topicIsPattern` option. + + from("kafka:test.*?brokers=localhost:9092&topicIsPattern=true") + .log("Message received from Kafka : ${body}") + .log(" on the topic ${headers[kafka.TOPIC]}") + .log(" on the partition ${headers[kafka.PARTITION]}") + .log(" with the offset ${headers[kafka.OFFSET]}") + .log(" with the key ${headers[kafka.KEY]}") + +When consuming messages from Kafka, you can use your own offset +management and not delegate this management to Kafka. 
To keep the
offsets, the component needs a `StateRepository` implementation such as
`FileStateRepository`. This bean should be available in the registry.
Here is how to use it:

    // Create the repository in which the Kafka offsets will be persisted
    FileStateRepository repository = FileStateRepository.fileStateRepository(new File("/path/to/repo.dat"));

    // Bind this repository into the Camel registry
    Registry registry = createCamelRegistry();
    registry.bind("offsetRepo", repository);

    // Configure the camel context
    DefaultCamelContext camelContext = new DefaultCamelContext(registry);
    camelContext.addRoutes(new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            from("kafka:" + TOPIC + "?brokers=localhost:{{kafkaPort}}" +
                    // Set up the topic and broker address
                    "&groupId=A" +
                    // The consumer processor group ID
                    "&autoOffsetReset=earliest" +
                    // Ask to start from the beginning if we have unknown offset
                    "&offsetRepository=#offsetRepo")
                    // Keep the offsets in the previously configured repository
                    .to("mock:result");
        }
    });

## Producing messages to Kafka

Here is the minimal route you need in order to write messages to Kafka.

    from("direct:start")
        .setBody(constant("Message from Camel")) // Message to send
        .setHeader(KafkaConstants.KEY, constant("Camel")) // Key of the message
        .to("kafka:test?brokers=localhost:9092");

# SSL configuration

There are two ways to configure SSL communication on the
Kafka component.
The first way is through the many SSL endpoint parameters:

    from("kafka:" + TOPIC + "?brokers=localhost:{{kafkaPort}}" +
            "&groupId=A" +
            "&sslKeystoreLocation=/path/to/keystore.jks" +
            "&sslKeystorePassword=changeit" +
            "&sslKeyPassword=changeit" +
            "&securityProtocol=SSL")
        .to("mock:result");

The second way is to use the `sslContextParameters` endpoint parameter:

    // Configure the SSLContextParameters object
    KeyStoreParameters ksp = new KeyStoreParameters();
    ksp.setResource("/path/to/keystore.jks");
    ksp.setPassword("changeit");
    KeyManagersParameters kmp = new KeyManagersParameters();
    kmp.setKeyStore(ksp);
    kmp.setKeyPassword("changeit");
    SSLContextParameters scp = new SSLContextParameters();
    scp.setKeyManagers(kmp);

    // Bind this SSLContextParameters into the Camel registry
    Registry registry = createCamelRegistry();
    registry.bind("ssl", scp);

    // Configure the camel context
    DefaultCamelContext camelContext = new DefaultCamelContext(registry);
    camelContext.addRoutes(new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            from("kafka:" + TOPIC + "?brokers=localhost:{{kafkaPort}}" +
                    // Set up the topic and broker address
                    "&groupId=A" +
                    // The consumer processor group ID
                    "&sslContextParameters=#ssl" +
                    // Reference the SSL configuration
                    "&securityProtocol=SSL")
                    // The security protocol
                    .to("mock:result");
        }
    });

# Using the Kafka idempotent repository

The `camel-kafka` library provides a Kafka topic-based idempotent
repository. This repository broadcasts all changes to idempotent
state (add/remove) in a Kafka topic, and populates a local in-memory
cache for each repository’s process instance through event sourcing. The
topic used must be unique per idempotent repository instance. The
mechanism does not have any requirements about the number of topic
partitions, as the repository consumes from all partitions at the same
time.
It also does not have any requirements about the replication
factor of the topic. Each repository instance that uses the topic
(e.g., typically on different machines running in parallel) controls its
own consumer group, so in a cluster of 10 Camel processes using the same
topic, each will control its own offset. On startup, the instance
subscribes to the topic, rewinds the offset to the beginning and
rebuilds the cache to the latest state. The cache will not be considered
warmed up until one poll of `pollDurationMs` in length returns 0
records. Startup will not be completed until either the cache has warmed
up, or 30 seconds go by; if the latter happens, the idempotent
repository may be in an inconsistent state until its consumer catches up
to the end of the topic. Be mindful of the format of the header used for
the uniqueness check. By default, it uses Strings as the data types.
When using primitive numeric formats, the header must be deserialized
accordingly. Check the samples below for examples.

A `KafkaIdempotentRepository` has the following properties:

|Property|Default|Description|
|---|---|---|
|topic||**Required** The name of the Kafka topic to use to broadcast changes.|
|bootstrapServers||**Required** The `bootstrap.servers` property on the internal Kafka producer and consumer. Use this as shorthand if not setting `consumerConfig` and `producerConfig`. If used, this component will apply sensible default configurations for the producer and consumer.|
|groupId||The groupId to assign to the idempotent consumer.|
|startupOnly|false|Whether to sync on startup only, or to continue syncing while Camel is running.|
|maxCacheSize|1000|How many of the most recently used keys should be stored in memory.|
|pollDurationMs|100|The poll duration of the Kafka consumer. The local caches are updated immediately. This value will affect how far behind other peers that update their caches from the topic are relative to the idempotent consumer instance that sent the cache action message. If setting this value explicitly, be aware that there is a tradeoff between the remote cache liveness and the volume of network traffic between this repository’s consumer and the Kafka brokers. The cache warmup process also depends on there being one poll that fetches nothing - this indicates that the stream has been consumed up to the current point. If the poll duration is excessively long for the rate at which messages are sent on the topic, there exists a possibility that the cache cannot be warmed up and will operate in an inconsistent state relative to its peers until it catches up.|
|producerConfig||Sets the properties that will be used by the Kafka producer that broadcasts changes. Overrides `bootstrapServers`, so must define the Kafka `bootstrap.servers` property itself.|
|consumerConfig||Sets the properties that will be used by the Kafka consumer that populates the cache from the topic. Overrides `bootstrapServers`, so must define the Kafka `bootstrap.servers` property itself.|
The repository can be instantiated by defining the `topic` and
`bootstrapServers`, or the `producerConfig` and `consumerConfig`
property sets can be explicitly defined to enable features such as
SSL/SASL. To use, this repository must be placed in the Camel registry,
either manually or by registration as a bean in Spring/Blueprint, as it
is `CamelContext` aware.

Sample usage is as follows:

    KafkaIdempotentRepository kafkaIdempotentRepository = new KafkaIdempotentRepository("idempotent-db-inserts", "localhost:9091");

    SimpleRegistry registry = new SimpleRegistry();
    registry.put("insertDbIdemRepo", kafkaIdempotentRepository); // must be registered in the registry, to enable access to the CamelContext
    CamelContext context = new DefaultCamelContext(registry);

    // later in RouteBuilder...
    from("direct:performInsert")
        .idempotentConsumer(header("id")).idempotentRepository("insertDbIdemRepo")
        // once-only insert into the database
    .end()

In XML:

    <!-- simple -->
    <bean id="insertDbIdemRepo"
            class="org.apache.camel.processor.idempotent.kafka.KafkaIdempotentRepository">
        <property name="topic" value="idempotent-db-inserts"/>
        <property name="bootstrapServers" value="localhost:9091"/>
    </bean>

    <!-- complex -->
    <bean id="insertDbIdemRepo"
            class="org.apache.camel.processor.idempotent.kafka.KafkaIdempotentRepository">
        <property name="topic" value="idempotent-db-inserts"/>
        <property name="consumerConfig">
            <props>
                <prop key="bootstrap.servers">localhost:9091</prop>
            </props>
        </property>
        <property name="producerConfig">
            <props>
                <prop key="bootstrap.servers">localhost:9091</prop>
            </props>
        </property>
    </bean>

There are three alternatives to choose from when using idempotency with
numeric identifiers.
The first one is to use the static
`numericHeader` method from
`org.apache.camel.component.kafka.serde.KafkaSerdeHelper` to perform the
conversion for you:

    from("direct:performInsert")
        .idempotentConsumer(numericHeader("id")).idempotentRepository("insertDbIdemRepo")
        // once-only insert into the database
    .end()

Alternatively, it is possible to use a custom serializer configured via
the route URI to perform the conversion:

    public class CustomHeaderDeserializer extends DefaultKafkaHeaderDeserializer {
        private static final Logger LOG = LoggerFactory.getLogger(CustomHeaderDeserializer.class);

        @Override
        public Object deserialize(String key, byte[] value) {
            if (key.equals("id")) {
                BigInteger bi = new BigInteger(value);

                return String.valueOf(bi.longValue());
            } else {
                return super.deserialize(key, value);
            }
        }
    }

Lastly, it is also possible to do so in a processor:

    from(from).routeId("foo")
        .process(exchange -> {
            byte[] id = exchange.getIn().getHeader("id", byte[].class);

            BigInteger bi = new BigInteger(id);
            exchange.getIn().setHeader("id", String.valueOf(bi.longValue()));
        })
        .idempotentConsumer(header("id"))
        .idempotentRepository("kafkaIdempotentRepository")
        .to(to);

# Using manual commit with Kafka consumer

By default, the Kafka consumer will use auto commit, where the offset
will be committed automatically in the background using a given
interval.

In case you want to force manual commits, you can use the
`KafkaManualCommit` API from the Camel Exchange, stored on the message
header. This requires turning on manual commits by either setting the
option `allowManualCommit` to `true` on the `KafkaComponent` or on the
endpoint, for example:

    KafkaComponent kafka = new KafkaComponent();
    kafka.setAutoCommitEnable(false);
    kafka.setAllowManualCommit(true);
    ...
    camelContext.addComponent("kafka", kafka);

By default, it uses the `NoopCommitManager` behind the scenes.
To commit
an offset, use the `KafkaManualCommit` from Java
code such as a Camel `Processor`:

    public void process(Exchange exchange) {
        KafkaManualCommit manual =
            exchange.getIn().getHeader(KafkaConstants.MANUAL_COMMIT, KafkaManualCommit.class);
        manual.commit();
    }

The `KafkaManualCommit` will force a synchronous commit which will block
until the commit is acknowledged on Kafka, or if it fails an exception
is thrown. You can use an asynchronous commit as well by configuring the
`KafkaManualCommitFactory` with the
`DefaultKafkaManualAsyncCommitFactory` implementation.

Then the commit will be done in the next consumer loop using the Kafka
asynchronous commit API.

If you want to use a custom implementation of `KafkaManualCommit`, then
you can configure a custom `KafkaManualCommitFactory` on the
`KafkaComponent` that creates instances of your custom implementation.

When configuring a consumer to use manual commit and a specific
`CommitManager`, it is important to understand how these influence the
behavior of `breakOnFirstError`:

    KafkaComponent kafka = new KafkaComponent();
    kafka.setAutoCommitEnable(false);
    kafka.setAllowManualCommit(true);
    kafka.setBreakOnFirstError(true);
    kafka.setKafkaManualCommitFactory(new DefaultKafkaManualCommitFactory());
    ...
    camelContext.addComponent("kafka", kafka);

When the `CommitManager` is left to the default `NoopCommitManager`,
`breakOnFirstError` will not automatically commit the offset so that the
message with an error is retried. The consumer must manage that in the
route using `KafkaManualCommit`.

When the `CommitManager` is changed to either the synchronous or
asynchronous manager, `breakOnFirstError` will automatically commit
the offset so that the message with an error is retried. This message
will be continually retried until it can be processed without an error.
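The asynchronous variant mentioned above is configured the same way at the component level; this sketch simply swaps in the `DefaultKafkaManualCommitFactory` for its async counterpart named in the text:

```java
// Configuration sketch: manual commits performed asynchronously on the
// next consumer poll loop instead of blocking until acknowledged.
KafkaComponent kafka = new KafkaComponent();
kafka.setAutoCommitEnable(false);
kafka.setAllowManualCommit(true);
kafka.setKafkaManualCommitFactory(new DefaultKafkaManualAsyncCommitFactory());
// ...
camelContext.addComponent("kafka", kafka);
```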
**Note 1**: records from a partition must be processed and committed by
the same thread as the consumer. This means that certain EIPs, async or
concurrent operations in the DSL may cause the commit to fail. In such
circumstances, trying to commit the transaction will cause the Kafka
client to throw a `java.util.ConcurrentModificationException` exception
with the message `KafkaConsumer is not safe for multi-threaded access`.
To prevent this from happening, redesign your route to avoid those
operations.

**Note 2**: this is mostly useful with aggregation’s completion timeout
strategies.

# Pausable Consumers

The Kafka component supports pausable consumers. This type of consumer
can pause consuming data based on conditions external to the component
itself, such as an external system being unavailable or other transient
conditions.

    from("kafka:topic")
        .pausable(new KafkaConsumerListener(), () -> canContinue()) // the pausable check gets called if the exchange fails to be processed ...
        .routeId("pausable-route")
        .process(this::process) // Kafka consumer will be paused if this one throws an exception ...
        .to("some:destination"); // or this one

In this example, consuming messages can pause (by calling Kafka’s
Consumer pause method) if the result from `canContinue` is false.

The pausable EIP is meant to be used as a support mechanism when **there
is an exception** somewhere in the route that prevents the exchange from
being processed. More specifically, the check called by the `pausable`
EIP should be used to test for transient conditions preventing the
exchange from being processed.

Most users should prefer using the
[RoutePolicy](#manual::route-policy.adoc), which offers better control
of the route.

# Kafka Headers propagation

When consuming messages from Kafka, headers will be propagated to Camel
exchange headers automatically.
The producing flow follows the same behavior: Camel headers of the
exchange are propagated to Kafka message headers.

Since Kafka headers allow only `byte[]` values, a Camel exchange header
is propagated only if its value can be serialized to `byte[]`;
otherwise, the header is skipped. The following header value types are
supported: `String`, `Integer`, `Long`, `Double`, `Boolean`, `byte[]`.
Note: all headers propagated **from** Kafka **to** the Camel exchange
will contain `byte[]` values by default. To override this default
behavior, these URI parameters can be set: `headerDeserializer` for the
`from` route and `headerSerializer` for the `to` route. For example:

    from("kafka:my_topic?headerDeserializer=#myDeserializer")
    ...
    .to("kafka:my_topic?headerSerializer=#mySerializer")

By default, all headers are filtered by `KafkaHeaderFilterStrategy`,
which filters out headers starting with the `Camel` or
`org.apache.camel` prefixes. The default strategy can be overridden by
using the `headerFilterStrategy` URI parameter in both `to` and `from`
routes:

    from("kafka:my_topic?headerFilterStrategy=#myStrategy")
    ...
    .to("kafka:my_topic?headerFilterStrategy=#myStrategy")

The `myStrategy` object should implement `HeaderFilterStrategy` and must
be placed in the Camel registry, either manually or by registration as a
bean in Spring/Blueprint, as it is `CamelContext` aware.

# Kafka Transaction

You need to add `transactional.id`, `enable.idempotence` and `retries`
in `additional-properties` to enable Kafka transactions with the
producer.

    from("direct:transaction")
        .to("kafka:my_topic?additional-properties[transactional.id]=1234&additional-properties[enable.idempotence]=true&additional-properties[retries]=5");

At the end of exchange routing, the Kafka producer commits the
transaction, or aborts it if an exception is thrown or the exchange is
marked `RollbackOnly`.
Since Kafka does not support transactions across multiple threads, it
will throw a `ProducerFencedException` if another producer with the same
`transactional.id` makes a transactional request.

It works with JTA (`camel-jta`) by using `transacted()`. If the route
involves resources which support XA (such as SQL or JMS), they work in
tandem: both will either commit or roll back at the end of the exchange
routing. However, if the JTA transaction manager fails to commit during
the 2PC processing while the Kafka transaction has already been
committed, there is no chance to roll back the changes, since the Kafka
transaction does not support the JTA/XA spec. There is thus still a risk
to data consistency.

# Setting Kerberos config file

Configure the *krb5.conf* file directly through the API:

    static {
        KafkaComponent.setKerberosConfigLocation("path/to/config/file");
    }

# Batching Consumer

To use a Kafka batching consumer with Camel, an application has to set
the configuration `batching` to `true`.

The received records are stored in a list in the exchange used in the
pipeline. As such, it is possible to commit every record individually or
the whole batch at once by committing the last exchange on the list.

The size of the batch is controlled by the option `maxPollRecords`.

To avoid blocking for too long, waiting for the whole set of records to
fill the batch, it is possible to use the `pollTimeoutMs` option to set
a timeout for the polling. In this case, the batch may contain fewer
messages than set in `maxPollRecords`.

## Automatic Commits

By default, Camel uses automatic commits when using batch processing. In
this case, Camel automatically commits the records after they have been
successfully processed by the application.

In case of failures, the records will not be committed.

The code below provides an example of this approach:

    public void configure() {
        from("kafka:topic?groupId=myGroup&pollTimeoutMs=1000&batching=true&maxPollRecords=10&autoOffsetReset=earliest").process(e -> {
            // The received records are stored as exchanges in a list. This gets the list of those exchanges
            final List exchanges = e.getMessage().getBody(List.class);

            // Ensure we are actually receiving what we are asking for
            if (exchanges == null || exchanges.isEmpty()) {
                return;
            }

            // The records from the batch are stored in a list of exchanges in the original exchange. To process, we iterate over that list
            for (Object obj : exchanges) {
                if (obj instanceof Exchange exchange) {
                    LOG.info("Processing exchange with body {}", exchange.getMessage().getBody(String.class));
                }
            }
        }).to(KafkaTestUtil.MOCK_RESULT);
    }

### Handling Errors with Automatic Commits

When using automatic commits, Camel will not commit records if there is
a failure in processing. Because of this, there is a risk that records
could be reprocessed multiple times.

It is recommended to implement appropriate error handling mechanisms and
patterns (e.g., dead-letter queues) to prevent failed records from
blocking processing progress.

The code below provides an example of handling errors with automatic
commits:

    public void configure() {
        /*
         We want to use continued here, so that Camel auto-commits the
         batch even though part of it has failed.
         In a production scenario, applications should probably send
         these records to a separate topic or fix the condition that led
         to the failure.
         */
        onException(IllegalArgumentException.class).process(exchange -> {
            LOG.warn("Failed to process batch {}", exchange.getMessage().getBody());
            LOG.warn("Failed to process due to {}", exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Throwable.class).getMessage());
        }).continued(true);

        from("kafka:topic?groupId=myGroup&pollTimeoutMs=1000&batching=true&maxPollRecords=10&autoOffsetReset=earliest").process(e -> {
            // The received records are stored as exchanges in a list. This gets the list of those exchanges
            final List exchanges = e.getMessage().getBody(List.class);

            // Ensure we are actually receiving what we are asking for
            if (exchanges == null || exchanges.isEmpty()) {
                return;
            }

            // The records from the batch are stored in a list of exchanges in the original exchange.
            int i = 0;
            for (Object o : exchanges) {
                if (o instanceof Exchange exchange) {
                    i++;
                    LOG.info("Processing exchange with body {}", exchange.getMessage().getBody(String.class));

                    if (i == 4) {
                        throw new IllegalArgumentException("Failed to process record");
                    }
                }
            }
        }).to(KafkaTestUtil.MOCK_RESULT);
    }

### Manual Commits

When working with batch processing with manual commits, it’s up to the
application to commit the records, and to handle the outcome of
potentially invalid records.

The code below provides an example of this approach:

    public void configure() {
        from("kafka:topic?batching=true&allowManualCommit=true&maxPollRecords=100&kafkaManualCommitFactory=#class:org.apache.camel.component.kafka.consumer.DefaultKafkaManualCommitFactory")
            .process(e -> {
                // The received records are stored as exchanges in a list.
                // This gets the list of those exchanges
                final List exchanges = e.getMessage().getBody(List.class);

                // Ensure we are actually receiving what we are asking for
                if (exchanges == null || exchanges.isEmpty()) {
                    return;
                }

                /*
                 Every exchange in that list should contain a reference to the manual commit object. We use the reference
                 for the last exchange in the list to commit the whole batch
                 */
                final Object tmp = exchanges.getLast();
                if (tmp instanceof Exchange exchange) {
                    KafkaManualCommit manual =
                        exchange.getMessage().getHeader(KafkaConstants.MANUAL_COMMIT, KafkaManualCommit.class);
                    LOG.debug("Performing manual commit");
                    manual.commit();
                    LOG.debug("Done performing manual commit");
                }
            });
    }

### Dealing with long polling timeouts

In some cases, applications may want the polling process to have a long
timeout (see `pollTimeoutMs`).

To do so properly, first make sure to have a max polling interval that
is higher than the polling timeout (see `maxPollIntervalMs`).

Then, increase the shutdown timeout to ensure that committing, closing
and other Kafka operations are not abruptly aborted. For instance:

    public void configure() {
        // Note that this can be configured in other ways
        getCamelContext().getShutdownStrategy().setTimeout(10000);

        // route setup ...
    }

# Custom Subscription Adapters

Applications with complex subscription logic may provide a custom bean
to handle the subscription process. To do so, it is necessary to
implement the interface `SubscribeAdapter`.

**Example subscriber adapter that subscribes to a set of Kafka topics or
patterns**

    public class CustomSubscribeAdapter implements SubscribeAdapter {
        @Override
        public void subscribe(Consumer consumer, ConsumerRebalanceListener reBalanceListener, TopicInfo topicInfo) {
            if (topicInfo.isPattern()) {
                consumer.subscribe(topicInfo.getPattern(), reBalanceListener);
            } else {
                consumer.subscribe(topicInfo.getTopics(), reBalanceListener);
            }
        }
    }

Then, it is necessary to add it as a named bean instance to the
registry:

**Add to registry example**

    context.getRegistry().bind(KafkaConstants.KAFKA_SUBSCRIBE_ADAPTER, new CustomSubscribeAdapter());

# Interoperability

## JMS

When interoperating Kafka and JMS, it may be necessary to coerce the JMS
headers into their expected type.

For instance, when consuming messages from Kafka carrying JMS headers
and then sending them to a JMS broker, those headers are first
deserialized into a byte array. Then, the `camel-jms` component tries to
coerce this byte array into the specific type used by JMS. However, both
the origin endpoint as well as how this was set up in the code itself
may affect how the data is serialized and deserialized. As such, it is
not feasible to naively assume the data type of the byte array.

To address this issue, we provide a custom header deserializer to force
Kafka to deserialize the JMS data according to the JMS specification.
This approach ensures that the headers are properly interpreted and
processed by the camel-jms component.

Due to the inherent complexity of each possible system and endpoint, it
may not be possible for this deserializer to cover all possible
scenarios. As such, it is provided as a model that can be modified
and/or adapted for the specific needs of each application.

To utilize this solution, you need to modify the route URI on the
consumer end of the pipeline by including the `headerDeserializer`
option.
For example: + +**Route snippet** + + from("kafka:topic?headerDeserializer=#class:org.apache.camel.component.kafka.consumer.support.interop.JMSDeserializer") + .to("..."); + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|additionalProperties|Sets additional properties for either kafka consumer or kafka producer in case they can't be set directly on the camel configurations (e.g.: new Kafka properties that are not reflected yet in Camel configurations), the properties have to be prefixed with additionalProperties.., e.g.: additionalProperties.transactional.id=12345\&additionalProperties.schema.registry.url=http://localhost:8811/avro||object| +|brokers|URL of the Kafka brokers to use. The format is host1:port1,host2:port2, and the list can be a subset of brokers or a VIP pointing to a subset of brokers. This option is known as bootstrap.servers in the Kafka documentation.||string| +|clientId|The client id is a user-specified string sent in each request to help trace calls. It should logically identify the application making the request.||string| +|configuration|Allows to pre-configure the Kafka component with common options that the endpoints will reuse.||object| +|headerFilterStrategy|To use a custom HeaderFilterStrategy to filter header to and from Camel message.||object| +|reconnectBackoffMaxMs|The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.|1000|integer| +|retryBackoffMaxMs|The maximum amount of time in milliseconds to wait when retrying a request to the broker that has repeatedly failed. If provided, the backoff per client will increase exponentially for each failed request, up to this maximum. 
To prevent all clients from being synchronized upon retry, a randomized jitter with a factor of 0.2 will be applied to the backoff, resulting in the backoff falling within a range between 20% below and 20% above the computed value. If retry.backoff.ms is set to be higher than retry.backoff.max.ms, then retry.backoff.max.ms will be used as a constant backoff from the beginning without any exponential increase|1000|integer| +|retryBackoffMs|The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. This value is the initial backoff value and will increase exponentially for each failed request, up to the retry.backoff.max.ms value.|100|integer| +|shutdownTimeout|Timeout in milliseconds to wait gracefully for the consumer or producer to shut down and terminate its worker threads.|30000|integer| +|allowManualCommit|Whether to allow doing manual commits via KafkaManualCommit. If this option is enabled then an instance of KafkaManualCommit is stored on the Exchange message header, which allows end users to access this API and perform manual offset commits via the Kafka consumer.|false|boolean| +|autoCommitEnable|If true, periodically commit to ZooKeeper the offset of messages already fetched by the consumer. This committed offset will be used when the process fails as the position from which the new consumer will begin.|true|boolean| +|autoCommitIntervalMs|The frequency in ms that the consumer offsets are committed to zookeeper.|5000|integer| +|autoOffsetReset|What to do when there is no initial offset in ZooKeeper or if an offset is out of range: earliest : automatically reset the offset to the earliest offset latest: automatically reset the offset to the latest offset fail: throw exception to the consumer|latest|string| +|batching|Whether to use batching for processing or streaming. 
The default is false, which uses streaming|false|boolean| +|breakOnFirstError|This options controls what happens when a consumer is processing an exchange and it fails. If the option is false then the consumer continues to the next message and processes it. If the option is true then the consumer breaks out. Using the default NoopCommitManager will cause the consumer to not commit the offset so that the message is re-attempted. The consumer should use the KafkaManualCommit to determine the best way to handle the message. Using either the SyncCommitManager or the AsyncCommitManager, the consumer will seek back to the offset of the message that caused a failure, and then re-attempt to process this message. However, this can lead to endless processing of the same message if it's bound to fail every time, e.g., a poison message. Therefore, it's recommended to deal with that, for example, by using Camel's error handler.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|checkCrcs|Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. 
This check adds some overhead, so it may be disabled in cases seeking extreme performance.|true|boolean| +|commitTimeoutMs|The maximum time, in milliseconds, that the code will wait for a synchronous commit to complete|5000|duration| +|consumerRequestTimeoutMs|The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapsed, the client will resend the request if necessary or fail the request if retries are exhausted.|30000|integer| +|consumersCount|The number of consumers that connect to kafka server. Each consumer is run on a separate thread that retrieves and process the incoming data.|1|integer| +|fetchMaxBytes|The maximum amount of data the server should return for a fetch request This is not an absolute maximum, if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that the consumer can make progress. The maximum message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel.|52428800|integer| +|fetchMinBytes|The minimum amount of data the server should return for a fetch request. If insufficient data is available, the request will wait for that much data to accumulate before answering the request.|1|integer| +|fetchWaitMaxMs|The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy fetch.min.bytes|500|integer| +|groupId|A string that uniquely identifies the group of consumer processes to which this consumer belongs. By setting the same group id, multiple processes can indicate that they are all part of the same consumer group. This option is required for consumers.||string| +|groupInstanceId|A unique identifier of the consumer instance provided by the end user. 
Only non-empty strings are permitted. If set, the consumer is treated as a static member, which means that only one instance with this ID is allowed in the consumer group at any time. This can be used in combination with a larger session timeout to avoid group rebalances caused by transient unavailability (e.g., process restarts). If not set, the consumer will join the group as a dynamic member, which is the traditional behavior.||string| +|headerDeserializer|To use a custom KafkaHeaderDeserializer to deserialize kafka headers values||object| +|heartbeatIntervalMs|The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.|3000|integer| +|keyDeserializer|Deserializer class for the key that implements the Deserializer interface.|org.apache.kafka.common.serialization.StringDeserializer|string| +|maxPartitionFetchBytes|The maximum amount of data per-partition the server will return. The maximum total memory used for a request will be #partitions max.partition.fetch.bytes. This size must be at least as large as the maximum message size the server allows or else it is possible for the producer to send messages larger than the consumer can fetch. If that happens, the consumer can get stuck trying to fetch a large message on a certain partition.|1048576|integer| +|maxPollIntervalMs|The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. 
If poll() is not called before expiration of this timeout, then the consumer is considered failed, and the group will re-balance to reassign the partitions to another member.||duration| +|maxPollRecords|The maximum number of records returned in a single call to poll()|500|integer| +|offsetRepository|The offset repository to use to locally store the offset of each partition of the topic. Defining one will disable the autocommit.||object| +|partitionAssignor|The class name of the partition assignment strategy that the client will use to distribute partition ownership amongst consumer instances when group management is used|org.apache.kafka.clients.consumer.RangeAssignor|string| +|pollOnError|What to do if kafka threw an exception while polling for new messages. Will by default use the value from the component configuration unless an explicit value has been configured on the endpoint level. DISCARD will discard the message and continue to poll the next message. ERROR\_HANDLER will use Camel's error handler to process the exception, and afterwards continue to poll the next message. RECONNECT will re-connect the consumer and try polling the message again. RETRY will let the consumer retry poll the same message again. STOP will stop the consumer (it has to be manually started/restarted if the consumer should be able to consume messages again)|ERROR\_HANDLER|object| +|pollTimeoutMs|The timeout used when polling the KafkaConsumer.|5000|duration| +|preValidateHostAndPort|Whether to eager validate that broker host:port is valid and can be DNS resolved to known host during starting this consumer. If the validation fails, then an exception is thrown, which makes Camel fail fast. Disabling this will postpone the validation after the consumer is started, and Camel will keep re-connecting in case of validation or DNS resolution error.|true|boolean| +|seekTo|Set if KafkaConsumer should read from the beginning or the end on startup: SeekPolicy.BEGINNING: read from the beginning. 
SeekPolicy.END: read from the end.||object| +|sessionTimeoutMs|The timeout used to detect failures when using Kafka's group management facilities.|45000|integer| +|specificAvroReader|This enables the use of a specific Avro reader for use with the in multiple Schema registries documentation with Avro Deserializers implementation. This option is only available externally (not standard Apache Kafka)|false|boolean| +|topicIsPattern|Whether the topic is a pattern (regular expression). This can be used to subscribe to dynamic number of topics matching the pattern.|false|boolean| +|valueDeserializer|Deserializer class for value that implements the Deserializer interface.|org.apache.kafka.common.serialization.StringDeserializer|string| +|createConsumerBackoffInterval|The delay in millis seconds to wait before trying again to create the kafka consumer (kafka-client).|5000|integer| +|createConsumerBackoffMaxAttempts|Maximum attempts to create the kafka consumer (kafka-client), before eventually giving up and failing. Error during creating the consumer may be fatal due to invalid configuration and as such recovery is not possible. However, one part of the validation is DNS resolution of the bootstrap broker hostnames. This may be a temporary networking problem, and could potentially be recoverable. While other errors are fatal, such as some invalid kafka configurations. Unfortunately, kafka-client does not separate this kind of errors. Camel will by default retry forever, and therefore never give up. If you want to give up after many attempts then set this option and Camel will then when giving up terminate the consumer. To try again, you can manually restart the consumer by stopping, and starting the route.||integer| +|isolationLevel|Controls how to read messages written transactionally. If set to read\_committed, consumer.poll() will only return transactional messages which have been committed. 
If set to read\_uncommitted (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode. Messages will always be returned in offset order. Hence, in read\_committed mode, consumer.poll() will only return messages up to the last stable offset (LSO), which is the one less than the offset of the first open transaction. In particular, any messages appearing after messages belonging to ongoing transactions will be withheld until the relevant transaction has been completed. As a result, read\_committed consumers will not be able to read up to the high watermark when there are in flight transactions. Further, when in read\_committed the seekToEnd method will return the LSO|read\_uncommitted|string| +|kafkaManualCommitFactory|Factory to use for creating KafkaManualCommit instances. This allows to plugin a custom factory to create custom KafkaManualCommit instances in case special logic is needed when doing manual commits that deviates from the default implementation that comes out of the box.||object| +|pollExceptionStrategy|To use a custom strategy with the consumer to control how to handle exceptions thrown from the Kafka broker while pooling messages.||object| +|subscribeConsumerBackoffInterval|The delay in millis seconds to wait before trying again to subscribe to the kafka broker.|5000|integer| +|subscribeConsumerBackoffMaxAttempts|Maximum number the kafka consumer will attempt to subscribe to the kafka broker, before eventually giving up and failing. Error during subscribing the consumer to the kafka topic could be temporary errors due to network issues, and could potentially be recoverable. Camel will by default retry forever, and therefore never give up. If you want to give up after many attempts, then set this option and Camel will then when giving up terminate the consumer. 
You can manually restart the consumer by stopping and starting the route, to try again.||integer| +|batchWithIndividualHeaders|If this feature is enabled and a single element of a batch is an Exchange or Message, the producer will generate individual kafka header values for it by using the batch Message to determine the values. Normal behavior consists of always using the same header values (which are determined by the parent Exchange which contains the Iterable or Iterator).|false|boolean| +|bufferMemorySize|The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server, the producer will either block or throw an exception based on the preference specified by block.on.buffer.full.This setting should correspond roughly to the total memory the producer will use, but is not a hard bound since not all memory the producer uses is used for buffering. Some additional memory will be used for compression (if compression is enabled) as well as for maintaining in-flight requests.|33554432|integer| +|compressionCodec|This parameter allows you to specify the compression codec for all data generated by this producer. Valid values are none, gzip, snappy, lz4 and zstd.|none|string| +|connectionMaxIdleMs|Close idle connections after the number of milliseconds specified by this config.|540000|integer| +|deliveryTimeoutMs|An upper bound on the time to report success or failure after a call to send() returns. This limits the total time that a record will be delayed prior to sending, the time to await acknowledgement from the broker (if expected), and the time allowed for retriable send failures.|120000|integer| +|enableIdempotence|When set to 'true', the producer will ensure that exactly one copy of each message is written in the stream. If 'false', producer retries due to broker failures, etc., may write duplicates of the retried message in the stream. 
Note that enabling idempotence requires max.in.flight.requests.per.connection to be less than or equal to 5 (with message ordering preserved for any allowable value), retries to be greater than 0, and acks must be 'all'. Idempotence is enabled by default if no conflicting configurations are set. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled. If idempotence is explicitly enabled and conflicting configurations are set, a ConfigException is thrown.|true|boolean|
+|headerSerializer|To use a custom KafkaHeaderSerializer to serialize Kafka header values||object|
+|key|The record key (or null if no key is specified). If this option has been configured then it takes precedence over the header KafkaConstants#KEY||string|
+|keySerializer|The serializer class for keys (defaults to the same as for messages if nothing is given).|org.apache.kafka.common.serialization.StringSerializer|string|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time.|false|boolean|
+|lingerMs|The producer groups together any records that arrive in between request transmissions into a single, batched, request. Normally, this occurs only under load when records arrive faster than they can be sent out. However, in some circumstances, the client may want to reduce the number of requests even under a moderate load. This setting accomplishes this by adding a small amount of artificial delay. That is, rather than immediately sending out a record, the producer will wait for up to the given delay to allow other records to be sent so that they can be batched together. This can be thought of as analogous to Nagle's algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition, it will be sent immediately regardless of this setting, however, if we have fewer than this many bytes accumulated for this partition, we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e., no delay). Setting linger.ms=5, for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load.|0|integer|
+|maxBlockMs|The configuration controls how long the KafkaProducer's send(), partitionsFor(), initTransactions(), sendOffsetsToTransaction(), commitTransaction() and abortTransaction() methods will block. For send() this timeout bounds the total time waiting for both metadata fetch and buffer allocation (blocking in the user-supplied serializers or partitioner is not counted against this timeout). For partitionsFor() this timeout bounds the time spent waiting for metadata if it is unavailable. The transaction-related methods always block, but may time out if the transaction coordinator could not be discovered or did not respond within the timeout.|60000|integer|
+|maxInFlightRequest|The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this setting is set to be greater than 1 and there are failed sends, there is a risk of message re-ordering due to retries (i.e., if retries are enabled).|5|integer|
+|maxRequestSize|The maximum size of a request. This is also effectively a cap on the maximum record size. Note that the server has its own cap on record size which may be different from this. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.|1048576|integer|
+|metadataMaxAgeMs|The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.|300000|integer|
+|metricReporters|A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.||string|
+|metricsSampleWindowMs|The window of time a metrics sample is computed over.|30000|integer|
+|noOfMetricsSample|The number of samples maintained to compute metrics.|2|integer|
+|partitioner|The partitioner class for partitioning messages amongst sub-topics. The default partitioner is based on the hash of the key.||string|
+|partitionerIgnoreKeys|Whether the message keys should be ignored when computing the partition. This setting has effect only when partitioner is not set.|false|boolean|
+|partitionKey|The partition to which the record will be sent (or null if no partition was specified). If this option has been configured then it takes precedence over the header KafkaConstants#PARTITION\_KEY||integer|
+|producerBatchSize|The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This helps performance on both the client and the server. This configuration controls the default batch size in bytes. No attempt will be made to batch records larger than this size. Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent. A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable batching entirely).
A very large batch size may use memory a bit more wastefully as we will always allocate a buffer of the specified batch size in anticipation of additional records.|16384|integer|
+|queueBufferingMaxMessages|The maximum number of unsent messages that can be queued up by the producer when using async mode before either the producer must be blocked or data must be dropped.|10000|integer|
+|receiveBufferBytes|The size of the TCP receive buffer (SO\_RCVBUF) to use when reading data.|65536|integer|
+|reconnectBackoffMs|The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the consumer to the broker.|50|integer|
+|recordMetadata|Whether the producer should store the RecordMetadata results from sending to Kafka. The results are stored in a List of RecordMetadata. The list is stored on a header with the key KafkaConstants#KAFKA\_RECORDMETA|true|boolean|
+|requestRequiredAcks|The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following settings are allowed: acks=0 If set to zero, then the producer will not wait for any acknowledgment from the server at all. The record will be immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retry configuration will not take effect (as the client won't generally know of any failures). The offset given back for each record will always be set to -1. acks=1 This will mean the leader will write the record to its local log but will respond without awaiting full acknowledgment from all followers. In this case should the leader fail immediately after acknowledging the record, but before the followers have replicated it, then the record will be lost. acks=all This means the leader will wait for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee. This is equivalent to the acks=-1 setting. Note that enabling idempotence requires this config value to be 'all'. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled.|all|string|
+|requestTimeoutMs|The amount of time the broker will wait trying to meet the request.required.acks requirement before sending back an error to the client.|30000|integer|
+|retries|Setting a value greater than zero will cause the client to resend any record that has failed to be sent due to a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Produce requests will be failed before the number of retries has been exhausted if the timeout configured by delivery.timeout.ms expires first before successful acknowledgement. Users should generally prefer to leave this config unset and instead use delivery.timeout.ms to control retry behavior. Enabling idempotence requires this config value to be greater than 0. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled. Allowing retries while setting enable.idempotence to false and max.in.flight.requests.per.connection to greater than 1 will potentially change the ordering of records because if two batches are sent to a single partition, and the first fails and is retried but the second succeeds, then the records in the second batch may appear first.||integer|
+|sendBufferBytes|Socket write buffer size.|131072|integer|
+|useIterator|Sets whether sending to Kafka should send the message body as a single record, or use a java.util.Iterator to send multiple records to Kafka (if the message body can be iterated).|true|boolean|
+|workerPool|To use a custom worker pool to continue routing the Exchange after the Kafka server has acknowledged the message sent to it from the KafkaProducer, using asynchronous non-blocking processing. If using this option, then you must handle the lifecycle of the thread pool to shut the pool down when no longer needed.||object|
+|workerPoolCoreSize|Number of core threads for the worker pool used to continue routing the Exchange after the Kafka server has acknowledged the message sent to it from the KafkaProducer, using asynchronous non-blocking processing.|10|integer|
+|workerPoolMaxSize|Maximum number of threads for the worker pool used to continue routing the Exchange after the Kafka server has acknowledged the message sent to it from the KafkaProducer, using asynchronous non-blocking processing.|20|integer|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component.
This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|kafkaClientFactory|Factory to use for creating org.apache.kafka.clients.consumer.KafkaConsumer and org.apache.kafka.clients.producer.KafkaProducer instances. This allows configuring a custom factory to create instances with logic that extends the vanilla Kafka clients.||object|
+|synchronous|Sets whether synchronous processing should be strictly used.|false|boolean|
+|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component.|true|boolean|
+|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean|
+|interceptorClasses|Sets interceptors for producers or consumers. Producer interceptors have to be classes implementing org.apache.kafka.clients.producer.ProducerInterceptor. Consumer interceptors have to be classes implementing org.apache.kafka.clients.consumer.ConsumerInterceptor. Note that if you use a producer interceptor on a consumer it will throw a class cast exception at runtime.||string|
+|schemaRegistryURL|URL of the schema registry servers to use. The format is host1:port1,host2:port2. This is known as schema.registry.url in multiple Schema registries documentation. This option is only available externally (not standard Apache Kafka).||string|
+|kerberosBeforeReloginMinTime|Login thread sleep time between refresh attempts.|60000|integer|
+|kerberosConfigLocation|Location of the kerberos config file.||string|
+|kerberosInitCmd|Kerberos kinit command path. Default is /usr/bin/kinit.|/usr/bin/kinit|string|
+|kerberosPrincipalToLocalRules|A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in order, and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, principal names of the form {username}/{hostname}@{REALM} are mapped to {username}. For more details on the format, please see the Security Authorization and ACLs documentation (at the Apache Kafka project website). Multiple values can be separated by comma.|DEFAULT|string|
+|kerberosRenewJitter|Percentage of random jitter added to the renewal time.|0.05|number|
+|kerberosRenewWindowFactor|Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.|0.8|number|
+|saslJaasConfig|Exposes the Kafka sasl.jaas.config parameter. Example: org.apache.kafka.common.security.plain.PlainLoginModule required username=USERNAME password=PASSWORD;||string|
+|saslKerberosServiceName|The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.||string|
+|saslMechanism|The Simple Authentication and Security Layer (SASL) Mechanism used. For the valid values see http://www.iana.org/assignments/sasl-mechanisms/sasl-mechanisms.xhtml|GSSAPI|string|
+|securityProtocol|Protocol used to communicate with brokers. SASL\_PLAINTEXT, PLAINTEXT, SASL\_SSL and SSL are supported.|PLAINTEXT|string|
+|sslCipherSuites|A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default, all the available cipher suites are supported.||string|
+|sslContextParameters|SSL configuration using a Camel SSLContextParameters object. If configured, it's applied before the other SSL endpoint parameters.
NOTE: Kafka only supports loading keystore from file locations, so prefix the location with file: in the KeyStoreParameters.resource option.||object| +|sslEnabledProtocols|The list of protocols enabled for SSL connections. The default is TLSv1.2,TLSv1.3 when running with Java 11 or newer, TLSv1.2 otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for SslProtocol.||string| +|sslEndpointAlgorithm|The endpoint identification algorithm to validate server hostname using server certificate. Use none or false to disable server hostname verification.|https|string| +|sslKeymanagerAlgorithm|The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.|SunX509|string| +|sslKeyPassword|The password of the private key in the key store file or the PEM key specified in sslKeystoreKey. This is required for clients only if two-way authentication is configured.||string| +|sslKeystoreLocation|The location of the key store file. This is optional for the client and can be used for two-way authentication for the client.||string| +|sslKeystorePassword|The store password for the key store file. This is optional for the client and only needed if sslKeystoreLocation is configured. Key store password is not supported for PEM format.||string| +|sslKeystoreType|The file format of the key store file. This is optional for the client. The default value is JKS|JKS|string| +|sslProtocol|The SSL protocol used to generate the SSLContext. The default is TLSv1.3 when running with Java 11 or newer, TLSv1.2 otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are TLSv1.2 and TLSv1.3. 
TLS, TLSv1.1, SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and sslEnabledProtocols, clients will downgrade to TLSv1.2 if the server does not support TLSv1.3. If this config is set to TLSv1.2, clients will not use TLSv1.3 even if it is one of the values in sslEnabledProtocols and the server only supports TLSv1.3.||string| +|sslProvider|The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.||string| +|sslTrustmanagerAlgorithm|The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.|PKIX|string| +|sslTruststoreLocation|The location of the trust store file.||string| +|sslTruststorePassword|The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format.||string| +|sslTruststoreType|The file format of the trust store file. The default value is JKS.|JKS|string| +|useGlobalSslContextParameters|Enable usage of global SSL context parameters.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|topic|Name of the topic to use. On the consumer you can use comma to separate multiple topics. A producer can only send a message to a single topic.||string| +|additionalProperties|Sets additional properties for either kafka consumer or kafka producer in case they can't be set directly on the camel configurations (e.g.: new Kafka properties that are not reflected yet in Camel configurations), the properties have to be prefixed with additionalProperties.., e.g.: additionalProperties.transactional.id=12345\&additionalProperties.schema.registry.url=http://localhost:8811/avro||object| +|brokers|URL of the Kafka brokers to use. 
The format is host1:port1,host2:port2, and the list can be a subset of brokers or a VIP pointing to a subset of brokers. This option is known as bootstrap.servers in the Kafka documentation.||string| +|clientId|The client id is a user-specified string sent in each request to help trace calls. It should logically identify the application making the request.||string| +|headerFilterStrategy|To use a custom HeaderFilterStrategy to filter header to and from Camel message.||object| +|reconnectBackoffMaxMs|The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.|1000|integer| +|retryBackoffMaxMs|The maximum amount of time in milliseconds to wait when retrying a request to the broker that has repeatedly failed. If provided, the backoff per client will increase exponentially for each failed request, up to this maximum. To prevent all clients from being synchronized upon retry, a randomized jitter with a factor of 0.2 will be applied to the backoff, resulting in the backoff falling within a range between 20% below and 20% above the computed value. If retry.backoff.ms is set to be higher than retry.backoff.max.ms, then retry.backoff.max.ms will be used as a constant backoff from the beginning without any exponential increase|1000|integer| +|retryBackoffMs|The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. 
This value is the initial backoff value and will increase exponentially for each failed request, up to the retry.backoff.max.ms value.|100|integer|
+|shutdownTimeout|Timeout in milliseconds to wait gracefully for the consumer or producer to shut down and terminate its worker threads.|30000|integer|
+|allowManualCommit|Whether to allow doing manual commits via KafkaManualCommit. If this option is enabled then an instance of KafkaManualCommit is stored on the Exchange message header, which allows end users to access this API and perform manual offset commits via the Kafka consumer.|false|boolean|
+|autoCommitEnable|If true, periodically commit to ZooKeeper the offset of messages already fetched by the consumer. This committed offset will be used when the process fails as the position from which the new consumer will begin.|true|boolean|
+|autoCommitIntervalMs|The frequency in ms that the consumer offsets are committed to zookeeper.|5000|integer|
+|autoOffsetReset|What to do when there is no initial offset in ZooKeeper or if an offset is out of range: earliest: automatically reset the offset to the earliest offset. latest: automatically reset the offset to the latest offset. fail: throw an exception to the consumer.|latest|string|
+|batching|Whether to use batching for processing or streaming. The default is false, which uses streaming.|false|boolean|
+|breakOnFirstError|This option controls what happens when a consumer is processing an exchange and it fails. If the option is false then the consumer continues to the next message and processes it. If the option is true then the consumer breaks out. Using the default NoopCommitManager will cause the consumer to not commit the offset so that the message is re-attempted. The consumer should use the KafkaManualCommit to determine the best way to handle the message. Using either the SyncCommitManager or the AsyncCommitManager, the consumer will seek back to the offset of the message that caused a failure, and then re-attempt to process this message. However, this can lead to endless processing of the same message if it's bound to fail every time, e.g., a poison message. Therefore, it's recommended to deal with that, for example, by using Camel's error handler.|false|boolean|
+|checkCrcs|Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance.|true|boolean|
+|commitTimeoutMs|The maximum time, in milliseconds, that the code will wait for a synchronous commit to complete.|5000|duration|
+|consumerRequestTimeoutMs|The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses, the client will resend the request if necessary or fail the request if retries are exhausted.|30000|integer|
+|consumersCount|The number of consumers that connect to the Kafka server. Each consumer runs on a separate thread that retrieves and processes the incoming data.|1|integer|
+|fetchMaxBytes|The maximum amount of data the server should return for a fetch request. This is not an absolute maximum; if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that the consumer can make progress. The maximum message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel.|52428800|integer|
+|fetchMinBytes|The minimum amount of data the server should return for a fetch request.
If insufficient data is available, the request will wait for that much data to accumulate before answering the request.|1|integer| +|fetchWaitMaxMs|The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy fetch.min.bytes|500|integer| +|groupId|A string that uniquely identifies the group of consumer processes to which this consumer belongs. By setting the same group id, multiple processes can indicate that they are all part of the same consumer group. This option is required for consumers.||string| +|groupInstanceId|A unique identifier of the consumer instance provided by the end user. Only non-empty strings are permitted. If set, the consumer is treated as a static member, which means that only one instance with this ID is allowed in the consumer group at any time. This can be used in combination with a larger session timeout to avoid group rebalances caused by transient unavailability (e.g., process restarts). If not set, the consumer will join the group as a dynamic member, which is the traditional behavior.||string| +|headerDeserializer|To use a custom KafkaHeaderDeserializer to deserialize kafka headers values||object| +|heartbeatIntervalMs|The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.|3000|integer| +|keyDeserializer|Deserializer class for the key that implements the Deserializer interface.|org.apache.kafka.common.serialization.StringDeserializer|string| +|maxPartitionFetchBytes|The maximum amount of data per-partition the server will return. 
The maximum total memory used for a request will be #partitions * max.partition.fetch.bytes. This size must be at least as large as the maximum message size the server allows or else it is possible for the producer to send messages larger than the consumer can fetch. If that happens, the consumer can get stuck trying to fetch a large message on a certain partition.|1048576|integer|
+|maxPollIntervalMs|The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed, and the group will re-balance to reassign the partitions to another member.||duration|
+|maxPollRecords|The maximum number of records returned in a single call to poll().|500|integer|
+|offsetRepository|The offset repository to use to locally store the offset of each partition of the topic. Defining one will disable the autocommit.||object|
+|partitionAssignor|The class name of the partition assignment strategy that the client will use to distribute partition ownership amongst consumer instances when group management is used.|org.apache.kafka.clients.consumer.RangeAssignor|string|
+|pollOnError|What to do if Kafka threw an exception while polling for new messages. Will by default use the value from the component configuration unless an explicit value has been configured on the endpoint level. DISCARD will discard the message and continue to poll the next message. ERROR\_HANDLER will use Camel's error handler to process the exception, and afterwards continue to poll the next message. RECONNECT will re-connect the consumer and try polling the message again. RETRY will let the consumer retry polling the same message again.
STOP will stop the consumer (it has to be manually started/restarted if the consumer should be able to consume messages again).|ERROR\_HANDLER|object|
+|pollTimeoutMs|The timeout used when polling the KafkaConsumer.|5000|duration|
+|preValidateHostAndPort|Whether to eagerly validate that the broker host:port is valid and can be DNS resolved to a known host when starting this consumer. If the validation fails, then an exception is thrown, which makes Camel fail fast. Disabling this will postpone the validation until after the consumer is started, and Camel will keep re-connecting in case of validation or DNS resolution error.|true|boolean|
+|seekTo|Set if KafkaConsumer should read from the beginning or the end on startup: SeekPolicy.BEGINNING: read from the beginning. SeekPolicy.END: read from the end.||object|
+|sessionTimeoutMs|The timeout used to detect failures when using Kafka's group management facilities.|45000|integer|
+|specificAvroReader|This enables the use of a specific Avro reader for use with the multiple Schema registries documentation with Avro Deserializers implementation. This option is only available externally (not standard Apache Kafka).|false|boolean|
+|topicIsPattern|Whether the topic is a pattern (regular expression). This can be used to subscribe to a dynamic number of topics matching the pattern.|false|boolean|
+|valueDeserializer|Deserializer class for the value that implements the Deserializer interface.|org.apache.kafka.common.serialization.StringDeserializer|string|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) occurring while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|isolationLevel|Controls how to read messages written transactionally. If set to read\_committed, consumer.poll() will only return transactional messages which have been committed. If set to read\_uncommitted (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode. Messages will always be returned in offset order. Hence, in read\_committed mode, consumer.poll() will only return messages up to the last stable offset (LSO), which is the one less than the offset of the first open transaction. In particular, any messages appearing after messages belonging to ongoing transactions will be withheld until the relevant transaction has been completed. As a result, read\_committed consumers will not be able to read up to the high watermark when there are in flight transactions. Further, when in read\_committed the seekToEnd method will return the LSO.|read\_uncommitted|string|
+|kafkaManualCommitFactory|Factory to use for creating KafkaManualCommit instances. This allows plugging in a custom factory to create custom KafkaManualCommit instances in case special logic is needed when doing manual commits that deviates from the default implementation that comes out of the box.||object|
+|batchWithIndividualHeaders|If this feature is enabled and a single element of a batch is an Exchange or Message, the producer will generate individual Kafka header values for it by using the batch Message to determine the values. Normal behavior consists of always using the same header values (which are determined by the parent Exchange which contains the Iterable or Iterator).|false|boolean|
+|bufferMemorySize|The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server, the producer will either block or throw an exception based on the preference specified by block.on.buffer.full. This setting should correspond roughly to the total memory the producer will use, but is not a hard bound since not all memory the producer uses is used for buffering. Some additional memory will be used for compression (if compression is enabled) as well as for maintaining in-flight requests.|33554432|integer|
+|compressionCodec|This parameter allows you to specify the compression codec for all data generated by this producer. Valid values are none, gzip, snappy, lz4 and zstd.|none|string|
+|connectionMaxIdleMs|Close idle connections after the number of milliseconds specified by this config.|540000|integer|
+|deliveryTimeoutMs|An upper bound on the time to report success or failure after a call to send() returns. This limits the total time that a record will be delayed prior to sending, the time to await acknowledgement from the broker (if expected), and the time allowed for retriable send failures.|120000|integer|
+|enableIdempotence|When set to 'true', the producer will ensure that exactly one copy of each message is written in the stream.
If 'false', producer retries due to broker failures, etc., may write duplicates of the retried message in the stream. Note that enabling idempotence requires max.in.flight.requests.per.connection to be less than or equal to 5 (with message ordering preserved for any allowable value), retries to be greater than 0, and acks must be 'all'. Idempotence is enabled by default if no conflicting configurations are set. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled. If idempotence is explicitly enabled and conflicting configurations are set, a ConfigException is thrown.|true|boolean| +|headerSerializer|To use a custom KafkaHeaderSerializer to serialize kafka headers values||object| +|key|The record key (or null if no key is specified). If this option has been configured then it take precedence over header KafkaConstants#KEY||string| +|keySerializer|The serializer class for keys (defaults to the same as for messages if nothing is given).|org.apache.kafka.common.serialization.StringSerializer|string| +|lingerMs|The producer groups together any records that arrive in between request transmissions into a single, batched, request. Normally, this occurs only under load when records arrive faster than they can be sent out. However, in some circumstances, the client may want to reduce the number of requests even under a moderate load. This setting accomplishes this by adding a small amount of artificial delay. That is, rather than immediately sending out a record, the producer will wait for up to the given delay to allow other records to be sent so that they can be batched together. This can be thought of as analogous to Nagle's algorithm in TCP. 
This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition, it will be sent immediately regardless of this setting, however, if we have fewer than this many bytes accumulated for this partition, we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e., no delay). Setting linger.ms=5, for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load.|0|integer| +|maxBlockMs|The configuration controls how long the KafkaProducer's send(), partitionsFor(), initTransactions(), sendOffsetsToTransaction(), commitTransaction() and abortTransaction() methods will block. For send() this timeout bounds the total time waiting for both metadata fetch and buffer allocation (blocking in the user-supplied serializers or partitioner is not counted against this timeout). For partitionsFor() this time out bounds the time spent waiting for metadata if it is unavailable. The transaction-related methods always block, but may time out if the transaction coordinator could not be discovered or did not respond within the timeout.|60000|integer| +|maxInFlightRequest|The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this setting is set to be greater than 1 and there are failed sends, there is a risk of message re-ordering due to retries (i.e., if retries are enabled).|5|integer| +|maxRequestSize|The maximum size of a request. This is also effectively a cap on the maximum record size. Note that the server has its own cap on record size which may be different from this. 
This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.|1048576|integer|
|metadataMaxAgeMs|The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.|300000|integer|
|metricReporters|A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.||string|
|metricsSampleWindowMs|The window of time a metrics sample is computed over.|30000|integer|
|noOfMetricsSample|The number of samples maintained to compute metrics.|2|integer|
|partitioner|The partitioner class for partitioning messages amongst sub-topics. The default partitioner is based on the hash of the key.||string|
|partitionerIgnoreKeys|Whether the message keys should be ignored when computing the partition. This setting has an effect only when partitioner is not set|false|boolean|
|partitionKey|The partition to which the record will be sent (or null if no partition was specified). If this option has been configured then it takes precedence over the header KafkaConstants#PARTITION\_KEY||integer|
|producerBatchSize|The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This helps performance on both the client and the server. This configuration controls the default batch size in bytes. No attempt will be made to batch records larger than this size. Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent. A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable batching entirely). 
A very large batch size may use memory a bit more wastefully as we will always allocate a buffer of the specified batch size in anticipation of additional records.|16384|integer|
|queueBufferingMaxMessages|The maximum number of unsent messages that can be queued up by the producer when using async mode before either the producer must be blocked or data must be dropped.|10000|integer|
|receiveBufferBytes|The size of the TCP receive buffer (SO\_RCVBUF) to use when reading data.|65536|integer|
|reconnectBackoffMs|The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the consumer to the broker.|50|integer|
|recordMetadata|Whether the producer should store the RecordMetadata results from sending to Kafka. The results are stored in a List containing the RecordMetadata metadata. The list is stored on a header with the key KafkaConstants#KAFKA\_RECORDMETA|true|boolean|
|requestRequiredAcks|The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following settings are allowed: acks=0 If set to zero, then the producer will not wait for any acknowledgment from the server at all. The record will be immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retry configuration will not take effect (as the client won't generally know of any failures). The offset given back for each record will always be set to -1. acks=1 This will mean the leader will write the record to its local log but will respond without awaiting full acknowledgment from all followers. In this case should the leader fail immediately after acknowledging the record, but before the followers have replicated it, then the record will be lost. 
acks=all This means the leader will wait for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee. This is equivalent to the acks=-1 setting. Note that enabling idempotence requires this config value to be 'all'. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled.|all|string|
|requestTimeoutMs|The amount of time the broker will wait trying to meet the request.required.acks requirement before sending back an error to the client.|30000|integer|
|retries|Setting a value greater than zero will cause the client to resend any record that has failed to be sent due to a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Produce requests will be failed before the number of retries has been exhausted if the timeout configured by delivery.timeout.ms expires first before successful acknowledgement. Users should generally prefer to leave this config unset and instead use delivery.timeout.ms to control retry behavior. Enabling idempotence requires this config value to be greater than 0. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled. 
Allowing retries while setting enable.idempotence to false and max.in.flight.requests.per.connection to greater than 1 will potentially change the ordering of records because if two batches are sent to a single partition, and the first fails and is retried but the second succeeds, then the records in the second batch may appear first.||integer|
|sendBufferBytes|Socket write buffer size|131072|integer|
|useIterator|Sets whether sending to kafka should send the message body as a single record, or use a java.util.Iterator to send multiple records to kafka (if the message body can be iterated).|true|boolean|
|valueSerializer|The serializer class for messages.|org.apache.kafka.common.serialization.StringSerializer|string|
|workerPool|To use a custom worker pool to continue routing the Exchange after the kafka server has acknowledged the message that was sent to it from KafkaProducer using asynchronous non-blocking processing. If using this option, then you must handle the lifecycle of the thread pool to shut the pool down when no longer needed.||object|
|workerPoolCoreSize|Number of core threads for the worker pool that continues routing the Exchange after the kafka server has acknowledged the message that was sent to it from KafkaProducer using asynchronous non-blocking processing.|10|integer|
|workerPoolMaxSize|Maximum number of threads for the worker pool that continues routing the Exchange after the kafka server has acknowledged the message that was sent to it from KafkaProducer using asynchronous non-blocking processing.|20|integer|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|kafkaClientFactory|Factory to use for creating org.apache.kafka.clients.consumer.KafkaConsumer and org.apache.kafka.clients.producer.KafkaProducer instances. This allows configuring a custom factory to create instances with logic that extends the vanilla Kafka clients.||object|
|synchronous|Sets whether synchronous processing should be strictly used.|false|boolean|
|interceptorClasses|Sets interceptors for producer or consumers. Producer interceptors have to be classes implementing org.apache.kafka.clients.producer.ProducerInterceptor. Consumer interceptors have to be classes implementing org.apache.kafka.clients.consumer.ConsumerInterceptor. Note that if you use a Producer interceptor on a consumer it will throw a class cast exception at runtime||string|
|schemaRegistryURL|URL of the schema registry servers to use. The format is host1:port1,host2:port2. This is known as schema.registry.url in multiple Schema registries documentation. This option is only available externally (not standard Apache Kafka)||string|
|kerberosBeforeReloginMinTime|Login thread sleep time between refresh attempts.|60000|integer|
|kerberosConfigLocation|Location of the kerberos config file.||string|
|kerberosInitCmd|Kerberos kinit command path. Default is /usr/bin/kinit|/usr/bin/kinit|string|
|kerberosPrincipalToLocalRules|A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in order, and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, principal names of the form {username}/{hostname}{REALM} are mapped to {username}. For more details on the format, please see the Security Authorization and ACLs documentation (at the Apache Kafka project website). 
Multiple values can be separated by comma|DEFAULT|string| +|kerberosRenewJitter|Percentage of random jitter added to the renewal time.|0.05|number| +|kerberosRenewWindowFactor|Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.|0.8|number| +|saslJaasConfig|Expose the kafka sasl.jaas.config parameter Example: org.apache.kafka.common.security.plain.PlainLoginModule required username=USERNAME password=PASSWORD;||string| +|saslKerberosServiceName|The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.||string| +|saslMechanism|The Simple Authentication and Security Layer (SASL) Mechanism used. For the valid values see http://www.iana.org/assignments/sasl-mechanisms/sasl-mechanisms.xhtml|GSSAPI|string| +|securityProtocol|Protocol used to communicate with brokers. SASL\_PLAINTEXT, PLAINTEXT, SASL\_SSL and SSL are supported|PLAINTEXT|string| +|sslCipherSuites|A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default, all the available cipher suites are supported.||string| +|sslContextParameters|SSL configuration using a Camel SSLContextParameters object. If configured, it's applied before the other SSL endpoint parameters. NOTE: Kafka only supports loading keystore from file locations, so prefix the location with file: in the KeyStoreParameters.resource option.||object| +|sslEnabledProtocols|The list of protocols enabled for SSL connections. The default is TLSv1.2,TLSv1.3 when running with Java 11 or newer, TLSv1.2 otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). 
This default should be fine for most cases. Also see the config documentation for SslProtocol.||string| +|sslEndpointAlgorithm|The endpoint identification algorithm to validate server hostname using server certificate. Use none or false to disable server hostname verification.|https|string| +|sslKeymanagerAlgorithm|The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.|SunX509|string| +|sslKeyPassword|The password of the private key in the key store file or the PEM key specified in sslKeystoreKey. This is required for clients only if two-way authentication is configured.||string| +|sslKeystoreLocation|The location of the key store file. This is optional for the client and can be used for two-way authentication for the client.||string| +|sslKeystorePassword|The store password for the key store file. This is optional for the client and only needed if sslKeystoreLocation is configured. Key store password is not supported for PEM format.||string| +|sslKeystoreType|The file format of the key store file. This is optional for the client. The default value is JKS|JKS|string| +|sslProtocol|The SSL protocol used to generate the SSLContext. The default is TLSv1.3 when running with Java 11 or newer, TLSv1.2 otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are TLSv1.2 and TLSv1.3. TLS, TLSv1.1, SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and sslEnabledProtocols, clients will downgrade to TLSv1.2 if the server does not support TLSv1.3. If this config is set to TLSv1.2, clients will not use TLSv1.3 even if it is one of the values in sslEnabledProtocols and the server only supports TLSv1.3.||string| +|sslProvider|The name of the security provider used for SSL connections. 
Default value is the default security provider of the JVM.||string| +|sslTrustmanagerAlgorithm|The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.|PKIX|string| +|sslTruststoreLocation|The location of the trust store file.||string| +|sslTruststorePassword|The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format.||string| +|sslTruststoreType|The file format of the trust store file. The default value is JKS.|JKS|string| diff --git a/camel-kamelet.md b/camel-kamelet.md new file mode 100644 index 0000000000000000000000000000000000000000..daa2f2a95ff5d8294cd1dde37bc762f2a516a190 --- /dev/null +++ b/camel-kamelet.md @@ -0,0 +1,102 @@ +# Kamelet + +**Since Camel 3.8** + +**Both producer and consumer are supported** + +The Kamelet Component provides support for interacting with the [Camel +Route Template](#manual::route-template.adoc) engine using Endpoint +semantic. + +# URI format + + kamelet:templateId/routeId[?options] + +The **kamelet** endpoint is **lenient**, which means that the endpoint +accepts additional parameters that are passed to the [Route +Template](#manual::route-template.adoc) engine and consumed upon route +materialization. + +# Discovery + +If a [Route Template](#manual::route-template.adoc) is not found, the +**kamelet** endpoint tries to load the related **kamelet** definition +from the file system (by default `classpath:kamelets`). The default +resolution mechanism expects *Kamelets* files to have the extension +`.kamelet.yaml`. + +# Samples + +*Kamelets* can be used as if they were standard Camel components. 
For example, suppose that we have created a Route Template as follows:

    routeTemplate("setMyBody")
        .templateParameter("bodyValue")
        .from("kamelet:source")
            .setBody().constant("{{bodyValue}}");

To let the **Kamelet** component wire the materialized route to the
caller processor, we need to be able to identify the input and output
endpoint of the route; this is done by using `kamelet:source` to mark
the input endpoint and `kamelet:sink` for the output endpoint.

Then the template can be instantiated and invoked as shown below:

    from("direct:setMyBody")
        .to("kamelet:setMyBody?bodyValue=myKamelet");

Behind the scenes, the **Kamelet** component does the following things:

1. it instantiates a route out of the Route Template identified by the
   given `templateId` path parameter (in this case `setMyBody`)

2. it will act like the `direct` component and connect the current
   route to the materialized one.

If you had to do it programmatically, it would have been something like:

    routeTemplate("setMyBody")
        .templateParameter("bodyValue")
        .from("direct:{{foo}}")
            .setBody().constant("{{bodyValue}}");

    TemplatedRouteBuilder.builder(context, "setMyBody")
        .parameter("foo", "bar")
        .parameter("bodyValue", "myKamelet")
        .add();

    from("direct:template")
        .to("direct:bar");

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|location|The location(s) of the Kamelets on the file system. Multiple locations can be set separated by comma.|classpath:kamelets|string|
|routeProperties|Set route local parameters.||object|
|templateProperties|Set template local parameters.||object|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|block|If sending a message to a kamelet endpoint which has no active consumer, then we can tell the producer to block and wait for the consumer to become active.|true|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|timeout|The timeout value to use if block is enabled.|30000|integer| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|noErrorHandler|Kamelets, by default, will not do fine-grained error handling, but works in no-error-handler mode. 
This can be turned off, to use old behaviour in earlier versions of Camel.|true|boolean| +|routeTemplateLoaderListener|To plugin a custom listener for when the Kamelet component is loading Kamelets from external resources.||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|templateId|The Route Template ID||string| +|routeId|The Route ID. Default value notice: The ID will be auto-generated if not provided||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|block|If sending a message to a direct endpoint which has no active consumer, then we can tell the producer to block and wait for the consumer to become active.|true|boolean| +|failIfNoConsumers|Whether the producer should fail by throwing an exception, when sending to a kamelet endpoint with no active consumers.|true|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|timeout|The timeout value to use if block is enabled.|30000|integer| +|location|Location of the Kamelet to use which can be specified as a resource from file system, classpath etc. The location cannot use wildcards, and must refer to a file including extension, for example file:/etc/foo-kamelet.xml||string| +|noErrorHandler|Kamelets, by default, will not do fine-grained error handling, but works in no-error-handler mode. 
This can be turned off, to use old behaviour in earlier versions of Camel.|true|boolean| diff --git a/camel-knative.md b/camel-knative.md new file mode 100644 index 0000000000000000000000000000000000000000..206c68963177dd70ee70fe6a5d31c86488637be7 --- /dev/null +++ b/camel-knative.md @@ -0,0 +1,244 @@ +# Knative + +**Since Camel 3.15** + +**Both producer and consumer are supported** + +The Knative component provides support for interacting with +[Knative](https://knative.dev/). + +Maven users will need to add the following dependency to their `pom.xml` +for this component. + + + org.apache.camel + camel-knative + x.x.x + + + +# URI format + + knative:type/name[?options] + +You can append query options to the URI in the following format: + + ?option=value&option=value&... + +# Options + +# Supported Knative resources + +The component support the following Knative resources you can target or +exposes using the `type` path parameter: + +- **channel**: allow producing or consuming events to or from a + [**Knative Channel**](https://knative.dev/docs/eventing/channels/) + +- **endpoint**: allow exposing or targeting serverless workloads using + [**Knative + Services**](https://knative.dev/docs/serving/spec/knative-api-specification-1.0/#service) + +- **event**: allow producing or consuming events to or from a + [**Knative Broker**](https://knative.dev/docs/eventing/broker) + +# Knative Environment + +As the Knative component hides the technical details of how to +communicate with Knative services to the user (protocols, addresses, +etc.), it needs some metadata that describes the Knative environment to +set up the low-level transport details. 
To do so, the component needs a
so-called `Knative Environment`, which in essence is a JSON document
made up of a number of `service` elements, as in the example below:

    {
        "services": [
            {
                "type": "channel|endpoint|event",
                "name": "",
                "metadata": {
                    "service.url": "http://my-service.svc.cluster.local",
                    "knative.event.type": "",
                    "camel.endpoint.kind": "source|sink"
                }
            }, {
                ...
            }
        ]
    }

- `type`: the type of the Knative resource

- `name`: the name of the resource

- `service.url`: the url of the service to invoke (for producer only)

- `knative.event.type`: the Knative event type received or produced by
  the component

- `camel.endpoint.kind`: the type of the Camel Endpoint associated with
  this Knative resource (source=consumer, sink=producer)

The `metadata` field has some additional advanced entries:

|Name|Description|Example|
|---|---|---|
|filter.|The prefix to define filters to be applied to the incoming message headers.|`filter.ce.source=my-source`|
|knative.kind|The type of the k8s resource referenced by the endpoint.|`knative.kind=InMemoryChannel`|
|knative.apiVersion|The version of the k8s resource referenced by the endpoint.|`knative.apiVersion=messaging.knative.dev/v1beta1`|
|knative.reply|If the consumer should construct a full reply to the knative request.|`knative.reply=false`|
|ce.override.|The prefix to define CloudEvents values that have to be overridden.|`ce.override.ce-type=MyType`|
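As a concrete sketch of the environment format described above (the service name and URL are made up for illustration), a `Knative Environment` file declaring a single producer endpoint with a header filter could look like:

```json
{
    "services": [
        {
            "type": "endpoint",
            "name": "myEndpoint",
            "metadata": {
                "service.url": "http://my-endpoint.default.svc.cluster.local",
                "camel.endpoint.kind": "sink",
                "filter.ce.source": "my-source"
            }
        }
    ]
}
```

Once the component's `environmentPath` points at such a file, a route can target the service with `knative:endpoint/myEndpoint`.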

# Example

    CamelContext context = new DefaultCamelContext();

    KnativeComponent component = context.getComponent("knative", KnativeComponent.class);
    // set the location of the `Knative Environment` file
    component.getConfiguration().setEnvironmentPath("classpath:knative.json");

    RouteBuilder.addRoutes(context, b -> {
        // expose the knative service
        b.from("knative:endpoint/myEndpoint")
            .to("log:info");
    });

# Using custom Knative Transport

As of today, the component only supports `http` as transport, as it is
the only protocol supported on the Knative side, but the transport is
pluggable by implementing the following interface:

    public interface KnativeTransport extends Service {
        /**
         * Create a camel {@link org.apache.camel.Producer} in place of the original endpoint for a specific protocol.
         *
         * @param endpoint the endpoint for which the producer should be created
         * @param configuration the general transport configuration
         * @param service the service definition containing information about how to reach the target service.
         */
        Producer createProducer(
            Endpoint endpoint,
            KnativeTransportConfiguration configuration,
            KnativeEnvironment.KnativeServiceDefinition service);

        /**
         * Create a camel {@link org.apache.camel.Consumer} in place of the original endpoint for a specific protocol.
         *
         * @param endpoint the endpoint for which the consumer should be created.
         * @param configuration the general transport configuration
         * @param service the service definition containing information about how to make the route reachable from Knative.
         */
        Consumer createConsumer(
            Endpoint endpoint,
            KnativeTransportConfiguration configuration,
            KnativeEnvironment.KnativeServiceDefinition service, Processor processor);
    }

# Using ProducerTemplate

When using the Knative producer with a `ProducerTemplate`, it is
necessary to specify a value for the CloudEvent source by setting the
header *CamelCloudEventSource*. 

## Example

    producerTemplate.sendBodyAndHeader("knative:event/broker-test", body, CloudEvent.CAMEL_CLOUD_EVENT_SOURCE, "my-source-name");

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|ceOverride|CloudEvent headers to override||object|
|cloudEventsSpecVersion|Set the version of the cloudevents spec.|1.0|string|
|cloudEventsType|Set the event-type information of the produced events.|org.apache.camel.event|string|
|configuration|Set the configuration.||object|
|consumerFactory|The protocol consumer factory.||object|
|environment|The environment||object|
|environmentPath|The path of the environment definition||string|
|filters|Set the filters.||object|
|producerFactory|The protocol producer factory.||object|
|sinkBinding|The SinkBinding configuration.||object|
|transportOptions|Set the transport options.||object|
|typeId|The name of the service to lookup from the KnativeEnvironment.||string|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|replyWithCloudEvent|Transforms the reply into a cloud event that will be processed by the caller. 
When listening to events from a Knative Broker, if this flag is enabled, replies will be published to the same Broker where the request comes from (beware that if you don't change the type of the received message, you may create a loop and receive your same reply). When this flag is disabled, CloudEvent headers are removed from the reply.|false|boolean| +|reply|If the consumer should construct a full reply to knative request.|true|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|apiVersion|The version of the k8s resource referenced by the endpoint.||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|kind|The type of the k8s resource referenced by the endpoint.||string| +|name|The name of the k8s resource referenced by the endpoint.||string| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. 
You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|type|The Knative resource type||object| +|typeId|The identifier of the Knative resource||string| +|ceOverride|CloudEvent headers to override||object| +|cloudEventsSpecVersion|Set the version of the cloudevents spec.|1.0|string| +|cloudEventsType|Set the event-type information of the produced events.|org.apache.camel.event|string| +|environment|The environment||object| +|filters|Set the filters.||object| +|sinkBinding|The SinkBinding configuration.||object| +|transportOptions|Set the transport options.||object| +|replyWithCloudEvent|Transforms the reply into a cloud event that will be processed by the caller. When listening to events from a Knative Broker, if this flag is enabled, replies will be published to the same Broker where the request comes from (beware that if you don't change the type of the received message, you may create a loop and receive your same reply). When this flag is disabled, CloudEvent headers are removed from the reply.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|reply|Whether the consumer should construct a full reply to the Knative request.|true|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|apiVersion|The version of the k8s resource referenced by the endpoint.||string| +|kind|The type of the k8s resource referenced by the endpoint.||string| +|name|The name of the k8s resource referenced by the endpoint.||string| diff --git a/camel-kubernetes-config-maps.md b/camel-kubernetes-config-maps.md new file mode 100644 index 0000000000000000000000000000000000000000..d190d2ccbf6e667821d6c2ae91958f98e6efbd31 --- /dev/null +++ b/camel-kubernetes-config-maps.md @@ -0,0 +1,126 @@ +# Kubernetes-config-maps + +**Since Camel 2.17** + +**Both producer and consumer are supported** + +The Kubernetes ConfigMap component is one of [Kubernetes +Components](#kubernetes-summary.adoc) which provides a producer to +execute Kubernetes ConfigMap operations and a consumer to consume events +related to ConfigMap objects. + +# Supported producer operation + +- listConfigMaps + +- listConfigMapsByLabels + +- getConfigMap + +- createConfigMap + +- updateConfigMap + +- deleteConfigMap + +# Kubernetes ConfigMaps Producer Examples + +- listConfigMaps: this operation lists the configmaps + + + + from("direct:list"). + to("kubernetes-config-maps:///?kubernetesClient=#kubernetesClient&operation=listConfigMaps"). + to("mock:result"); + +This operation returns a List of ConfigMaps from your cluster. + +- listConfigMapsByLabels: this operation lists the configmaps selected + by label + + + + from("direct:listByLabels").process(new Processor() { + + @Override + public void process(Exchange exchange) throws Exception { + Map<String, String> labels = new HashMap<>(); + labels.put("key1", "value1"); + labels.put("key2", "value2"); + exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_CONFIGMAPS_LABELS, labels); + } + }). + to("kubernetes-config-maps:///?kubernetesClient=#kubernetesClient&operation=listConfigMapsByLabels"). 
+ to("mock:result"); + +This operation returns a List of ConfigMaps from your cluster, using a +label selector (with keys key1 and key2 and values value1 and value2). + +# Kubernetes ConfigMaps Consumer Example + + fromF("kubernetes-config-maps://%s?oauthToken=%s", host, authToken) + .setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, constant("default")) + .setHeader(KubernetesConstants.KUBERNETES_CONFIGMAP_NAME, constant("test")) + .process(new KubernetesProcessor()).to("mock:result"); + + public class KubernetesProcessor implements Processor { + @Override + public void process(Exchange exchange) throws Exception { + Message in = exchange.getIn(); + ConfigMap cm = exchange.getIn().getBody(ConfigMap.class); + log.info("Got event with configmap name: " + cm.getMetadata().getName() + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); + } + } + +This consumer will return a list of events on the namespace default for +the config map test. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|kubernetesClient|To use an existing kubernetes client.||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|masterUrl|URL to a remote Kubernetes API server. This should only be used when your Camel application is connecting from outside Kubernetes. If you run your Camel application inside Kubernetes, then you can use local or client as the URL to tell Camel to run in local mode. 
If you connect remotely to Kubernetes, then you may also need some of the many other configuration options for secured connection with certificates, etc.||string| +|apiVersion|The Kubernetes API Version to use||string| +|dnsDomain|The dns domain, used for ServiceCall EIP||string| +|kubernetesClient|Default KubernetesClient to use if provided||object| +|namespace|The namespace||string| +|portName|The port name, used for ServiceCall EIP||string| +|portProtocol|The port protocol, used for ServiceCall EIP|tcp|string| +|crdGroup|The Consumer CRD Resource Group we would like to watch||string| +|crdName|The Consumer CRD Resource name we would like to watch||string| +|crdPlural|The Consumer CRD Resource Plural we would like to watch||string| +|crdScope|The Consumer CRD Resource Scope we would like to watch||string| +|crdVersion|The Consumer CRD Resource Version we would like to watch||string| +|labelKey|The Consumer Label key when watching at some resources||string| +|labelValue|The Consumer Label value when watching at some resources||string| +|poolSize|The Consumer pool size|1|integer| +|resourceName|The Consumer Resource Name we would like to watch||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|operation|Producer operation to do on Kubernetes||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|connectionTimeout|Connection timeout in milliseconds to use when making requests to the Kubernetes API server.||integer| +|caCertData|The CA Cert Data||string| +|caCertFile|The CA Cert File||string| +|clientCertData|The Client Cert Data||string| +|clientCertFile|The Client Cert File||string| +|clientKeyAlgo|The Key Algorithm used by the client||string| +|clientKeyData|The Client Key data||string| +|clientKeyFile|The Client Key file||string| +|clientKeyPassphrase|The Client Key Passphrase||string| +|oauthToken|The Auth Token||string| +|password|Password to connect to Kubernetes||string| +|trustCerts|Define if the certs we used are trusted anyway or not||boolean| +|username|Username to connect to Kubernetes||string| diff --git a/camel-kubernetes-cronjob.md b/camel-kubernetes-cronjob.md new file mode 100644 index 0000000000000000000000000000000000000000..8005fb925f6a11b97510a82700c4945e8fe44e0a --- /dev/null +++ b/camel-kubernetes-cronjob.md @@ -0,0 +1,60 @@ +# Kubernetes-cronjob + +**Since Camel 4.3** + +**Only producer is supported** + +The Kubernetes CronJob component is one of [Kubernetes +Components](#kubernetes-summary.adoc) which provides a producer to +execute kubernetes CronJob operations. + +# Supported producer operation + +- listCronJob + +- listCronJobByLabels + +- getCronJob + +- createCronJob + +- updateCronJob + +- deleteCronJob + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|kubernetesClient|To use an existing kubernetes client.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|masterUrl|URL to a remote Kubernetes API server. This should only be used when your Camel application is connecting from outside Kubernetes. If you run your Camel application inside Kubernetes, then you can use local or client as the URL to tell Camel to run in local mode. If you connect remotely to Kubernetes, then you may also need some of the many other configuration options for secured connection with certificates, etc.||string| +|apiVersion|The Kubernetes API Version to use||string| +|dnsDomain|The dns domain, used for ServiceCall EIP||string| +|kubernetesClient|Default KubernetesClient to use if provided||object| +|namespace|The namespace||string| +|operation|Producer operation to do on Kubernetes||string| +|portName|The port name, used for ServiceCall EIP||string| +|portProtocol|The port protocol, used for ServiceCall EIP|tcp|string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|connectionTimeout|Connection timeout in milliseconds to use when making requests to the Kubernetes API server.||integer| +|caCertData|The CA Cert Data||string| +|caCertFile|The CA Cert File||string| +|clientCertData|The Client Cert Data||string| +|clientCertFile|The Client Cert File||string| +|clientKeyAlgo|The Key Algorithm used by the client||string| +|clientKeyData|The Client Key data||string| +|clientKeyFile|The Client Key file||string| +|clientKeyPassphrase|The Client Key Passphrase||string| +|oauthToken|The Auth Token||string| +|password|Password to connect to Kubernetes||string| +|trustCerts|Define if the certs we used are trusted anyway or not||boolean| +|username|Username to connect to Kubernetes||string| diff --git a/camel-kubernetes-custom-resources.md b/camel-kubernetes-custom-resources.md new file mode 100644 index 0000000000000000000000000000000000000000..2592875600a83427240b2a985cd195f1bd8bf2aa --- /dev/null +++ b/camel-kubernetes-custom-resources.md @@ -0,0 +1,74 @@ +# Kubernetes-custom-resources + +**Since Camel 3.7** + +**Both producer and consumer are supported** + +The Kubernetes Custom Resources component is one of [Kubernetes +Components](#kubernetes-summary.adoc) which provides a producer to +execute Kubernetes Custom Resources operations and a consumer to consume +events related to Custom Resource objects. 
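The producer side of this component is driven entirely by endpoint URI options: an `operation`, plus the `crdGroup`, `crdVersion`, `crdPlural`, `crdScope` and `crdName` options listed in the endpoint table below that identify the custom resource definition to operate on. As a rough plain-Java sketch of how such an endpoint URI is assembled — the CRD coordinates (`example.com`, `v1`, `widgets`) are made-up examples, not a real API group:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CustomResourceUri {

    // Joins option key/value pairs into a kubernetes-custom-resources endpoint URI.
    static String buildUri(Map<String, String> options) {
        StringBuilder query = new StringBuilder();
        for (Map.Entry<String, String> e : options.entrySet()) {
            if (query.length() > 0) {
                query.append('&');
            }
            query.append(e.getKey()).append('=').append(e.getValue());
        }
        return "kubernetes-custom-resources:///?" + query;
    }

    public static void main(String[] args) {
        // LinkedHashMap keeps the options in insertion order
        Map<String, String> options = new LinkedHashMap<>();
        options.put("kubernetesClient", "#kubernetesClient");
        options.put("operation", "listCustomResources");
        // Hypothetical CRD coordinates -- substitute your own group/version/plural
        options.put("crdGroup", "example.com");
        options.put("crdVersion", "v1");
        options.put("crdPlural", "widgets");
        System.out.println(buildUri(options));
    }
}
```

The resulting string is what you would place in a `to(...)` call, exactly like the endpoint URIs shown in the other Kubernetes component examples.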
+ +# Supported producer operation + +- listCustomResources + +- listCustomResourcesByLabels + +- getCustomResource + +- deleteCustomResource + +- createCustomResource + +- updateCustomResource + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|kubernetesClient|To use an existing kubernetes client.||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|masterUrl|URL to a remote Kubernetes API server. This should only be used when your Camel application is connecting from outside Kubernetes. If you run your Camel application inside Kubernetes, then you can use local or client as the URL to tell Camel to run in local mode. If you connect remotely to Kubernetes, then you may also need some of the many other configuration options for secured connection with certificates, etc.||string| +|apiVersion|The Kubernetes API Version to use||string| +|dnsDomain|The dns domain, used for ServiceCall EIP||string| +|kubernetesClient|Default KubernetesClient to use if provided||object| +|namespace|The namespace||string| +|portName|The port name, used for ServiceCall EIP||string| +|portProtocol|The port protocol, used for ServiceCall EIP|tcp|string| +|crdGroup|The Consumer CRD Resource Group we would like to watch||string| +|crdName|The Consumer CRD Resource name we would like to watch||string| +|crdPlural|The Consumer CRD Resource Plural we would like to watch||string| +|crdScope|The Consumer CRD Resource Scope we would like to watch||string| +|crdVersion|The Consumer CRD Resource Version we would like to watch||string| +|labelKey|The Consumer Label key when watching at some resources||string| +|labelValue|The Consumer Label value when watching at some resources||string| +|poolSize|The Consumer pool size|1|integer| +|resourceName|The Consumer Resource Name we would like to watch||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if 
possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|operation|Producer operation to do on Kubernetes||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|connectionTimeout|Connection timeout in milliseconds to use when making requests to the Kubernetes API server.||integer| +|caCertData|The CA Cert Data||string| +|caCertFile|The CA Cert File||string| +|clientCertData|The Client Cert Data||string| +|clientCertFile|The Client Cert File||string| +|clientKeyAlgo|The Key Algorithm used by the client||string| +|clientKeyData|The Client Key data||string| +|clientKeyFile|The Client Key file||string| +|clientKeyPassphrase|The Client Key Passphrase||string| +|oauthToken|The Auth Token||string| +|password|Password to connect to Kubernetes||string| +|trustCerts|Define if the certs we used are trusted anyway or not||boolean| +|username|Username to connect to Kubernetes||string| diff --git a/camel-kubernetes-deployments.md b/camel-kubernetes-deployments.md new file mode 100644 index 0000000000000000000000000000000000000000..80cf5f410340c918ca6e1a7bda4b1a3cb549f082 --- /dev/null +++ b/camel-kubernetes-deployments.md @@ -0,0 +1,124 @@ +# Kubernetes-deployments + +**Since Camel 2.20** + +**Both producer and consumer are supported** + +The Kubernetes Deployments component is one of [Kubernetes +Components](#kubernetes-summary.adoc) which provides a producer to +execute Kubernetes Deployments operations and a consumer to consume +events related to Deployments objects. + +# Supported producer operation + +- listDeployments + +- listDeploymentsByLabels + +- getDeployment + +- createDeployment + +- updateDeployment + +- deleteDeployment + +- scaleDeployment + +# Kubernetes Deployments Producer Examples + +- listDeployments: this operation lists the deployments on a kubernetes + cluster + + + + from("direct:list"). + to("kubernetes-deployments:///?kubernetesClient=#kubernetesClient&operation=listDeployments"). 
+ to("mock:result"); + +This operation returns a List of Deployments from your cluster. + +- listDeploymentsByLabels: this operation lists the deployments by + labels on a kubernetes cluster + + + + from("direct:listByLabels").process(new Processor() { + @Override + public void process(Exchange exchange) throws Exception { + Map<String, String> labels = new HashMap<>(); + labels.put("key1", "value1"); + labels.put("key2", "value2"); + exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_DEPLOYMENTS_LABELS, labels); + } + }). + to("kubernetes-deployments:///?kubernetesClient=#kubernetesClient&operation=listDeploymentsByLabels"). + to("mock:result"); + +This operation returns a List of Deployments from your cluster, using a +label selector (with keys key1 and key2 and values value1 and value2). + +# Kubernetes Deployments Consumer Example + + fromF("kubernetes-deployments://%s?oauthToken=%s&namespace=default&resourceName=test", host, authToken).process(new KubernetesProcessor()).to("mock:result"); + public class KubernetesProcessor implements Processor { + @Override + public void process(Exchange exchange) throws Exception { + Message in = exchange.getIn(); + Deployment dp = exchange.getIn().getBody(Deployment.class); + log.info("Got event with deployment name: " + dp.getMetadata().getName() + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); + } + } + +This consumer will return a list of events on the namespace default for +the deployment test. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|kubernetesClient|To use an existing kubernetes client.||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|masterUrl|URL to a remote Kubernetes API server. This should only be used when your Camel application is connecting from outside Kubernetes. If you run your Camel application inside Kubernetes, then you can use local or client as the URL to tell Camel to run in local mode. 
If you connect remotely to Kubernetes, then you may also need some of the many other configuration options for secured connection with certificates, etc.||string| +|apiVersion|The Kubernetes API Version to use||string| +|dnsDomain|The dns domain, used for ServiceCall EIP||string| +|kubernetesClient|Default KubernetesClient to use if provided||object| +|namespace|The namespace||string| +|portName|The port name, used for ServiceCall EIP||string| +|portProtocol|The port protocol, used for ServiceCall EIP|tcp|string| +|crdGroup|The Consumer CRD Resource Group we would like to watch||string| +|crdName|The Consumer CRD Resource name we would like to watch||string| +|crdPlural|The Consumer CRD Resource Plural we would like to watch||string| +|crdScope|The Consumer CRD Resource Scope we would like to watch||string| +|crdVersion|The Consumer CRD Resource Version we would like to watch||string| +|labelKey|The Consumer Label key when watching at some resources||string| +|labelValue|The Consumer Label value when watching at some resources||string| +|poolSize|The Consumer pool size|1|integer| +|resourceName|The Consumer Resource Name we would like to watch||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|operation|Producer operation to do on Kubernetes||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|connectionTimeout|Connection timeout in milliseconds to use when making requests to the Kubernetes API server.||integer| +|caCertData|The CA Cert Data||string| +|caCertFile|The CA Cert File||string| +|clientCertData|The Client Cert Data||string| +|clientCertFile|The Client Cert File||string| +|clientKeyAlgo|The Key Algorithm used by the client||string| +|clientKeyData|The Client Key data||string| +|clientKeyFile|The Client Key file||string| +|clientKeyPassphrase|The Client Key Passphrase||string| +|oauthToken|The Auth Token||string| +|password|Password to connect to Kubernetes||string| +|trustCerts|Define if the certs we used are trusted anyway or not||boolean| +|username|Username to connect to Kubernetes||string| diff --git a/camel-kubernetes-events.md b/camel-kubernetes-events.md new file mode 100644 index 0000000000000000000000000000000000000000..f02c557de1aa55541123e219096586ea7d2de270 --- /dev/null +++ b/camel-kubernetes-events.md @@ -0,0 +1,247 @@ +# Kubernetes-events + +**Since Camel 3.20** + +**Both producer and consumer are supported** + +The Kubernetes Event component is one of [Kubernetes +Components](#kubernetes-summary.adoc) which provides a producer to +execute Kubernetes Event operations and a consumer to consume events +related to Event objects. + +# Supported producer operation + +- listEvents + +- listEventsByLabels + +- getEvent + +- createEvent + +- updateEvent + +- deleteEvent + +# Kubernetes Events Producer Examples + +- listEvents: this operation lists the events + + + + from("direct:list"). + to("kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=listEvents"). + to("mock:result"); + +This operation returns a list of events from your cluster. The type of +the events is `io.fabric8.kubernetes.api.model.events.v1.Event`. 
+ +To indicate from which namespace the events are expected, it is +possible to set the message header `CamelKubernetesNamespaceName`. By +default, the events of all namespaces are returned. + +- listEventsByLabels: this operation lists the events selected by + labels + + + + from("direct:listByLabels").process(new Processor() { + + @Override + public void process(Exchange exchange) throws Exception { + Map<String, String> labels = new HashMap<>(); + labels.put("key1", "value1"); + labels.put("key2", "value2"); + exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENTS_LABELS, labels); + } + }). + to("kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=listEventsByLabels"). + to("mock:result"); + +This operation returns a list of events from your cluster that occurred +in any namespace, using a label selector (in the example above, only +events which have the label "key1" set to "value1" and the label +"key2" set to "value2" are expected). The type of the events is +`io.fabric8.kubernetes.api.model.events.v1.Event`. + +This operation expects the message header `CamelKubernetesEventsLabels` +to be set to a `Map<String, String>` where the key-value pairs represent +the expected label names and values. + +- getEvent: this operation gives a specific event + + + + from("direct:get").process(new Processor() { + + @Override + public void process(Exchange exchange) throws Exception { + exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, "test"); + exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_NAME, "event1"); + } + }). + to("kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=getEvent"). + to("mock:result"); + +This operation returns the event matching the criteria from your +cluster. The type of the event is +`io.fabric8.kubernetes.api.model.events.v1.Event`. 
+ +This operation expects two message headers, `CamelKubernetesNamespaceName` and `CamelKubernetesEventName`: the first must be set to the name of the target namespace and the second to the name of the target event. + +If no matching event could be found, `null` is returned. + +- createEvent: this operation creates a new event + + + + from("direct:create").process(new Processor() { + + @Override + public void process(Exchange exchange) throws Exception { + exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, "default"); + exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_NAME, "test1"); + Map<String, String> labels = new HashMap<>(); + labels.put("this", "rocks"); + exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENTS_LABELS, labels); + exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION_PRODUCER, "Some Action"); + exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_TYPE, "Normal"); + exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_REASON, "Some Reason"); + exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_REPORTING_CONTROLLER, "Some-Reporting-Controller"); + exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_REPORTING_INSTANCE, "Some-Reporting-Instance"); + } + }) + .to("kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=createEvent") + .to("mock:result"); + +This operation publishes a new event in your cluster. An event can be created in two ways: either from message headers or directly from an `io.fabric8.kubernetes.api.model.events.v1.EventBuilder`. + +Whichever way is used to create the event: + +- The operation expects two message headers, `CamelKubernetesNamespaceName` and `CamelKubernetesEventName`, which set respectively the namespace and the name of the produced event.
+ +- The operation supports the message header `CamelKubernetesEventsLabels` to set the labels on the produced event. + +The message headers that can be used to create an event are `CamelKubernetesEventTime`, `CamelKubernetesEventAction`, `CamelKubernetesEventType`, `CamelKubernetesEventReason`, `CamelKubernetesEventNote`, `CamelKubernetesEventRegarding`, `CamelKubernetesEventRelated`, `CamelKubernetesEventReportingController` and `CamelKubernetesEventReportingInstance`. + +In case the supported message headers are not enough for a specific use case, it is still possible to set the message body with an object of type `io.fabric8.kubernetes.api.model.events.v1.EventBuilder` representing a prefilled builder to use when creating the event. Note that the labels, the event name and the namespace name are always set from the message headers, even when the builder is provided. + +- updateEvent: this operation updates an existing event + +The behavior is exactly the same as `createEvent`; only the name of the operation is different. + +- deleteEvent: this operation deletes an existing event + + + + from("direct:delete").process(new Processor() { + + @Override + public void process(Exchange exchange) throws Exception { + exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, "default"); + exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_NAME, "test1"); + } + }) + .to("kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=deleteEvent") + .to("mock:result"); + +This operation removes an existing event from your cluster. It returns a `boolean` indicating whether the operation was successful. + +This operation expects two message headers, `CamelKubernetesNamespaceName` and `CamelKubernetesEventName`: the first must be set to the name of the target namespace and the second to the name of the target event.
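The precedence rule above (event name, namespace and labels always come from the message headers, even when a prefilled `EventBuilder` is supplied in the body) can be sketched in plain Java. Everything here — the `EventDraft` class and the header names held in a plain `Map` — is a hypothetical stand-in for the fabric8 builder and the Camel message, used only to illustrate the override behavior:

```java
import java.util.HashMap;
import java.util.Map;

public class HeaderPrecedence {
    // Hypothetical stand-in for io.fabric8.kubernetes.api.model.events.v1.EventBuilder.
    static class EventDraft {
        String namespace;
        String name;
        String reason;

        EventDraft withNamespace(String ns) { this.namespace = ns; return this; }
        EventDraft withName(String n) { this.name = n; return this; }
        EventDraft withReason(String r) { this.reason = r; return this; }
    }

    // Headers win over whatever the prefilled builder carries; other fields
    // (here: reason) are kept from the builder.
    static EventDraft applyHeaders(EventDraft prefilled, Map<String, String> headers) {
        if (headers.containsKey("CamelKubernetesNamespaceName")) {
            prefilled.withNamespace(headers.get("CamelKubernetesNamespaceName"));
        }
        if (headers.containsKey("CamelKubernetesEventName")) {
            prefilled.withName(headers.get("CamelKubernetesEventName"));
        }
        return prefilled;
    }

    public static void main(String[] args) {
        EventDraft draft = new EventDraft().withName("from-builder").withReason("Scheduled");
        Map<String, String> headers = new HashMap<>();
        headers.put("CamelKubernetesNamespaceName", "default");
        headers.put("CamelKubernetesEventName", "test1");
        EventDraft result = applyHeaders(draft, headers);
        System.out.println(result.name + "/" + result.namespace + "/" + result.reason);
        // prints "test1/default/Scheduled"
    }
}
```

The takeaway: set `CamelKubernetesNamespaceName` and `CamelKubernetesEventName` as headers even when you provide a builder, because builder-supplied values for those fields are ignored.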
+ +# Kubernetes Events Consumer Example + + fromF("kubernetes-events://%s?oauthToken=%s", host, authToken) + .setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, constant("default")) + .setHeader(KubernetesConstants.KUBERNETES_EVENT_NAME, constant("test")) + .process(new KubernetesProcessor()).to("mock:result"); + + public class KubernetesProcessor implements Processor { + @Override + public void process(Exchange exchange) throws Exception { + Message in = exchange.getIn(); + Event event = in.getBody(Event.class); + log.info("Got event with event name: " + event.getMetadata().getName() + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); + } + } + +This consumer returns a message per event received on the namespace "default" for the event "test". It also sets the action (`io.fabric8.kubernetes.client.Watcher.Action`) in the message header `CamelKubernetesEventAction` and the timestamp (`long`) in the message header `CamelKubernetesEventTimestamp`. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|kubernetesClient|To use an existing kubernetes client.||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases.
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|masterUrl|URL to a remote Kubernetes API server. This should only be used when your Camel application is connecting from outside Kubernetes. If you run your Camel application inside Kubernetes, then you can use local or client as the URL to tell Camel to run in local mode. 
If you connect remotely to Kubernetes, then you may also need some of the many other configuration options for secured connection with certificates, etc.||string| +|apiVersion|The Kubernetes API Version to use||string| +|dnsDomain|The dns domain, used for ServiceCall EIP||string| +|kubernetesClient|Default KubernetesClient to use if provided||object| +|namespace|The namespace||string| +|portName|The port name, used for ServiceCall EIP||string| +|portProtocol|The port protocol, used for ServiceCall EIP|tcp|string| +|crdGroup|The Consumer CRD Resource Group we would like to watch||string| +|crdName|The Consumer CRD Resource name we would like to watch||string| +|crdPlural|The Consumer CRD Resource Plural we would like to watch||string| +|crdScope|The Consumer CRD Resource Scope we would like to watch||string| +|crdVersion|The Consumer CRD Resource Version we would like to watch||string| +|labelKey|The Consumer Label key when watching at some resources||string| +|labelValue|The Consumer Label value when watching at some resources||string| +|poolSize|The Consumer pool size|1|integer| +|resourceName|The Consumer Resource Name we would like to watch||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|operation|Producer operation to do on Kubernetes||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|connectionTimeout|Connection timeout in milliseconds to use when making requests to the Kubernetes API server.||integer| +|caCertData|The CA Cert Data||string| +|caCertFile|The CA Cert File||string| +|clientCertData|The Client Cert Data||string| +|clientCertFile|The Client Cert File||string| +|clientKeyAlgo|The Key Algorithm used by the client||string| +|clientKeyData|The Client Key data||string| +|clientKeyFile|The Client Key file||string| +|clientKeyPassphrase|The Client Key Passphrase||string| +|oauthToken|The Auth Token||string| +|password|Password to connect to Kubernetes||string| +|trustCerts|Define if the certs we used are trusted anyway or not||boolean| +|username|Username to connect to Kubernetes||string| diff --git a/camel-kubernetes-hpa.md b/camel-kubernetes-hpa.md new file mode 100644 index 0000000000000000000000000000000000000000..31805e552cfca4fe202876f342fe0ca5344d7a6f --- /dev/null +++ b/camel-kubernetes-hpa.md @@ -0,0 +1,121 @@ +# Kubernetes-hpa + +**Since Camel 2.23** + +**Both producer and consumer are supported** + +The Kubernetes HPA component is one of [Kubernetes +Components](#kubernetes-summary.adoc) which provides a producer to +execute kubernetes Horizontal Pod Autoscaler operations and a consumer +to consume events related to Horizontal Pod Autoscaler objects. + +# Supported producer operation + +- listHPA + +- listHPAByLabels + +- getHPA + +- createHPA + +- updateHPA + +- deleteHPA + +# Kubernetes HPA Producer Examples + +- listHPA: this operation lists the HPAs on a kubernetes cluster + + + + from("direct:list"). + toF("kubernetes-hpa:///?kubernetesClient=#kubernetesClient&operation=listHPA"). 
+ to("mock:result"); + +This operation returns a List of HPAs from your cluster. + +- listHPAByLabels: this operation lists the HPAs by labels on + a kubernetes cluster + + + + from("direct:listByLabels").process(new Processor() { + @Override + public void process(Exchange exchange) throws Exception { + Map<String, String> labels = new HashMap<>(); + labels.put("key1", "value1"); + labels.put("key2", "value2"); + exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_HPA_LABELS, labels); + } + }) + .toF("kubernetes-hpa:///?kubernetesClient=#kubernetesClient&operation=listHPAByLabels") + .to("mock:result"); + +This operation returns a List of HPAs from your cluster, using a label selector (with keys key1 and key2, and values value1 and value2). + +# Kubernetes HPA Consumer Example + + fromF("kubernetes-hpa://%s?oauthToken=%s&namespace=default&resourceName=test", host, authToken).process(new KubernetesProcessor()).to("mock:result"); + public class KubernetesProcessor implements Processor { + @Override + public void process(Exchange exchange) throws Exception { + Message in = exchange.getIn(); + HorizontalPodAutoscaler hpa = in.getBody(HorizontalPodAutoscaler.class); + log.info("Got event with hpa name: " + hpa.getMetadata().getName() + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); + } + } + +This consumer returns an event for each change to the HPA named test in the namespace default. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|kubernetesClient|To use an existing kubernetes client.||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown.
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|masterUrl|URL to a remote Kubernetes API server. This should only be used when your Camel application is connecting from outside Kubernetes. If you run your Camel application inside Kubernetes, then you can use local or client as the URL to tell Camel to run in local mode. 
If you connect remotely to Kubernetes, then you may also need some of the many other configuration options for secured connection with certificates, etc.||string| +|apiVersion|The Kubernetes API Version to use||string| +|dnsDomain|The dns domain, used for ServiceCall EIP||string| +|kubernetesClient|Default KubernetesClient to use if provided||object| +|namespace|The namespace||string| +|portName|The port name, used for ServiceCall EIP||string| +|portProtocol|The port protocol, used for ServiceCall EIP|tcp|string| +|crdGroup|The Consumer CRD Resource Group we would like to watch||string| +|crdName|The Consumer CRD Resource name we would like to watch||string| +|crdPlural|The Consumer CRD Resource Plural we would like to watch||string| +|crdScope|The Consumer CRD Resource Scope we would like to watch||string| +|crdVersion|The Consumer CRD Resource Version we would like to watch||string| +|labelKey|The Consumer Label key when watching at some resources||string| +|labelValue|The Consumer Label value when watching at some resources||string| +|poolSize|The Consumer pool size|1|integer| +|resourceName|The Consumer Resource Name we would like to watch||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|operation|Producer operation to do on Kubernetes||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|connectionTimeout|Connection timeout in milliseconds to use when making requests to the Kubernetes API server.||integer| +|caCertData|The CA Cert Data||string| +|caCertFile|The CA Cert File||string| +|clientCertData|The Client Cert Data||string| +|clientCertFile|The Client Cert File||string| +|clientKeyAlgo|The Key Algorithm used by the client||string| +|clientKeyData|The Client Key data||string| +|clientKeyFile|The Client Key file||string| +|clientKeyPassphrase|The Client Key Passphrase||string| +|oauthToken|The Auth Token||string| +|password|Password to connect to Kubernetes||string| +|trustCerts|Define if the certs we used are trusted anyway or not||boolean| +|username|Username to connect to Kubernetes||string| diff --git a/camel-kubernetes-job.md b/camel-kubernetes-job.md new file mode 100644 index 0000000000000000000000000000000000000000..5f4f796c4bf52fd9fc58e9da5f7e725b2477abc3 --- /dev/null +++ b/camel-kubernetes-job.md @@ -0,0 +1,194 @@ +# Kubernetes-job + +**Since Camel 2.23** + +**Only producer is supported** + +The Kubernetes Job component is one of [Kubernetes +Components](#kubernetes-summary.adoc) which provides a producer to +execute kubernetes Job operations. + +# Supported producer operation + +- listJob + +- listJobByLabels + +- getJob + +- createJob + +- updateJob + +- deleteJob + +# Kubernetes Job Producer Examples + +- listJob: this operation lists the jobs on a kubernetes cluster + + + + from("direct:list"). + toF("kubernetes-job:///?kubernetesClient=#kubernetesClient&operation=listJob"). 
+ to("mock:result"); + +This operation returns a List of Jobs from your cluster. + +- listJobByLabels: this operation lists the jobs by labels on a + kubernetes cluster + + + + from("direct:listByLabels").process(new Processor() { + @Override + public void process(Exchange exchange) throws Exception { + Map<String, String> labels = new HashMap<>(); + labels.put("key1", "value1"); + labels.put("key2", "value2"); + exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_LABELS, labels); + } + }) + .toF("kubernetes-job:///?kubernetesClient=#kubernetesClient&operation=listJobByLabels") + .to("mock:result"); + +This operation returns a List of Jobs from your cluster, using a label selector (with keys key1 and key2, and values value1 and value2). + +- createJob: this operation creates a job on a Kubernetes cluster + +We have a wonderful example of this operation thanks to [Emmerson +Miranda](https://github.com/Emmerson-Miranda) from this [Java +test](https://github.com/Emmerson-Miranda/camel/blob/master/camel3-cdi/cdi-k8s-pocs/src/main/java/edu/emmerson/camel/k8s/jobs/camel_k8s_jobs/KubernetesCreateJob.java) + + import java.util.ArrayList; + import java.util.Date; + import java.util.HashMap; + import java.util.List; + import java.util.Map; + + import javax.inject.Inject; + + import org.apache.camel.Endpoint; + import org.apache.camel.builder.RouteBuilder; + import org.apache.camel.cdi.Uri; + import org.apache.camel.component.kubernetes.KubernetesConstants; + import org.apache.camel.component.kubernetes.KubernetesOperations; + + import io.fabric8.kubernetes.api.model.Container; + import io.fabric8.kubernetes.api.model.ObjectMeta; + import io.fabric8.kubernetes.api.model.PodSpec; + import io.fabric8.kubernetes.api.model.PodTemplateSpec; + import io.fabric8.kubernetes.api.model.batch.JobSpec; + + public class KubernetesCreateJob extends RouteBuilder { + + @Inject + @Uri("timer:foo?delay=1000&repeatCount=1") + private Endpoint inputEndpoint; + + @Inject + @Uri("log:output") + private
Endpoint resultEndpoint; + + @Override + public void configure() { + // you can configure the route rule with Java DSL here + + from(inputEndpoint) + .routeId("kubernetes-jobcreate-client") + .process(exchange -> { + exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_NAME, "camel-job"); //DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*') + exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, "default"); + + Map joblabels = new HashMap(); + joblabels.put("jobLabelKey1", "value1"); + joblabels.put("jobLabelKey2", "value2"); + joblabels.put("app", "jobFromCamelApp"); + exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_LABELS, joblabels); + + exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_SPEC, generateJobSpec()); + }) + .toF("kubernetes-job:///{{kubernetes-master-url}}?oauthToken={{kubernetes-oauth-token:}}&operation=" + KubernetesOperations.CREATE_JOB_OPERATION) + .log("Job created:") + .process(exchange -> { + System.out.println(exchange.getIn().getBody()); + }) + .to(resultEndpoint); + } + + private JobSpec generateJobSpec() { + JobSpec js = new JobSpec(); + + PodTemplateSpec pts = new PodTemplateSpec(); + + PodSpec ps = new PodSpec(); + ps.setRestartPolicy("Never"); + ps.setContainers(generateContainers()); + pts.setSpec(ps); + + ObjectMeta metadata = new ObjectMeta(); + Map annotations = new HashMap(); + annotations.put("jobMetadataAnnotation1", "random value"); + metadata.setAnnotations(annotations); + + Map podlabels = new HashMap(); + podlabels.put("podLabelKey1", "value1"); + podlabels.put("podLabelKey2", "value2"); + podlabels.put("app", "podFromCamelApp"); + metadata.setLabels(podlabels); + + pts.setMetadata(metadata); + js.setTemplate(pts); + return js; + } + + private List generateContainers() { + 
Container container = new Container(); + container.setName("pi"); + container.setImage("perl"); + List command = new ArrayList(); + command.add("echo"); + command.add("Job created from Apache Camel code at " + (new Date())); + container.setCommand(command); + List containers = new ArrayList(); + containers.add(container); + return containers; + } + } + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|kubernetesClient|To use an existing kubernetes client.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|masterUrl|URL to a remote Kubernetes API server. This should only be used when your Camel application is connecting from outside Kubernetes. If you run your Camel application inside Kubernetes, then you can use local or client as the URL to tell Camel to run in local mode. 
If you connect remotely to Kubernetes, then you may also need some of the many other configuration options for secured connection with certificates, etc.||string| +|apiVersion|The Kubernetes API Version to use||string| +|dnsDomain|The dns domain, used for ServiceCall EIP||string| +|kubernetesClient|Default KubernetesClient to use if provided||object| +|namespace|The namespace||string| +|operation|Producer operation to do on Kubernetes||string| +|portName|The port name, used for ServiceCall EIP||string| +|portProtocol|The port protocol, used for ServiceCall EIP|tcp|string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|connectionTimeout|Connection timeout in milliseconds to use when making requests to the Kubernetes API server.||integer| +|caCertData|The CA Cert Data||string| +|caCertFile|The CA Cert File||string| +|clientCertData|The Client Cert Data||string| +|clientCertFile|The Client Cert File||string| +|clientKeyAlgo|The Key Algorithm used by the client||string| +|clientKeyData|The Client Key data||string| +|clientKeyFile|The Client Key file||string| +|clientKeyPassphrase|The Client Key Passphrase||string| +|oauthToken|The Auth Token||string| +|password|Password to connect to Kubernetes||string| +|trustCerts|Define if the certs we used are trusted anyway or not||boolean| +|username|Username to connect to Kubernetes||string| diff --git a/camel-kubernetes-namespaces.md b/camel-kubernetes-namespaces.md new file mode 100644 
index 0000000000000000000000000000000000000000..7880891ee7707fc31f0885502e47d9a8ce672c8d --- /dev/null +++ b/camel-kubernetes-namespaces.md @@ -0,0 +1,121 @@ +# Kubernetes-namespaces + +**Since Camel 2.17** + +**Both producer and consumer are supported** + +The Kubernetes Namespaces component is one of [Kubernetes +Components](#kubernetes-summary.adoc) which provides a producer to execute Kubernetes Namespace operations and a consumer to consume events related to Namespace objects. + +# Supported producer operation + +- listNamespaces + +- listNamespacesByLabels + +- getNamespace + +- createNamespace + +- updateNamespace + +- deleteNamespace + +# Kubernetes Namespaces Producer Examples + +- listNamespaces: this operation lists the namespaces on a kubernetes + cluster + + + + from("direct:list"). + toF("kubernetes-namespaces:///?kubernetesClient=#kubernetesClient&operation=listNamespaces"). + to("mock:result"); + +This operation returns a List of Namespaces from your cluster. + +- listNamespacesByLabels: this operation lists the namespaces by + labels on a kubernetes cluster + + + + from("direct:listByLabels").process(new Processor() { + @Override + public void process(Exchange exchange) throws Exception { + Map<String, String> labels = new HashMap<>(); + labels.put("key1", "value1"); + labels.put("key2", "value2"); + exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACES_LABELS, labels); + } + }) + .toF("kubernetes-namespaces:///?kubernetesClient=#kubernetesClient&operation=listNamespacesByLabels").
+ to("mock:result"); + +This operation returns a List of Namespaces from your cluster, using a label selector (with keys key1 and key2, and values value1 and value2). + +# Kubernetes Namespaces Consumer Example + + fromF("kubernetes-namespaces://%s?oauthToken=%s&namespace=default", host, authToken).process(new KubernetesProcessor()).to("mock:result"); + public class KubernetesProcessor implements Processor { + @Override + public void process(Exchange exchange) throws Exception { + Message in = exchange.getIn(); + Namespace ns = in.getBody(Namespace.class); + log.info("Got event with namespace name: " + ns.getMetadata().getName() + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); + } + } + +This consumer returns an event for each change to the namespace default. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|kubernetesClient|To use an existing kubernetes client.||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message).
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|masterUrl|URL to a remote Kubernetes API server. This should only be used when your Camel application is connecting from outside Kubernetes. If you run your Camel application inside Kubernetes, then you can use local or client as the URL to tell Camel to run in local mode. 
If you connect remotely to Kubernetes, then you may also need some of the many other configuration options for secured connection with certificates, etc.||string| +|apiVersion|The Kubernetes API Version to use||string| +|dnsDomain|The dns domain, used for ServiceCall EIP||string| +|kubernetesClient|Default KubernetesClient to use if provided||object| +|namespace|The namespace||string| +|portName|The port name, used for ServiceCall EIP||string| +|portProtocol|The port protocol, used for ServiceCall EIP|tcp|string| +|crdGroup|The Consumer CRD Resource Group we would like to watch||string| +|crdName|The Consumer CRD Resource name we would like to watch||string| +|crdPlural|The Consumer CRD Resource Plural we would like to watch||string| +|crdScope|The Consumer CRD Resource Scope we would like to watch||string| +|crdVersion|The Consumer CRD Resource Version we would like to watch||string| +|labelKey|The Consumer Label key when watching at some resources||string| +|labelValue|The Consumer Label value when watching at some resources||string| +|poolSize|The Consumer pool size|1|integer| +|resourceName|The Consumer Resource Name we would like to watch||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|operation|Producer operation to do on Kubernetes||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|connectionTimeout|Connection timeout in milliseconds to use when making requests to the Kubernetes API server.||integer| +|caCertData|The CA Cert Data||string| +|caCertFile|The CA Cert File||string| +|clientCertData|The Client Cert Data||string| +|clientCertFile|The Client Cert File||string| +|clientKeyAlgo|The Key Algorithm used by the client||string| +|clientKeyData|The Client Key data||string| +|clientKeyFile|The Client Key file||string| +|clientKeyPassphrase|The Client Key Passphrase||string| +|oauthToken|The Auth Token||string| +|password|Password to connect to Kubernetes||string| +|trustCerts|Define if the certs we used are trusted anyway or not||boolean| +|username|Username to connect to Kubernetes||string| diff --git a/camel-kubernetes-nodes.md b/camel-kubernetes-nodes.md new file mode 100644 index 0000000000000000000000000000000000000000..75957ad36e86a05fa0164d5d0dec064bde4dafc9 --- /dev/null +++ b/camel-kubernetes-nodes.md @@ -0,0 +1,120 @@ +# Kubernetes-nodes + +**Since Camel 2.17** + +**Both producer and consumer are supported** + +The Kubernetes Nodes component is one of [Kubernetes +Components](#kubernetes-summary.adoc) which provides a producer to +execute Kubernetes Node operations and a consumer to consume events +related to Node objects. + +# Supported producer operation + +- listNodes + +- listNodesByLabels + +- getNode + +- createNode + +- updateNode + +- deleteNode + +# Kubernetes Nodes Producer Examples + +- listNodes: this operation lists the nodes on a kubernetes cluster + + + + from("direct:list"). + toF("kubernetes-nodes:///?kubernetesClient=#kubernetesClient&operation=listNodes"). 
+    to("mock:result");
+
+This operation returns a List of Nodes from your cluster
+
+- listNodesByLabels: this operation lists the nodes by labels on a
+  kubernetes cluster
+
+
+
+    from("direct:listByLabels").process(new Processor() {
+        @Override
+        public void process(Exchange exchange) throws Exception {
+            Map<String, String> labels = new HashMap<>();
+            labels.put("key1", "value1");
+            labels.put("key2", "value2");
+            exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NODES_LABELS, labels);
+        }
+    }).
+    toF("kubernetes-nodes:///?kubernetesClient=#kubernetesClient&operation=listNodesByLabels").
+    to("mock:result");
+
+This operation returns a List of Nodes from your cluster, using a label
+selector (with key1 and key2, with value value1 and value2)
+
+# Kubernetes Nodes Consumer Example
+
+    fromF("kubernetes-nodes://%s?oauthToken=%s&resourceName=test", host, authToken).process(new KubernetesProcessor()).to("mock:result");
+    public class KubernetesProcessor implements Processor {
+        @Override
+        public void process(Exchange exchange) throws Exception {
+            Message in = exchange.getIn();
+            Node node = exchange.getIn().getBody(Node.class);
+            log.info("Got event with node name: " + node.getMetadata().getName() + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION));
+        }
+    }
+
+This consumer will return a list of events for the node test.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|kubernetesClient|To use an existing kubernetes client.||object|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. 
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|masterUrl|URL to a remote Kubernetes API server. This should only be used when your Camel application is connecting from outside Kubernetes. If you run your Camel application inside Kubernetes, then you can use local or client as the URL to tell Camel to run in local mode. 
If you connect remotely to Kubernetes, then you may also need some of the many other configuration options for secured connection with certificates, etc.||string| +|apiVersion|The Kubernetes API Version to use||string| +|dnsDomain|The dns domain, used for ServiceCall EIP||string| +|kubernetesClient|Default KubernetesClient to use if provided||object| +|namespace|The namespace||string| +|portName|The port name, used for ServiceCall EIP||string| +|portProtocol|The port protocol, used for ServiceCall EIP|tcp|string| +|crdGroup|The Consumer CRD Resource Group we would like to watch||string| +|crdName|The Consumer CRD Resource name we would like to watch||string| +|crdPlural|The Consumer CRD Resource Plural we would like to watch||string| +|crdScope|The Consumer CRD Resource Scope we would like to watch||string| +|crdVersion|The Consumer CRD Resource Version we would like to watch||string| +|labelKey|The Consumer Label key when watching at some resources||string| +|labelValue|The Consumer Label value when watching at some resources||string| +|poolSize|The Consumer pool size|1|integer| +|resourceName|The Consumer Resource Name we would like to watch||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|operation|Producer operation to do on Kubernetes||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|connectionTimeout|Connection timeout in milliseconds to use when making requests to the Kubernetes API server.||integer| +|caCertData|The CA Cert Data||string| +|caCertFile|The CA Cert File||string| +|clientCertData|The Client Cert Data||string| +|clientCertFile|The Client Cert File||string| +|clientKeyAlgo|The Key Algorithm used by the client||string| +|clientKeyData|The Client Key data||string| +|clientKeyFile|The Client Key file||string| +|clientKeyPassphrase|The Client Key Passphrase||string| +|oauthToken|The Auth Token||string| +|password|Password to connect to Kubernetes||string| +|trustCerts|Define if the certs we used are trusted anyway or not||boolean| +|username|Username to connect to Kubernetes||string| diff --git a/camel-kubernetes-persistent-volumes-claims.md b/camel-kubernetes-persistent-volumes-claims.md new file mode 100644 index 0000000000000000000000000000000000000000..6be044677fd5b0ac3ddf8e9edd3de0bf5f5be0b0 --- /dev/null +++ b/camel-kubernetes-persistent-volumes-claims.md @@ -0,0 +1,93 @@ +# Kubernetes-persistent-volumes-claims + +**Since Camel 2.17** + +**Only producer is supported** + +The Kubernetes Persistent Volume Claim component is one of [Kubernetes +Components](#kubernetes-summary.adoc) which provides a producer to +execute Kubernetes Persistent Volume Claims operations. + +# Supported producer operation + +- listPersistentVolumesClaims + +- listPersistentVolumesClaimsByLabels + +- getPersistentVolumeClaim + +- createPersistentVolumeClaim + +- updatePersistentVolumeClaim + +- deletePersistentVolumeClaim + +# Kubernetes Persistent Volume Claims Producer Examples + +- listPersistentVolumesClaims: this operation lists the pvc on a + kubernetes cluster + + + + from("direct:list"). 
+    toF("kubernetes-persistent-volumes-claims:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumesClaims").
+    to("mock:result");
+
+This operation returns a List of pvc from your cluster
+
+- listPersistentVolumesClaimsByLabels: this operation lists the pvc by
+  labels on a kubernetes cluster
+
+
+
+    from("direct:listByLabels").process(new Processor() {
+        @Override
+        public void process(Exchange exchange) throws Exception {
+            Map<String, String> labels = new HashMap<>();
+            labels.put("key1", "value1");
+            labels.put("key2", "value2");
+            exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_PERSISTENT_VOLUMES_CLAIMS_LABELS, labels);
+        }
+    }).
+    toF("kubernetes-persistent-volumes-claims:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumesClaimsByLabels").
+    to("mock:result");
+
+This operation returns a List of pvc from your cluster, using a label
+selector (with key1 and key2, with value value1 and value2)
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|kubernetesClient|To use an existing kubernetes client.||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|masterUrl|URL to a remote Kubernetes API server. This should only be used when your Camel application is connecting from outside Kubernetes. If you run your Camel application inside Kubernetes, then you can use local or client as the URL to tell Camel to run in local mode. If you connect remotely to Kubernetes, then you may also need some of the many other configuration options for secured connection with certificates, etc.||string| +|apiVersion|The Kubernetes API Version to use||string| +|dnsDomain|The dns domain, used for ServiceCall EIP||string| +|kubernetesClient|Default KubernetesClient to use if provided||object| +|namespace|The namespace||string| +|operation|Producer operation to do on Kubernetes||string| +|portName|The port name, used for ServiceCall EIP||string| +|portProtocol|The port protocol, used for ServiceCall EIP|tcp|string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|connectionTimeout|Connection timeout in milliseconds to use when making requests to the Kubernetes API server.||integer| +|caCertData|The CA Cert Data||string| +|caCertFile|The CA Cert File||string| +|clientCertData|The Client Cert Data||string| +|clientCertFile|The Client Cert File||string| +|clientKeyAlgo|The Key Algorithm used by the client||string| +|clientKeyData|The Client Key data||string| +|clientKeyFile|The Client Key file||string| +|clientKeyPassphrase|The Client Key Passphrase||string| +|oauthToken|The Auth Token||string| +|password|Password to connect to Kubernetes||string| +|trustCerts|Define if the certs we used are trusted anyway or not||boolean| +|username|Username to connect to Kubernetes||string| diff --git a/camel-kubernetes-persistent-volumes.md b/camel-kubernetes-persistent-volumes.md new file mode 100644 index 0000000000000000000000000000000000000000..4f644c3a731bfa436e56e722b262003a13e3bf0c --- /dev/null +++ b/camel-kubernetes-persistent-volumes.md @@ -0,0 +1,87 @@ +# Kubernetes-persistent-volumes + +**Since Camel 2.17** + +**Only producer is supported** + +The Kubernetes Persistent Volume component is one of [Kubernetes +Components](#kubernetes-summary.adoc) which provides a producer to +execute Kubernetes Persistent Volume operations. + +# Supported producer operation + +- listPersistentVolumes + +- listPersistentVolumesByLabels + +- getPersistentVolume + +# Kubernetes Persistent Volumes Producer Examples + +- listPersistentVolumes: this operation lists the pv on a kubernetes + cluster + + + + from("direct:list"). + toF("kubernetes-persistent-volumes:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumes"). 
+    to("mock:result");
+
+This operation returns a List of pv from your cluster
+
+- listPersistentVolumesByLabels: this operation lists the pv by labels
+  on a kubernetes cluster
+
+
+
+    from("direct:listByLabels").process(new Processor() {
+        @Override
+        public void process(Exchange exchange) throws Exception {
+            Map<String, String> labels = new HashMap<>();
+            labels.put("key1", "value1");
+            labels.put("key2", "value2");
+            exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_PERSISTENT_VOLUMES_LABELS, labels);
+        }
+    }).
+    toF("kubernetes-persistent-volumes:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumesByLabels").
+    to("mock:result");
+
+This operation returns a List of pv from your cluster, using a label
+selector (with key1 and key2, with value value1 and value2)
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|kubernetesClient|To use an existing kubernetes client.||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|masterUrl|URL to a remote Kubernetes API server. This should only be used when your Camel application is connecting from outside Kubernetes. If you run your Camel application inside Kubernetes, then you can use local or client as the URL to tell Camel to run in local mode. If you connect remotely to Kubernetes, then you may also need some of the many other configuration options for secured connection with certificates, etc.||string| +|apiVersion|The Kubernetes API Version to use||string| +|dnsDomain|The dns domain, used for ServiceCall EIP||string| +|kubernetesClient|Default KubernetesClient to use if provided||object| +|namespace|The namespace||string| +|operation|Producer operation to do on Kubernetes||string| +|portName|The port name, used for ServiceCall EIP||string| +|portProtocol|The port protocol, used for ServiceCall EIP|tcp|string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|connectionTimeout|Connection timeout in milliseconds to use when making requests to the Kubernetes API server.||integer| +|caCertData|The CA Cert Data||string| +|caCertFile|The CA Cert File||string| +|clientCertData|The Client Cert Data||string| +|clientCertFile|The Client Cert File||string| +|clientKeyAlgo|The Key Algorithm used by the client||string| +|clientKeyData|The Client Key data||string| +|clientKeyFile|The Client Key file||string| +|clientKeyPassphrase|The Client Key Passphrase||string| +|oauthToken|The Auth Token||string| +|password|Password to connect to Kubernetes||string| +|trustCerts|Define if the certs we used are trusted anyway or not||boolean| +|username|Username to connect to Kubernetes||string| diff --git a/camel-kubernetes-pods.md b/camel-kubernetes-pods.md new file mode 100644 index 0000000000000000000000000000000000000000..2cefdd8c85ecb81878874eb3c1890e56126922a2 --- /dev/null +++ b/camel-kubernetes-pods.md @@ -0,0 +1,121 @@ +# Kubernetes-pods + +**Since Camel 2.17** + +**Both producer and consumer are supported** + +The Kubernetes Pods component is one of [Kubernetes +Components](#kubernetes-summary.adoc) which provides a producer to +execute Kubernetes Pods operations and a consumer to consume events +related to Pod Objects. + +# Supported producer operation + +- listPods + +- listPodsByLabels + +- getPod + +- createPod + +- updatePod + +- deletePod + +# Kubernetes Pods Producer Examples + +- listPods: this operation lists the pods on a kubernetes cluster + + + + from("direct:list"). + toF("kubernetes-pods:///?kubernetesClient=#kubernetesClient&operation=listPods"). 
+    to("mock:result");
+
+This operation returns a List of Pods from your cluster
+
+- listPodsByLabels: this operation lists the pods by labels on a
+  kubernetes cluster
+
+
+
+    from("direct:listByLabels").process(new Processor() {
+        @Override
+        public void process(Exchange exchange) throws Exception {
+            Map<String, String> labels = new HashMap<>();
+            labels.put("key1", "value1");
+            labels.put("key2", "value2");
+            exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_PODS_LABELS, labels);
+        }
+    }).
+    toF("kubernetes-pods:///?kubernetesClient=#kubernetesClient&operation=listPodsByLabels").
+    to("mock:result");
+
+This operation returns a List of Pods from your cluster, using a label
+selector (with key1 and key2, with value value1 and value2)
+
+# Kubernetes Pods Consumer Example
+
+    fromF("kubernetes-pods://%s?oauthToken=%s&namespace=default&resourceName=test", host, authToken).process(new KubernetesProcessor()).to("mock:result");
+    public class KubernetesProcessor implements Processor {
+        @Override
+        public void process(Exchange exchange) throws Exception {
+            Message in = exchange.getIn();
+            Pod pod = exchange.getIn().getBody(Pod.class);
+            log.info("Got event with pod name: " + pod.getMetadata().getName() + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION));
+        }
+    }
+
+This consumer will return a list of events on the namespace default for
+the pod test.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|kubernetesClient|To use an existing kubernetes client.||object|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. 
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|masterUrl|URL to a remote Kubernetes API server. This should only be used when your Camel application is connecting from outside Kubernetes. If you run your Camel application inside Kubernetes, then you can use local or client as the URL to tell Camel to run in local mode. 
If you connect remotely to Kubernetes, then you may also need some of the many other configuration options for secured connection with certificates, etc.||string| +|apiVersion|The Kubernetes API Version to use||string| +|dnsDomain|The dns domain, used for ServiceCall EIP||string| +|kubernetesClient|Default KubernetesClient to use if provided||object| +|namespace|The namespace||string| +|portName|The port name, used for ServiceCall EIP||string| +|portProtocol|The port protocol, used for ServiceCall EIP|tcp|string| +|crdGroup|The Consumer CRD Resource Group we would like to watch||string| +|crdName|The Consumer CRD Resource name we would like to watch||string| +|crdPlural|The Consumer CRD Resource Plural we would like to watch||string| +|crdScope|The Consumer CRD Resource Scope we would like to watch||string| +|crdVersion|The Consumer CRD Resource Version we would like to watch||string| +|labelKey|The Consumer Label key when watching at some resources||string| +|labelValue|The Consumer Label value when watching at some resources||string| +|poolSize|The Consumer pool size|1|integer| +|resourceName|The Consumer Resource Name we would like to watch||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|operation|Producer operation to do on Kubernetes||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|connectionTimeout|Connection timeout in milliseconds to use when making requests to the Kubernetes API server.||integer| +|caCertData|The CA Cert Data||string| +|caCertFile|The CA Cert File||string| +|clientCertData|The Client Cert Data||string| +|clientCertFile|The Client Cert File||string| +|clientKeyAlgo|The Key Algorithm used by the client||string| +|clientKeyData|The Client Key data||string| +|clientKeyFile|The Client Key file||string| +|clientKeyPassphrase|The Client Key Passphrase||string| +|oauthToken|The Auth Token||string| +|password|Password to connect to Kubernetes||string| +|trustCerts|Define if the certs we used are trusted anyway or not||boolean| +|username|Username to connect to Kubernetes||string| diff --git a/camel-kubernetes-replication-controllers.md b/camel-kubernetes-replication-controllers.md new file mode 100644 index 0000000000000000000000000000000000000000..c7af1d805f177b65ce99039569c8303baa3c70f5 --- /dev/null +++ b/camel-kubernetes-replication-controllers.md @@ -0,0 +1,124 @@ +# Kubernetes-replication-controllers + +**Since Camel 2.17** + +**Both producer and consumer are supported** + +The Kubernetes Replication Controller component is one of [Kubernetes +Components](#kubernetes-summary.adoc) which provides a producer to +execute Kubernetes Replication controller operations and a consumer to +consume events related to Replication Controller objects. 
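+The producer and consumer examples in these pages reference a `#kubernetesClient` bean from the registry, and the consumer examples build their endpoint URIs with `fromF`, which expands `%s` placeholders the same way `String.format` does. A minimal self-contained sketch of that URI expansion (the host and token values below are hypothetical placeholders, not values from a real cluster):
+
+```java
+public class EndpointUriSketch {
+    public static void main(String[] args) {
+        // Hypothetical values; in a real route these come from your cluster configuration.
+        String host = "kubernetes.default.svc";
+        String authToken = "REPLACE_WITH_TOKEN";
+        // fromF(...) in the examples expands %s placeholders exactly like String.format:
+        String uri = String.format(
+            "kubernetes-replication-controllers://%s?oauthToken=%s&namespace=default&resourceName=test",
+            host, authToken);
+        System.out.println(uri);
+    }
+}
+```
+
+The same pattern applies to the `toF(...)` calls in the producer examples, which format the endpoint URI before sending.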
+
+# Supported producer operation
+
+- listReplicationControllers
+
+- listReplicationControllersByLabels
+
+- getReplicationController
+
+- createReplicationController
+
+- updateReplicationController
+
+- deleteReplicationController
+
+- scaleReplicationController
+
+# Kubernetes Replication Controllers Producer Examples
+
+- listReplicationControllers: this operation lists the RCs on a
+  kubernetes cluster
+
+
+
+    from("direct:list").
+    toF("kubernetes-replication-controllers:///?kubernetesClient=#kubernetesClient&operation=listReplicationControllers").
+    to("mock:result");
+
+This operation returns a List of RCs from your cluster
+
+- listReplicationControllersByLabels: this operation lists the RCs by
+  labels on a kubernetes cluster
+
+
+
+    from("direct:listByLabels").process(new Processor() {
+        @Override
+        public void process(Exchange exchange) throws Exception {
+            Map<String, String> labels = new HashMap<>();
+            labels.put("key1", "value1");
+            labels.put("key2", "value2");
+            exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_REPLICATION_CONTROLLERS_LABELS, labels);
+        }
+    }).
+    toF("kubernetes-replication-controllers:///?kubernetesClient=#kubernetesClient&operation=listReplicationControllersByLabels").
+ to("mock:result");
+
+This operation returns a List of RCs from your cluster, using a label
+selector (with keys key1 and key2, and values value1 and value2).
+
+# Kubernetes Replication Controllers Consumer Example
+
+    fromF("kubernetes-replication-controllers://%s?oauthToken=%s&namespace=default&resourceName=test", host, authToken).process(new KubernetesProcessor()).to("mock:result");
+
+    public class KubernetesProcessor implements Processor {
+        @Override
+        public void process(Exchange exchange) throws Exception {
+            Message in = exchange.getIn();
+            ReplicationController rc = exchange.getIn().getBody(ReplicationController.class);
+            log.info("Got event with replication controller name: " + rc.getMetadata().getName() + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION));
+        }
+    }
+
+This consumer will return events on the default namespace for the
+replication controller test.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|kubernetesClient|To use an existing kubernetes client.||object|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|masterUrl|URL to a remote Kubernetes API server. This should only be used when your Camel application is connecting from outside Kubernetes. If you run your Camel application inside Kubernetes, then you can use local or client as the URL to tell Camel to run in local mode. 
If you connect remotely to Kubernetes, then you may also need some of the many other configuration options for secured connection with certificates, etc.||string| +|apiVersion|The Kubernetes API Version to use||string| +|dnsDomain|The dns domain, used for ServiceCall EIP||string| +|kubernetesClient|Default KubernetesClient to use if provided||object| +|namespace|The namespace||string| +|portName|The port name, used for ServiceCall EIP||string| +|portProtocol|The port protocol, used for ServiceCall EIP|tcp|string| +|crdGroup|The Consumer CRD Resource Group we would like to watch||string| +|crdName|The Consumer CRD Resource name we would like to watch||string| +|crdPlural|The Consumer CRD Resource Plural we would like to watch||string| +|crdScope|The Consumer CRD Resource Scope we would like to watch||string| +|crdVersion|The Consumer CRD Resource Version we would like to watch||string| +|labelKey|The Consumer Label key when watching at some resources||string| +|labelValue|The Consumer Label value when watching at some resources||string| +|poolSize|The Consumer pool size|1|integer| +|resourceName|The Consumer Resource Name we would like to watch||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|operation|Producer operation to do on Kubernetes||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|connectionTimeout|Connection timeout in milliseconds to use when making requests to the Kubernetes API server.||integer| +|caCertData|The CA Cert Data||string| +|caCertFile|The CA Cert File||string| +|clientCertData|The Client Cert Data||string| +|clientCertFile|The Client Cert File||string| +|clientKeyAlgo|The Key Algorithm used by the client||string| +|clientKeyData|The Client Key data||string| +|clientKeyFile|The Client Key file||string| +|clientKeyPassphrase|The Client Key Passphrase||string| +|oauthToken|The Auth Token||string| +|password|Password to connect to Kubernetes||string| +|trustCerts|Define if the certs we used are trusted anyway or not||boolean| +|username|Username to connect to Kubernetes||string| diff --git a/camel-kubernetes-resources-quota.md b/camel-kubernetes-resources-quota.md new file mode 100644 index 0000000000000000000000000000000000000000..1de863fd49ef38e0053b70ccc77681a96ed3e924 --- /dev/null +++ b/camel-kubernetes-resources-quota.md @@ -0,0 +1,94 @@ +# Kubernetes-resources-quota + +**Since Camel 2.17** + +**Only producer is supported** + +The Kubernetes Resources Quota component is one of [Kubernetes +Components](#kubernetes-summary.adoc) which provides a producer to +execute Kubernetes Resource Quota operations. + +# Supported producer operation + +- listResourcesQuota + +- listResourcesQuotaByLabels + +- getResourcesQuota + +- createResourcesQuota + +- updateResourceQuota + +- deleteResourcesQuota + +# Kubernetes Resource Quota Producer Examples + +- listResourcesQuota: this operation lists the Resource Quotas on a + kubernetes cluster + + + + from("direct:list"). + toF("kubernetes-resources-quota:///?kubernetesClient=#kubernetesClient&operation=listResourcesQuota"). 
+ to("mock:result");
+
+This operation returns a List of Resource Quotas from your cluster.
+
+- listResourcesQuotaByLabels: this operation lists the Resource Quotas
+  by labels on a kubernetes cluster
+
+
+
+    from("direct:listByLabels").process(new Processor() {
+            @Override
+            public void process(Exchange exchange) throws Exception {
+                Map<String, String> labels = new HashMap<>();
+                labels.put("key1", "value1");
+                labels.put("key2", "value2");
+                exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_RESOURCES_QUOTA_LABELS, labels);
+            }
+        }).
+        toF("kubernetes-resources-quota:///?kubernetesClient=#kubernetesClient&operation=listResourcesQuotaByLabels").
+        to("mock:result");
+
+This operation returns a List of Resource Quotas from your cluster,
+using a label selector (with keys key1 and key2, and values value1 and
+value2).
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|kubernetesClient|To use an existing kubernetes client.||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|masterUrl|URL to a remote Kubernetes API server. This should only be used when your Camel application is connecting from outside Kubernetes. If you run your Camel application inside Kubernetes, then you can use local or client as the URL to tell Camel to run in local mode. If you connect remotely to Kubernetes, then you may also need some of the many other configuration options for secured connection with certificates, etc.||string| +|apiVersion|The Kubernetes API Version to use||string| +|dnsDomain|The dns domain, used for ServiceCall EIP||string| +|kubernetesClient|Default KubernetesClient to use if provided||object| +|namespace|The namespace||string| +|operation|Producer operation to do on Kubernetes||string| +|portName|The port name, used for ServiceCall EIP||string| +|portProtocol|The port protocol, used for ServiceCall EIP|tcp|string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|connectionTimeout|Connection timeout in milliseconds to use when making requests to the Kubernetes API server.||integer| +|caCertData|The CA Cert Data||string| +|caCertFile|The CA Cert File||string| +|clientCertData|The Client Cert Data||string| +|clientCertFile|The Client Cert File||string| +|clientKeyAlgo|The Key Algorithm used by the client||string| +|clientKeyData|The Client Key data||string| +|clientKeyFile|The Client Key file||string| +|clientKeyPassphrase|The Client Key Passphrase||string| +|oauthToken|The Auth Token||string| +|password|Password to connect to Kubernetes||string| +|trustCerts|Define if the certs we used are trusted anyway or not||boolean| +|username|Username to connect to Kubernetes||string| diff --git a/camel-kubernetes-secrets.md b/camel-kubernetes-secrets.md new file mode 100644 index 0000000000000000000000000000000000000000..e0fae2a2e9756226bd84c08e7c521c8578aa490e --- /dev/null +++ b/camel-kubernetes-secrets.md @@ -0,0 +1,93 @@ +# Kubernetes-secrets + +**Since Camel 2.17** + +**Only producer is supported** + +The Kubernetes Secrets component is one of [Kubernetes +Components](#kubernetes-summary.adoc) which provides a producer to +execute Kubernetes Secrets operations. + +# Supported producer operation + +- listSecrets + +- listSecretsByLabels + +- getSecret + +- createSecret + +- updateSecret + +- deleteSecret + +# Kubernetes Secrets Producer Examples + +- listSecrets: this operation lists the secrets on a kubernetes + cluster + + + + from("direct:list"). + toF("kubernetes-secrets:///?kubernetesClient=#kubernetesClient&operation=listSecrets"). 
+ to("mock:result");
+
+This operation returns a List of secrets from your cluster.
+
+- listSecretsByLabels: this operation lists the Secrets by labels on a
+  kubernetes cluster
+
+
+
+    from("direct:listByLabels").process(new Processor() {
+            @Override
+            public void process(Exchange exchange) throws Exception {
+                Map<String, String> labels = new HashMap<>();
+                labels.put("key1", "value1");
+                labels.put("key2", "value2");
+                exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_SECRETS_LABELS, labels);
+            }
+        }).
+        toF("kubernetes-secrets:///?kubernetesClient=#kubernetesClient&operation=listSecretsByLabels").
+        to("mock:result");
+
+This operation returns a List of Secrets from your cluster, using a
+label selector (with keys key1 and key2, and values value1 and value2).
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|kubernetesClient|To use an existing kubernetes client.||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|masterUrl|URL to a remote Kubernetes API server. This should only be used when your Camel application is connecting from outside Kubernetes. If you run your Camel application inside Kubernetes, then you can use local or client as the URL to tell Camel to run in local mode. If you connect remotely to Kubernetes, then you may also need some of the many other configuration options for secured connection with certificates, etc.||string| +|apiVersion|The Kubernetes API Version to use||string| +|dnsDomain|The dns domain, used for ServiceCall EIP||string| +|kubernetesClient|Default KubernetesClient to use if provided||object| +|namespace|The namespace||string| +|operation|Producer operation to do on Kubernetes||string| +|portName|The port name, used for ServiceCall EIP||string| +|portProtocol|The port protocol, used for ServiceCall EIP|tcp|string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|connectionTimeout|Connection timeout in milliseconds to use when making requests to the Kubernetes API server.||integer|
+|caCertData|The CA Cert Data||string|
+|caCertFile|The CA Cert File||string|
+|clientCertData|The Client Cert Data||string|
+|clientCertFile|The Client Cert File||string|
+|clientKeyAlgo|The Key Algorithm used by the client||string|
+|clientKeyData|The Client Key data||string|
+|clientKeyFile|The Client Key file||string|
+|clientKeyPassphrase|The Client Key Passphrase||string|
+|oauthToken|The Auth Token||string|
+|password|Password to connect to Kubernetes||string|
+|trustCerts|Define if the certs we used are trusted anyway or not||boolean|
+|username|Username to connect to Kubernetes||string|
diff --git a/camel-kubernetes-service-accounts.md b/camel-kubernetes-service-accounts.md
new file mode 100644
index 0000000000000000000000000000000000000000..8e4baeeb954ba6d65ca084b7506fa372a1233102
--- /dev/null
+++ b/camel-kubernetes-service-accounts.md
@@ -0,0 +1,93 @@
+# Kubernetes-service-accounts
+
+**Since Camel 2.17**
+
+**Only producer is supported**
+
+The Kubernetes Service Account component is one of [Kubernetes
+Components](#kubernetes-summary.adoc) which provides a producer to
+execute Kubernetes Service Account operations.
+
+# Supported producer operation
+
+- listServiceAccounts
+
+- listServiceAccountsByLabels
+
+- getServiceAccount
+
+- createServiceAccount
+
+- updateServiceAccount
+
+- deleteServiceAccount
+
+# Kubernetes ServiceAccounts Producer Examples
+
+- listServiceAccounts: this operation lists the service accounts on a
+  kubernetes cluster
+
+
+
+    from("direct:list").
+        toF("kubernetes-service-accounts:///?kubernetesClient=#kubernetesClient&operation=listServiceAccounts").
+ to("mock:result");
+
+This operation returns a List of service accounts from your cluster.
+
+- listServiceAccountsByLabels: this operation lists the service
+  accounts by labels on a kubernetes cluster
+
+
+
+    from("direct:listByLabels").process(new Processor() {
+            @Override
+            public void process(Exchange exchange) throws Exception {
+                Map<String, String> labels = new HashMap<>();
+                labels.put("key1", "value1");
+                labels.put("key2", "value2");
+                exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_SERVICE_ACCOUNTS_LABELS, labels);
+            }
+        }).
+        toF("kubernetes-service-accounts:///?kubernetesClient=#kubernetesClient&operation=listServiceAccountsByLabels").
+        to("mock:result");
+
+This operation returns a List of service accounts from your cluster,
+using a label selector (with keys key1 and key2, and values value1 and
+value2).
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|kubernetesClient|To use an existing kubernetes client.||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|masterUrl|URL to a remote Kubernetes API server. This should only be used when your Camel application is connecting from outside Kubernetes. If you run your Camel application inside Kubernetes, then you can use local or client as the URL to tell Camel to run in local mode. If you connect remotely to Kubernetes, then you may also need some of the many other configuration options for secured connection with certificates, etc.||string| +|apiVersion|The Kubernetes API Version to use||string| +|dnsDomain|The dns domain, used for ServiceCall EIP||string| +|kubernetesClient|Default KubernetesClient to use if provided||object| +|namespace|The namespace||string| +|operation|Producer operation to do on Kubernetes||string| +|portName|The port name, used for ServiceCall EIP||string| +|portProtocol|The port protocol, used for ServiceCall EIP|tcp|string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|connectionTimeout|Connection timeout in milliseconds to use when making requests to the Kubernetes API server.||integer|
+|caCertData|The CA Cert Data||string|
+|caCertFile|The CA Cert File||string|
+|clientCertData|The Client Cert Data||string|
+|clientCertFile|The Client Cert File||string|
+|clientKeyAlgo|The Key Algorithm used by the client||string|
+|clientKeyData|The Client Key data||string|
+|clientKeyFile|The Client Key file||string|
+|clientKeyPassphrase|The Client Key Passphrase||string|
+|oauthToken|The Auth Token||string|
+|password|Password to connect to Kubernetes||string|
+|trustCerts|Define if the certs we used are trusted anyway or not||boolean|
+|username|Username to connect to Kubernetes||string|
diff --git a/camel-kubernetes-services.md b/camel-kubernetes-services.md
new file mode 100644
index 0000000000000000000000000000000000000000..a4ce0b7e224db456f99d40a85ecc52ba305720c4
--- /dev/null
+++ b/camel-kubernetes-services.md
@@ -0,0 +1,121 @@
+# Kubernetes-services
+
+**Since Camel 2.17**
+
+**Both producer and consumer are supported**
+
+The Kubernetes Services component is one of [Kubernetes
+Components](#kubernetes-summary.adoc) which provides a producer to
+execute Kubernetes Service operations and a consumer to consume events
+related to Service objects.
+
+# Supported producer operation
+
+- listServices
+
+- listServicesByLabels
+
+- getService
+
+- createService
+
+- deleteService
+
+# Kubernetes Services Producer Examples
+
+- listServices: this operation lists the services on a kubernetes
+  cluster
+
+
+
+    from("direct:list").
+        toF("kubernetes-services:///?kubernetesClient=#kubernetesClient&operation=listServices").
+ to("mock:result");
+
+This operation returns a List of services from your cluster.
+
+- listServicesByLabels: this operation lists the services by labels
+  on a kubernetes cluster
+
+
+
+    from("direct:listByLabels").process(new Processor() {
+            @Override
+            public void process(Exchange exchange) throws Exception {
+                Map<String, String> labels = new HashMap<>();
+                labels.put("key1", "value1");
+                labels.put("key2", "value2");
+                exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_SERVICE_LABELS, labels);
+            }
+        }).
+        toF("kubernetes-services:///?kubernetesClient=#kubernetesClient&operation=listServicesByLabels").
+        to("mock:result");
+
+This operation returns a List of Services from your cluster, using a
+label selector (with keys key1 and key2, and values value1 and value2).
+
+# Kubernetes Services Consumer Example
+
+    fromF("kubernetes-services://%s?oauthToken=%s&namespace=default&resourceName=test", host, authToken).process(new KubernetesProcessor()).to("mock:result");
+
+    public class KubernetesProcessor implements Processor {
+        @Override
+        public void process(Exchange exchange) throws Exception {
+            Message in = exchange.getIn();
+            Service sv = exchange.getIn().getBody(Service.class);
+            log.info("Got event with service name: " + sv.getMetadata().getName() + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION));
+        }
+    }
+
+This consumer will return events on the default namespace for the
+service test.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|kubernetesClient|To use an existing kubernetes client.||object|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|masterUrl|URL to a remote Kubernetes API server. This should only be used when your Camel application is connecting from outside Kubernetes. If you run your Camel application inside Kubernetes, then you can use local or client as the URL to tell Camel to run in local mode. 
If you connect remotely to Kubernetes, then you may also need some of the many other configuration options for secured connection with certificates, etc.||string| +|apiVersion|The Kubernetes API Version to use||string| +|dnsDomain|The dns domain, used for ServiceCall EIP||string| +|kubernetesClient|Default KubernetesClient to use if provided||object| +|namespace|The namespace||string| +|portName|The port name, used for ServiceCall EIP||string| +|portProtocol|The port protocol, used for ServiceCall EIP|tcp|string| +|crdGroup|The Consumer CRD Resource Group we would like to watch||string| +|crdName|The Consumer CRD Resource name we would like to watch||string| +|crdPlural|The Consumer CRD Resource Plural we would like to watch||string| +|crdScope|The Consumer CRD Resource Scope we would like to watch||string| +|crdVersion|The Consumer CRD Resource Version we would like to watch||string| +|labelKey|The Consumer Label key when watching at some resources||string| +|labelValue|The Consumer Label value when watching at some resources||string| +|poolSize|The Consumer pool size|1|integer| +|resourceName|The Consumer Resource Name we would like to watch||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|operation|Producer operation to do on Kubernetes||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|connectionTimeout|Connection timeout in milliseconds to use when making requests to the Kubernetes API server.||integer|
+|caCertData|The CA Cert Data||string|
+|caCertFile|The CA Cert File||string|
+|clientCertData|The Client Cert Data||string|
+|clientCertFile|The Client Cert File||string|
+|clientKeyAlgo|The Key Algorithm used by the client||string|
+|clientKeyData|The Client Key data||string|
+|clientKeyFile|The Client Key file||string|
+|clientKeyPassphrase|The Client Key Passphrase||string|
+|oauthToken|The Auth Token||string|
+|password|Password to connect to Kubernetes||string|
+|trustCerts|Define if the certs we used are trusted anyway or not||boolean|
+|username|Username to connect to Kubernetes||string|
diff --git a/camel-kudu.md b/camel-kudu.md
new file mode 100644
index 0000000000000000000000000000000000000000..c55a693df5b8d254a34d60752d53893e702d41bc
--- /dev/null
+++ b/camel-kudu.md
@@ -0,0 +1,63 @@
+# Kudu
+
+**Since Camel 3.0**
+
+**Only producer is supported**
+
+The Kudu component supports storing and retrieving data from/to [Apache
+Kudu](https://kudu.apache.org/), a free and open source column-oriented
+data store of the Apache Hadoop ecosystem.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-kudu</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# Prerequisites
+
+You must have a valid Kudu instance running. More information is
+available at [Apache Kudu](https://kudu.apache.org/).
+
+# Input Body formats
+
+## Insert, delete, update, and upsert
+
+The input body format has to be a `java.util.Map<String, Object>`.
+This map represents a row of the table: each entry is one column,
+where the key is the column name and the value is the value of the
+column.
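As a minimal stand-alone sketch of that body format (the table columns `id`, `title`, and `author` here are hypothetical, not from the component docs), a row map could be assembled like this before being sent to a kudu endpoint:

```java
import java.util.HashMap;
import java.util.Map;

public class KuduRowExample {

    // Build one table row: key = column name, value = column value
    static Map<String, Object> buildRow() {
        Map<String, Object> row = new HashMap<>();
        row.put("id", 1);
        row.put("title", "The Trial");
        row.put("author", "Franz Kafka");
        return row;
    }

    public static void main(String[] args) {
        Map<String, Object> row = buildRow();
        // In a route, this map would be the message body, e.g. (illustrative URI):
        // template.sendBody("kudu:localhost:7051/myTable?operation=insert", row);
        System.out.println("columns: " + row.size());
    }
}
```

The same map shape is used for insert, delete, update, and upsert; only the `operation` endpoint option changes.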
+
+# Output Body formats
+
+## Scan
+
+The output body format will be a
+`java.util.List<java.util.Map<String, Object>>`. Each element
+of the list will be a different row of the table. Each row is a
+`Map<String, Object>` whose elements will be each pair of column
+name and column value for that row.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|kuduClient|To use an existing Kudu client instance, instead of creating a client per endpoint. This allows you to customize various aspects to the client configuration.||object|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|host|Host of the server to connect to||string|
+|port|Port of the server to connect to||string|
+|tableName|Table to connect to||string|
+|operation|Operation to perform||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-langchain4j-chat.md b/camel-langchain4j-chat.md
new file mode 100644
index 0000000000000000000000000000000000000000..6abeac425ca5ccc41c00f53924a209ea81ef4c94
--- /dev/null
+++ b/camel-langchain4j-chat.md
@@ -0,0 +1,162 @@
+# Langchain4j-chat
+
+**Since Camel 4.5**
+
+**Only producer is supported**
+
+The LangChain4j Chat Component allows you to integrate with any LLM
+supported by [LangChain4j](https://github.com/langchain4j/langchain4j).
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-langchain4j-chat</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    langchain4j-chat:chatId[?options]
+
+Where **chatId** can be any string to uniquely identify the endpoint.
+
+# Using a specific Chat Model
+
+The Camel LangChain4j chat component provides an abstraction for
+interacting with various types of Large Language Models (LLMs) supported
+by [LangChain4j](https://github.com/langchain4j/langchain4j).
+
+To integrate with a specific Large Language Model, users should follow
+these steps:
+
+## Example of Integrating with OpenAI
+
+Add the dependency for LangChain4j OpenAI support:
+
+    <dependency>
+        <groupId>dev.langchain4j</groupId>
+        <artifactId>langchain4j-open-ai</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+Initialize the OpenAI Chat Language Model, and add it to the Camel Registry:
+
+    ChatLanguageModel model = OpenAiChatModel.builder()
+            .apiKey(openApiKey)
+            .modelName(GPT_3_5_TURBO)
+            .temperature(0.3)
+            .timeout(ofSeconds(3000))
+            .build();
+    context.getRegistry().bind("chatModel", model);
+
+Use the model in the Camel LangChain4j Chat Producer:
+
+    from("direct:chat")
+        .to("langchain4j-chat:test?chatModel=#chatModel")
+
+To switch to another Large Language Model and its corresponding
+dependency, replace the `langchain4j-open-ai` dependency with the
+appropriate dependency for the desired model. Update the initialization
+parameters accordingly in the code snippet provided above.
+
+# Send a prompt with variables
+
+To send a prompt with variables, use the Operation type
+`LangChain4jChatOperations.CHAT_SINGLE_MESSAGE_WITH_PROMPT`. This
+operation allows you to send a single prompt message with dynamic
+variables, which will be replaced with values provided in the request.
+
+Example of route:
+
+    from("direct:chat")
+        .to("langchain4j-chat:test?chatModel=#chatModel&chatOperation=CHAT_SINGLE_MESSAGE_WITH_PROMPT")
+
+Example of usage:
+
+    var promptTemplate = "Create a recipe for a {{dishType}} with the following ingredients: {{ingredients}}";
+
+    Map<String, Object> variables = new HashMap<>();
+    variables.put("dishType", "oven dish");
+    variables.put("ingredients", "potato, tomato, feta, olive oil");
+
+    String response = template.requestBodyAndHeader("direct:chat", variables,
+            LangChain4jChat.Headers.PROMPT_TEMPLATE, promptTemplate, String.class);
+
+# Chat with history
+
+You can send a new prompt along with the chat message history by passing
+all messages in a list of type
+`dev.langchain4j.data.message.ChatMessage`.
Use the Operation type
+`LangChain4jChatOperations.CHAT_MULTIPLE_MESSAGES`. This operation
+allows you to continue the conversation with the context of previous
+messages.
+
+Example of route:
+
+    from("direct:chat")
+        .to("langchain4j-chat:test?chatModel=#chatModel&chatOperation=CHAT_MULTIPLE_MESSAGES")
+
+Example of usage:
+
+    List<ChatMessage> messages = new ArrayList<>();
+    messages.add(new SystemMessage("You are asked to provide recommendations for a restaurant based on user reviews."));
+    // Add more chat messages as needed
+
+    String response = template.requestBody("direct:send-multiple", messages, String.class);
+
+# Chat with Tool
+
+The camel-langchain4j-chat component can be used as a consumer to
+implement a LangChain4j tool. Currently, tools are supported only via
+the OpenAiChatModel backed by OpenAI APIs.
+
+A tool input parameter can be defined as an endpoint multiValue option
+in the form of `parameter.<name>=<type>`, or via the endpoint option
+camelToolParameter for a programmatic approach. The parameters are
+available as headers in the consumer route; in particular, if you define
+`parameter.userId=5`, then `${header.userId}` can be used in the
+consumer route.
+
+Example of a producer and a consumer:
+
+    from("direct:test")
+        .to("langchain4j-chat:test1?chatOperation=CHAT_MULTIPLE_MESSAGES");
+
+    from("langchain4j-chat:test1?description=Query user database by number&parameter.number=integer")
+        .to("sql:SELECT name FROM users WHERE id = :#number");
+
+Example of usage:
+
+    List<ChatMessage> messages = new ArrayList<>();
+    messages.add(new SystemMessage("""
+            You provide information about specific user name querying the database given a number.
+            """));
+    messages.add(new UserMessage("""
+            What is the name of the user 1?
+            """));
+
+    Exchange message = fluentTemplate.to("direct:test").withBody(messages).request(Exchange.class);
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|chatOperation|Operation in case of Endpoint of type CHAT.
The value is one of the values of org.apache.camel.component.langchain4j.chat.LangChain4jChatOperations|CHAT\_SINGLE\_MESSAGE|object| +|configuration|The configuration.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|chatModel|Chat Language Model of type dev.langchain4j.model.chat.ChatLanguageModel||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|chatId|The id||string| +|chatOperation|Operation in case of Endpoint of type CHAT. The value is one of the values of org.apache.camel.component.langchain4j.chat.LangChain4jChatOperations|CHAT\_SINGLE\_MESSAGE|object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|chatModel|Chat Language Model of type dev.langchain4j.model.chat.ChatLanguageModel||object|
diff --git a/camel-langchain4j-embeddings.md b/camel-langchain4j-embeddings.md
new file mode 100644
index 0000000000000000000000000000000000000000..7e40e344d14814237c79e218782697fa1012d18f
--- /dev/null
+++ b/camel-langchain4j-embeddings.md
@@ -0,0 +1,35 @@
+# Langchain4j-embeddings
+
+**Since Camel 4.5**
+
+**Only producer is supported**
+
+The LangChain4j embeddings component provides support for computing
+embeddings using [LangChain4j](https://docs.langchain4j.dev/)
+embeddings.
+
+# URI format
+
+    langchain4j-embeddings:embeddingId[?options]
+
+Where **embeddingId** can be any string to uniquely identify the
+endpoint.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|configuration|The configuration.||object|
+|embeddingModel|The EmbeddingModel engine to use.||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component.
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|embeddingId|The id||string| +|embeddingModel|The EmbeddingModel engine to use.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-langchain4j-web-search.md b/camel-langchain4j-web-search.md new file mode 100644 index 0000000000000000000000000000000000000000..91febb0c388f00429307594ad0a8167d6968433d --- /dev/null +++ b/camel-langchain4j-web-search.md @@ -0,0 +1,130 @@ +# Langchain4j-web-search + +**Since Camel 4.8** + +**Only producer is supported** + +The LangChain4j Web Search component provides support for web searching +using the [LangChain4j](https://docs.langchain4j.dev/) Web Search +Engines. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-langchain4j-web-search + x.x.x + + + +# URI format + + langchain4j-web-search:searchId[?options] + +Where **searchId** can be any string to uniquely identify the endpoint + +# Using a specific Web Search Engine + +The Camel LangChain4j web search component provides an abstraction for +interacting with various types of Web Search Engines supported by +[LangChain4j](https://github.com/langchain4j/langchain4j). 
+
+To integrate with a specific Web Search Engine, users should follow
+these steps:
+
+## Example of integrating with Tavily
+
+Add the dependency for LangChain4j Tavily Web Search Engine support:
+
+    <dependency>
+        <groupId>dev.langchain4j</groupId>
+        <artifactId>langchain4j-web-search-engine-tavily</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+Initialize the Tavily Web Search Engine, and bind it to the Camel
+Registry:
+
+    @BindToRegistry("web-search-engine")
+    WebSearchEngine tavilyWebSearchEngine = TavilyWebSearchEngine.builder()
+            .apiKey(tavilyApiKey)
+            .includeRawContent(true)
+            .build();
+
+The web search engine will be autowired automatically if its bound name
+is `web-search-engine`. Otherwise, it should be added as a configured
+parameter to the Camel route.
+
+Example of route:
+
+    from("direct:web-search")
+        .to("langchain4j-web-search:test?webSearchEngine=#web-search-engine-test")
+
+To switch to another Web Search Engine and its corresponding dependency,
+replace the `langchain4j-web-search-engine-tavily` dependency with the
+appropriate dependency for the desired web search engine. Update the
+initialization parameters accordingly in the code snippet provided
+above.
+
+# Customizing Web Search Results
+
+By default, the `maxResults` property is set to 1. You can adjust this
+value to retrieve a list of results.
+
+## Retrieving a single result or a list of strings
+
+When `maxResults` is set to 1, you retrieve the content as a single
+string by default. Example:
+
+    String response = template.requestBody("langchain4j-web-search:test", "Who won the European Cup in 2024?", String.class);
+
+When `maxResults` is greater than 1, you can retrieve a list of strings.
+Example:
+
+    List<String> responses = template.requestBody("langchain4j-web-search:test?maxResults=3", "Who won the European Cup in 2024?", List.class);
+
+## Retrieving different types of results
+
+You can retrieve different types of results.
+
+When `resultType` = SNIPPET, you will get a single String or a list of
+Strings (depending on the `maxResults` value) containing the snippets.
+
+When `resultType` = LANGCHAIN4J\_WEB\_SEARCH\_ORGANIC\_RESULT, you will
+get a single object or a list (depending on the `maxResults` value) of
+objects of type `WebSearchOrganicResult` containing the full response
+created under the hood by LangChain4j Web Search.
+
+# Advanced usage of WebSearchRequest
+
+When a WebSearchRequest is defined, the Camel LangChain4j web search
+component will default to this request, instead of creating one from the
+body and config parameters.
+
+When using a WebSearchRequest, the body and the parameters of the search
+will be ignored. Use this parameter with caution.
+
+A WebSearchRequest should be bound to the registry.
+
+Example of binding the request to the registry:
+
+    @BindToRegistry("web-search-request")
+    WebSearchRequest request = WebSearchRequest.builder()
+            .searchTerms("Who won the European Cup in 2024?")
+            .maxResults(2)
+            .build();
+
+The request will be autowired automatically if its bound name is
+`web-search-request`. Otherwise, it should be added as a configured
+parameter to the Camel route.
+
+Example of route:
+
+    from("direct:web-search")
+        .to("langchain4j-web-search:test?webSearchRequest=#searchRequestTest");
+
+## Component Configurations
+
+There are no configurations for this component
+
+## Endpoint Configurations
+
+There are no configurations for this component
diff --git a/camel-language.md b/camel-language.md
new file mode 100644
index 0000000000000000000000000000000000000000..07e71e5798a6fdadb11d302a4314fe990b1a941b
--- /dev/null
+++ b/camel-language.md
@@ -0,0 +1,111 @@
+# Language
+
+**Since Camel 2.5**
+
+**Only producer is supported**
+
+The Language component allows you to send an `Exchange` to an endpoint
+which executes a script in any of the supported languages in Camel.
+
+By having a component that executes language scripts, you gain more
+dynamic routing capabilities.
For example, by using the Routing Slip or
+[Dynamic Router](#eips:dynamicRouter-eip.adoc) EIPs you can send
+messages to `language` endpoints where the script is dynamically defined
+as well.
+
+You only have to include additional Camel components if the language of
+choice mandates it, such as when using the
+[Groovy](#languages:groovy-language.adoc) or
+[JavaScript](#languages:javascript-language.adoc) languages.
+
+# URI format
+
+    language://languageName[:script][?options]
+
+You can refer to an external resource for the script using the same
+notation as supported by the other [Language](#language-component.adoc)s
+in Camel:
+
+    language://languageName:resource:scheme:location[?options]
+
+# Examples
+
+For example, you can use the [Simple](#languages:simple-language.adoc)
+language as a [Message Translator](#eips:message-translator.adoc) EIP:
+
+    from("direct:hello")
+        .to("language:simple:Hello ${body}")
+
+In case you want to convert the message body type, you can do this as
+well. However, it is better to use [Convert Body
+To](#eips:convertBodyTo-eip.adoc):
+
+    from("direct:toString")
+        .to("language:simple:${bodyAs(String.class)}")
+
+You can also use the [Groovy](#languages:groovy-language.adoc) language,
+such as this example where the input message is multiplied by 2:
+
+    from("direct:double")
+        .to("language:groovy:${body} * 2")
+
+You can also provide the script as a header as shown below. Here we use
+the [XPath](#languages:xpath-language.adoc) language to extract the text
+from the `<foo>` tag.
+
+    Object out = producer.requestBodyAndHeader("language:xpath", "<foo>Hello World</foo>", Exchange.LANGUAGE_SCRIPT, "/foo/text()");
+    assertEquals("Hello World", out);
+
+# Loading scripts from resources
+
+You can specify a resource uri for a script to load in either the
+endpoint uri, or in the `Exchange.LANGUAGE_SCRIPT` header.
The uri must +start with one of the following schemes: `file:`, `classpath:`, or +`http:` + + from("direct:start") + // load the script from the classpath + .to("language:simple:resource:classpath:org/apache/camel/component/language/mysimplescript.txt") + .to("mock:result"); + +By default, the script is loaded once and cached. However, you can +disable the `contentCache` option and have the script loaded on each +evaluation. For example, if the file `myscript.txt` is changed on disk, +then the updated script is used: + + from("direct:start") + // the script will be loaded on each message, as we disabled cache + .to("language:simple:myscript.txt?contentCache=false") + .to("mock:result"); + +You can also refer to the script as a resource similar to how all the +other [Language](#language-component.adoc)s in Camel functions, by +prefixing with `resource:` as shown below: + + from("direct:start") + .to("language:constant:resource:classpath:org/apache/camel/component/language/hello.txt") + .to("mock:result"); + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|languageName|Sets the name of the language to use||string|
+|resourceUri|Path to the resource, or a reference to lookup a bean in the Registry to use as the resource||string|
+|allowContextMapAll|Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so imposes a potential security risk as this opens access to the full power of the CamelContext API.|false|boolean|
+|binary|Whether the script is binary content or text content. By default the script is read as text content (eg java.lang.String)|false|boolean|
+|cacheScript|Whether to cache the compiled script and reuse it. Notice that reusing the script can cause side effects from processing one Camel org.apache.camel.Exchange to the next org.apache.camel.Exchange.|false|boolean|
+|contentCache|Sets whether to use resource content cache or not|true|boolean|
+|resultType|Sets the class of the result type (type from output)||string|
+|script|Sets the script to execute||string|
+|transform|Whether or not the result of the script should be used as the message body. This option is default true.|true|boolean|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-ldap.md b/camel-ldap.md new file mode 100644 index 0000000000000000000000000000000000000000..a2b9c08db71865bd12ed76a7e0c79027ed7aa3eb --- /dev/null +++ b/camel-ldap.md @@ -0,0 +1,421 @@ +# Ldap + +**Since Camel 1.5** + +**Only producer is supported** + +The LDAP component allows you to perform searches in LDAP servers using +filters as the message payload. + +This component uses standard JNDI (`javax.naming` package) to access the +server. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-ldap + x.x.x + + + +# URI format + + ldap:ldapServerBean[?options] + +The *ldapServerBean* portion of the URI refers to a +[DirContext](https://docs.oracle.com/en/java/javase/17/docs/api/java.naming/javax/naming/directory/DirContext.html) +bean in the registry. The LDAP component only supports producer +endpoints, which means that an `ldap` URI cannot appear in the `from` at +the start of a route. + +# Result + +The result is returned to Out body as a +`List` object. + +# DirContext + +The URI, `ldap:ldapserver`, references a bean with the ID `ldapserver`. 
+The `ldapserver` bean may be defined as follows: + +Java (Quarkus) +public class LdapServerProducer { + + @Produces + @Dependent + @Named("ldapserver") + public DirContext createLdapServer() throws Exception { + Hashtable env = new Hashtable<>(); + env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory"); + env.put(Context.PROVIDER_URL, "ldap://localhost:10389"); + env.put(Context.SECURITY_AUTHENTICATION, "none"); + + return new InitialDirContext(env); + } + } + +XML (Spring) + + + +com.sun.jndi.ldap.LdapCtxFactory +ldap://localhost:10389 +none + + + + +The preceding example declares a regular Sun-based LDAP `DirContext` +that connects anonymously to a locally hosted LDAP server. + +`DirContext` objects are **not** required to support concurrency by +contract. It is therefore important to manage the directory context’s +lifecycle appropriately. In the Spring framework, `prototype` scoped +objects are instantiated each time they are looked up to ensure +concurrency and avoid sharing the same context between multiple threads. + +For Camel Quarkus applications, you can achieve similar behavior by +using the `@Dependent` annotation. When you annotate a component or bean +with `@Dependent`, a new instance of the component is created for each +injection point or usage, which effectively provides the same +concurrency guarantees as Spring’s `prototype` scope. This ensures that +each part of your application interacts with a separate and isolated +`DirContext` instance, preventing unintended thread interference. + +# Security concerns related to LDAP injection + +The camel-ldap component uses the message body to filter the search +results. Therefore, the message body should be protected from LDAP +injection. To assist with this, you can use +`org.apache.camel.component.ldap.LdapHelper` utility class that has +method(s) to escape string values to be LDAP injection safe. 
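As an illustration of what such escaping involves, the sketch below applies the RFC 4515 escaping rules for LDAP search filters to untrusted input. This is a hand-rolled example for explanation only, not the actual `LdapHelper` implementation; its class and method names are made up for this sketch:

```java
public class LdapFilterEscaper {

    // Escapes the characters that RFC 4515 treats as special in LDAP search filters,
    // so untrusted input cannot alter the structure of the filter expression.
    public static String escape(String value) {
        StringBuilder sb = new StringBuilder(value.length());
        for (char c : value.toCharArray()) {
            switch (c) {
                case '\\': sb.append("\\5c"); break;
                case '*':  sb.append("\\2a"); break;
                case '(':  sb.append("\\28"); break;
                case ')':  sb.append("\\29"); break;
                case '\0': sb.append("\\00"); break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Malicious input attempting to inject extra filter clauses
        String userInput = "hunt*)(uid=*";
        System.out.println("(uid=" + escape(userInput) + ")");
        // prints: (uid=hunt\2a\29\28uid=\2a)
    }
}
```

The escaped string can then be used safely as the message body sent to the `ldap:` endpoint.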
+
+See the following link for information about [LDAP
+Injection](https://cheatsheetseries.owasp.org/cheatsheets/LDAP_Injection_Prevention_Cheat_Sheet.html).
+
+# Samples
+
+Following on from the configuration above, the code sample below sends
+an LDAP request to search a group for a member using a filter. The
+Common Name is then extracted from the response.
+
+    ProducerTemplate template = exchange.getContext().createProducerTemplate();
+
+    Collection results = template.requestBody(
+        "ldap:ldapserver?base=ou=mygroup,ou=groups,ou=system",
+        "(member=uid=huntc,ou=users,ou=system)", Collection.class);
+
+    if (results.size() > 0) {
+        // Extract what we need from the device's profile
+
+        Iterator resultIter = results.iterator();
+        SearchResult searchResult = (SearchResult) resultIter.next();
+        Attributes attributes = searchResult.getAttributes();
+        Attribute deviceCNAttr = attributes.get("cn");
+        String deviceCN = (String) deviceCNAttr.get();
+        // ...
+    }
+
+If no specific filter is required - for example, you just need to look
+up a single entry - specify a wildcard filter expression. For example,
+if the LDAP entry has a Common Name, use a filter expression like:
+
+    (cn=*)
+
+## Binding using credentials
+
+A Camel user contributed the following sample code, which binds to the
+LDAP server using credentials.
+ + Properties props = new Properties(); + props.setProperty(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory"); + props.setProperty(Context.PROVIDER_URL, "ldap://localhost:389"); + props.setProperty(Context.URL_PKG_PREFIXES, "com.sun.jndi.url"); + props.setProperty(Context.REFERRAL, "ignore"); + props.setProperty(Context.SECURITY_AUTHENTICATION, "simple"); + props.setProperty(Context.SECURITY_PRINCIPAL, "cn=Manager"); + props.setProperty(Context.SECURITY_CREDENTIALS, "secret"); + + DefaultRegistry reg = new DefaultRegistry(); + reg.bind("myldap", new InitialLdapContext(props, null)); + + CamelContext context = new DefaultCamelContext(reg); + context.addRoutes( + new RouteBuilder() { + @Override + public void configure() throws Exception { + from("direct:start").to("ldap:myldap?base=ou=test"); + } + } + ); + context.start(); + + ProducerTemplate template = context.createProducerTemplate(); + + Endpoint endpoint = context.getEndpoint("direct:start"); + Exchange exchange = endpoint.createExchange(); + exchange.getIn().setBody("(uid=test)"); + Exchange out = template.send(endpoint, exchange); + + Collection data = out.getMessage().getBody(Collection.class); + assert data != null; + assert !data.isEmpty(); + + System.out.println(out.getMessage().getBody()); + + context.stop(); + +# Configuring SSL + +All that is required is to create a custom socket factory and reference +it in the InitialDirContext bean - see below sample. 
+ +**SSL Configuration** + +Java (Quarkus) +public class LdapServerProducer { + + @Produces + @Dependent + @Named("ldapserver") + public DirContext createLdapServer() throws Exception { + Hashtable env = new Hashtable<>(); + env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory"); + env.put(Context.PROVIDER_URL, "ldaps://" + InetAddress.getLocalHost().getCanonicalHostName() + ":10636"); + env.put(Context.SECURITY_AUTHENTICATION, "none"); + env.put("java.naming.ldap.factory.socket", CustomSSLSocketFactory.class.getName()); + + return new InitialDirContext(env); + } + } + +XML (Spring) + + + + + + + + + + + + + + + + + com.sun.jndi.ldap.LdapCtxFactory + ldaps://127.0.0.1:10636 + ssl + none + com.example.ldap.CustomSocketFactory + + + + + +**Custom Socket Factory** + +Java (Quarkus) +import java.io.FileInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.net.InetAddress; +import java.net.Socket; +import java.net.UnknownHostException; +import java.security.KeyStore; + + import javax.net.SocketFactory; + import javax.net.ssl.SSLContext; + import javax.net.ssl.SSLSocketFactory; + import javax.net.ssl.TrustManagerFactory; + + import org.eclipse.microprofile.config.ConfigProvider; + + public class CustomSSLSocketFactory extends SSLSocketFactory { + + private SSLSocketFactory delegate; + + public CustomSSLSocketFactory() throws Exception { + String trustStoreFilename = ConfigProvider.getConfig().getValue("ldap.trustStore", String.class); + String trustStorePassword = ConfigProvider.getConfig().getValue("ldap.trustStorePassword", String.class); + KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType()); + try (InputStream in = new FileInputStream(trustStoreFilename)) { + keyStore.load(in, trustStorePassword.toCharArray()); + } + TrustManagerFactory tmf = TrustManagerFactory.getInstance("SunX509"); + tmf.init(keyStore); + SSLContext ctx = SSLContext.getInstance("TLS"); + ctx.init(null, tmf.getTrustManagers(), 
null); + delegate = ctx.getSocketFactory(); + } + + public static SocketFactory getDefault() { + try { + return new CustomSSLSocketFactory(); + } catch (Exception ex) { + ex.printStackTrace(); + return null; + } + } + + @Override + public Socket createSocket(Socket s, String host, int port, boolean autoClose) throws IOException { + return delegate.createSocket(s, host, port, autoClose); + } + + @Override + public String[] getDefaultCipherSuites() { + return delegate.getDefaultCipherSuites(); + } + + @Override + public String[] getSupportedCipherSuites() { + return delegate.getSupportedCipherSuites(); + } + + @Override + public Socket createSocket(String host, int port) throws IOException, UnknownHostException { + return delegate.createSocket(host, port); + } + + @Override + public Socket createSocket(InetAddress address, int port) throws IOException { + return delegate.createSocket(address, port); + } + + @Override + public Socket createSocket(String host, int port, InetAddress localAddress, int localPort) + throws IOException, UnknownHostException { + return delegate.createSocket(host, port, localAddress, localPort); + } + + @Override + public Socket createSocket(InetAddress address, int port, InetAddress localAddress, int localPort) + throws IOException { + return delegate.createSocket(address, port, localAddress, localPort); + } + } + +The constructor uses the `ConfigProvider` to read the `ldap.trustStore` +and `ldap.trustStorePassword` configuration properties, which could be +specified in the `application.properties` file as follows: + + ldap.trustStore=/path/to/truststore.jks + ldap.trustStorePassword=secret + +XML (Spring) +package com.example.ldap; + + import java.io.IOException; + import java.net.InetAddress; + import java.net.Socket; + import java.security.KeyStore; + + import javax.net.SocketFactory; + import javax.net.ssl.SSLContext; + import javax.net.ssl.SSLSocketFactory; + import javax.net.ssl.TrustManagerFactory; + + import 
org.apache.camel.support.jsse.SSLContextParameters; + + /** + * The CustomSocketFactory. Loads the KeyStore and creates an instance of SSLSocketFactory + */ + public class CustomSocketFactory extends SSLSocketFactory { + + private static SSLSocketFactory socketFactory; + + /** + * Called by the getDefault() method. + */ + public CustomSocketFactory() { + } + + /** + * Called by Spring Boot DI to initialize an instance of SocketFactory + */ + public CustomSocketFactory(SSLContextParameters sslContextParameters) { + try { + KeyStore keyStore = sslContextParameters.getKeyManagers().getKeyStore().createKeyStore(); + TrustManagerFactory tmf = TrustManagerFactory.getInstance("SunX509"); + tmf.init(keyStore); + SSLContext ctx = SSLContext.getInstance("TLS"); + ctx.init(null, tmf.getTrustManagers(), null); + socketFactory = ctx.getSocketFactory(); + } catch (Exception ex) { + ex.printStackTrace(System.err); + } + } + + /** + * Getter for the SocketFactory + */ + public static SocketFactory getDefault() { + return new CustomSocketFactory(); + } + + @Override + public String[] getDefaultCipherSuites() { + return socketFactory.getDefaultCipherSuites(); + } + + @Override + public String[] getSupportedCipherSuites() { + return socketFactory.getSupportedCipherSuites(); + } + + @Override + public Socket createSocket(Socket socket, String string, int i, boolean bln) throws IOException { + return socketFactory.createSocket(socket, string, i, bln); + } + + @Override + public Socket createSocket(String string, int i) throws IOException { + return socketFactory.createSocket(string, i); + } + + @Override + public Socket createSocket(String string, int i, InetAddress ia, int i1) throws IOException { + return socketFactory.createSocket(string, i, ia, i1); + } + + @Override + public Socket createSocket(InetAddress ia, int i) throws IOException { + return socketFactory.createSocket(ia, i); + } + + @Override + public Socket createSocket(InetAddress ia, int i, InetAddress ia1, int i1) throws 
IOException { + return socketFactory.createSocket(ia, i, ia1, i1); + } + } + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|dirContextName|Name of either a javax.naming.directory.DirContext, or java.util.Hashtable, or Map bean to lookup in the registry. If the bean is either a Hashtable or Map then a new javax.naming.directory.DirContext instance is created for each use. If the bean is a javax.naming.directory.DirContext then the bean is used as given. 
The latter may not be possible in all situations where the javax.naming.directory.DirContext must not be shared, and in those situations it can be better to use java.util.Hashtable or Map instead.||string| +|base|The base DN for searches.|ou=system|string| +|pageSize|When specified the ldap module uses paging to retrieve all results (most LDAP Servers throw an exception when trying to retrieve more than 1000 entries in one query). To be able to use this a LdapContext (subclass of DirContext) has to be passed in as ldapServerBean (otherwise an exception is thrown)||integer| +|returnedAttributes|Comma-separated list of attributes that should be set in each entry of the result||string| +|scope|Specifies how deeply to search the tree of entries, starting at the base DN.|subtree|string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-ldif.md b/camel-ldif.md new file mode 100644 index 0000000000000000000000000000000000000000..66740abfb7c33e3934b9099558090ac074033dcb --- /dev/null +++ b/camel-ldif.md @@ -0,0 +1,135 @@ +# Ldif + +**Since Camel 2.20** + +**Only producer is supported** + +The LDIF component allows you to do updates on an LDAP server from an +LDIF body content. + +This component uses a basic URL syntax to access the server. It uses the +Apache DS LDAP library to process the LDIF. After processing the LDIF, +the response body will be a list of statuses for success/failure of each +entry. 
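To make the change types concrete, an inline LDIF body combining an `add` and a `modify` entry might look like the following (the DN and attributes are hypothetical):

```ldif
version: 1

dn: cn=Alice Example,ou=people,dc=example,dc=com
changetype: add
objectClass: inetOrgPerson
cn: Alice Example
sn: Example

dn: cn=Alice Example,ou=people,dc=example,dc=com
changetype: modify
replace: mail
mail: alice@example.com
-
```

Each entry contributes one "success" (or one error message) to the resulting status list.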
+
+The Apache LDAP API is very sensitive to LDIF syntax errors. If in
+doubt, refer to the unit tests to see an example of each change type.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-ldif</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    ldif:ldapServerBean[?options]
+
+The *ldapServerBean* portion of the URI refers to a
+[LdapConnection](https://directory.apache.org/api/gen-docs/latest/apidocs/org/apache/directory/ldap/client/api/LdapConnection.html).
+This should be constructed from a factory at the point of use to avoid
+connection timeouts. The LDIF component only supports producer
+endpoints, which means that an `ldif` URI cannot appear in the `from` at
+the start of a route.
+
+For SSL configuration, refer to the `camel-ldap` component, where there
+is an example of setting up a custom SocketFactory instance.
+
+# Body types
+
+The body can be a URL to an LDIF file or an inline LDIF file. To signify
+the difference in body types, an inline LDIF must start with:
+
+    version: 1
+
+If not, the component will try to parse the body as a URL.
+
+# Result
+
+The result is returned in the Out body as an
+`ArrayList<String>` object. This contains either "success" or
+an Exception message for each LDIF entry.
+
+# LdapConnection
+
+The URI, `ldif:ldapConnectionName`, references a bean with the ID
+`ldapConnectionName`. The ldapConnection can be configured using a
+`LdapConnectionConfig` bean. Note that the bean must have a scope of
+`prototype` to avoid the connection being shared or picking up a stale
+connection.
+
+The `LdapConnection` bean may be defined as follows in Spring XML:
+
+
+
+or in an OSGi blueprint.xml:
+
+
+
+# Samples
+
+Following on from the Spring configuration above, the code sample below
+sends an LDAP request to filter search a group for a member. The Common
+Name is then extracted from the response.
+
+    ProducerTemplate template = exchange.getContext().createProducerTemplate();
+
+    List<String> results = (List<String>) template.requestBody("ldif:ldapConnection", "LDIF goes here");
+
+    if (!results.isEmpty()) {
+        // Check for no errors
+        for (String result : results) {
+            if ("success".equals(result)) {
+                // LDIF entry success
+            } else {
+                // LDIF entry failure
+            }
+        }
+    }
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|ldapConnectionName|The name of the LdapConnection bean to pull from the registry. Note that this must be of scope prototype to avoid it being shared among threads or using a connection that has timed out.||string|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message).
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-log.md b/camel-log.md
new file mode 100644
index 0000000000000000000000000000000000000000..02169c6806fca0f08a2ebb7b5bf8fae037849a58
--- /dev/null
+++ b/camel-log.md
@@ -0,0 +1,241 @@
+# Log
+
+**Since Camel 1.1**
+
+**Only producer is supported**
+
+The Log component logs message exchanges to the underlying logging
+mechanism.
+
+Camel uses [SLF4J](http://www.slf4j.org/), which allows you to configure
+logging via, among others:
+
+- Log4j
+
+- Logback
+
+- Java Util Logging
+
+# URI format
+
+    log:loggingCategory[?options]
+
+Where **loggingCategory** is the name of the logging category to use.
+You can append query options to the URI in the following format,
+`?option=value&option=value&...`
+
+**Using a Logger instance from the Registry**
+
+If there is a single instance of `org.slf4j.Logger` in the Registry, the
+**loggingCategory** is no longer used to create the logger instance; the
+registered instance is used instead. It is also possible to reference a
+particular `Logger` instance using the `?logger=#myLogger` URI
+parameter. Finally, if there is neither a registered `Logger` nor a
+`logger` URI parameter, the logger instance is created from the
+**loggingCategory**.
+
+For example, a log endpoint typically specifies the logging level using
+the `level` option, as follows:
+
+    log:org.apache.camel.example?level=DEBUG
+
+The default logger logs every exchange (*regular logging*).
But Camel +also ships with the `Throughput` logger, which is used whenever the +`groupSize` option is specified. + +There is also a `log` directly in the DSL, but it has a different +purpose. It’s meant for lightweight and human logs. See more details at +[LogEIP](#eips:log-eip.adoc). + +# Regular logger sample + +In the route below we log the incoming orders at `DEBUG` level before +the order is processed: + + from("activemq:orders").to("log:com.mycompany.order?level=DEBUG").to("bean:processOrder"); + +Or using Spring XML to define the route: + + + + + + + +# Regular logger with formatter sample + +In the route below we log the incoming orders at `INFO` level before the +order is processed. + + from("activemq:orders"). + to("log:com.mycompany.order?showAll=true&multiline=true").to("bean:processOrder"); + +# Throughput logger with groupSize sample + +In the route below we log the throughput of the incoming orders at +`DEBUG` level grouped by 10 messages. + + from("activemq:orders"). + to("log:com.mycompany.order?level=DEBUG&groupSize=10").to("bean:processOrder"); + +# Throughput logger with groupInterval sample + +This route will result in message stats logged every 10s, with an +initial 60s delay, and stats should be displayed even if there isn’t any +message traffic. + + from("activemq:orders"). + to("log:com.mycompany.order?level=DEBUG&groupInterval=10000&groupDelay=60000&groupActiveOnly=false").to("bean:processOrder"); + +The following will be logged: + + "Received: 1000 new messages, with total 2000 so far. Last group took: 10000 millis which is: 100 messages per second. average: 100" + +# Masking sensitive information like password + +You can enable security masking for logging by setting `logMask` flag to +`true`. Note that this option also affects Log EIP. + +To enable mask in Java DSL at CamelContext level: + + camelContext.setLogMask(true); + +And in XML: + + + +You can also turn it on\|off at endpoint level. 
To enable mask in Java +DSL at endpoint level, add logMask=true option in the URI for the log +endpoint: + + from("direct:start").to("log:foo?logMask=true"); + +And in XML: + + + + + + +`org.apache.camel.support.processor.DefaultMaskingFormatter` is used for +the masking by default. If you want to use a custom masking formatter, +put it into registry with the name `CamelCustomLogMask`. Note that the +masking formatter must implement +`org.apache.camel.spi.MaskingFormatter`. + +# Full customization of the logging output + +With the options outlined in the [#Formatting](#log-component.adoc) +section, you can control much of the output of the logger. However, log +lines will always follow this structure: + + Exchange[Id:ID-machine-local-50656-1234567901234-1-2, ExchangePattern:InOut, + Properties:{CamelToEndpoint=log://org.apache.camel.component.log.TEST?showAll=true, + CamelCreatedTimestamp=Thu Mar 28 00:00:00 WET 2013}, + Headers:{breadcrumbId=ID-machine-local-50656-1234567901234-1-1}, BodyType:String, Body:Hello World, Out: null] + +This format is unsuitable in some cases, perhaps because you need to… + +- … filter the headers and properties that are printed, to strike a + balance between insight and verbosity. + +- … adjust the log message to whatever you deem most readable. + +- … tailor log messages for digestion by log mining systems, e.g. + Splunk. + +- … print specific body types differently. + +- … etc. + +Whenever you require absolute customization, you can create a class that +implements the +[`ExchangeFormatter`](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/spi/ExchangeFormatter.html) +interface. Within the `format(Exchange)` method you have access to the +full Exchange, so you can select and extract the precise information you +need, format it in a custom manner and return it. The return value will +become the final log message. 
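As a sketch of such a formatter (the class name and the selected fields are illustrative, and `camel-api` is assumed to be on the classpath):

```java
import org.apache.camel.Exchange;
import org.apache.camel.spi.ExchangeFormatter;

// Illustrative: log only the exchange ID, one header, and the body
public class CompactExchangeFormatter implements ExchangeFormatter {

    @Override
    public String format(Exchange exchange) {
        // Pick out just the pieces we care about instead of the full default dump
        Object breadcrumb = exchange.getMessage().getHeader("breadcrumbId");
        return "exchange=" + exchange.getExchangeId()
                + " breadcrumb=" + breadcrumb
                + " body=" + exchange.getMessage().getBody(String.class);
    }
}
```

The returned string is logged as-is, so the formatter fully controls the line layout.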
+
+You can have the Log component pick up your custom `ExchangeFormatter`
+in either of two ways:
+
+**Explicitly instantiating the LogComponent in your Registry:**
+
+
+**Convention over configuration:**
+
+Simply register a bean with the name `logFormatter`; the Log component
+is intelligent enough to pick it up automatically.
+
+
+The `ExchangeFormatter` gets applied to **all Log endpoints within that
+Camel Context**. If you need different ExchangeFormatters for different
+endpoints, instantiate the LogComponent as many times as needed, and use
+the relevant bean name as the endpoint prefix.
+
+When using a custom log formatter, you can specify parameters in the log
+URI, which get configured on the custom log formatter. When you do that,
+you should define the `logFormatter` bean as prototype scoped, so it is
+not shared if you have different parameters, e.g.:
+
+
+And then we can have Camel routes using the log URI with different
+options:
+
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|sourceLocationLoggerName|If enabled then the source location of where the log endpoint is used in Camel routes, would be used as logger name, instead of the given name.
However, if the source location is disabled or not possible to resolve then the existing logger name will be used.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|exchangeFormatter|Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter.||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|loggerName|Name of the logging category to use||string| +|groupActiveOnly|If true, will hide stats when no new messages have been received for a time interval, if false, show stats regardless of message traffic.|true|boolean| +|groupDelay|Set the initial delay for stats (in millis)||integer| +|groupInterval|If specified will group message stats by this time interval (in millis)||integer| +|groupSize|An integer that specifies a group size for throughput logging.||integer| +|level|Logging level to use. The default value is INFO.|INFO|string| +|logMask|If true, mask sensitive information like password or passphrase in the log.||boolean| +|marker|An optional Marker name to use.||string| +|plain|If enabled only the body will be printed out|false|boolean| +|sourceLocationLoggerName|If enabled then the source location of where the log endpoint is used in Camel routes, would be used as logger name, instead of the given name. However, if the source location is disabled or not possible to resolve then the existing logger name will be used.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|exchangeFormatter|To use a custom exchange formatter||object| +|maxChars|Limits the number of characters logged per line.|10000|integer| +|multiline|If enabled then each information is outputted on a newline.|false|boolean| +|showAll|Quick option for turning all options on. (multiline, maxChars has to be manually set if to be used)|false|boolean| +|showAllProperties|Show all of the exchange properties (both internal and custom).|false|boolean| +|showBody|Show the message body.|true|boolean| +|showBodyType|Show the body Java type.|true|boolean| +|showCachedStreams|Whether Camel should show cached stream bodies or not (org.apache.camel.StreamCache).|true|boolean| +|showCaughtException|If the exchange has a caught exception, show the exception message (no stack trace). 
A caught exception is stored as a property on the exchange (using the key org.apache.camel.Exchange#EXCEPTION\_CAUGHT) and for instance a doCatch can catch exceptions.|false|boolean| +|showException|If the exchange has an exception, show the exception message (no stacktrace)|false|boolean| +|showExchangeId|Show the unique exchange ID.|false|boolean| +|showExchangePattern|Shows the Message Exchange Pattern (or MEP for short).|false|boolean| +|showFiles|If enabled Camel will output files|false|boolean| +|showFuture|If enabled Camel will on Future objects wait for it to complete to obtain the payload to be logged.|false|boolean| +|showHeaders|Show the message headers.|false|boolean| +|showProperties|Show the exchange properties (only custom). Use showAllProperties to show both internal and custom properties.|false|boolean| +|showRouteGroup|Show route Group.|false|boolean| +|showRouteId|Show route ID.|false|boolean| +|showStackTrace|Show the stack trace, if an exchange has an exception. Only effective if one of showAll, showException or showCaughtException are enabled.|false|boolean| +|showStreams|Whether Camel should show stream bodies or not (eg such as java.io.InputStream). Beware if you enable this option then you may not be able later to access the message body as the stream have already been read by this logger. To remedy this you will have to use Stream Caching.|false|boolean| +|showVariables|Show the variables.|false|boolean| +|skipBodyLineSeparator|Whether to skip line separators when logging the message body. 
This allows logging the message body in one line; setting this option to false will preserve any line separators from the body, which will then log the body as is.|true|boolean|
+|style|Sets the output style to use.|Default|object|
diff --git a/camel-lpr.md b/camel-lpr.md
new file mode 100644
index 0000000000000000000000000000000000000000..1ae318ad94acb8040eff84ee717393cb2375fe31
--- /dev/null
+++ b/camel-lpr.md
@@ -0,0 +1,107 @@
+# Lpr
+
+**Since Camel 2.1**
+
+**Only producer is supported**
+
+The Printer component provides a way to direct payloads on a route to a
+printer. The payload has to be formatted appropriately for the component
+to print it. The goal is to be able to direct specific payloads as jobs
+to a line printer in a Camel flow.
+
+The payload can be printed on the default printer, or on a named local,
+remote, or wireless-linked printer, using the javax.print API under the
+covers.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-printer</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+Since the URI scheme for a printer has not been standardized (the
+nearest thing to a standard being the IETF print standard), and
+therefore not uniformly applied by vendors, we have chosen **"lpr"** as
+the scheme.
+
+    lpr://localhost/default[?options]
+    lpr://remotehost:port/path/to/printer[?options]
+
+# Sending Messages to a Printer
+
+## Printer Producer
+
+Sending data to the printer is very straightforward: create a producer
+endpoint to which message exchanges can be sent in a route.
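For instance, a payload can be handed to a printer endpoint directly with a `ProducerTemplate` (a sketch; the endpoint URI and payload are illustrative, and camel-core plus camel-printer are assumed to be on the classpath):

```java
import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.impl.DefaultCamelContext;

public class PrintJobSender {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.start();

        // Each body sent to the lpr endpoint becomes one print job
        ProducerTemplate template = context.createProducerTemplate();
        template.sendBody("lpr://localhost/default?mediaSize=NA_LETTER&sides=one-sided",
                "Hello from Camel");

        template.stop();
        context.stop();
    }
}
```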
+
+# Usage Samples
+
+## Example 1: Printing text-based payloads on a default printer using letter stationery and one-sided mode
+
+    RouteBuilder builder = new RouteBuilder() {
+        public void configure() {
+           from("file://inputdir/?delete=true")
+           .to("lpr://localhost/default?copies=2" +
+               "&flavor=DocFlavor.INPUT_STREAM" +
+               "&mimeType=AUTOSENSE" +
+               "&mediaSize=NA_LETTER" +
+               "&sides=one-sided");
+        }};
+
+## Example 2: Printing GIF-based payloads on a remote printer using A4 stationery and one-sided mode
+
+    RouteBuilder builder = new RouteBuilder() {
+        public void configure() {
+           from("file://inputdir/?delete=true")
+           .to("lpr://remotehost/sales/salesprinter" +
+               "?copies=2&sides=one-sided" +
+               "&mimeType=GIF&mediaSize=ISO_A4" +
+               "&flavor=DocFlavor.INPUT_STREAM");
+        }};
+
+## Example 3: Printing JPEG-based payloads on a remote printer using Japanese postcard stationery and one-sided mode
+
+    RouteBuilder builder = new RouteBuilder() {
+        public void configure() {
+           from("file://inputdir/?delete=true")
+           .to("lpr://remotehost/sales/salesprinter" +
+               "?copies=2&sides=one-sided" +
+               "&mimeType=JPEG" +
+               "&mediaSize=JAPANESE_POSTCARD" +
+               "&flavor=DocFlavor.INPUT_STREAM");
+        }};
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled.
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|hostname|Hostname of the printer||string|
+|port|Port number of the printer||integer|
+|printername|Name of the printer||string|
+|copies|Number of copies to print|1|integer|
+|docFlavor|Sets DocFlavor to use.||object|
+|flavor|Sets DocFlavor to use.||string|
+|mediaSize|Sets the stationery as defined by enumeration names in the javax.print.attribute.standard.MediaSizeName API. The default setting is to use North American Letter sized stationery. The value's case is ignored, e.g. values of iso\_a4 and ISO\_A4 may be used.|na-letter|string|
+|mediaTray|Sets MediaTray supported by the javax.print.DocFlavor API, for example upper, middle etc.||string|
+|mimeType|Sets mimeTypes supported by the javax.print.DocFlavor API||string|
+|orientation|Sets the page orientation.|portrait|string|
+|printerPrefix|Sets the prefix name of the printer; it is useful when the printer name does not start with //hostname/printer||string|
+|sendToPrinter|Setting this option to false prevents sending of the print data to the printer|true|boolean|
+|sides|Sets one-sided or two-sided printing based on the javax.print.attribute.standard.Sides API|one-sided|string|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-lucene.md b/camel-lucene.md
new file mode 100644
index 0000000000000000000000000000000000000000..8721af2544d795e9a8427c6039abdfcdfe530796
--- /dev/null
+++ b/camel-lucene.md
@@ -0,0 +1,165 @@
+# Lucene
+
+**Since Camel 2.2**
+
+**Only producer is supported**
+
+The Lucene component is based on the Apache Lucene project. Apache
+Lucene is a powerful, high-performance, full-featured text search engine
+library written entirely in Java. For more details about Lucene, please
+see the following links:
+
+- [http://lucene.apache.org/java/docs/](http://lucene.apache.org/java/docs/)
+
+- [http://lucene.apache.org/java/docs/features.html](http://lucene.apache.org/java/docs/features.html)
+
+The Lucene component in Camel facilitates the integration and
+utilization of Lucene endpoints in enterprise integration patterns and
+scenarios. The component does the following:
+
+- builds a searchable index of documents when payloads are sent to the
+  Lucene endpoint
+
+- facilitates performing indexed searches in Camel
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-lucene</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    lucene:searcherName:insert[?options]
+    lucene:searcherName:query[?options]
+
+# Sending/Receiving Messages to/from the index
+
+## Lucene Producers
+
+This component supports 2 producer endpoints:
+
+- **insert**: the insert producer builds a searchable index by analyzing
+  the body of incoming exchanges and associating it with a token
+  ("content").
+
+- **query**: the query producer performs searches on a pre-created
+  index. The query uses the searchable index to perform score \&
+  relevance based searches. Queries are sent via the incoming exchange,
+  which contains a header named *QUERY*.
The value of the header property *QUERY* is a Lucene Query. For more
details on how to create Lucene Queries, check out [Query Parser Classic
syntax](https://lucene.apache.org/core/8_4_1/queryparser/org/apache/lucene/queryparser/classic/package-summary.html#package.description).
+
+## Lucene Processor
+
+There is a processor called LuceneQueryProcessor available to perform
+queries against Lucene without the need to create a producer.
+
+# Lucene Usage Samples
+
+## Example 1: Creating a Lucene index
+
+    RouteBuilder builder = new RouteBuilder() {
+        public void configure() {
+           from("direct:start").
+               to("lucene:whitespaceQuotesIndex:insert?" +
+                   "analyzer=#whitespaceAnalyzer&indexDir=#whitespace&srcDir=#load_dir").
+               to("mock:result");
+        }
+    };
+
+## Example 2: Loading properties into the JNDI registry in the Camel Context
+
+    CamelContext context = new DefaultCamelContext(createRegistry());
+    Registry registry = context.getRegistry();
+    registry.bind("whitespace", new File("./whitespaceIndexDir"));
+    registry.bind("load_dir", new File("src/test/resources/sources"));
+    registry.bind("whitespaceAnalyzer", new WhitespaceAnalyzer());
+
+## Example 3: Performing searches using a Query Producer
+
+    RouteBuilder builder = new RouteBuilder() {
+        public void configure() {
+           from("direct:start").
+              setHeader(LuceneConstants.HEADER_QUERY, constant("Seinfeld")).
+              to("lucene:searchIndex:query?" +
+                 "analyzer=#whitespaceAnalyzer&indexDir=#whitespace&maxHits=20").
+              to("direct:next");
+
+           from("direct:next").process(new Processor() {
+              public void process(Exchange exchange) throws Exception {
+                 Hits hits = exchange.getIn().getBody(Hits.class);
+                 printResults(hits);
+              }
+
+              private void printResults(Hits hits) {
+                  LOG.debug("Number of hits: " + hits.getNumberOfHits());
+                  for (int i = 0; i < hits.getNumberOfHits(); i++) {
+                     LOG.debug("Hit " + i + " Index Location:" + hits.getHit().get(i).getHitLocation());
+                     LOG.debug("Hit " + i + " Score:" + hits.getHit().get(i).getScore());
+                     LOG.debug("Hit " + i + " Data:" + hits.getHit().get(i).getData());
+                  }
+              }
+           }).to("mock:searchResult");
+        }
+    };
+
+## Example 4: Performing searches using a Query Processor
+
+    RouteBuilder builder = new RouteBuilder() {
+        public void configure() {
+           try {
+              from("direct:start").
+                 setHeader(LuceneConstants.HEADER_QUERY, constant("Rodney Dangerfield")).
+                 process(new LuceneQueryProcessor("target/stdindexDir", analyzer, null, 20)).
+                 to("direct:next");
+           } catch (Exception e) {
+              e.printStackTrace();
+           }
+
+           from("direct:next").process(new Processor() {
+              public void process(Exchange exchange) throws Exception {
+                 Hits hits = exchange.getIn().getBody(Hits.class);
+                 printResults(hits);
+              }
+
+              private void printResults(Hits hits) {
+                  LOG.debug("Number of hits: " + hits.getNumberOfHits());
+                  for (int i = 0; i < hits.getNumberOfHits(); i++) {
+                     LOG.debug("Hit " + i + " Index Location:" + hits.getHit().get(i).getHitLocation());
+                     LOG.debug("Hit " + i + " Score:" + hits.getHit().get(i).getScore());
+                     LOG.debug("Hit " + i + " Data:" + hits.getHit().get(i).getData());
+                  }
+              }
+           }).to("mock:searchResult");
+        }
+    };
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|analyzer|An Analyzer builds TokenStreams, which analyze text. It thus represents a policy for extracting index terms from text. The value for analyzer can be any class that extends the abstract class org.apache.lucene.analysis.Analyzer.
Lucene also offers a rich set of analyzers out of the box||object| +|indexDir|A file system directory in which index files are created upon analysis of the document by the specified analyzer||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|maxHits|An integer value that limits the result set of the search operation||integer| +|srcDir|An optional directory containing files to be used to be analyzed and added to the index at producer startup.||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|config|To use a shared lucene configuration||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|host|The URL to the lucene server||string| +|operation|Operation to do such as insert or query.||object| +|analyzer|An Analyzer builds TokenStreams, which analyze text. It thus represents a policy for extracting index terms from text. The value for analyzer can be any class that extends the abstract class org.apache.lucene.analysis.Analyzer. 
Lucene also offers a rich set of analyzers out of the box||object| +|indexDir|A file system directory in which index files are created upon analysis of the document by the specified analyzer||string| +|maxHits|An integer value that limits the result set of the search operation||integer| +|srcDir|An optional directory containing files to be used to be analyzed and added to the index at producer startup.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-lumberjack.md b/camel-lumberjack.md new file mode 100644 index 0000000000000000000000000000000000000000..4bb3624964b6deb7bbcb1cf47875b8de85991bd7 --- /dev/null +++ b/camel-lumberjack.md @@ -0,0 +1,63 @@ +# Lumberjack + +**Since Camel 2.18** + +**Only consumer is supported** + +The Lumberjack component retrieves logs sent over the network using the +Lumberjack protocol, from +[Filebeat](https://www.elastic.co/fr/products/beats/filebeat), for +instance. The network communication can be secured with SSL. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-lumberjack + x.x.x + + + +# URI format + + lumberjack:host + lumberjack:host:port + +# Result + +The result body is a `Map` object. + +# Lumberjack Usage Samples + +## Example 1: Streaming the log messages + + RouteBuilder builder = new RouteBuilder() { + public void configure() { + from("lumberjack:0.0.0.0"). 
// Listen on all network interfaces using the default port + setBody(simple("${body[message]}")). // Select only the log message + to("stream:out"); // Write it into the output stream + } + }; + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|sslContextParameters|Sets the default SSL configuration to use for all the endpoints. 
You can also configure it directly at the endpoint level.||object| +|useGlobalSslContextParameters|Enable usage of global SSL context parameters.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|host|Network interface on which to listen for Lumberjack||string| +|port|Network port on which to listen for Lumberjack|5044|integer| +|sslContextParameters|SSL configuration||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| diff --git a/camel-mail.md b/camel-mail.md new file mode 100644 index 0000000000000000000000000000000000000000..0a8ae4bc51cd8b83aad916d42672f61aedd6d92d --- /dev/null +++ b/camel-mail.md @@ -0,0 +1,522 @@ +# Mail + +**Since Camel 1.0** + +**Both producer and consumer are supported** + +The Mail component provides access to Email via Spring’s Mail support +and the underlying JavaMail system. 

Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-mail</artifactId>
        <version>x.x.x</version>
    </dependency>

**POP3 or IMAP**

POP3 has some limitations, and end users are encouraged to use IMAP if
possible.

**Using mock-mail for testing**

You can use a mock framework for unit testing, which allows you to test
without the need for a real mail server. However, remember not to
include the mock-mail when you go into production or other environments
where you need to send mail to a real mail server. The mere presence of
mock-javamail.jar on the classpath means that it will kick in and avoid
sending the mails.

# URI format

Mail endpoints can have one of the following URI formats (for the
protocols SMTP, POP3, and IMAP, respectively):

    smtp://[username@]host[:port][?options]
    pop3://[username@]host[:port][?options]
    imap://[username@]host[:port][?options]

The mail component also supports secure variants of these protocols
(layered over SSL). You can enable the secure protocols by adding `s` to
the scheme:

    smtps://[username@]host[:port][?options]
    pop3s://[username@]host[:port][?options]
    imaps://[username@]host[:port][?options]

## Sample endpoints

Typically, you specify a URI with login credentials as follows:

**SMTP example**

    smtp://[username@]host[:port][?password=somepwd]

Alternatively, it is possible to specify both the username and the
password as query options:

    smtp://host[:port]?password=somepwd&username=someuser

For example:

    smtp://mycompany.mailserver:30?password=tiger&username=scott

## Component alias names

- IMAP

- IMAPs

- POP3

- POP3s

- SMTP

- SMTPs

## Default ports

Default port numbers are supported. If the port number is omitted, Camel
determines the port number to use based on the protocol.
|Protocol|Default Port Number|
|---|---|
|SMTP|25|
|SMTPS|465|
|POP3|110|
|POP3S|995|
|IMAP|143|
|IMAPS|993|
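As a quick illustration of the fallback rule, the defaults above can be captured in a few lines of plain Java. This is only a sketch — `MailDefaultPorts` and `resolvePort` are hypothetical names for illustration, not Camel API:

```java
import java.util.Map;

public class MailDefaultPorts {
    // Default ports taken from the table above
    private static final Map<String, Integer> DEFAULTS = Map.of(
            "smtp", 25, "smtps", 465,
            "pop3", 110, "pop3s", 995,
            "imap", 143, "imaps", 993);

    /** Returns the port for a mail endpoint: an explicit port wins, otherwise the protocol default. */
    public static int resolvePort(String scheme, int explicitPort) {
        if (explicitPort > 0) {
            return explicitPort; // a port given in the URI always wins
        }
        Integer def = DEFAULTS.get(scheme.toLowerCase());
        if (def == null) {
            throw new IllegalArgumentException("Unknown mail scheme: " + scheme);
        }
        return def;
    }

    public static void main(String[] args) {
        System.out.println(resolvePort("smtps", -1)); // 465: default for smtps
        System.out.println(resolvePort("smtp", 30));  // 30: explicit port wins
    }
}
```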
+ +# SSL support + +The underlying mail framework is responsible for providing SSL support. +You may either configure SSL/TLS support by completely specifying the +necessary Java Mail API configuration options, or you may provide a +configured SSLContextParameters through the component or endpoint +configuration. + +## Using the JSSE Configuration Utility + +The mail component supports SSL/TLS configuration through the [Camel +JSSE Configuration +Utility](#manual::camel-configuration-utilities.adoc). This utility +greatly decreases the amount of component-specific code you need to +write and is configurable at the endpoint and component levels. The +following examples demonstrate how to use the utility with the mail +component. + +Programmatic configuration of the endpoint + + KeyStoreParameters ksp = new KeyStoreParameters(); + ksp.setResource("/users/home/server/truststore.jks"); + ksp.setPassword("keystorePassword"); + TrustManagersParameters tmp = new TrustManagersParameters(); + tmp.setKeyStore(ksp); + SSLContextParameters scp = new SSLContextParameters(); + scp.setTrustManagers(tmp); + Registry registry = ... + registry.bind("sslContextParameters", scp); + ... + from(...) + .to("smtps://smtp.google.com?username=user@gmail.com&password=password&sslContextParameters=#sslContextParameters"); + +Spring DSL based configuration of endpoint + + ... + + + + + ... + ... + ... + +## Configuring JavaMail Directly + +Camel uses Jakarta JavaMail, which only trusts certificates issued by +well-known Certificate Authorities (the default JVM trust +configuration). If you issue your own certificates, you have to import +the CA certificates into the JVM’s Java trust/key store files, override +the default JVM trust/key store files (see `SSLNOTES.txt` in JavaMail +for details). + +# Mail Message Content + +Camel uses the message exchange’s IN body as the +[MimeMessage](http://java.sun.com/javaee/5/docs/api/javax/mail/internet/MimeMessage.html) +text content. 
The body is converted to `String.class`. + +Camel copies all of the exchange’s IN headers to the +[MimeMessage](http://java.sun.com/javaee/5/docs/api/javax/mail/internet/MimeMessage.html) +headers. + +The subject of the +[MimeMessage](http://java.sun.com/javaee/5/docs/api/javax/mail/internet/MimeMessage.html) +can be configured using a header property on the IN message. The code +below demonstrates this: + + from("direct:a").setHeader("subject", constant(subject)).to("smtp://james2@localhost"); + +The same applies for other MimeMessage headers such as recipients, so +you can use a header property as `To`: + + Map headers = new HashMap(); + headers.put("To", "davsclaus@apache.org"); + headers.put("From", "jstrachan@apache.org"); + headers.put("Subject", "Camel rocks"); + headers.put("CamelFileName", "fileOne"); + headers.put("org.apache.camel.test", "value"); + + String body = "Hello Claus.\nYes it does.\n\nRegards James."; + template.sendBodyAndHeaders("smtp://davsclaus@apache.org", body, headers); + +When using the MailProducer to send the mail to server, you should be +able to get the message id of the +[MimeMessage](http://java.sun.com/javaee/5/docs/api/javax/mail/internet/MimeMessage.html) +with the key `CamelMailMessageId` from the Camel message header. + +# Headers take precedence over pre-configured recipients + +The recipients specified in the message headers always take precedence +over recipients pre-configured in the endpoint URI. The idea is that if +you provide any recipients in the message headers, that is what you get. +The recipients pre-configured in the endpoint URI are treated as a +fallback. + +In the sample code below, the email message is sent to +`davsclaus@apache.org`, because it takes precedence over the +pre-configured recipient, `info@mycompany.com`. Any `CC` and `BCC` +settings in the endpoint URI are also ignored, and those recipients will +not receive any mail. 
The choice between headers and pre-configured
settings is all or nothing: the mail component *either* takes the
recipients exclusively from the headers or exclusively from the
pre-configured settings. It is not possible to mix and match headers and
pre-configured settings.

    Map<String, Object> headers = new HashMap<>();
    headers.put("to", "davsclaus@apache.org");

    template.sendBodyAndHeaders("smtp://admin@localhost?to=info@mycompany.com", "Hello World", headers);

# Multiple recipients for easier configuration

It is possible to set multiple recipients using a comma-separated or a
semicolon-separated list. This applies both to header settings and to
settings in an endpoint URI. For example:

    Map<String, Object> headers = new HashMap<>();
    headers.put("to", "davsclaus@apache.org ; jstrachan@apache.org ; ningjiang@apache.org");

The preceding example uses a semicolon, `;`, as the separator character.

# Setting sender name and email

You can specify recipients in the format `name <email>` to include
both the name and the email address of the recipient.

For example, you define the following headers on the message:

    Map<String, Object> headers = new HashMap<>();
    headers.put("Subject", "Camel is cool");
    headers.put("From", "James Strachan <jstrachan@apache.org>");
    headers.put("To", "Claus Ibsen <davsclaus@apache.org>");
    // placeholder addresses for the remaining recipients
    headers.put("Cc", "An Other <another@example.com>");
    headers.put("Bcc", "An Other <another@example.com>");
    headers.put("Reply-To", "An Other <another@example.com>");

# JavaMail API (formerly Sun JavaMail)

The [JavaMail API](https://java.net/projects/javamail/pages/Home) is used
under the hood for consuming and producing mails.
We encourage end users to consult these references when using either the
POP3 or the IMAP protocol. Note particularly that POP3 has a much more
limited set of features than IMAP.
+ +- [JavaMail POP3 + API](https://javamail.java.net/nonav/docs/api/com/sun/mail/pop3/package-summary.html) + +- [JavaMail IMAP + API](https://javamail.java.net/nonav/docs/api/com/sun/mail/imap/package-summary.html) + +- And generally about the [MAIL + Flags](https://javamail.java.net/nonav/docs/api/javax/mail/Flags.html) + +# Samples + +We start with a simple route that sends the messages received from a JMS +queue as emails. The email account is the `admin` account on +`mymailserver.com`. + + from("jms://queue:subscription").to("smtp://admin@mymailserver.com?password=secret"); + +In the next sample, we poll a mailbox for new emails once every minute. + + from("imap://admin@mymailserver.com?password=secret&unseen=true&delay=60000") + .to("seda://mails"); + +# Sending mail with attachment sample + +**Attachments are not supported by all Camel components** + +The *Attachments API* is based on the Java Activation Framework and is +generally only used by the Mail API. Since many of the other Camel +components do not support attachments, the attachments could potentially +be lost as they propagate along the route. The rule of thumb, therefore, +is to add attachments just before sending a message to the mail +endpoint. + +The mail component supports attachments. In the sample below, we send a +mail message containing a plain text message with a logo file +attachment. + + // create an exchange with a normal body and attachment to be produced as email + Endpoint endpoint = context.getEndpoint("smtp://james@mymailserver.com?password=secret"); + + // create the exchange with the mail message that is multipart with a file and a Hello World text/plain message. 
+ Exchange exchange = endpoint.createExchange(); + AttachmentMessage in = exchange.getIn(AttachmentMessage.class); + in.setBody("Hello World"); + DefaultAttachment att = new DefaultAttachment(new FileDataSource("src/test/data/logo.jpeg")); + att.addHeader("Content-Description", "some sample content"); + in.addAttachmentObject("logo.jpeg", att); + + // create a producer that can produce the exchange (= send the mail) + Producer producer = endpoint.createProducer(); + // start the producer + producer.start(); + // and let it go (processes the exchange by sending the email) + producer.process(exchange); + +# SSL sample + +In this sample, we want to poll our Google Mail inbox for mails. To +download mail onto a local mail client, Google Mail requires you to +enable and configure SSL. This is done by logging into your Google Mail +account and changing your settings to allow IMAP access. Google has +extensive documentation on how to do this. + + from("imaps://imap.gmail.com?username=YOUR_USERNAME@gmail.com&password=YOUR_PASSWORD" + + "&delete=false&unseen=true&delay=60000").to("log:newmail"); + +The preceding route polls the Google Mail inbox for new mails once every +minute and logs the received messages to the `newmail` logger +category. +Running the sample with `DEBUG` logging enabled, we can monitor the +progress in the logs: + + 2008-05-08 06:32:09,640 DEBUG MailConsumer - Connecting to MailStore imaps//imap.gmail.com:993 (SSL enabled), folder=INBOX + 2008-05-08 06:32:11,203 DEBUG MailConsumer - Polling mailfolder: imaps//imap.gmail.com:993 (SSL enabled), folder=INBOX + 2008-05-08 06:32:11,640 DEBUG MailConsumer - Fetching 1 messages. Total 1 messages. + 2008-05-08 06:32:12,171 DEBUG MailConsumer - Processing message: messageNumber=[332], from=[James Bond <007@mi5.co.uk>], to=YOUR_USERNAME@gmail.com], subject=[... 
    2008-05-08 06:32:12,187 INFO newmail - Exchange[MailMessage: messageNumber=[332], from=[James Bond <007@mi5.co.uk>], to=[YOUR_USERNAME@gmail.com], subject=[...

# Consuming mails with attachment sample

In this sample, we poll a mailbox and store all attachments from the
mails as files. First, we define a route to poll the mailbox. As this
sample is based on Google Mail, it uses the same route as shown in the
SSL sample:

    from("imaps://imap.gmail.com?username=YOUR_USERNAME@gmail.com&password=YOUR_PASSWORD"
        + "&delete=false&unseen=true&delay=60000").process(new MyMailProcessor());

Instead of logging the mail, we use a processor where we can process the
mail from Java code:

    public void process(Exchange exchange) throws Exception {
        // the API is a bit clunky, so we need to loop
        AttachmentMessage attachmentMessage = exchange.getMessage(AttachmentMessage.class);
        Map<String, DataHandler> attachments = attachmentMessage.getAttachments();
        if (attachments.size() > 0) {
            for (String name : attachments.keySet()) {
                DataHandler dh = attachments.get(name);
                // get the file name
                String filename = dh.getName();

                // get the content and convert it to byte[]
                byte[] data = exchange.getContext().getTypeConverter()
                        .convertTo(byte[].class, dh.getInputStream());

                // write the data to a file
                try (FileOutputStream out = new FileOutputStream(filename)) {
                    out.write(data);
                    out.flush();
                }
            }
        }
    }

As you can see, the API to handle attachments is a bit clunky, but it is
there: you can get the `javax.activation.DataHandler` and handle the
attachments using the standard API.

# How to split a mail message with attachments

In this example, we consume mail messages which may have a number of
attachments. What we want to do is to use the Splitter EIP per
individual attachment, to process the attachments separately.
For example, if the mail message has five attachments, we want the
Splitter to process five messages, each carrying a single attachment. To
do this, we need to provide a custom Expression to the Splitter that
returns a List<Message> containing the five messages, each with a single
attachment.

The code is provided out of the box from Camel 2.10 onwards in the
`camel-mail` component, in the class
`org.apache.camel.component.mail.SplitAttachmentsExpression`, whose
source code you can find
[here](https://svn.apache.org/repos/asf/camel/trunk/components/camel-mail/src/main/java/org/apache/camel/component/mail/SplitAttachmentsExpression.java).

You then need to use this Expression in the Camel route as shown below.

If you use XML DSL, then you need to declare a method call expression in
the Splitter.

You can also split the attachments as byte[] to be stored as the
message body. This is done by creating the expression with the boolean
argument set to true:

    SplitAttachmentsExpression split = new SplitAttachmentsExpression(true);

And then use the expression with the Splitter EIP.

# Using custom SearchTerm

You can configure a `searchTerm` on the `MailEndpoint`, which allows you
to filter out unwanted mails.

For example, you can filter mails so that only those containing Camel in
either Subject or Text are consumed.

Notice we use `"searchTerm.subjectOrBody"` as a parameter key to
indicate that we want to search for the word "Camel" in the mail subject
or body.
The class `org.apache.camel.component.mail.SimpleSearchTerm` has a
number of options you can configure.

Or, to get the new unseen emails going 24 hours back in time, use the
relative "now-24h" syntax. See the table below for more details.

You can have multiple searchTerm options in the endpoint URI
configuration. They are then combined using the `AND` operator, i.e.,
all conditions must match.
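A combined filter can be sketched directly as endpoint URI query parameters. The fragment below assumes `searchTerm.unseen` and `searchTerm.fromSentDate` are valid `SimpleSearchTerm` option keys (alongside the `searchTerm.subjectOrBody` key shown above), with the relative `now-24h` syntax mentioned earlier:

    imaps://imap.gmail.com?username=YOUR_USERNAME@gmail.com&password=YOUR_PASSWORD
        &delete=false&delay=60000
        &searchTerm.subjectOrBody=Camel
        &searchTerm.unseen=true
        &searchTerm.fromSentDate=now-24h

With all three conditions present, only unseen mails from the last 24 hours that mention Camel in the subject or body would be consumed, since the terms are combined with `AND`.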
For example, to get the last unseen emails going +back 24 hours which has Camel in the mail subject you can do: + + + + + + +The `SimpleSearchTerm` is designed to be easily configurable from a +POJO, so you can also configure it using a \ style in XML + + + + + + + +You can then refer to this bean, using #beanId in your Camel route as +shown: + + + + + + +In Java there is a builder class to build compound `SearchTerms` using +the `org.apache.camel.component.mail.SearchTermBuilder` class. This +allows you to build complex terms such as: + + // we just want the unseen mails that are not spam + SearchTermBuilder builder = new SearchTermBuilder(); + + builder.unseen().body(Op.not, "Spam").subject(Op.not, "Spam") + // which was sent from either foo or bar + .from("foo@somewhere.com").from(Op.or, "bar@somewhere.com"); + // ... and we could continue building the terms + + SearchTerm term = builder.build(); + +# Polling Optimization + +The parameter maxMessagePerPoll and fetchSize allow you to restrict the +number of messages that should be processed for each poll. These +parameters should help to prevent bad performance when working with +folders that contain a lot of messages. In previous versions, these +parameters have been evaluated too late, so that big mailboxes could +still cause performance problems. With Camel 3.1, these parameters are +evaluated earlier during the poll to avoid these problems. + +# Using headers with additional Java Mail Sender properties + +When sending mails, then you can provide dynamic java mail properties +for the `JavaMailSender` from the Exchange as message headers with keys +starting with `java.smtp.`. + +You can set any of the `java.smtp` properties which you can find in the +Java Mail documentation. 
+ +For example, to provide a dynamic uuid in `java.smtp.from` (SMTP MAIL +command): + + .setHeader("from", constant("reply2me@foo.com")); + .setHeader("java.smtp.from", method(UUID.class, "randomUUID")); + .to("smtp://mymailserver:1234"); + +This is only supported when **not** using a custom `JavaMailSender`. + +## Component ConfigurationsThere are no configurations for this component + +## Endpoint ConfigurationsThere are no configurations for this component diff --git a/camel-mapstruct.md b/camel-mapstruct.md new file mode 100644 index 0000000000000000000000000000000000000000..090bdebdb12a1c9afb23ebf8e047792942982d59 --- /dev/null +++ b/camel-mapstruct.md @@ -0,0 +1,69 @@ +# Mapstruct + +**Since Camel 3.19** + +**Only producer is supported** + +The camel-mapstruct component is used for converting POJOs using +[MapStruct](https://mapstruct.org/). + +# URI format + + mapstruct:className[?options] + +Where `className` is the fully qualified class name of the POJO to +convert to. + +# Setting up MapStruct + +The camel-mapstruct component must be configured with one or more +package names for classpath scanning MapStruct *Mapper* classes. This is +needed because the *Mapper* classes are to be used for converting POJOs +with MapStruct. + +For example, to set up two packages, you can do the following: + + MapstructComponent mc = context.getComponent("mapstruct", MapstructComponent.class); + mc.setMapperPackageName("com.foo.mapper,com.bar.mapper"); + +This can also be configured in `application.properties`: + + camel.component.mapstruct.mapper-package-name = com.foo.mapper,com.bar.mapper + +Camel will on startup scan these packages for classes which names ends +with *Mapper*. These classes are then introspected to discover the +mapping methods. These mapping methods are then registered into the +Camel [Type Converter](#manual::type-converter.adoc) registry. 
This +means that you can also use type converter to convert the POJOs with +MapStruct, such as: + + from("direct:foo") + .convertBodyTo(MyFooDto.class); + +Where `MyFooDto` is a POJO that MapStruct is able to convert to/from. + +Camel does not support mapper methods defined with a `void` return type +such as those used with `@MappingTarget`. + +If you define multiple mapping methods for the same from / to types, +then the implementation chosen by Camel to do its type conversion is +potentially non-deterministic. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|mapperPackageName|Package name(s) where Camel should discover Mapstruct mapping classes. Multiple package names can be separated by comma.||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
|mapStructConverter|To use a custom MapStructConverter such as adapting to a special runtime.||object|

## Endpoint Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|className|The fully qualified class name of the POJO that mapstruct should convert to (target)||string|
|mandatory|Whether there must exist a mapstruct converter to convert to the POJO.|true|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-master.md b/camel-master.md
new file mode 100644
index 0000000000000000000000000000000000000000..20e33b85e2a73359ab26e8b9fa3b8b5f88e92010
--- /dev/null
+++ b/camel-master.md
@@ -0,0 +1,141 @@
# Master

**Since Camel 2.20**

**Only consumer is supported**

The Camel-Master endpoint provides a way to ensure that only a single
consumer in a cluster consumes from a given endpoint, with automatic
failover if that JVM dies.

This can be handy if you need to consume from a legacy back end that
either does not support concurrent consumption, or that, for commercial
or stability reasons, allows only a single connection at any point in
time.

# Using the master endpoint

Just prefix any Camel endpoint with **master:someName:**, where
*someName* is a logical name used to acquire the master lock. For
example:
+ + from("master:cheese:jms:foo") + .to("activemq:wine"); + +In this example, the master component ensures that the route is only +active in one node, at any given time, in the cluster. So if there are 8 +nodes in the cluster, then the master component will elect one route to +be the leader, and only this route will be active, and hence only this +route will consume messages from `jms:foo`. In case this route is +stopped or unexpectedly terminated, then the master component will +detect this, and re-elect another node to be active, which will then +become active and start consuming messages from `jms:foo`. + +Apache ActiveMQ 5.x has such a feature out of the box called [Exclusive +Consumers](https://activemq.apache.org/exclusive-consumer.html). + +# URI format + + master:namespace:endpoint[?options] + +Where endpoint is any Camel endpoint, you want to run in master/slave +mode. + +# Example + +You can protect a clustered Camel application to only consume files from +one active node. + + // the file endpoint we want to consume from + String url = "file:target/inbox?delete=true"; + + // use the camel master component in the clustered group named myGroup + // to run a master/slave mode in the following Camel url + from("master:myGroup:" + url) + .log(name + " - Received file: ${file:name}") + .delay(delay) + .log(name + " - Done file: ${file:name}") + .to("file:target/outbox"); + +The master component leverages CamelClusterService you can configure +using + +- **Java** + + ZooKeeperClusterService service = new ZooKeeperClusterService(); + service.setId("camel-node-1"); + service.setNodes("myzk:2181"); + service.setBasePath("/camel/cluster"); + + context.addService(service) + +- **Xml (Spring/Blueprint)** + + + + + + + + + + + + ... 
+ + + + +- **Spring boot** + + camel.component.zookeeper.cluster.service.enabled = true + camel.component.zookeeper.cluster.service.id = camel-node-1 + camel.component.zookeeper.cluster.service.base-path = /camel/cluster + camel.component.zookeeper.cluster.service.nodes = myzk:2181 + +# Implementations + +Camel provides the following ClusterService implementations: + +- camel-consul + +- camel-file + +- camel-infinispan + +- camel-jgroups-raft + +- camel-jgroups + +- camel-kubernetes + +- camel-zookeeper + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|backOffDelay|When the master becomes leader then backoff is in use to repeat starting the consumer until the consumer is successfully started or max attempts reached. 
This option is the delay in millis between start attempts.||integer| +|backOffMaxAttempts|When the master becomes leader then backoff is in use to repeat starting the consumer until the consumer is successfully started or max attempts reached. This option is the maximum number of attempts to try.||integer| +|service|Inject the service to use.||object| +|serviceSelector|Inject the service selector used to lookup the CamelClusterService to use.||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|namespace|The name of the cluster namespace to use||string| +|delegateUri|The endpoint uri to use in master/slave mode||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| diff --git a/camel-metrics.md b/camel-metrics.md new file mode 100644 index 0000000000000000000000000000000000000000..b130f6e92e87401efa957e8e5e4cb2364ad2ecd2 --- /dev/null +++ b/camel-metrics.md @@ -0,0 +1,746 @@ +# Metrics + +**Since Camel 2.14** + +**Only producer is supported** + +The Metrics component allows collecting various metrics directly from +Camel routes. Supported metric types are +[counter](##MetricsComponent-counter), +[histogram](##MetricsComponent-histogram), +[meter](##MetricsComponent-meter), [timer](##MetricsComponent-timer) and +[gauge](##MetricsComponent-gauge). +[Metrics](http://metrics.dropwizard.io) provides a simple way to measure +the behaviour of applications. The configurable reporting backend +enables different integration options for collecting and visualizing +statistics. The component also provides a `MetricsRoutePolicyFactory` +which allows to expose route statistics using Dropwizard Metrics, see +bottom of page for details. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-metrics + x.x.x + + + +# URI format + + metrics:[ meter | counter | histogram | timer | gauge ]:metricname[?options] + +# Metric Registry + +Camel Metrics component uses by default a `MetricRegistry` instance with +a `Slf4jReporter` that has a 60-second reporting interval. This default +registry can be replaced with a custom one by providing a +`MetricRegistry` bean. If multiple `MetricRegistry` beans exist in the +application, the one with name `metricRegistry` is used. 
+ +For example: + +Java (Spring) +@Configuration +public static class MyConfig extends SingleRouteCamelConfiguration { + + @Bean + @Override + public RouteBuilder route() { + return new RouteBuilder() { + @Override + public void configure() throws Exception { + // define Camel routes here + } + }; + } + + @Bean(name = MetricsComponent.METRIC_REGISTRY_NAME) + public MetricRegistry getMetricRegistry() { + MetricRegistry registry = ...; + return registry; + } + } + +Java (CDI) +class MyBean extends RouteBuilder { + + @Override + public void configure() { + from("...") + // Register the 'my-meter' meter in the MetricRegistry below + .to("metrics:meter:my-meter"); + } + + @Produces + // If multiple MetricRegistry beans + // @Named(MetricsComponent.METRIC_REGISTRY_NAME) + MetricRegistry registry() { + MetricRegistry registry = new MetricRegistry(); + // ... + return registry; + } + } + +# Usage + +Each metric has type and name. Supported types are +[counter](##MetricsComponent-counter), +[histogram](##MetricsComponent-histogram), +[meter](##MetricsComponent-meter), [timer](##MetricsComponent-timer) and +[gauge](##MetricsComponent-gauge). Metric name is simple string. If a +metric type is not provided, then type meter is used by default. + +## Headers + +Metric name defined in URI can be overridden by using header with name +`CamelMetricsName`. + +For example + + from("direct:in") + .setHeader(MetricsConstants.HEADER_METRIC_NAME, constant("new.name")) + .to("metrics:counter:name.not.used") + .to("direct:out"); + +will update counter with name `new.name` instead of `name.not.used`. + +All Metrics specific headers are removed from the message once Metrics +endpoint finishes processing of exchange. While processing exchange +Metrics endpoint will catch all exceptions and write log entry using +level `warn`. + +# Metrics type counter + + metrics:counter:metricname[?options] + +## Options + + +++++ + + + + + + + + + + + + + + + + + + + +
+|Name|Default|Description|
+|---|---|---|
+|increment|-|Long value to add to the counter|
+|decrement|-|Long value to subtract from the counter|
+
+If neither `increment` nor `decrement` is defined, the value of the
+counter is incremented by one. If both `increment` and `decrement` are
+defined, only the increment operation is performed.
+
+    // update counter simple.counter by 7
+    from("direct:in")
+        .to("metrics:counter:simple.counter?increment=7")
+        .to("direct:out");
+
+    // increment counter simple.counter by 1
+    from("direct:in")
+        .to("metrics:counter:simple.counter")
+        .to("direct:out");
+
+    // decrement counter simple.counter by 3
+    from("direct:in")
+        .to("metrics:counter:simple.counter?decrement=3")
+        .to("direct:out");
+
+## Headers
+
+Message headers can be used to override the `increment` and `decrement`
+values specified in the Metrics component URI.
+
+|Name|Description|Expected type|
+|---|---|---|
+|CamelMetricsCounterIncrement|Override increment value in URI|Long|
+|CamelMetricsCounterDecrement|Override decrement value in URI|Long|
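Once headers (if present) have overridden the URI options, the resolved values are applied with the precedence described in this section: `increment` wins over `decrement`, and a bare update increments by one. A JDK-only sketch of those rules — the class and method names are hypothetical illustrations, not the component's actual code:

```java
// Hypothetical sketch of the documented counter update rules: the
// resolved increment/decrement (headers override URI options) are
// applied so that increment wins, and neither value means "add one".
public class CounterUpdateSketch {

    public static long apply(long current, Long increment, Long decrement) {
        if (increment != null) {
            return current + increment; // increment wins when both are set
        }
        if (decrement != null) {
            return current - decrement;
        }
        return current + 1; // neither set: increment by one
    }
}
```

For example, `apply(0, 7L, 3L)` yields `7` because the decrement is ignored once an increment is present.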
+ + // update counter simple.counter by 417 + from("direct:in") + .setHeader(MetricsConstants.HEADER_COUNTER_INCREMENT, constant(417L)) + .to("metrics:counter:simple.counter?increment=7") + .to("direct:out"); + + // updates counter using simple language to evaluate body.length + from("direct:in") + .setHeader(MetricsConstants.HEADER_COUNTER_INCREMENT, simple("${body.length}")) + .to("metrics:counter:body.length") + .to("mock:out"); + +# Metric type histogram + + metrics:histogram:metricname[?options] + +## Options + + +++++ + + + + + + + + + + + + + + +
+|Name|Default|Description|
+|---|---|---|
+|value|-|Value to use in histogram|
+
+If `value` is not set, nothing is added to the histogram and a warning
+is logged.
+
+    // adds value 9923 to simple.histogram
+    from("direct:in")
+        .to("metrics:histogram:simple.histogram?value=9923")
+        .to("direct:out");
+
+    // nothing is added to simple.histogram; warning is logged
+    from("direct:in")
+        .to("metrics:histogram:simple.histogram")
+        .to("direct:out");
+
+## Headers
+
+A message header can be used to override the value specified in the
+Metrics component URI.
+
+|Name|Description|Expected type|
+|---|---|---|
+|CamelMetricsHistogramValue|Override histogram value in URI|Long|
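To make concrete what a histogram tracks, here is a JDK-only sketch (the class name is hypothetical; it is not `com.codahale.metrics.Histogram`): each update records a value, and distribution statistics are derived from everything recorded so far:

```java
import java.util.LongSummaryStatistics;

// Hypothetical, JDK-only stand-in for a metrics histogram: update()
// records a long value; count/mean/max summarize the recorded values.
public class HistogramSketch {

    private final LongSummaryStatistics stats = new LongSummaryStatistics();

    public void update(long value) {
        stats.accept(value);
    }

    public long count() { return stats.getCount(); }
    public double mean() { return stats.getAverage(); }
    public long max() { return stats.getMax(); }
}
```

A real Dropwizard histogram additionally maintains a sampling reservoir so it can report quantiles such as the median or the 99th percentile.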
+ + // adds value 992 to simple.histogram + from("direct:in") + .setHeader(MetricsConstants.HEADER_HISTOGRAM_VALUE, constant(992L)) + .to("metrics:histogram:simple.histogram?value=700") + .to("direct:out") + +# Metric type meter + + metrics:meter:metricname[?options] + +## Options + + +++++ + + + + + + + + + + + + + + +
+|Name|Default|Description|
+|---|---|---|
+|mark|-|Long value to use as mark|
+
+If `mark` is not set, then `meter.mark()` is called without an argument.
+
+    // marks simple.meter without value
+    from("direct:in")
+        .to("metrics:simple.meter")
+        .to("direct:out");
+
+    // marks simple.meter with value 81
+    from("direct:in")
+        .to("metrics:meter:simple.meter?mark=81")
+        .to("direct:out");
+
+## Headers
+
+A message header can be used to override the `mark` value specified in
+the Metrics component URI.
+
+|Name|Description|Expected type|
+|---|---|---|
+|CamelMetricsMeterMark|Override mark value in URI|Long|
+ + // updates meter simple.meter with value 345 + from("direct:in") + .setHeader(MetricsConstants.HEADER_METER_MARK, constant(345L)) + .to("metrics:meter:simple.meter?mark=123") + .to("direct:out"); + +# Metrics type timer + + metrics:timer:metricname[?options] + +## Options + + +++++ + + + + + + + + + + + + + + +
+|Name|Default|Description|
+|---|---|---|
+|action|-|start or stop|
+
+If no `action` or an invalid value is provided, a warning is logged and
+the timer is not updated. If `start` is called on an already running
+timer, or `stop` is called on a timer that is not running, nothing is
+updated and a warning is logged.
+
+    // measure time taken by route "calculate"
+    from("direct:in")
+        .to("metrics:timer:simple.timer?action=start")
+        .to("direct:calculate")
+        .to("metrics:timer:simple.timer?action=stop");
+
+`TimerContext` objects are stored as Exchange properties between
+different Metrics component calls.
+
+## Headers
+
+A message header can be used to override the action value specified in
+the Metrics component URI.
+
+|Name|Description|Expected type|
+|---|---|---|
+|CamelMetricsTimerAction|Override timer action in URI|org.apache.camel.component.metrics.MetricsTimerAction|
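The start/stop lifecycle can be sketched with plain JDK code. The names below are hypothetical; in the real component the running `TimerContext` is kept as an exchange property between endpoint calls:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the documented timer semantics: "start" stores
// a running context, "stop" removes it and measures the elapsed time;
// start on a running timer or stop on a non-running timer is a no-op.
public class TimerActionSketch {

    private final Map<String, Long> running = new HashMap<>();

    /** Returns true if the timer was actually started. */
    public boolean start(String name) {
        if (running.containsKey(name)) {
            return false; // already running: warn and ignore
        }
        running.put(name, System.nanoTime());
        return true;
    }

    /** Returns elapsed nanos, or -1 if the timer was not running. */
    public long stop(String name) {
        Long started = running.remove(name);
        return started == null ? -1L : System.nanoTime() - started;
    }
}
```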
+ + // sets timer action using header + from("direct:in") + .setHeader(MetricsConstants.HEADER_TIMER_ACTION, MetricsTimerAction.start) + .to("metrics:timer:simple.timer") + .to("direct:out"); + +# Metric type gauge + + metrics:gauge:metricname[?options] + +## Options + + +++++ + + + + + + + + + + + + + + +
+|Name|Default|Description|
+|---|---|---|
+|subject|-|Any object to be observed by the gauge|
+
+If `subject` is not defined, it is simply ignored, i.e., the gauge is
+not registered.
+
+    // update gauge "simple.gauge" by a bean "mySubjectBean"
+    from("direct:in")
+        .to("metrics:gauge:simple.gauge?subject=#mySubjectBean")
+        .to("direct:out");
+
+## Headers
+
+Message headers can be used to override the `subject` value specified in
+the Metrics component URI. Note: if the `CamelMetricsName` header is
+specified, then a new gauge is registered in addition to the default one
+specified in the URI.
+
+|Name|Description|Expected type|
+|---|---|---|
+|CamelMetricsGaugeSubject|Override subject value in URI|Object|
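Unlike the other metric types, a gauge records nothing when it is updated; it observes its subject and reports the current value at read time. A JDK-only sketch of that idea (the class is a hypothetical illustration):

```java
import java.util.function.Supplier;

// Hypothetical gauge: the subject is evaluated when the metric is read,
// not when it is registered.
public class GaugeSketch<T> {

    private final Supplier<T> subject;

    public GaugeSketch(Supplier<T> subject) {
        this.subject = subject;
    }

    public T getValue() {
        return subject.get(); // observed on demand
    }
}
```

For instance, a hypothetical `new GaugeSketch<>(queue::size)` would always report the queue's current depth.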
+
+    // update gauge simple.gauge by a String literal "myUpdatedSubject"
+    from("direct:in")
+        .setHeader(MetricsConstants.HEADER_GAUGE_SUBJECT, constant("myUpdatedSubject"))
+        .to("metrics:gauge:simple.gauge?subject=#mySubjectBean")
+        .to("direct:out");
+
+# MetricsRoutePolicyFactory
+
+This factory allows adding a `RoutePolicy` for each route that exposes
+route utilization statistics using Dropwizard metrics. This factory can
+be used in Java and XML as the examples below demonstrate.
+
+Instead of using the `MetricsRoutePolicyFactory` you can define a
+`MetricsRoutePolicy` per route you want to instrument, in case you only
+want to instrument a few selected routes.
+
+From Java, you add the factory to the `CamelContext` as shown below:
+
+    context.addRoutePolicyFactory(new MetricsRoutePolicyFactory());
+
+And from XML DSL you define a `<bean>` as follows:
+
+The `MetricsRoutePolicyFactory` and `MetricsRoutePolicy` support the
+following options:
+
+|Name|Default|Description|
+|---|---|---|
+|useJmx|false|Whether to report fine-grained statistics to JMX by using the com.codahale.metrics.JmxReporter. Notice that if JMX is enabled on CamelContext then a MetricsRegistryService mbean is enlisted under the services type in the JMX tree. That mbean has a single operation to output the statistics using json. Setting useJmx to true is only needed if you want fine-grained mbeans per statistics type.|
+|jmxDomain|org.apache.camel.metrics|The JMX domain name|
+|prettyPrint|false|Whether to use pretty print when outputting statistics in json format|
+|metricsRegistry||Allow using a shared com.codahale.metrics.MetricRegistry. If none is provided, then Camel will create a shared instance used by the CamelContext.|
+|rateUnit|TimeUnit.SECONDS|The unit to use for rate in the metrics reporter or when dumping the statistics as json.|
+|durationUnit|TimeUnit.MILLISECONDS|The unit to use for duration in the metrics reporter or when dumping the statistics as json.|
+|namePattern|##name##.##routeId##.##type##|Camel 2.17: The name pattern to use. Use dot as separators, but you can change that. The values ##name##, ##routeId##, and ##type## will be replaced with the actual values: ##name## is the name of the CamelContext, ##routeId## is the name of the route, and ##type## is the value of responses.|
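The `namePattern` placeholders are plain token substitutions. A JDK-only sketch of the expansion (the helper is a hypothetical illustration, not Camel API):

```java
// Hypothetical expansion of the namePattern placeholders: ##name##
// (CamelContext name), ##routeId## and ##type## are substituted into
// the dotted pattern.
public class NamePatternSketch {

    public static String expand(String pattern, String contextName, String routeId, String type) {
        return pattern
                .replace("##name##", contextName)
                .replace("##routeId##", routeId)
                .replace("##type##", type);
    }
}
```

With the default pattern, a context named `camel-1` and a route `route1` produce the metric name `camel-1.route1.responses`.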
+
+From Java code you can get hold of the
+`com.codahale.metrics.MetricRegistry` from the
+`org.apache.camel.component.metrics.routepolicy.MetricsRegistryService`
+as shown below:
+
+    MetricsRegistryService registryService = context.hasService(MetricsRegistryService.class);
+    if (registryService != null) {
+        MetricRegistry registry = registryService.getMetricsRegistry();
+        ...
+    }
+
+# MetricsMessageHistoryFactory
+
+This factory allows using metrics to capture Message History performance
+statistics while routing messages. It works by using a metrics Timer for
+each node in all the routes. This factory can be used in Java and XML as
+the examples below demonstrate.
+
+From Java, you set the factory on the `CamelContext` as shown below:
+
+    context.setMessageHistoryFactory(new MetricsMessageHistoryFactory());
+
+And from XML DSL you define a `<bean>` as follows:
+
+The following options are supported on the factory:
+
+|Name|Default|Description|
+|---|---|---|
+|useJmx|false|Whether to report fine-grained statistics to JMX by using the com.codahale.metrics.JmxReporter. Notice that if JMX is enabled on CamelContext then a MetricsRegistryService mbean is enlisted under the services type in the JMX tree. That mbean has a single operation to output the statistics using json. Setting useJmx to true is only needed if you want fine-grained mbeans per statistics type.|
+|jmxDomain|org.apache.camel.metrics|The JMX domain name|
+|prettyPrint|false|Whether to use pretty print when outputting statistics in json format|
+|metricsRegistry||Allow using a shared com.codahale.metrics.MetricRegistry. If none is provided, then Camel will create a shared instance used by the CamelContext.|
+|rateUnit|TimeUnit.SECONDS|The unit to use for rate in the metrics reporter or when dumping the statistics as json.|
+|durationUnit|TimeUnit.MILLISECONDS|The unit to use for duration in the metrics reporter or when dumping the statistics as json.|
+|namePattern|##name##.##routeId##.##id##.##type##|The name pattern to use. Use dot as separators, but you can change that. The values ##name##, ##routeId##, ##id##, and ##type## will be replaced with the actual values: ##name## is the name of the CamelContext, ##routeId## is the name of the route, ##id## is the node id, and ##type## is the value of history.|
+ +At runtime the metrics can be accessed from Java API or JMX, which +allows to gather the data as json output. + +From Java code, you can get the service from the CamelContext as shown: + + MetricsMessageHistoryService service = context.hasService(MetricsMessageHistoryService.class); + String json = service.dumpStatisticsAsJson(); + +And the JMX API the MBean is registered in the `type=services` tree with +`name=MetricsMessageHistoryService`. + +# InstrumentedThreadPoolFactory + +This factory allows you to gather performance information about Camel +Thread Pools by injecting a `InstrumentedThreadPoolFactory` which +collects information from the inside of Camel. See more details at +Advanced configuration of CamelContext using Spring + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|metricRegistry|To use a custom configured MetricRegistry.||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|metricsType|Type of metrics||object| +|metricsName|Name of metrics||string| +|action|Action when using timer type||object| +|decrement|Decrement value when using counter type||integer| +|increment|Increment value when using counter type||integer| +|mark|Mark when using meter type||integer| +|subject|Subject value when using gauge type||object| +|value|Value value when using histogram type||integer| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-micrometer.md b/camel-micrometer.md new file mode 100644 index 0000000000000000000000000000000000000000..714e3557427d15e63207399111aa67c43551ac1a --- /dev/null +++ b/camel-micrometer.md @@ -0,0 +1,822 @@ +# Micrometer + +**Since Camel 2.22** + +**Only producer is supported** + +The Micrometer component allows collecting various metrics directly from +Camel routes. Supported metric types are +[counter](##MicrometerComponent-counter), +[summary](##MicrometerComponent-summary), and +[timer](##MicrometerComponent-timer). +[Micrometer](http://micrometer.io/) provides a simple way to measure the +behaviour of an application. 
The configurable reporting backend (via +Micrometer registries) enables different integration options for +collecting and visualizing statistics. + +The component also provides a `MicrometerRoutePolicyFactory` which +allows to expose route statistics using Micrometer as well as +`EventNotifier` implementations for counting routes and timing exchanges +from their creation to their completion. + +Maven users need to add the following dependency to their `pom.xml` for +this component: + + + org.apache.camel + camel-micrometer + x.x.x + + + +# URI format + + micrometer:[ counter | summary | timer ]:metricname[?options] + +# Options + +# Meter Registry + +By default the Camel Micrometer component creates a +`SimpleMeterRegistry` instance, suitable mainly for testing. You should +define a dedicated registry by providing a `MeterRegistry` bean. +Micrometer registries primarily determine the backend monitoring system +to be used. A `CompositeMeterRegistry` can be used to address more than +one monitoring target. + +# Default Camel Metrics + +Some Camel specific metrics are available out of the box. + + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+|Name|Type|Description|
+|---|---|---|
+|camel.message.history|timer|Sample of performance of each node in the route when message history is enabled|
+|camel.routes.added|gauge|Number of routes in total|
+|camel.routes.reloaded|gauge|Number of routes that have been reloaded|
+|camel.routes.running|gauge|Number of routes currently running|
+|camel.exchanges.inflight|gauge|Route inflight messages|
+|camel.exchanges.total|counter|Total number of processed exchanges|
+|camel.exchanges.succeeded|counter|Number of successfully completed exchanges|
+|camel.exchanges.failed|counter|Number of failed exchanges|
+|camel.exchanges.failures.handled|counter|Number of failures handled|
+|camel.exchanges.external.redeliveries|counter|Number of externally initiated redeliveries (such as from a JMS broker)|
+|camel.exchange.event.notifier|gauge + summary|Metrics for messages created, sent, completed, and failed events|
+|camel.route.policy|gauge + summary|Route performance metrics|
+|camel.route.policy.long.task|gauge + summary|Route long task metric|
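The dotted names above follow the Micrometer convention used since Camel 3.21, whereas Camel 3.20 and older used camelCase names. As a JDK-only illustration of how the two styles relate (the converter is a hypothetical helper, not part of either API):

```java
// Hypothetical converter between the legacy camelCase style and the
// Micrometer dotted convention, e.g. "CamelExchangesSucceeded" ->
// "camel.exchanges.succeeded".
public class MeterNameStyles {

    public static String toDotted(String camelCase) {
        return camelCase
                .replaceAll("([a-z0-9])([A-Z])", "$1.$2") // split word boundaries
                .toLowerCase();
    }
}
```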
+ +## Using legacy metrics naming + +In Camel 3.20 or older, then the naming of metrics is using *camelCase* +style. However, since Camel 3.21 onwards, the naming is using the +Micrometer convention style (see table above). + +To use the legacy naming, then you can use the `LEGACY` naming from the +`xxxNamingStrategy` interfaces. + +For example: + + MicrometerRoutePolicyFactory factory = new MicrometerRoutePolicyFactory(); + factory.setNamingStrategy(MicrometerRoutePolicyNamingStrategy.LEGACY); + +The naming style can be configured on: + +- `MicrometerRoutePolicyFactory` + +- `MicrometerExchangeEventNotifier` + +- `MicrometerRouteEventNotifier` + +- `MicrometerMessageHistoryFactory` + +# Usage of producers + +Each meter has type and name. Supported types are +[counter](##MicrometerComponent-counter), [distribution +summary](##MicrometerComponent-summary), and timer. If no type is +provided, then a counter is used by default. + +The meter name is a string that is evaluated as `Simple` expression. In +addition to using the `CamelMetricsName` header (see below), this allows +selecting the meter depending on exchange data. + +The optional `tags` URI parameter is a comma-separated string, +consisting of `key=value` expressions. Both `key` and `value` are +strings that are also evaluated as `Simple` expression. E.g., the URI +parameter `tags=X=${header.Y}` would assign the current value of header +`Y` to the key `X`. + +## Headers + +The meter name defined in URI can be overridden by populating a header +with name `CamelMetricsName`. The meter tags defined as URI parameters +can be augmented by populating a header with name `CamelMetricsTags`. 
+ +For example + + from("direct:in") + .setHeader(MicrometerConstants.HEADER_METRIC_NAME, constant("new.name")) + .setHeader(MicrometerConstants.HEADER_METRIC_TAGS, constant(Tags.of("dynamic-key", "dynamic-value"))) + .to("micrometer:counter:name.not.used?tags=key=value") + .to("direct:out"); + +will update a counter with name `new.name` instead of `name.not.used` +using the tag `dynamic-key` with value `dynamic-value` in addition to +the tag `key` with value `value`. + +All Metrics specific headers are removed from the message once the +Micrometer endpoint finishes processing of exchange. While processing +exchange Micrometer endpoint will catch all exceptions and write log +entry using level `warn`. + +# Counter + + micrometer:counter:name[?options] + +## Options + + +++++ + + + + + + + + + + + + + + + + + + + +
+|Name|Default|Description|
+|---|---|---|
+|increment|-|Double value to add to the counter|
+|decrement|-|Double value to subtract from the counter|
+
+If neither `increment` nor `decrement` is defined, the value of the
+counter is incremented by one. If both `increment` and `decrement` are
+defined, only the increment operation is performed.
+
+    // update counter simple.counter by 7
+    from("direct:in")
+        .to("micrometer:counter:simple.counter?increment=7")
+        .to("direct:out");
+
+    // increment counter simple.counter by 1
+    from("direct:in")
+        .to("micrometer:counter:simple.counter")
+        .to("direct:out");
+
+Both `increment` and `decrement` values are evaluated as `Simple`
+expressions with a Double result, e.g., if header `X` contains a value
+that evaluates to 3.0, the `simple.counter` counter is decremented by
+3.0:
+
+    // decrement counter simple.counter by 3
+    from("direct:in")
+        .to("micrometer:counter:simple.counter?decrement=${header.X}")
+        .to("direct:out");
+
+## Headers
+
+Like in `camel-metrics`, specific Message headers can be used to
+override `increment` and `decrement` values specified in the Micrometer
+endpoint URI.
+
+|Name|Description|Expected type|
+|---|---|---|
+|CamelMetricsCounterIncrement|Override increment value in URI|Double|
+|CamelMetricsCounterDecrement|Override decrement value in URI|Double|
+ + // update counter simple.counter by 417 + from("direct:in") + .setHeader(MicrometerConstants.HEADER_COUNTER_INCREMENT, constant(417.0D)) + .to("micrometer:counter:simple.counter?increment=7") + .to("direct:out"); + + // updates counter using simple language to evaluate body.length + from("direct:in") + .setHeader(MicrometerConstants.HEADER_COUNTER_INCREMENT, simple("${body.length}")) + .to("micrometer:counter:body.length") + .to("direct:out"); + +# Distribution Summary + + micrometer:summary:metricname[?options] + +## Options + + +++++ + + + + + + + + + + + + + + +
+|Name|Default|Description|
+|---|---|---|
+|value|-|Value to use in histogram|
+
+If `value` is not set, nothing is added to the histogram and a warning
+is logged.
+
+    // adds value 9923 to simple.histogram
+    from("direct:in")
+        .to("micrometer:summary:simple.histogram?value=9923")
+        .to("direct:out");
+
+    // nothing is added to simple.histogram; warning is logged
+    from("direct:in")
+        .to("micrometer:summary:simple.histogram")
+        .to("direct:out");
+
+`value` is evaluated as a `Simple` expression with a Double result,
+e.g., if header `X` contains a value that evaluates to 3.0, this value
+is registered with the `simple.histogram`:
+
+    from("direct:in")
+        .to("micrometer:summary:simple.histogram?value=${header.X}")
+        .to("direct:out");
+
+## Headers
+
+Like in `camel-metrics`, a specific Message header can be used to
+override the value specified in the Micrometer endpoint URI.
+
+|Name|Description|Expected type|
+|---|---|---|
+|CamelMetricsHistogramValue|Override histogram value in URI|Long|
+ + // adds value 992.0 to simple.histogram + from("direct:in") + .setHeader(MicrometerConstants.HEADER_HISTOGRAM_VALUE, constant(992.0D)) + .to("micrometer:summary:simple.histogram?value=700") + .to("direct:out") + +# Timer + + micrometer:timer:metricname[?options] + +## Options + + +++++ + + + + + + + + + + + + + + +
+|Name|Default|Description|
+|---|---|---|
+|action|-|start or stop|
+
+If no `action` or an invalid value is provided, a warning is logged and
+the timer is not updated. If `start` is called on an already running
+timer, or `stop` is called on an unknown timer, nothing is updated and
+a warning is logged.
+
+    // measure time spent in route "direct:calculate"
+    from("direct:in")
+        .to("micrometer:timer:simple.timer?action=start")
+        .to("direct:calculate")
+        .to("micrometer:timer:simple.timer?action=stop");
+
+`Timer.Sample` objects are stored as Exchange properties between
+different Metrics component calls.
+
+`action` is evaluated as a `Simple` expression returning a result of
+type `MicrometerTimerAction`.
+
+## Headers
+
+Like in `camel-metrics`, a specific Message header can be used to
+override the action value specified in the Micrometer endpoint URI.
+
+|Name|Description|Expected type|
+|---|---|---|
+|CamelMetricsTimerAction|Override timer action in URI|org.apache.camel.component.micrometer.MicrometerTimerAction|
+ + // sets timer action using header + from("direct:in") + .setHeader(MicrometerConstants.HEADER_TIMER_ACTION, MicrometerTimerAction.start) + .to("micrometer:timer:simple.timer") + .to("direct:out"); + +# Using Micrometer route policy factory + +`MicrometerRoutePolicyFactory` allows to add a RoutePolicy for each +route to expose route utilization statistics using Micrometer. This +factory can be used in Java and XML as the examples below demonstrates. + +Instead of using the `MicrometerRoutePolicyFactory` you can define a +dedicated `MicrometerRoutePolicy` per route you want to instrument, in +case you only want to instrument a few selected routes. + +From Java, you add the factory to the `CamelContext` as shown below: + + context.addRoutePolicyFactory(new MicrometerRoutePolicyFactory()); + +And from XML DSL you define a \ as follows: + + + + +The `MicrometerRoutePolicyFactory` and `MicrometerRoutePolicy` supports +the following options: + + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+|Name|Default|Description|
+|---|---|---|
+|prettyPrint|false|Whether to use pretty print when outputting statistics in json format|
+|meterRegistry||Allow using a shared MeterRegistry. If none is provided, then Camel will create a shared instance used by the CamelContext.|
+|durationUnit|TimeUnit.MILLISECONDS|The unit to use for duration when dumping the statistics as json.|
+|configuration|see below|MicrometerRoutePolicyConfiguration.class|
+ +The `MicrometerRoutePolicyConfiguration` supports the following options: + + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+|Name|Default|Description|
+|---|---|---|
+|contextEnabled|true|Whether to include counter for context level metrics|
+|routeEnabled|true|Whether to include counter for route level metrics|
+|additionalCounters|true|Activates all additional counters|
+|exchangesSucceeded|true|Activates counter for succeeded exchanges|
+|exchangesFailed|true|Activates counter for failed exchanges|
+|exchangesTotal|true|Activates counter for total count of exchanges|
+|externalRedeliveries|true|Activates counter for redeliveries of exchanges|
+|failuresHandled|true|Activates counter for handled failures|
+|longTask|false|Activates long task timer (current processing time for micrometer)|
+|timerInitiator|null|Consumer&lt;Timer.Builder&gt; for customizing the Timer initialization|
+|longTaskInitiator|null|Consumer&lt;LongTaskTimer.Builder&gt; for customizing the LongTaskTimer initialization|
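To see the out-of-the-box behaviour at a glance, here is a hypothetical plain-Java mirror of the defaults listed above (it is not the real `MicrometerRoutePolicyConfiguration` class):

```java
// Hypothetical mirror of the documented defaults: everything is on by
// default except the opt-in long task timer.
public class RoutePolicyConfigDefaults {
    public boolean contextEnabled = true;
    public boolean routeEnabled = true;
    public boolean additionalCounters = true;
    public boolean exchangesSucceeded = true;
    public boolean exchangesFailed = true;
    public boolean exchangesTotal = true;
    public boolean externalRedeliveries = true;
    public boolean failuresHandled = true;
    public boolean longTask = false;
}
```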
+ +If JMX is enabled in the CamelContext, the MBean is registered in the +`type=services` tree with `name=MicrometerRoutePolicy`. + +# Using Micrometer message history factory + +`MicrometerMessageHistoryFactory` allows to use metrics to capture +Message History performance statistics while routing messages. It works +by using a Micrometer Timer for each node in all the routes. This +factory can be used in Java and XML as the examples below demonstrates. + +From Java, you set the factory to the `CamelContext` as shown below: + + context.setMessageHistoryFactory(new MicrometerMessageHistoryFactory()); + +And from XML DSL you define a \ as follows: + + + + +The following options are supported on the factory: + + +++++ + + + + + + + + + + + + + + + + + + + + + + + + +
+|Name|Default|Description|
+|---|---|---|
+|prettyPrint|false|Whether to use pretty print when outputting statistics in json format|
+|meterRegistry||Allow using a shared MeterRegistry. If none is provided, then Camel will create a shared instance used by the CamelContext.|
+|durationUnit|TimeUnit.MILLISECONDS|The unit to use for duration when dumping the statistics as json.|
+ +At runtime the metrics can be accessed from Java API or JMX, which +allows to gather the data as json output. + +From Java code, you can get the service from the CamelContext as shown: + + MicrometerMessageHistoryService service = context.hasService(MicrometerMessageHistoryService.class); + String json = service.dumpStatisticsAsJson(); + +If JMX is enabled in the CamelContext, the MBean is registered in the +`type=services` tree with `name=MicrometerMessageHistory`. + +# Micrometer event notification + +There is a `MicrometerRouteEventNotifier` (counting added and running +routes) and a `MicrometerExchangeEventNotifier` (timing exchanges from +their creation to their completion). + +EventNotifiers can be added to the CamelContext, e.g.: + + camelContext.getManagementStrategy().addEventNotifier(new MicrometerExchangeEventNotifier()) + +At runtime the metrics can be accessed from Java API or JMX, which +allows to gather the data as json output. + +From Java code, you can get the service from the CamelContext as shown: + + MicrometerEventNotifierService service = context.hasService(MicrometerEventNotifierService.class); + String json = service.dumpStatisticsAsJson(); + +If JMX is enabled in the CamelContext, the MBean is registered in the +`type=services` tree with `name=MicrometerEventNotifier`. + +# Instrumenting Camel thread pools + +`InstrumentedThreadPoolFactory` allows you to gather performance +information about Camel Thread Pools by injecting a +`InstrumentedThreadPoolFactory` which collects information from the +inside of Camel. See more details at [Threading +Model](#manual::threading-model.adoc). + +# Exposing Micrometer statistics in JMX + +Micrometer uses `MeterRegistry` implementations to publish statistics. +While in production scenarios it is advisable to select a dedicated +backend like Prometheus or Graphite, it may be sufficient for test or +local deployments to publish statistics to JMX. 
+ +To achieve this, add the following dependency: + + <dependency> + <groupId>io.micrometer</groupId> + <artifactId>micrometer-registry-jmx</artifactId> + <version>${micrometer-version}</version> + </dependency> + +and add a `JmxMeterRegistry` instance: + +Java +@Bean(name = MicrometerConstants.METRICS\_REGISTRY\_NAME) +public MeterRegistry getMeterRegistry() { +CompositeMeterRegistry meterRegistry = new CompositeMeterRegistry(); +meterRegistry.add(...); +meterRegistry.add(new JmxMeterRegistry( +CamelJmxConfig.DEFAULT, +Clock.SYSTEM, +HierarchicalNameMapper.DEFAULT)); +return meterRegistry; +} + +CDI +@Produces +@Named(MicrometerConstants.METRICS\_REGISTRY\_NAME) +public MeterRegistry getMeterRegistry() { +CompositeMeterRegistry meterRegistry = new CompositeMeterRegistry(); +meterRegistry.add(...); +meterRegistry.add(new JmxMeterRegistry( +CamelJmxConfig.DEFAULT, +Clock.SYSTEM, +HierarchicalNameMapper.DEFAULT)); +return meterRegistry; +} + +The `HierarchicalNameMapper` strategy determines how meter name and tags +are assembled into an MBean name. + +# Using Camel Micrometer with Camel Main + +When you use Camel standalone (`camel-main`) and need to expose metrics +for Prometheus, you can use the `camel-micrometer-prometheus` JAR and +easily enable and configure this from `application.properties` as shown: + + # enable HTTP server with metrics + camel.server.enabled=true + camel.server.metricsEnabled=true + + # turn on micrometer metrics + camel.metrics.enabled=true + # include more camel details + camel.metrics.enableMessageHistory=true + # include additional out-of-the-box micrometer metrics for cpu, jvm and used file descriptors + camel.metrics.binders=processor,jvm-info,file-descriptor + +# Using Camel Micrometer with Spring Boot + +When you use `camel-micrometer-starter` with Spring Boot, Spring Boot +autoconfiguration will automatically enable metrics capture if an +`io.micrometer.core.instrument.MeterRegistry` is available.
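The `CompositeMeterRegistry` used in the registry beans above simply fans each measurement out to every registry added to it (JMX, Prometheus, and so on). Below is a minimal plain-Java sketch of that fan-out idea; the `Registry` interface and `Composite` class are illustrative stand-ins, not Micrometer's API:

```java
import java.util.ArrayList;
import java.util.List;

public class CompositeRegistrySketch {
    // Illustrative stand-in for a meter registry: only records counter increments.
    interface Registry {
        void increment(String meter, double amount);
    }

    // Fan-out composite: forwards every measurement to all child registries,
    // mirroring what a composite registry does for real meters.
    static class Composite implements Registry {
        private final List<Registry> children = new ArrayList<>();

        void add(Registry child) {
            children.add(child);
        }

        @Override
        public void increment(String meter, double amount) {
            for (Registry child : children) {
                child.increment(meter, amount);
            }
        }
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        Composite composite = new Composite();
        composite.add((m, a) -> log.append("jmx:").append(m).append('=').append(a).append(' '));
        composite.add((m, a) -> log.append("prom:").append(m).append('=').append(a).append(' '));
        composite.increment("CamelExchangesTotal", 1.0);
        System.out.println(log.toString().trim());
        // prints: jmx:CamelExchangesTotal=1.0 prom:CamelExchangesTotal=1.0
    }
}
```

A single measurement recorded against the composite therefore shows up in every configured backend at once.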
+ +For example, to capture data with Prometheus, you can add the following +dependency: + + <dependency> + <groupId>io.micrometer</groupId> + <artifactId>micrometer-registry-prometheus</artifactId> + </dependency> + +See the following table for options to specify what metrics to capture, +or to turn it off. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|metricsRegistry|To use a custom configured MetricRegistry.||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|metricsType|Type of metrics||object| +|metricsName|Name of metrics||string| +|tags|Tags of metrics||object| +|action|Action expression when using timer type||string| +|decrement|Decrement value expression when using counter type||string| +|increment|Increment value expression when using counter type||string| +|metricsDescription|Description of metrics||string| +|value|Value expression when using histogram type||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message).
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-milvus.md b/camel-milvus.md new file mode 100644 index 0000000000000000000000000000000000000000..c68aca77978ab15b5662c91612f24004765d5a90 --- /dev/null +++ b/camel-milvus.md @@ -0,0 +1,156 @@ +# Milvus + +**Since Camel 4.5** + +**Only producer is supported** + +The Milvus Component provides support for interacting with the [Milvus +Vector Database](https://milvus.io/). + +# URI format + + milvus:collection[?options] + +Where **collection** represents a named set of points (vectors with a +payload) defined in your database.
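A vector database like Milvus ranks stored points by a similarity metric against a query vector. The plain-Java sketch below uses cosine similarity purely to ground that idea; it is illustrative only, and is neither the Milvus API nor necessarily the metric your collection is configured with:

```java
import java.util.Arrays;

public class VectorSimilarity {
    // Cosine similarity between two equal-length float vectors.
    static double cosine(float[] a, float[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        float[] query = {1f, 0f};
        float[][] stored = {{1f, 0f}, {0f, 1f}, {0.7f, 0.7f}};
        // Rank stored vectors by descending similarity to the query.
        Integer[] order = {0, 1, 2};
        Arrays.sort(order, (x, y) -> Double.compare(cosine(query, stored[y]), cosine(query, stored[x])));
        System.out.println(Arrays.toString(order));
        // prints: [0, 2, 1]
    }
}
```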
+ +# Collection Samples + +In the route below, we use the milvus component to create a collection +named *test* with the given parameters: + +Java +FieldType fieldType1 = FieldType.newBuilder() +.withName("userID") +.withDescription("user identification") +.withDataType(DataType.Int64) +.withPrimaryKey(true) +.withAutoID(true) +.build(); + + FieldType fieldType2 = FieldType.newBuilder() + .withName("userFace") + .withDescription("face embedding") + .withDataType(DataType.FloatVector) + .withDimension(64) + .build(); + + FieldType fieldType3 = FieldType.newBuilder() + .withName("userAge") + .withDescription("user age") + .withDataType(DataType.Int8) + .build(); + + from("direct:in") + .setHeader(Milvus.Headers.ACTION) + .constant(MilvusAction.CREATE_COLLECTION) + .setBody() + .constant( + CreateCollectionParam.newBuilder() + .withCollectionName("test") + .withDescription("customer info") + .withShardsNum(2) + .withEnableDynamicField(false) + .addFieldType(fieldType1) + .addFieldType(fieldType2) + .addFieldType(fieldType3) + .build()) + .to("milvus:test"); + +# Points Samples + +## Upsert + +In the route below, we use the milvus component to insert points into +the collection named *test*: + +Java +private List<List<Float>> generateFloatVectors(int count) { +Random ran = new Random(); +List<List<Float>> vectors = new ArrayList<>(); +for (int n = 0; n < count; ++n) { +List<Float> vector = new ArrayList<>(); +for (int i = 0; i < 64; ++i) { +vector.add(ran.nextFloat()); +} +vectors.add(vector); +} + + return vectors; + } + + + Random ran = new Random(); + List<Integer> ages = new ArrayList<>(); + for (long i = 0L; i < 2; ++i) { + ages.add(ran.nextInt(99)); + } + List<InsertParam.Field> fields = new ArrayList<>(); + fields.add(new InsertParam.Field("userAge", ages)); + fields.add(new InsertParam.Field("userFace", generateFloatVectors(2))); + + from("direct:in") + .setHeader(Milvus.Headers.ACTION) + .constant(MilvusAction.INSERT) + .setBody() + .constant( + InsertParam.newBuilder() + .withCollectionName("test")
+ .withFields(fields) + .build()) + .to("milvus:test"); + +## Search + +In the route below, we use the milvus component to retrieve information +by query from the collection named *test*: + +Java +private List<Float> generateFloatVector() { +Random ran = new Random(); +List<Float> vector = new ArrayList<>(); +for (int i = 0; i < 64; ++i) { +vector.add(ran.nextFloat()); +} +return vector; +} + + from("direct:in") + .setHeader(Milvus.Headers.ACTION) + .constant(MilvusAction.SEARCH) + .setBody() + .constant(SearchSimpleParam.newBuilder() + .withCollectionName("test") + .withVectors(generateFloatVector()) + .withFilter("userAge>0") + .withLimit(100L) + .withOffset(0L) + .withOutputFields(Lists.newArrayList("userAge")) + .withConsistencyLevel(ConsistencyLevelEnum.STRONG) + .build()) + .to("milvus:test"); + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|configuration|The configuration.||object| +|host|The host to connect to.|localhost|string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|port|The port to connect to.|19530|integer| +|timeout|Sets a default timeout for all requests||integer| +|token|Sets the API key to use for authentication||string| +|autowiredEnabled|Whether autowiring is enabled.
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|collection|The collection name||string| +|host|The host to connect to.|localhost|string| +|port|The port to connect to.|19530|integer| +|timeout|Sets a default timeout for all requests||integer| +|token|Sets the API key to use for authentication||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-mina.md b/camel-mina.md new file mode 100644 index 0000000000000000000000000000000000000000..bb97a39ae0d20b62fa5f5b25ca49b5a5d056b0ff --- /dev/null +++ b/camel-mina.md @@ -0,0 +1,218 @@ +# Mina + +**Since Camel 2.10** + +**Both producer and consumer are supported** + +The Mina component is a transport mechanism for working with [Apache +MINA 2.x](http://mina.apache.org/). + +Favor using [Netty](#netty-component.adoc) as Netty is a much more +actively maintained and popular project than Apache Mina currently is. + +Be careful with `sync=false` on consumer endpoints. In camel-mina, all +consumer exchanges are `InOut`. This is different from the legacy +camel-mina component for MINA 1.x.
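The textline codec used in the samples below frames messages on a line delimiter: bytes are buffered until a newline arrives, and each complete line becomes one message. Here is a minimal plain-Java sketch of that framing idea (illustrative only, not MINA's actual codec implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class TextlineFraming {
    // Appends a received chunk to the buffer and extracts every complete
    // line (delimited by '\n'); leftover bytes stay buffered until more
    // input arrives, just like a streaming line-based decoder.
    static List<String> decode(StringBuilder buffer, String chunk) {
        buffer.append(chunk);
        List<String> lines = new ArrayList<>();
        int nl;
        while ((nl = buffer.indexOf("\n")) >= 0) {
            lines.add(buffer.substring(0, nl));
            buffer.delete(0, nl + 1);
        }
        return lines;
    }

    public static void main(String[] args) {
        StringBuilder buffer = new StringBuilder();
        System.out.println(decode(buffer, "Hello Wo"));   // prints: []
        System.out.println(decode(buffer, "rld\nBye\n")); // prints: [Hello World, Bye]
    }
}
```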
+ +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + <dependency> + <groupId>org.apache.camel</groupId> + <artifactId>camel-mina</artifactId> + <version>x.x.x</version> + </dependency> + +# URI format + + mina:tcp://hostname[:port][?options] + mina:udp://hostname[:port][?options] + mina:vm://hostname[:port][?options] + +You can specify a codec in the Registry using the **codec** option. If +you are using TCP and no codec is specified then the `textline` flag is +used to determine if text-line-based codec or object serialization +should be used instead. By default, the object serialization is used. + +For UDP if no codec is specified the default uses a basic `ByteBuffer` +based codec. + +The VM protocol is used as a direct forwarding mechanism in the same +JVM. + +A Mina producer has a default timeout value of 30 seconds, while it +waits for a response from the remote server. + +In normal use, `camel-mina` only supports marshalling the body +content—message headers and exchange properties are not sent. +However, the option, **transferExchange**, does allow you to transfer +the exchange itself over the wire. See options below. + +# Using a custom codec + +See the Apache MINA documentation for how to write your own codec. To +use your custom codec with +`camel-mina`, you should register your codec in the Registry; for +example, by creating a bean in the Spring XML file. Then use the `codec` +option to specify the bean ID of your codec. See +[HL7](#dataformats:hl7-dataformat.adoc) that has a custom codec. + +## Sample with sync=false + +In this sample, Camel exposes a service that listens for TCP connections +on port 6200. We use the **textline** codec. In our route, we create a +Mina consumer endpoint that listens on port 6200: + + from("mina:tcp://localhost:" + port1 + "?textline=true&sync=false").to("mock:result"); + +As the sample is part of a unit test, we test it by sending some data to +it on port 6200.
+ + MockEndpoint mock = getMockEndpoint("mock:result"); + mock.expectedBodiesReceived("Hello World"); + + template.sendBody("mina:tcp://localhost:" + port1 + "?textline=true&sync=false", "Hello World"); + + MockEndpoint.assertIsSatisfied(context); + +## Sample with sync=true + +In the next sample, we have a more common use case where we expose a TCP +service on port 6201 and also use the textline codec. However, this time +we want to return a response, so we set the `sync` option to `true` on +the consumer. + + from("mina:tcp://localhost:" + port2 + "?textline=true&sync=true").process(new Processor() { + public void process(Exchange exchange) throws Exception { + String body = exchange.getIn().getBody(String.class); + exchange.getOut().setBody("Bye " + body); + } + }); + +Then we test the sample by sending some data and retrieving the response +using the `template.requestBody()` method. As we know the response is a +`String`, we cast it to `String` and can assert that the response is, in +fact, something we have dynamically set in our processor code logic. + + String response = (String)template.requestBody("mina:tcp://localhost:" + port2 + "?textline=true&sync=true", "World"); + assertEquals("Bye World", response); + +# Sample with Spring DSL + +Spring DSL can also be used for [MINA](#mina-component.adoc). In the +sample below, we expose a TCP server on port 5555: + + <route> + <from uri="mina:tcp://localhost:5555?textline=true"/> + <to uri="bean:myTCPOrderHandler"/> + </route> + +In the route above, we expose a TCP server on port 5555 using the +textline codec. We let the Spring bean with ID, `myTCPOrderHandler`, +handle the request and return a reply. For instance, the handler bean +could be implemented as follows: + + public String handleOrder(String payload) { + ... + return "Order: OK"; + } + +# Closing Session When Complete + +When acting as a server, you sometimes want to close the session when, +for example, a client conversation is finished.
To instruct Camel to close +the session, you should add a header with the key +`CamelMinaCloseSessionWhenComplete` set to a boolean `true` value. + +For instance, the example below will close the session after it has +written the `bye` message back to the client: + + from("mina:tcp://localhost:8080?sync=true&textline=true").process(new Processor() { + public void process(Exchange exchange) throws Exception { + String body = exchange.getIn().getBody(String.class); + exchange.getOut().setBody("Bye " + body); + exchange.getOut().setHeader(MinaConstants.MINA_CLOSE_SESSION_WHEN_COMPLETE, true); + } + }); + +# Get the IoSession for message + +You can get the IoSession from the message header with this key +`MinaConstants.MINA_IOSESSION`, and also get the local host address with +the key `MinaConstants.MINA_LOCAL_ADDRESS` and remote host address with +the key `MinaConstants.MINA_REMOTE_ADDRESS`. + +# Configuring Mina filters + +Filters permit you to use some Mina Filters, such as `SslFilter`. You +can also implement some customized filters. Please note that `codec` and +`logger` are also implemented as Mina filters of the type, `IoFilter`. +Any filters you may define are appended to the end of the filter chain; +that is, after `codec` and `logger`. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|disconnect|Whether to disconnect(close) from Mina session right after use. Can be used for both consumer and producer.|false|boolean| +|minaLogger|You can enable the Apache MINA logging filter. Apache MINA uses slf4j logging at INFO level to log all input and output.|false|boolean| +|sync|Setting to set endpoint as one-way or request-response.|true|boolean| +|timeout|You can configure the timeout that specifies how long to wait for a response from a remote server. The timeout unit is in milliseconds, so 60000 is 60 seconds.|30000|integer| +|writeTimeout|Maximum amount of time it should take to send data to the MINA session. 
Default is 10000 milliseconds.|10000|integer| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|clientMode|If the clientMode is true, mina consumer will connect the address as a TCP client.|false|boolean| +|noReplyLogLevel|If sync is enabled this option dictates MinaConsumer which logging level to use when logging that there is no reply to send back.|WARN|object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|cachedAddress|Whether to create the InetAddress once and reuse.
Setting this to false allows to pickup DNS changes in the network.|true|boolean| +|lazySessionCreation|Sessions can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started.|true|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|configuration|To use the shared mina configuration.||object| +|disconnectOnNoReply|If sync is enabled then this option dictates MinaConsumer whether it should disconnect when there is no reply to send back.|true|boolean| +|maximumPoolSize|Number of worker threads in the worker pool for TCP and UDP|16|integer| +|orderedThreadPoolExecutor|Whether to use ordered thread pool, to ensure events are processed orderly on the same channel.|true|boolean| +|transferExchange|Only used for TCP. You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level.|false|boolean| +|allowDefaultCodec|The mina component installs a default codec if both, codec is null and textline is false. Setting allowDefaultCodec to false prevents the mina component from installing a default codec as the first element in the filter chain. This is useful in scenarios where another filter must be the first in the filter chain, like the SSL filter.|true|boolean| +|codec|To use a custom mina codec implementation.||object| +|decoderMaxLineLength|To set the textline protocol decoder max line length.
By default the default value of Mina itself is used which is 1024.|1024|integer| +|encoderMaxLineLength|To set the textline protocol encoder max line length. By default the default value of Mina itself is used which is Integer.MAX\_VALUE.|-1|integer| +|encoding|You can configure the encoding (a charset name) to use for the TCP textline codec and the UDP protocol. If not provided, Camel will use the JVM default Charset.||string| +|filters|You can set a list of Mina IoFilters to use.||array| +|textline|Only used for TCP. If no codec is specified, you can use this flag to indicate a text line based codec; if not specified or the value is false, then Object Serialization is assumed over TCP.|false|boolean| +|textlineDelimiter|Only used for TCP and if textline=true. Sets the text line delimiter to use. If none provided, Camel will use DEFAULT. This delimiter is used to mark the end of text.||object| +|sslContextParameters|To configure SSL security.||object| +|useGlobalSslContextParameters|Enable usage of global SSL context parameters.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|protocol|Protocol to use||string| +|host|Hostname to use. Use localhost or 0.0.0.0 for local server as consumer. For producer use the hostname or ip address of the remote server.||string| +|port|Port number||integer| +|disconnect|Whether to disconnect(close) from Mina session right after use. Can be used for both consumer and producer.|false|boolean| +|minaLogger|You can enable the Apache MINA logging filter. Apache MINA uses slf4j logging at INFO level to log all input and output.|false|boolean| +|sync|Setting to set endpoint as one-way or request-response.|true|boolean| +|timeout|You can configure the timeout that specifies how long to wait for a response from a remote server.
The timeout unit is in milliseconds, so 60000 is 60 seconds.|30000|integer| +|writeTimeout|Maximum amount of time it should take to send data to the MINA session. Default is 10000 milliseconds.|10000|integer| +|clientMode|If the clientMode is true, mina consumer will connect the address as a TCP client.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|noReplyLogLevel|If sync is enabled this option dictates MinaConsumer which logging level to use when logging that there is no reply to send back.|WARN|object| +|cachedAddress|Whether to create the InetAddress once and reuse.
Setting this to false allows to pickup DNS changes in the network.|true|boolean| +|lazySessionCreation|Sessions can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started.|true|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|disconnectOnNoReply|If sync is enabled then this option dictates MinaConsumer whether it should disconnect when there is no reply to send back.|true|boolean| +|maximumPoolSize|Number of worker threads in the worker pool for TCP and UDP|16|integer| +|orderedThreadPoolExecutor|Whether to use ordered thread pool, to ensure events are processed orderly on the same channel.|true|boolean| +|transferExchange|Only used for TCP. You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level.|false|boolean| +|allowDefaultCodec|The mina component installs a default codec if both, codec is null and textline is false. Setting allowDefaultCodec to false prevents the mina component from installing a default codec as the first element in the filter chain.
This is useful in scenarios where another filter must be the first in the filter chain, like the SSL filter.|true|boolean| +|codec|To use a custom mina codec implementation.||object| +|decoderMaxLineLength|To set the textline protocol decoder max line length. By default the default value of Mina itself is used which is 1024.|1024|integer| +|encoderMaxLineLength|To set the textline protocol encoder max line length. By default the default value of Mina itself is used which is Integer.MAX\_VALUE.|-1|integer| +|encoding|You can configure the encoding (a charset name) to use for the TCP textline codec and the UDP protocol. If not provided, Camel will use the JVM default Charset.||string| +|filters|You can set a list of Mina IoFilters to use.||array| +|textline|Only used for TCP. If no codec is specified, you can use this flag to indicate a text line based codec; if not specified or the value is false, then Object Serialization is assumed over TCP.|false|boolean| +|textlineDelimiter|Only used for TCP and if textline=true. Sets the text line delimiter to use. If none provided, Camel will use DEFAULT. This delimiter is used to mark the end of text.||object| +|sslContextParameters|To configure SSL security.||object| diff --git a/camel-minio.md b/camel-minio.md new file mode 100644 index 0000000000000000000000000000000000000000..8299ba26047f4a3a7fccef2a5d9ef18e0953255c --- /dev/null +++ b/camel-minio.md @@ -0,0 +1,412 @@ +# Minio + +**Since Camel 3.5** + +**Both producer and consumer are supported** + +The Minio component supports storing and retrieving objects from/to the +[Minio](https://min.io/) service. + +# Prerequisites + +You must have valid credentials for authorized access to the +buckets/folders. More information is available at +[Minio](https://min.io/). + +# URI Format + + minio://bucketName[?options] + +The bucket will be created if it doesn’t already exist.
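Endpoint URIs for this component follow the `minio://bucketName[?options]` shape shown above. The small plain-Java helper below shows how such a URI string is assembled from a bucket name and query options; `minioUri` is an illustrative helper, not part of camel-minio:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MinioUriExample {
    // Assembles a minio endpoint URI from a bucket name and query options
    // (illustrative helper, not part of camel-minio).
    static String minioUri(String bucket, Map<String, String> options) {
        StringBuilder sb = new StringBuilder("minio://").append(bucket);
        boolean first = true;
        for (Map.Entry<String, String> e : options.entrySet()) {
            sb.append(first ? '?' : '&').append(e.getKey()).append('=').append(e.getValue());
            first = false;
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> opts = new LinkedHashMap<>();
        opts.put("autoCreateBucket", "true");
        opts.put("operation", "listObjects");
        System.out.println(minioUri("mycamelbucket", opts));
        // prints: minio://mycamelbucket?autoCreateBucket=true&operation=listObjects
    }
}
```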
+You can append query options to the URI in the following format: +`?options=value&option2=value&...` + +For example, to read file `hello.txt` from the bucket `helloBucket`, use +the following snippet: + + from("minio://helloBucket?accessKey=yourAccessKey&secretKey=yourSecretKey&objectName=hello.txt") + .to("file:/var/downloaded"); + +You have to provide the minioClient in the Registry or your accessKey +and secretKey to access [Minio](https://min.io/). + +# Batch Consumer + +This component implements the Batch Consumer. + +This allows you, for instance, to know how many messages exist in this +batch and, for instance, let the Aggregator aggregate this number of +messages. + +## Minio Producer operations + +The Camel-Minio component provides the following operations on the +producer side: + +- copyObject + +- deleteObject + +- deleteObjects + +- listBuckets + +- deleteBucket + +- listObjects + +- getObject (this will return a MinioObject instance) + +- getObjectRange (this will return a MinioObject instance) + +- createDownloadLink (this will return a Presigned download Url) + +- createUploadLink (this will return a Presigned upload url) + +## Advanced Minio configuration + +If your Camel Application is running behind a firewall or if you need to +have more control over the `MinioClient` instance configuration, you can +create your own instance and refer to it in your Camel minio component +configuration: + + from("minio://MyBucket?minioClient=#client&delay=5000&maxMessagesPerPoll=5") + .to("mock:result"); + +## Minio Producer Operation examples + +- CopyObject: this operation copies an object from one bucket to a + different one + + + + from("direct:start").process(new Processor() { + + @Override + public void process(Exchange exchange) throws Exception { + exchange.getIn().setHeader(MinioConstants.DESTINATION_BUCKET_NAME, "camelDestinationBucket"); + exchange.getIn().setHeader(MinioConstants.OBJECT_NAME, "camelKey"); +
exchange.getIn().setHeader(MinioConstants.DESTINATION_OBJECT_NAME, "camelDestinationKey"); + } + }) + .to("minio://mycamelbucket?minioClient=#minioClient&operation=copyObject") + .to("mock:result"); + +This operation will copy the object with the name expressed in the +header camelDestinationKey to the camelDestinationBucket bucket, from +the bucket mycamelbucket. + +- DeleteObject: this operation deletes an object from a bucket + + + + from("direct:start").process(new Processor() { + + @Override + public void process(Exchange exchange) throws Exception { + exchange.getIn().setHeader(MinioConstants.OBJECT_NAME, "camelKey"); + } + }) + .to("minio://mycamelbucket?minioClient=#minioClient&operation=deleteObject") + .to("mock:result"); + +This operation will delete the object camelKey from the bucket +mycamelbucket. + +- ListBuckets: this operation lists the buckets for this account in + this region + + + + from("direct:start") + .to("minio://mycamelbucket?minioClient=#minioClient&operation=listBuckets") + .to("mock:result"); + +This operation will list the buckets for this account. + +- DeleteBucket: this operation deletes the bucket specified as URI + parameter or header + + + + from("direct:start") + .to("minio://mycamelbucket?minioClient=#minioClient&operation=deleteBucket") + .to("mock:result"); + +This operation will delete the bucket mycamelbucket. + +- ListObjects: this operation lists objects in a specific bucket + + + + from("direct:start") + .to("minio://mycamelbucket?minioClient=#minioClient&operation=listObjects") + .to("mock:result"); + +This operation will list the objects in the mycamelbucket bucket. + +- GetObject: this operation gets a single object in a specific bucket + + + + from("direct:start").process(new Processor() { + + @Override + public void process(Exchange exchange) throws Exception { + exchange.getIn().setHeader(MinioConstants.OBJECT_NAME, "camelKey"); + } + }) + .to("minio://mycamelbucket?minioClient=#minioClient&operation=getObject") +
.to("mock:result"); + +This operation will return a MinioObject instance related to the +camelKey object in `mycamelbucket` bucket. + +- GetObjectRange: this operation gets a single object range in a + specific bucket + + + + from("direct:start").process(new Processor() { + + @Override + public void process(Exchange exchange) throws Exception { + exchange.getIn().setHeader(MinioConstants.OBJECT_NAME, "camelKey"); + exchange.getIn().setHeader(MinioConstants.OFFSET, "0"); + exchange.getIn().setHeader(MinioConstants.LENGTH, "9"); + } + }) + .to("minio://mycamelbucket?minioClient=#minioClient&operation=getObjectRange") + .to("mock:result"); + +This operation will return a MinioObject instance related to the +camelKey object in `mycamelbucket` bucket, containing bytes from 0 to 9. + +- createDownloadLink: this operation will return a presigned url + through which a file can be downloaded using GET method + + + + from("direct:start").process(new Processor() { + + @Override + public void process(Exchange exchange) throws Exception { + exchange.getIn().setHeader(MinioConstants.OBJECT_NAME, "camelKey"); + exchange.getIn().setHeader(MinioConstants.PRESIGNED_URL_EXPIRATION_TIME, 60 * 60); + } + }) + .to("minio://mycamelbucket?minioClient=#minioClient&operation=createDownloadLink") + .to("mock:result"); + +- createUploadLink: this operation will return a presigned url through + which a file can be uploaded using PUT method + + + + from("direct:start").process(new Processor() { + + @Override + public void process(Exchange exchange) throws Exception { + exchange.getIn().setHeader(MinioConstants.OBJECT_NAME, "camelKey"); + exchange.getIn().setHeader(MinioConstants.PRESIGNED_URL_EXPIRATION_TIME, 60 * 60); + } + }) + .to("minio://mycamelbucket?minioClient=#minioClient&operation=createUploadLink") + .to("mock:result"); + +createDownloadLink and createUploadLink have a default expiry of 3600s which +can be overridden by setting the header +MinioConstants.PRESIGNED\_URL\_EXPIRATION\_TIME
(value in seconds)

# Bucket Auto-creation

The option `autoCreateBucket` controls whether a Minio bucket is created
automatically in case it doesn't exist. The default for this option is
`true`. If set to `false`, any operation on a non-existent bucket in
Minio won't succeed, and an error will be returned.

# Automatic detection of a Minio client in registry

The component is capable of detecting the presence of a Minio client bean in
the registry. If it's the only instance of that type, it will be used as
the client, and you won't have to define it as a URI parameter, as in the
example above. This may be really useful for smarter configuration of
the endpoint.

# Moving objects between buckets

Some users like to consume objects from one bucket and move the content to a
different one without using the `copyObject` feature of this component.
If this is the case for you, remember to remove the `bucketName` header
from the incoming exchange of the consumer. Otherwise, the file will
always be overwritten in the same original bucket.

# MoveAfterRead consumer option

In addition to `deleteAfterRead`, another option has been added:
`moveAfterRead`. With this option enabled, the consumed object will be
moved to a target destination bucket instead of being only deleted.
This requires specifying the `destinationBucketName` option. For example:

    from("minio://mycamelbucket?minioClient=#minioClient&moveAfterRead=true&destinationBucketName=myothercamelbucket")
      .to("mock:result");

In this case, the consumed objects will be moved to the `myothercamelbucket`
bucket and deleted from the original one (because `deleteAfterRead` is
`true` by default).

# Using a POJO as body

Sometimes building a Minio request can be complex because of the multiple
options. We introduce the possibility to use a POJO as the body.
In Minio, there are multiple operations you can submit; as an example, for a
ListObjects request, you can do something like:

    from("direct:minio")
        .setBody(constant(ListObjectsArgs.builder()
            .bucket(bucketName)
            .recursive(true)
            .build()))
        .to("minio://test?minioClient=#minioClient&operation=listObjects&pojoRequest=true");

In this way, you'll pass the request directly without the need of
passing headers and options specifically related to this operation.

# Dependencies

Maven users will need to add the following dependency to their pom.xml.

**pom.xml**

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-minio</artifactId>
        <version>${camel-version}</version>
    </dependency>

where `${camel-version}` must be replaced by the actual version of
Camel.

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|autoCreateBucket|Setting the autocreation of the bucket if the bucket name doesn't exist.|true|boolean|
|configuration|The component configuration||object|
|endpoint|Endpoint can be a URL, domain name, IPv4 address or IPv6 address.||string|
|minioClient|Reference to a Minio Client object in the registry.||object|
|objectLock|Set when creating a new bucket.|false|boolean|
|policy|The policy for this queue to set in the method.||string|
|proxyPort|TCP/IP port number. 80 and 443 are used as defaults for HTTP and HTTPS.||integer|
|region|The region in which the Minio client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1). You'll need to use the name Region.EU\_WEST\_1.id()||string|
|secure|Flag to indicate whether to use a secure connection to the minio service or not.|false|boolean|
|autoCloseBody|If this option is true and includeBody is true, then the MinioObject.close() method will be called on exchange completion. This option is strongly related to the includeBody option. In case of setting includeBody to true and autocloseBody to false, it will be up to the caller to close the MinioObject stream.
Setting autocloseBody to true, will close the MinioObject stream automatically.|true|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|bypassGovernanceMode|Set this flag if you want to bypassGovernanceMode when deleting a particular object.|false|boolean| +|deleteAfterRead|Delete objects from Minio after they have been retrieved. The delete is only performed if the Exchange is committed. If a rollback occurs, the object is not deleted. If this option is false, then the same objects will be retrieve over and over again on the polls. Therefore you need to use the Idempotent Consumer EIP in the route to filter out duplicates. You can filter using the MinioConstants#BUCKET\_NAME and MinioConstants#OBJECT\_NAME headers, or only the MinioConstants#OBJECT\_NAME header.|true|boolean| +|delimiter|The delimiter which is used in the ListObjectsRequest to only consume objects we are interested in.||string| +|destinationBucketName|Destination bucket name.||string| +|destinationObjectName|Destination object name.||string| +|includeBody|If it is true, the exchange body will be set to a stream to the contents of the file. If false, the headers will be set with the Minio object metadata, but the body will be null. 
This option is strongly related to autocloseBody option. In case of setting includeBody to true and autocloseBody to false, it will be up to the caller to close the MinioObject stream. Setting autocloseBody to true, will close the MinioObject stream automatically.|true|boolean| +|includeFolders|The flag which is used in the ListObjectsRequest to set include folders.|false|boolean| +|includeUserMetadata|The flag which is used in the ListObjectsRequest to get objects with user meta data.|false|boolean| +|includeVersions|The flag which is used in the ListObjectsRequest to get objects with versioning.|false|boolean| +|length|Number of bytes of object data from offset.||integer| +|matchETag|Set match ETag parameter for get object(s).||string| +|maxConnections|Set the maxConnections parameter in the minio client configuration|60|integer| +|maxMessagesPerPoll|Gets the maximum number of messages as a limit to poll at each polling. Gets the maximum number of messages as a limit to poll at each polling. The default value is 10. Use 0 or a negative number to set it as unlimited.|10|integer| +|modifiedSince|Set modified since parameter for get object(s).||object| +|moveAfterRead|Move objects from bucket to a different bucket after they have been retrieved. To accomplish the operation the destinationBucket option must be set. The copy bucket operation is only performed if the Exchange is committed. 
If a rollback occurs, the object is not moved.|false|boolean| +|notMatchETag|Set not match ETag parameter for get object(s).||string| +|objectName|To get the object from the bucket with the given object name.||string| +|offset|Start byte position of object data.||integer| +|prefix|Object name starts with prefix.||string| +|recursive|List recursively than directory structure emulation.|false|boolean| +|startAfter|list objects in bucket after this object name.||string| +|unModifiedSince|Set un modified since parameter for get object(s).||object| +|useVersion1|when true, version 1 of REST API is used.|false|boolean| +|versionId|Set specific version\_ID of a object when deleting the object.||string| +|deleteAfterWrite|Delete file object after the Minio file has been uploaded.|false|boolean| +|keyName|Setting the key name for an element in the bucket through endpoint parameter.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|operation|The operation to do in case the user don't want to do only an upload.||object| +|pojoRequest|If we want to use a POJO request as body or not.|false|boolean| +|storageClass|The storage class to set in the request.||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|customHttpClient|Set custom HTTP client for authenticated access.||object| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|accessKey|Amazon AWS Secret Access Key or Minio Access Key. If not set camel will connect to service for anonymous access.||string| +|secretKey|Amazon AWS Access Key Id or Minio Secret Key. If not set camel will connect to service for anonymous access.||string| +|serverSideEncryption|Server-side encryption.||object| +|serverSideEncryptionCustomerKey|Server-side encryption for source object while copy/move objects.||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bucketName|Bucket name||string| +|autoCreateBucket|Setting the autocreation of the bucket if bucket name not exist.|true|boolean| +|endpoint|Endpoint can be an URL, domain name, IPv4 address or IPv6 address.||string| +|minioClient|Reference to a Minio Client object in the registry.||object| +|objectLock|Set when creating new bucket.|false|boolean| +|policy|The policy for this queue to set in the method.||string| +|proxyPort|TCP/IP port number. 80 and 443 are used as defaults for HTTP and HTTPS.||integer| +|region|The region in which Minio client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1). 
You'll need to use the name Region.EU\_WEST\_1.id()||string| +|secure|Flag to indicate to use secure connection to minio service or not.|false|boolean| +|autoCloseBody|If this option is true and includeBody is true, then the MinioObject.close() method will be called on exchange completion. This option is strongly related to includeBody option. In case of setting includeBody to true and autocloseBody to false, it will be up to the caller to close the MinioObject stream. Setting autocloseBody to true, will close the MinioObject stream automatically.|true|boolean| +|bypassGovernanceMode|Set this flag if you want to bypassGovernanceMode when deleting a particular object.|false|boolean| +|deleteAfterRead|Delete objects from Minio after they have been retrieved. The delete is only performed if the Exchange is committed. If a rollback occurs, the object is not deleted. If this option is false, then the same objects will be retrieve over and over again on the polls. Therefore you need to use the Idempotent Consumer EIP in the route to filter out duplicates. You can filter using the MinioConstants#BUCKET\_NAME and MinioConstants#OBJECT\_NAME headers, or only the MinioConstants#OBJECT\_NAME header.|true|boolean| +|delimiter|The delimiter which is used in the ListObjectsRequest to only consume objects we are interested in.||string| +|destinationBucketName|Destination bucket name.||string| +|destinationObjectName|Destination object name.||string| +|includeBody|If it is true, the exchange body will be set to a stream to the contents of the file. If false, the headers will be set with the Minio object metadata, but the body will be null. This option is strongly related to autocloseBody option. In case of setting includeBody to true and autocloseBody to false, it will be up to the caller to close the MinioObject stream. 
Setting autocloseBody to true, will close the MinioObject stream automatically.|true|boolean| +|includeFolders|The flag which is used in the ListObjectsRequest to set include folders.|false|boolean| +|includeUserMetadata|The flag which is used in the ListObjectsRequest to get objects with user meta data.|false|boolean| +|includeVersions|The flag which is used in the ListObjectsRequest to get objects with versioning.|false|boolean| +|length|Number of bytes of object data from offset.||integer| +|matchETag|Set match ETag parameter for get object(s).||string| +|maxConnections|Set the maxConnections parameter in the minio client configuration|60|integer| +|maxMessagesPerPoll|Gets the maximum number of messages as a limit to poll at each polling. Gets the maximum number of messages as a limit to poll at each polling. The default value is 10. Use 0 or a negative number to set it as unlimited.|10|integer| +|modifiedSince|Set modified since parameter for get object(s).||object| +|moveAfterRead|Move objects from bucket to a different bucket after they have been retrieved. To accomplish the operation the destinationBucket option must be set. The copy bucket operation is only performed if the Exchange is committed. 
If a rollback occurs, the object is not moved.|false|boolean| +|notMatchETag|Set not match ETag parameter for get object(s).||string| +|objectName|To get the object from the bucket with the given object name.||string| +|offset|Start byte position of object data.||integer| +|prefix|Object name starts with prefix.||string| +|recursive|List recursively than directory structure emulation.|false|boolean| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|startAfter|list objects in bucket after this object name.||string| +|unModifiedSince|Set un modified since parameter for get object(s).||object| +|useVersion1|when true, version 1 of REST API is used.|false|boolean| +|versionId|Set specific version\_ID of a object when deleting the object.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|deleteAfterWrite|Delete file object after the Minio file has been uploaded.|false|boolean| +|keyName|Setting the key name for an element in the bucket through endpoint parameter.||string| +|operation|The operation to do in case the user don't want to do only an upload.||object| +|pojoRequest|If we want to use a POJO request as body or not.|false|boolean| +|storageClass|The storage class to set in the request.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|customHttpClient|Set custom HTTP client for authenticated access.||object| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. 
The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| +|accessKey|Amazon AWS Secret Access Key or Minio Access Key. If not set camel will connect to service for anonymous access.||string| +|secretKey|Amazon AWS Access Key Id or Minio Secret Key. 
If not set, Camel will connect to the service for anonymous access.||string|
|serverSideEncryption|Server-side encryption.||object|
|serverSideEncryptionCustomerKey|Server-side encryption for source object while copy/move objects.||object|
diff --git a/camel-mllp.md b/camel-mllp.md
new file mode 100644
index 0000000000000000000000000000000000000000..8a4f38c5be4f34cfdd1482c9c1acecc262ee4daa
--- /dev/null
+++ b/camel-mllp.md
@@ -0,0 +1,271 @@
# Mllp

**Since Camel 2.17**

**Both producer and consumer are supported**

The MLLP component is specifically designed to handle the nuances of the
MLLP protocol and provide the functionality required by Healthcare
providers to communicate with other systems using the MLLP protocol.

The MLLP component provides a simple configuration URI, automated HL7
acknowledgment generation, and automatic acknowledgment interrogation.

The MLLP protocol does not typically use a large number of concurrent
TCP connections - a single active TCP connection is the normal case.
Therefore, the MLLP component uses a simple thread-per-connection model
based on standard Java Sockets. This keeps the implementation simple and
limits the component's dependencies to Camel itself.

The component supports the following:

- A Camel consumer using a TCP Server

- A Camel producer using a TCP Client

The MLLP component uses `byte[]` payloads, and relies on Camel type
conversion to convert `byte[]` to other types.

Maven users will need to add the following dependency to their pom.xml
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-mllp</artifactId>
        <version>x.x.x</version>
    </dependency>

# MLLP Consumer

The MLLP Consumer supports receiving MLLP-framed messages and sending
HL7 Acknowledgements. The MLLP Consumer can automatically generate the
HL7 Acknowledgement (HL7 Application Acknowledgements only - AA, AE and
AR), or the acknowledgement can be specified using the
`CamelMllpAcknowledgement` exchange property.
Additionally, the type of acknowledgement that will be generated can be
controlled by setting the `CamelMllpAcknowledgementType` exchange
property. The MLLP Consumer can read messages without sending any HL7
Acknowledgement if the automatic acknowledgement is disabled and the
exchange pattern is `InOnly`.

## Exchange Properties

The type of acknowledgment the MLLP Consumer generates and the state of the
TCP Socket can be controlled by these properties on the Camel exchange:


|Key|Type|Description|
|---|---|---|
|CamelMllpAcknowledgement|byte[]|If present, this property will be sent to the client as the MLLP Acknowledgement|
|CamelMllpAcknowledgementString|String|If present and CamelMllpAcknowledgement is not present, this property will be sent to the client as the MLLP Acknowledgement|
|CamelMllpAcknowledgementMsaText|String|If neither CamelMllpAcknowledgement nor CamelMllpAcknowledgementString is present and autoAck is true, this property can be used to specify the contents of MSA-3 in the generated HL7 acknowledgement|
|CamelMllpAcknowledgementType|String|If neither CamelMllpAcknowledgement nor CamelMllpAcknowledgementString is present and autoAck is true, this property can be used to specify the HL7 acknowledgement type (i.e. AA, AE, AR)|
|CamelMllpAutoAcknowledge|Boolean|Overrides the autoAck query parameter|
|CamelMllpCloseConnectionBeforeSend|Boolean|If true, the Socket will be closed before sending data|
|CamelMllpResetConnectionBeforeSend|Boolean|If true, the Socket will be reset before sending data|
|CamelMllpCloseConnectionAfterSend|Boolean|If true, the Socket will be closed immediately after sending data|
|CamelMllpResetConnectionAfterSend|Boolean|If true, the Socket will be reset immediately after sending any data|
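In the style of the examples shown for the other components above, a minimal consumer route can be sketched as follows. This is an illustrative sketch only: the listen address, port, and the `mock:result` endpoint are assumptions, not part of the original page.

```java
// Sketch only: hostname and port 8888 are hypothetical.
// autoAck is true by default, so the component generates and sends the
// HL7 Application Acknowledgement (AA/AE/AR) back to the client.
from("mllp://0.0.0.0:8888")
    .log("Received HL7 message")
    .to("mock:result");
```

To read messages without acknowledging them, disable `autoAck` and use the `InOnly` exchange pattern, as described above.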
# MLLP Producer

The MLLP Producer supports sending MLLP-framed messages and receiving
HL7 Acknowledgements. The received acknowledgement is interrogated, and an
exception is raised in the event of a negative acknowledgement. The MLLP
Producer can ignore acknowledgements when configured with the `InOnly`
exchange pattern.

## Exchange Properties

The state of the TCP Socket can be controlled by these properties on the
Camel exchange:

|Key|Type|Description|
|---|---|---|
|CamelMllpCloseConnectionBeforeSend|Boolean|If true, the Socket will be closed before sending data|
|CamelMllpResetConnectionBeforeSend|Boolean|If true, the Socket will be reset before sending data|
|CamelMllpCloseConnectionAfterSend|Boolean|If true, the Socket will be closed immediately after sending data|
|CamelMllpResetConnectionAfterSend|Boolean|If true, the Socket will be reset immediately after sending any data|
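For completeness, a producer counterpart can be sketched as well; again, the remote host, port, and the `direct:sendHl7` input endpoint are illustrative assumptions.

```java
// Sketch only: host and port are hypothetical.
// The producer sends the MLLP-framed message, then interrogates the returned
// HL7 acknowledgement and raises an exception on a negative (AE/AR) response.
from("direct:sendHl7")
    .to("mllp://hl7server.example.com:8888");
```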
## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|autoAck|Enable/Disable the automatic generation of an MLLP Acknowledgement (MLLP Consumers only)|true|boolean|
|charsetName|Sets the default charset to use||string|
|configuration|Sets the default configuration to use when creating MLLP endpoints.||object|
|hl7Headers|Enable/Disable the automatic generation of message headers from the HL7 Message (MLLP Consumers only)|true|boolean|
|requireEndOfData|Enable/Disable strict compliance to the MLLP standard. The MLLP standard specifies START\_OF\_BLOCK + hl7 payload + END\_OF\_BLOCK + END\_OF\_DATA, however, some systems do not send the final END\_OF\_DATA byte. This setting controls whether or not the final END\_OF\_DATA byte is required or optional.|true|boolean|
|stringPayload|Enable/Disable converting the payload to a String. If enabled, HL7 Payloads received from external systems will be validated and converted to a String. If the charsetName property is set, that character set will be used for the conversion. If the charsetName property is not set, the value of MSH-18 will be used to determine the appropriate character set. If MSH-18 is not set, then the default ISO-8859-1 character set will be used.|true|boolean|
|validatePayload|Enable/Disable the validation of HL7 Payloads. If enabled, HL7 Payloads received from external systems will be validated (see Hl7Util.generateInvalidPayloadExceptionMessage for details on the validation). If an invalid payload is detected, a MllpInvalidMessageException (for consumers) or a MllpInvalidAcknowledgementException will be thrown.|false|boolean|
|acceptTimeout|Timeout (in milliseconds) while waiting for a TCP connection (TCP Server Only)|60000|integer|
|backlog|The maximum queue length for incoming connection indications (a request to connect) is set to the backlog parameter.
If a connection indication arrives when the queue is full, the connection is refused.|5|integer|
|bindRetryInterval|TCP Server Only - The number of milliseconds to wait between bind attempts|5000|integer|
|bindTimeout|TCP Server Only - The number of milliseconds to retry binding to a server port|30000|integer|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to receive incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. If disabled, the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions by logging them at WARN or ERROR level and ignoring them.|true|boolean|
|lenientBind|TCP Server Only - Allow the endpoint to start before the TCP ServerSocket is bound. In some environments, it may be desirable to allow the endpoint to start before the TCP ServerSocket is bound.|false|boolean|
|maxConcurrentConsumers|The maximum number of concurrent MLLP Consumer connections that will be allowed. If a new connection is received and the maximum number of connections are already established, the new connection will be reset immediately.|5|integer|
|reuseAddress|Enable/disable the SO\_REUSEADDR socket option.|false|boolean|
|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.|InOut|object|
|connectTimeout|Timeout (in milliseconds) for establishing a TCP connection (TCP Client only)|30000|integer|
|idleTimeoutStrategy|Decide what action to take when an idle timeout occurs. Possible values are: RESET (set SO\_LINGER to 0 and reset the socket) and CLOSE (close the socket gracefully). The default is RESET.|RESET|object|
|keepAlive|Enable/disable the SO\_KEEPALIVE socket option.|true|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message).
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|tcpNoDelay|Enable/disable the TCP\_NODELAY socket option.|true|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|defaultCharset|Set the default character set to use for byte to/from String conversions.|ISO-8859-1|string| +|logPhi|Whether to log PHI|true|boolean| +|logPhiMaxBytes|Set the maximum number of bytes of PHI that will be logged in a log entry.|5120|integer| +|maxBufferSize|Maximum buffer size used when receiving or sending data over the wire.|1073741824|integer| +|minBufferSize|Minimum buffer size used when receiving or sending data over the wire.|2048|integer| +|readTimeout|The SO\_TIMEOUT value (in milliseconds) used after the start of an MLLP frame has been received|5000|integer| +|receiveBufferSize|Sets the SO\_RCVBUF option to the specified value (in bytes)|8192|integer| +|receiveTimeout|The SO\_TIMEOUT value (in milliseconds) used when waiting for the start of an MLLP frame|15000|integer| +|sendBufferSize|Sets the SO\_SNDBUF option to the specified value (in bytes)|8192|integer| +|idleTimeout|The approximate idle time allowed before the Client TCP Connection will be reset. 
A null value or a value less than or equal to zero will disable the idle timeout.||integer|

## Endpoint Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|hostname|Hostname or IP address for the TCP connection. The default value is null, which means any local IP address||string|
|port|Port number for the TCP connection||integer|
|autoAck|Enable/Disable the automatic generation of an MLLP Acknowledgement (MLLP Consumers only)|true|boolean|
|charsetName|Sets the default charset to use||string|
|hl7Headers|Enable/Disable the automatic generation of message headers from the HL7 Message (MLLP Consumers only)|true|boolean|
|requireEndOfData|Enable/Disable strict compliance to the MLLP standard. The MLLP standard specifies START\_OF\_BLOCK + hl7 payload + END\_OF\_BLOCK + END\_OF\_DATA, however, some systems do not send the final END\_OF\_DATA byte. This setting controls whether or not the final END\_OF\_DATA byte is required or optional.|true|boolean|
|stringPayload|Enable/Disable converting the payload to a String. If enabled, HL7 Payloads received from external systems will be validated and converted to a String. If the charsetName property is set, that character set will be used for the conversion. If the charsetName property is not set, the value of MSH-18 will be used to determine the appropriate character set. If MSH-18 is not set, then the default ISO-8859-1 character set will be used.|true|boolean|
|validatePayload|Enable/Disable the validation of HL7 Payloads. If enabled, HL7 Payloads received from external systems will be validated (see Hl7Util.generateInvalidPayloadExceptionMessage for details on the validation).
If an invalid payload is detected, a MllpInvalidMessageException (for consumers) or a MllpInvalidAcknowledgementException will be thrown.|false|boolean|
+|acceptTimeout|Timeout (in milliseconds) while waiting for a TCP connection. TCP Server only|60000|integer|
+|backlog|The maximum queue length for incoming connection indications (a request to connect) is set to the backlog parameter. If a connection indication arrives when the queue is full, the connection is refused.|5|integer|
+|bindRetryInterval|TCP Server Only - The number of milliseconds to wait between bind attempts|5000|integer|
+|bindTimeout|TCP Server Only - The number of milliseconds to retry binding to a server port|30000|integer|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to receive incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. If disabled, the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions by logging them at WARN or ERROR level and ignoring them.|true|boolean|
+|lenientBind|TCP Server Only - Allow the endpoint to start before the TCP ServerSocket is bound. In some environments, it may be desirable to allow the endpoint to start before the TCP ServerSocket is bound.|false|boolean|
+|maxConcurrentConsumers|The maximum number of concurrent MLLP Consumer connections that will be allowed. If a new connection is received and the maximum number of connections is already established, the new connection will be reset immediately.|5|integer|
+|reuseAddress|Enable/disable the SO\_REUSEADDR socket option.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use.
By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.|InOut|object|
+|connectTimeout|Timeout (in milliseconds) for establishing a TCP connection. TCP Client only|30000|integer|
+|idleTimeoutStrategy|Decide what action to take when an idle timeout occurs. Possible values are: RESET (set SO\_LINGER to 0 and reset the socket) and CLOSE (close the socket gracefully). Default is RESET.|RESET|object|
+|keepAlive|Enable/disable the SO\_KEEPALIVE socket option.|true|boolean|
+|tcpNoDelay|Enable/disable the TCP\_NODELAY socket option.|true|boolean|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|maxBufferSize|Maximum buffer size used when receiving or sending data over the wire.|1073741824|integer| +|minBufferSize|Minimum buffer size used when receiving or sending data over the wire.|2048|integer| +|readTimeout|The SO\_TIMEOUT value (in milliseconds) used after the start of an MLLP frame has been received|5000|integer| +|receiveBufferSize|Sets the SO\_RCVBUF option to the specified value (in bytes)|8192|integer| +|receiveTimeout|The SO\_TIMEOUT value (in milliseconds) used when waiting for the start of an MLLP frame|15000|integer| +|sendBufferSize|Sets the SO\_SNDBUF option to the specified value (in bytes)|8192|integer| +|idleTimeout|The approximate idle time allowed before the Client TCP Connection will be reset. A null value or a value less than or equal to zero will disable the idle timeout.||integer| diff --git a/camel-mock.md b/camel-mock.md new file mode 100644 index 0000000000000000000000000000000000000000..487c2c6587217af5f96ecd9eecaf9eafc4001fe5 --- /dev/null +++ b/camel-mock.md @@ -0,0 +1,391 @@ +# Mock + +**Since Camel 1.0** + +**Only producer is supported** + +Testing of distributed and asynchronous processing is notoriously +challenging. The [Mock](#mock-component.adoc) and +[DataSet](#dataset-component.adoc) endpoints work with the Camel Testing +Framework to simplify your unit and integration testing using +[Enterprise Integration +Patterns](#eips:enterprise-integration-patterns.adoc) and Camel’s large +range of Components together with the powerful Bean Integration. + +The Mock component provides a powerful declarative testing mechanism, +which is similar to [jMock](http://www.jmock.org) in that it allows +declarative expectations to be created on any Mock endpoint before a +test begins. 
Then the test is run, which typically fires messages to one
+or more endpoints, and finally the expectations can be asserted in a
+test case to ensure the system worked as expected.
+
+This allows you to test various things like:
+
+- The correct number of messages is received on each endpoint.
+
+- The correct payloads are received, in the right order.
+
+- Messages arrive at an endpoint in order, using some Expression to
+  create an order testing function.
+
+- Messages arriving match some kind of Predicate, such as that specific
+  headers have certain values, or that parts of the messages match
+  some predicate, such as by evaluating an
+  [XPath](#languages:xpath-language.adoc) or
+  [XQuery](#languages:xquery-language.adoc) Expression.
+
+There is also the [Test endpoint](#others:test-junit5.adoc), which is a
+Mock endpoint, but which uses a second endpoint to provide the list of
+expected message bodies and automatically sets up the Mock endpoint
+assertions. In other words, it’s a Mock endpoint that automatically sets
+up its assertions from some sample messages in a File or
+[database](#jpa-component.adoc), for example.
+
+**Mock endpoints keep received Exchanges in memory indefinitely.**
+
+Remember that Mock is designed for testing. When you add Mock endpoints
+to a route, each Exchange sent to the endpoint will be stored (to allow
+for later validation) in memory until explicitly reset or the JVM is
+restarted. If you are sending high volume and/or large messages, this
+may cause excessive memory use. If your goal is to test deployable
+routes inline, consider using NotifyBuilder or AdviceWith in your tests
+instead of adding Mock endpoints to routes directly. There are two
+options, `retainFirst` and `retainLast`, that can be used to limit the
+number of messages the Mock endpoints keep in memory.
+
+# URI format
+
+    mock:someName[?options]
+
+Where `someName` can be any string that uniquely identifies the
+endpoint.
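For example, a route under test can send its output straight to a mock endpoint (the endpoint name `result` here is illustrative):

```java
// A route fragment: everything arriving on direct:start is forwarded
// to the mock endpoint "mock:result", where a test can later assert
// expectations against the received exchanges.
from("direct:start")
    .to("mock:result");
```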
+
+# Simple Example
+
+Here’s a simple example of a Mock endpoint in use. First, the endpoint is
+resolved on the context. Then we set an expectation, and then, after the
+test has run, we assert that our expectations have been met:
+
+    MockEndpoint resultEndpoint = context.getEndpoint("mock:foo", MockEndpoint.class);
+
+    // set expectations
+    resultEndpoint.expectedMessageCount(2);
+
+    // send some messages
+
+    // now let's assert that the mock:foo endpoint received 2 messages
+    resultEndpoint.assertIsSatisfied();
+
+You typically always call the
+[`assertIsSatisfied()`](https://www.javadoc.io/doc/org.apache.camel/camel-mock/latest/org/apache/camel/component/mock/MockEndpoint.html#assertIsSatisfied--)
+method to test that the expectations were met after running a test.
+
+Camel will by default wait 10 seconds when `assertIsSatisfied()` is
+invoked. This can be configured by calling the
+`setResultWaitTime(millis)` method.
+
+# Using assertPeriod
+
+When the assertion is satisfied, Camel stops waiting and
+continues from the `assertIsSatisfied` method. That means that if a new
+message arrives at the mock endpoint just a bit later, that arrival
+will not affect the outcome of the assertion. If you do want to
+test that no new messages arrive for a period afterwards, you
+can do so with the `setAssertPeriod` method, for example:
+
+    MockEndpoint resultEndpoint = context.getEndpoint("mock:foo", MockEndpoint.class);
+    resultEndpoint.setAssertPeriod(5000);
+    resultEndpoint.expectedMessageCount(2);
+
+    // send some messages
+
+    // now let's assert that the mock:foo endpoint received 2 messages
+    resultEndpoint.assertIsSatisfied();
+
+# Setting expectations
+
+You can see from the Javadoc of
+[MockEndpoint](https://www.javadoc.io/doc/org.apache.camel/camel-mock/current/org/apache/camel/component/mock/MockEndpoint.html)
+the various helper methods you can use to set expectations.
The main
+methods are as follows:
+
+|Method|Description|
+|---|---|
+|expectedMessageCount(int)|To define the expected count of messages on the endpoint.|
+|expectedMinimumMessageCount(int)|To define the minimum number of expected messages on the endpoint.|
+|expectedBodiesReceived(…)|To define the expected bodies that should be received (in order).|
+|expectedHeaderReceived(…)|To define the expected header that should be received.|
+|expectsAscending(Expression)|To add an expectation that messages are received in ascending order, using the given Expression to compare messages.|
+|expectsDescending(Expression)|To add an expectation that messages are received in descending order, using the given Expression to compare messages.|
+|expectsNoDuplicates(Expression)|To add an expectation that no duplicate messages are received; using an Expression to calculate a unique identifier for each message. This could be something like the JMSMessageID if using JMS, or some unique reference number within the message.|
+ +Here’s another example: + + resultEndpoint.expectedBodiesReceived("firstMessageBody", "secondMessageBody", "thirdMessageBody"); + +# Adding expectations to specific messages + +In addition, you can use the +[`message(int messageIndex)`](https://javadoc.io/doc/org.apache.camel/camel-mock/latest/org/apache/camel/component/mock/MockEndpoint.html) +method to add assertions about a specific message that is received. + +For example, to add expectations of the headers or body of the first +message (using zero-based indexing like `java.util.List`), you can use +the following code: + + resultEndpoint.message(0).header("foo").isEqualTo("bar"); + +There are some examples of the Mock endpoint in use in the [`camel-core` +processor +tests](https://github.com/apache/camel/tree/main/core/camel-core/src/test/java/org/apache/camel/processor). + +# Mocking existing endpoints + +Camel now allows you to automatically mock existing endpoints in your +Camel routes. + +**How it works** The endpoints are still in action. What happens +differently is that a [Mock](#mock-component.adoc) endpoint is injected +and receives the message first and then delegates the message to the +target endpoint. You can view this as a kind of intercept and delegate +or endpoint listener. + +Suppose you have the given route below: + +****Route**** + +You can then use the `adviceWith` feature in Camel to mock all the +endpoints in a given route from your unit test, as shown below: + +****`adviceWith` mocking all endpoints**** + +Notice that the mock endpoint is given the URI `mock:`, for +example `mock:direct:foo`. Camel logs at `INFO` level the endpoints +being mocked: + + INFO Adviced endpoint [direct://foo] with mock endpoint [mock:direct:foo] + +**Mocked endpoints are without parameters** +Endpoints which are mocked will have their parameters stripped off. For +example, the endpoint `log:foo?showAll=true` will be mocked to the +following endpoint `mock:log:foo`. Notice the parameters have been +removed. 
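As a sketch, the `adviceWith` mocking described above can look like the following in a unit test. The route id `"myRoute"` and the `direct:foo` endpoint are illustrative, and the exact builder API may vary between Camel versions:

```java
// Hypothetical test snippet: advise the route with id "myRoute" so that
// every endpoint in it also feeds a corresponding mock: endpoint.
AdviceWith.adviceWith(context, "myRoute", advice -> advice.mockEndpoints());

// After advising, expectations can be set on the auto-created mocks,
// e.g. the mock wrapping direct:foo:
MockEndpoint mock = context.getEndpoint("mock:direct:foo", MockEndpoint.class);
mock.expectedMessageCount(1);
```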
+ +It’s also possible to only mock certain endpoints using a pattern. For +example to mock all `log` endpoints you do as shown: + +****`adviceWith` mocking only log endpoints using a pattern**** + +The pattern supported can be a wildcard or a regular expression. See +more details about this at Intercept as it is the same matching function +used by Camel. + +Mind that mocking endpoints causes the messages to be copied when they +arrive at the mock. +That means Camel will use more memory. This may not be suitable when you +send in a lot of messages. + +# Mocking existing endpoints using the `camel-test` component + +Instead of using the `adviceWith` to instruct Camel to mock endpoints, +you can easily enable this behavior when using the `camel-test` Test +Kit. + +The same route can be tested as follows. Notice that we return `"*"` +from the `isMockEndpoints` method, which tells Camel to mock all +endpoints. + +If you only want to mock all `log` endpoints you can return `"log*"` +instead. + +****`isMockEndpoints` using camel-test kit**** + +# Mocking existing endpoints with XML DSL + +If you do not use the `camel-test` component for unit testing (as shown +above) you can use a different approach when using XML files for +routes. +The solution is to create a new XML file used by the unit test and then +include the intended XML file which has the route you want to test. + +Suppose we have the route in the `camel-route.xml` file: + +****camel-route.xml**** + +Then we create a new XML file as follows, where we include the +`camel-route.xml` file and define a spring bean with the class +`org.apache.camel.impl.InterceptSendToMockEndpointStrategy` which tells +Camel to mock all endpoints: + +****test-camel-route.xml**** + +Then in your unit test you load the new XML file +(`test-camel-route.xml`) instead of `camel-route.xml`. 
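A minimal sketch of what such a wrapper file could look like (the file names and schema locations are illustrative, not taken from the original):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- include the route under test -->
    <import resource="camel-route.xml"/>

    <!-- tell Camel to mock all endpoints -->
    <bean id="mockAllEndpoints"
          class="org.apache.camel.impl.InterceptSendToMockEndpointStrategy"/>

</beans>
```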
+
+To only mock all [Log](#log-component.adoc) endpoints, you can define
+the pattern in the constructor for the bean:
+
+
+
+
+# Mocking endpoints and skip sending to original endpoint
+
+Sometimes you want to easily mock and skip sending to certain endpoints,
+so the message is detoured and sent to the mock endpoint only. You can
+use the `mockEndpointsAndSkip` method using AdviceWith. The example
+below will skip sending to the two endpoints `"direct:foo"` and
+`"direct:bar"`.
+
+****adviceWith mock and skip sending to endpoints****
+
+The same example using the Test Kit:
+
+****isMockEndpointsAndSkip using camel-test kit****
+
+# Limiting the number of messages to keep
+
+The [Mock](#mock-component.adoc) endpoints will by default keep a copy
+of every Exchange they receive. So if you test with a lot of
+messages, this will consume memory.
+We have introduced two options, `retainFirst` and `retainLast`, that can
+be used to keep only the first and/or last N Exchanges.
+
+For example, in the code below, we only want to retain a copy of the
+first five and last five Exchanges the mock receives.
+
+    MockEndpoint mock = getMockEndpoint("mock:data");
+    mock.setRetainFirst(5);
+    mock.setRetainLast(5);
+    mock.expectedMessageCount(2000);
+
+    mock.assertIsSatisfied();
+
+Using this has some limitations. The `getExchanges()` and
+`getReceivedExchanges()` methods on the `MockEndpoint` will return only
+the retained copies of the Exchanges. So in the example above, the list
+will contain 10 Exchanges: the first five and the last five.
+The `retainFirst` and `retainLast` options also have limitations on
+which expectation methods you can use. For example, the `expectedXXX`
+methods that work on message bodies, headers, etc. will only operate on
+the retained messages. In the example above, they can test only the
+expectations on the 10 retained messages.
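Where you only need to know that routing has finished, rather than retain and inspect the exchanges, the NotifyBuilder mentioned earlier is a lighter-weight alternative. A sketch (the `direct:start` endpoint is illustrative):

```java
// Hypothetical: wait until 2000 exchanges have been routed to completion,
// without any mock endpoint storing copies of them.
NotifyBuilder notify = new NotifyBuilder(context)
        .whenDone(2000)
        .create();

template.sendBody("direct:start", "some message");
// ... send the remaining messages ...

// true if 2000 exchanges completed within 10 seconds
boolean done = notify.matches(10, TimeUnit.SECONDS);
```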
+ +# Testing with arrival times + +The [Mock](#mock-component.adoc) endpoint stores the arrival time of the +message as a property on the Exchange. + + Date time = exchange.getProperty(Exchange.RECEIVED_TIMESTAMP, Date.class); + +You can use this information to know when the message arrived at the +mock. But it also provides foundation to know the time interval between +the previous and next message arrived at the mock. You can use this to +set expectations using the `arrives` DSL on the +[Mock](#mock-component.adoc) endpoint. + +For example, to say that the first message should arrive between 0 and 2 +seconds before the next you can do: + + mock.message(0).arrives().noLaterThan(2).seconds().beforeNext(); + +You can also define this as that second message (0 index based) should +arrive no later than 0 and 2 seconds after the previous: + + mock.message(1).arrives().noLaterThan(2).seconds().afterPrevious(); + +You can also use between to set a lower bound. For example, suppose that +it should be between 1 and 4 seconds: + + mock.message(1).arrives().between(1, 4).seconds().afterPrevious(); + +You can also set the expectation on all messages, for example, to say +that the gap between them should be at most 1 second: + + mock.allMessages().arrives().noLaterThan(1).seconds().beforeNext(); + +**Time units** + +In the example above we use `seconds` as the time unit, but Camel offers +`milliseconds`, and `minutes` as well. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|log|To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging, then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|exchangeFormatter|Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter.||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|name|Name of mock endpoint||string| +|assertPeriod|Sets a grace period after which the mock endpoint will re-assert to ensure the preliminary assertion is still valid. This is used, for example, to assert that exactly a number of messages arrive. For example, if the expected count was set to 5, then the assertion is satisfied when five or more messages arrive. To ensure that exactly 5 messages arrive, then you would need to wait a little period to ensure no further message arrives. This is what you can use this method for. By default, this period is disabled.||duration| +|expectedCount|Specifies the expected number of message exchanges that should be received by this endpoint. 
Beware: If you want to expect that 0 messages, then take extra care, as 0 matches when the tests starts, so you need to set a assert period time to let the test run for a while to make sure there are still no messages arrived; for that use setAssertPeriod(long). An alternative is to use NotifyBuilder, and use the notifier to know when Camel is done routing some messages, before you call the assertIsSatisfied() method on the mocks. This allows you to not use a fixed assert period, to speedup testing times. If you want to assert that exactly nth message arrives to this mock endpoint, then see also the setAssertPeriod(long) method for further details.|-1|integer| +|failFast|Sets whether assertIsSatisfied() should fail fast at the first detected failed expectation while it may otherwise wait for all expected messages to arrive before performing expectations verifications. Is by default true. Set to false to use behavior as in Camel 2.x.|false|boolean| +|log|To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class.|false|boolean| +|reportGroup|A number that is used to turn on throughput logging based on groups of the size.||integer| +|resultMinimumWaitTime|Sets the minimum expected amount of time (in millis) the assertIsSatisfied() will wait on a latch until it is satisfied||duration| +|resultWaitTime|Sets the maximum amount of time (in millis) the assertIsSatisfied() will wait on a latch until it is satisfied||duration| +|retainFirst|Specifies to only retain the first nth number of received Exchanges. This is used when testing with big data, to reduce memory consumption by not storing copies of every Exchange this mock endpoint receives. Important: When using this limitation, then the getReceivedCounter() will still return the actual number of received Exchanges. 
For example if we have received 5000 Exchanges, and have configured to only retain the first 10 Exchanges, then the getReceivedCounter() will still return 5000 but there is only the first 10 Exchanges in the getExchanges() and getReceivedExchanges() methods. When using this method, then some of the other expectation methods is not supported, for example the expectedBodiesReceived(Object...) sets a expectation on the first number of bodies received. You can configure both setRetainFirst(int) and setRetainLast(int) methods, to limit both the first and last received.|-1|integer| +|retainLast|Specifies to only retain the last nth number of received Exchanges. This is used when testing with big data, to reduce memory consumption by not storing copies of every Exchange this mock endpoint receives. Important: When using this limitation, then the getReceivedCounter() will still return the actual number of received Exchanges. For example if we have received 5000 Exchanges, and have configured to only retain the last 20 Exchanges, then the getReceivedCounter() will still return 5000 but there is only the last 20 Exchanges in the getExchanges() and getReceivedExchanges() methods. When using this method, then some of the other expectation methods is not supported, for example the expectedBodiesReceived(Object...) sets a expectation on the first number of bodies received. You can configure both setRetainFirst(int) and setRetainLast(int) methods, to limit both the first and last received.|-1|integer| +|sleepForEmptyTest|Allows a sleep to be specified to wait to check that this endpoint really is empty when expectedMessageCount(int) is called with zero||duration| +|copyOnExchange|Sets whether to make a deep copy of the incoming Exchange when received at this mock endpoint. Is by default true.|true|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-mongodb-gridfs.md b/camel-mongodb-gridfs.md new file mode 100644 index 0000000000000000000000000000000000000000..07f061bdeeee77e4f99f7c81684b12cc515a2cca --- /dev/null +++ b/camel-mongodb-gridfs.md @@ -0,0 +1,141 @@ +# Mongodb-gridfs + +**Since Camel 2.18** + +**Both producer and consumer are supported** + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-mongodb-gridfs + x.y.z + + + +# URI format + + mongodb-gridfs:connectionBean?database=databaseName&bucket=bucketName[&moreOptions...] + +# Configuration of a database in Spring XML + +The following Spring XML creates a bean defining the connection to a +MongoDB instance. + + + + + + + + +# Sample route + +The following route defined in Spring XML executes the operation +[**findOne**](#mongodb-gridfs-component.adoc) on a collection. + +**Get a file from GridFS** + + + + + + + + +# GridFS operations - producer endpoint + +## count + +Returns the total number of files in the collection, returning an +Integer as the OUT message body. + + // from("direct:count").to("mongodb-gridfs?database=tickets&operation=count"); + Integer result = template.requestBodyAndHeader("direct:count", "irrelevantBody"); + assertTrue("Result is not of type Long", result instanceof Integer); + +You can provide a filename header to provide a count of files matching +that filename. 
+
+    Map headers = new HashMap();
+    headers.put(Exchange.FILE_NAME, "filename.txt");
+    Integer count = template.requestBodyAndHeaders("direct:count", "irrelevantBody", headers, Integer.class);
+
+## listAll
+
+Returns a Reader that lists all the filenames and their IDs in a tab
+separated stream.
+
+    // from("direct:listAll").to("mongodb-gridfs?database=tickets&operation=listAll");
+    Reader result = template.requestBody("direct:listAll", "irrelevantBody", Reader.class);
+
+    filename1.txt   1252314321
+    filename2.txt   2897651254
+
+## findOne
+
+Finds a file in the GridFS system and sets the body to an InputStream of
+the content. Also provides the metadata as headers. It uses
+`Exchange.FILE_NAME` from the incoming headers to determine the file to
+find.
+
+    // from("direct:findOne").to("mongodb-gridfs?database=tickets&operation=findOne");
+    Map headers = new HashMap();
+    headers.put(Exchange.FILE_NAME, "filename.txt");
+    InputStream result = template.requestBodyAndHeaders("direct:findOne", "irrelevantBody", headers, InputStream.class);
+
+## create
+
+Creates a new file in the GridFS database. It uses the
+`Exchange.FILE_NAME` from the incoming headers for the name and the body
+contents (as an InputStream) as the content.
+
+    // from("direct:create").to("mongodb-gridfs?database=tickets&operation=create");
+    Map headers = new HashMap();
+    headers.put(Exchange.FILE_NAME, "filename.txt");
+    InputStream stream = ... the data for the file ...
+    template.requestBodyAndHeaders("direct:create", stream, headers);
+
+## remove
+
+Removes a file from the GridFS database.
+ + // from("direct:remove").to("mongodb-gridfs?database=tickets&operation=remove"); + Map headers = new HashMap(); + headers.put(Exchange.FILE_NAME, "filename.txt"); + template.requestBodyAndHeaders("direct:remove", "", headers); + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|connectionBean|Name of com.mongodb.client.MongoClient to use.||string| +|bucket|Sets the name of the GridFS bucket within the database. Default is fs.|fs|string| +|database|Sets the name of the MongoDB database to target||string| +|readPreference|Sets a MongoDB ReadPreference on the Mongo connection. Read preferences set directly on the connection will be overridden by this setting. The com.mongodb.ReadPreference#valueOf(String) utility method is used to resolve the passed readPreference value. Some examples for the possible values are nearest, primary or secondary etc.||object| +|writeConcern|Set the WriteConcern for write operations on MongoDB using the standard ones. Resolved from the fields of the WriteConcern class by calling the WriteConcern#valueOf(String) method.||object| +|delay|Sets the delay between polls within the Consumer. Default is 500ms|500|duration| +|fileAttributeName|If the QueryType uses a FileAttribute, this sets the name of the attribute that is used. Default is camel-processed.|camel-processed|string| +|initialDelay|Sets the initialDelay before the consumer will start polling. Default is 1000ms|1000|duration| +|persistentTSCollection|If the QueryType uses a persistent timestamp, this sets the name of the collection within the DB to store the timestamp.|camel-timestamps|string| +|persistentTSObject|If the QueryType uses a persistent timestamp, this is the ID of the object in the collection to store the timestamp.|camel-timestamp|string| +|query|Additional query parameters (in JSON) that are used to configure the query used for finding files in the GridFsConsumer||string| +|queryStrategy|Sets the QueryStrategy that is used for polling for new files. 
Default is Timestamp|TimeStamp|object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|operation|Sets the operation this endpoint will execute against GridFs.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-mongodb.md b/camel-mongodb.md
new file mode 100644
index 0000000000000000000000000000000000000000..21dd72c8a27bb6655baba5f9914e84ba7caa2deb
--- /dev/null
+++ b/camel-mongodb.md
@@ -0,0 +1,904 @@
+# Mongodb
+
+**Since Camel 2.19**
+
+**Both producer and consumer are supported**
+
+According to Wikipedia: "NoSQL is a movement promoting a loosely defined
+class of non-relational data stores that break with a long history of
+relational databases and ACID guarantees." NoSQL solutions have grown in
+popularity in the last few years, and major, heavily used sites and
+services such as Facebook, LinkedIn and Twitter are known to use them
+extensively to achieve scalability and agility.
+
+Basically, NoSQL solutions differ from traditional RDBMS (Relational
+Database Management Systems) in that they don’t use SQL as their query
+language and generally don’t offer ACID-like transactional behaviour nor
+relational data. Instead, they are designed around the concept of
+flexible data structures and schemas (meaning that the traditional
+concept of a database table with a fixed schema is dropped), extreme
+scalability on commodity hardware and blazing-fast processing.
+
+MongoDB is a very popular NoSQL solution and the camel-mongodb component
+integrates Camel with MongoDB, allowing you to interact with MongoDB
+collections both as a producer (performing operations on the collection)
+and as a consumer (consuming documents from a MongoDB collection).
+
+MongoDB revolves around the concepts of documents (not as in office
+documents, but rather hierarchical data defined in JSON/BSON) and
+collections. This component page will assume you are familiar with them.
+Otherwise, visit [http://www.mongodb.org/](http://www.mongodb.org/).
+
+The MongoDB Camel component uses Mongo Java Driver 4.x.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-mongodb</artifactId>
+        <version>x.y.z</version>
+    </dependency>
+
+# URI formats
+
+    mongodb:connectionBean?database=databaseName&collection=collectionName&operation=operationName[&moreOptions...]
+    mongodb:dummy?hosts=hostnames&database=databaseName&collection=collectionName&operation=operationName[&moreOptions...]
+
+# Configuration of database in Spring XML
+
+The following Spring XML creates a bean defining the connection to a
+MongoDB instance.
+
+Since mongo java driver 3, the WriteConcern and readPreference options
+are not dynamically modifiable. They are defined in the mongoClient
+object:
+
+    <bean id="mongoBean" class="com.mongodb.client.MongoClients" factory-method="create">
+        <constructor-arg value="mongodb://localhost:27017" />
+    </bean>
+
+# Sample route
+
+The following route defined in Spring XML executes the operation
+[getDbStats](#getdbstats) on a collection.
+
+**Get DB stats for specified collection**
+
+    <route>
+        <from uri="direct:getDbStats" />
+        <to uri="mongodb:mongoBean?database=flights&amp;collection=tickets&amp;operation=getDbStats" />
+    </route>
+
+# MongoDB operations - producer endpoints
+
+## Query operations
+
+### findById
+
+This operation retrieves only one element from the collection whose \_id
+field matches the content of the IN message body. The incoming object
+can be anything that has an equivalent to a `Bson` type. See
+[http://bsonspec.org/spec.html](http://bsonspec.org/spec.html) and
+[http://www.mongodb.org/display/DOCS/Java+Types](http://www.mongodb.org/display/DOCS/Java+Types).
+
+    from("direct:findById")
+        .to("mongodb:myDb?database=flights&collection=tickets&operation=findById")
+        .to("mock:resultFindById");
+
+Note that the default \_id is treated by Mongo as an `ObjectId`
+type, so you may need to convert it properly:
+
+    from("direct:findById")
+        .convertBodyTo(ObjectId.class)
+        .to("mongodb:myDb?database=flights&collection=tickets&operation=findById")
+        .to("mock:resultFindById");
+
+**Supports optional parameters**
+
+This operation supports projection operators. See
+[Specifying a fields filter (projection)](#specifying-a-fields-filter-projection).
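Since an `ObjectId`-typed `_id` will only match when the incoming key can be converted to an `ObjectId`, a quick shape check on the id string can help catch bad input before it reaches the endpoint. A minimal plain-Java sketch (the id value and class name are illustrative, not from the Camel API):

```java
public class ObjectIdShapeCheck {
    public static void main(String[] args) {
        // A canonical MongoDB ObjectId is 24 hexadecimal characters.
        // Anything else would need explicit conversion (or will match
        // no document) when used as the _id key for findById.
        String id = "64b7f3a2e1d4c5b6a7f8e9d0"; // hypothetical id
        boolean looksLikeObjectId = id.matches("[0-9a-fA-F]{24}");
        System.out.println(looksLikeObjectId);
    }
}
```

A check like this is cheap insurance when the id originates from user input rather than a previous Mongo operation.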
+
+### findOneByQuery
+
+Retrieve the first element from a collection matching a MongoDB query
+selector. **If the `CamelMongoDbCriteria` header is set, then its value
+is used as the query selector.** If the `CamelMongoDbCriteria` header is
+*null*, then the IN message body is used as the query selector. In both
+cases, the query selector should be of type `Bson` or convertible to
+`Bson` (for instance, a JSON string or `HashMap`). See
+[Type conversions](#type-conversions) for more info.
+
+Create query selectors using the `Filters` provided by the MongoDB
+Driver.
+
+#### Example without a query selector (returns the first document in a collection)
+
+    from("direct:findOneByQuery")
+        .to("mongodb:myDb?database=flights&collection=tickets&operation=findOneByQuery")
+        .to("mock:resultFindOneByQuery");
+
+#### Example with a query selector (returns the first matching document in a collection)
+
+    from("direct:findOneByQuery")
+        .setHeader(MongoDbConstants.CRITERIA, constant(Filters.eq("name", "Raul Kripalani")))
+        .to("mongodb:myDb?database=flights&collection=tickets&operation=findOneByQuery")
+        .to("mock:resultFindOneByQuery");
+
+**Supports optional parameters**
+
+This operation supports projection operators and sort clauses. See
+[Specifying a fields filter (projection)](#specifying-a-fields-filter-projection) and
+[Specifying a sort clause](#specifying-a-sort-clause).
+
+### findAll
+
+The `findAll` operation returns all documents matching a query, or none
+at all, in which case all documents contained in the collection are
+returned. **The query object is extracted from the `CamelMongoDbCriteria`
+header.** If the `CamelMongoDbCriteria` header is null, the query object is
+extracted from the message body, i.e. it should be of type `Bson` or
+convertible to `Bson`. It can be a JSON String or a Hashmap. See
+[Type conversions](#type-conversions) for more info.
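Because the selector only needs to be convertible to `Bson`, a plain JSON string works as the query object as well. A minimal sketch of assembling such a selector without the driver's `Filters` helpers (the field and value are illustrative):

```java
public class JsonSelectorExample {
    public static void main(String[] args) {
        // camel-mongodb's type converters turn a JSON string body or
        // header into Bson, so a selector can be built as plain text.
        String name = "Raul Kripalani";
        String selector = "{ \"name\": \"" + name + "\" }";
        System.out.println(selector);
    }
}
```

This is convenient when the query arrives from a non-Java source (a file, an HTTP request) and instantiating driver classes would be overkill.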
+
+#### Example without a query selector (returns all documents in a collection)
+
+    from("direct:findAll")
+        .to("mongodb:myDb?database=flights&collection=tickets&operation=findAll")
+        .to("mock:resultFindAll");
+
+#### Example with a query selector (returns all matching documents in a collection)
+
+    from("direct:findAll")
+        .setHeader(MongoDbConstants.CRITERIA, constant(Filters.eq("name", "Raul Kripalani")))
+        .to("mongodb:myDb?database=flights&collection=tickets&operation=findAll")
+        .to("mock:resultFindAll");
+
+#### Example with option *outputType=MongoIterable* and batch size
+
+    from("direct:findAll")
+        .setHeader(MongoDbConstants.BATCH_SIZE).constant(10)
+        .setHeader(MongoDbConstants.CRITERIA, constant(Filters.eq("name", "Raul Kripalani")))
+        .to("mongodb:myDb?database=flights&collection=tickets&operation=findAll&outputType=MongoIterable")
+        .to("mock:resultFindAll");
+
+**Supports optional parameters**
+
+This operation supports projection operators and sort clauses. See
+[Specifying a fields filter (projection)](#specifying-a-fields-filter-projection) and
+[Specifying a sort clause](#specifying-a-sort-clause).
+
+### count
+
+Returns the total number of objects in a collection, returning a Long as
+the OUT message body.
+The following example will count the number of records in the
+"dynamicCollectionName" collection. Notice how dynamicity is enabled;
+as a result, the operation will not run against the collection
+configured on the endpoint ("flights"), but against the
+"dynamicCollectionName" collection:
+
+    from("direct:count")
+        .to("mongodb:myDb?database=tickets&collection=flights&operation=count&dynamicity=true");
+
+    Long result = template.requestBodyAndHeader("direct:count", "irrelevantBody", MongoDbConstants.COLLECTION, "dynamicCollectionName");
+    assertTrue("Result is not of type Long", result instanceof Long);
+
+You can also provide a query. **The query object is extracted from the
+`CamelMongoDbCriteria` header.** If the `CamelMongoDbCriteria` header is
+null, the query object is extracted from the message body, i.e. it
+should be of type `Bson` or convertible to `Bson`. The operation will
+then return the number of documents matching this criteria:
+
+    Document query = ...
+    Long count = template.requestBodyAndHeader("direct:count", query, MongoDbConstants.COLLECTION, "dynamicCollectionName");
+
+### Specifying a fields filter (projection)
+
+Query operations will, by default, return the matching objects in their
+entirety (with all their fields). If your documents are large and you
+only require retrieving a subset of their fields, you can specify a
+field filter in all query operations, simply by setting the relevant
+`Bson` (or type convertible to `Bson`, such as a JSON String, Map, etc.)
+on the `CamelMongoDbFieldsProjection` header, constant shortcut:
+`MongoDbConstants.FIELDS_PROJECTION`.
+
+Here is an example that uses MongoDB’s `Projections` to simplify the
+creation of Bson. It retrieves all fields except `_id` and
+`boringField`:
+
+    // route: from("direct:findAll").to("mongodb:myDb?database=flights&collection=tickets&operation=findAll")
+    Bson fieldProjection = Projections.exclude("_id", "boringField");
+    Object result = template.requestBodyAndHeader("direct:findAll", ObjectUtils.NULL, MongoDbConstants.FIELDS_PROJECTION, fieldProjection);
+
+### Specifying a sort clause
+
+There is often a requirement to fetch the min/max record from a
+collection based on sorting by a particular field. Use MongoDB’s
+`Sorts` to simplify the creation of the sort Bson. This example sorts
+descending by `_id`:
+
+    // route: from("direct:findAll").to("mongodb:myDb?database=flights&collection=tickets&operation=findAll")
+    Bson sorts = Sorts.descending("_id");
+    Object result = template.requestBodyAndHeader("direct:findAll", ObjectUtils.NULL, MongoDbConstants.SORT_BY, sorts);
+
+In a Camel route the SORT\_BY header can be used with the findOneByQuery
+operation to achieve the same result. If the FIELDS\_PROJECTION header
+is also specified, the operation will return a single field/value pair
+that can be passed directly to another component (for example, a
+parameterized MyBatis SELECT query). This example demonstrates fetching
+the temporally newest document from a collection and reducing the result
+to a single field, based on the `documentTimestamp` field:
+
+    from("direct:someTriggeringEvent")
+        .setHeader(MongoDbConstants.SORT_BY).constant(Sorts.descending("documentTimestamp"))
+        .setHeader(MongoDbConstants.FIELDS_PROJECTION).constant(Projections.include("documentTimestamp"))
+        .setBody().constant("{}")
+        .to("mongodb:myDb?database=local&collection=myDemoCollection&operation=findOneByQuery")
+        .to("direct:aMyBatisParameterizedSelect");
+
+## Create/update operations
+
+### insert
+
+Inserts a new object into the MongoDB collection, taken from the IN
+message body. Type conversion is attempted to turn it into `Document` or
+a `List`.
+Two modes are supported: single insert and multiple insert. For multiple
+insert, the endpoint will expect a List, Array or Collection of objects
+of any type, as long as they are - or can be converted to - `Document`.
+Example:
+
+    from("direct:insert")
+        .to("mongodb:myDb?database=flights&collection=tickets&operation=insert");
+
+The operation will return a WriteResult, and depending on the
+`WriteConcern` or the value of the `invokeGetLastError` option,
+`getLastError()` would have been called already or not.
If you want to
+access the ultimate result of the write operation, you need to retrieve
+the `CommandResult` by calling `getLastError()` or
+`getCachedLastError()` on the `WriteResult`. Then you can verify the
+result by calling `CommandResult.ok()`,
+`CommandResult.getErrorMessage()` and/or `CommandResult.getException()`.
+
+Note that the new object’s `_id` must be unique in the collection. If
+you don’t specify the value, MongoDB will automatically generate one for
+you. But if you do specify it and it is not unique, the insert operation
+will fail (and for Camel to notice, you will need to enable
+invokeGetLastError or set a WriteConcern that waits for the write
+result).
+
+This is not a limitation of the component, but it is how things work in
+MongoDB for higher throughput. If you are using a custom `_id`, you are
+expected to ensure at the application level that it is unique (and this
+is a good practice too).
+
+OID(s) of the inserted record(s) is stored in the message header under
+the `CamelMongoOid` key (`MongoDbConstants.OID` constant). The value
+stored is `org.bson.types.ObjectId` for a single insert or
+`java.util.List<org.bson.types.ObjectId>` if multiple records have been
+inserted.
+
+In MongoDB Java Driver 3.x, the insertOne and insertMany operations
+return void. The Camel insert operation returns the Document or List of
+Documents inserted. Note that each Document is updated with a new OID if
+needed.
+
+### save
+
+The save operation is equivalent to an *upsert* (UPdate, inSERT)
+operation, where the record will be updated, and if it doesn’t exist, it
+will be inserted, all in one atomic operation. MongoDB will perform the
+matching based on the `_id` field.
+
+Beware that in case of an update, the object is replaced entirely and
+the usage of [MongoDB’s
+$modifiers](http://www.mongodb.org/display/DOCS/Updating#Updating-ModifierOperations)
+is not permitted. Therefore, if you want to manipulate the object if it
+already exists, you have two options:
+
+1. 
perform a query to retrieve the entire object first along with all
+   its fields (which may not be efficient), alter it inside Camel and
+   then save it.
+
+2. use the update operation with
+   [$modifiers](http://www.mongodb.org/display/DOCS/Updating#Updating-ModifierOperations),
+   which will execute the update at the server-side instead. You can
+   enable the upsert flag, in which case if an insert is required,
+   MongoDB will apply the $modifiers to the filter query object and
+   insert the result.
+
+If the document to be saved does not contain the `_id` attribute, the
+operation will be an insert, and the new `_id` created will be placed in
+the `CamelMongoOid` header.
+
+For example:
+
+    from("direct:insert")
+        .to("mongodb:myDb?database=flights&collection=tickets&operation=save");
+
+    // route: from("direct:insert").to("mongodb:myDb?database=flights&collection=tickets&operation=save");
+    org.bson.Document docForSave = new org.bson.Document();
+    docForSave.put("key", "value");
+    Object result = template.requestBody("direct:insert", docForSave);
+
+### update
+
+Update one or multiple records in the collection. Requires a filter
+query and update rules.
+
+You can define the filter using the MongoDbConstants.CRITERIA header as
+`Bson` and define the update rules as `Bson` in the body.
+
+**Update after enrich**
+
+If you define the filter using the MongoDbConstants.CRITERIA header as
+`Bson` to query mongodb before the update, and you use the enrich
+pattern with an aggregation strategy, be aware that you need to remove
+the MongoDbConstants.CRITERIA header from the resulting exchange during
+aggregation. If you neither remove this header during aggregation nor
+redefine it before sending the exchange to the mongodb producer
+endpoint, you may end up running the update against an unintended
+filter.
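As a minimal sketch of the header-based style above (field names and values are illustrative), the body can carry just the update rules, for example as a JSON string convertible to `Bson`:

```java
public class UpdateBodyExample {
    public static void main(String[] args) {
        // First way: the filter travels in the CamelMongoDbCriteria
        // header, and the body carries only the update rules, here
        // assembled as a JSON string that camel-mongodb converts to Bson.
        String updateRules = "{ \"$set\": { \"scientist\": \"Darwin\" } }";
        System.out.println(updateRules);
    }
}
```

Keeping the update rules as `$modifiers` (rather than a full replacement document) is what makes the server-side partial update possible.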
+
+The second way requires a `List<Bson>` as the IN message body
+containing exactly 2 elements:
+
+- Element 1 (index 0) ⇒ filter query ⇒ determines what objects will be
+  affected, same as a typical query object
+
+- Element 2 (index 1) ⇒ update rules ⇒ how matched objects will be
+  updated. All [modifier
+  operations](http://www.mongodb.org/display/DOCS/Updating#Updating-ModifierOperations)
+  from MongoDB are supported.
+
+**Multiupdates**
+
+By default, MongoDB will only update 1 object even if multiple objects
+match the filter query. To instruct MongoDB to update **all** matching
+records, set the `CamelMongoDbMultiUpdate` IN message header to `true`.
+
+A header with key `CamelMongoDbRecordsAffected` will be returned
+(`MongoDbConstants.RECORDS_AFFECTED` constant) with the number of
+records updated (copied from `WriteResult.getN()`).
+
+For example, the following will update **all** records whose filterField
+field equals true by setting the value of the "scientist" field to
+"Darwin":
+
+    // route: from("direct:update").to("mongodb:myDb?database=science&collection=notableScientists&operation=update");
+    List<Bson> body = new ArrayList<>();
+    Bson filterField = Filters.eq("filterField", true);
+    body.add(filterField);
+    BsonDocument updateObj = new BsonDocument().append("$set", new BsonDocument("scientist", new BsonString("Darwin")));
+    body.add(updateObj);
+    Object result = template.requestBodyAndHeader("direct:update", body, MongoDbConstants.MULTIUPDATE, true);
+
+    // route: from("direct:update").to("mongodb:myDb?database=science&collection=notableScientists&operation=update");
+    Map<String, Object> headers = new HashMap<>(2);
+    headers.put(MongoDbConstants.MULTIUPDATE, true);
+    headers.put(MongoDbConstants.CRITERIA, Filters.eq("filterField", true));
+    Bson updateObj = Updates.set("scientist", "Darwin");
+    Object result = template.requestBodyAndHeaders("direct:update", updateObj, headers);
+
+    // route: from("direct:update").to("mongodb:myDb?database=science&collection=notableScientists&operation=update");
+    String updateObj = "[{\"filterField\": true}, {\"$set\": {\"scientist\": \"Darwin\"}}]";
+    Object result = template.requestBodyAndHeader("direct:update", updateObj, MongoDbConstants.MULTIUPDATE, true);
+
+## Delete operations
+
+### remove
+
+Remove matching records from the collection. The IN message body will
+act as the removal filter query, and is expected to be of type `Bson`
+or a type convertible to it.
+The following example will remove all objects whose field
+*conditionField* equals true, in the science database, notableScientists
+collection:
+
+    // route: from("direct:remove").to("mongodb:myDb?database=science&collection=notableScientists&operation=remove");
+    Bson conditionField = Filters.eq("conditionField", true);
+    Object result = template.requestBody("direct:remove", conditionField);
+
+A header with key `CamelMongoDbRecordsAffected` is returned
+(`MongoDbConstants.RECORDS_AFFECTED` constant) with type `int`,
+containing the number of records deleted (copied from
+`WriteResult.getN()`).
+
+## Bulk Write Operations
+
+### bulkWrite
+
+Performs write operations in bulk with controls for order of execution.
+Requires a `List<WriteModel<Document>>` as the IN message body
+containing commands for insert, update, and delete operations.
+
+The following example will insert a new scientist "Pierre Curie", update
+the record with id "5" by setting the value of the "scientist" field to
+"Marie Curie" and delete the record with id "3":
+
+    // route: from("direct:bulkWrite").to("mongodb:myDb?database=science&collection=notableScientists&operation=bulkWrite");
+    List<WriteModel<Document>> bulkOperations = Arrays.asList(
+        new InsertOneModel<>(new Document("scientist", "Pierre Curie")),
+        new UpdateOneModel<>(new Document("_id", "5"),
+            new Document("$set", new Document("scientist", "Marie Curie"))),
+        new DeleteOneModel<>(new Document("_id", "3")));
+
+    BulkWriteResult result = template.requestBody("direct:bulkWrite", bulkOperations, BulkWriteResult.class);
+
+By default, operations are executed in order and interrupted on the
+first write error without processing any remaining write operations in
+the list. To instruct MongoDB to continue to process remaining write
+operations in the list, set the `CamelMongoDbBulkOrdered` IN message
+header to `false`. Unordered operations are executed in parallel, and
+their order of execution is not guaranteed.
+
+## Other operations
+
+### aggregate
+
+Performs an aggregation with the given pipeline contained in the body.
+**Aggregations could be long and heavy operations. Use with care.**
+
+    // route: from("direct:aggregate").to("mongodb:myDb?database=science&collection=notableScientists&operation=aggregate");
+    List<Bson> aggregate = Arrays.asList(
+        match(or(eq("scientist", "Darwin"), eq("scientist", "Einstein"))),
+        group("$scientist", sum("count", 1)));
+    from("direct:aggregate")
+        .setBody().constant(aggregate)
+        .to("mongodb:myDb?database=science&collection=notableScientists&operation=aggregate")
+        .to("mock:resultAggregate");
+
+By default, a List of all results is returned. This can be heavy on
+memory depending on the size of the results. A safer alternative is to
+set outputType=MongoIterable. The next Processor will then see an
+iterable in the message body, allowing it to step through the results
+one by one. Thus, setting a batch size and returning an iterable allows
+for efficient retrieval and processing of the result.
+
+An example would look like:
+
+    List<Bson> aggregate = Arrays.asList(
+        match(or(eq("scientist", "Darwin"), eq("scientist", "Einstein"))),
+        group("$scientist", sum("count", 1)));
+    from("direct:aggregate")
+        .setHeader(MongoDbConstants.BATCH_SIZE).constant(10)
+        .setBody().constant(aggregate)
+        .to("mongodb:myDb?database=science&collection=notableScientists&operation=aggregate&outputType=MongoIterable")
+        .split(body())
+            .streaming()
+        .to("mock:resultAggregate");
+
+Note that calling `.split(body())` is enough to send the entries down
+the route one-by-one; however, it would still load all the entries into
+memory first. Calling `.streaming()` is thus required to load data into
+memory by batches.
+
+### getDbStats
+
+Equivalent of running the `db.stats()` command in the MongoDB shell,
+which displays useful statistic figures about the database.
+For example:
+
+    > db.stats();
+    {
+        "db" : "test",
+        "collections" : 7,
+        "objects" : 719,
+        "avgObjSize" : 59.73296244784423,
+        "dataSize" : 42948,
+        "storageSize" : 1000058880,
+        "numExtents" : 9,
+        "indexes" : 4,
+        "indexSize" : 32704,
+        "fileSize" : 1275068416,
+        "nsSizeMB" : 16,
+        "ok" : 1
+    }
+
+Usage example:
+
+    // from("direct:getDbStats").to("mongodb:myDb?database=flights&collection=tickets&operation=getDbStats");
+    Object result = template.requestBody("direct:getDbStats", "irrelevantBody");
+    assertTrue("Result is not of type Document", result instanceof Document);
+
+The operation will return a data structure similar to the one displayed
+in the shell, in the form of a `Document` in the OUT message body.
+
+### getColStats
+
+Equivalent of running the `db.collection.stats()` command in the MongoDB
+shell, which displays useful statistic figures about the collection.
+For example:
+
+    > db.camelTest.stats();
+    {
+        "ns" : "test.camelTest",
+        "count" : 100,
+        "size" : 5792,
+        "avgObjSize" : 57.92,
+        "storageSize" : 20480,
+        "numExtents" : 2,
+        "nindexes" : 1,
+        "lastExtentSize" : 16384,
+        "paddingFactor" : 1,
+        "flags" : 1,
+        "totalIndexSize" : 8176,
+        "indexSizes" : {
+            "_id_" : 8176
+        },
+        "ok" : 1
+    }
+
+Usage example:
+
+    // from("direct:getColStats").to("mongodb:myDb?database=flights&collection=tickets&operation=getColStats");
+    Object result = template.requestBody("direct:getColStats", "irrelevantBody");
+    assertTrue("Result is not of type Document", result instanceof Document);
+
+The operation will return a data structure similar to the one displayed
+in the shell, in the form of a `Document` in the OUT message body.
+
+### command
+
+Runs the body as a command on the database. Useful for admin operations,
+such as getting host information, or replication or sharding status.
+
+The collection parameter is not used for this operation.
+
+    // route: from("direct:command").to("mongodb:myDb?database=science&operation=command");
+    DBObject commandBody = new BasicDBObject("hostInfo", "1");
+    Object result = template.requestBody("direct:command", commandBody);
+
+## Dynamic operations
+
+An Exchange can override the endpoint’s fixed operation by setting the
+`CamelMongoDbOperation` header, defined by the
+`MongoDbConstants.OPERATION_HEADER` constant.
+The values supported are determined by the MongoDbOperation enumeration
+and match the accepted values for the `operation` parameter on the
+endpoint URI.
+
+For example:
+
+    // from("direct:insert").to("mongodb:myDb?database=flights&collection=tickets&operation=insert");
+    Object result = template.requestBodyAndHeader("direct:insert", "irrelevantBody", MongoDbConstants.OPERATION_HEADER, "count");
+    assertTrue("Result is not of type Long", result instanceof Long);
+
+# Consumers
+
+There are several types of consumers:
+
+1. Tailable Cursor Consumer
+
+2. 
Change Streams Consumer
+
+## Tailable Cursor Consumer
+
+MongoDB offers a mechanism to instantaneously consume ongoing data from
+a collection, by keeping the cursor open just like the `tail -f` command
+of \*nix systems. This mechanism is significantly more efficient than a
+scheduled poll because the server pushes new data to the client as it
+becomes available, rather than making the client ping back at scheduled
+intervals to fetch new data. It also reduces otherwise redundant network
+traffic.
+
+There is only one prerequisite for using tailable cursors: the
+collection must be a "capped collection", meaning that it will only hold
+N objects, and when the limit is reached, MongoDB flushes old objects in
+the same order they were originally inserted. For more information,
+please refer to:
+[http://www.mongodb.org/display/DOCS/Tailable+Cursors](http://www.mongodb.org/display/DOCS/Tailable+Cursors).
+
+The Camel MongoDB component implements a tailable cursor consumer,
+making this feature available for you to use in your Camel routes. As
+new objects are inserted, MongoDB will push them as `Document` in
+natural order to your tailable cursor consumer, which will transform
+them into Exchanges and trigger your route logic.
+
+# How the tailable cursor consumer works
+
+To turn a cursor into a tailable cursor, a few special flags have to be
+signalled to MongoDB when first generating the cursor. Once created, the
+cursor will then stay open and will block upon calling the
+`MongoCursor.next()` method until new data arrives. However, the MongoDB
+server reserves the right to kill your cursor if new data doesn’t appear
+after an indeterminate period. If you want to continue consuming new
+data, you have to regenerate the cursor. To do so, you will have to
+remember the position where you left off, or else you will start
+consuming from the top again.
+
+The Camel MongoDB tailable cursor consumer takes care of all these tasks
+for you.
You will just need to provide the key to some field in your +data of increasing nature, which will act as a marker to position your +cursor every time it is regenerated, e.g. a timestamp, a sequential ID, +etc. It can be of any datatype supported by MongoDB. Date, Strings and +Integers are found to work well. We call this mechanism "tail tracking" +in the context of this component. + +The consumer will remember the last value of this field and whenever the +cursor is to be regenerated, it will run the query with a filter like: +`increasingField > lastValue`, so that only unread data is consumed. + +**Setting the increasing field:** Set the key of the increasing field on +the endpoint URI `tailTrackingIncreasingField` option. In Camel 2.10, it +must be a top-level field in your data, as nested navigation for this +field is not yet supported. That is, the "timestamp" field is okay, but +"nested.timestamp" will not work. Please open a ticket in the Camel JIRA +if you do require support for nested increasing fields. + +**Cursor regeneration delay:** One thing to note is that if new data is +not already available upon initialisation, MongoDB will kill the cursor +instantly. Since we don’t want to overwhelm the server in this case, a +`cursorRegenerationDelay` option has been introduced (with a default +value of 1000ms.), which you can modify to suit your needs. + +An example: + + from("mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime") + .id("tailableCursorConsumer1") + .autoStartup(false) + .to("mock:test"); + +The above route will consume from the "flights.cancellations" capped +collection, using "departureTime" as the increasing field, with a +default regeneration cursor delay of 1000ms. + +# Persistent tail tracking + +Standard tail tracking is volatile and the last value is only kept in +memory. 
However, in practice you will need to restart your Camel +container every now and then, but your last value would then be lost and +your tailable cursor consumer would start consuming from the top again, +very likely sending duplicate records into your route. + +To overcome this situation, you can enable the **persistent tail +tracking** feature to keep track of the last consumed increasing value +in a special collection inside your MongoDB database too. When the +consumer initialises again, it will restore the last tracked value and +continue as if nothing happened. + +The last read value is persisted on two occasions: every time the cursor +is regenerated and when the consumer shuts down. We may consider +persisting at regular intervals too in the future (flush every 5 +seconds) for added robustness if the demand is there. To request this +feature, please open a ticket in the Camel JIRA. + +# Enabling persistent tail tracking + +To enable this function, set at least the following options on the +endpoint URI: + +- `persistentTailTracking` option to `true` + +- `persistentId` option to a unique identifier for this consumer, so + that the same collection can be reused across many consumers + +Additionally, you can set the `tailTrackDb`, `tailTrackCollection` and +`tailTrackField` options to customise where the runtime information will +be stored. Refer to the endpoint options table at the top of this page +for descriptions of each option. + +For example, the following route will consume from the +"flights.cancellations" capped collection, using "departureTime" as the +increasing field, with a default regeneration cursor delay of 1000ms, +with persistent tail tracking turned on, and persisting under the +"cancellationsTracker" id on the "flights.camelTailTracking", storing +the last processed value under the "lastTrackingValue" field +(`camelTailTracking` and `lastTrackingValue` are defaults). 
+
+    from("mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime&persistentTailTracking=true"
+            + "&persistentId=cancellationsTracker")
+        .id("tailableCursorConsumer2")
+        .autoStartup(false)
+        .to("mock:test");
+
+Below is another example identical to the one above, but where the
+persistent tail tracking runtime information will be stored under the
+"trackers.camelTrackers" collection, in the "lastProcessedDepartureTime"
+field:
+
+    from("mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime&persistentTailTracking=true"
+            + "&persistentId=cancellationsTracker&tailTrackDb=trackers&tailTrackCollection=camelTrackers"
+            + "&tailTrackField=lastProcessedDepartureTime")
+        .id("tailableCursorConsumer3")
+        .autoStartup(false)
+        .to("mock:test");
+
+## Change Streams Consumer
+
+Change Streams allow applications to access real-time data changes
+without the complexity and risk of tailing the MongoDB oplog.
+Applications can use change streams to subscribe to all data changes on
+a collection and immediately react to them. Because change streams use
+the aggregation framework, applications can also filter for specific
+changes or transform the notifications at will. The exchange body will
+contain the full document of any change.
+
+To configure the Change Streams Consumer, you need to specify
+`consumerType`, `database`, `collection` and the optional JSON property
+`streamFilter` to filter events. That JSON property is a standard
+MongoDB `$match` aggregation. It can easily be specified using the XML
+DSL configuration:
+
+    <route>
+        <from uri="mongodb:mongoClient?consumerType=changeStreams&amp;database=flights&amp;collection=tickets&amp;streamFilter={ '$match':{'$or':[{'fullDocument.stringValue': 'specificValue'}]} }" />
+        <to uri="mock:test" />
+    </route>
+
+Java configuration:
+
+    from("mongodb:myDb?consumerType=changeStreams&database=flights&collection=tickets&streamFilter={ '$match':{'$or':[{'fullDocument.stringValue': 'specificValue'}]} }")
+        .to("mock:test");
+
+You can externalize the streamFilter value into a property placeholder,
+which allows the endpoint URI parameters to be *cleaner* and easier to
+read.
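For instance, a sketch of externalizing the filter (the property name `myStreamFilter` and the file layout are assumptions, not from the component docs):

```properties
# application.properties
myStreamFilter={ '$match':{'$or':[{'fullDocument.stringValue': 'specificValue'}]} }
```

The endpoint URI would then reference it with Camel's property placeholder syntax, e.g. `...&streamFilter={{myStreamFilter}}`, keeping the route definition readable.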

# Type conversions

The `MongoDbBasicConverters` type converter included with the
camel-mongodb component provides the following conversions:
|Name|From type|To type|How?|
|---|---|---|---|
|fromMapToDocument|Map|Document|Constructs a new Document via the new Document(Map m) constructor.|
|fromDocumentToMap|Document|Map|Document already implements Map.|
|fromStringToDocument|String|Document|Uses com.mongodb.Document.parse(String s).|
|fromStringToObjectId|String|ObjectId|Constructs a new ObjectId via the new ObjectId(s) constructor.|
|fromFileToDocument|File|Document|Uses fromInputStreamToDocument under the hood.|
|fromInputStreamToDocument|InputStream|Document|Converts the InputStream bytes to a Document.|
|fromStringToList|String|List<Bson>|Uses org.bson.codecs.configuration.CodecRegistries to convert to BsonArray, then to List<Bson>.|
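For instance, a route can rely on the String-to-Document conversion implicitly, or request it explicitly with `convertBodyTo` (a sketch; the endpoint and database/collection names are illustrative):

```java
// Sketch: convert an incoming JSON string body to org.bson.Document
// (via the fromStringToDocument converter) before an insert operation.
// The "direct:insertTicket" endpoint and database/collection names are
// illustrative, not from this page.
from("direct:insertTicket")
    .convertBodyTo(org.bson.Document.class)
    .to("mongodb:myDb?database=flights&collection=tickets&operation=insert");
```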

+ +This type converter is auto-discovered, so you don’t need to configure +anything manually. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|mongoConnection|Shared client used for connection. All endpoints generated from the component will share this connection client.||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|

## Endpoint Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|connectionBean|Sets the connection bean reference used to lookup a client for connecting to a database if no hosts parameter is present.||string|
|collection|Sets the name of the MongoDB collection to bind to this endpoint.||string|
|collectionIndex|Sets the collection index (JSON format: { field1 : order1, field2 : order2 }).||string|
|connectionUriString|Sets the whole connection string/URI for the MongoDB endpoint.||string|
|createCollection|Create the collection during initialisation if it doesn't exist. Default is true.|true|boolean|
|database|Sets the name of the MongoDB database to target.||string|
|hosts|Host address of the MongoDB server in host:port format. It's also possible to use more than one address, as a comma-separated list of hosts: host1:port1,host2:port2. If the hosts parameter is specified, the provided connectionBean is ignored.||string|
|mongoConnection|Sets the connection bean used as a client for connecting to a database.||object|
|operation|Sets the operation this endpoint will execute against MongoDB.||object|
|outputType|Converts the output of the producer to the selected type: DocumentList, Document or MongoIterable. DocumentList or MongoIterable applies to findAll and aggregate. Document applies to all other operations.||object|
|consumerType|Consumer type.||string|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown.
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|appName|Sets the logical name of the application. The application name may be used by the client to identify the application to the server, for use in server logs, slow query logs, and profile collection. Default: null||string| +|compressors|Specifies one or more compression algorithms that the driver will attempt to use to compress requests sent to the connected MongoDB instance. Possible values include: zlib, snappy, and zstd. Default: null||string| +|connectTimeoutMS|Specifies the maximum amount of time, in milliseconds, the Java driver waits for a connection to open before timing out. 
A value of 0 instructs the driver to never time out while waiting for a connection to open. Default: 10000 (10 seconds)|10000|integer| +|cursorRegenerationDelay|MongoDB tailable cursors will block until new data arrives. If no new data is inserted, after some time the cursor will be automatically freed and closed by the MongoDB server. The client is expected to regenerate the cursor if needed. This value specifies the time to wait before attempting to fetch a new cursor, and if the attempt fails, how long before the next attempt is made. Default value is 1000ms.|1000|duration| +|directConnection|Specifies that the driver must connect to the host directly. Default: false|false|boolean| +|dynamicity|Sets whether this endpoint will attempt to dynamically resolve the target database and collection from the incoming Exchange properties. Can be used to override at runtime the database and collection specified on the otherwise static endpoint URI. It is disabled by default to boost performance. Enabling it will take a minimal performance hit.|false|boolean| +|heartbeatFrequencyMS|heartbeatFrequencyMS controls when the driver checks the state of the MongoDB deployment. Specify the interval (in milliseconds) between checks, counted from the end of the previous check until the beginning of the next one. Default: Single-threaded drivers: 60 seconds. Multi-threaded drivers: 10 seconds.||integer| +|loadBalanced|If true the driver will assume that it's connecting to MongoDB through a load balancer.|false|boolean| +|localThresholdMS|The size (in milliseconds) of the latency window for selecting among multiple suitable MongoDB instances. Default: 15 milliseconds.|15|integer| +|maxConnecting|Specifies the maximum number of connections a pool may be establishing concurrently. Default: 2|2|integer| +|maxIdleTimeMS|Specifies the maximum amount of time, in milliseconds, the Java driver will allow a pooled connection to idle before closing the connection. 
A value of 0 indicates that there is no upper bound on how long the driver can allow a pooled collection to be idle. Default: 0|0|integer| +|maxLifeTimeMS|Specifies the maximum amount of time, in milliseconds, the Java driver will continue to use a pooled connection before closing the connection. A value of 0 indicates that there is no upper bound on how long the driver can keep a pooled connection open. Default: 0|0|integer| +|maxPoolSize|The maximum number of connections in the connection pool. The default value is 100.|100|integer| +|maxStalenessSeconds|Specifies, in seconds, how stale a secondary can be before the driver stops communicating with that secondary. The minimum value is either 90 seconds or the heartbeat frequency plus 10 seconds, whichever is greater. For more information, see the server documentation for the maxStalenessSeconds option. Not providing a parameter or explicitly specifying -1 indicates that there should be no staleness check for secondaries. Default: -1|-1|integer| +|minPoolSize|Specifies the minimum number of connections that must exist at any moment in a single connection pool. Default: 0|0|integer| +|readPreference|Configure how MongoDB clients route read operations to the members of a replica set. Possible values are PRIMARY, PRIMARY\_PREFERRED, SECONDARY, SECONDARY\_PREFERRED or NEAREST|PRIMARY|string| +|readPreferenceTags|A representation of a tag set as a comma-separated list of colon-separated key-value pairs, e.g. dc:ny,rack:1. Spaces are stripped from beginning and end of all keys and values. To specify a list of tag sets, using multiple readPreferenceTags, e.g. readPreferenceTags=dc:ny,rack:1;readPreferenceTags=dc:ny;readPreferenceTags= Note the empty value for the last one, which means match any secondary as a last resort. Order matters when using multiple readPreferenceTags.||string| +|replicaSet|Specifies that the connection string provided includes multiple hosts. 
When specified, the driver attempts to find all members of that set.||string|
|retryReads|Specifies that the driver must retry supported read operations if they fail due to a network error. Default: true|true|boolean|
|retryWrites|Specifies that the driver must retry supported write operations if they fail due to a network error. Default: true|true|boolean|
|serverSelectionTimeoutMS|Specifies how long (in milliseconds) to block for server selection before throwing an exception. Default: 30,000 milliseconds.|30000|integer|
|socketTimeoutMS|Specifies the maximum amount of time, in milliseconds, the Java driver will wait to send or receive a request before timing out. A value of 0 instructs the driver to never time out while waiting to send or receive a request. Default: 0|0|integer|
|srvMaxHosts|The maximum number of hosts from the SRV record to connect to.||integer|
|srvServiceName|Specifies the service name of the SRV resource records the driver retrieves to construct your seed list. You must use the DNS Seed List Connection Format in your connection URI to use this option. Default: mongodb|mongodb|string|
|tls|Specifies that all communication with MongoDB instances should use TLS. Supersedes the ssl option. Default: false|false|boolean|
|tlsAllowInvalidHostnames|Specifies that the driver should allow invalid hostnames in the certificate for TLS connections. Supersedes sslInvalidHostNameAllowed. Has the same effect as tlsInsecure by setting tlsAllowInvalidHostnames to true. Default: false|false|boolean|
|waitQueueTimeoutMS|Specifies the maximum amount of time, in milliseconds, that a thread may wait for a connection to become available. Default: 120000 (120 seconds)|120000|integer|
|writeConcern|Configure the connection bean with the level of acknowledgment requested from MongoDB for write operations to a standalone mongod, replica set or cluster.
Possible values are ACKNOWLEDGED, W1, W2, W3, UNACKNOWLEDGED, JOURNALED or MAJORITY.|ACKNOWLEDGED|string| +|writeResultAsHeader|In write operations, it determines whether instead of returning WriteResult as the body of the OUT message, we transfer the IN message to the OUT and attach the WriteResult as a header.|false|boolean| +|zlibCompressionLevel|Specifies the degree of compression that Zlib should use to decrease the size of requests to the connected MongoDB instance. The level can range from -1 to 9, with lower values compressing faster (but resulting in larger requests) and larger values compressing slower (but resulting in smaller requests). Default: null||integer| +|fullDocument|Specifies whether changeStream consumer include a copy of the full document when modified by update operations. Possible values are default, updateLookup, required and whenAvailable.|default|object| +|streamFilter|Filter condition for change streams consumer.||string| +|authSource|The database name associated with the user's credentials.||string| +|password|User password for mongodb connection||string| +|username|Username for mongodb connection||string| +|persistentId|One tail tracking collection can host many trackers for several tailable consumers. To keep them separate, each tracker should have its own unique persistentId.||string| +|persistentTailTracking|Enable persistent tail tracking, which is a mechanism to keep track of the last consumed message across system restarts. The next time the system is up, the endpoint will recover the cursor from the point where it last stopped slurping records.|false|boolean| +|tailTrackCollection|Collection where tail tracking information will be persisted. If not specified, MongoDbTailTrackingConfig#DEFAULT\_COLLECTION will be used by default.||string| +|tailTrackDb|Indicates what database the tail tracking mechanism will persist to. If not specified, the current database will be picked by default. 
Dynamicity will not be taken into account even if enabled, i.e. the tail tracking database will not vary past endpoint initialisation.||string| +|tailTrackField|Field where the last tracked value will be placed. If not specified, MongoDbTailTrackingConfig#DEFAULT\_FIELD will be used by default.||string| +|tailTrackIncreasingField|Correlation field in the incoming record which is of increasing nature and will be used to position the tailing cursor every time it is generated. The cursor will be (re)created with a query of type: tailTrackIncreasingField greater than lastValue (possibly recovered from persistent tail tracking). Can be of type Integer, Date, String, etc. NOTE: No support for dot notation at the current time, so the field should be at the top level of the document.||string| diff --git a/camel-mustache.md b/camel-mustache.md new file mode 100644 index 0000000000000000000000000000000000000000..0c2c4c180c0e1552e36421a24162c4fa11194479 --- /dev/null +++ b/camel-mustache.md @@ -0,0 +1,150 @@ +# Mustache + +**Since Camel 2.12** + +**Only producer is supported** + +The Mustache component allows for processing a message using a +[Mustache](http://mustache.github.io/) template. This can be ideal when +using Templating to generate responses for requests. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-mustache + x.x.x + + +# URI format + + mustache:templateName[?options] + +Where **templateName** is the classpath-local URI of the template to +invoke; or the complete URL of the remote template (e.g.: +`\file://folder/myfile.mustache`). + +# Mustache Context + +Camel will provide exchange information in the Mustache context (just a +`Map`). The `Exchange` is transferred as: + + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|key|value|
|---|---|
|exchange|The Exchange itself.|
|exchange.properties|The Exchange properties.|
|variables|The variables.|
|headers|The headers of the In message.|
|camelContext|The Camel Context.|
|request|The In message.|
|body|The In message body.|
|response|The Out message (only for InOut message exchange pattern).|
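To illustrate how these context entries surface in a template, here is a simplified, self-contained sketch. It is not the real Mustache engine or the Camel API, just plain string substitution over a flattened context map with keys such as `headers.lastName`:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MustacheContextSketch {

    // Naive stand-in for Mustache rendering: replace each {{key}} with its
    // value from a flattened context map (e.g. "headers.lastName", "body").
    static String render(String template, Map<String, String> context) {
        String out = template;
        for (Map.Entry<String, String> e : context.entrySet()) {
            out = out.replace("{{" + e.getKey() + "}}", e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> context = new LinkedHashMap<>();
        context.put("headers.firstName", "Camel");
        context.put("headers.lastName", "Rider");
        context.put("body", "Order #42");
        // prints: Dear Rider, Camel: Order #42
        System.out.println(
            render("Dear {{headers.lastName}}, {{headers.firstName}}: {{body}}", context));
    }
}
```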

+ +# Dynamic templates + +Camel provides two headers by which you can define a different resource +location for a template or the template content itself. If any of these +headers is set, then Camel uses this over the endpoint configured +resource. This allows you to provide a dynamic template at runtime. + +# Samples + +For example, you could use something like: + + from("activemq:My.Queue"). + to("mustache:com/acme/MyResponse.mustache"); + +To use a Mustache template to formulate a response for a message for +InOut message exchanges (where there is a `JMSReplyTo` header). + +If you want to use InOnly and consume the message and send it to another +destination, you could use: + + from("activemq:My.Queue"). + to("mustache:com/acme/MyResponse.mustache"). + to("activemq:Another.Queue"); + +It’s possible to specify what template the component should use +dynamically via a header, so for example: + + from("direct:in"). + setHeader(MustacheConstants.MUSTACHE_RESOURCE_URI).constant("path/to/my/template.mustache"). + to("mustache:dummy?allowTemplateFromHeader=true"); + +# The Email Sample + +In this sample, we want to use Mustache templating for an order +confirmation email. The email template is laid out in Mustache as: + + Dear {{headers.lastName}}, {{headers.firstName}} + + Thanks for the order of {{headers.item}}. + + Regards Camel Riders Bookstore + {{body}} + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|allowContextMapAll|Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so impose a potential security risk as this opens access to the full power of CamelContext API.|false|boolean| +|allowTemplateFromHeader|Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. 
However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|mustacheFactory|To use a custom MustacheFactory||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|resourceUri|Path to the resource. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod.||string| +|allowContextMapAll|Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. 
Doing so impose a potential security risk as this opens access to the full power of CamelContext API.|false|boolean| +|allowTemplateFromHeader|Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care.|false|boolean| +|contentCache|Sets whether to use resource content cache or not|false|boolean| +|encoding|Character encoding of the resource content.||string| +|endDelimiter|Characters used to mark template code end.|}}|string| +|startDelimiter|Characters used to mark template code beginning.|{{|string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-mvel.md b/camel-mvel.md new file mode 100644 index 0000000000000000000000000000000000000000..260b65edeed342535dbf04a1b9cb989d26a2b31a --- /dev/null +++ b/camel-mvel.md @@ -0,0 +1,153 @@ +# Mvel + +**Since Camel 2.12** + +**Only producer is supported** + +The MVEL component allows you to process a message using an +[MVEL](http://mvel.documentnode.com/) template. This can be ideal when +using templating to generate responses for requests. 
+ +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-mvel + x.x.x + + + +# URI format + + mvel:templateName[?options] + +Where **templateName** is the classpath-local URI of the template to +invoke; or the complete URL of the remote template (e.g.: +`\file://folder/myfile.mvel`). + +# MVEL Context + +Camel will provide exchange information in the MVEL context (just a +`Map`). The `Exchange` is transferred as: + + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|key|value|
|---|---|
|exchange|The Exchange itself.|
|exchange.properties|The Exchange properties.|
|variables|The variables.|
|headers|The headers of the message.|
|camelContext|The CamelContext.|
|request|The message.|
|in|The message.|
|body|The message body.|
|out|The Out message (only for InOut message exchange pattern).|
|response|The Out message (only for InOut message exchange pattern).|

# Hot reloading

The mvel template resource is, by default, hot reloadable for both file
and classpath resources (expanded jar). If you set `contentCache=true`,
Camel will only load the resource once, and hot reloading is therefore
not possible. This scenario can be used in production when the resource
never changes.

# Dynamic templates

Camel provides two headers by which you can define a different resource
location for a template, or the template content itself. If either of
these headers is set, then Camel uses it over the endpoint's configured
resource. This allows you to provide a dynamic template at runtime.

# Example

For example, to use an MVEL template to formulate a response to a
message for InOut message exchanges (where there is a `JMSReplyTo`
header), you could use:

    from("activemq:My.Queue").
    to("mvel:com/acme/MyResponse.mvel");

To specify what template the component should use dynamically via a
header, for example:

    from("direct:in").
    setHeader("CamelMvelResourceUri").constant("path/to/my/template.mvel").
    to("mvel:dummy?allowTemplateFromHeader=true");

To specify the template content directly via a header, for example:

    from("direct:in").
    setHeader("CamelMvelTemplate").constant("@{\"The result is \" + request.body * 3}").
    to("mvel:dummy?allowTemplateFromHeader=true");

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|allowContextMapAll|Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so imposes a potential security risk as this opens access to the full power of the CamelContext API.|false|boolean|
|allowTemplateFromHeader|Whether to allow to use resource template from header or not (default false).
Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|resourceUri|Path to the resource. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod.||string| +|allowContextMapAll|Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. 
Doing so impose a potential security risk as this opens access to the full power of CamelContext API.|false|boolean| +|allowTemplateFromHeader|Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care.|false|boolean| +|contentCache|Sets whether to use resource content cache or not|false|boolean| +|encoding|Character encoding of the resource content.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-mybatis-bean.md b/camel-mybatis-bean.md new file mode 100644 index 0000000000000000000000000000000000000000..d7428642e51d14f7b4872e2c109e2f464a0a5e9b --- /dev/null +++ b/camel-mybatis-bean.md @@ -0,0 +1,99 @@ +# Mybatis-bean + +**Since Camel 2.22** + +**Only producer is supported** + +The MyBatis Bean component allows you to query, insert, update and +delete data in a relational database using +[MyBatis](http://mybatis.org/) bean annotations. + +This component can **only** be used as a producer. If you want to +consume from MyBatis, then use the regular **mybatis** component. 
+ +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-mybatis + x.x.x + + + +This component will by default load the MyBatis SqlMapConfig file from +the root of the classpath with the expected name of +`SqlMapConfig.xml`. +If the file is located in another location, you will need to configure +the `configurationUri` option on the `MyBatisComponent` component. + +# Message Body + +The response from MyBatis will only be set as the body if it’s a +`SELECT` statement. That means, for example, for `INSERT` statements +Camel will not replace the body. This allows you to continue routing and +keep the original body. The response from MyBatis is always stored in +the header with the key `CamelMyBatisResult`. + +# Samples + +For example, if you wish to consume beans from a JMS queue and insert +them into a database, you could do the following: + + from("activemq:queue:newAccount") + .to("mybatis-bean:AccountService:insertBeanAccount"); + +Notice we have to specify the bean name and method name, as we need to +instruct Camel which kind of operation to invoke. + +Where `AccountService` is the type alias for the bean that has the +MyBatis bean annotations. 
You can configure the type alias via the `typeAliases` section of the
SqlMapConfig file (see the MyBatis documentation for details).

On the `AccountService` bean you can declare the MyBatis mappings using
annotations, as shown:

    public interface AccountService {

        @Select("select ACC_ID as id, ACC_FIRST_NAME as firstName, ACC_LAST_NAME as lastName"
            + ", ACC_EMAIL as emailAddress from ACCOUNT where ACC_ID = #{id}")
        Account selectBeanAccountById(@Param("id") int no);

        @Select("select * from ACCOUNT order by ACC_ID")
        @ResultMap("Account.AccountResult")
        List<Account> selectBeanAllAccounts();

        @Insert("insert into ACCOUNT (ACC_ID,ACC_FIRST_NAME,ACC_LAST_NAME,ACC_EMAIL)"
            + " values (#{id}, #{firstName}, #{lastName}, #{emailAddress})")
        void insertBeanAccount(Account account);

    }

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|configurationUri|Location of the MyBatis XML configuration file. The default value is SqlMapConfig.xml, loaded from the classpath.|SqlMapConfig.xml|string|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component.
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|sqlSessionFactory|To use the SqlSessionFactory||object| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|beanName|Name of the bean with the MyBatis annotations. This can either by a type alias or a FQN class name.||string| +|methodName|Name of the method on the bean that has the SQL query to be executed.||string| +|executorType|The executor type to be used while executing statements. simple - executor does nothing special. reuse - executor reuses prepared statements. batch - executor reuses statements and batches updates.|SIMPLE|object| +|inputHeader|User the header value for input parameters instead of the message body. By default, inputHeader == null and the input parameters are taken from the message body. If outputHeader is set, the value is used and query parameters will be taken from the header instead of the body.||string| +|outputHeader|Store the query result in a header instead of the message body. By default, outputHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If outputHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. 
Setting outputHeader will also omit populating the default CamelMyBatisResult header since it would be the same as outputHeader all the time.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-mybatis.md b/camel-mybatis.md new file mode 100644 index 0000000000000000000000000000000000000000..85e0a3d235b19906fa8bea70346f2fcdecb9638c --- /dev/null +++ b/camel-mybatis.md @@ -0,0 +1,365 @@ +# Mybatis + +**Since Camel 2.7** + +**Both producer and consumer are supported** + +The MyBatis component allows you to query, poll, insert, update and +delete data in a relational database using +[MyBatis](http://mybatis.org/). + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-mybatis + x.x.x + + + +# URI format + + mybatis:statementName[?options] + +Where **statementName** is the statement name in the MyBatis XML mapping +file which maps to the query, insert, update or delete operation you +wish to evaluate. + +You can append query options to the URI in the following format, +`?option=value&option=value&...` + +This component will by default load the MyBatis SqlMapConfig file from +the root of the classpath with the expected name of +`SqlMapConfig.xml`. +If the file is located in another location, you will need to configure +the `configurationUri` option on the `MyBatisComponent` component. 
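+
+For example, with Camel Main or Spring Boot this component option can be
+set in `application.properties`. This is only a sketch; the file
+location below is illustrative:
+
+    camel.component.mybatis.configuration-uri = file:etc/MyBatisConfig.xml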
+
+# Message Body
+
+The response from MyBatis will only be set as the body if it’s a
+`SELECT` statement. That means, for example, for `INSERT` statements
+Camel will not replace the body. This allows you to continue routing and
+keep the original body. The response from MyBatis is always stored in
+the header with the key `CamelMyBatisResult`.
+
+# Samples
+
+For example, if you wish to consume beans from a JMS queue and insert
+them into a database, you could do the following:
+
+    from("activemq:queue:newAccount")
+        .to("mybatis:insertAccount?statementType=Insert");
+
+Notice we have to specify the `statementType`, as we need to instruct
+Camel which kind of operation to invoke.
+
+Where **insertAccount** is the MyBatis ID in the SQL mapping file:
+
+    <insert id="insertAccount" parameterType="Account">
+        insert into ACCOUNT (
+            ACC_ID,
+            ACC_FIRST_NAME,
+            ACC_LAST_NAME,
+            ACC_EMAIL
+        )
+        values (
+            #{id}, #{firstName}, #{lastName}, #{emailAddress}
+        )
+    </insert>
+
+# Using StatementType for better control of MyBatis
+
+When routing to a MyBatis endpoint, you will want more fine-grained
+control so you can control whether the SQL statement to be executed is a
+`SELECT`, `UPDATE`, `DELETE` or `INSERT` etc. So for instance if we want
+to route to a MyBatis endpoint in which the IN body contains parameters
+to a `SELECT` statement we can do:
+
+    from("direct:start")
+        .to("mybatis:selectAccountById?statementType=SelectOne")
+        .to("mock:result");
+
+In the code above we can invoke the MyBatis statement
+`selectAccountById` and the IN body should contain the account id we
+want to retrieve, such as an `Integer` type.
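+
+For illustration, the corresponding `selectAccountById` mapping could
+look like the sketch below; the `parameterType` and `resultType` values
+are assumptions and depend on your own configuration:
+
+    <select id="selectAccountById" parameterType="int" resultType="Account">
+        select ACC_ID as id, ACC_FIRST_NAME as firstName,
+               ACC_LAST_NAME as lastName, ACC_EMAIL as emailAddress
+        from ACCOUNT where ACC_ID = #{id}
+    </select>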
+
+We can do the same for some of the other operations, such as
+`SelectList`:
+
+    from("direct:start")
+        .to("mybatis:selectAllAccounts?statementType=SelectList")
+        .to("mock:result");
+
+And the same for `UPDATE`, where we can send an `Account` object as the
+IN body to MyBatis:
+
+    from("direct:start")
+        .to("mybatis:updateAccount?statementType=Update")
+        .to("mock:result");
+
+## Using InsertList StatementType
+
+MyBatis allows you to insert multiple rows using its for-each batch
+driver. To use this, you need to use the `<foreach>` element in the
+mapper XML file. For example, as shown below:
+
+    <insert id="batchInsertAccount" parameterType="java.util.List">
+        insert into ACCOUNT (
+            ACC_ID,
+            ACC_FIRST_NAME,
+            ACC_LAST_NAME,
+            ACC_EMAIL
+        )
+        values (
+        <foreach item="Account" collection="list" separator="),(">
+            #{Account.id}, #{Account.firstName}, #{Account.lastName}, #{Account.emailAddress}
+        </foreach>
+        )
+    </insert>
+
+Then you can insert multiple rows, by sending a Camel message to the
+`mybatis` endpoint which uses the `InsertList` statement type, as shown
+below:
+
+    from("direct:start")
+        .to("mybatis:batchInsertAccount?statementType=InsertList")
+        .to("mock:result");
+
+## Using UpdateList StatementType
+
+MyBatis allows you to update multiple rows using its for-each batch
+driver. To use this, you need to use the `<foreach>` element in the
+mapper XML file. For example, as shown below:
+
+    <update id="batchUpdateAccount" parameterType="java.util.Map">
+        update ACCOUNT set
+            ACC_EMAIL = #{emailAddress}
+        where
+            ACC_ID in
+        <foreach item="Account" collection="list" open="(" close=")" separator=",">
+            #{Account.id}
+        </foreach>
+    </update>
+
+Then you can update multiple rows, by sending a Camel message to the
+mybatis endpoint which uses the UpdateList statement type, as shown
+below:
+
+    from("direct:start")
+        .to("mybatis:batchUpdateAccount?statementType=UpdateList")
+        .to("mock:result");
+
+## Using DeleteList StatementType
+
+MyBatis allows you to delete multiple rows using its for-each batch
+driver. To use this, you need to use the `<foreach>` element in the
+mapper XML file. For example, as shown below:
+
+    <delete id="batchDeleteAccount" parameterType="java.util.List">
+        delete from ACCOUNT
+        where
+            ACC_ID in
+        <foreach item="AccountID" collection="list" open="(" close=")" separator=",">
+            #{AccountID}
+        </foreach>
+    </delete>
+
+Then you can delete multiple rows, by sending a Camel message to the
+mybatis endpoint which uses the DeleteList statement type, as shown
+below:
+
+    from("direct:start")
+        .to("mybatis:batchDeleteAccount?statementType=DeleteList")
+        .to("mock:result");
+
+## Notice on InsertList, UpdateList and DeleteList StatementTypes
+
+Parameter of any type (List, Map, etc.) can be passed to mybatis, and an
+end user is responsible for handling it as required with the help of
+[mybatis dynamic
+queries](http://www.mybatis.org/mybatis-3/dynamic-sql.html)
+capabilities.
+
+## Scheduled polling example
+
+This component supports scheduled polling and can therefore be used as a
+Polling Consumer. For example, to poll the database every minute:
+
+    from("mybatis:selectAllAccounts?delay=60000")
+        .to("activemq:queue:allAccounts");
+
+Alternatively you can use another mechanism for triggering the scheduled
+polls, such as the [Timer](#timer-component.adoc) or
+[Quartz](#quartz-component.adoc) components. In the sample below we poll
+the database every 30 seconds using the [Timer](#timer-component.adoc)
+component and send the data to the JMS queue:
+
+    from("timer://pollTheDatabase?delay=30000")
+        .to("mybatis:selectAllAccounts")
+        .to("activemq:queue:allAccounts");
+
+And the MyBatis SQL mapping file used:
+
+    <select id="selectAllAccounts" resultType="Account">
+        select * from ACCOUNT
+    </select>
+
+## Using onConsume
+
+This component supports executing statements **after** data has been
+consumed and processed by Camel. This allows you to do post updates in
+the database. Notice all statements must be `UPDATE` statements. Camel
+supports executing multiple statements whose names should be separated
+by commas.
+
+The route below illustrates that we execute the **consumeAccount**
+statement after the data is processed. This allows us to change the
+status of the row in the database to processed, so we avoid consuming it
+twice or more.
+
+    from("mybatis:selectUnprocessedAccounts?onConsume=consumeAccount")
+        .to("mock:results");
+
+And the statements in the sqlmap file:
+
+    <update id="consumeAccount" parameterType="Account">
+        update ACCOUNT set PROCESSED = true where ACC_ID = #{id}
+    </update>
+
+## Participating in transactions
+
+Setting up a transaction manager under camel-mybatis can be a little bit
+fiddly, as it involves externalising the database configuration outside
+the standard MyBatis `SqlMapConfig.xml` file.
+
+The first part requires the setup of a `DataSource`. This is typically a
+pool (either DBCP, or c3p0), which needs to be wrapped in a Spring
+proxy. This proxy enables non-Spring use of the `DataSource` to
+participate in Spring transactions (the MyBatis `SqlSessionFactory` does
+just this):
+
+    <bean id="dataSource" class="org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy">
+        <constructor-arg>
+            <bean class="org.apache.commons.dbcp.BasicDataSource">
+                <property name="driverClassName" value="${jdbc.driverClassName}"/>
+                <property name="url" value="${jdbc.url}"/>
+                <property name="username" value="${jdbc.username}"/>
+                <property name="password" value="${jdbc.password}"/>
+            </bean>
+        </constructor-arg>
+    </bean>
+
+This has the additional benefit of enabling the database configuration
+to be externalised using property placeholders.
+
+A transaction manager is then configured to manage the outermost
+`DataSource`:
+
+    <bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
+        <property name="dataSource" ref="dataSource"/>
+    </bean>
+
+A [mybatis-spring](http://www.mybatis.org/spring/index.html)
+[`SqlSessionFactoryBean`](http://www.mybatis.org/spring/factorybean.html)
+then wraps that same `DataSource`:
+
+    <bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
+        <property name="dataSource" ref="dataSource"/>
+    </bean>
+
+The camel-mybatis component is then configured with that factory:
+
+    <bean id="mybatis" class="org.apache.camel.component.mybatis.MyBatisComponent">
+        <property name="sqlSessionFactory" ref="sqlSessionFactory"/>
+    </bean>
+
+Finally, a transaction policy is defined over the top of the transaction
+manager, which can then be used as usual:
+
+    <bean id="PROPAGATION_REQUIRED" class="org.apache.camel.spring.spi.SpringTransactionPolicy">
+        <property name="transactionManager" ref="txManager"/>
+        <property name="propagationBehaviorName" value="PROPAGATION_REQUIRED"/>
+    </bean>
+
+    <route>
+        <from uri="direct:start"/>
+        <transacted ref="PROPAGATION_REQUIRED"/>
+        <to uri="mybatis:insertAccount?statementType=Insert"/>
+    </route>
+
+# MyBatis Spring Boot Starter integration
+
+Spring Boot users can use the
+[mybatis-spring-boot-starter](https://mybatis.org/spring-boot-starter/mybatis-spring-boot-autoconfigure/)
+artifact provided by the MyBatis team:
+
+    <dependency>
+        <groupId>org.mybatis.spring.boot</groupId>
+        <artifactId>mybatis-spring-boot-starter</artifactId>
+        <version>3.0.3</version>
+    </dependency>
+
+In particular, autoconfigured beans from mybatis-spring-boot-starter can
+be used as follows:
+
+    #application.properties
+    camel.component.mybatis.sql-session-factory = #sqlSessionFactory
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|configurationUri|Location of MyBatis xml
configuration file. The default value is: SqlMapConfig.xml loaded from the classpath|SqlMapConfig.xml|string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|sqlSessionFactory|To use the SqlSessionFactory||object| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|statement|The statement name in the MyBatis XML mapping file which maps to the query, insert, update or delete operation you wish to evaluate.||string| +|maxMessagesPerPoll|This option is intended to split results returned by the database pool into the batches and deliver them in multiple exchanges. This integer defines the maximum messages to deliver in single exchange. By default, no maximum is set. Can be used to set a limit of e.g. 1000 to avoid when starting up the server that there are thousands of files. Set a value of 0 or negative to disable it.|0|integer| +|onConsume|Statement to run after data has been processed in the route||string| +|routeEmptyResultSet|Whether allow empty resultset to be routed to the next hop|false|boolean| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|transacted|Enables or disables transaction. 
If enabled then if processing an exchange failed then the consumer breaks out processing any further exchanges to cause a rollback eager.|false|boolean| +|useIterator|Process resultset individually or as a list|true|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|processingStrategy|To use a custom MyBatisProcessingStrategy||object| +|executorType|The executor type to be used while executing statements. simple - executor does nothing special. reuse - executor reuses prepared statements. 
batch - executor reuses statements and batches updates.|SIMPLE|object|
+|inputHeader|Use the header value for input parameters instead of the message body. By default, inputHeader == null and the input parameters are taken from the message body. If inputHeader is set, the value is used and query parameters will be taken from the header instead of the body.||string|
+|outputHeader|Store the query result in a header instead of the message body. By default, outputHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If outputHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. Setting outputHeader will also omit populating the default CamelMyBatisResult header since it would be the same as outputHeader all the time.||string|
+|statementType|Mandatory to specify for the producer to control which kind of operation to invoke.||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|backoffErrorThreshold|The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick-in.||integer|
+|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier should kick-in.||integer|
+|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer|
+|delay|Milliseconds before the next poll.|500|integer|
+|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean|
+|initialDelay|Milliseconds before the first poll starts.|1000|integer|
+|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer|
+|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object|
+|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object|
+|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built-in scheduler|none|object|
+|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object|
+|startScheduler|Whether the scheduler should be auto started.|true|boolean|
+|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object|
+|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean|
diff --git a/camel-nats.md b/camel-nats.md
new file mode 100644
index 0000000000000000000000000000000000000000..16cf928ca9d3395f0a33758e6083980650b32dfb
--- /dev/null
+++ b/camel-nats.md
@@ -0,0 +1,135 @@
+# Nats
+
+**Since Camel 2.17**
+
+**Both producer and consumer are supported**
+
+[NATS](http://nats.io/) is a fast and reliable messaging platform.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component.
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-nats</artifactId>
+        <version>x.y.z</version>
+    </dependency>
+
+# URI format
+
+    nats:topic[?options]
+
+Where **topic** is the topic name.
+
+# Configuring servers
+
+You configure the NATS servers on either the component or the endpoint.
+
+For example, to configure this once on the component, you can do:
+
+    NatsComponent nats = context.getComponent("nats", NatsComponent.class);
+    nats.setServers("someserver:4222,someotherserver:42222");
+
+Notice how you can specify multiple servers separated by a comma.
+
+Or you can specify the servers in the endpoint URI:
+
+    from("direct:send").to("nats:test?servers=localhost:4222");
+
+The endpoint configuration will override any server configuration on the
+component level.
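+
+Spring XML users can apply the same component-level configuration by
+declaring the component as a bean. This is a sketch; the server URLs are
+illustrative:
+
+    <bean id="nats" class="org.apache.camel.component.nats.NatsComponent">
+        <property name="servers" value="someserver:4222,someotherserver:42222"/>
+    </bean>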
+
+## Configuring username and password or token
+
+You can specify the username and password for the servers in the server
+URLs, where it is `username:password@url`, or `token@url` etc.:
+
+    NatsComponent nats = context.getComponent("nats", NatsComponent.class);
+    nats.setServers("scott:tiger@someserver:4222,superman:123@someotherserver:42222");
+
+If you are using Camel Main or Spring Boot, you can configure the server
+URLs in the `application.properties` file:
+
+    camel.component.nats.servers=scott:tiger@someserver:4222,superman:123@someotherserver:42222
+
+# Request/Reply support
+
+The producer supports request/reply where it can wait for an expected
+reply message.
+
+The consumer will, when routing the message is complete, send back the
+message as a reply-message if required.
+
+# Examples
+
+**Producer example:**
+
+    from("direct:send")
+        .to("nats:mytopic");
+
+In case of using authorization, you can directly specify your
+credentials in the server URL:
+
+    from("direct:send")
+        .to("nats:mytopic?servers=username:password@localhost:4222");
+
+or your token:
+
+    from("direct:send")
+        .to("nats:mytopic?servers=token@localhost:4222");
+
+**Consumer example:**
+
+    from("nats:mytopic?maxMessages=5&queueName=myqueue")
+        .to("mock:result");
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|servers|URLs to one or more NATS servers. Use comma to separate URLs when specifying multiple servers.||string|
+|verbose|Whether or not running in verbose mode|false|boolean|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown.
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|headerFilterStrategy|To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message.||object| +|useGlobalSslContextParameters|Enable usage of global SSL context parameters.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|topic|The name of topic we want to use||string| +|connectionTimeout|Timeout for connection attempts. 
(in milliseconds)|2000|integer|
+|flushConnection|Define if we want to flush connection when stopping or not|true|boolean|
+|flushTimeout|Set the flush timeout (in milliseconds)|1000|integer|
+|maxPingsOut|Maximum number of pings that have not received a response allowed by the client|2|integer|
+|maxReconnectAttempts|Max reconnection attempts|60|integer|
+|noEcho|Turn off echo. If supported by the gnatsd version you are connecting to this flag will prevent the server from echoing messages back to the connection if it has subscriptions on the subject being published to.|false|boolean|
+|noRandomizeServers|Whether or not randomizing the order of servers for the connection attempts|false|boolean|
+|pedantic|Whether or not running in pedantic mode (this affects performance)|false|boolean|
+|pingInterval|Ping interval to be aware if connection is still alive (in milliseconds)|120000|integer|
+|reconnect|Whether or not using reconnection feature|true|boolean|
+|reconnectTimeWait|Waiting time before reconnection attempts (in milliseconds)|2000|integer|
+|requestCleanupInterval|Interval to clean up cancelled/timed out requests.|5000|integer|
+|servers|URLs to one or more NATS servers. Use comma to separate URLs when specifying multiple servers.||string|
+|verbose|Whether or not running in verbose mode|false|boolean|
+|maxMessages|Stop receiving messages from a topic we are subscribing to after maxMessages||string|
+|poolSize|Consumer thread pool size (default is 10)|10|integer|
+|queueName|The Queue name if we are using nats for a queue configuration||string|
+|replyToDisabled|Can be used to turn off sending back reply message in the consumer.|false|boolean|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler.
Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|replySubject|the subject to which subscribers should send response||string| +|requestTimeout|Request timeout in milliseconds|20000|integer| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|connection|Reference an already instantiated connection to the NATS server||object|
+|headerFilterStrategy|Define the header filtering strategy||object|
+|traceConnection|Whether or not connection trace messages should be printed to standard out for fine grained debugging of connection issues.|false|boolean|
+|credentialsFilePath|If useCredentialsFile is set to true, the credentialsFilePath option must be set. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems.||string|
+|secure|Set secure option indicating TLS is required|false|boolean|
+|sslContextParameters|To configure security using SSLContextParameters||object|
diff --git a/camel-netty-http.md b/camel-netty-http.md
new file mode 100644
index 0000000000000000000000000000000000000000..0d3c8873a1ab7486788703d4ce4535e4aecdf5c6
--- /dev/null
+++ b/camel-netty-http.md
@@ -0,0 +1,473 @@
+# Netty-http
+
+**Since Camel 2.14**
+
+**Both producer and consumer are supported**
+
+The Netty HTTP component is an extension to the
+[Netty](#netty-component.adoc) component to simplify HTTP transport with
+[Netty](#netty-component.adoc).
+
+**Stream**
+
+Netty is stream-based, which means the input it receives is submitted to
+Camel as a stream. That means you will only be able to read the content
+of the stream **once**. If you find a situation where the message body
+appears to be empty, or you need to access the data multiple times
+(e.g., multicasting, or redelivery error handling), you should use Stream
+caching or convert the message body to a `String` which is safe to be
+re-read multiple times.
+ +Note also that Netty HTTP reads the entire stream into memory using +`io.netty.handler.codec.http.HttpObjectAggregator` to build the entire +full http message. But the resulting message is still a stream-based +message that is readable once. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-netty-http + x.x.x + + + +# URI format + +The URI scheme for a netty component is as follows + + netty-http:http://0.0.0.0:8080[?options] + +**Query parameters vs. endpoint options** + +You may be wondering how Camel recognizes URI query parameters and +endpoint options. For example, you might create endpoint URI as follows: +`netty-http:http//example.com?myParam=myValue&compression=true` . In +this example `myParam` is the HTTP parameter, while `compression` is the +Camel endpoint option. The strategy used by Camel in such situations is +to resolve available endpoint options and remove them from the URI. It +means that for the discussed example, the HTTP request sent by Netty +HTTP producer to the endpoint will look as follows: +`http//example.com?myParam=myValue`, because `compression` endpoint +option will be resolved and removed from the target URL. + +Keep also in mind that you cannot specify endpoint options using dynamic +headers (like `CamelHttpQuery`). Endpoint options can be specified only +at the endpoint URI definition level (like `to` or `from` DSL elements). + +**A lot more options** + +This component inherits all the options from +[Netty](#netty-component.adoc), so make sure to look at the +[Netty](#netty-component.adoc) documentation as well. Notice that some +options from [Netty](#netty-component.adoc) are not applicable when +using this Netty HTTP component, such as options related to UDP +transport. + +# Access to Netty types + +This component uses the +`org.apache.camel.component.netty.http.NettyHttpMessage` as the message +implementation on the Exchange. 
This allows end users to get access to +the original Netty request/response instances if needed, as shown below. +Mind that the original response may not be accessible at all times. + + io.netty.handler.codec.http.HttpRequest request = exchange.getIn(NettyHttpMessage.class).getHttpRequest(); + +# Using HTTP Basic Authentication + +The Netty HTTP consumer supports HTTP basic authentication by specifying +the security realm name to use, as shown below: + +    <route> +       <from uri="netty-http:http://0.0.0.0:{{port}}/foo?securityConfiguration.realm=karaf"/> +       ... +    </route> + +The realm name is mandatory to enable basic authentication. By default, +the JAAS based authenticator is used, which will use the realm name +specified (karaf in the example above) and use the JAAS realm and the +JAAS `LoginModule`s of this realm for authentication. + +End users of Apache Karaf / ServiceMix have a karaf realm out of the box, +which is why the example above works out of the box in these +containers. + +## Specifying ACL on web resources + +The `org.apache.camel.component.netty.http.SecurityConstraint` allows you to +define constraints on web resources. The +`org.apache.camel.component.netty.http.SecurityConstraintMapping` is +provided out of the box, allowing you to easily define inclusions and +exclusions with roles. + +For example, as shown below in the XML DSL, we define the constraint +bean: + +    <bean id="constraint" class="org.apache.camel.component.netty.http.SecurityConstraintMapping"> +      <!-- inclusions defines url -> roles restrictions --> +      <!-- a * should be used for any role accepted (or even no roles) --> +      <property name="inclusions"> +        <map> +          <entry key="/*" value="*"/> +          <entry key="/admin/*" value="admin"/> +          <entry key="/guest/*" value="admin,guest"/> +        </map> +      </property> +      <!-- exclusions is used to define public urls, which requires no authentication --> +      <property name="exclusions"> +        <set> +          <value>/public/*</value> +        </set> +      </property> +    </bean> + +The constraint above is defined so that: + +- access to `/*` is restricted and any role is accepted (even if the user + has no roles) + +- access to `/admin/*` requires the admin role + +- access to `/guest/*` requires the admin or guest role + +- access to `/public/*` is an exclusion, which means no authentication is + needed, and it is therefore public for everyone without logging in + +To use this constraint, we just need to refer to the bean id as shown +below: + +    <route> +       <from uri="netty-http:http://0.0.0.0:{{port}}/foo?securityConfiguration.realm=karaf&amp;securityConfiguration.securityConstraint=#constraint"/> +       ... +    </route> + +# Examples + +In the route below, we use Netty HTTP as an HTTP server, which returns a +hardcoded *"Bye World"* message.
+ + from("netty-http:http://0.0.0.0:8080/foo") + .transform().constant("Bye World"); + +And we can also call this HTTP server using Camel, with the +ProducerTemplate as shown below: + + String out = template.requestBody("netty-http:http://0.0.0.0:8080/foo", "Hello World", String.class); + System.out.println(out); + +And we get *"Bye World"* as the output. + +## How do I let Netty match wildcards? + +By default, Netty HTTP will only match on exact URIs. But you can +instruct Netty to match prefixes. For example: + + from("netty-http:http://0.0.0.0:8123/foo").to("mock:foo"); + +In the route above Netty HTTP will only match if the URI is an exact +match, so it will match if you enter +`http://0.0.0.0:8123/foo` but not if you enter +`http://0.0.0.0:8123/foo/bar`. + +So if you want to enable wildcard matching, you do as follows: + + from("netty-http:http://0.0.0.0:8123/foo?matchOnUriPrefix=true").to("mock:foo"); + +Now Netty matches any endpoint that starts with `foo`. + +To match **any** endpoint, you can do: + + from("netty-http:http://0.0.0.0:8123?matchOnUriPrefix=true").to("mock:foo"); + +## Using multiple routes with same port + +In the same CamelContext you can have multiple Netty HTTP routes +that share the same port (i.e., the same `io.netty.bootstrap.ServerBootstrap` +instance). Doing this requires a number of bootstrap options to be +identical in the routes, as the routes will share the same +`io.netty.bootstrap.ServerBootstrap` instance. The instance will be +configured with the options from the first route created. + +The options that must be configured identically across the routes are all +the options defined in the +`org.apache.camel.component.netty.NettyServerBootstrapConfiguration` +configuration class. If you have configured another route with different +options, Camel will throw an exception on startup, indicating the +options are not identical. To mitigate this, ensure all options are +identical.
+ +Here is an example with two routes that share the same port. + +**Two routes sharing the same port** + + from("netty-http:http://0.0.0.0:{{port}}/foo") + .to("mock:foo") + .transform().constant("Bye World"); + + from("netty-http:http://0.0.0.0:{{port}}/bar") + .to("mock:bar") + .transform().constant("Bye Camel"); + +And here is an example of a misconfigured second route that does not +have identical +`org.apache.camel.component.netty.NettyServerBootstrapConfiguration` +options as the first route. This will cause Camel to fail on startup. + +**Two routes sharing the same port, but the second route is +misconfigured and will fail on startup** + + from("netty-http:http://0.0.0.0:{{port}}/foo") + .to("mock:foo") + .transform().constant("Bye World"); + + // we cannot have a 2nd route on the same port with SSL enabled, when the 1st route is NOT + from("netty-http:http://0.0.0.0:{{port}}/bar?ssl=true") + .to("mock:bar") + .transform().constant("Bye Camel"); + +## Reusing the same server bootstrap configuration with multiple routes + +By configuring the common server bootstrap options in a single instance +of a +`org.apache.camel.component.netty.NettyServerBootstrapConfiguration` +type, we can use the `bootstrapConfiguration` option on the Netty HTTP +consumers to refer to and reuse the same options across all consumers. + +    <!-- a shared bootstrap configuration, here using the backlog and workerCount options --> +    <bean id="nettyHttpBootstrapOptions" class="org.apache.camel.component.netty.NettyServerBootstrapConfiguration"> +      <property name="backlog" value="200"/> +      <property name="workerCount" value="16"/> +    </bean> + +And in the routes you refer to this option as shown below: + +    <route> +      <from uri="netty-http:http://0.0.0.0:{{port}}/foo?bootstrapConfiguration=#nettyHttpBootstrapOptions"/> +      ... +    </route> + +    <route> +      <from uri="netty-http:http://0.0.0.0:{{port}}/bar?bootstrapConfiguration=#nettyHttpBootstrapOptions"/> +      ... +    </route> + +    <route> +      <from uri="netty-http:http://0.0.0.0:{{port}}/beer?bootstrapConfiguration=#nettyHttpBootstrapOptions"/> +      ... +    </route> + +## Reusing the same server bootstrap configuration with multiple routes across multiple bundles in OSGi container + +See the Netty HTTP Server Example for more details and an example of how +to do that. + +## Implementing a reverse proxy + +The Netty HTTP component can act as a reverse proxy; in that case, the +`Exchange.HTTP_SCHEME`, `Exchange.HTTP_HOST` and `Exchange.HTTP_PORT` +headers are populated from the absolute URL received on the request line +of the HTTP request.
+ +Here’s an example of an HTTP proxy that simply transforms the response +from the origin server to uppercase. + + from("netty-http:proxy://0.0.0.0:8080") + .toD("netty-http:" + + "${headers." + Exchange.HTTP_SCHEME + "}://" + + "${headers." + Exchange.HTTP_HOST + "}:" + + "${headers." + Exchange.HTTP_PORT + "}") + .process(this::processResponse); + + void processResponse(final Exchange exchange) { + final NettyHttpMessage message = exchange.getIn(NettyHttpMessage.class); + final FullHttpResponse response = message.getHttpResponse(); + + final ByteBuf buf = response.content(); + final String string = buf.toString(StandardCharsets.UTF_8); + + buf.resetWriterIndex(); + ByteBufUtil.writeUtf8(buf, string.toUpperCase(Locale.US)); + } + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|configuration|To use the NettyConfiguration as configuration when creating endpoints.||object| +|disconnect|Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer.|false|boolean| +|keepAlive|Setting to ensure socket is not closed due to inactivity|true|boolean| +|reuseAddress|Setting to facilitate socket multiplexing|true|boolean| +|reuseChannel|This option allows producers and consumers (in client mode) to reuse the same Netty Channel for the lifecycle of processing the Exchange. This is useful if you need to call a server multiple times in a Camel route and want to use the same network connection. When using this, the channel is not returned to the connection pool until the Exchange is done; or disconnected if the disconnect option is set to true. 
The reused Channel is stored on the Exchange as an exchange property with the key NettyConstants#NETTY\_CHANNEL, which allows you to obtain the channel during routing and use it as well.|false|boolean| +|sync|Setting to set endpoint as one-way or request-response|true|boolean| +|tcpNoDelay|Setting to improve TCP protocol performance|true|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions; these will be logged at WARN or ERROR level and ignored.|false|boolean| +|broadcast|Setting to choose Multicast over UDP|false|boolean| +|clientMode|If clientMode is true, the netty consumer will connect to the address as a TCP client.|false|boolean| +|muteException|If enabled, and an Exchange failed processing on the consumer side, the response's body won't contain the exception's stack trace.|false|boolean| +|reconnect|Used only in clientMode on the consumer: the consumer will attempt to reconnect on disconnection if this is enabled|true|boolean| +|reconnectInterval|Used if reconnect and clientMode are enabled. The interval in milliseconds between reconnection attempts|10000|integer| +|backlog|Allows configuring a backlog for the netty consumer (server). Note the backlog is just a best effort depending on the OS.
Setting this option to a value such as 200, 500 or 1000 tells the TCP stack how long the accept queue can be. If this option is not configured, then the backlog depends on the OS setting.||integer| +|bossCount|When netty works in nio mode, it uses the default bossCount parameter from Netty, which is 1. Users can use this option to override the default bossCount from Netty|1|integer| +|bossGroup|Set the BossGroup which could be used for handling the new connections on the server side across the NettyEndpoint||object| +|disconnectOnNoReply|If sync is enabled then this option dictates whether the NettyConsumer should disconnect when there is no reply to send back.|true|boolean| +|executorService|To use the given EventExecutorGroup.||object| +|maximumPoolSize|Sets a maximum thread pool size for the netty consumer ordered thread pool. The default size is 2 x cpu\_core plus 1. Setting this value to e.g. 10 will then use 10 threads unless 2 x cpu\_core plus 1 is a higher value, which then will override and be used. For example, if there are 8 cores, then the consumer thread pool will be 17. This thread pool is used to route messages received from Netty by Camel. We use a separate thread pool to ensure ordering of messages, and also so that if some messages block, Netty's worker threads (event loop) won't be affected.||integer| +|nettyServerBootstrapFactory|To use a custom NettyServerBootstrapFactory||object| +|networkInterface|When using UDP, this option can be used to specify a network interface by its name, such as eth0, to join a multicast group.||string| +|noReplyLogLevel|If sync is enabled, this option dictates which logging level the NettyConsumer should use when logging that there is no reply to send back.|WARN|object| +|serverClosedChannelExceptionCaughtLogLevel|If the server (NettyConsumer) catches a java.nio.channels.ClosedChannelException then it is logged using this logging level.
This is used to avoid logging the closed channel exceptions, as clients can disconnect abruptly and then cause a flood of closed exceptions in the Netty server.|DEBUG|object| +|serverExceptionCaughtLogLevel|If the server (NettyConsumer) catches an exception then its logged using this logging level.|WARN|object| +|serverInitializerFactory|To use a custom ServerInitializerFactory||object| +|usingExecutorService|Whether to use ordered thread pool, to ensure events are processed orderly on the same channel.|true|boolean| +|connectTimeout|Time to wait for a socket connection to be available. Value is in milliseconds.|10000|integer| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|requestTimeout|Allows to use a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milli seconds, so eg 30000 is 30 seconds. The requestTimeout is using Netty's ReadTimeoutHandler to trigger the timeout.||integer| +|clientInitializerFactory|To use a custom ClientInitializerFactory||object| +|correlationManager|To use a custom correlation manager to manage how request and reply messages are mapped when using request/reply with the netty producer. This should only be used if you have a way to map requests together with replies such as if there is correlation ids in both the request and reply messages. 
This can be used if you want to multiplex concurrent messages on the same channel (aka connection) in netty. When doing this you must have a way to correlate the request and reply messages so you can store the right reply on the inflight Camel Exchange before its continued routed. We recommend extending the TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeout and other complexities you otherwise would need to implement as well. See also the producerPoolEnabled option for more details.||object| +|lazyChannelCreation|Channels can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started.|true|boolean| +|producerPoolBlockWhenExhausted|Sets the value for the blockWhenExhausted configuration attribute. It determines whether to block when the borrowObject() method is invoked when the pool is exhausted (the maximum number of active objects has been reached).|true|boolean| +|producerPoolEnabled|Whether producer pool is enabled or not. Important: If you turn this off then a single shared connection is used for the producer, also if you are doing request/reply. That means there is a potential issue with interleaved responses if replies comes back out-of-order. Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continue processing the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as correlation manager and configure it via the correlationManager option. See also the correlationManager option for more details.|true|boolean| +|producerPoolMaxIdle|Sets the cap on the number of idle instances in the pool.|100|integer| +|producerPoolMaxTotal|Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. 
Use a negative value for no limit.|-1|integer| +|producerPoolMaxWait|Sets the maximum duration (value in millis) the borrowObject() method should block before throwing an exception when the pool is exhausted and producerPoolBlockWhenExhausted is true. When less than 0, the borrowObject() method may block indefinitely.|-1|integer| +|producerPoolMinEvictableIdle|Sets the minimum amount of time (value in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor.|300000|integer| +|producerPoolMinIdle|Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects.||integer| +|udpConnectionlessSending|This option supports connection less udp sending which is a real fire and forget. A connected udp send receive the PortUnreachableException if no one is listen on the receiving port.|false|boolean| +|useByteBuf|If the useByteBuf is true, netty producer will turn the message body into ByteBuf before sending it out.|false|boolean| +|allowSerializedHeaders|Only used for TCP when transferExchange is true. When set to true, serializable objects in headers and properties will be added to the exchange. Otherwise Camel will exclude any non-serializable objects and log it at WARN level.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|channelGroup|To use a explicit ChannelGroup.||object| +|headerFilterStrategy|To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter headers.||object| +|nativeTransport|Whether to use native transport instead of NIO. 
Native transport takes advantage of the host operating system and is only supported on some platforms. You need to add the netty JAR for the host operating system you are using. See more details at: http://netty.io/wiki/native-transports.html|false|boolean| +|nettyHttpBinding|To use a custom org.apache.camel.component.netty.http.NettyHttpBinding for binding to/from Netty and the Camel Message API.||object| +|options|Allows configuring additional netty options using option. as a prefix. For example option.child.keepAlive=false to set the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used.||object| +|receiveBufferSize|The TCP/UDP buffer sizes to be used during inbound communication. Size is in bytes.|65536|integer| +|receiveBufferSizePredictor|Configures the buffer size predictor. See details in the Netty documentation and this mail thread.||integer| +|sendBufferSize|The TCP/UDP buffer sizes to be used during outbound communication. Size is in bytes.|65536|integer| +|transferExchange|Only used for TCP. You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log them at WARN level.|false|boolean| +|udpByteArrayCodec|For UDP only. If enabled, the byte array codec is used instead of the Java serialization protocol.|false|boolean| +|unixDomainSocketPath|Path to the unix domain socket to use instead of an inet socket. The host and port parameters will not be used, but they are still required; it is OK to set dummy values for them. Must be used with nativeTransport=true and clientMode=false.||string| +|workerCount|When netty works in nio mode, it uses the default workerCount parameter from Netty (which is cpu\_core\_threads x 2).
User can use this option to override the default workerCount from Netty.||integer| +|workerGroup|To use a explicit EventLoopGroup as the boss thread pool. For example to share a thread pool with multiple consumers or producers. By default each consumer or producer has their own worker pool with 2 x cpu count core threads.||object| +|allowDefaultCodec|The netty component installs a default codec if both, encoder/decoder is null and textline is false. Setting allowDefaultCodec to false prevents the netty component from installing a default codec as the first element in the filter chain.|true|boolean| +|autoAppendDelimiter|Whether or not to auto append missing end delimiter when sending using the textline codec.|true|boolean| +|decoderMaxLineLength|The max line length to use for the textline codec.|1024|integer| +|decoders|A list of decoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup.||string| +|delimiter|The delimiter to use for the textline codec. Possible values are LINE and NULL.|LINE|object| +|encoders|A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup.||string| +|encoding|The encoding (a charset name) to use for the textline codec. If not provided, Camel will use the JVM default Charset.||string| +|textline|Only used for TCP. 
If no codec is specified, you can use this flag to indicate a text line based codec; if not specified or the value is false, then Object Serialization is assumed over TCP - however only Strings are allowed to be serialized by default.|false|boolean| +|enabledProtocols|Which protocols to enable when using SSL|TLSv1.2,TLSv1.3|string| +|hostnameVerification|To enable/disable hostname verification on SSLEngine|false|boolean| +|keyStoreFile|Client side certificate keystore to be used for encryption||string| +|keyStoreFormat|Keystore format to be used for payload encryption. Defaults to JKS if not set||string| +|keyStoreResource|Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems.||string| +|needClientAuth|Configures whether the server needs client authentication when using SSL.|false|boolean| +|passphrase|Password setting to use in order to encrypt/decrypt payloads sent using SSH||string| +|securityConfiguration|Refers to a org.apache.camel.component.netty.http.NettyHttpSecurityConfiguration for configuring secure web resources.||object| +|securityProvider|Security provider to be used for payload encryption. Defaults to SunX509 if not set.||string| +|ssl|Setting to specify whether SSL encryption is applied to this endpoint|false|boolean| +|sslClientCertHeaders|When enabled and in SSL mode, then the Netty consumer will enrich the Camel Message with headers having information about the client certificate such as subject name, issuer name, serial number, and the valid date range.|false|boolean| +|sslContextParameters|To configure security using SSLContextParameters||object| +|sslHandler|Reference to a class that could be used to return an SSL Handler||object| +|trustStoreFile|Server side certificate keystore to be used for encryption||string| +|trustStoreResource|Server side certificate keystore to be used for encryption. 
Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems.||string| +|useGlobalSslContextParameters|Enable usage of global SSL context parameters.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|protocol|The protocol to use which is either http, https or proxy - a consumer only option.||string| +|host|The local hostname such as localhost, or 0.0.0.0 when being a consumer. The remote HTTP server hostname when using producer.||string| +|port|The host port number||integer| +|path|Resource path||string| +|bridgeEndpoint|If the option is true, the producer will ignore the NettyHttpConstants.HTTP\_URI header, and use the endpoint's URI for request. You may also set the throwExceptionOnFailure to be false to let the producer send all the fault response back. The consumer working in the bridge mode will skip the gzip compression and WWW URL form encoding (by adding the Exchange.SKIP\_GZIP\_ENCODING and Exchange.SKIP\_WWW\_FORM\_URLENCODED headers to the consumed exchange).|false|boolean| +|disconnect|Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer.|false|boolean| +|keepAlive|Setting to ensure socket is not closed due to inactivity|true|boolean| +|reuseAddress|Setting to facilitate socket multiplexing|true|boolean| +|reuseChannel|This option allows producers and consumers (in client mode) to reuse the same Netty Channel for the lifecycle of processing the Exchange. This is useful if you need to call a server multiple times in a Camel route and want to use the same network connection. When using this, the channel is not returned to the connection pool until the Exchange is done; or disconnected if the disconnect option is set to true. 
The reused Channel is stored on the Exchange as an exchange property with the key NettyConstants#NETTY\_CHANNEL which allows you to obtain the channel during routing and use it as well.|false|boolean| +|sync|Setting to set endpoint as one-way or request-response|true|boolean| +|tcpNoDelay|Setting to improve TCP protocol performance|true|boolean| +|matchOnUriPrefix|Whether or not Camel should try to find a target consumer by matching the URI prefix if no exact match is found.|false|boolean| +|muteException|If enabled and an Exchange failed processing on the consumer side the response's body won't contain the exception's stack trace.|false|boolean| +|send503whenSuspended|Whether to send back HTTP status code 503 when the consumer has been suspended. If the option is false then the Netty Acceptor is unbound when the consumer is suspended, so clients cannot connect anymore.|true|boolean| +|backlog|Allows to configure a backlog for netty consumer (server). Note the backlog is just a best effort depending on the OS. Setting this option to a value such as 200, 500 or 1000, tells the TCP stack how long the accept queue can be If this option is not configured, then the backlog depends on OS setting.||integer| +|bossCount|When netty works on nio mode, it uses default bossCount parameter from Netty, which is 1. User can use this option to override the default bossCount from Netty|1|integer| +|bossGroup|Set the BossGroup which could be used for handling the new connection of the server side across the NettyEndpoint||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. 
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|chunkedMaxContentLength|Value in bytes the max content length per chunked frame received on the Netty HTTP server.|1048576|integer| +|compression|Allow using gzip/deflate for compression on the Netty HTTP server if the client supports it from the HTTP headers.|false|boolean| +|disconnectOnNoReply|If sync is enabled then this option dictates NettyConsumer if it should disconnect where there is no reply to send back.|true|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|httpMethodRestrict|To disable HTTP methods on the Netty HTTP consumer. You can specify multiple separated by comma.||string| +|logWarnOnBadRequest|Whether Netty HTTP server should log a WARN if decoding the HTTP request failed and a HTTP Status 400 (bad request) is returned.|true|boolean| +|mapHeaders|If this option is enabled, then during binding from Netty to Camel Message then the headers will be mapped as well (eg added as header to the Camel Message as well). You can turn off this option to disable this. 
The headers can still be accessed from the org.apache.camel.component.netty.http.NettyHttpMessage message with the method getHttpRequest() that returns the Netty HTTP request io.netty.handler.codec.http.HttpRequest instance.|true|boolean| +|maxChunkSize|The maximum length of the content or each chunk. If the content length (or the length of each chunk) exceeds this value, the content or chunk will be split into multiple io.netty.handler.codec.http.HttpContents whose length is maxChunkSize at maximum. See io.netty.handler.codec.http.HttpObjectDecoder|8192|integer| +|maxHeaderSize|The maximum length of all headers. If the sum of the length of each header exceeds this value, a io.netty.handler.codec.TooLongFrameException will be raised.|8192|integer| +|maxInitialLineLength|The maximum length of the initial line (e.g. {code GET / HTTP/1.0} or {code HTTP/1.0 200 OK}) If the length of the initial line exceeds this value, a TooLongFrameException will be raised. See io.netty.handler.codec.http.HttpObjectDecoder|4096|integer| +|nettyServerBootstrapFactory|To use a custom NettyServerBootstrapFactory||object| +|nettySharedHttpServer|To use a shared Netty HTTP server. See Netty HTTP Server Example for more details.||object| +|noReplyLogLevel|If sync is enabled this option dictates NettyConsumer which logging level to use when logging a there is no reply to send back.|WARN|object| +|serverClosedChannelExceptionCaughtLogLevel|If the server (NettyConsumer) catches an java.nio.channels.ClosedChannelException then its logged using this logging level. 
This is used to avoid logging the closed channel exceptions, as clients can disconnect abruptly and then cause a flood of closed exceptions in the Netty server.|DEBUG|object|
|serverExceptionCaughtLogLevel|If the server (NettyConsumer) catches an exception, then it is logged using this logging level.|WARN|object|
|serverInitializerFactory|To use a custom ServerInitializerFactory||object|
|traceEnabled|Specifies whether to enable HTTP TRACE for this Netty HTTP consumer. By default TRACE is turned off.|false|boolean|
|urlDecodeHeaders|If this option is enabled, then during binding from Netty to Camel Message the header values will be URL decoded (e.g. %20 will be a space character). Notice this option is used by the default org.apache.camel.component.netty.http.NettyHttpBinding, and therefore if you implement a custom org.apache.camel.component.netty.http.NettyHttpBinding then you would need to decode the headers according to this option.|false|boolean|
|usingExecutorService|Whether to use ordered thread pool, to ensure events are processed orderly on the same channel.|true|boolean|
|connectTimeout|Time to wait for a socket connection to be available. Value is in milliseconds.|10000|integer|
|cookieHandler|Configure a cookie handler to maintain an HTTP session||object|
|requestTimeout|Allows using a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milliseconds, so e.g. 30000 is 30 seconds. The requestTimeout uses Netty's ReadTimeoutHandler to trigger the timeout.||integer|
|throwExceptionOnFailure|Option to disable throwing the HttpOperationFailedException in case of failed responses from the remote server. This allows you to get all responses regardless of the HTTP status code.|true|boolean|
|clientInitializerFactory|To use a custom ClientInitializerFactory||object|
|lazyChannelCreation|Channels can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started.|true|boolean|
|lazyStartProducer|Whether the producer should be started lazily (on the first message). By starting lazily you can use this to allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time.|false|boolean|
|okStatusCodeRange|The status codes which are considered a success response. The values are inclusive. Multiple ranges can be defined, separated by comma, e.g. 200-204,209,301-304. Each range must be a single number or from-to with the dash included. The default range is 200-299|200-299|string|
|producerPoolBlockWhenExhausted|Sets the value for the blockWhenExhausted configuration attribute. It determines whether to block when the borrowObject() method is invoked when the pool is exhausted (the maximum number of active objects has been reached).|true|boolean|
|producerPoolEnabled|Whether producer pool is enabled or not. Important: If you turn this off then a single shared connection is used for the producer, also if you are doing request/reply. That means there is a potential issue with interleaved responses if replies come back out-of-order. Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continuing to process the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as the correlation manager and configure it via the correlationManager option. See also the correlationManager option for more details.|true|boolean|
|producerPoolMaxIdle|Sets the cap on the number of idle instances in the pool.|100|integer|
|producerPoolMaxTotal|Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. Use a negative value for no limit.|-1|integer|
|producerPoolMaxWait|Sets the maximum duration (value in millis) the borrowObject() method should block before throwing an exception when the pool is exhausted and producerPoolBlockWhenExhausted is true. When less than 0, the borrowObject() method may block indefinitely.|-1|integer|
|producerPoolMinEvictableIdle|Sets the minimum amount of time (value in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor.|300000|integer|
|producerPoolMinIdle|Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects.||integer|
|useRelativePath|Sets whether to use a relative path in HTTP requests.|true|boolean|
|allowSerializedHeaders|Only used for TCP when transferExchange is true. When set to true, serializable objects in headers and properties will be added to the exchange. Otherwise Camel will exclude any non-serializable objects and log it at WARN level.|false|boolean|
|channelGroup|To use an explicit ChannelGroup.||object|
|configuration|To use a custom configured NettyHttpConfiguration for configuring this endpoint.||object|
|disableStreamCache|Determines whether or not the raw input stream from Netty HttpRequest#getContent() or HttpResponse#getContent() is cached (Camel will read the stream into a lightweight in-memory stream cache). By default Camel will cache the Netty input stream to support reading it multiple times, to ensure Camel can retrieve all data from the stream. However, you can set this option to true when you for example need to access the raw stream, such as streaming it directly to a file or other persistent store. Mind that if you enable this option, then you cannot read the Netty stream multiple times out of the box, and you would need to manually reset the reader index on the Netty raw stream. Also Netty will auto-close the Netty stream when the Netty HTTP server/HTTP client is done processing, which means that if the asynchronous routing engine is in use then any asynchronous thread that may continue routing the org.apache.camel.Exchange may not be able to read the Netty stream, because Netty has closed it.|false|boolean|
|headerFilterStrategy|To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter headers.||object|
|nativeTransport|Whether to use native transport instead of NIO. Native transport takes advantage of the host operating system and is only supported on some platforms. You need to add the netty JAR for the host operating system you are using. See more details at: http://netty.io/wiki/native-transports.html|false|boolean|
|nettyHttpBinding|To use a custom org.apache.camel.component.netty.http.NettyHttpBinding for binding to/from Netty and Camel Message API.||object|
|options|Allows configuring additional netty options using option. as prefix. For example option.child.keepAlive=false to set the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used.||object|
|receiveBufferSize|The TCP/UDP buffer sizes to be used during inbound communication. Size is in bytes.|65536|integer|
|receiveBufferSizePredictor|Configures the buffer size predictor. See details at the Netty documentation and this mail thread.||integer|
|sendBufferSize|The TCP/UDP buffer sizes to be used during outbound communication. Size is in bytes.|65536|integer|
|synchronous|Sets whether synchronous processing should be strictly used|false|boolean|
|transferException|If enabled and an Exchange fails processing on the consumer side, the caused Exception is sent back serialized in the response as an application/x-java-serialized-object content type. On the producer side the exception will be deserialized and thrown as is, instead of the HttpOperationFailedException. The caused exception is required to be serialized. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk.|false|boolean|
|transferExchange|Only used for TCP. You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level.|false|boolean|
|unixDomainSocketPath|Path to unix domain socket to use instead of inet socket. Host and port parameters will not be used, however they are still required. It is ok to set dummy values for them. Must be used with nativeTransport=true and clientMode=false.||string|
|workerCount|When Netty works in NIO mode, it uses the default workerCount parameter from Netty (which is cpu\_core\_threads x 2). The user can use this option to override the default workerCount from Netty.||integer|
|workerGroup|To use an explicit EventLoopGroup as the worker thread pool. For example to share a thread pool with multiple consumers or producers. By default each consumer or producer has their own worker pool with 2 x cpu count core threads.||object|
|decoders|A list of decoders to be used. You can use a String which has values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should look it up.||string|
|encoders|A list of encoders to be used. You can use a String which has values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should look it up.||string|
|enabledProtocols|Which protocols to enable when using SSL|TLSv1.2,TLSv1.3|string|
|hostnameVerification|To enable/disable hostname verification on SSLEngine|false|boolean|
|keyStoreFile|Client side certificate keystore to be used for encryption||string|
|keyStoreFormat|Keystore format to be used for payload encryption. Defaults to JKS if not set||string|
|keyStoreResource|Client side certificate keystore to be used for encryption. It is loaded by default from the classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems.||string|
|needClientAuth|Configures whether the server needs client authentication when using SSL.|false|boolean|
|passphrase|Password setting to use in order to encrypt/decrypt payloads sent using SSH||string|
|securityConfiguration|Refers to a org.apache.camel.component.netty.http.NettyHttpSecurityConfiguration for configuring secure web resources.||object|
|securityOptions|To configure NettyHttpSecurityConfiguration using key/value pairs from the map||object|
|securityProvider|Security provider to be used for payload encryption. Defaults to SunX509 if not set.||string|
|ssl|Setting to specify whether SSL encryption is applied to this endpoint|false|boolean|
|sslClientCertHeaders|When enabled and in SSL mode, then the Netty consumer will enrich the Camel Message with headers having information about the client certificate such as subject name, issuer name, serial number, and the valid date range.|false|boolean|
|sslContextParameters|To configure security using SSLContextParameters||object|
|sslHandler|Reference to a class that could be used to return an SSL Handler||object|
|trustStoreFile|Server side certificate keystore to be used for encryption||string|
|trustStoreResource|Server side certificate keystore to be used for encryption. It is loaded by default from the classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems.||string|

diff --git a/camel-netty.md b/camel-netty.md
new file mode 100644
index 0000000000000000000000000000000000000000..ca9a2b49c62a18369f6acdf64c5f5c1caa52b050
--- /dev/null
+++ b/camel-netty.md
@@ -0,0 +1,773 @@

# Netty

**Since Camel 2.14**

**Both producer and consumer are supported**

The Netty component in Camel is a socket communication component, based
on the [Netty](http://netty.io/) project version 4. Netty is a NIO
client-server framework that enables quick and easy development of
network applications such as protocol servers and clients. Netty greatly
simplifies and streamlines network programming such as TCP and UDP
socket servers.

This Camel component supports both producer and consumer endpoints.

The Netty component has several options and allows fine-grained control
of a number of TCP/UDP communication parameters (buffer sizes,
`keepAlive`, `tcpNoDelay`, etc.), and facilitates both In-Only and
In-Out communication on a Camel route.
Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-netty</artifactId>
        <version>x.x.x</version>
    </dependency>

# URI format

The URI scheme for a netty component is as follows:

**TCP**

    netty:tcp://0.0.0.0:99999[?options]

**UDP**

    netty:udp://remotehost:99999/[?options]

This component supports producer and consumer endpoints for both TCP and
UDP.

# Registry-based Options

Codec Handlers and SSL Keystores can be enlisted in the Registry, such
as in the Spring XML file. The values that could be passed in are the
following:
|Name|Description|
|---|---|
|passphrase|Password setting to use to encrypt/decrypt payloads sent using SSH|
|keyStoreFormat|Keystore format to be used for payload encryption. Defaults to JKS if not set|
|securityProvider|Security provider to be used for payload encryption. Defaults to SunX509 if not set.|
|keyStoreFile|*Deprecated*: Client side certificate keystore to be used for encryption|
|trustStoreFile|*Deprecated*: Server side certificate keystore to be used for encryption|
|keyStoreResource|Client side certificate keystore to be used for encryption. It is loaded by default from the classpath, but you can prefix with "classpath:", "file:", or "http:" to load the resource from different systems.|
|trustStoreResource|Server side certificate keystore to be used for encryption. It is loaded by default from the classpath, but you can prefix with "classpath:", "file:", or "http:" to load the resource from different systems.|
|sslHandler|Reference to a class that could be used to return an SSL Handler|
|encoder|A custom ChannelHandler class that can be used to perform special marshalling of outbound payloads. Must override io.netty.channel.ChannelOutboundHandlerAdapter.|
|encoders|A list of encoders to be used. You can use a string that has values separated by comma, and have the values be looked up in the Registry. Remember to prefix the value with # so Camel knows it should look it up.|
|decoder|A custom ChannelHandler class that can be used to perform special marshalling of inbound payloads. Must override io.netty.channel.ChannelInboundHandlerAdapter.|
|decoders|A list of decoders to be used. You can use a string that has values separated by comma, and have the values be looked up in the Registry. Remember to prefix the value with # so Camel knows it should look it up.|
Read below about using non-shareable encoders/decoders.

## Using non-shareable encoders or decoders

If your encoders or decoders are not shareable (e.g., they don’t have
Netty’s `@Sharable` class annotation), then your encoder/decoder must
implement the `org.apache.camel.component.netty.ChannelHandlerFactory`
interface, and return a new instance in the `newChannelHandler` method.
This is to ensure the encoder/decoder can safely be used. If this is not
the case, then the Netty component will log a WARN when an endpoint is
created.

The Netty component offers a
`org.apache.camel.component.netty.ChannelHandlerFactories` factory
class that has a number of commonly used methods.

# Sending Messages to/from a Netty endpoint

## Netty Producer

In Producer mode, the component provides the ability to send payloads to
a socket endpoint using either TCP or UDP protocols (with optional SSL
support).

The producer mode supports both one-way and request-response based
operations.

## Netty Consumer

In Consumer mode, the component provides the ability to:

- listen to a specified socket using either TCP or UDP protocols (with
  optional SSL support),

- receive requests on the socket using text/xml, binary and serialized
  object-based payloads and

- send them along on a route as message exchanges.

The consumer mode supports both one-way and request-response based
operations.

## Using Multiple Codecs

In certain cases, it may be necessary to add chains of encoders and
decoders to the netty pipeline. To add multiple codecs to a Camel netty
endpoint, the `encoders` and `decoders` uri parameters should be used.
Like the `encoder` and `decoder` parameters, they are used to supply
references (lists of `ChannelUpstreamHandlers` and
`ChannelDownstreamHandlers`) that should be added to the pipeline.

Note that if encoders are specified, then the encoder param will be
ignored; similarly for decoders and the decoder param.
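The factory contract for non-shareable handlers (hand out a fresh instance per channel from `newChannelHandler`) can be illustrated in plain Java. This is a sketch with stand-in interfaces rather than the real Camel/Netty types; the names `Handler`, `HandlerFactory` and `StatefulFrameDecoder` are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class NonShareableHandlerSketch {
    // stand-in for io.netty.channel.ChannelHandler
    interface Handler { }

    // stand-in for org.apache.camel.component.netty.ChannelHandlerFactory:
    // produce a fresh handler instance for every new channel
    interface HandlerFactory {
        Handler newChannelHandler();
    }

    // a decoder with per-connection state, hence not shareable across channels
    static final class StatefulFrameDecoder implements Handler {
        final List<byte[]> bufferedFrames = new ArrayList<>(); // per-channel buffer
    }

    public static void main(String[] args) {
        HandlerFactory factory = StatefulFrameDecoder::new;
        Handler forChannelA = factory.newChannelHandler();
        Handler forChannelB = factory.newChannelHandler();
        // each channel gets its own instance, so per-channel state never leaks
        System.out.println(forChannelA != forChannelB); // prints "true"
    }
}
```

With the real APIs you would implement `org.apache.camel.component.netty.ChannelHandlerFactory` in the same way, so that every channel receives its own stateful encoder/decoder instance.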
Read further about using [non-shareable
encoders/decoders](#Netty-NonShareableEncodersOrDecoders).

The lists of codecs need to be added to Camel’s registry, so they
can be resolved when the endpoint is created.

    ChannelHandlerFactory lengthDecoder = ChannelHandlerFactories.newLengthFieldBasedFrameDecoder(1048576, 0, 4, 0, 4);

    StringDecoder stringDecoder = new StringDecoder();
    registry.bind("length-decoder", lengthDecoder);
    registry.bind("string-decoder", stringDecoder);

    LengthFieldPrepender lengthEncoder = new LengthFieldPrepender(4);
    StringEncoder stringEncoder = new StringEncoder();
    registry.bind("length-encoder", lengthEncoder);
    registry.bind("string-encoder", stringEncoder);

    List<ChannelHandler> decoders = new ArrayList<>();
    decoders.add(lengthDecoder);
    decoders.add(stringDecoder);

    List<ChannelHandler> encoders = new ArrayList<>();
    encoders.add(lengthEncoder);
    encoders.add(stringEncoder);

    registry.bind("encoders", encoders);
    registry.bind("decoders", decoders);

Spring’s native collections support can be used to specify the codec
lists in an application context.

The bean names can then be used in netty endpoint definitions either as
a comma-separated list or contained in a list, e.g.:

Java

    from("direct:multiple-codec").to("netty:tcp://0.0.0.0:{{port}}?encoders=#encoders&sync=false");

    from("netty:tcp://0.0.0.0:{{port}}?decoders=#length-decoder,#string-decoder&sync=false").to("mock:multiple-codec");

# Closing Channel When Complete

When acting as a server, you sometimes want to close the channel when,
for example, a client conversation is finished. You can do this by
simply setting the endpoint option `disconnect=true`.

However, you can also instruct Camel on a per-message basis as follows.
To instruct Camel to close the channel, you should add a header with the
key `CamelNettyCloseChannelWhenComplete` set to a boolean `true` value.
For instance, the example below will close the channel after it has
written the bye message back to the client:

    from("netty:tcp://0.0.0.0:8080").process(new Processor() {
        public void process(Exchange exchange) throws Exception {
            String body = exchange.getIn().getBody(String.class);
            exchange.getOut().setBody("Bye " + body);
            // some condition that determines if we should close
            if (close) {
                exchange.getOut().setHeader(NettyConstants.NETTY_CLOSE_CHANNEL_WHEN_COMPLETE, true);
            }
        }
    });

# Custom pipeline

Adding custom channel pipeline factories lets you gain complete control
over a created pipeline. Custom channel pipelines give the user complete
control over the handler/interceptor chain by inserting custom
handler(s), encoder(s) and decoder(s) without having to specify them in
the Netty Endpoint URL in a straightforward way.

To add a custom pipeline, a custom channel pipeline factory must be
created and registered with the context via the context registry (or the
camel-spring `ApplicationContextRegistry`, etc.).

A custom pipeline factory must be constructed as follows:

- A Producer-linked channel pipeline factory must extend the abstract
  class `ClientInitializerFactory`.

- A Consumer-linked channel pipeline factory must extend the abstract
  class `ServerInitializerFactory`.

- The classes should override the `initChannel()` method to insert
  custom handler(s), encoder(s) and decoder(s). Not overriding the
  `initChannel()` method creates a pipeline with no handlers, encoders
  or decoders wired to the pipeline.
The example below shows how a `ServerInitializerFactory` may be
created:

## Using custom pipeline factory

    public class SampleServerInitializerFactory extends ServerInitializerFactory {
        private int maxLineSize = 1024;

        protected void initChannel(Channel ch) throws Exception {
            ChannelPipeline channelPipeline = ch.pipeline();

            channelPipeline.addLast("encoder-SD", new StringEncoder(CharsetUtil.UTF_8));
            channelPipeline.addLast("decoder-DELIM", new DelimiterBasedFrameDecoder(maxLineSize, true, Delimiters.lineDelimiter()));
            channelPipeline.addLast("decoder-SD", new StringDecoder(CharsetUtil.UTF_8));
            // here we add the default Camel ServerChannelHandler for the consumer, to allow Camel to route the message, etc.
            channelPipeline.addLast("handler", new ServerChannelHandler(consumer));
        }
    }

The custom channel pipeline factory can then be added to the registry
and instantiated/utilized on a Camel route in the following way:

    Registry registry = camelContext.getRegistry();
    ServerInitializerFactory factory = new SampleServerInitializerFactory();
    registry.bind("spf", factory);
    context.addRoutes(new RouteBuilder() {
        public void configure() {
            String nettyEndpoint =
                "netty:tcp://0.0.0.0:5150?serverInitializerFactory=#spf";
            String returnString =
                "When You Go Home, Tell Them Of Us And Say,"
                + "For Your Tomorrow, We Gave Our Today.";

            from(nettyEndpoint)
                .process(new Processor() {
                    public void process(Exchange exchange) throws Exception {
                        exchange.getOut().setBody(returnString);
                    }
                });
        }
    });

# Reusing Netty boss and worker thread pools

Netty has two kinds of thread pools: boss and worker. By default, each
Netty consumer and producer has its own private thread pools. If you
want to reuse these thread pools among multiple consumers or producers,
then the thread pools must be created and enlisted in the Registry.
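In the Java DSL, a shared worker pool can be sketched as follows. This is a hedged sketch that assumes camel-netty is on the classpath; the bean name `sharedWorkerPool` and the helper class name are illustrative:

```java
import io.netty.channel.EventLoopGroup;
import org.apache.camel.CamelContext;
import org.apache.camel.component.netty.NettyWorkerPoolBuilder;

// Build one shared worker pool and enlist it in the registry so several
// netty endpoints can refer to it instead of creating their own pools.
public final class SharedWorkerPoolSetup {

    public static void configure(CamelContext context) {
        NettyWorkerPoolBuilder builder = new NettyWorkerPoolBuilder();
        builder.setWorkerCount(2);      // e.g. two worker threads
        builder.setName("NettyWorker"); // thread name prefix
        EventLoopGroup sharedWorkerPool = builder.build();

        // enlist under a well-known name for #sharedWorkerPool lookups
        context.getRegistry().bind("sharedWorkerPool", sharedWorkerPool);
    }
}
```

Endpoints can then refer to the pool via the `workerPool=#sharedWorkerPool` URI option.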
For example, using Spring XML, a shared worker thread pool can be
created with the `NettyWorkerPoolBuilder`, for instance with two worker
threads.

For the boss thread pool there is an
`org.apache.camel.component.netty.NettyServerBossPoolBuilder` builder
for Netty consumers, and an
`org.apache.camel.component.netty.NettyClientBossPoolBuilder` for the
Netty producers.

Then in the Camel routes we can refer to this worker pool by
configuring the `workerPool` option in the endpoint URI, and any other
route can refer to the same shared worker pool in the same way.

# Multiplexing concurrent messages over a single connection with request/reply

When using Netty for request/reply messaging via the netty producer,
then by default, each message is sent via a non-shared connection
(pooled). This ensures that replies automatically map to the correct
request thread for further routing in Camel. In other words, correlation
between request/reply messages happens out-of-the-box because the
replies come back on the same connection that was used for sending the
request; and this connection is not shared with others. When the
response comes back, the connection is returned to the connection pool,
where it can be reused by others.

However, if you want to multiplex concurrent request/responses on a
single shared connection, then you need to turn off the connection
pooling by setting `producerPoolEnabled=false`. Now this means there is
a potential issue with interleaved responses if replies come back
out-of-order. Therefore, you need to have a correlation id in both the
request and reply messages, so you can properly correlate the replies to
the Camel callback that is responsible for continuing to process the
message in Camel.
To do this, you need to implement
`NettyCamelStateCorrelationManager` as the correlation manager and
configure it via the `correlationManager=#myManager` option.

We recommend extending the `TimeoutCorrelationManagerSupport` when you
build custom correlation managers. This provides support for timeout and
other complexities you otherwise would need to implement as well.

You can find an example with the Apache Camel source code in the
examples directory under the `camel-example-netty-custom-correlation`
directory.

# Native transport

To enable native transport, you need to add an additional dependency
for epoll or kqueue, depending on your OS and CPU architecture. To make
this easier, add the following extension to the `build` section of your
`pom.xml`:

    <extension>
        <groupId>kr.motd.maven</groupId>
        <artifactId>os-maven-plugin</artifactId>
    </extension>

Then you need to add the following dependency:

Linux/Unix

    <dependency>
        <groupId>io.netty</groupId>
        <artifactId>netty-transport-native-epoll</artifactId>
        <classifier>${os.detected.classifier}</classifier>
    </dependency>

MacOS/BSD

    <dependency>
        <groupId>io.netty</groupId>
        <artifactId>netty-transport-native-kqueue</artifactId>
        <classifier>${os.detected.classifier}</classifier>
    </dependency>

# Examples

## A UDP Netty endpoint using Request-Reply and serialized object payload

Note that Object serialization is not allowed by default, and so a
decoder must be configured.
    @BindToRegistry("decoder")
    public ChannelHandler getDecoder() throws Exception {
        return new DefaultChannelHandlerFactory() {
            @Override
            public ChannelHandler newChannelHandler() {
                return new DatagramPacketObjectDecoder(ClassResolvers.weakCachingResolver(null));
            }
        };
    }

    RouteBuilder builder = new RouteBuilder() {
        public void configure() {
            from("netty:udp://0.0.0.0:5155?sync=true&decoders=#decoder")
                .process(new Processor() {
                    public void process(Exchange exchange) throws Exception {
                        Poetry poetry = (Poetry) exchange.getIn().getBody();
                        // Process poetry in some way
                        exchange.getOut().setBody("Message received");
                    }
                });
        }
    };

## A TCP-based Netty consumer endpoint using One-way communication

    RouteBuilder builder = new RouteBuilder() {
        public void configure() {
            from("netty:tcp://0.0.0.0:5150")
                .to("mock:result");
        }
    };

## An SSL/TCP-based Netty consumer endpoint using Request-Reply communication

Using the JSSE Configuration Utility

The Netty component supports SSL/TLS configuration through the [Camel
JSSE Configuration
Utility](#manual::camel-configuration-utilities.adoc). This utility
greatly decreases the amount of component-specific code you need to
write and is configurable at the endpoint and component levels. The
following examples demonstrate how to use the utility with the Netty
component.

Programmatic configuration of the component

    KeyStoreParameters ksp = new KeyStoreParameters();
    ksp.setResource("/users/home/server/keystore.jks");
    ksp.setPassword("keystorePassword");

    KeyManagersParameters kmp = new KeyManagersParameters();
    kmp.setKeyStore(ksp);
    kmp.setKeyPassword("keyPassword");

    SSLContextParameters scp = new SSLContextParameters();
    scp.setKeyManagers(kmp);

    NettyComponent nettyComponent = getContext().getComponent("netty", NettyComponent.class);
    nettyComponent.getConfiguration().setSslContextParameters(scp);
Using Basic SSL/TLS configuration on the Netty Component

    Registry registry = context.getRegistry();
    registry.bind("password", "changeit");
    registry.bind("ksf", new File("src/test/resources/keystore.jks"));
    registry.bind("tsf", new File("src/test/resources/keystore.jks"));

    context.addRoutes(new RouteBuilder() {
        public void configure() {
            String nettySslEndpoint =
                "netty:tcp://0.0.0.0:5150?sync=true&ssl=true&passphrase=#password"
                + "&keyStoreFile=#ksf&trustStoreFile=#tsf";
            String returnString =
                "When You Go Home, Tell Them Of Us And Say,"
                + "For Your Tomorrow, We Gave Our Today.";

            from(nettySslEndpoint)
                .process(new Processor() {
                    public void process(Exchange exchange) throws Exception {
                        exchange.getOut().setBody(returnString);
                    }
                });
        }
    });

Getting access to SSLSession and the client certificate

You can get access to the `javax.net.ssl.SSLSession` if you, for
example, need to get details about the client certificate. When
`ssl=true` then the [Netty](#netty-component.adoc) component will store
the `SSLSession` as a header on the Camel Message as shown below:

    SSLSession session = exchange.getIn().getHeader(NettyConstants.NETTY_SSL_SESSION, SSLSession.class);
    // get the first certificate, which is the client certificate
    javax.security.cert.X509Certificate cert = session.getPeerCertificateChain()[0];
    Principal principal = cert.getSubjectDN();

Remember to set `needClientAuth=true` to authenticate the client,
otherwise `SSLSession` cannot access information about the client
certificate, and you may get an exception
`javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated`. You
may also get this exception if the client certificate is expired or not
valid, etc.

The option `sslClientCertHeaders` can be set to `true`, which then
enriches the Camel Message with headers having details about the client
certificate.
For example, the subject name is readily available in the +header `CamelNettySSLClientCertSubjectName`. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|configuration|To use the NettyConfiguration as configuration when creating endpoints.||object| +|disconnect|Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer.|false|boolean| +|keepAlive|Setting to ensure socket is not closed due to inactivity|true|boolean| +|reuseAddress|Setting to facilitate socket multiplexing|true|boolean| +|reuseChannel|This option allows producers and consumers (in client mode) to reuse the same Netty Channel for the lifecycle of processing the Exchange. This is useful if you need to call a server multiple times in a Camel route and want to use the same network connection. When using this, the channel is not returned to the connection pool until the Exchange is done; or disconnected if the disconnect option is set to true. The reused Channel is stored on the Exchange as an exchange property with the key NettyConstants#NETTY\_CHANNEL which allows you to obtain the channel during routing and use it as well.|false|boolean| +|sync|Setting to set endpoint as one-way or request-response|true|boolean| +|tcpNoDelay|Setting to improve TCP protocol performance|true|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. 
In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|broadcast|Setting to choose Multicast over UDP|false|boolean| +|clientMode|If the clientMode is true, netty consumer will connect the address as a TCP client.|false|boolean| +|reconnect|Used only in clientMode in consumer, the consumer will attempt to reconnect on disconnection if this is enabled|true|boolean| +|reconnectInterval|Used if reconnect and clientMode is enabled. The interval in milli seconds to attempt reconnection|10000|integer| +|backlog|Allows to configure a backlog for netty consumer (server). Note the backlog is just a best effort depending on the OS. Setting this option to a value such as 200, 500 or 1000, tells the TCP stack how long the accept queue can be If this option is not configured, then the backlog depends on OS setting.||integer| +|bossCount|When netty works on nio mode, it uses default bossCount parameter from Netty, which is 1. User can use this option to override the default bossCount from Netty|1|integer| +|bossGroup|Set the BossGroup which could be used for handling the new connection of the server side across the NettyEndpoint||object| +|disconnectOnNoReply|If sync is enabled then this option dictates NettyConsumer if it should disconnect where there is no reply to send back.|true|boolean| +|executorService|To use the given EventExecutorGroup.||object| +|maximumPoolSize|Sets a maximum thread pool size for the netty consumer ordered thread pool. The default size is 2 x cpu\_core plus 1. Setting this value to eg 10 will then use 10 threads unless 2 x cpu\_core plus 1 is a higher value, which then will override and be used. For example if there are 8 cores, then the consumer thread pool will be 17. 
This thread pool is used to route messages received from Netty by Camel. We use a separate thread pool to ensure ordering of messages and also in case some messages will block, then nettys worker threads (event loop) wont be affected.||integer| +|nettyServerBootstrapFactory|To use a custom NettyServerBootstrapFactory||object| +|networkInterface|When using UDP then this option can be used to specify a network interface by its name, such as eth0 to join a multicast group.||string| +|noReplyLogLevel|If sync is enabled this option dictates NettyConsumer which logging level to use when logging a there is no reply to send back.|WARN|object| +|serverClosedChannelExceptionCaughtLogLevel|If the server (NettyConsumer) catches an java.nio.channels.ClosedChannelException then its logged using this logging level. This is used to avoid logging the closed channel exceptions, as clients can disconnect abruptly and then cause a flood of closed exceptions in the Netty server.|DEBUG|object| +|serverExceptionCaughtLogLevel|If the server (NettyConsumer) catches an exception then its logged using this logging level.|WARN|object| +|serverInitializerFactory|To use a custom ServerInitializerFactory||object| +|usingExecutorService|Whether to use ordered thread pool, to ensure events are processed orderly on the same channel.|true|boolean| +|connectTimeout|Time to wait for a socket connection to be available. Value is in milliseconds.|10000|integer| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|requestTimeout|Allows to use a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milli seconds, so eg 30000 is 30 seconds. The requestTimeout is using Netty's ReadTimeoutHandler to trigger the timeout.||integer| +|clientInitializerFactory|To use a custom ClientInitializerFactory||object| +|correlationManager|To use a custom correlation manager to manage how request and reply messages are mapped when using request/reply with the netty producer. This should only be used if you have a way to map requests together with replies such as if there is correlation ids in both the request and reply messages. This can be used if you want to multiplex concurrent messages on the same channel (aka connection) in netty. When doing this you must have a way to correlate the request and reply messages so you can store the right reply on the inflight Camel Exchange before its continued routed. We recommend extending the TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeout and other complexities you otherwise would need to implement as well. See also the producerPoolEnabled option for more details.||object| +|lazyChannelCreation|Channels can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started.|true|boolean| +|producerPoolBlockWhenExhausted|Sets the value for the blockWhenExhausted configuration attribute. It determines whether to block when the borrowObject() method is invoked when the pool is exhausted (the maximum number of active objects has been reached).|true|boolean| +|producerPoolEnabled|Whether producer pool is enabled or not. 
Important: If you turn this off, a single shared connection is used for the producer, also when doing request/reply. That means there is a potential issue with interleaved responses if replies come back out of order. Therefore you need a correlation id in both the request and reply messages, so you can properly correlate the replies to the Camel callback responsible for continuing to process the message in Camel. To do this, implement NettyCamelStateCorrelationManager as the correlation manager and configure it via the correlationManager option. See also the correlationManager option for more details.|true|boolean|
+|producerPoolMaxIdle|Sets the cap on the number of idle instances in the pool.|100|integer|
+|producerPoolMaxTotal|Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. Use a negative value for no limit.|-1|integer|
+|producerPoolMaxWait|Sets the maximum duration (in millis) the borrowObject() method should block before throwing an exception when the pool is exhausted and producerPoolBlockWhenExhausted is true. When less than 0, the borrowObject() method may block indefinitely.|-1|integer|
+|producerPoolMinEvictableIdle|Sets the minimum amount of time (in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor.|300000|integer|
+|producerPoolMinIdle|Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects.||integer|
+|udpConnectionlessSending|This option supports connectionless UDP sending, which is real fire-and-forget. 
A connected UDP send receives a PortUnreachableException if no one is listening on the receiving port.|false|boolean|
+|useByteBuf|If useByteBuf is true, the Netty producer turns the message body into a ByteBuf before sending it out.|false|boolean|
+|allowSerializedHeaders|Only used for TCP when transferExchange is true. When set to true, serializable objects in headers and properties will be added to the exchange. Otherwise Camel will exclude any non-serializable objects and log it at WARN level.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|channelGroup|To use an explicit ChannelGroup.||object|
+|nativeTransport|Whether to use native transport instead of NIO. Native transport takes advantage of the host operating system and is only supported on some platforms. You need to add the netty JAR for the host operating system you are using. See more details at: http://netty.io/wiki/native-transports.html|false|boolean|
+|options|Allows configuring additional netty options using option. as a prefix. For example, option.child.keepAlive=false sets the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used.||object|
+|receiveBufferSize|The TCP/UDP buffer size to be used during inbound communication. Size is in bytes.|65536|integer|
+|receiveBufferSizePredictor|Configures the buffer size predictor. See details in the Netty documentation and this mail thread.||integer|
+|sendBufferSize|The TCP/UDP buffer size to be used during outbound communication. Size is in bytes.|65536|integer|
+|transferExchange|Only used for TCP. 
You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log them at WARN level.|false|boolean|
+|udpByteArrayCodec|For UDP only. If enabled, the byte array codec is used instead of the Java serialization protocol.|false|boolean|
+|unixDomainSocketPath|Path to a unix domain socket to use instead of an inet socket. Host and port parameters will not be used, but are still required; it is OK to set dummy values for them. Must be used with nativeTransport=true and clientMode=false.||string|
+|workerCount|When netty works in NIO mode, it uses its default workerCount parameter (which is cpu\_core\_threads x 2). This option can be used to override Netty's default workerCount.||integer|
+|workerGroup|To use an explicit EventLoopGroup as the boss thread pool. For example, to share a thread pool with multiple consumers or producers. By default each consumer or producer has its own worker pool with 2 x cpu count core threads.||object|
+|allowDefaultCodec|The netty component installs a default codec if both encoder and decoder are null and textline is false. Setting allowDefaultCodec to false prevents the netty component from installing a default codec as the first element in the filter chain.|true|boolean|
+|autoAppendDelimiter|Whether or not to auto-append a missing end delimiter when sending using the textline codec.|true|boolean|
+|decoderMaxLineLength|The max line length to use for the textline codec.|1024|integer|
+|decoders|A list of decoders to be used. You can use a String with values separated by comma, and have the values looked up in the Registry. Just remember to prefix each value with # so Camel knows it should look it up.||string|
+|delimiter|The delimiter to use for the textline codec. 
Possible values are LINE and NULL.|LINE|object|
+|encoders|A list of encoders to be used. You can use a String with values separated by comma, and have the values looked up in the Registry. Just remember to prefix each value with # so Camel knows it should look it up.||string|
+|encoding|The encoding (a charset name) to use for the textline codec. If not provided, Camel will use the JVM default Charset.||string|
+|textline|Only used for TCP. If no codec is specified, you can use this flag to indicate a text line based codec; if not specified or the value is false, then Object Serialization is assumed over TCP - however only Strings are allowed to be serialized by default.|false|boolean|
+|enabledProtocols|Which protocols to enable when using SSL|TLSv1.2,TLSv1.3|string|
+|hostnameVerification|To enable/disable hostname verification on the SSLEngine|false|boolean|
+|keyStoreFile|Client side certificate keystore to be used for encryption||string|
+|keyStoreFormat|Keystore format to be used for payload encryption. Defaults to JKS if not set||string|
+|keyStoreResource|Client side certificate keystore to be used for encryption. Is loaded by default from the classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems.||string|
+|needClientAuth|Configures whether the server needs client authentication when using SSL.|false|boolean|
+|passphrase|Password setting to use in order to encrypt/decrypt payloads sent using SSL||string|
+|securityProvider|Security provider to be used for payload encryption. 
Defaults to SunX509 if not set.||string| +|ssl|Setting to specify whether SSL encryption is applied to this endpoint|false|boolean| +|sslClientCertHeaders|When enabled and in SSL mode, then the Netty consumer will enrich the Camel Message with headers having information about the client certificate such as subject name, issuer name, serial number, and the valid date range.|false|boolean| +|sslContextParameters|To configure security using SSLContextParameters||object| +|sslHandler|Reference to a class that could be used to return an SSL Handler||object| +|trustStoreFile|Server side certificate keystore to be used for encryption||string| +|trustStoreResource|Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems.||string| +|useGlobalSslContextParameters|Enable usage of global SSL context parameters.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|protocol|The protocol to use which can be tcp or udp.||string| +|host|The hostname. For the consumer the hostname is localhost or 0.0.0.0. For the producer the hostname is the remote host to connect to||string| +|port|The host port number||integer| +|disconnect|Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer.|false|boolean| +|keepAlive|Setting to ensure socket is not closed due to inactivity|true|boolean| +|reuseAddress|Setting to facilitate socket multiplexing|true|boolean| +|reuseChannel|This option allows producers and consumers (in client mode) to reuse the same Netty Channel for the lifecycle of processing the Exchange. This is useful if you need to call a server multiple times in a Camel route and want to use the same network connection. 
When using this, the channel is not returned to the connection pool until the Exchange is done, or disconnected if the disconnect option is set to true. The reused Channel is stored on the Exchange as an exchange property with the key NettyConstants#NETTY\_CHANNEL, which allows you to obtain the channel during routing and use it as well.|false|boolean|
+|sync|Setting to set endpoint as one-way or request-response|true|boolean|
+|tcpNoDelay|Setting to improve TCP protocol performance|true|boolean|
+|broadcast|Setting to choose Multicast over UDP|false|boolean|
+|clientMode|If clientMode is true, the netty consumer will connect to the address as a TCP client.|false|boolean|
+|reconnect|Used only in clientMode in the consumer; the consumer will attempt to reconnect on disconnection if this is enabled|true|boolean|
+|reconnectInterval|Used if reconnect and clientMode are enabled. The interval in milliseconds to attempt reconnection|10000|integer|
+|backlog|Allows configuring a backlog for the netty consumer (server). Note the backlog is just a best effort depending on the OS. Setting this option to a value such as 200, 500 or 1000 tells the TCP stack how long the accept queue can be. If this option is not configured, then the backlog depends on the OS setting.||integer|
+|bossCount|When netty works in NIO mode, it uses its default bossCount parameter, which is 1. This option can be used to override Netty's default bossCount|1|integer|
+|bossGroup|Set the BossGroup which could be used for handling the new connection of the server side across the NettyEndpoint||object|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. 
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible in future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
+|disconnectOnNoReply|If sync is enabled, this option dictates whether the NettyConsumer should disconnect when there is no reply to send back.|true|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|nettyServerBootstrapFactory|To use a custom NettyServerBootstrapFactory||object|
+|networkInterface|When using UDP, this option can be used to specify a network interface by its name, such as eth0, to join a multicast group.||string|
+|noReplyLogLevel|If sync is enabled, this option dictates which logging level the NettyConsumer uses when logging that there is no reply to send back.|WARN|object|
+|serverClosedChannelExceptionCaughtLogLevel|If the server (NettyConsumer) catches a java.nio.channels.ClosedChannelException, it is logged using this logging level. 
This is used to avoid logging closed channel exceptions, as clients can disconnect abruptly and cause a flood of closed exceptions in the Netty server.|DEBUG|object|
+|serverExceptionCaughtLogLevel|If the server (NettyConsumer) catches an exception, it is logged using this logging level.|WARN|object|
+|serverInitializerFactory|To use a custom ServerInitializerFactory||object|
+|usingExecutorService|Whether to use an ordered thread pool, to ensure events are processed orderly on the same channel.|true|boolean|
+|connectTimeout|Time to wait for a socket connection to be available. Value is in milliseconds.|10000|integer|
+|requestTimeout|Allows using a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milliseconds, so e.g. 30000 is 30 seconds. The requestTimeout uses Netty's ReadTimeoutHandler to trigger the timeout.||integer|
+|clientInitializerFactory|To use a custom ClientInitializerFactory||object|
+|correlationManager|To use a custom correlation manager to manage how request and reply messages are mapped when using request/reply with the Netty producer. This should only be used if you have a way to map requests to replies, such as correlation ids in both the request and reply messages. It can be used to multiplex concurrent messages on the same channel (aka connection) in Netty. When doing this, you must have a way to correlate the request and reply messages, so you can store the right reply on the in-flight Camel Exchange before routing continues. We recommend extending TimeoutCorrelationManagerSupport when building custom correlation managers, as it provides support for timeouts and other complexities you would otherwise need to implement yourself. 
See also the producerPoolEnabled option for more details.||object|
+|lazyChannelCreation|Channels can be lazily created to avoid exceptions if the remote server is not up and running when the Camel producer is started.|true|boolean|
+|lazyStartProducer|Whether the producer should be started lazily (on the first message). By starting lazily, CamelContext and routes can start up in situations where a producer may otherwise fail during startup and cause the route to fail to start. By deferring the startup, the failure can instead be handled during message routing via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time.|false|boolean|
+|producerPoolBlockWhenExhausted|Sets the value for the blockWhenExhausted configuration attribute. It determines whether to block when the borrowObject() method is invoked when the pool is exhausted (the maximum number of active objects has been reached).|true|boolean|
+|producerPoolEnabled|Whether the producer pool is enabled. Important: If you turn this off, a single shared connection is used for the producer, also when doing request/reply. That means there is a potential issue with interleaved responses if replies come back out of order. Therefore you need a correlation id in both the request and reply messages, so you can properly correlate the replies to the Camel callback responsible for continuing to process the message in Camel. To do this, implement NettyCamelStateCorrelationManager as the correlation manager and configure it via the correlationManager option. 
See also the correlationManager option for more details.|true|boolean|
+|producerPoolMaxIdle|Sets the cap on the number of idle instances in the pool.|100|integer|
+|producerPoolMaxTotal|Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. Use a negative value for no limit.|-1|integer|
+|producerPoolMaxWait|Sets the maximum duration (in millis) the borrowObject() method should block before throwing an exception when the pool is exhausted and producerPoolBlockWhenExhausted is true. When less than 0, the borrowObject() method may block indefinitely.|-1|integer|
+|producerPoolMinEvictableIdle|Sets the minimum amount of time (in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor.|300000|integer|
+|producerPoolMinIdle|Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects.||integer|
+|udpConnectionlessSending|This option supports connectionless UDP sending, which is real fire-and-forget. A connected UDP send receives a PortUnreachableException if no one is listening on the receiving port.|false|boolean|
+|useByteBuf|If useByteBuf is true, the Netty producer turns the message body into a ByteBuf before sending it out.|false|boolean|
+|allowSerializedHeaders|Only used for TCP when transferExchange is true. When set to true, serializable objects in headers and properties will be added to the exchange. Otherwise Camel will exclude any non-serializable objects and log it at WARN level.|false|boolean|
+|channelGroup|To use an explicit ChannelGroup.||object|
+|nativeTransport|Whether to use native transport instead of NIO. Native transport takes advantage of the host operating system and is only supported on some platforms. You need to add the netty JAR for the host operating system you are using. 
See more details at: http://netty.io/wiki/native-transports.html|false|boolean|
+|options|Allows configuring additional netty options using option. as a prefix. For example, option.child.keepAlive=false sets the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used.||object|
+|receiveBufferSize|The TCP/UDP buffer size to be used during inbound communication. Size is in bytes.|65536|integer|
+|receiveBufferSizePredictor|Configures the buffer size predictor. See details in the Netty documentation and this mail thread.||integer|
+|sendBufferSize|The TCP/UDP buffer size to be used during outbound communication. Size is in bytes.|65536|integer|
+|synchronous|Sets whether synchronous processing should be strictly used|false|boolean|
+|transferExchange|Only used for TCP. You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log them at WARN level.|false|boolean|
+|udpByteArrayCodec|For UDP only. If enabled, the byte array codec is used instead of the Java serialization protocol.|false|boolean|
+|unixDomainSocketPath|Path to a unix domain socket to use instead of an inet socket. Host and port parameters will not be used, but are still required; it is OK to set dummy values for them. Must be used with nativeTransport=true and clientMode=false.||string|
+|workerCount|When netty works in NIO mode, it uses its default workerCount parameter (which is cpu\_core\_threads x 2). This option can be used to override Netty's default workerCount.||integer|
+|workerGroup|To use an explicit EventLoopGroup as the boss thread pool. For example, to share a thread pool with multiple consumers or producers. 
By default each consumer or producer has its own worker pool with 2 x cpu count core threads.||object|
+|allowDefaultCodec|The netty component installs a default codec if both encoder and decoder are null and textline is false. Setting allowDefaultCodec to false prevents the netty component from installing a default codec as the first element in the filter chain.|true|boolean|
+|autoAppendDelimiter|Whether or not to auto-append a missing end delimiter when sending using the textline codec.|true|boolean|
+|decoderMaxLineLength|The max line length to use for the textline codec.|1024|integer|
+|decoders|A list of decoders to be used. You can use a String with values separated by comma, and have the values looked up in the Registry. Just remember to prefix each value with # so Camel knows it should look it up.||string|
+|delimiter|The delimiter to use for the textline codec. Possible values are LINE and NULL.|LINE|object|
+|encoders|A list of encoders to be used. You can use a String with values separated by comma, and have the values looked up in the Registry. Just remember to prefix each value with # so Camel knows it should look it up.||string|
+|encoding|The encoding (a charset name) to use for the textline codec. If not provided, Camel will use the JVM default Charset.||string|
+|textline|Only used for TCP. If no codec is specified, you can use this flag to indicate a text line based codec; if not specified or the value is false, then Object Serialization is assumed over TCP - however only Strings are allowed to be serialized by default.|false|boolean|
+|enabledProtocols|Which protocols to enable when using SSL|TLSv1.2,TLSv1.3|string|
+|hostnameVerification|To enable/disable hostname verification on the SSLEngine|false|boolean|
+|keyStoreFile|Client side certificate keystore to be used for encryption||string|
+|keyStoreFormat|Keystore format to be used for payload encryption. 
Defaults to JKS if not set||string|
+|keyStoreResource|Client side certificate keystore to be used for encryption. Is loaded by default from the classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems.||string|
+|needClientAuth|Configures whether the server needs client authentication when using SSL.|false|boolean|
+|passphrase|Password setting to use in order to encrypt/decrypt payloads sent using SSL||string|
+|securityProvider|Security provider to be used for payload encryption. Defaults to SunX509 if not set.||string|
+|ssl|Setting to specify whether SSL encryption is applied to this endpoint|false|boolean|
+|sslClientCertHeaders|When enabled and in SSL mode, the Netty consumer will enrich the Camel Message with headers holding information about the client certificate, such as subject name, issuer name, serial number, and the valid date range.|false|boolean|
+|sslContextParameters|To configure security using SSLContextParameters||object|
+|sslHandler|Reference to a class that could be used to return an SSL Handler||object|
+|trustStoreFile|Server side certificate keystore to be used for encryption||string|
+|trustStoreResource|Server side certificate keystore to be used for encryption. Is loaded by default from the classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems.||string|
diff --git a/camel-nitrite.md b/camel-nitrite.md
new file mode 100644
index 0000000000000000000000000000000000000000..8f22dd34ea65a00568a89f941d4e5727d4c609f9
--- /dev/null
+++ b/camel-nitrite.md
@@ -0,0 +1,254 @@
+# Nitrite
+
+**Since Camel 3.0**
+
+**Both producer and consumer are supported**
+
+The Nitrite component is used to access the [Nitrite NoSQL
+database](https://github.com/dizitart/nitrite-database).
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component. 
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-nitrite</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# Producer operations
+
+The following operations are available to specify as
+`NitriteConstants.OPERATION` when producing to Nitrite.
+
+|Class|Type|Parameters|Description|
+|---|---|---|---|
+|FindCollectionOperation|collection|Filter(optional), FindOptions(optional)|Find documents in the collection by Filter. If not specified, returns all documents|
+|RemoveCollectionOperation|collection|Filter(required), RemoveOptions(optional)|Remove documents matching Filter|
+|UpdateCollectionOperation|collection|Filter(required), UpdateOptions(optional), Document(optional)|Update documents matching Filter. If Document is not specified, the message body is used|
+|CreateIndexOperation|common|field:String(required), IndexOptions(required)|Create index with IndexOptions on field|
+|DropIndexOperation|common|field:String(required)|Drop index on field|
+|ExportDatabaseOperation|common|ExportOptions(optional)|Export the full database to JSON and store the result in the body - see the Nitrite docs for details about the format|
+|GetAttributesOperation|common||Get attributes of a collection|
+|GetByIdOperation|common|NitriteId|Get Document by _id|
+|ImportDatabaseOperation|common||Import the full database from JSON in the body|
+|InsertOperation|common|payload(optional)|Insert a document to a collection or an object to an ObjectRepository. If the parameter is not specified, the message body is inserted|
+|ListIndicesOperation|common||List indexes in the collection and store List<Index> in the message body|
+|RebuildIndexOperation|common|field(required), async(optional)|Rebuild an existing index on field|
+|UpdateOperation|common|payload(optional)|Update a document in a collection or an object in an ObjectRepository. If the parameter is not specified, the message body is used|
+|UpsertOperation|common|payload(optional)|Upsert (insert or update) a document in a collection or an object in an ObjectRepository. If the parameter is not specified, the message body is used|
+|FindRepositoryOperation|repository|ObjectFilter(optional), FindOptions(optional)|Find objects in an ObjectRepository by ObjectFilter. If not specified, returns all objects in the repository|
+|RemoveRepositoryOperation|repository|ObjectFilter(required), RemoveOptions(optional)|Remove objects in an ObjectRepository matched by ObjectFilter|
+|UpdateRepositoryOperation|repository|ObjectFilter(required), UpdateOptions(optional), payload(optional)|Update objects matching ObjectFilter. If payload is not specified, the message body is used|
+
+# Examples
+
+## Consume changes in a collection
+
+    from("nitrite:/path/to/database.db?collection=myCollection")
+        .to("log:change");
+
+## Consume changes in an object repository
+
+    from("nitrite:/path/to/database.db?repositoryClass=my.project.MyPersistentObject")
+        .to("log:change");
+
+    package my.project;
+
+    @Indices({
+        @Index(value = "key1", type = IndexType.NonUnique)
+    })
+    public class MyPersistentObject {
+        @Id
+        private long id;
+        private String key1;
+        // Getters, setters
+    }
+
+## Insert or update document
+
+    from("direct:upsert")
+        .setBody(constant(Document.createDocument("key1", "val1")))
+        .to("nitrite:/path/to/database.db?collection=myCollection");
+
+## Get Document by id
+
+    from("direct:getByID")
+        .setHeader(NitriteConstants.OPERATION, () -> new GetByIdOperation(NitriteId.createId(123L)))
+        .to("nitrite:/path/to/database.db?collection=myCollection")
+        .to("log:result");
+
+## Find Document in collection
+
+    from("direct:find")
+        .setHeader(NitriteConstants.OPERATION, () -> new FindCollectionOperation(Filters.eq("myKey", "withValue")))
+        .to("nitrite:/path/to/database.db?collection=myCollection")
+        .to("log:result");
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible in future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazily (on the first message). By starting lazily, CamelContext and routes can start up in situations where a producer may otherwise fail during startup and cause the route to fail to start. By deferring the startup, the failure can instead be handled during message routing via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|database|Path to the database file. Will be created if it does not exist.||string|
+|collection|Name of the Nitrite collection. Cannot be used in combination with the repositoryClass option.||string|
+|repositoryClass|Class of the Nitrite ObjectRepository. Cannot be used in combination with the collection option.||string|
+|repositoryName|Optional name of the ObjectRepository. Can only be used in combination with repositoryClass; otherwise it has no effect||string|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. 
Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible in future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|lazyStartProducer|Whether the producer should be started lazily (on the first message). By starting lazily, CamelContext and routes can start up in situations where a producer may otherwise fail during startup and cause the route to fail to start. By deferring the startup, the failure can instead be handled during message routing via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time.|false|boolean|
+|password|Password for the Nitrite database. Required if the username option is specified.||string|
+|username|Username for the Nitrite database. 
The database is not secured if this option is not specified.||string|
diff --git a/camel-oaipmh.md b/camel-oaipmh.md
new file mode 100644
index 0000000000000000000000000000000000000000..4bc5c249036c37eeeaf684ed13bf834109576f2e
--- /dev/null
+++ b/camel-oaipmh.md
@@ -0,0 +1,99 @@
+# Oaipmh
+
+**Since Camel 3.5**
+
+**Both producer and consumer are supported**
+
+The OAI-PMH component is used to harvest OAI-PMH data providers. It
+allows making requests to OAI-PMH endpoints using all verbs supported by
+the protocol.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-oaipmh</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    oaipmh:url[?options]
+
+# Usage
+
+The OAI-PMH component supports both consumer and producer endpoints.
+
+# Producer Example
+
+The following is a basic example of how to send a request to an OAI-PMH
+server, in Java DSL:
+
+    from("direct:start").to("oaipmh:baseUrlRepository/oai/request");
+
+The result is a set of pages in XML format with all the records of the
+consulted repository.
+
+# Consumer Example
+
+The following is a basic example of how to receive all messages from an
+OAI-PMH server, in Java DSL:
+
+    from("oaipmh:baseUrlRepository/oai/request")
+        .to("mock:result");
+
+# More Information
+
+For more details see the [OAI-PMH
+documentation](http://www.openarchives.org/pmh/).
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. 
In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|baseUrl|Base URL of the repository to which the request is made through the OAI-PMH protocol||string| +|from|Specifies a lower bound for datestamp-based selective harvesting. 
UTC DateTime value||string|
|identifier|Identifier of the requested resources. Applicable only with certain verbs||string|
|metadataPrefix|Specifies the metadataPrefix of the format that should be included in the metadata part of the returned records.|oai\_dc|string|
|set|Specifies membership as a criterion for set-based selective harvesting||string|
|until|Specifies an upper bound for datestamp-based selective harvesting. UTC DateTime value.||string|
|verb|Request name supported by the OAI-PMH protocol|ListRecords|string|
|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use.
By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored.||object|
|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and routed in Camel.||object|
|onlyFirst|Returns the response of a single request. Otherwise, requests are made until there is no more data to return.|false|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|backoffErrorThreshold|The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in.||integer|
|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier should kick in.||integer|
|backoffMultiplier|To let the scheduled polling consumer back off if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt happens again.
When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. 
See ScheduledExecutorService in JDK for details.|true|boolean|
|ignoreSSLWarnings|Ignore SSL certificate warnings|false|boolean|
|ssl|Causes the defined URL to make an HTTPS request|false|boolean|

diff --git a/camel-olingo2.md b/camel-olingo2.md new file mode 100644 index 0000000000000000000000000000000000000000..08748084c23c69fbeb5bbdda93e8864fa4f90402 --- /dev/null +++ b/camel-olingo2.md @@ -0,0 +1,244 @@

# Olingo2

**Since Camel 2.14**

**Both producer and consumer are supported**

Starting with Camel 4.0, the project has migrated to Jakarta EE. Some parts of Apache Olingo2 may depend on J2EE, which may result in unexpected behavior and other runtime problems.

The Olingo2 component uses [Apache Olingo](http://olingo.apache.org/) version 2.0 APIs to interact with OData 2.0 compliant services. A number of popular commercial and enterprise vendors and products support the OData protocol. A sample list of supporting products can be found on the OData [website](http://www.odata.org/ecosystem/).

The Olingo2 component supports reading feeds, delta feeds, entities, simple and complex properties, links, and counts, using custom and OData system query parameters. It supports updating entities, properties, and association links. It also supports submitting queries and change requests as a single OData batch operation.

The component supports configuring HTTP connection parameters and headers for the OData service connection. This allows configuring the use of SSL, OAuth2.0, etc. as required by the target OData service.

Maven users will need to add the following dependency to their `pom.xml` for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-olingo2</artifactId>
        <version>${camel-version}</version>
    </dependency>

# URI format

    olingo2://endpoint/?[options]

# Endpoint HTTP Headers

The component-level configuration property **httpHeaders** supplies static HTTP header information. However, some systems require dynamic header information to be passed to and received from the endpoint.
A sample use case would be systems that require dynamic security tokens. The **endpointHttpHeaders** and **responseHttpHeaders** endpoint properties provide this capability. Set headers that need to be passed to the endpoint in the **`CamelOlingo2.endpointHttpHeaders`** property, and the response headers will be returned in the **`CamelOlingo2.responseHttpHeaders`** property. Both properties are of type `java.util.Map`.

# OData Resource Type Mapping

The result of the **read** endpoint and the data type of the **data** option depend on the OData resource being queried, created, or modified.
|OData Resource Type|Resource URI from resourcePath and keyPredicate|In or Out Body Type|
|---|---|---|
|Entity data model|`$metadata`|`org.apache.olingo.odata2.api.edm.Edm`|
|Service document|`/`|`org.apache.olingo.odata2.api.servicedocument.ServiceDocument`|
|OData feed|`<entity-set>`|`org.apache.olingo.odata2.api.ep.feed.ODataFeed`|
|OData entry|`<entity-set>(<key-predicate>)`|`org.apache.olingo.odata2.api.ep.entry.ODataEntry` for Out body (response); `java.util.Map<String, Object>` for In body (request)|
|Simple property|`<entity-set>(<key-predicate>)/<simple-property>`|The appropriate Java data type as described by Olingo EdmProperty|
|Simple property value|`<entity-set>(<key-predicate>)/<simple-property>/$value`|The appropriate Java data type as described by Olingo EdmProperty|
|Complex property|`<entity-set>(<key-predicate>)/<complex-property>`|`java.util.Map<String, Object>`|
|Zero or one association link|`<entity-set>(<key-predicate>)/$links/<one-to-one-entity-set-property>`|`String` for response; `java.util.Map<String, Object>` with key property names and values for request|
|Zero or many association links|`<entity-set>(<key-predicate>)/$links/<one-to-many-entity-set-property>`|`java.util.List<String>` for response; `java.util.List<java.util.Map<String, Object>>` containing a list of key property names and values for request|
|Count|`<resource-uri>/$count`|`java.lang.Long`|
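As the mapping table above notes, an OData entry sent to a create or update endpoint is passed as a `java.util.Map<String, Object>` keyed by property name. A minimal sketch of building such a body (the property names `Id` and `Name` are illustrative, not taken from a real service):

```java
import java.util.HashMap;
import java.util.Map;

// Builds the In body for e.g. "olingo2://create/Manufacturers":
// a Map of property names to values, per the resource type mapping above.
public class ManufacturerBody {
    public static Map<String, Object> manufacturer(String id, String name) {
        Map<String, Object> entry = new HashMap<>();
        entry.put("Id", id);     // illustrative property name
        entry.put("Name", name); // illustrative property name
        return entry;
    }
}
```

The map would typically be set as the message body before routing to the create endpoint, e.g. `.setBody(constant(ManufacturerBody.manufacturer("1", "Acme")))`.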
# Samples

The following route reads the top 5 entries from the Manufacturers feed, ordered by ascending Name property.

    from("direct:...")
        .setHeader("CamelOlingo2.$top", "5")
        .to("olingo2://read/Manufacturers?orderBy=Name%20asc");

The following route reads a Manufacturers entry using the key property value in the incoming **id** header.

    from("direct:...")
        .setHeader("CamelOlingo2.keyPredicate", header("id"))
        .to("olingo2://read/Manufacturers");

The following route creates a Manufacturers entry using the `java.util.Map` in the message body.

    from("direct:...")
        .to("olingo2://create/Manufacturers");

The following route polls the Manufacturers [delta feed](http://olingo.apache.org/doc/tutorials/deltaClient.html) every 30 seconds. The bean **blah** updates the bean **paramsBean** to add an updated **!deltatoken** property with the value returned in the **ODataDeltaFeed** result. Since the initial delta token is not known, the consumer endpoint will produce an **ODataFeed** value the first time, and **ODataDeltaFeed** on subsequent polls.

    from("olingo2://read/Manufacturers?queryParams=#paramsBean&timeUnit=SECONDS&delay=30")
        .to("bean:blah");

## Component Configurations

|Name|Description|Default|Type|
|---|---|---|---|
|configuration|To use the shared configuration||object|
|connectTimeout|HTTP connection creation timeout in milliseconds, defaults to 30,000 (30 seconds)|30000|integer|
|contentType|Content-Type header value can be used to specify JSON or XML message format, defaults to application/json;charset=utf-8|application/json;charset=utf-8|string|
|entityProviderReadProperties|Custom entity provider read properties applied to all read operations.||object|
|entityProviderWriteProperties|Custom entity provider write properties applied to create, update, patch, batch and merge operations. For instance, users can skip the Json object wrapper or enable content-only mode when sending request data.
A service URI set in the properties will always be overwritten by the serviceUri configuration parameter. Consider using the serviceUri configuration parameter instead of setting the respective write property here.||object|
|filterAlreadySeen|Set this to true to filter out results that have already been communicated by this component.|false|boolean|
|httpHeaders|Custom HTTP headers to inject into every request, this could include OAuth tokens, etc.||object|
|proxy|HTTP proxy server configuration||object|
|serviceUri|Target OData service base URI, e.g. http://services.odata.org/OData/OData.svc||string|
|socketTimeout|HTTP request timeout in milliseconds, defaults to 30,000 (30 seconds)|30000|integer|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
|splitResult|For endpoints that return an array or collection, a consumer endpoint will map every element to distinct messages, unless splitResult is set to false.|true|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started.
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|httpAsyncClientBuilder|Custom HTTP async client builder for more complex HTTP client configuration, overrides connectionTimeout, socketTimeout, proxy and sslContext. Note that a socketTimeout MUST be specified in the builder, otherwise OData requests could block indefinitely||object| +|httpClientBuilder|Custom HTTP client builder for more complex HTTP client configuration, overrides connectionTimeout, socketTimeout, proxy and sslContext. 
Note that a socketTimeout MUST be specified in the builder, otherwise OData requests could block indefinitely||object|
|sslContextParameters|To configure security using SSLContextParameters||object|
|useGlobalSslContextParameters|Enable usage of global SSL context parameters.|false|boolean|

## Endpoint Configurations

|Name|Description|Default|Type|
|---|---|---|---|
|apiName|What kind of operation to perform||object|
|methodName|What sub operation to use for the selected operation||string|
|connectTimeout|HTTP connection creation timeout in milliseconds, defaults to 30,000 (30 seconds)|30000|integer|
|contentType|Content-Type header value can be used to specify JSON or XML message format, defaults to application/json;charset=utf-8|application/json;charset=utf-8|string|
|entityProviderReadProperties|Custom entity provider read properties applied to all read operations.||object|
|entityProviderWriteProperties|Custom entity provider write properties applied to create, update, patch, batch and merge operations. For instance, users can skip the Json object wrapper or enable content-only mode when sending request data. A service URI set in the properties will always be overwritten by the serviceUri configuration parameter. Consider using the serviceUri configuration parameter instead of setting the respective write property here.||object|
|filterAlreadySeen|Set this to true to filter out results that have already been communicated by this component.|false|boolean|
|httpHeaders|Custom HTTP headers to inject into every request, this could include OAuth tokens, etc.||object|
|inBody|Sets the name of a parameter to be passed in the exchange In Body||string|
|proxy|HTTP proxy server configuration||object|
|serviceUri|Target OData service base URI, e.g.
http://services.odata.org/OData/OData.svc||string| +|socketTimeout|HTTP request timeout in milliseconds, defaults to 30,000 (30 seconds)|30000|integer| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|splitResult|For endpoints that return an array or collection, a consumer endpoint will map every element to distinct messages, unless splitResult is set to false.|true|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|httpAsyncClientBuilder|Custom HTTP async client builder for more complex HTTP client configuration, overrides connectionTimeout, socketTimeout, proxy and sslContext. Note that a socketTimeout MUST be specified in the builder, otherwise OData requests could block indefinitely||object| +|httpClientBuilder|Custom HTTP client builder for more complex HTTP client configuration, overrides connectionTimeout, socketTimeout, proxy and sslContext. 
Note that a socketTimeout MUST be specified in the builder, otherwise OData requests could block indefinitely||object|
|backoffErrorThreshold|The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in.||integer|
|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier should kick in.||integer|
|backoffMultiplier|To let the scheduled polling consumer back off if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt happens again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer|
|delay|Milliseconds before the next poll.|500|integer|
|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean|
|initialDelay|Milliseconds before the first poll starts.|1000|integer|
|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer|
|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object|
|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object|
|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component.
Use value spring or quartz for the built-in scheduler|none|object|
|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz or Spring based schedulers.||object|
|startScheduler|Whether the scheduler should be auto started.|true|boolean|
|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object|
|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean|
|sslContextParameters|To configure security using SSLContextParameters||object|

diff --git a/camel-olingo4.md b/camel-olingo4.md new file mode 100644 index 0000000000000000000000000000000000000000..f164dc24e1dbc321c8075d12bc9810b11719e515 --- /dev/null +++ b/camel-olingo4.md @@ -0,0 +1,216 @@

# Olingo4

**Since Camel 2.19**

**Both producer and consumer are supported**

The Olingo4 component uses [Apache Olingo](http://olingo.apache.org/) version 4.0 APIs to interact with OData 4.0 compliant services. Since version 4.0, OData is an OASIS standard, and a number of popular open source and commercial vendors and products support this protocol. A sample list of supporting products can be found on the OData [website](http://www.odata.org/ecosystem/).

The Olingo4 component supports reading entity sets, entities, simple and complex properties, and counts, using custom and OData system query parameters. It supports updating entities and properties. It also supports submitting queries and change requests as a single OData batch operation.

The component supports configuring HTTP connection parameters and headers for the OData service connection. This allows configuring the use of SSL, OAuth2.0, etc. as required by the target OData service.
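The SSL support mentioned above is configured through the sslContextParameters option documented in the tables below. A minimal sketch (the truststore path and password are placeholders, and the class name is hypothetical) of building an SSLContextParameters that trusts the OData service's certificate:

```java
import org.apache.camel.support.jsse.KeyStoreParameters;
import org.apache.camel.support.jsse.SSLContextParameters;
import org.apache.camel.support.jsse.TrustManagersParameters;

// Sketch: build SSLContextParameters from a trust store holding the
// OData service's certificate (placeholder path and password).
public class OlingoSslConfig {
    public static SSLContextParameters sslParams() {
        KeyStoreParameters truststore = new KeyStoreParameters();
        truststore.setResource("/path/to/truststore.jks"); // placeholder
        truststore.setPassword("changeit");                // placeholder

        TrustManagersParameters trustManagers = new TrustManagersParameters();
        trustManagers.setKeyStore(truststore);

        SSLContextParameters ssl = new SSLContextParameters();
        ssl.setTrustManagers(trustManagers);
        return ssl;
    }
}
```

The returned object would then be bound in the registry (e.g. under the name `sslParams`) and referenced from the endpoint URI with `sslContextParameters=#sslParams`.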
Maven users will need to add the following dependency to their `pom.xml` for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-olingo4</artifactId>
        <version>x.x.x</version>
    </dependency>

# URI format

    olingo4://endpoint/?[options]

# Endpoint HTTP Headers

The component-level configuration property **httpHeaders** supplies static HTTP header information. However, some systems require dynamic header information to be passed to and received from the endpoint. A sample use case would be systems that require dynamic security tokens. The **endpointHttpHeaders** and **responseHttpHeaders** endpoint properties provide this capability. Set headers that need to be passed to the endpoint in the **`CamelOlingo4.endpointHttpHeaders`** property, and the response headers will be returned in the **`CamelOlingo4.responseHttpHeaders`** property. Both properties are of type **`java.util.Map`**.

# OData Resource Type Mapping

The result of the **read** endpoint and the data type of the **data** option depend on the OData resource being queried, created, or modified.
|OData Resource Type|Resource URI from resourcePath and keyPredicate|In or Out Body Type|
|---|---|---|
|Entity data model|`$metadata`|`org.apache.olingo.commons.api.edm.Edm`|
|Service document|`/`|`org.apache.olingo.client.api.domain.ClientServiceDocument`|
|OData entity set|`<entity-set>`|`org.apache.olingo.client.api.domain.ClientEntitySet`|
|OData entity|`<entity-set>(<key-predicate>)`|`org.apache.olingo.client.api.domain.ClientEntity` for Out body (response); `java.util.Map<String, Object>` for In body (request)|
|Simple property|`<entity-set>(<key-predicate>)/<simple-property>`|`org.apache.olingo.client.api.domain.ClientPrimitiveValue`|
|Simple property value|`<entity-set>(<key-predicate>)/<simple-property>/$value`|`org.apache.olingo.client.api.domain.ClientPrimitiveValue`|
|Complex property|`<entity-set>(<key-predicate>)/<complex-property>`|`org.apache.olingo.client.api.domain.ClientComplexValue`|
|Count|`<resource-uri>/$count`|`java.lang.Long`|
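The dynamic header mechanism described in the Endpoint HTTP Headers section can be sketched as a route (the route name and token value are placeholders; this assumes camel-core and camel-olingo4 on the classpath):

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.camel.builder.RouteBuilder;

// Sketch: pass per-request HTTP headers to an Olingo4 endpoint and
// read the response headers back. The Authorization value is a placeholder.
public class DynamicHeadersRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:readPeople")
            .process(exchange -> {
                Map<String, String> httpHeaders = new HashMap<>();
                httpHeaders.put("Authorization", "Bearer my-token"); // placeholder token
                exchange.getIn().setHeader("CamelOlingo4.endpointHttpHeaders", httpHeaders);
            })
            .to("olingo4://read/People")
            .process(exchange -> {
                // Response headers come back as a java.util.Map
                Map<?, ?> responseHeaders = exchange.getMessage()
                        .getHeader("CamelOlingo4.responseHttpHeaders", Map.class);
                System.out.println("Response headers: " + responseHeaders);
            });
    }
}
```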
# Samples

The following route reads the top 5 entries from the People entity, ordered by ascending FirstName property.

    from("direct:...")
        .setHeader("CamelOlingo4.$top", "5")
        .to("olingo4://read/People?orderBy=FirstName%20asc");

The following route reads an Airports entity using the key property value in the incoming **id** header.

    from("direct:...")
        .setHeader("CamelOlingo4.keyPredicate", header("id"))
        .to("olingo4://read/Airports");

The following route creates a People entity using the **ClientEntity** in the message body.

    from("direct:...")
        .to("olingo4://create/People");

The following route calls an OData action using the **ClientEntity** in the message body. The body may be null for actions that don't expect an input.

    from("direct:...")
        .to("olingo4://action/People");

## Component Configurations

|Name|Description|Default|Type|
|---|---|---|---|
|configuration|To use the shared configuration||object|
|connectTimeout|HTTP connection creation timeout in milliseconds, defaults to 30,000 (30 seconds)|30000|integer|
|contentType|Content-Type header value can be used to specify JSON or XML message format, defaults to application/json;charset=utf-8|application/json;charset=utf-8|string|
|filterAlreadySeen|Set this to true to filter out results that have already been communicated by this component.|false|boolean|
|httpHeaders|Custom HTTP headers to inject into every request, this could include OAuth tokens, etc.||object|
|proxy|HTTP proxy server configuration||object|
|serviceUri|Target OData service base URI, e.g.
http://services.odata.org/OData/OData.svc||string| +|socketTimeout|HTTP request timeout in milliseconds, defaults to 30,000 (30 seconds)|30000|integer| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|splitResult|For endpoints that return an array or collection, a consumer endpoint will map every element to distinct messages, unless splitResult is set to false.|true|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|httpAsyncClientBuilder|Custom HTTP async client builder for more complex HTTP client configuration, overrides connectionTimeout, socketTimeout, proxy and sslContext. Note that a socketTimeout MUST be specified in the builder, otherwise OData requests could block indefinitely||object| +|httpClientBuilder|Custom HTTP client builder for more complex HTTP client configuration, overrides connectionTimeout, socketTimeout, proxy and sslContext. Note that a socketTimeout MUST be specified in the builder, otherwise OData requests could block indefinitely||object| +|sslContextParameters|To configure security using SSLContextParameters||object| +|useGlobalSslContextParameters|Enable usage of global SSL context parameters.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|apiName|What kind of operation to perform||object| +|methodName|What sub operation to use for the selected operation||string| +|connectTimeout|HTTP connection creation timeout in milliseconds, defaults to 30,000 (30 seconds)|30000|integer| +|contentType|Content-Type header value can be used to specify JSON or XML message format, defaults to application/json;charset=utf-8|application/json;charset=utf-8|string| +|filterAlreadySeen|Set this to true to filter out results that have already been communicated by this component.|false|boolean| +|httpHeaders|Custom HTTP headers to inject into every request, this could include OAuth tokens, etc.||object| +|inBody|Sets the name of a parameter to be passed in the exchange In Body||string| +|proxy|HTTP proxy server configuration||object| +|serviceUri|Target OData service base URI, e.g. 
http://services.odata.org/OData/OData.svc||string| +|socketTimeout|HTTP request timeout in milliseconds, defaults to 30,000 (30 seconds)|30000|integer| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|splitResult|For endpoints that return an array or collection, a consumer endpoint will map every element to distinct messages, unless splitResult is set to false.|true|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|httpAsyncClientBuilder|Custom HTTP async client builder for more complex HTTP client configuration, overrides connectionTimeout, socketTimeout, proxy and sslContext. Note that a socketTimeout MUST be specified in the builder, otherwise OData requests could block indefinitely||object| +|httpClientBuilder|Custom HTTP client builder for more complex HTTP client configuration, overrides connectionTimeout, socketTimeout, proxy and sslContext. 
Note that a socketTimeout MUST be specified in the builder, otherwise OData requests could block indefinitely||object| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. 
Use value spring or quartz for built in scheduler|none|object|
+|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object|
+|startScheduler|Whether the scheduler should be auto started.|true|boolean|
+|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object|
+|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean|
+|sslContextParameters|To configure security using SSLContextParameters||object|
diff --git a/camel-opensearch.md b/camel-opensearch.md
new file mode 100644
index 0000000000000000000000000000000000000000..850a911d30c6f93a8e6da137ee38bc5c60a3a5c7
--- /dev/null
+++ b/camel-opensearch.md
@@ -0,0 +1,324 @@
+# Opensearch
+
+**Since Camel 4.0**
+
+**Only producer is supported**
+
+The OpenSearch component allows you to interface with an
+[OpenSearch](https://opensearch.org/) 2.8.x API using the Java API
+Client library.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-opensearch</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    opensearch://clusterName[?options]
+
+# Message Operations
+
+The following OpenSearch operations are currently supported. Set an
+endpoint URI option or exchange header with a key of "operation" and a
+value set to one of the following. Some operations also require other
+parameters or the message body to be set.
+|Operation|Message body|Description|
+|---|---|---|
+|Index|`Map`, `String`, `byte[]`, `Reader`, `InputStream` or `IndexRequest.Builder` content to index|Adds content to an index and returns the content's indexId in the body. You can set the name of the target index by setting the message header with the key "indexName". You can set the indexId by setting the message header with the key "indexId".|
+|GetById|`String` or `GetRequest.Builder` index id of content to retrieve|Retrieves the document corresponding to the given index id and returns a GetResponse object in the body. You can set the name of the target index by setting the message header with the key "indexName". You can set the type of document by setting the message header with the key "documentClass".|
+|Delete|`String` or `DeleteRequest.Builder` index id of content to delete|Deletes the document with the given index id and returns a Result object in the body. You can set the name of the target index by setting the message header with the key "indexName".|
+|DeleteIndex|`String` or `DeleteIndexRequest.Builder` index name of the index to delete|Deletes the specified index and returns a status code in the body. You can set the name of the target index by setting the message header with the key "indexName".|
+|Bulk|`Iterable` or `BulkRequest.Builder` of any type that is already accepted (`DeleteOperation.Builder` for delete operation, `UpdateOperation.Builder` for update operation, `CreateOperation.Builder` for create operation, `byte[]`, `InputStream`, `String`, `Reader`, `Map` or any document type for index operation)|Adds/Updates/Deletes content from/to an index and returns a List<BulkResponseItem> object in the body. You can set the name of the target index by setting the message header with the key "indexName".|
+|Search|`Map`, `String` or `SearchRequest.Builder`|Searches the content with the map of query string. You can set the name of the target index by setting the message header with the key "indexName". You can set the number of hits to return by setting the message header with the key "size". You can set the starting document offset by setting the message header with the key "from".|
+|MultiSearch|`MsearchRequest.Builder`|Multiple searches in one request.|
+|MultiGet|`Iterable<String>` or `MgetRequest.Builder` the id of the document to retrieve|Multiple gets in one request. You can set the name of the target index by setting the message header with the key "indexName".|
+|Exists|None|Checks whether the index exists or not and returns a Boolean flag in the body. You must set the name of the target index by setting the message header with the key "indexName".|
+|Update|`byte[]`, `InputStream`, `String`, `Reader`, `Map` or any document type content to update|Updates content in an index and returns the content's indexId in the body. You can set the name of the target index by setting the message header with the key "indexName". You can set the indexId by setting the message header with the key "indexId".|
+|Ping|None|Pings the OpenSearch cluster and returns true if the ping succeeded, false otherwise.|
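The Bulk operation has no worked example later in this page; the following sketch follows the same pattern as the other operations (the `direct:bulk` route name and the `twitter` index are illustrative assumptions, and `template` is a Camel `ProducerTemplate`):

```java
// Sketch only: bulk-index a batch of documents in one call.
// Route and index names here are assumed, not prescribed by this page.
from("direct:bulk")
    .to("opensearch://opensearch?operation=Bulk&indexName=twitter");

// A client then sends an Iterable of documents as the message body:
List<Map<String, String>> documents = List.of(
        Map.of("content", "test1"),
        Map.of("content", "test2"));
List<BulkResponseItem> response = template.requestBody("direct:bulk", documents, List.class);
```

Each element of the returned list reports the outcome of one bulk item.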
+
+# Configure the component and enable basic authentication
+
+To use the OpenSearch component, it has to be configured with a minimum
+configuration.
+
+    OpensearchComponent opensearchComponent = new OpensearchComponent();
+    opensearchComponent.setHostAddresses("opensearch-host:9200");
+    camelContext.addComponent("opensearch", opensearchComponent);
+
+For basic authentication with OpenSearch, or when using a reverse HTTP
+proxy in front of the OpenSearch cluster, simply set up basic
+authentication and SSL on the component as in the example below:
+
+    OpensearchComponent opensearchComponent = new OpensearchComponent();
+    opensearchComponent.setHostAddresses("opensearch-host:9200");
+    opensearchComponent.setUser("opensearchuser");
+    opensearchComponent.setPassword("secure!!");
+
+    camelContext.addComponent("opensearch", opensearchComponent);
+
+# Index Example
+
+Below is a simple INDEX example:
+
+    from("direct:index")
+        .to("opensearch://opensearch?operation=Index&indexName=twitter");
+
+**For this operation, you’ll need to specify an indexId header.**
+
+A client would simply need to pass a body message containing a Map to
+the route. The result body contains the indexId created.
+
+    Map<String, String> map = new HashMap<>();
+    map.put("content", "test");
+    String indexId = template.requestBody("direct:index", map, String.class);
+
+# Search Example
+
+To search on specific field(s) and value, use the `Search` operation.
+Pass in the query JSON String or the Map:
+
+    from("direct:search")
+        .to("opensearch://opensearch?operation=Search&indexName=twitter");
+
+    String query = "{\"query\":{\"match\":{\"doc.content\":\"new release of ApacheCamel\"}}}";
+    HitsMetadata<?> response = template.requestBody("direct:search", query, HitsMetadata.class);
+
+Search on specific field(s) using Map.
+
+    Map<String, Object> actualQuery = new HashMap<>();
+    actualQuery.put("doc.content", "new release of ApacheCamel");
+
+    Map<String, Object> match = new HashMap<>();
+    match.put("match", actualQuery);
+
+    Map<String, Object> query = new HashMap<>();
+    query.put("query", match);
+    HitsMetadata<?> response = template.requestBody("direct:search", query, HitsMetadata.class);
+
+Search using the OpenSearch Scroll API to fetch all results.
+
+    from("direct:search")
+        .to("opensearch://opensearch?operation=Search&indexName=twitter&useScroll=true&scrollKeepAliveMs=30000");
+
+    String query = "{\"query\":{\"match\":{\"doc.content\":\"new release of ApacheCamel\"}}}";
+    try (OpenSearchScrollRequestIterator response = template.requestBody("direct:search", query, OpenSearchScrollRequestIterator.class)) {
+        // do something smart with results
+    }
+
+[Split EIP](#eips:split-eip.adoc) can also be used.
+
+    from("direct:search")
+        .to("opensearch://opensearch?operation=Search&indexName=twitter&useScroll=true&scrollKeepAliveMs=30000")
+        .split()
+        .body()
+        .streaming()
+        .to("mock:output")
+        .end();
+
+# MultiSearch Example
+
+MultiSearching on specific field(s) and value uses the operation
+`MultiSearch`.
Pass in the `MsearchRequest.Builder` instance:
+
+    from("direct:multiSearch")
+        .to("opensearch://opensearch?operation=MultiSearch");
+
+MultiSearch on specific field(s):
+
+    MsearchRequest.Builder builder = new MsearchRequest.Builder().index("twitter").searches(
+            new RequestItem.Builder().header(new MultisearchHeader.Builder().build())
+                    .body(new MultisearchBody.Builder().query(b -> b.matchAll(x -> x)).build()).build(),
+            new RequestItem.Builder().header(new MultisearchHeader.Builder().build())
+                    .body(new MultisearchBody.Builder().query(b -> b.matchAll(x -> x)).build()).build());
+    List<MultiSearchResponseItem<?>> response = template.requestBody("direct:multiSearch", builder, List.class);
+
+# Document type
+
+For all the search operations, it is possible to indicate the type of
+document to retrieve so that the result is already unmarshalled to the
+expected type.
+
+The document type can be set using the header "documentClass" or via the
+URI parameter of the same name.
+
+## Component Configurations
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|connectionTimeout|The time in ms to wait before connection will time out.|30000|integer|
+|hostAddresses|Comma separated list with ip:port formatted remote transport addresses to use. The ip and port options must be left blank for hostAddresses to be considered instead.||string|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|maxRetryTimeout|The time in ms before retry|30000|integer| +|socketTimeout|The timeout in ms to wait before the socket will time out.|30000|integer| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|client|To use an existing configured OpenSearch client, instead of creating a client per endpoint. This allows to customize the client with specific settings.||object| +|enableSniffer|Enable automatically discover nodes from a running OpenSearch cluster. If this option is used in conjunction with Spring Boot then it's managed by the Spring Boot configuration (see: Disable Sniffer in Spring Boot).|false|boolean| +|sniffAfterFailureDelay|The delay of a sniff execution scheduled after a failure (in milliseconds)|60000|integer| +|snifferInterval|The interval between consecutive ordinary sniff executions in milliseconds. 
Will be honoured when sniffOnFailure is disabled or when there are no failures between consecutive sniff executions|300000|integer|
+|enableSSL|Enable SSL|false|boolean|
+|password|Password for basic authentication||string|
+|user|Basic authentication user||string|
+
+## Endpoint Configurations
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|clusterName|Name of the cluster||string|
+|connectionTimeout|The time in ms to wait before connection will timeout.|30000|integer|
+|disconnect|Disconnect after it finishes calling the producer|false|boolean|
+|from|Starting index of the response.||integer|
+|hostAddresses|Comma separated list with ip:port formatted remote transport addresses to use.||string|
+|indexName|The name of the index to act against||string|
+|maxRetryTimeout|The time in ms before retry|30000|integer|
+|operation|What operation to perform||object|
+|scrollKeepAliveMs|Time in ms during which OpenSearch will keep search context alive|60000|integer|
+|size|Size of the response.||integer|
+|socketTimeout|The timeout in ms to wait before the socket will timeout.|30000|integer|
+|useScroll|Enable scroll usage|false|boolean|
+|waitForActiveShards|Index creation waits for the write consistency number of shards to be available|1|integer|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|documentClass|The class to use when deserializing the documents.|ObjectNode|string|
+|enableSniffer|Enable automatic discovery of nodes from a running OpenSearch cluster. If this option is used in conjunction with Spring Boot then it's managed by the Spring Boot configuration (see: Disable Sniffer in Spring Boot).|false|boolean|
+|sniffAfterFailureDelay|The delay of a sniff execution scheduled after a failure (in milliseconds)|60000|integer|
+|snifferInterval|The interval between consecutive ordinary sniff executions in milliseconds. Will be honoured when sniffOnFailure is disabled or when there are no failures between consecutive sniff executions|300000|integer|
+|certificatePath|The certificate that can be used to access the OpenSearch cluster. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems.||string|
+|enableSSL|Enable SSL|false|boolean|
diff --git a/camel-openshift-build-configs.md b/camel-openshift-build-configs.md
new file mode 100644
index 0000000000000000000000000000000000000000..2cd4d8dc1013b20a177bcdb0f1dbf7e796e51e13
--- /dev/null
+++ b/camel-openshift-build-configs.md
@@ -0,0 +1,87 @@
+# Openshift-build-configs
+
+**Since Camel 2.17**
+
+**Only producer is supported**
+
+The OpenShift Build Config component is one of [Kubernetes
+Components](#kubernetes-summary.adoc) which provides a producer to
+execute Openshift Build Configs operations.
+
+# Supported producer operation
+
+- listBuildConfigs
+
+- listBuildConfigsByLabels
+
+- getBuildConfig
+
+# Openshift Build Configs Producer Examples
+
+- listBuildConfigs: this operation lists the Build Configs on an
+  Openshift cluster
+
+    from("direct:list").
+    toF("openshift-build-configs:///?kubernetesClient=#kubernetesClient&operation=listBuildConfigs").
+    to("mock:result");
+
+This operation returns a List of Build Configs from your Openshift cluster.
+
+- listBuildConfigsByLabels: this operation lists the build configs by
+  labels on an Openshift cluster
+
+    from("direct:listByLabels").process(new Processor() {
+        @Override
+        public void process(Exchange exchange) throws Exception {
+            Map<String, String> labels = new HashMap<>();
+            labels.put("key1", "value1");
+            labels.put("key2", "value2");
+            exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_BUILD_CONFIGS_LABELS, labels);
+        }
+    }).
+    toF("openshift-build-configs:///?kubernetesClient=#kubernetesClient&operation=listBuildConfigsByLabels").
+    to("mock:result");
+
+This operation returns a List of Build Configs from your cluster, using
+a label selector (with key1 and key2, with value value1 and value2).
+
+## Component Configurations
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|kubernetesClient|To use an existing kubernetes client.||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component.
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|masterUrl|URL to a remote Kubernetes API server. This should only be used when your Camel application is connecting from outside Kubernetes. If you run your Camel application inside Kubernetes, then you can use local or client as the URL to tell Camel to run in local mode. If you connect remotely to Kubernetes, then you may also need some of the many other configuration options for secured connection with certificates, etc.||string| +|apiVersion|The Kubernetes API Version to use||string| +|dnsDomain|The dns domain, used for ServiceCall EIP||string| +|kubernetesClient|Default KubernetesClient to use if provided||object| +|namespace|The namespace||string| +|operation|Producer operation to do on Kubernetes||string| +|portName|The port name, used for ServiceCall EIP||string| +|portProtocol|The port protocol, used for ServiceCall EIP|tcp|string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|connectionTimeout|Connection timeout in milliseconds to use when making requests to the Kubernetes API server.||integer| +|caCertData|The CA Cert Data||string| +|caCertFile|The CA Cert File||string| +|clientCertData|The Client Cert Data||string| +|clientCertFile|The Client Cert File||string| +|clientKeyAlgo|The Key Algorithm used by the client||string| +|clientKeyData|The Client Key data||string| +|clientKeyFile|The Client Key file||string| +|clientKeyPassphrase|The Client Key Passphrase||string| +|oauthToken|The Auth Token||string| +|password|Password to connect to Kubernetes||string| +|trustCerts|Define if the certs we used are trusted anyway or not||boolean| +|username|Username to connect to Kubernetes||string| diff --git a/camel-openshift-builds.md b/camel-openshift-builds.md new file mode 100644 index 0000000000000000000000000000000000000000..5a59f26943e3ef80be5256e7bc1577fc492a9266 --- /dev/null +++ b/camel-openshift-builds.md @@ -0,0 +1,86 @@ +# Openshift-builds + +**Since Camel 2.17** + +**Only producer is supported** + +The Openshift Builds component is one of [Kubernetes +Components](#kubernetes-summary.adoc) which provides a producer to +execute Openshift builds operations. + +# Supported producer operation + +- listBuilds + +- listBuildsByLabels + +- getBuild + +# Openshift Builds Producer Examples + +- listBuilds: this operation lists the Builds on an Openshift cluster + + + + from("direct:list"). + toF("openshift-builds:///?kubernetesClient=#kubernetesClient&operation=listBuilds"). 
+    to("mock:result");
+
+This operation returns a List of Builds from your Openshift cluster.
+
+- listBuildsByLabels: this operation lists the builds by labels on an
+  Openshift cluster
+
+    from("direct:listByLabels").process(new Processor() {
+        @Override
+        public void process(Exchange exchange) throws Exception {
+            Map<String, String> labels = new HashMap<>();
+            labels.put("key1", "value1");
+            labels.put("key2", "value2");
+            exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_BUILDS_LABELS, labels);
+        }
+    }).
+    toF("openshift-builds:///?kubernetesClient=#kubernetesClient&operation=listBuildsByLabels").
+    to("mock:result");
+
+This operation returns a List of Builds from your cluster, using a label
+selector (with key1 and key2, with value value1 and value2).
+
+## Component Configurations
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|kubernetesClient|To use an existing kubernetes client.||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component.
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|masterUrl|URL to a remote Kubernetes API server. This should only be used when your Camel application is connecting from outside Kubernetes. If you run your Camel application inside Kubernetes, then you can use local or client as the URL to tell Camel to run in local mode. If you connect remotely to Kubernetes, then you may also need some of the many other configuration options for secured connection with certificates, etc.||string| +|apiVersion|The Kubernetes API Version to use||string| +|dnsDomain|The dns domain, used for ServiceCall EIP||string| +|kubernetesClient|Default KubernetesClient to use if provided||object| +|namespace|The namespace||string| +|operation|Producer operation to do on Kubernetes||string| +|portName|The port name, used for ServiceCall EIP||string| +|portProtocol|The port protocol, used for ServiceCall EIP|tcp|string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|connectionTimeout|Connection timeout in milliseconds to use when making requests to the Kubernetes API server.||integer|
+|caCertData|The CA Cert Data||string|
+|caCertFile|The CA Cert File||string|
+|clientCertData|The Client Cert Data||string|
+|clientCertFile|The Client Cert File||string|
+|clientKeyAlgo|The Key Algorithm used by the client||string|
+|clientKeyData|The Client Key data||string|
+|clientKeyFile|The Client Key file||string|
+|clientKeyPassphrase|The Client Key Passphrase||string|
+|oauthToken|The Auth Token||string|
+|password|Password to connect to Kubernetes||string|
+|trustCerts|Define if the certs we used are trusted anyway or not||boolean|
+|username|Username to connect to Kubernetes||string|
diff --git a/camel-openshift-deploymentconfigs.md b/camel-openshift-deploymentconfigs.md
new file mode 100644
index 0000000000000000000000000000000000000000..4474f8d31ad9525137723efa01e9989de7a8715e
--- /dev/null
+++ b/camel-openshift-deploymentconfigs.md
@@ -0,0 +1,125 @@
+# Openshift-deploymentconfigs
+
+**Since Camel 3.18**
+
+**Both producer and consumer are supported**
+
+The Openshift Deployment Configs component is one of [Kubernetes
+Components](#kubernetes-summary.adoc) which provides a producer to
+execute Openshift Deployment Configs operations and a consumer to
+consume events related to Deployment Configs objects.
+
+# Supported producer operation
+
+- listDeploymentConfigs
+
+- listDeploymentConfigsByLabels
+
+- getDeploymentConfig
+
+- createDeploymentConfig
+
+- updateDeploymentConfig
+
+- deleteDeploymentConfig
+
+- scaleDeploymentConfig
+
+# Openshift Deployment Configs Producer Examples
+
+- listDeploymentConfigs: this operation lists the deployment configs on
+  an Openshift cluster
+
+    from("direct:list").
+    toF("openshift-deploymentconfigs:///?kubernetesClient=#kubernetesClient&operation=listDeploymentConfigs").
+    to("mock:result");
+
+This operation returns a List of Deployment Configs from your cluster.
+
+- listDeploymentConfigsByLabels: this operation lists the deployment
+  configs by labels on an Openshift cluster
+
+    from("direct:listByLabels").process(new Processor() {
+        @Override
+        public void process(Exchange exchange) throws Exception {
+            Map<String, String> labels = new HashMap<>();
+            labels.put("key1", "value1");
+            labels.put("key2", "value2");
+            exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_DEPLOYMENTS_LABELS, labels);
+        }
+    }).
+    toF("openshift-deploymentconfigs:///?kubernetesClient=#kubernetesClient&operation=listDeploymentConfigsByLabels").
+    to("mock:result");
+
+This operation returns a List of Deployment Configs from your cluster,
+using a label selector (with key1 and key2, with value value1 and
+value2).
+
+# Openshift Deployment Configs Consumer Example
+
+    fromF("openshift-deploymentconfigs://%s?oauthToken=%s&namespace=default&resourceName=test", host, authToken)
+        .process(new OpenshiftProcessor())
+        .to("mock:result");
+
+    public class OpenshiftProcessor implements Processor {
+        @Override
+        public void process(Exchange exchange) throws Exception {
+            Message in = exchange.getIn();
+            DeploymentConfig dp = exchange.getIn().getBody(DeploymentConfig.class);
+            log.info("Got event with deployment config name: " + dp.getMetadata().getName() + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION));
+        }
+    }
+
+This consumer will return a list of events on the namespace default for
+the deployment config test.
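The supported operations also include scaleDeploymentConfig, which has no example above. The following is an illustrative sketch only: the header names are assumptions made by analogy with the Kubernetes Deployments component and should be verified against KubernetesConstants before use.

```java
// Sketch only: scale the "test" deployment config to 2 replicas.
// Header names below are assumed, not confirmed by this page.
from("direct:scale").process(new Processor() {
    @Override
    public void process(Exchange exchange) throws Exception {
        exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_DEPLOYMENT_NAME, "test");
        exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_DEPLOYMENT_REPLICAS, 2);
    }
}).
toF("openshift-deploymentconfigs:///?kubernetesClient=#kubernetesClient&operation=scaleDeploymentConfig").
to("mock:result");
```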
+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|kubernetesClient|To use an existing kubernetes client.||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|masterUrl|URL to a remote Kubernetes API server. This should only be used when your Camel application is connecting from outside Kubernetes. If you run your Camel application inside Kubernetes, then you can use local or client as the URL to tell Camel to run in local mode. If you connect remotely to Kubernetes, then you may also need some of the many other configuration options for secured connection with certificates, etc.||string| +|apiVersion|The Kubernetes API Version to use||string| +|dnsDomain|The dns domain, used for ServiceCall EIP||string| +|kubernetesClient|Default KubernetesClient to use if provided||object| +|namespace|The namespace||string| +|portName|The port name, used for ServiceCall EIP||string| +|portProtocol|The port protocol, used for ServiceCall EIP|tcp|string| +|crdGroup|The Consumer CRD Resource Group we would like to watch||string| +|crdName|The Consumer CRD Resource name we would like to watch||string| +|crdPlural|The Consumer CRD Resource Plural we would like to watch||string| +|crdScope|The Consumer CRD Resource Scope we would like to watch||string| +|crdVersion|The Consumer CRD Resource Version we would like to watch||string| +|labelKey|The Consumer Label key when watching at some resources||string| +|labelValue|The Consumer Label value when watching at some resources||string| +|poolSize|The Consumer pool size|1|integer| +|resourceName|The Consumer Resource Name we would like to watch||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|operation|Producer operation to do on Kubernetes||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|connectionTimeout|Connection timeout in milliseconds to use when making requests to the Kubernetes API server.||integer|
|caCertData|The CA Cert Data||string|
|caCertFile|The CA Cert File||string|
|clientCertData|The Client Cert Data||string|
|clientCertFile|The Client Cert File||string|
|clientKeyAlgo|The Key Algorithm used by the client||string|
|clientKeyData|The Client Key data||string|
|clientKeyFile|The Client Key file||string|
|clientKeyPassphrase|The Client Key Passphrase||string|
|oauthToken|The Auth Token||string|
|password|Password to connect to Kubernetes||string|
|trustCerts|Define if the certs we used are trusted anyway or not||boolean|
|username|Username to connect to Kubernetes||string|
diff --git a/camel-openstack-cinder.md b/camel-openstack-cinder.md
new file mode 100644
index 0000000000000000000000000000000000000000..67e9d7296b520a91ce242da32836e37f377ec51e
--- /dev/null
+++ b/camel-openstack-cinder.md
@@ -0,0 +1,145 @@
# Openstack-cinder

**Since Camel 2.19**

**Only producer is supported**

The Openstack Cinder component allows messages to be sent to OpenStack
block storage services.

# Dependencies

Maven users will need to add the following dependency to their pom.xml.

**pom.xml**

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-openstack</artifactId>
        <version>${camel-version}</version>
    </dependency>

where `${camel-version}` must be replaced by the actual version of
Camel.

# URI Format

    openstack-cinder://hosturl[?options]

# Usage

You can use the following settings for each subsystem:

# volumes

## Operations you can perform with the Volume producer
|Operation|Description|
|---|---|
|create|Create a new volume.|
|get|Get the volume.|
|getAll|Get all volumes.|
|getAllTypes|Get volume types.|
|update|Update the volume.|
|delete|Delete the volume.|
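As a sketch of how an operation is typically selected, the route below reads a volume by ID. The `OpenstackConstants` header names come from the camel-openstack common module, and the host, credentials, and volume ID are placeholder assumptions, not values from this page:

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.openstack.common.OpenstackConstants;

// Hypothetical sketch: the operation and the target volume are chosen via headers.
public class GetVolumeRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:getVolume")
            // select the "get" producer operation
            .setHeader(OpenstackConstants.OPERATION, constant(OpenstackConstants.GET))
            // id of the volume to fetch (placeholder)
            .setHeader(OpenstackConstants.ID, constant("my-volume-id"))
            .to("openstack-cinder://myhost?username=admin&password=secret&project=demo&subsystem=volumes")
            .log("volume: ${body}");
    }
}
```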

If you need more precise volume settings, you can create a new object of
the type **org.openstack4j.model.storage.block.Volume** and send it in
the message body.

# snapshots

## Operations you can perform with the Snapshot producer
|Operation|Description|
|---|---|
|create|Create a new snapshot.|
|get|Get the snapshot.|
|getAll|Get all snapshots.|
|update|Update the snapshot.|
|delete|Delete the snapshot.|

If you need more precise snapshot settings, you can create a new object
of the type **org.openstack4j.model.storage.block.VolumeSnapshot** and
send it in the message body.

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|

## Endpoint Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|host|OpenStack host url||string|
|apiVersion|OpenStack API version|V3|string|
|config|OpenStack configuration||object|
|domain|Authentication domain|default|string|
|operation|The operation to do||string|
|password|OpenStack password||string|
|project|The project ID||string|
|subsystem|OpenStack Cinder subsystem||string|
|username|OpenStack username||string|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-openstack-glance.md b/camel-openstack-glance.md
new file mode 100644
index 0000000000000000000000000000000000000000..6122a0e954623cd70d5651750423da5c50d39737
--- /dev/null
+++ b/camel-openstack-glance.md
@@ -0,0 +1,95 @@
# Openstack-glance

**Since Camel 2.19**

**Only producer is supported**

The Openstack Glance component allows messages to be sent to OpenStack
image services.

# Dependencies

Maven users will need to add the following dependency to their pom.xml.

**pom.xml**

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-openstack</artifactId>
        <version>${camel-version}</version>
    </dependency>

where `${camel-version}` must be replaced by the actual version of
Camel.

# URI Format

    openstack-glance://hosturl[?options]

# Usage
|Operation|Description|
|---|---|
|reserve|Reserve an image.|
|create|Create a new image.|
|update|Update the image.|
|upload|Upload the image.|
|get|Get the image.|
|getAll|Get all images.|
|delete|Delete the image.|
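A minimal sketch of invoking one of these operations, listing all images; the `OpenstackConstants` header names come from the camel-openstack common module, and the host and credentials are placeholders:

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.openstack.common.OpenstackConstants;

// Hypothetical sketch: select the "getAll" operation via header;
// the reply body is expected to hold the list of images from Glance.
public class ListImagesRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:listImages")
            .setHeader(OpenstackConstants.OPERATION, constant(OpenstackConstants.GET_ALL))
            .to("openstack-glance://myhost?username=admin&password=secret&project=demo")
            .log("found images: ${body}");
    }
}
```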

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|

## Endpoint Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|host|OpenStack host url||string|
|apiVersion|OpenStack API version|V3|string|
|config|OpenStack configuration||object|
|domain|Authentication domain|default|string|
|operation|The operation to do||string|
|password|OpenStack password||string|
|project|The project ID||string|
|username|OpenStack username||string|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-openstack-keystone.md b/camel-openstack-keystone.md
new file mode 100644
index 0000000000000000000000000000000000000000..f8094f706572abc34d626eb9a78f3e2714bb2dc4
--- /dev/null
+++ b/camel-openstack-keystone.md
@@ -0,0 +1,286 @@
# Openstack-keystone

**Since Camel 2.19**

**Only producer is supported**

The Openstack Keystone component allows messages to be sent to OpenStack
identity services.

The openstack-keystone component supports only Identity API v3.

# Dependencies

Maven users will need to add the following dependency to their pom.xml.

**pom.xml**

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-openstack</artifactId>
        <version>${camel-version}</version>
    </dependency>

where `${camel-version}` must be replaced by the actual version of
Camel.

# URI Format

    openstack-keystone://hosturl[?options]

# Usage

You can use the following settings for each subsystem:

# domains

## Operations you can perform with the Domain producer
|Operation|Description|
|---|---|
|create|Create a new domain.|
|get|Get the domain.|
|getAll|Get all domains.|
|update|Update the domain.|
|delete|Delete the domain.|

If you need more precise domain settings, you can create a new object of
the type **org.openstack4j.model.identity.v3.Domain** and send it in the
message body.

# groups

## Operations you can perform with the Group producer
|Operation|Description|
|---|---|
|create|Create a new group.|
|get|Get the group.|
|getAll|Get all groups.|
|update|Update the group.|
|delete|Delete the group.|
|addUserToGroup|Add the user to the group.|
|checkUserGroup|Check whether the user is in the group.|
|removeUserFromGroup|Remove the user from the group.|

If you need more precise group settings, you can create a new object of
the type **org.openstack4j.model.identity.v3.Group** and send it in the
message body.

# projects

## Operations you can perform with the Project producer
|Operation|Description|
|---|---|
|create|Create a new project.|
|get|Get the project.|
|getAll|Get all projects.|
|update|Update the project.|
|delete|Delete the project.|

If you need more precise project settings, you can create a new object
of the type **org.openstack4j.model.identity.v3.Project** and send it in
the message body.

# regions

## Operations you can perform with the Region producer
|Operation|Description|
|---|---|
|create|Create a new region.|
|get|Get the region.|
|getAll|Get all regions.|
|update|Update the region.|
|delete|Delete the region.|

If you need more precise region settings, you can create a new object of
the type **org.openstack4j.model.identity.v3.Region** and send it in the
message body.

# users

## Operations you can perform with the User producer
|Operation|Description|
|---|---|
|create|Create a new user.|
|get|Get the user.|
|getAll|Get all users.|
|update|Update the user.|
|delete|Delete the user.|
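A minimal sketch of calling one of these operations against the users subsystem; the `OpenstackConstants` header names come from the camel-openstack common module, and the host and credentials are placeholders:

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.openstack.common.OpenstackConstants;

// Hypothetical sketch: list all Keystone users via the "getAll" operation.
public class ListUsersRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:listUsers")
            .setHeader(OpenstackConstants.OPERATION, constant(OpenstackConstants.GET_ALL))
            .to("openstack-keystone://myhost?username=admin&password=secret&project=demo&subsystem=users")
            .log("users: ${body}");
    }
}
```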

If you need more precise user settings, you can create a new object of
the type **org.openstack4j.model.identity.v3.User** and send it in the
message body.

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|

## Endpoint Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|host|OpenStack host url||string|
|config|OpenStack configuration||object|
|domain|Authentication domain|default|string|
|operation|The operation to do||string|
|password|OpenStack password||string|
|project|The project ID||string|
|subsystem|OpenStack Keystone subsystem||string|
|username|OpenStack username||string|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-openstack-neutron.md b/camel-openstack-neutron.md
new file mode 100644
index 0000000000000000000000000000000000000000..53a1ed487d9f07509e1838e66cfa026a7cd39a75
--- /dev/null
+++ b/camel-openstack-neutron.md
@@ -0,0 +1,224 @@
# Openstack-neutron

**Since Camel 2.19**

**Only producer is supported**

The Openstack Neutron component allows messages to be sent to OpenStack
network services.

# Dependencies

Maven users will need to add the following dependency to their pom.xml.

**pom.xml**

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-openstack</artifactId>
        <version>${camel-version}</version>
    </dependency>

where `${camel-version}` must be replaced by the actual version of
Camel.

# URI Format

    openstack-neutron://hosturl[?options]

# Usage

You can use the following settings for each subsystem:

# networks

## Operations you can perform with the Network producer
|Operation|Description|
|---|---|
|create|Create a new network.|
|get|Get the network.|
|getAll|Get all networks.|
|delete|Delete the network.|
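For instance, creating a network could be sketched as below. The `OpenstackConstants` header names come from the camel-openstack common module, the openstack4j `Builders.network()` helper is assumed from the openstack4j API, and the host and credentials are placeholders:

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.openstack.common.OpenstackConstants;
import org.openstack4j.api.Builders;

// Hypothetical sketch: pass an openstack4j Network as the body of a "create" call.
public class CreateNetworkRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:createNetwork")
            .setHeader(OpenstackConstants.OPERATION, constant(OpenstackConstants.CREATE))
            // body: the network definition to create (name is a placeholder)
            .setBody(constant(Builders.network().name("camel-net").build()))
            .to("openstack-neutron://myhost?username=admin&password=secret&project=demo&subsystem=networks");
    }
}
```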

If you need more precise network settings, you can create a new object
of the type **org.openstack4j.model.network.Network** and send it in the
message body.

# subnets

## Operations you can perform with the Subnet producer
|Operation|Description|
|---|---|
|create|Create a new subnet.|
|get|Get the subnet.|
|getAll|Get all subnets.|
|delete|Delete the subnet.|
|action|Perform an action on the subnet.|

If you need more precise subnet settings, you can create a new object of
the type **org.openstack4j.model.network.Subnet** and send it in the
message body.

# ports

## Operations you can perform with the Port producer
|Operation|Description|
|---|---|
|create|Create a new port.|
|get|Get the port.|
|getAll|Get all ports.|
|update|Update the port.|
|delete|Delete the port.|

# routers

## Operations you can perform with the Router producer
|Operation|Description|
|---|---|
|create|Create a new router.|
|get|Get the router.|
|getAll|Get all routers.|
|update|Update the router.|
|delete|Delete the router.|
|attachInterface|Attach an interface.|
|detachInterface|Detach an interface.|

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|

## Endpoint Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|host|OpenStack host url||string|
|apiVersion|OpenStack API version|V3|string|
|config|OpenStack configuration||object|
|domain|Authentication domain|default|string|
|operation|The operation to do||string|
|password|OpenStack password||string|
|project|The project ID||string|
|subsystem|OpenStack Neutron subsystem||string|
|username|OpenStack username||string|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-openstack-nova.md b/camel-openstack-nova.md
new file mode 100644
index 0000000000000000000000000000000000000000..3da90d59ddfaa77f1d75eb2075cb55b888d91065
--- /dev/null
+++ b/camel-openstack-nova.md
@@ -0,0 +1,177 @@
# Openstack-nova

**Since Camel 2.19**

**Only producer is supported**

The Openstack Nova component allows messages to be sent to OpenStack
compute services.

# Dependencies

Maven users will need to add the following dependency to their pom.xml.

**pom.xml**

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-openstack</artifactId>
        <version>${camel-version}</version>
    </dependency>

where `${camel-version}` must be replaced by the actual version of
Camel.

# URI Format

    openstack-nova://hosturl[?options]

# Usage

You can use the following settings for each subsystem:

# flavors

## Operations you can perform with the Flavor producer
|Operation|Description|
|---|---|
|create|Create a new flavor.|
|get|Get the flavor.|
|getAll|Get all flavors.|
|delete|Delete the flavor.|

If you need more precise flavor settings, you can create a new object of
the type **org.openstack4j.model.compute.Flavor** and send it in the
message body.

# servers

## Operations you can perform with the Server producer
|Operation|Description|
|---|---|
|create|Create a new server.|
|createSnapshot|Create a snapshot of the server.|
|get|Get the server.|
|getAll|Get all servers.|
|delete|Delete the server.|
|action|Perform an action on the server.|
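As a sketch, booting a server could look like the route below. The `OpenstackConstants` header names come from the camel-openstack common module, the openstack4j `Builders.server()` helper is assumed from the openstack4j API, and the host, credentials, flavor, and image IDs are all placeholders:

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.openstack.common.OpenstackConstants;
import org.openstack4j.api.Builders;

// Hypothetical sketch: pass an openstack4j ServerCreate as the body of a "create" call.
public class BootServerRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:bootServer")
            .setHeader(OpenstackConstants.OPERATION, constant(OpenstackConstants.CREATE))
            // flavor and image IDs below are placeholders for real Nova IDs
            .setBody(constant(Builders.server()
                    .name("camel-vm")
                    .flavor("my-flavor-id")
                    .image("my-image-id")
                    .build()))
            .to("openstack-nova://myhost?username=admin&password=secret&project=demo&subsystem=servers");
    }
}
```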

If you need more precise server settings, you can create a new object of
the type **org.openstack4j.model.compute.ServerCreate** and send it in
the message body.

# keypairs

## Operations you can perform with the Keypair producer
|Operation|Description|
|---|---|
|create|Create a new keypair.|
|get|Get the keypair.|
|getAll|Get all keypairs.|
|delete|Delete the keypair.|

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|

## Endpoint Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|host|OpenStack host url||string|
|apiVersion|OpenStack API version|V3|string|
|config|OpenStack configuration||object|
|domain|Authentication domain|default|string|
|operation|The operation to do||string|
|password|OpenStack password||string|
|project|The project ID||string|
|subsystem|OpenStack Nova subsystem||string|
|username|OpenStack username||string|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-openstack-swift.md b/camel-openstack-swift.md
new file mode 100644
index 0000000000000000000000000000000000000000..67f2a0b012a0da735091fd81aac4ab52493858d0
--- /dev/null
+++ b/camel-openstack-swift.md
@@ -0,0 +1,162 @@
# Openstack-swift

**Since Camel 2.19**

**Only producer is supported**

The Openstack Swift component allows messages to be sent to OpenStack
object storage services.

# Dependencies

Maven users will need to add the following dependency to their pom.xml.

**pom.xml**

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-openstack</artifactId>
        <version>${camel-version}</version>
    </dependency>

where `${camel-version}` must be replaced by the actual version of
Camel.

# URI Format

    openstack-swift://hosturl[?options]

# Usage

You can use the following settings for each subsystem:

# containers

## Operations you can perform with the Container producer
|Operation|Description|
|---|---|
|create|Create a new container.|
|get|Get the container.|
|getAll|Get all containers.|
|update|Update the container.|
|delete|Delete the container.|
|getMetadata|Get metadata.|
|createUpdateMetadata|Create/update metadata.|
|deleteMetadata|Delete metadata.|
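A sketch of creating a container; the `OpenstackConstants` header names come from the camel-openstack common module, the `SwiftConstants.CONTAINER_NAME` header is an assumption about the Swift module's constants, and the host, credentials, and container name are placeholders:

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.openstack.common.OpenstackConstants;
import org.apache.camel.component.openstack.swift.SwiftConstants;

// Hypothetical sketch: the container to create is named via a header.
public class CreateContainerRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:createContainer")
            .setHeader(OpenstackConstants.OPERATION, constant(OpenstackConstants.CREATE))
            // assumed header carrying the container name (placeholder value)
            .setHeader(SwiftConstants.CONTAINER_NAME, constant("camel-container"))
            .to("openstack-swift://myhost?username=admin&password=secret&project=demo&subsystem=containers");
    }
}
```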

If you need more precise container settings, you can create a new object
of the type
**org.openstack4j.model.storage.object.options.CreateUpdateContainerOptions**
(for the create or update operations) or
**org.openstack4j.model.storage.object.options.ContainerListOptions**
(for listing containers) and send it in the message body.

# objects

## Operations you can perform with the Object producer
|Operation|Description|
|---|---|
|create|Create a new object.|
|get|Get the object.|
|getAll|Get all objects.|
|update|Update the object.|
|delete|Delete the object.|
|getMetadata|Get metadata.|
|createUpdateMetadata|Create/update metadata.|

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|

## Endpoint Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|host|OpenStack host url||string|
|apiVersion|OpenStack API version|V3|string|
|config|OpenStack configuration||object|
|domain|Authentication domain|default|string|
|operation|The operation to do||string|
|password|OpenStack password||string|
|project|The project ID||string|
|subsystem|OpenStack Swift subsystem||string|
|username|OpenStack username||string|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-optaplanner.md b/camel-optaplanner.md
new file mode 100644
index 0000000000000000000000000000000000000000..5dd4dd12b5978f0b48fab3ec9628eefb2b7513e1
--- /dev/null
+++ b/camel-optaplanner.md
@@ -0,0 +1,85 @@
# Optaplanner

**Since Camel 2.13**

**Both producer and consumer are supported**

The Optaplanner component solves the planning problem contained in a
message with [OptaPlanner](http://www.optaplanner.org/).
For example, feed it an unsolved Vehicle Routing problem and it solves
it.

The component supports a consumer listening for SolverManager results
and a producer for processing Solution and ProblemChange.

Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-optaplanner</artifactId>
        <version>x.x.x</version>
    </dependency>

# URI format

    optaplanner:problemName[?options]

You can append query options to the URI in the following format,
`?option=value&option=value&...`

# Message Body

Camel takes the planning problem from the *IN* body, solves it, and
returns it on the *OUT* body. The *IN* body object supports the
following use cases:

- If the body contains the `PlanningSolution` annotation, then it will
  be solved using the solver identified by solverId, either
  synchronously or asynchronously.

- If the body is an instance of `ProblemChange`, then it will trigger
  `addProblemFactChange`.

- If the body is none of the above types, then the producer will
  return the best result from the solver identified by `solverId`.

## Samples

Solve a planning problem on the ActiveMQ queue with OptaPlanner, passing
the SolverManager:

    from("activemq:My.Queue")
+ .to("optaplanner:problemName?solverManager=#solverManager"); + +Expose OptaPlanner as a REST service, passing the Solver configuration +file: + + from("cxfrs:bean:rsServer?bindingStyle=SimpleConsumer") + .to("optaplanner:problemName?configFile=/org/foo/barSolverConfig.xml"); + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|problemName|Problem name||string| +|configFile|If SolverManager is absent from the header OptaPlannerConstants.SOLVER\_MANAGER then a SolverManager will be created using this Optaplanner config file.||string| +|problemId|In case of using SolverManager : the problem id|1L|integer| +|solverId|Specifies the solverId to user for the solver instance key|DEFAULT\_SOLVER|string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|async|Specifies whether to perform operations in async mode|false|boolean|
+|threadPoolSize|Specifies the thread pool size to use when async is true|10|integer|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|solverManager|The SolverManager instance to use||object|
diff --git a/camel-paho-mqtt5.md b/camel-paho-mqtt5.md new file mode 100644 index 0000000000000000000000000000000000000000..b7e197aa6914cead2f511eb165bd94b17ee0cd1d --- /dev/null +++ b/camel-paho-mqtt5.md @@ -0,0 +1,152 @@
+# Paho-mqtt5
+
+**Since Camel 3.8**
+
+**Both producer and consumer are supported**
+
+The Paho MQTT5 component provides a connector for the [MQTT messaging
+protocol version 5.0](https://docs.oasis-open.org/mqtt/mqtt/v5.0/mqtt-v5.0.html)
+using the [Eclipse Paho](https://eclipse.org/paho/) library. Paho is one
+of the most popular MQTT libraries, so if you would like to integrate
+MQTT with your Java project, the Camel Paho connector is the way to go.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-paho-mqtt5</artifactId>
+        <version>x.y.z</version>
+    </dependency>
+
+# URI format
+
+    paho-mqtt5:topic[?options]
+
+Where `topic` is the name of the topic. 
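The `paho-mqtt5:topic[?options]` shape is just the topic name plus an optional query string of `key=value` pairs. As a rough plain-Java sketch of that format (the `buildUri` helper below is hypothetical and only illustrative — Camel assembles endpoint URIs for you):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class PahoMqtt5UriSketch {

    // Hypothetical helper illustrating the scheme:topic[?options] format.
    static String buildUri(String topic, Map<String, String> options) {
        String base = "paho-mqtt5:" + topic;
        if (options.isEmpty()) {
            return base;
        }
        // Options are ordinary key=value pairs joined with '&'.
        String query = options.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining("&"));
        return base + "?" + query;
    }

    public static void main(String[] args) {
        Map<String, String> opts = new LinkedHashMap<>();
        opts.put("brokerUrl", "tcp://localhost:1883");
        opts.put("qos", "1");
        System.out.println(buildUri("sensors/temperature", opts));
        // -> paho-mqtt5:sensors/temperature?brokerUrl=tcp://localhost:1883&qos=1
    }
}
```

The option names shown (`brokerUrl`, `qos`) are taken from the configuration tables below.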
+
+# Default payload type
+
+By default, the Camel Paho component operates on the binary payloads
+extracted out of (or put into) the MQTT message:
+
+    // Receive payload
+    byte[] payload = (byte[]) consumerTemplate.receiveBody("paho-mqtt5:topic");
+
+    // Send payload
+    byte[] payload = "message".getBytes();
+    producerTemplate.sendBody("paho-mqtt5:topic", payload);
+
+Of course, Camel's built-in [type conversion
+API](#manual::type-converter.adoc) can perform the automatic data type
+transformations for you. In the example below Camel automatically
+converts the binary payload into a `String` (and conversely):
+
+    // Receive payload
+    String payload = consumerTemplate.receiveBody("paho-mqtt5:topic", String.class);
+
+    // Send payload
+    String payload = "message";
+    producerTemplate.sendBody("paho-mqtt5:topic", payload);
+
+# Samples
+
+For example, the following snippet reads messages from the MQTT broker
+installed on the same host as the Camel router:
+
+    from("paho-mqtt5:some/queue")
+        .to("mock:test");
+
+While the snippet below sends a message to the MQTT broker:
+
+    from("direct:test")
+        .to("paho-mqtt5:some/target/queue");
+
+For example, this is how to read messages from a remote MQTT broker:
+
+    from("paho-mqtt5:some/queue?brokerUrl=tcp://iot.eclipse.org:1883")
+        .to("mock:test");
+
+And here we override the default topic and replace it with a dynamic one
+taken from a header:
+
+    from("direct:test")
+        .setHeader(PahoConstants.CAMEL_PAHO_OVERRIDE_TOPIC, simple("${header.customerId}"))
+        .to("paho-mqtt5:some/target/queue");
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|automaticReconnect|Sets whether the client will automatically attempt to reconnect to the server if the connection is lost. If set to false, the client will not attempt to automatically reconnect to the server in the event that the connection is lost. If set to true, in the event that the connection is lost, the client will attempt to reconnect to the server. 
It will initially wait 1 second before it attempts to reconnect, for every failed reconnect attempt, the delay will double until it is at 2 minutes at which point the delay will stay at 2 minutes.|true|boolean| +|brokerUrl|The URL of the MQTT broker.|tcp://localhost:1883|string| +|cleanStart|Sets whether the client and server should remember state across restarts and reconnects. If set to false both the client and server will maintain state across restarts of the client, the server and the connection. As state is maintained: Message delivery will be reliable meeting the specified QOS even if the client, server or connection are restarted. The server will treat a subscription as durable. If set to true the client and server will not maintain state across restarts of the client, the server or the connection. This means Message delivery to the specified QOS cannot be maintained if the client, server or connection are restarted The server will treat a subscription as non-durable|true|boolean| +|clientId|MQTT client identifier. The identifier must be unique.||string| +|configuration|To use the shared Paho configuration||object| +|connectionTimeout|Sets the connection timeout value. This value, measured in seconds, defines the maximum time interval the client will wait for the network connection to the MQTT server to be established. The default timeout is 30 seconds. A value of 0 disables timeout processing meaning the client will wait until the network connection is made successfully or fails.|30|integer| +|filePersistenceDirectory|Base directory used by file persistence. Will by default use user directory.||string| +|keepAliveInterval|Sets the keep alive interval. This value, measured in seconds, defines the maximum time interval between messages sent or received. It enables the client to detect if the server is no longer available, without having to wait for the TCP/IP timeout. 
The client will ensure that at least one message travels across the network within each keep alive period. In the absence of a data-related message during the time period, the client sends a very small ping message, which the server will acknowledge. A value of 0 disables keepalive processing in the client. The default value is 60 seconds|60|integer| +|maxReconnectDelay|Get the maximum time (in millis) to wait between reconnects|128000|integer| +|persistence|Client persistence to be used - memory or file.|MEMORY|object| +|qos|Client quality of service level (0-2).|2|integer| +|receiveMaximum|Sets the Receive Maximum. This value represents the limit of QoS 1 and QoS 2 publications that the client is willing to process concurrently. There is no mechanism to limit the number of QoS 0 publications that the Server might try to send. The default value is 65535|65535|integer| +|retained|Retain option|false|boolean| +|serverURIs|Set a list of one or more serverURIs the client may connect to. Multiple servers can be separated by comma. Each serverURI specifies the address of a server that the client may connect to. Two types of connection are supported tcp:// for a TCP connection and ssl:// for a TCP connection secured by SSL/TLS. For example: tcp://localhost:1883 ssl://localhost:8883 If the port is not specified, it will default to 1883 for tcp:// URIs, and 8883 for ssl:// URIs. If serverURIs is set then it overrides the serverURI parameter passed in on the constructor of the MQTT client. When an attempt to connect is initiated the client will start with the first serverURI in the list and work through the list until a connection is established with a server. If a connection cannot be made to any of the servers then the connect attempt fails. Specifying a list of servers that a client may connect to has several uses: High Availability and reliable message delivery Some MQTT servers support a high availability feature where two or more equal MQTT servers share state. 
An MQTT client can connect to any of the equal servers and be assured that messages are reliably delivered and durable subscriptions are maintained no matter which server the client connects to. The cleansession flag must be set to false if durable subscriptions and/or reliable message delivery is required. Hunt List A set of servers may be specified that are not equal (as in the high availability option). As no state is shared across the servers reliable message delivery and durable subscriptions are not valid. The cleansession flag must be set to true if the hunt list mode is used||string| +|sessionExpiryInterval|Sets the Session Expiry Interval. This value, measured in seconds, defines the maximum time that the broker will maintain the session for once the client disconnects. Clients should only connect with a long Session Expiry interval if they intend to connect to the server at some later point in time. By default this value is -1 and so will not be sent, in this case, the session will not expire. If a 0 is sent, the session will end immediately once the Network Connection is closed. When the client has determined that it has no longer any use for the session, it should disconnect with a Session Expiry Interval set to 0.|-1|integer| +|willMqttProperties|Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The MQTT properties set for the message.||object| +|willPayload|Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The byte payload for the message.||string| +|willQos|Sets the Last Will and Testament (LWT) for the connection. 
In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The quality of service to publish the message at (0, 1 or 2).|1|integer| +|willRetained|Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. Whether or not the message should be retained.|false|boolean| +|willTopic|Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|client|To use a shared Paho client||object| +|customWebSocketHeaders|Sets the Custom WebSocket Headers for the WebSocket Connection.||object| +|executorServiceTimeout|Set the time in seconds that the executor service should wait when terminating before forcefully terminating. It is not recommended to change this value unless you are absolutely sure that you need to.|1|integer| +|httpsHostnameVerificationEnabled|Whether SSL HostnameVerifier is enabled or not. The default value is true.|true|boolean| +|password|Password to be used for authentication against the MQTT broker||string| +|socketFactory|Sets the SocketFactory to use. This allows an application to apply its own policies around the creation of network sockets. If using an SSL connection, an SSLSocketFactory can be used to supply application-specific security settings.||object| +|sslClientProps|Sets the SSL properties for the connection. Note that these properties are only valid if an implementation of the Java Secure Socket Extensions (JSSE) is available. These properties are not used if a custom SocketFactory has been set. The following properties can be used: com.ibm.ssl.protocol One of: SSL, SSLv3, TLS, TLSv1, SSL\_TLS. com.ibm.ssl.contextProvider Underlying JSSE provider. For example IBMJSSE2 or SunJSSE com.ibm.ssl.keyStore The name of the file that contains the KeyStore object that you want the KeyManager to use. 
For example /mydir/etc/key.p12 com.ibm.ssl.keyStorePassword The password for the KeyStore object that you want the KeyManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.keyStoreType Type of key store, for example PKCS12, JKS, or JCEKS. com.ibm.ssl.keyStoreProvider Key store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.trustStore The name of the file that contains the KeyStore object that you want the TrustManager to use. com.ibm.ssl.trustStorePassword The password for the TrustStore object that you want the TrustManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.trustStoreType The type of KeyStore object that you want the default TrustManager to use. Same possible values as keyStoreType. com.ibm.ssl.trustStoreProvider Trust store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.enabledCipherSuites A list of which ciphers are enabled. Values are dependent on the provider, for example: SSL\_RSA\_WITH\_AES\_128\_CBC\_SHA;SSL\_RSA\_WITH\_3DES\_EDE\_CBC\_SHA. com.ibm.ssl.keyManager Sets the algorithm that will be used to instantiate a KeyManagerFactory object instead of using the default algorithm available in the platform. Example values: IbmX509 or IBMJ9X509. com.ibm.ssl.trustManager Sets the algorithm that will be used to instantiate a TrustManagerFactory object instead of using the default algorithm available in the platform. 
Example values: PKIX or IBMJ9X509.||object|
+|sslHostnameVerifier|Sets the HostnameVerifier for the SSL connection. Note that it will be used after handshake on a connection, and you should take action yourself when hostname verification fails. There is no default HostnameVerifier||object|
+|userName|Username to be used for authentication against the MQTT broker||string|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|topic|Name of the topic||string|
+|automaticReconnect|Sets whether the client will automatically attempt to reconnect to the server if the connection is lost. If set to false, the client will not attempt to automatically reconnect to the server in the event that the connection is lost. If set to true, in the event that the connection is lost, the client will attempt to reconnect to the server. It will initially wait 1 second before it attempts to reconnect, for every failed reconnect attempt, the delay will double until it is at 2 minutes at which point the delay will stay at 2 minutes.|true|boolean|
+|brokerUrl|The URL of the MQTT broker.|tcp://localhost:1883|string|
+|cleanStart|Sets whether the client and server should remember state across restarts and reconnects. If set to false both the client and server will maintain state across restarts of the client, the server and the connection. As state is maintained: Message delivery will be reliable meeting the specified QOS even if the client, server or connection are restarted. The server will treat a subscription as durable. If set to true the client and server will not maintain state across restarts of the client, the server or the connection. This means message delivery to the specified QOS cannot be maintained if the client, server or connection are restarted, and the server will treat a subscription as non-durable|true|boolean|
+|clientId|MQTT client identifier. The identifier must be unique.||string|
+|connectionTimeout|Sets the connection timeout value. 
This value, measured in seconds, defines the maximum time interval the client will wait for the network connection to the MQTT server to be established. The default timeout is 30 seconds. A value of 0 disables timeout processing meaning the client will wait until the network connection is made successfully or fails.|30|integer| +|filePersistenceDirectory|Base directory used by file persistence. Will by default use user directory.||string| +|keepAliveInterval|Sets the keep alive interval. This value, measured in seconds, defines the maximum time interval between messages sent or received. It enables the client to detect if the server is no longer available, without having to wait for the TCP/IP timeout. The client will ensure that at least one message travels across the network within each keep alive period. In the absence of a data-related message during the time period, the client sends a very small ping message, which the server will acknowledge. A value of 0 disables keepalive processing in the client. The default value is 60 seconds|60|integer| +|maxReconnectDelay|Get the maximum time (in millis) to wait between reconnects|128000|integer| +|persistence|Client persistence to be used - memory or file.|MEMORY|object| +|qos|Client quality of service level (0-2).|2|integer| +|receiveMaximum|Sets the Receive Maximum. This value represents the limit of QoS 1 and QoS 2 publications that the client is willing to process concurrently. There is no mechanism to limit the number of QoS 0 publications that the Server might try to send. The default value is 65535|65535|integer| +|retained|Retain option|false|boolean| +|serverURIs|Set a list of one or more serverURIs the client may connect to. Multiple servers can be separated by comma. Each serverURI specifies the address of a server that the client may connect to. Two types of connection are supported tcp:// for a TCP connection and ssl:// for a TCP connection secured by SSL/TLS. 
For example: tcp://localhost:1883 ssl://localhost:8883 If the port is not specified, it will default to 1883 for tcp:// URIs, and 8883 for ssl:// URIs. If serverURIs is set then it overrides the serverURI parameter passed in on the constructor of the MQTT client. When an attempt to connect is initiated the client will start with the first serverURI in the list and work through the list until a connection is established with a server. If a connection cannot be made to any of the servers then the connect attempt fails. Specifying a list of servers that a client may connect to has several uses: High Availability and reliable message delivery Some MQTT servers support a high availability feature where two or more equal MQTT servers share state. An MQTT client can connect to any of the equal servers and be assured that messages are reliably delivered and durable subscriptions are maintained no matter which server the client connects to. The cleansession flag must be set to false if durable subscriptions and/or reliable message delivery is required. Hunt List A set of servers may be specified that are not equal (as in the high availability option). As no state is shared across the servers reliable message delivery and durable subscriptions are not valid. The cleansession flag must be set to true if the hunt list mode is used||string| +|sessionExpiryInterval|Sets the Session Expiry Interval. This value, measured in seconds, defines the maximum time that the broker will maintain the session for once the client disconnects. Clients should only connect with a long Session Expiry interval if they intend to connect to the server at some later point in time. By default this value is -1 and so will not be sent, in this case, the session will not expire. If a 0 is sent, the session will end immediately once the Network Connection is closed. 
When the client has determined that it has no longer any use for the session, it should disconnect with a Session Expiry Interval set to 0.|-1|integer| +|willMqttProperties|Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The MQTT properties set for the message.||object| +|willPayload|Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The byte payload for the message.||string| +|willQos|Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The quality of service to publish the message at (0, 1 or 2).|1|integer| +|willRetained|Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. Whether or not the message should be retained.|false|boolean| +|willTopic|Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. 
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|client|To use an existing mqtt client||object| +|customWebSocketHeaders|Sets the Custom WebSocket Headers for the WebSocket Connection.||object| +|executorServiceTimeout|Set the time in seconds that the executor service should wait when terminating before forcefully terminating. It is not recommended to change this value unless you are absolutely sure that you need to.|1|integer| +|httpsHostnameVerificationEnabled|Whether SSL HostnameVerifier is enabled or not. The default value is true.|true|boolean| +|password|Password to be used for authentication against the MQTT broker||string| +|socketFactory|Sets the SocketFactory to use. 
This allows an application to apply its own policies around the creation of network sockets. If using an SSL connection, an SSLSocketFactory can be used to supply application-specific security settings.||object| +|sslClientProps|Sets the SSL properties for the connection. Note that these properties are only valid if an implementation of the Java Secure Socket Extensions (JSSE) is available. These properties are not used if a custom SocketFactory has been set. The following properties can be used: com.ibm.ssl.protocol One of: SSL, SSLv3, TLS, TLSv1, SSL\_TLS. com.ibm.ssl.contextProvider Underlying JSSE provider. For example IBMJSSE2 or SunJSSE com.ibm.ssl.keyStore The name of the file that contains the KeyStore object that you want the KeyManager to use. For example /mydir/etc/key.p12 com.ibm.ssl.keyStorePassword The password for the KeyStore object that you want the KeyManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.keyStoreType Type of key store, for example PKCS12, JKS, or JCEKS. com.ibm.ssl.keyStoreProvider Key store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.trustStore The name of the file that contains the KeyStore object that you want the TrustManager to use. com.ibm.ssl.trustStorePassword The password for the TrustStore object that you want the TrustManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. 
com.ibm.ssl.trustStoreType The type of KeyStore object that you want the default TrustManager to use. Same possible values as keyStoreType. com.ibm.ssl.trustStoreProvider Trust store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.enabledCipherSuites A list of which ciphers are enabled. Values are dependent on the provider, for example: SSL\_RSA\_WITH\_AES\_128\_CBC\_SHA;SSL\_RSA\_WITH\_3DES\_EDE\_CBC\_SHA. com.ibm.ssl.keyManager Sets the algorithm that will be used to instantiate a KeyManagerFactory object instead of using the default algorithm available in the platform. Example values: IbmX509 or IBMJ9X509. com.ibm.ssl.trustManager Sets the algorithm that will be used to instantiate a TrustManagerFactory object instead of using the default algorithm available in the platform. Example values: PKIX or IBMJ9X509.||object|
+|sslHostnameVerifier|Sets the HostnameVerifier for the SSL connection. Note that it will be used after handshake on a connection, and you should take action yourself when hostname verification fails. There is no default HostnameVerifier||object|
+|userName|Username to be used for authentication against the MQTT broker||string|
diff --git a/camel-paho.md b/camel-paho.md new file mode 100644 index 0000000000000000000000000000000000000000..3c185dca7447bf53de393612daf3a1e1f7fb56a9 --- /dev/null +++ b/camel-paho.md @@ -0,0 +1,150 @@
+# Paho
+
+**Since Camel 2.16**
+
+**Both producer and consumer are supported**
+
+The Paho component provides a connector for the
+[MQTT](https://en.wikipedia.org/wiki/MQTT) messaging protocol using the
+[Eclipse Paho](https://eclipse.org/paho/) library. Paho is one of the
+most popular MQTT libraries, so if you would like to integrate MQTT with
+your Java project, the Camel Paho connector is the way to go. 
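Like the MQTT 5 component described earlier, this component exchanges raw `byte[]` payloads by default, and Camel's type converter can turn them into `String`s for you (see the payload section below). The round-trip itself is plain charset encoding; a sketch of what that conversion amounts to (plain Java, not Camel API):

```java
import java.nio.charset.StandardCharsets;

public class PayloadRoundTrip {
    public static void main(String[] args) {
        // Producer side: the String body is encoded to the bytes
        // that actually travel in the MQTT message.
        byte[] wire = "message".getBytes(StandardCharsets.UTF_8);

        // Consumer side: the converter decodes the bytes back to text.
        String decoded = new String(wire, StandardCharsets.UTF_8);

        System.out.println(decoded); // -> message
    }
}
```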
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-paho</artifactId>
+        <version>x.y.z</version>
+    </dependency>
+
+# URI format
+
+    paho:topic[?options]
+
+Where `topic` is the name of the topic.
+
+# Default payload type
+
+By default, the Camel Paho component operates on the binary payloads
+extracted out of (or put into) the MQTT message:
+
+    // Receive payload
+    byte[] payload = (byte[]) consumerTemplate.receiveBody("paho:topic");
+
+    // Send payload
+    byte[] payload = "message".getBytes();
+    producerTemplate.sendBody("paho:topic", payload);
+
+Of course, Camel's built-in [type conversion
+API](#manual::type-converter.adoc) can perform the automatic data type
+transformations for you. In the example below Camel automatically
+converts the binary payload into a `String` (and conversely):
+
+    // Receive payload
+    String payload = consumerTemplate.receiveBody("paho:topic", String.class);
+
+    // Send payload
+    String payload = "message";
+    producerTemplate.sendBody("paho:topic", payload);
+
+# Samples
+
+For example, the following snippet reads messages from the MQTT broker
+installed on the same host as the Camel router:
+
+    from("paho:some/queue")
+        .to("mock:test");
+
+While the snippet below sends a message to the MQTT broker:
+
+    from("direct:test")
+        .to("paho:some/target/queue");
+
+For example, this is how to read messages from a remote MQTT broker:
+
+    from("paho:some/queue?brokerUrl=tcp://iot.eclipse.org:1883")
+        .to("mock:test");
+
+And here we override the default topic and replace it with a dynamic one
+taken from a header:
+
+    from("direct:test")
+        .setHeader(PahoConstants.CAMEL_PAHO_OVERRIDE_TOPIC, simple("${header.customerId}"))
+        .to("paho:some/target/queue");
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|automaticReconnect|Sets whether the client will automatically attempt to reconnect to the server if the connection is lost. 
If set to false, the client will not attempt to automatically reconnect to the server in the event that the connection is lost. If set to true, in the event that the connection is lost, the client will attempt to reconnect to the server. It will initially wait 1 second before it attempts to reconnect, for every failed reconnect attempt, the delay will double until it is at 2 minutes at which point the delay will stay at 2 minutes.|true|boolean| +|brokerUrl|The URL of the MQTT broker.|tcp://localhost:1883|string| +|cleanSession|Sets whether the client and server should remember state across restarts and reconnects. If set to false both the client and server will maintain state across restarts of the client, the server and the connection. As state is maintained: Message delivery will be reliable meeting the specified QOS even if the client, server or connection are restarted. The server will treat a subscription as durable. If set to true the client and server will not maintain state across restarts of the client, the server or the connection. This means Message delivery to the specified QOS cannot be maintained if the client, server or connection are restarted The server will treat a subscription as non-durable|true|boolean| +|clientId|MQTT client identifier. The identifier must be unique.||string| +|configuration|To use the shared Paho configuration||object| +|connectionTimeout|Sets the connection timeout value. This value, measured in seconds, defines the maximum time interval the client will wait for the network connection to the MQTT server to be established. The default timeout is 30 seconds. A value of 0 disables timeout processing meaning the client will wait until the network connection is made successfully or fails.|30|integer| +|filePersistenceDirectory|Base directory used by file persistence. Will by default use user directory.||string| +|keepAliveInterval|Sets the keep alive interval. 
This value, measured in seconds, defines the maximum time interval between messages sent or received. It enables the client to detect if the server is no longer available, without having to wait for the TCP/IP timeout. The client will ensure that at least one message travels across the network within each keep alive period. In the absence of a data-related message during the time period, the client sends a very small ping message, which the server will acknowledge. A value of 0 disables keepalive processing in the client. The default value is 60 seconds|60|integer| +|maxInflight|Sets the max inflight. please increase this value in a high traffic environment. The default value is 10|10|integer| +|maxReconnectDelay|Get the maximum time (in millis) to wait between reconnects|128000|integer| +|mqttVersion|Sets the MQTT version. The default action is to connect with version 3.1.1, and to fall back to 3.1 if that fails. Version 3.1.1 or 3.1 can be selected specifically, with no fall back, by using the MQTT\_VERSION\_3\_1\_1 or MQTT\_VERSION\_3\_1 options respectively.||integer| +|persistence|Client persistence to be used - memory or file.|MEMORY|object| +|qos|Client quality of service level (0-2).|2|integer| +|retained|Retain option|false|boolean| +|serverURIs|Set a list of one or more serverURIs the client may connect to. Multiple servers can be separated by comma. Each serverURI specifies the address of a server that the client may connect to. Two types of connection are supported tcp:// for a TCP connection and ssl:// for a TCP connection secured by SSL/TLS. For example: tcp://localhost:1883 ssl://localhost:8883 If the port is not specified, it will default to 1883 for tcp:// URIs, and 8883 for ssl:// URIs. If serverURIs is set then it overrides the serverURI parameter passed in on the constructor of the MQTT client. 
When an attempt to connect is initiated the client will start with the first serverURI in the list and work through the list until a connection is established with a server. If a connection cannot be made to any of the servers then the connect attempt fails. Specifying a list of servers that a client may connect to has several uses: High Availability and reliable message delivery Some MQTT servers support a high availability feature where two or more equal MQTT servers share state. An MQTT client can connect to any of the equal servers and be assured that messages are reliably delivered and durable subscriptions are maintained no matter which server the client connects to. The cleansession flag must be set to false if durable subscriptions and/or reliable message delivery is required. Hunt List A set of servers may be specified that are not equal (as in the high availability option). As no state is shared across the servers reliable message delivery and durable subscriptions are not valid. The cleansession flag must be set to true if the hunt list mode is used||string| +|willPayload|Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. Sets the message for the LWT.||string| +|willQos|Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. Sets the quality of service to publish the message at (0, 1 or 2).||integer| +|willRetained|Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. 
Sets whether or not the message should be retained.|false|boolean| +|willTopic|Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. Sets the topic that the willPayload will be published to.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|client|To use a shared Paho client||object| +|customWebSocketHeaders|Sets the Custom WebSocket Headers for the WebSocket Connection.||object| +|executorServiceTimeout|Set the time in seconds that the executor service should wait when terminating before forcefully terminating. It is not recommended to change this value unless you are absolutely sure that you need to.|1|integer| +|httpsHostnameVerificationEnabled|Whether SSL HostnameVerifier is enabled or not. The default value is true.|true|boolean| +|password|Password to be used for authentication against the MQTT broker||string| +|socketFactory|Sets the SocketFactory to use. This allows an application to apply its own policies around the creation of network sockets. If using an SSL connection, an SSLSocketFactory can be used to supply application-specific security settings.||object| +|sslClientProps|Sets the SSL properties for the connection. Note that these properties are only valid if an implementation of the Java Secure Socket Extensions (JSSE) is available. These properties are not used if a custom SocketFactory has been set. The following properties can be used: com.ibm.ssl.protocol One of: SSL, SSLv3, TLS, TLSv1, SSL\_TLS. com.ibm.ssl.contextProvider Underlying JSSE provider. For example IBMJSSE2 or SunJSSE com.ibm.ssl.keyStore The name of the file that contains the KeyStore object that you want the KeyManager to use. For example /mydir/etc/key.p12 com.ibm.ssl.keyStorePassword The password for the KeyStore object that you want the KeyManager to use. 
The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.keyStoreType Type of key store, for example PKCS12, JKS, or JCEKS. com.ibm.ssl.keyStoreProvider Key store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.trustStore The name of the file that contains the KeyStore object that you want the TrustManager to use. com.ibm.ssl.trustStorePassword The password for the TrustStore object that you want the TrustManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.trustStoreType The type of KeyStore object that you want the default TrustManager to use. Same possible values as keyStoreType. com.ibm.ssl.trustStoreProvider Trust store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.enabledCipherSuites A list of which ciphers are enabled. Values are dependent on the provider, for example: SSL\_RSA\_WITH\_AES\_128\_CBC\_SHA;SSL\_RSA\_WITH\_3DES\_EDE\_CBC\_SHA. com.ibm.ssl.keyManager Sets the algorithm that will be used to instantiate a KeyManagerFactory object instead of using the default algorithm available in the platform. Example values: IbmX509 or IBMJ9X509. com.ibm.ssl.trustManager Sets the algorithm that will be used to instantiate a TrustManagerFactory object instead of using the default algorithm available in the platform. Example values: PKIX or IBMJ9X509.||object| +|sslHostnameVerifier|Sets the HostnameVerifier for the SSL connection. 
Note that it will be used after handshake on a connection and you should do actions by yourself when hostname is verified error. There is no default HostnameVerifier||object| +|userName|Username to be used for authentication against the MQTT broker||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|topic|Name of the topic||string| +|automaticReconnect|Sets whether the client will automatically attempt to reconnect to the server if the connection is lost. If set to false, the client will not attempt to automatically reconnect to the server in the event that the connection is lost. If set to true, in the event that the connection is lost, the client will attempt to reconnect to the server. It will initially wait 1 second before it attempts to reconnect, for every failed reconnect attempt, the delay will double until it is at 2 minutes at which point the delay will stay at 2 minutes.|true|boolean| +|brokerUrl|The URL of the MQTT broker.|tcp://localhost:1883|string| +|cleanSession|Sets whether the client and server should remember state across restarts and reconnects. If set to false both the client and server will maintain state across restarts of the client, the server and the connection. As state is maintained: Message delivery will be reliable meeting the specified QOS even if the client, server or connection are restarted. The server will treat a subscription as durable. If set to true the client and server will not maintain state across restarts of the client, the server or the connection. This means Message delivery to the specified QOS cannot be maintained if the client, server or connection are restarted The server will treat a subscription as non-durable|true|boolean| +|clientId|MQTT client identifier. The identifier must be unique.||string| +|connectionTimeout|Sets the connection timeout value. 
This value, measured in seconds, defines the maximum time interval the client will wait for the network connection to the MQTT server to be established. The default timeout is 30 seconds. A value of 0 disables timeout processing meaning the client will wait until the network connection is made successfully or fails.|30|integer| +|filePersistenceDirectory|Base directory used by file persistence. Will by default use user directory.||string| +|keepAliveInterval|Sets the keep alive interval. This value, measured in seconds, defines the maximum time interval between messages sent or received. It enables the client to detect if the server is no longer available, without having to wait for the TCP/IP timeout. The client will ensure that at least one message travels across the network within each keep alive period. In the absence of a data-related message during the time period, the client sends a very small ping message, which the server will acknowledge. A value of 0 disables keepalive processing in the client. The default value is 60 seconds|60|integer| +|maxInflight|Sets the max inflight. please increase this value in a high traffic environment. The default value is 10|10|integer| +|maxReconnectDelay|Get the maximum time (in millis) to wait between reconnects|128000|integer| +|mqttVersion|Sets the MQTT version. The default action is to connect with version 3.1.1, and to fall back to 3.1 if that fails. Version 3.1.1 or 3.1 can be selected specifically, with no fall back, by using the MQTT\_VERSION\_3\_1\_1 or MQTT\_VERSION\_3\_1 options respectively.||integer| +|persistence|Client persistence to be used - memory or file.|MEMORY|object| +|qos|Client quality of service level (0-2).|2|integer| +|retained|Retain option|false|boolean| +|serverURIs|Set a list of one or more serverURIs the client may connect to. Multiple servers can be separated by comma. Each serverURI specifies the address of a server that the client may connect to. 
Two types of connection are supported tcp:// for a TCP connection and ssl:// for a TCP connection secured by SSL/TLS. For example: tcp://localhost:1883 ssl://localhost:8883 If the port is not specified, it will default to 1883 for tcp:// URIs, and 8883 for ssl:// URIs. If serverURIs is set then it overrides the serverURI parameter passed in on the constructor of the MQTT client. When an attempt to connect is initiated the client will start with the first serverURI in the list and work through the list until a connection is established with a server. If a connection cannot be made to any of the servers then the connect attempt fails. Specifying a list of servers that a client may connect to has several uses: High Availability and reliable message delivery Some MQTT servers support a high availability feature where two or more equal MQTT servers share state. An MQTT client can connect to any of the equal servers and be assured that messages are reliably delivered and durable subscriptions are maintained no matter which server the client connects to. The cleansession flag must be set to false if durable subscriptions and/or reliable message delivery is required. Hunt List A set of servers may be specified that are not equal (as in the high availability option). As no state is shared across the servers reliable message delivery and durable subscriptions are not valid. The cleansession flag must be set to true if the hunt list mode is used||string| +|willPayload|Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. Sets the message for the LWT.||string| +|willQos|Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. 
Sets the quality of service to publish the message at (0, 1 or 2).||integer| +|willRetained|Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. Sets whether or not the message should be retained.|false|boolean| +|willTopic|Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. Sets the topic that the willPayload will be published to.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|client|To use an existing mqtt client||object| +|customWebSocketHeaders|Sets the Custom WebSocket Headers for the WebSocket Connection.||object| +|executorServiceTimeout|Set the time in seconds that the executor service should wait when terminating before forcefully terminating. It is not recommended to change this value unless you are absolutely sure that you need to.|1|integer| +|httpsHostnameVerificationEnabled|Whether SSL HostnameVerifier is enabled or not. The default value is true.|true|boolean| +|password|Password to be used for authentication against the MQTT broker||string| +|socketFactory|Sets the SocketFactory to use. This allows an application to apply its own policies around the creation of network sockets. If using an SSL connection, an SSLSocketFactory can be used to supply application-specific security settings.||object| +|sslClientProps|Sets the SSL properties for the connection. Note that these properties are only valid if an implementation of the Java Secure Socket Extensions (JSSE) is available. These properties are not used if a custom SocketFactory has been set. The following properties can be used: com.ibm.ssl.protocol One of: SSL, SSLv3, TLS, TLSv1, SSL\_TLS. com.ibm.ssl.contextProvider Underlying JSSE provider. For example IBMJSSE2 or SunJSSE com.ibm.ssl.keyStore The name of the file that contains the KeyStore object that you want the KeyManager to use. 
For example /mydir/etc/key.p12 com.ibm.ssl.keyStorePassword The password for the KeyStore object that you want the KeyManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.keyStoreType Type of key store, for example PKCS12, JKS, or JCEKS. com.ibm.ssl.keyStoreProvider Key store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.trustStore The name of the file that contains the KeyStore object that you want the TrustManager to use. com.ibm.ssl.trustStorePassword The password for the TrustStore object that you want the TrustManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.trustStoreType The type of KeyStore object that you want the default TrustManager to use. Same possible values as keyStoreType. com.ibm.ssl.trustStoreProvider Trust store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.enabledCipherSuites A list of which ciphers are enabled. Values are dependent on the provider, for example: SSL\_RSA\_WITH\_AES\_128\_CBC\_SHA;SSL\_RSA\_WITH\_3DES\_EDE\_CBC\_SHA. com.ibm.ssl.keyManager Sets the algorithm that will be used to instantiate a KeyManagerFactory object instead of using the default algorithm available in the platform. Example values: IbmX509 or IBMJ9X509. com.ibm.ssl.trustManager Sets the algorithm that will be used to instantiate a TrustManagerFactory object instead of using the default algorithm available in the platform. 
Example values: PKIX or IBMJ9X509.||object|
+|sslHostnameVerifier|Sets the HostnameVerifier for the SSL connection. Note that it is used after the handshake on a connection, and you must handle the error yourself if hostname verification fails. There is no default HostnameVerifier.||object|
+|userName|Username to be used for authentication against the MQTT broker||string|
diff --git a/camel-pdf.md b/camel-pdf.md
new file mode 100644
index 0000000000000000000000000000000000000000..f4859e10d645139ba798a9ca0068ced7cf2728d5
--- /dev/null
+++ b/camel-pdf.md
@@ -0,0 +1,69 @@
+# Pdf
+
+**Since Camel 2.16**
+
+**Only producer is supported**
+
+The PDF component provides the ability to create, modify or extract
+content from PDF documents. This component uses [Apache
+PDFBox](https://pdfbox.apache.org/) as the underlying library to work
+with PDF documents.
+
+To use the PDF component, Maven users will need to add the following
+dependency to their `pom.xml`:
+
+**pom.xml**
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-pdf</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+The PDF component only supports producer endpoints.
+
+    pdf:operation[?options]
+
+# Type converter
+
+Since Camel 4.8, the component is capable of doing simple document
+conversions. For instance, suppose you are receiving a PDF as a byte
+array:
+
+    from("direct:start")
+        .to("pdf:extractText")
+        .to("mock:result");
+
+It is now possible to get the body as a `PDDocument` by using
+`PDDocument doc = exchange.getIn().getBody(PDDocument.class);`, which
+saves the trouble of converting the byte array to a document.
+
+Note that this only works for unprotected PDF files. Password-protected
+files still need to be converted manually.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|operation|Operation type||object| +|font|Font|HELVETICA|string| +|fontSize|Font size in pixels|14.0|number| +|marginBottom|Margin bottom in pixels|20|integer| +|marginLeft|Margin left in pixels|20|integer| +|marginRight|Margin right in pixels|40|integer| +|marginTop|Margin top in pixels|20|integer| +|pageSize|Page size|A4|string| +|textProcessingFactory|Text processing to use. autoFormatting: Text is getting sliced by words, then max amount of words that fits in the line will be written into pdf document. With this strategy all words that doesn't fit in the line will be moved to the new line. lineTermination: Builds set of classes for line-termination writing strategy. Text getting sliced by line termination symbol and then it will be written regardless it fits in the line or not.|lineTermination|object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-pg-replication-slot.md b/camel-pg-replication-slot.md
new file mode 100644
index 0000000000000000000000000000000000000000..31effa3271758f123942b3c95d2ebe5d10880c8c
--- /dev/null
+++ b/camel-pg-replication-slot.md
@@ -0,0 +1,90 @@
+# Pg-replication-slot
+
+**Since Camel 3.0**
+
+**Only consumer is supported**
+
+This is a component for Apache Camel that allows consuming from
+PostgreSQL replication slots. The component works with PostgreSQL 10 or
+later.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-pg-replication-slot</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+The pg-replication-slot component uses the following endpoint URI
+notation:
+
+    pg-replication-slot://host:port/database/slot:plugin[?parameters]
+
+# Examples
+
+    from("pg-replication-slot://localhost:5432/finance/sync_slot:test_decoding?user={{username}}&password={{password}}&slotOptions.skip-empty-xacts=true&slotOptions.include-xids=false")
+        .to("mock:result");
+
+# Tips
+
+PostgreSQL can generate a huge number of empty transactions on certain
+operations (e.g. `VACUUM`). These transactions can congest your route.
+Using the `greedy=true` query parameter can help with this problem. It
+will help your route filter out empty transactions quickly, without
+waiting for the `delay`\*`timeUnit` interval between each exchange.
+
+The order of the messages is guaranteed, but the same message might come
+more than once. 
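
Because delivery is effectively at-least-once, downstream writes should be idempotent. Here is a minimal in-memory sketch of the idea (the class, method, and key names are made up for illustration; a real sync would issue an `UPSERT` against the target database):

```java
import java.util.HashMap;
import java.util.Map;

public class IdempotentSync {
    // In-memory stand-in for a target table keyed by primary key.
    private final Map<String, String> table = new HashMap<>();

    // UPSERT-style write: insert or overwrite, so replaying the same
    // replication message leaves the target in the same state.
    public void apply(String id, String value) {
        table.put(id, value);
    }

    public String get(String id) {
        return table.get(id);
    }

    public int rowCount() {
        return table.size();
    }

    public static void main(String[] args) {
        IdempotentSync sync = new IdempotentSync();
        sync.apply("42", "alice");
        sync.apply("42", "alice"); // duplicate delivery: no effect
        // prints "1 row(s), id 42 -> alice"
        System.out.println(sync.rowCount() + " row(s), id 42 -> " + sync.get("42"));
    }
}
```
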
So, for example, if you’re using this component to sync +data from PostgreSQL to another database, make sure your operations are +idempotent (e.g., use `UPSERT` instead of `INSERT`, etc). This will make +sure repeated messages won’t affect your system negatively. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. 
You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|slot|Replication Slot name||string|
+|host|Postgres host|localhost|string|
+|port|Postgres port|5432|integer|
+|database|Postgres database name||string|
+|outputPlugin|Output plugin name||string|
+|password|Postgres password||string|
+|user|Postgres user|postgres|string|
+|sendEmptyMessageWhenIdle|If the polling consumer did not poll any messages, you can enable this option to send an empty message (no body) instead.|false|boolean|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|autoCreateSlot|Auto create slot if it does not exist|true|boolean| +|slotOptions|Slot options to be passed to the output plugin.||object| +|statusInterval|Specifies the number of seconds between status packets sent back to Postgres server.|10|integer| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. 
This option allows you to configure the logging level for that.|TRACE|object|
+|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object|
+|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object|
+|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object|
+|startScheduler|Whether the scheduler should be auto started.|true|boolean|
+|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object|
+|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean|
diff --git a/camel-pgevent.md b/camel-pgevent.md
new file mode 100644
index 0000000000000000000000000000000000000000..dbed71c48de404520affa9b8e05f4acdc2c3745f
--- /dev/null
+++ b/camel-pgevent.md
@@ -0,0 +1,63 @@
+# Pgevent
+
+**Since Camel 2.15**
+
+**Both producer and consumer are supported**
+
+This is a component for Apache Camel that allows for Producing/Consuming
+PostgreSQL events related to the LISTEN/NOTIFY commands added since
+PostgreSQL 8.3.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+
+    org.apache.camel
+    camel-pgevent
+    x.x.x
+
+
+# URI format
+
+The pgevent component uses the following two styles of endpoint URI
+notation:
+
+    pgevent:datasource[?parameters]
+    pgevent://host:port/database/channel[?parameters]
+
+# Common problems
+
+## Unable to connect to PostgreSQL database using DataSource
+
+Using the driver provided by PostgreSQL itself (`jdbc:postgresql:/...`)
+when using a DataSource to connect to a PostgreSQL database does not
+work.
+
+Please use the pgjdbc-ng driver (`jdbc:pgsql:/...`) instead.
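The two URI styles above can be sketched as routes. A minimal, hypothetical example (host, database, channel name, and credentials are illustrative, not from this document) of consuming and producing NOTIFY events in the same Java DSL style as the other components in this file; the `user` and `pass` options are listed in the endpoint table below:

```java
// Consume PostgreSQL NOTIFY events from channel "camel" of database "testdb"
// (all names and credentials below are placeholders)
from("pgevent://localhost:5432/testdb/camel?user=postgres&pass=secret")
    .log("Received event: ${body}");

// Produce an event: the message body becomes the NOTIFY payload
from("timer:notify?period=5000")
    .setBody(constant("ping"))
    .to("pgevent://localhost:5432/testdb/camel?user=postgres&pass=secret");
```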
+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|host|To connect using hostname and port to the database.|localhost|string| +|port|To connect using hostname and port to the database.|5432|integer| +|database|The database name. The database name can take any characters because it is sent as a quoted identifier. It is part of the endpoint URI, so diacritical marks and non-Latin letters have to be URL encoded.||string| +|channel|The channel name||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|datasource|To connect using the given javax.sql.DataSource instead of using hostname and port.||object|
+|pass|Password for login||string|
+|user|Username for login|postgres|string|
diff --git a/camel-pinecone.md b/camel-pinecone.md
new file mode 100644
index 0000000000000000000000000000000000000000..636784fb67fd255b949becafb0eacab3844a99a4
--- /dev/null
+++ b/camel-pinecone.md
@@ -0,0 +1,34 @@
+# Pinecone
+
+**Since Camel 4.6**
+
+**Only producer is supported**
+
+The Pinecone component provides support for interacting with the
+[Pinecone Vector Database](https://pinecone.io/).
+
+# URI format
+
+    pinecone:collection[?options]
+
+Where **collection** represents a named set of points (vectors with a
+payload) defined in your database.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|configuration|The configuration.||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|token|Sets the API key to use for authentication||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|collection|The collection Name||string| +|token|Sets the API key to use for authentication||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-platform-http.md b/camel-platform-http.md new file mode 100644 index 0000000000000000000000000000000000000000..eaf8e083d2cd21a8a67ae544ab8d71e7bc4726bc --- /dev/null +++ b/camel-platform-http.md @@ -0,0 +1,89 @@ +# Platform-http + +**Since Camel 3.0** + +**Only consumer is supported** + +The Platform HTTP is used to allow Camel to use the existing HTTP server +from the runtime. For example, when running Camel on Spring Boot, +Quarkus, or other runtimes. 
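Since the component reuses the runtime's own HTTP server, a consumer route only needs to declare the path it serves. A minimal sketch in the Java DSL (the path, method restriction, and response text are illustrative; `httpMethodRestrict` is documented in the endpoint table below):

```java
// Serve HTTP GET requests on /hello using the runtime's existing HTTP server
// (path and response text are placeholders)
from("platform-http:/hello?httpMethodRestrict=GET")
    .setBody(constant("Hello from Camel!"));
```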
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+
+    org.apache.camel
+    camel-platform-http
+    x.x.x
+
+
+# Platform HTTP Provider
+
+To use Platform HTTP, a provider (engine) is required to be available on
+the classpath. The purpose is to have drivers for different runtimes
+such as Quarkus or Spring Boot.
+
+To use it with different runtimes:
+
+Quarkus
+
+    org.apache.camel.quarkus
+    camel-quarkus-platform-http
+    x.x.x
+
+Spring Boot
+
+    org.apache.camel.springboot
+    camel-platform-http-starter
+    x.x.x
+
+# Implementing a reverse proxy
+
+The Platform HTTP component can act as a reverse proxy. In that case, some
+headers are populated from the absolute URL received on the request line
+of the HTTP request. Those headers are specific to the underlying
+platform.
+
+At the moment, this feature is only supported on Quarkus, implemented in
+the `camel-platform-http-vertx` component.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled.
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|engine|An HTTP Server engine implementation to serve the requests||object| +|headerFilterStrategy|To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message.||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|path|The path under which this endpoint serves the HTTP requests, for proxy use 'proxy'||string| +|consumes|The content type this endpoint accepts as an input, such as application/xml or application/json. null or \*/\* mean no restriction.||string| +|cookieDomain|Sets which server can receive cookies.||string| +|cookieHttpOnly|Sets whether to prevent client side scripts from accessing created cookies.|false|boolean| +|cookieMaxAge|Sets the maximum cookie age in seconds.||integer| +|cookiePath|Sets the URL path that must exist in the requested URL in order to send the Cookie.|/|string| +|cookieSameSite|Sets whether to prevent the browser from sending cookies along with cross-site requests.|Lax|object| +|cookieSecure|Sets whether the cookie is only sent to the server with an encrypted request over HTTPS.|false|boolean| +|httpMethodRestrict|A comma separated list of HTTP methods to serve, e.g. GET,POST . 
If no methods are specified, all methods will be served.||string| +|matchOnUriPrefix|Whether or not the consumer should try to find a target consumer by matching the URI prefix if no exact match is found.|false|boolean| +|muteException|If enabled and an Exchange failed processing on the consumer side the response's body won't contain the exception's stack trace.|true|boolean| +|produces|The content type this endpoint produces, such as application/xml or application/json.||string| +|useCookieHandler|Whether to enable the Cookie Handler that allows Cookie addition, expiry, and retrieval (currently only supported by camel-platform-http-vertx)|false|boolean| +|useStreaming|Whether to use streaming for large requests and responses (currently only supported by camel-platform-http-vertx)|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|fileNameExtWhitelist|A comma or whitespace separated list of file extensions. Uploads having these extensions will be stored locally. Null value or asterisk (*) will allow all files.||string|
+|headerFilterStrategy|To use a custom HeaderFilterStrategy to filter headers to and from Camel message.||object|
+|platformHttpEngine|An HTTP Server engine implementation to serve the requests of this endpoint.||object|
diff --git a/camel-plc4x.md b/camel-plc4x.md
new file mode 100644
index 0000000000000000000000000000000000000000..e1c21c5d72a6a350dc3397fc2925b7bef7154e93
--- /dev/null
+++ b/camel-plc4x.md
@@ -0,0 +1,110 @@
+# Plc4x
+
+**Since Camel 3.20**
+
+**Both producer and consumer are supported**
+
+The Camel Component for PLC4X allows you to create routes using the
+PLC4X API to read from a Programmable Logic Controller (PLC) device or
+write to it.
+
+It supports various protocols by adding the driver dependencies:
+
+- Allen Bradley ETH
+
+- Automation Device Specification (ADS)
+
+- CANopen
+
+- EtherNet/IP
+
+- Firmata
+
+- KNXnet/IP
+
+- Modbus (TCP/UDP/Serial)
+
+- Open Platform Communications Unified Architecture (OPC UA)
+
+- Step7 (S7)
+
+The list of supported protocols in
+[PLC4X](https://plc4x.apache.org) is growing. There is a good chance that
+a protocol will work out of the box just by adding the driver dependency.
+You can check the list
+[here](https://plc4x.apache.org/users/protocols/index.html).
+
+# URI Format
+
+    plc4x://driver[?options]
+
+You can append query options to the URI in the following format:
+`?option=value&option2=value&...`.
+
+# Dependencies
+
+Maven users will need to add the following dependency to their
+`pom.xml`.
+
+**pom.xml**
+
+
+    org.apache.camel
+    camel-plc4x
+    ${camel-version}
+
+
+where `${camel-version}` must be replaced by the actual version of
+Camel.
+
+# Consumer
+
+The consumer supports one-time reading or Triggered Reading. To read
+from the PLC, use a `Map<String, String>` containing the Alias and
+Queries for the Data you want (tags).
+
+You can configure the *tags* using `tag.key=value` in the URI, and you
+can repeat this for multiple tags.
+
+The Body created by the Consumer will be a `Map<String, Object>`
+containing the Aliases and their associated value read from the PLC.
+
+# Polling Consumer
+
+The polling consumer supports consecutive reading. The input and output
+are the same as for the regular consumer.
+
+# Producer
+
+To write data to the PLC, we also use a `Map`. The difference with the
+Producer is that the `Value` of the Map has also to be a `Map`. Also,
+this `Map` has to be set into the `Body` of the `Message`.
+
+The used `Map` would be a `Map<String, Map<String, Object>>` where the
+inner `Map<String, Object>` represents the Query and the data we want to
+write to it.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases.
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|driver|PLC4X connection string for the connection to the target||string| +|autoReconnect|Whether to reconnect when no connection is present upon doing a request|false|boolean| +|period|Interval on which the Trigger should be checked||integer| +|tags|Tags as key/values from the Map to use in query||object| +|trigger|Query to a trigger. On a rising edge of the trigger, the tags will be read once||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-pubnub.md b/camel-pubnub.md new file mode 100644 index 0000000000000000000000000000000000000000..69503b8672f60e532ec2671639f7aaa37026cf20 --- /dev/null +++ b/camel-pubnub.md @@ -0,0 +1,175 @@ +# Pubnub + +**Since Camel 2.19** + +**Both producer and consumer are supported** + +Camel PubNub component can be used to communicate with the +[PubNub](https://www.pubnub.com/) data stream network for connected +devices. This component uses pubnub [java +library](https://github.com/pubnub/java). 
+ +Use cases include: + +- Chat rooms: Sending and receiving messages + +- Locations and Connected cars: dispatching taxi cabs + +- Smart sensors: Receiving data from a sensor for data visualizations + +- Health: Monitoring heart rate from a patient’s wearable device + +- Multiplayer games + +- Interactive media: audience-participating voting system + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-pubnub + x.x.x + + + +# URI format + + pubnub:channel[?options] + +Where **channel** is the PubNub channel to publish or subscribe to. + +# Message body + +The message body can contain any JSON serializable data, including +Objects, Arrays, Integers, and Strings. Message data should not contain +special Java V4 classes or functions as these will not serialize. String +content can include any single-byte or multibyte UTF-8. + +Object serialization when sending is done automatically. Pass the full +object as the message payload. PubNub will take care of object +serialization. + +When receiving the message body uses objects provided by the PubNub API. + +# Examples + +## Publishing events + +Default operation when producing. The following snippet publishes the +event generated by PojoBean to the channel iot. + + from("timer:mytimer") + // generate some data as POJO. + .bean(PojoBean.class) + .to("pubnub:iot?publishKey=mypublishKey"); + +## Fire events aka BLOCKS Event Handlers + +See [blocks catalog](https://www.pubnub.com/blocks-catalog/) for all +kinds of serverless functions that can be invoked. 
Example of a
+geolocation lookup:
+
+    from("timer:geotimer")
+        .process(exchange -> exchange.getIn().setBody(new Foo("bar", "TEXT")))
+        .to("pubnub:eon-maps-geolocation-input?operation=fire&publishKey=mypubkey&subscribeKey=mysubkey");
+
+    from("pubnub:eon-map-geolocation-output?subscribeKey=mysubkey")
+        // geolocation output will be logged here
+        .log("${body}");
+
+## Subscribing to events
+
+The following snippet listens for events on the iot channel. If you
+add the option withPresence, you will also receive channel join, leave,
+and similar presence events.
+
+    from("pubnub:iot?subscribeKey=mySubscribeKey")
+        .log("${body}")
+        .to("mock:result");
+
+## Performing operations
+
+- `herenow`: obtain information about the current state of a channel
+  including a list of unique user-ids currently subscribed to the
+  channel and the total occupancy count of the channel:
+
+
+
+    from("direct:control")
+        .to("pubnub:myChannel?publishKey=mypublishKey&subscribeKey=mySubscribeKey&operation=herenow")
+        .to("mock:result");
+
+- `wherenow`: obtain information about the current list of channels to
+  which a uuid is subscribed:
+
+
+
+    from("direct:control")
+        .to("pubnub:myChannel?publishKey=mypublishKey&subscribeKey=mySubscribeKey&operation=wherenow&uuid=spyonme")
+        .to("mock:result");
+
+- `setstate`: used to set key/value pairs specific to a subscriber
+  uuid:
+
+
+
+    from("direct:control")
+        .bean(StateGenerator.class)
+        .to("pubnub:myChannel?publishKey=mypublishKey&subscribeKey=mySubscribeKey&operation=setstate&uuid=myuuid");
+
+- `gethistory`: fetches historical messages of a channel:
+
+
+
+    from("direct:control")
+        .to("pubnub:myChannel?publishKey=mypublishKey&subscribeKey=mySubscribeKey&operation=gethistory");
+
+There are a couple of examples in the test directory that show some of
+the PubNub features. They require a PubNub account, from where you can
+obtain a publish and a subscribe key.
+ +The example PubNubSensorExample already contains a subscribe key +provided by PubNub, so this is ready to run without an account. The +example illustrates the PubNub component subscribing to an infinite +stream of sensor data. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|configuration|The component configurations||object| +|uuid|UUID to be used as a device identifier, a default UUID is generated if not passed.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|withPresence|Also subscribe to related presence information|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|operation|The operation to perform. 
PUBLISH: Default. Send a message to all subscribers of a channel. FIRE: allows the client to send a message to BLOCKS Event Handlers. These messages will go directly to any Event Handlers registered on the channel. HERENOW: Obtain information about the current state of a channel including a list of unique user-ids currently subscribed to the channel and the total occupancy count. GETSTATE: Used to get key/value pairs specific to a subscriber uuid. State information is supplied as a JSON object of key/value pairs SETSTATE: Used to set key/value pairs specific to a subscriber uuid GETHISTORY: Fetches historical messages of a channel.||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|authKey|If Access Manager is utilized, client will use this authKey in all restricted requests.||string| +|cipherKey|If cipher is passed, all communications to/from PubNub will be encrypted.||string| +|publishKey|The publish key obtained from your PubNub account. Required when publishing messages.||string| +|secretKey|The secret key used for message signing.||string| +|secure|Use SSL for secure transmission.|true|boolean| +|subscribeKey|The subscribe key obtained from your PubNub account. 
Required when subscribing to channels or listening for presence events||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|channel|The channel used for subscribing/publishing events||string| +|uuid|UUID to be used as a device identifier, a default UUID is generated if not passed.||string| +|withPresence|Also subscribe to related presence information|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|operation|The operation to perform. PUBLISH: Default. Send a message to all subscribers of a channel. FIRE: allows the client to send a message to BLOCKS Event Handlers. These messages will go directly to any Event Handlers registered on the channel. 
HERENOW: Obtain information about the current state of a channel including a list of unique user-ids currently subscribed to the channel and the total occupancy count. GETSTATE: Used to get key/value pairs specific to a subscriber uuid. State information is supplied as a JSON object of key/value pairs SETSTATE: Used to set key/value pairs specific to a subscriber uuid GETHISTORY: Fetches historical messages of a channel.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|pubnub|Reference to a Pubnub client in the registry.||object| +|authKey|If Access Manager is utilized, client will use this authKey in all restricted requests.||string| +|cipherKey|If cipher is passed, all communications to/from PubNub will be encrypted.||string| +|publishKey|The publish key obtained from your PubNub account. Required when publishing messages.||string| +|secretKey|The secret key used for message signing.||string| +|secure|Use SSL for secure transmission.|true|boolean| +|subscribeKey|The subscribe key obtained from your PubNub account. 
Required when subscribing to channels or listening for presence events||string|
diff --git a/camel-pulsar.md b/camel-pulsar.md new file mode 100644 index 0000000000000000000000000000000000000000..5d65d0bf00e740abeac286135fe90b3edde7a9ab --- /dev/null +++ b/camel-pulsar.md @@ -0,0 +1,128 @@

# Pulsar

**Since Camel 2.24**

**Both producer and consumer are supported**

Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-pulsar</artifactId>
        <version>x.y.z</version>
    </dependency>

# URI format

    pulsar:[persistent|non-persistent]://tenant/namespace/topic

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|authenticationClass|The Authentication FQCN to be used while creating the client from the URI||string|
|authenticationParams|The Authentication Parameters to be used while creating the client from the URI||string|
|configuration|Allows to pre-configure the Pulsar component with common options that the endpoints will reuse.||object|
|serviceUrl|The Pulsar service URL to connect to when creating the client from the URI||string|
|ackGroupTimeMillis|Group the consumer acknowledgments for the specified time in milliseconds - defaults to 100|100|integer|
|ackTimeoutMillis|Timeout for unacknowledged messages in milliseconds - defaults to 10000|10000|integer|
|ackTimeoutRedeliveryBackoff|RedeliveryBackoff to use for ack timeout redelivery backoff.||object|
|allowManualAcknowledgement|Whether to allow manual message acknowledgements. If this option is enabled, then messages are not acknowledged automatically after successful route completion. Instead, an instance of PulsarMessageReceipt is stored as a header on the org.apache.camel.Exchange.
Messages can then be acknowledged using PulsarMessageReceipt at any time before the ackTimeout occurs.|false|boolean|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means that any exceptions (if possible) occurring while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|consumerName|Name of the consumer when subscription is EXCLUSIVE|sole-consumer|string|
|consumerNamePrefix|Prefix to add to consumer names when a SHARED or FAILOVER subscription is used|cons|string|
|consumerQueueSize|Size of the consumer queue - defaults to 10|10|integer|
|deadLetterTopic|Name of the topic where the messages which fail maxRedeliverCount times will be sent. Note: if not set, default topic name will be topicName-subscriptionName-DLQ||string|
|enableRetry|To enable retry letter topic mode. The default retry letter topic uses this format: topicname-subscriptionname-RETRY|false|boolean|
|keySharedPolicy|Policy to use by consumer when using key-shared subscription type.||string|
|maxRedeliverCount|Maximum number of times that a message will be redelivered before being sent to the dead letter queue.
If this value is not set, no Dead Letter Policy will be created||integer|
|messageListener|Whether to use the messageListener interface, or to receive messages using a separate thread pool|true|boolean|
|negativeAckRedeliveryBackoff|RedeliveryBackoff to use for negative ack redelivery backoff.||object|
|negativeAckRedeliveryDelayMicros|Set the negative acknowledgement delay|60000000|integer|
|numberOfConsumers|Number of consumers - defaults to 1|1|integer|
|numberOfConsumerThreads|Number of threads to receive and handle messages when using a separate thread pool|1|integer|
|readCompacted|Enable compacted topic reading.|false|boolean|
|retryLetterTopic|Name of the topic to use in retry mode. Note: if not set, default topic name will be topicName-subscriptionName-RETRY||string|
|subscriptionInitialPosition|Control the initial position in the topic of a newly created subscription. Default is latest message.|LATEST|object|
|subscriptionName|Name of the subscription to use|subs|string|
|subscriptionTopicsMode|Determines to which topics this consumer should be subscribed - Persistent, Non-Persistent, or both.
Only used with pattern subscriptions.|PersistentOnly|object|
|subscriptionType|Type of the subscription: EXCLUSIVE, SHARED, FAILOVER or KEY\_SHARED; defaults to EXCLUSIVE|EXCLUSIVE|object|
|topicsPattern|Whether the topic is a pattern (regular expression) that allows the consumer to subscribe to all matching topics in the namespace|false|boolean|
|pulsarMessageReceiptFactory|Provide a factory to create an alternate implementation of PulsarMessageReceipt.||object|
|batcherBuilder|Control batching method used by the producer.|DEFAULT|object|
|batchingEnabled|Control whether automatic batching of messages is enabled for the producer.|true|boolean|
|batchingMaxMessages|The maximum size to batch messages.|1000|integer|
|batchingMaxPublishDelayMicros|The maximum time period within which the messages sent will be batched if batchingEnabled is true.|1000|integer|
|blockIfQueueFull|Whether to block the producing thread if the pending messages queue is full or to throw a ProducerQueueIsFullError|false|boolean|
|chunkingEnabled|Control whether chunking of messages is enabled for the producer.|false|boolean|
|compressionType|Compression type to use|NONE|object|
|hashingScheme|Hashing function to use when choosing the partition to use for a particular message|JavaStringHash|string|
|initialSequenceId|The first message published will have a sequence Id of initialSequenceId + 1.|-1|integer|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|maxPendingMessages|Size of the pending messages queue. When the queue is full, by default, any further sends will fail unless blockIfQueueFull=true|1000|integer|
|maxPendingMessagesAcrossPartitions|The maximum number of pending messages for partitioned topics. The maxPendingMessages value will be reduced if (number of partitions * maxPendingMessages) exceeds this value. Partitioned topics have a pending message queue for each partition.|50000|integer|
|messageRouter|Custom Message Router to use||object|
|messageRoutingMode|Message Routing Mode to use|RoundRobinPartition|object|
|producerName|Name of the producer. If unset, lets Pulsar select a unique identifier.||string|
|sendTimeoutMs|Send timeout in milliseconds|30000|integer|
|autoConfiguration|The pulsar auto configuration||object|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component.
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
|pulsarClient|The pulsar client||object|

## Endpoint Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|persistence|Whether the topic is persistent or non-persistent||string|
|tenant|The tenant||string|
|namespace|The namespace||string|
|topic|The topic||string|
|authenticationClass|The Authentication FQCN to be used while creating the client from the URI||string|
|authenticationParams|The Authentication Parameters to be used while creating the client from the URI||string|
|serviceUrl|The Pulsar service URL to connect to when creating the client from the URI||string|
|ackGroupTimeMillis|Group the consumer acknowledgments for the specified time in milliseconds - defaults to 100|100|integer|
|ackTimeoutMillis|Timeout for unacknowledged messages in milliseconds - defaults to 10000|10000|integer|
|ackTimeoutRedeliveryBackoff|RedeliveryBackoff to use for ack timeout redelivery backoff.||object|
|allowManualAcknowledgement|Whether to allow manual message acknowledgements. If this option is enabled, then messages are not acknowledged automatically after successful route completion. Instead, an instance of PulsarMessageReceipt is stored as a header on the org.apache.camel.Exchange. Messages can then be acknowledged using PulsarMessageReceipt at any time before the ackTimeout occurs.|false|boolean|
|consumerName|Name of the consumer when subscription is EXCLUSIVE|sole-consumer|string|
|consumerNamePrefix|Prefix to add to consumer names when a SHARED or FAILOVER subscription is used|cons|string|
|consumerQueueSize|Size of the consumer queue - defaults to 10|10|integer|
|deadLetterTopic|Name of the topic where the messages which fail maxRedeliverCount times will be sent. Note: if not set, default topic name will be topicName-subscriptionName-DLQ||string|
|enableRetry|To enable retry letter topic mode.
The default retry letter topic uses this format: topicname-subscriptionname-RETRY|false|boolean|
|keySharedPolicy|Policy to use by consumer when using key-shared subscription type.||string|
|maxRedeliverCount|Maximum number of times that a message will be redelivered before being sent to the dead letter queue. If this value is not set, no Dead Letter Policy will be created||integer|
|messageListener|Whether to use the messageListener interface, or to receive messages using a separate thread pool|true|boolean|
|negativeAckRedeliveryBackoff|RedeliveryBackoff to use for negative ack redelivery backoff.||object|
|negativeAckRedeliveryDelayMicros|Set the negative acknowledgement delay|60000000|integer|
|numberOfConsumers|Number of consumers - defaults to 1|1|integer|
|numberOfConsumerThreads|Number of threads to receive and handle messages when using a separate thread pool|1|integer|
|readCompacted|Enable compacted topic reading.|false|boolean|
|retryLetterTopic|Name of the topic to use in retry mode. Note: if not set, default topic name will be topicName-subscriptionName-RETRY||string|
|subscriptionInitialPosition|Control the initial position in the topic of a newly created subscription. Default is latest message.|LATEST|object|
|subscriptionName|Name of the subscription to use|subs|string|
|subscriptionTopicsMode|Determines to which topics this consumer should be subscribed - Persistent, Non-Persistent, or both.
Only used with pattern subscriptions.|PersistentOnly|object|
|subscriptionType|Type of the subscription: EXCLUSIVE, SHARED, FAILOVER or KEY\_SHARED; defaults to EXCLUSIVE|EXCLUSIVE|object|
|topicsPattern|Whether the topic is a pattern (regular expression) that allows the consumer to subscribe to all matching topics in the namespace|false|boolean|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means that any exceptions (if possible) occurring while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use.
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
|batcherBuilder|Control batching method used by the producer.|DEFAULT|object|
|batchingEnabled|Control whether automatic batching of messages is enabled for the producer.|true|boolean|
|batchingMaxMessages|The maximum size to batch messages.|1000|integer|
|batchingMaxPublishDelayMicros|The maximum time period within which the messages sent will be batched if batchingEnabled is true.|1000|integer|
|blockIfQueueFull|Whether to block the producing thread if the pending messages queue is full or to throw a ProducerQueueIsFullError|false|boolean|
|chunkingEnabled|Control whether chunking of messages is enabled for the producer.|false|boolean|
|compressionType|Compression type to use|NONE|object|
|hashingScheme|Hashing function to use when choosing the partition to use for a particular message|JavaStringHash|string|
|initialSequenceId|The first message published will have a sequence Id of initialSequenceId + 1.|-1|integer|
|maxPendingMessages|Size of the pending messages queue. When the queue is full, by default, any further sends will fail unless blockIfQueueFull=true|1000|integer|
|maxPendingMessagesAcrossPartitions|The maximum number of pending messages for partitioned topics. The maxPendingMessages value will be reduced if (number of partitions * maxPendingMessages) exceeds this value. Partitioned topics have a pending message queue for each partition.|50000|integer|
|messageRouter|Custom Message Router to use||object|
|messageRoutingMode|Message Routing Mode to use|RoundRobinPartition|object|
|producerName|Name of the producer. If unset, lets Pulsar select a unique identifier.||string|
|sendTimeoutMs|Send timeout in milliseconds|30000|integer|
|lazyStartProducer|Whether the producer should be started lazy (on the first message).
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-qdrant.md b/camel-qdrant.md new file mode 100644 index 0000000000000000000000000000000000000000..2958fb9ee2f40a0da227c0395e2f8cdf0c59b915 --- /dev/null +++ b/camel-qdrant.md @@ -0,0 +1,137 @@ +# Qdrant + +**Since Camel 4.5** + +**Only producer is supported** + +The Qdrant Component provides support for interacting with the [Qdrant +Vector Database](https://qdrant.tech). + +# URI format + + qdrant:collection[?options] + +Where **collection** represents a named set of points (vectors with a +payload) defined in your database. 
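As a rough illustration of the URI format above, an endpoint string is simply the collection name plus optional query parameters taken from the configuration tables below (for example `host` and `port`). The `buildQdrantUri` helper here is hypothetical and not part of the component; it only sketches the string shape:

```java
// Hypothetical helper sketching how a qdrant endpoint URI is assembled
// from a collection name plus optional query parameters (option names
// taken from the configuration tables below).
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class QdrantUriSketch {
    static String buildQdrantUri(String collection, Map<String, String> options) {
        // Join options as key=value pairs separated by '&'
        String query = options.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining("&"));
        return "qdrant:" + collection + (query.isEmpty() ? "" : "?" + query);
    }

    public static void main(String[] args) {
        Map<String, String> opts = new LinkedHashMap<>();
        opts.put("host", "localhost");
        opts.put("port", "6334");
        System.out.println(buildQdrantUri("myCollection", opts));
        // prints: qdrant:myCollection?host=localhost&port=6334
    }
}
```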
# Collection Samples

In the route below, we use the qdrant component to create a collection
named *myCollection* with the given parameters:

## Create Collection

    from("direct:in")
        .setHeader(Qdrant.Headers.ACTION)
        .constant(QdrantAction.CREATE_COLLECTION)
        .setBody()
        .constant(
            Collections.VectorParams.newBuilder()
                .setSize(2)
                .setDistance(Collections.Distance.Cosine).build())
        .to("qdrant:myCollection");

## Delete Collection

In the route below, we use the qdrant component to delete a collection
named *myCollection*:

    from("direct:in")
        .setHeader(Qdrant.Headers.ACTION)
        .constant(QdrantAction.DELETE_COLLECTION)
        .to("qdrant:myCollection");

## Collection Info

In the route below, we use the qdrant component to get information about
the collection named `myCollection`:

    from("direct:in")
        .setHeader(Qdrant.Headers.ACTION)
        .constant(QdrantAction.COLLECTION_INFO)
        .to("qdrant:myCollection")
        .process(this::process);

If the collection exists, you will receive a reply of type
`Collections.CollectionInfo`. If it does not, the exchange will contain
an exception of type `QdrantActionException` with a cause of type
`StatusRuntimeException` and status `Status.NOT_FOUND`.
# Points Samples

## Upsert

In the route below, we use the qdrant component to perform insert/update
(upsert) on points in the collection named *myCollection*:

    from("direct:in")
        .setHeader(Qdrant.Headers.ACTION)
        .constant(QdrantAction.UPSERT)
        .setBody()
        .constant(
            Points.PointStruct.newBuilder()
                .setId(id(8))
                .setVectors(VectorsFactory.vectors(List.of(3.5f, 4.5f)))
                .putAllPayload(Map.of(
                    "foo", value("hello"),
                    "bar", value(1)))
                .build())
        .to("qdrant:myCollection");

## Retrieve

In the route below, we use the qdrant component to retrieve information
of a single point by id from the collection named *myCollection*:

    from("direct:in")
        .setHeader(Qdrant.Headers.ACTION)
        .constant(QdrantAction.RETRIEVE)
        .setBody()
        .constant(PointIdFactory.id(8))
        .to("qdrant:myCollection");

## Delete

In the route below, we use the qdrant component to delete points from
the collection named `myCollection` according to a given criterion:

    from("direct:in")
        .setHeader(Qdrant.Headers.ACTION)
        .constant(QdrantAction.DELETE)
        .setBody()
        .constant(ConditionFactory.matchKeyword("foo", "hello"))
        .to("qdrant:myCollection");

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|apiKey|Sets the API key to use for authentication||string|
|configuration|The configuration.||object|
|host|The host to connect to.|localhost|string|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|port|The port to connect to.|6334|integer| +|timeout|Sets a default timeout for all requests||object| +|tls|Whether the client uses Transport Layer Security (TLS) to secure communications|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|collection|The collection Name||string| +|apiKey|Sets the API key to use for authentication||string| +|host|The host to connect to.|localhost|string| +|port|The port to connect to.|6334|integer| +|timeout|Sets a default timeout for all requests||object| +|tls|Whether the client uses Transport Layer Security (TLS) to secure communications|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-quartz.md b/camel-quartz.md new file mode 100644 index 0000000000000000000000000000000000000000..0263a990409b77f4fb2355d488d8f2ae1b651e8e --- /dev/null +++ b/camel-quartz.md @@ -0,0 +1,426 @@

# Quartz

**Since Camel 2.12**

**Only consumer is supported**

The Quartz component provides scheduled delivery of messages using the
[Quartz Scheduler 2.x](http://www.quartz-scheduler.org/).
Each endpoint represents a different timer (in Quartz terms, a Trigger
and JobDetail).

Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-quartz</artifactId>
        <version>x.x.x</version>
    </dependency>

# URI format

    quartz://timerName?options
    quartz://groupName/timerName?options
    quartz://groupName/timerName?cron=expression
    quartz://timerName?cron=expression

The component uses either a `CronTrigger` or a `SimpleTrigger`. If no
cron expression is provided, the component uses a simple trigger. If no
`groupName` is provided, the quartz component uses the `Camel` group
name.

# Configuring quartz.properties file

By default, Quartz will look for a `quartz.properties` file in the
`org/quartz` directory of the classpath. If you are using WAR
deployments, this means just drop the quartz.properties in
`WEB-INF/classes/org/quartz`.

However, the Camel [Quartz](#quartz-component.adoc) component also
allows you to configure properties:
|Parameter|Default|Type|Description|
|---|---|---|---|
|properties|null|Properties|You can configure a java.util.Properties instance.|
|propertiesFile|null|String|File name of the properties to load from the classpath|
To do this, you can configure this in Spring XML as follows (the
properties file path is an example placeholder):

    <bean id="quartz" class="org.apache.camel.component.quartz.QuartzComponent">
        <property name="propertiesFile" value="com/mycompany/myquartz.properties"/>
    </bean>

# Enabling Quartz scheduler in JMX

You need to configure the quartz scheduler properties to enable JMX.
That is typically setting the option `"org.quartz.scheduler.jmx.export"`
to a `true` value in the configuration file.

This option is set to true by default, unless explicitly disabled.

# Clustering

If you use Quartz in clustered mode, i.e., the `JobStore` is clustered,
then the [Quartz](#quartz-component.adoc) component will **not**
pause/remove triggers when a node is being stopped/shutdown. This allows
the trigger to keep running on the other nodes in the cluster.

When running in clustered mode, no checking is done to ensure unique job
name/group for endpoints.

# Message Headers

Camel adds the getters from the Quartz Execution Context as header
values. The following headers are added:
`calendar`, `fireTime`, `jobDetail`, `jobInstance`, `jobRunTime`,
`mergedJobDataMap`, `nextFireTime`, `previousFireTime`, `refireCount`,
`result`, `scheduledFireTime`, `scheduler`, `trigger`, `triggerName`,
`triggerGroup`.

The `fireTime` header contains the `java.util.Date` of when the exchange
was fired.

# Using Cron Triggers

Quartz supports [Cron-like
expressions](http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html)
for specifying timers in a handy format. You can use these expressions
in the `cron` URI parameter; though, to preserve valid URI encoding, we
allow `+` to be used instead of spaces.

For example, the following will fire a message every five minutes
from 12pm (noon) to 6pm on weekdays:

    from("quartz://myGroup/myTimerName?cron=0+0/5+12-18+?+*+MON-FRI")
        .to("activemq:Totally.Rocks");

which is equivalent to using the cron expression

    0 0/5 12-18 ? * MON-FRI

The following table shows the URI character encodings we use to preserve
valid URI syntax:
|URI Character|Cron character|
|---|---|
|+|Space|
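The space-to-`+` substitution is plain string handling when assembling an endpoint URI from a cron expression. A minimal sketch follows; the `toUriCron` helper is hypothetical, not part of Camel:

```java
// Hypothetical helper: encode a Quartz cron expression for use in a
// Camel quartz endpoint URI by replacing spaces with '+', as described
// in the table above.
public class CronUriEncode {
    static String toUriCron(String cron) {
        return cron.replace(' ', '+');
    }

    public static void main(String[] args) {
        System.out.println("quartz://myGroup/myTimerName?cron="
                + toUriCron("0 0/5 12-18 ? * MON-FRI"));
        // prints: quartz://myGroup/myTimerName?cron=0+0/5+12-18+?+*+MON-FRI
    }
}
```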
# Specifying time zone

The Quartz Scheduler allows you to configure the time zone per trigger.
For example, to use the time zone of your country, you can do as
follows:

    quartz://groupName/timerName?cron=0+0/5+12-18+?+*+MON-FRI&trigger.timeZone=Europe/Stockholm

The timeZone value accepts the values accepted by `java.util.TimeZone`.

# Specifying start date

The Quartz Scheduler allows you to configure a start date per trigger.
You can provide the start date in the date format yyyy-MM-dd'T'HH:mm:ssz.

    quartz://groupName/timerName?cron=0+0/5+12-18+?+*+MON-FRI&trigger.startAt=2023-11-22T14:32:36UTC

# Specifying end date

The Quartz Scheduler allows you to configure an end date per trigger.
You can provide the end date in the date format yyyy-MM-dd'T'HH:mm:ssz.

    quartz://groupName/timerName?cron=0+0/5+12-18+?+*+MON-FRI&trigger.endAt=2023-11-22T14:32:36UTC

Note: Start and end dates may be affected by time drifts and
unpredictable behavior during daylight-saving time changes. Exercise
caution, especially in environments where precise timing is critical.

# Configuring misfire instructions

The quartz scheduler can be configured with a misfire instruction to
handle misfire situations for the trigger. The concrete trigger type
that you are using will have defined a set of additional
`MISFIRE_INSTRUCTION_XXX` constants that may be set as this property's
value.
For example, to configure the simple trigger to use misfire instruction
4:

    quartz://myGroup/myTimerName?trigger.repeatInterval=2000&trigger.misfireInstruction=4

And likewise, you can configure the cron trigger with one of its misfire
instructions as well:

    quartz://myGroup/myTimerName?cron=0/2+*+*+*+*+?&trigger.misfireInstruction=2

The simple and cron triggers support the following misfire
instructions:

## SimpleTrigger.MISFIRE\_INSTRUCTION\_FIRE\_NOW = 1 (default)

Instructs the Scheduler that upon a misfire situation, the
SimpleTrigger wants to be fired now by the Scheduler.

This instruction should typically only be used for *one-shot*
(non-repeating) Triggers. If it is used on a trigger with a repeat count
\> 0, then it is equivalent to the instruction
`MISFIRE_INSTRUCTION_RESCHEDULE_NOW_WITH_REMAINING_REPEAT_COUNT`.

## SimpleTrigger.MISFIRE\_INSTRUCTION\_RESCHEDULE\_NOW\_WITH\_EXISTING\_REPEAT\_COUNT = 2

Instructs the Scheduler that upon a misfire situation, the
SimpleTrigger wants to be re-scheduled to `now` (even if the associated
Calendar excludes `now`) with the repeat count left as-is. This does
obey the Trigger end-time, however, so if `now` is after the end-time
the Trigger will not fire again.

Use of this instruction causes the trigger to *forget* the start-time
and repeat-count that it was originally set up with. This is only an
issue if you for some reason wanted to be able to tell what the original
values were at some later time.

## SimpleTrigger.MISFIRE\_INSTRUCTION\_RESCHEDULE\_NOW\_WITH\_REMAINING\_REPEAT\_COUNT = 3

Instructs the Scheduler that upon a misfire situation, the
SimpleTrigger wants to be re-scheduled to `now` (even if the associated
Calendar excludes `now`) with the repeat count set to what it would be,
if it had not missed any firings. This does obey the Trigger end-time,
however, so if `now` is after the end-time the Trigger will not fire
again.
Use of this instruction causes the trigger to *forget* the start-time
and repeat-count that it was originally set up with. Instead, the repeat
count on the trigger will be changed to whatever the remaining repeat
count is. This is only an issue if you for some reason wanted to be able
to tell what the original values were at some later time.

This instruction could cause the Trigger to go to the *COMPLETE* state
after firing `now`, if all the repeat-fire-times were missed.

## SimpleTrigger.MISFIRE\_INSTRUCTION\_RESCHEDULE\_NEXT\_WITH\_REMAINING\_COUNT = 4

Instructs the Scheduler that upon a misfire situation, the
SimpleTrigger wants to be re-scheduled to the next scheduled time after
`now` - taking into account any associated Calendar and with the repeat
count set to what it would be, if it had not missed any firings.

This instruction could cause the Trigger to go directly to the
*COMPLETE* state if all fire-times were missed.

## SimpleTrigger.MISFIRE\_INSTRUCTION\_RESCHEDULE\_NEXT\_WITH\_EXISTING\_COUNT = 5

Instructs the Scheduler that upon a misfire situation, the
SimpleTrigger wants to be re-scheduled to the next scheduled time after
`now` - taking into account any associated Calendar, and with the repeat
count left unchanged.

This instruction could cause the Trigger to go directly to the
*COMPLETE* state if the end-time of the trigger has arrived.

## CronTrigger.MISFIRE\_INSTRUCTION\_FIRE\_ONCE\_NOW = 1 (default)

Instructs the Scheduler that upon a misfire situation, the CronTrigger
wants to be fired now by the Scheduler.

## CronTrigger.MISFIRE\_INSTRUCTION\_DO\_NOTHING = 2

Instructs the Scheduler that upon a misfire situation, the CronTrigger
wants to have its next-fire-time updated to the next time in the
schedule after the current time (taking into account any associated
Calendar); however, it does not want to be fired now.
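As a quick reference, the numeric values from the misfire sections above can be collected in a lookup map. This is a sketch only: the constant names are shortened from the headings above, and these maps are not part of Quartz or Camel:

```java
// Sketch: numeric misfire-instruction values summarized from the
// sections above (SimpleTrigger 1-5, CronTrigger 1-2), useful when
// reading trigger.misfireInstruction values in endpoint URIs.
import java.util.Map;

public class MisfireInstructions {
    static final Map<Integer, String> SIMPLE = Map.of(
            1, "FIRE_NOW (default)",
            2, "RESCHEDULE_NOW_WITH_EXISTING_REPEAT_COUNT",
            3, "RESCHEDULE_NOW_WITH_REMAINING_REPEAT_COUNT",
            4, "RESCHEDULE_NEXT_WITH_REMAINING_COUNT",
            5, "RESCHEDULE_NEXT_WITH_EXISTING_COUNT");

    static final Map<Integer, String> CRON = Map.of(
            1, "FIRE_ONCE_NOW (default)",
            2, "DO_NOTHING");

    public static void main(String[] args) {
        // e.g. trigger.misfireInstruction=4 on a simple trigger:
        System.out.println(SIMPLE.get(4));
        // prints: RESCHEDULE_NEXT_WITH_REMAINING_COUNT
    }
}
```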
# Using QuartzScheduledPollConsumerScheduler

The [Quartz](#quartz-component.adoc) component provides a Polling
Consumer scheduler which allows you to use cron based scheduling for
[Polling Consumers](#eips:polling-consumer.adoc) such as the File and
FTP consumers.

For example, to use a cron based expression to poll for files every
two seconds, a Camel route can be defined simply as:

    from("file:inbox?scheduler=quartz&scheduler.cron=0/2+*+*+*+*+?")
        .to("bean:process");

Notice we define the `scheduler=quartz` to instruct Camel to use the
[Quartz-based](#quartz-component.adoc) scheduler. Then we use
`scheduler.xxx` options to configure the scheduler. The
[Quartz](#quartz-component.adoc) scheduler requires the cron option to
be set.

The following options are supported:
|Parameter|Default|Type|Description|
|---|---|---|---|
|quartzScheduler|null|org.quartz.Scheduler|To use a custom Quartz scheduler. If none is configured, then the shared scheduler from the Quartz component is used.|
|cron|null|String|Mandatory: To define the cron expression for triggering the polls.|
|triggerId|null|String|To specify the trigger id. If none is provided, then a UUID is generated and used.|
|triggerGroup|QuartzScheduledPollConsumerScheduler|String|To specify the trigger group.|
|timeZone|Default|TimeZone|The time zone to use for the CRON trigger.|
**Important:** Remember that when configuring these options from the
endpoint URI, they must be prefixed with `scheduler.`. For example, to
configure the trigger id and group:

    from("file:inbox?scheduler=quartz&scheduler.cron=0/2+*+*+*+*+?&scheduler.triggerId=myId&scheduler.triggerGroup=myGroup")
        .to("bean:process");

There is also a CRON scheduler in Spring, so you can use the following
as well:

    from("file:inbox?scheduler=spring&scheduler.cron=0/2+*+*+*+*+?")
        .to("bean:process");

# Cron Component Support

The Quartz component can be used as the implementation of the Camel Cron
component.

Maven users will need to add the following additional dependency to
their `pom.xml`:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-cron</artifactId>
        <version>x.x.x</version>
    </dependency>

Users can then use the cron component instead of the quartz component,
as in the following route:

    from("cron://name?schedule=0+0/5+12-18+?+*+MON-FRI")
        .to("activemq:Totally.Rocks");

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means that any exceptions (if possible) occurring while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|enableJmx|Whether to enable Quartz JMX, which allows you to manage the Quartz scheduler from JMX.
This option is default true|true|boolean|
|prefixInstanceName|Whether to prefix the Quartz Scheduler instance name with the CamelContext name. This is enabled by default, to let each CamelContext use its own Quartz scheduler instance by default. You can set this option to false to reuse Quartz scheduler instances between multiple CamelContexts.|true|boolean|
|prefixJobNameWithEndpointId|Whether to prefix the quartz job with the endpoint id. This option is default false.|false|boolean|
|properties|Properties to configure the Quartz scheduler.||object|
|propertiesFile|File name of the properties to load from the classpath.||string|
|propertiesRef|References to an existing Properties or Map to lookup in the registry to use for configuring quartz.||string|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
|scheduler|To use the custom configured Quartz scheduler, instead of creating a new Scheduler.||object|
|schedulerFactory|To use the custom SchedulerFactory which is used to create the Scheduler.||object|
|autoStartScheduler|Whether the scheduler should be auto started. This option is default true|true|boolean|
|interruptJobsOnShutdown|Whether to interrupt jobs on shutdown, which forces the scheduler to shutdown quicker and attempt to interrupt any running jobs. If this is enabled then any running jobs can fail due to being interrupted. When a job is interrupted then Camel will mark the exchange to stop continued routing and set java.util.concurrent.RejectedExecutionException as the caused exception.
Therefore use this with care, as it's often better to allow Camel jobs to complete and shutdown gracefully.|false|boolean|

## Endpoint Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|groupName|The quartz group name to use. The combination of group name and trigger name should be unique.|Camel|string|
|triggerName|The quartz trigger name to use. The combination of group name and trigger name should be unique.||string|
|cron|Specifies a cron expression to define when to trigger.||string|
|deleteJob|If set to true, then the trigger is automatically deleted when the route stops. If set to false, it remains in the scheduler. When set to false, it also means users may reuse a pre-configured trigger with the Camel URI. Just ensure the names match. Notice you cannot have both deleteJob and pauseJob set to true.|true|boolean|
|durableJob|Whether or not the job should remain stored after it is orphaned (no triggers point to it).|false|boolean|
|pauseJob|If set to true, then the trigger is automatically paused when the route stops. If set to false, it remains in the scheduler. When set to false, it also means users may reuse a pre-configured trigger with the Camel URI. Just ensure the names match. Notice you cannot have both deleteJob and pauseJob set to true.|false|boolean|
|recoverableJob|Instructs the scheduler whether or not the job should be re-executed if a 'recovery' or 'fail-over' situation is encountered.|false|boolean|
|stateful|Uses a Quartz PersistJobDataAfterExecution and DisallowConcurrentExecution instead of the default job.|false|boolean|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown.
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
|customCalendar|Specifies a custom calendar to avoid a specific range of dates.||object|
|ignoreExpiredNextFireTime|Whether to ignore when Quartz cannot schedule a trigger because the trigger will never fire in the future. This can happen when using a cron trigger that is configured to only run in the past. By default, Quartz will fail to schedule the trigger and therefore fail to start the Camel route. You can set this option to true, which then logs a WARN and ignores the problem, meaning that the route will never fire in the future.|false|boolean|
|jobParameters|To configure additional options on the job.||object|
|prefixJobNameWithEndpointId|Whether the job name should be prefixed with the endpoint id.|false|boolean|
|triggerParameters|To configure additional options on the trigger. The parameter timeZone is supported if the cron option is present. Otherwise the parameters repeatInterval and repeatCount are supported.
Note: When using repeatInterval values of 1000 or less, the first few events after starting the CamelContext may be fired more rapidly than expected.||object|
|usingFixedCamelContextName|If it is true, JobDataMap uses the CamelContext name directly to reference the CamelContext; if it is false, JobDataMap uses the CamelContext management name, which could be changed at deploy time.|false|boolean|
|autoStartScheduler|Whether or not the scheduler should be auto started.|true|boolean|
|triggerStartDelay|In case the scheduler has already started, we want the trigger to start slightly after the current time, to ensure the endpoint is fully started before the job kicks in. A negative value shifts the trigger start time into the past.|500|duration|

diff --git a/camel-quickfix.md b/camel-quickfix.md
new file mode 100644
index 0000000000000000000000000000000000000000..610b0dd39cd1fd7fbd70e9c0452b336ee3d55599
--- /dev/null
+++ b/camel-quickfix.md
@@ -0,0 +1,530 @@

# Quickfix

**Since Camel 2.1**

**Both producer and consumer are supported**

The Quickfix component adapts the
[QuickFIX/J](http://www.quickfixj.org/) FIX engine for use in Camel.
This component uses the standard [Financial Information eXchange (FIX)
protocol](http://www.fixprotocol.org/) for message transport.

Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-quickfix</artifactId>
        <version>x.x.x</version>
        <!-- use the same version as your Camel core version -->
    </dependency>

# URI format

    quickfix:configFile[?sessionID=sessionID&lazyCreateEngine=true|false]

The **configFile** is the name of the QuickFIX/J configuration to use
for the FIX engine (located as a resource in your classpath). The
optional **sessionID** identifies a specific FIX session. The format of
the sessionID is:

    (BeginString):(SenderCompID)[/(SenderSubID)[/(SenderLocationID)]]->(TargetCompID)[/(TargetSubID)[/(TargetLocationID)]]

The optional **lazyCreateEngine** parameter allows creating the QuickFIX/J
engine on demand.
Value **true** means the engine is started when the
first message is sent or there is a consumer configured in the route
definition. When **false** is used, the engine is started at
endpoint creation. When this parameter is missing, the value of the
component's property **lazyCreateEngines** is used.

Example URIs:

    quickfix:config.cfg

    quickfix:config.cfg?sessionID=FIX.4.2:MyTradingCompany->SomeExchange

    quickfix:config.cfg?sessionID=FIX.4.2:MyTradingCompany->SomeExchange&lazyCreateEngine=true

# Endpoints

FIX sessions are endpoints for the **quickfix** component. An endpoint
URI may specify a single session or all sessions managed by a specific
QuickFIX/J engine. Typical applications will use only one FIX engine,
but advanced users may create multiple FIX engines by referencing
different configuration files in **quickfix** component endpoint URIs.

When a consumer does not include a session ID in the endpoint URI, it
will receive exchanges for all sessions managed by the FIX engine
associated with the configuration file specified in the URI. If a
producer does not specify a session in the endpoint URI, then it must
include the session-related fields in the FIX message being sent. If a
session is specified in the URI, then the component will automatically
inject the session-related fields into the FIX message.

The DataDictionary header is useful if string messages are being
received and need to be parsed in a route. QuickFIX/J requires a data
dictionary to parse certain types of messages (with repeating groups,
for example). By injecting a DataDictionary header in the route after
receiving a message string, the FIX engine can properly parse the data.

# QuickFIX/J Configuration Extensions

When using QuickFIX/J directly, one typically writes code to create
instances of logging adapters, message stores, and communication
connectors.
The **quickfix** component will automatically create
instances of these classes based on information in the configuration
file. It also provides defaults for many of the commonly required
settings and adds additional capabilities (like the ability to activate
JMX support).

The following sections describe how the **quickfix** component processes
the QuickFIX/J configuration. For comprehensive information about
QuickFIX/J configuration, see the [QFJ user
manual](http://www.quickfixj.org/quickfixj/usermanual/usage/configuration.html).

## Communication Connectors

When the component detects an initiator or acceptor session setting in
the QuickFIX/J configuration file, it will automatically create the
corresponding initiator and/or acceptor connector. These settings can be
in the default or in a specific session section of the configuration
file.

|Session Setting|Component Action|
|---|---|
|ConnectionType=initiator|Create an initiator connector|
|ConnectionType=acceptor|Create an acceptor connector|
The threading model for the QuickFIX/J session connectors can also be
specified. These settings affect all sessions in the configuration file
and must be placed in the default section of the settings.

|Default/Global Setting|Component Action|
|---|---|
|ThreadModel=ThreadPerConnector|Use SocketInitiator or SocketAcceptor (default)|
|ThreadModel=ThreadPerSession|Use ThreadedSocketInitiator or ThreadedSocketAcceptor|
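Combining the two tables above, a hedged QuickFIX/J settings sketch (the CompIDs, host, and port are illustrative placeholders, not from the original):

```ini
[DEFAULT]
# Default/global setting: one thread per session
ThreadModel=ThreadPerSession

[SESSION]
# Session setting: makes the component create an initiator connector
ConnectionType=initiator
BeginString=FIX.4.4
SenderCompID=MY_SENDER
TargetCompID=MY_TARGET
SocketConnectHost=localhost
SocketConnectPort=9876
```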
## Logging

The QuickFIX/J logger implementation can be specified by including the
following settings in the default section of the configuration file. The
`ScreenLog` is the default if none of the following settings are present
in the configuration. It’s an error to include settings that imply more
than one log implementation. The log factory implementation can also be
set directly on the Quickfix component. This will override any related
values in the QuickFIX/J settings file.

|Default/Global Setting|Component Action|
|---|---|
|ScreenLogShowEvents|Use a ScreenLog|
|ScreenLogShowIncoming|Use a ScreenLog|
|ScreenLogShowOutgoing|Use a ScreenLog|
|SLF4J*|Use a SLF4JLog. Any of the SLF4J settings will cause this log to be used.|
|FileLogPath|Use a FileLog|
|JdbcDriver|Use a JdbcLog|
## Message Store

The QuickFIX/J message store implementation can be specified by
including the following settings in the default section of the
configuration file. The `MemoryStore` is the default if none of the
following settings are present in the configuration. It’s an error to
include settings that imply more than one message store implementation.
The message store factory implementation can also be set directly on the
Quickfix component. This will override any related values in the
QuickFIX/J settings file.

|Default/Global Setting|Component Action|
|---|---|
|JdbcDriver|Use a JdbcStore|
|FileStorePath|Use a FileStore|
|SleepycatDatabaseDir|Use a SleepycatStore|
## Message Factory

A message factory is used to construct domain objects from raw FIX
messages. The default message factory is `DefaultMessageFactory`.
However, advanced applications may require a custom message factory.
This can be set on the QuickFIX/J component.

## JMX

|Default/Global Setting|Component Action|
|---|---|
|UseJmx|If Y, then enable QuickFIX/J JMX|
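As a sketch, the log, store, and JMX selections described above can be combined in the default section (the paths are illustrative placeholders):

```ini
[DEFAULT]
# Message Store: FileStorePath selects a FileStore
FileStorePath=target/data/store
# Logging: FileLogPath selects a FileLog
FileLogPath=target/data/log
# JMX: enable the QuickFIX/J JMX MBeans
UseJmx=Y
```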
## Other Defaults

The component provides some default settings for what are normally
required settings in QuickFIX/J configuration files. `SessionStartTime`
and `SessionEndTime` default to "00:00:00", meaning the session will not
be automatically started and stopped. The `HeartBtInt` (heartbeat
interval) defaults to 30 seconds.

## Minimal Initiator Configuration Example

    [SESSION]
    ConnectionType=initiator
    BeginString=FIX.4.4
    SenderCompID=YOUR_SENDER
    TargetCompID=YOUR_TARGET

# Using the InOut Message Exchange Pattern

Although the FIX protocol is event-driven and asynchronous, there are
specific pairs of messages that represent a request-reply message
exchange. To use an InOut exchange pattern, there should be a single
request message and a single reply message to the request. Examples
include an OrderStatusRequest message and UserRequest.

## Implementing InOut Exchanges for Consumers

Add "exchangePattern=InOut" to the QuickFIX/J endpoint URI. The
`MarketOrderStatusService` in the example below is a bean with a
synchronous service method. The method returns the response to the
request (an ExecutionReport in this case) which is then sent back to the
requestor session.

    from("quickfix:examples/inprocess.qf.cfg?sessionID=FIX.4.2:MARKET->TRADER&exchangePattern=InOut")
        .filter(header(QuickfixjEndpoint.MESSAGE_TYPE_KEY).isEqualTo(MsgType.ORDER_STATUS_REQUEST))
        .bean(new MarketOrderStatusService());

## Implementing InOut Exchanges for Producers

For producers, sending a message will block until a reply is received or
a timeout occurs. There is no standard way to correlate reply messages
in FIX. Therefore, a correlation criteria must be defined for each type
of InOut exchange. The correlation criteria and timeout can be specified
using `Exchange` properties.
|Description|Key String|Key Constant|Default|
|---|---|---|---|
|Correlation Criteria|"CorrelationCriteria"|QuickfixjProducer.CORRELATION_CRITERIA_KEY|None|
|Correlation Timeout in Milliseconds|"CorrelationTimeout"|QuickfixjProducer.CORRELATION_TIMEOUT_KEY|1000|
The correlation criteria is defined with a `MessagePredicate` object.
The following example will treat as the reply a FIX ExecutionReport from
the specified session where the transaction type is STATUS and the Order
ID matches our request. The session ID should be for the *requestor*;
the sender and target CompID fields will be reversed when looking for
the reply.

    exchange.setProperty(QuickfixjProducer.CORRELATION_CRITERIA_KEY,
        new MessagePredicate(new SessionID(sessionID), MsgType.EXECUTION_REPORT)
            .withField(ExecTransType.FIELD, Integer.toString(ExecTransType.STATUS))
            .withField(OrderID.FIELD, request.getString(OrderID.FIELD)));

## Example

The source code contains an example called `RequestReplyExample` that
demonstrates the InOut exchanges for a consumer and producer. This
example creates a simple HTTP server endpoint that accepts order status
requests. The HTTP request is converted to a FIX
OrderStatusRequestMessage, is augmented with a correlation criteria, and
is then routed to a quickfix endpoint. The response is then converted to
a JSON-formatted string and sent back to the HTTP server endpoint to be
provided as the web response.

# Spring Configuration

The QuickFIX/J component includes a Spring `FactoryBean` for configuring
the session settings within a Spring context. A type converter for
QuickFIX/J session ID strings is also included. A typical configuration
defines an acceptor and initiator session with default settings for both
sessions, and a route that filters received messages on the event
category:

    ${in.header.EventCategory} == 'AppMessageReceived'

The QuickFIX/J component also includes a `QuickfixjConfiguration` class
for configuring the session settings.
# Exception handling

QuickFIX/J behavior can be modified if certain exceptions are thrown
during processing of a message. If a `RejectLogon` exception is thrown
while processing an incoming logon administrative message, then the
logon will be rejected.

Normally, QuickFIX/J handles the logon process automatically. However,
sometimes an outgoing logon message must be modified to include
credentials required by a FIX counterparty. If the FIX logon message
body is modified when sending a logon message
(EventCategory=`AdminMessageSent`), the modified message will be sent to
the counterparty. It is important that the outgoing logon message is
processed *synchronously*. If it is processed asynchronously (on
another thread), the FIX engine will immediately send the unmodified
outgoing message when its callback method returns.

# FIX Sequence Number Management

If an application exception is thrown during *synchronous* exchange
processing, this will cause QuickFIX/J to not increment incoming FIX
message sequence numbers and will cause a resend of the counterparty's
message. This FIX protocol behavior is primarily intended to handle
*transport* errors rather than application errors. There are risks
associated with using this mechanism to handle application errors. The
primary risk is that the message will repeatedly cause application
errors each time it’s re-received. A better solution is to persist the
incoming message (database, JMS queue) immediately before processing it.
This also allows the application to process messages asynchronously
without losing messages when errors occur.

Although it’s possible to send messages to a FIX session before it’s
logged on (the messages will be sent at logon time), it is usually a
better practice to wait until the session is logged on. This eliminates
the required sequence number resynchronization steps at logon.
Waiting
for session logon can be done by setting up a route that processes the
`SessionLogon` event category and signals the application to start
sending messages.

See the FIX protocol specifications and the QuickFIX/J documentation for
more details about FIX sequence number management.

# Route Examples

Several examples are included in the QuickFIX/J component source code
(test subdirectories). One of these examples implements a trivial trade
execution simulation. The example defines an application component that
uses the URI scheme "trade-executor".

The following route receives messages for the trade executor session and
passes application messages to the trade executor component.

    from("quickfix:examples/inprocess.qf.cfg?sessionID=FIX.4.2:MARKET->TRADER").
        filter(header(QuickfixjEndpoint.EVENT_CATEGORY_KEY).isEqualTo(QuickfixjEventCategory.AppMessageReceived)).
        to("trade-executor:market");

The trade executor component generates messages that are routed back to
the trade session. The session ID must be set in the FIX message itself
since no session ID is specified in the endpoint URI.

    from("trade-executor:market").to("quickfix:examples/inprocess.qf.cfg");

The trader session consumes execution report messages from the market
and processes them.

    from("quickfix:examples/inprocess.qf.cfg?sessionID=FIX.4.2:TRADER->MARKET").
        filter(header(QuickfixjEndpoint.MESSAGE_TYPE_KEY).isEqualTo(MsgType.EXECUTION_REPORT)).
        bean(new MyTradeExecutionProcessor());

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|eagerStopEngines|Whether to eagerly stop engines when there are no active consumers or producers using the engine. For example, when stopping a route, the engine can be stopped as well. And when the route is started, then the engine is started again.
This can be turned off to only stop the engines when Camel is shut down.|true|boolean|
|lazyCreateEngines|If set to true, the engines will be created and started when needed (when the first message is sent).|false|boolean|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component.
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
|logFactory|To use the given LogFactory.||object|
|messageFactory|To use the given MessageFactory.||object|
|messageStoreFactory|To use the given MessageStoreFactory.||object|

## Endpoint Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|configurationName|Path to the quickfix configuration file. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the configuration file using these protocols (classpath is default). ref will lookup the configuration file in the registry. bean will call a method on a bean to be used as the configuration. For bean you can specify the method name after dot, e.g. bean:myBean.myMethod||string|
|lazyCreateEngine|This option allows creating the QuickFIX/J engine on demand. Value true means the engine is started when the first message is sent or there is a consumer configured in the route definition. When false is used, the engine is started at endpoint creation. When this parameter is missing, the value of the component's property lazyCreateEngines is used.|false|boolean|
|sessionID|The optional sessionID identifies a specific FIX session. The format of the sessionID is: (BeginString):(SenderCompID)/(SenderSubID)/(SenderLocationID)->(TargetCompID)/(TargetSubID)/(TargetLocationID)||string|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible.
In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time.|false|boolean|

diff --git a/camel-reactive-streams.md b/camel-reactive-streams.md
new file mode 100644
index 0000000000000000000000000000000000000000..a0975957b5f26fb54adc70077e5696079f61a96e
--- /dev/null
+++ b/camel-reactive-streams.md
@@ -0,0 +1,345 @@

# Reactive-streams

**Since Camel 2.19**

**Both producer and consumer are supported**

The Reactive Streams component allows you to exchange messages with
reactive stream processing libraries compatible with the [reactive
streams](http://www.reactive-streams.org/) standard.

The component supports backpressure and has been tested using the
[reactive streams technology compatibility kit
(TCK)](https://github.com/reactive-streams/reactive-streams-jvm/tree/master/tck).
The Camel module provides a **reactive-streams** component that allows
users to define incoming and outgoing streams within Camel routes, and a
direct client API that allows using Camel endpoints directly from any
external reactive framework.

Camel uses an internal implementation of the reactive streams
*Publisher* and *Subscriber*, so it’s not tied to any specific
framework. The following reactive frameworks have been used in the
integration tests: [Reactor Core
3](https://github.com/reactor/reactor-core), [RxJava
2](https://github.com/ReactiveX/RxJava).

Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-reactive-streams</artifactId>
        <version>x.x.x</version>
        <!-- use the same version as your Camel core version -->
    </dependency>

# URI format

    reactive-streams://stream?[options]

Where **stream** is a logical stream name used to bind Camel routes to
the external stream processing systems.

# Usage

The library aims to support all the communication modes needed by an
application to interact with Camel data:

- **Get** data from Camel routes (In-Only from Camel)

- **Send** data to Camel routes (In-Only towards Camel)

- **Request** a transformation to a Camel route (In-Out towards Camel)

- **Process** data flowing from a Camel route using a reactive
  processing step (In-Out from Camel)

# Getting data from Camel

To subscribe to data flowing from a Camel route, exchanges should be
redirected to a named stream, like in the following snippet:

    from("timer:clock")
        .setBody().header(Exchange.TIMER_COUNTER)
        .to("reactive-streams:numbers");

Routes can also be written using the XML DSL.

In the example, an unbounded stream of numbers is associated with the
name `numbers`. The stream can be accessed using the
`CamelReactiveStreams` utility class.
    CamelReactiveStreamsService camel = CamelReactiveStreams.get(context);

    // Getting a stream of exchanges
    Publisher<Exchange> exchanges = camel.fromStream("numbers");

    // Getting a stream of Integers (using Camel standard conversion system)
    Publisher<Integer> numbers = camel.fromStream("numbers", Integer.class);

The stream can be used easily with any reactive streams compatible
library. Here is an example of how to use it with [RxJava
2](https://github.com/ReactiveX/RxJava) (although any reactive framework
can be used to process events).

    Flowable.fromPublisher(numbers)
        .doOnNext(System.out::println)
        .subscribe();

The example prints all numbers generated by Camel into `System.out`.

## Getting data from Camel using the direct API

For short Camel routes and for users that prefer defining the whole
processing flow using functional constructs of the reactive framework
(without using the Camel DSL at all), streams can also be defined using
Camel URIs.

    CamelReactiveStreamsService camel = CamelReactiveStreams.get(context);

    // Get a stream from all the files in a directory
    Publisher<String> files = camel.from("file:folder", String.class);

    // Use the stream in RxJava
    Flowable.fromPublisher(files)
        .doOnNext(System.out::println)
        .subscribe();

# Sending data to Camel

When an external library needs to push events into a Camel route, the
Reactive Streams endpoint must be set as a consumer.

    from("reactive-streams:elements")
        .to("log:INFO");

A handle to the `elements` stream can be obtained from the
`CamelReactiveStreams` utility class.

    CamelReactiveStreamsService camel = CamelReactiveStreams.get(context);

    Subscriber<String> elements = camel.streamSubscriber("elements", String.class);

The subscriber can be used to push events to the Camel route that
consumes from the `elements` stream.
Here is an example of how to use it with [RxJava
2](https://github.com/ReactiveX/RxJava) (although any reactive framework
can be used to publish events).

    Flowable.interval(1, TimeUnit.SECONDS)
        .map(i -> "Item " + i)
        .subscribe(elements);

String items are generated every second by RxJava in the example, and
they are pushed into the Camel route defined above.

## Sending data to Camel using the direct API

Also in this case, the direct API can be used to obtain a Camel
subscriber from an endpoint URI.

    CamelReactiveStreamsService camel = CamelReactiveStreams.get(context);

    // Send two strings to the "seda:queue" endpoint
    Flowable.just("hello", "world")
        .subscribe(camel.subscriber("seda:queue", String.class));

# Request a transformation to Camel

Routes defined in some Camel DSL can be used within a reactive stream
framework to perform a specific transformation. The same mechanism can
also be used to e.g., send data to an *http* endpoint and continue.

The following snippet shows how RxJava functional code can request the
task of loading and marshalling files to Camel.

    CamelReactiveStreamsService camel = CamelReactiveStreams.get(context);

    // Process files starting from their names
    Flowable.just(new File("file1.txt"), new File("file2.txt"))
        .flatMap(file -> camel.toStream("readAndMarshal", String.class))
        // Camel output will be converted to String
        // other steps
        .subscribe();

For this to work, a route like the following should be defined in
the Camel context:

    from("reactive-streams:readAndMarshal")
        .marshal() // ...
other details

## Request a transformation to Camel using the direct API

An alternative approach consists of using the URI endpoints directly in
the reactive flow:

    CamelReactiveStreamsService camel = CamelReactiveStreams.get(context);

    // Process files starting from their names
    Flowable.just(new File("file1.txt"), new File("file2.txt"))
        .flatMap(file -> camel.to("direct:process", String.class))
        // Camel output will be converted to String
        // other steps
        .subscribe();

When using the `to()` method instead of `toStream`, there is no need
to define the route using `reactive-streams:` endpoints (although they
are used under the hood).

In this case, the Camel transformation can be just:

    from("direct:process")
        .marshal() // ... other details

# Process Camel data into the reactive framework

While a reactive streams *Publisher* allows exchanging data in a
unidirectional way, Camel routes often use an in-out exchange pattern
(e.g., to define REST endpoints and, in general, where a reply is needed
for each request).

In these circumstances, users can add a reactive processing step to the
flow, to enhance a Camel route or to define the entire transformation
using the reactive framework.

For example, given the following route:

    from("timer:clock")
        .setBody().header(Exchange.TIMER_COUNTER)
        .to("direct:reactive")
        .log("Continue with Camel route... n=${body}");

A reactive processing step can be associated with the "direct:reactive"
endpoint:

    CamelReactiveStreamsService camel = CamelReactiveStreams.get(context);

    camel.process("direct:reactive", Integer.class, items ->
        Flowable.fromPublisher(items) // RxJava
            .map(n -> -n)); // make every number negative

Data flowing in the Camel route will be processed by the external
reactive framework and then continue the processing flow inside Camel.

This mechanism can also be used to define an In-Out exchange in a
completely reactive way.
+
+    CamelReactiveStreamsService camel = CamelReactiveStreams.get(context);
+
+    // requires a rest-capable Camel component
+    camel.process("rest:get:orders", exchange ->
+        Flowable.fromPublisher(exchange)
+            .flatMap(ex -> allOrders())); // retrieve orders asynchronously
+
+See Camel examples (**camel-example-reactive-streams**) for details.
+
+# Advanced Topics
+
+## Controlling Backpressure (producer side)
+
+When routing Camel exchanges to an external subscriber, backpressure is
+handled by an internal buffer that caches exchanges before delivering
+them. If the subscriber is slower than the exchange rate, the buffer may
+grow too big. In many circumstances, this must be avoided.
+
+Consider the following route:
+
+    from("jms:queue")
+        .to("reactive-streams:flow");
+
+If the JMS queue contains a high number of messages and the Subscriber
+associated with the `flow` stream is too slow, messages are dequeued
+from JMS and appended to the buffer, possibly causing an "out of memory"
+error. To avoid such problems, a `ThrottlingInflightRoutePolicy` can be
+set on the route.
+
+    ThrottlingInflightRoutePolicy policy = new ThrottlingInflightRoutePolicy();
+    policy.setMaxInflightExchanges(10);
+
+    from("jms:queue")
+        .routePolicy(policy)
+        .to("reactive-streams:flow");
+
+The policy limits the maximum number of active exchanges (and so the
+maximum size of the buffer), keeping it lower than the threshold (`10`
+in the example). When more than `10` messages are in flight, the route
+is suspended, waiting for the subscriber to process them.
+
+With this mechanism, the subscriber controls the route suspension/resume
+automatically, through backpressure. When multiple subscribers are
+consuming items from the same stream, the slowest one controls the route
+status automatically.
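The suspend/resume behaviour described above can be sketched in plain Java (this is an illustration of the concept, not Camel's implementation; the class and method names are ours): a bounded buffer sits between the route and the subscriber, and once it holds `maxInflightExchanges` items the producing side can no longer hand over work until the subscriber drains some.

```java
import java.util.concurrent.ArrayBlockingQueue;

// Illustrative sketch only (not Camel code): a bounded buffer between a
// route and its subscriber, mimicking what ThrottlingInflightRoutePolicy
// achieves with maxInflightExchanges.
public class BoundedInflightBuffer {

    private final ArrayBlockingQueue<String> buffer;

    public BoundedInflightBuffer(int maxInflightExchanges) {
        this.buffer = new ArrayBlockingQueue<>(maxInflightExchanges);
    }

    /** Hand an exchange to the subscriber side; false means the route would suspend. */
    public boolean offer(String exchange) {
        return buffer.offer(exchange);
    }

    /** Subscriber drains one item, freeing capacity so the route can resume. */
    public String drainOne() {
        return buffer.poll();
    }
}
```

Once `offer` starts returning `false`, the producing side stays suspended until `drainOne` frees capacity, which mirrors the route suspension and resume driven by the subscriber.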
+
+In other circumstances, e.g., when using an `http` consumer, route
+suspension makes the http service unavailable, so the default
+configuration (no policy, unbounded buffer) is usually preferable. Users
+should then avoid memory issues by limiting the number of requests to
+the http service (e.g., by scaling out).
+
+In contexts where a certain amount of data loss is acceptable, setting a
+backpressure strategy other than `BUFFER` can be a solution for dealing
+with fast sources.
+
+    from("direct:thermostat")
+        .to("reactive-streams:flow?backpressureStrategy=LATEST");
+
+When the `LATEST` backpressure strategy is used, the publisher keeps
+only the last exchange received from the route, while older data is
+discarded (other options are available).
+
+## Controlling Backpressure (consumer side)
+
+When Camel consumes items from a reactive-streams publisher, the maximum
+number of inflight exchanges can be set as an endpoint option.
+
+The subscriber associated with the consumer interacts with the publisher
+to keep the number of messages in the route lower than the threshold.
+
+An example of a backpressure-aware route:
+
+    from("reactive-streams:numbers?maxInflightExchanges=10")
+        .to("direct:endpoint");
+
+The number of items that Camel requests from the source publisher
+(through the reactive streams backpressure mechanism) never exceeds
+`10`. Messages are processed by a single thread on the Camel side.
+
+The number of concurrent consumers (threads) can also be set as an
+endpoint option (`concurrentConsumers`). When using 1 consumer (the
+default), the order of items in the source stream is maintained. When
+this value is increased, items will be processed concurrently by
+multiple threads, so the order is not preserved.
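The request-in-batches behaviour that such a subscriber relies on can be illustrated with the JDK's own `java.util.concurrent.Flow` API. This is a plain-Java sketch under names of our choosing, not Camel's internals: the subscriber initially requests `MAX_INFLIGHT` items and asks for a new batch only after the previous batch has been fully processed, so the publisher never has more than that many items outstanding.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

// Plain-JDK sketch (names are ours, not Camel's): a subscriber that
// never lets more than MAX_INFLIGHT items be outstanding, requesting a
// new batch only after the previous batch has been fully processed.
public class BatchingDemo {

    static final int MAX_INFLIGHT = 10;

    public static List<Long> consume(long total) throws InterruptedException {
        List<Long> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<Long> pub = new SubmissionPublisher<>()) {
            pub.subscribe(new Flow.Subscriber<Long>() {
                Flow.Subscription subscription;
                int outstanding;

                @Override
                public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    outstanding = MAX_INFLIGHT;
                    s.request(MAX_INFLIGHT); // initial batch: at most 10 inflight
                }

                @Override
                public void onNext(Long item) {
                    received.add(item);
                    if (--outstanding == 0) { // batch fully processed:
                        outstanding = MAX_INFLIGHT;
                        subscription.request(MAX_INFLIGHT); // ask for the next one
                    }
                }

                @Override
                public void onError(Throwable t) { done.countDown(); }

                @Override
                public void onComplete() { done.countDown(); }
            });
            for (long i = 0; i < total; i++) {
                pub.submit(i); // blocks if the subscriber's buffer is saturated
            }
        } // close() signals onComplete once everything is delivered
        done.await();
        return received;
    }
}
```

This request(n) handshake is the standard Reactive Streams mechanism; Camel's consumer uses the same protocol against the source publisher, with `maxInflightExchanges` playing the role of `MAX_INFLIGHT`.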
+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|threadPoolMaxSize|The maximum number of threads used by the reactive streams internal engine.|10|integer| +|threadPoolMinSize|The minimum number of threads used by the reactive streams internal engine.||integer| +|threadPoolName|The name of the thread pool used by the reactive streams internal engine.|CamelReactiveStreamsWorker|string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|backpressureStrategy|The backpressure strategy to use when pushing events to a slow subscriber.|BUFFER|object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|reactiveStreamsEngineConfiguration|To use an existing reactive stream engine configuration.||object| +|serviceType|Set the type of the underlying reactive streams implementation to use. The implementation is looked up from the registry or using a ServiceLoader, the default implementation is DefaultCamelReactiveStreamsService||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|stream|Name of the stream channel used by the endpoint to exchange messages.||string| +|concurrentConsumers|Number of threads used to process exchanges in the Camel route.|1|integer| +|exchangesRefillLowWatermark|Set the low watermark of requested exchanges to the active subscription as percentage of the maxInflightExchanges. When the number of pending items from the upstream source is lower than the watermark, new items can be requested to the subscription. If set to 0, the subscriber will request items in batches of maxInflightExchanges, only after all items of the previous batch have been processed. If set to 1, the subscriber can request a new item each time an exchange is processed (chatty). Any intermediate value can be used.|0.25|number| +|forwardOnComplete|Determines if onComplete events should be pushed to the Camel route.|false|boolean| +|forwardOnError|Determines if onError events should be pushed to the Camel route. Exceptions will be set as message body.|false|boolean| +|maxInflightExchanges|Maximum number of exchanges concurrently being processed by Camel. This parameter controls backpressure on the stream. 
Setting a non-positive value will disable backpressure.|128|integer| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|backpressureStrategy|The backpressure strategy to use when pushing events to a slow subscriber.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-ref.md b/camel-ref.md
new file mode 100644
index 0000000000000000000000000000000000000000..475f01471f4aceb8164f846396018696480209d3
--- /dev/null
+++ b/camel-ref.md
@@ -0,0 +1,80 @@
+# Ref
+
+**Since Camel 1.2**
+
+**Both producer and consumer are supported**
+
+The Ref component is used to look up existing endpoints bound in the
+Registry.
+
+# URI format
+
+    ref:someName[?options]
+
+Where **someName** is the name of an endpoint in the Registry (usually,
+but not always, the Spring registry). If you are using the Spring
+registry, `someName` would be the bean ID of an endpoint in the Spring
+registry.
+
+# Runtime lookup
+
+This component can be used when you need dynamic discovery of endpoints
+in the Registry and can only compute the URI at runtime. You can then
+look up the endpoint using the following code:
+
+    // lookup the endpoint
+    String myEndpointRef = "bigspenderOrder";
+    Endpoint endpoint = context.getEndpoint("ref:" + myEndpointRef);
+
+    Producer producer = endpoint.createProducer();
+    Exchange exchange = producer.createExchange();
+    exchange.getIn().setBody(payloadToSend);
+    // send the exchange
+    producer.process(exchange);
+
+With Spring XML, you could have a list of endpoints defined in the
+Registry such as:
+
+    <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">
+        <endpoint id="endpoint1" uri="direct:start"/>
+        <endpoint id="endpoint2" uri="log:end"/>
+    </camelContext>
+
+# Sample
+
+Bind endpoints to the Camel registry:
+
+    context.getRegistry().bind("endpoint1", context.getEndpoint("direct:start"));
+    context.getRegistry().bind("endpoint2", context.getEndpoint("log:end"));
+
+Use the `ref` URI scheme to refer to endpoints bound to the Camel
+registry:
+
+    public class MyRefRoutes extends RouteBuilder {
+        @Override
+        public void configure() {
+            // direct:start -> log:end
+            from("ref:endpoint1")
+                .to("ref:endpoint2");
+        }
+    }
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|name|Name of endpoint to lookup in the registry.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-rest-api.md b/camel-rest-api.md
new file mode 100644
index 0000000000000000000000000000000000000000..6337237879f8e9d20db015c939e1459f3dfd1f04
--- /dev/null
+++ b/camel-rest-api.md
@@ -0,0 +1,29 @@
+# Rest-api
+
+**Since Camel 2.16**
+
+**Only consumer is supported**
+
+The REST API component is used for providing the Swagger API of REST
+services that have been defined using the Rest DSL in Camel.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|consumerComponentName|The Camel Rest API component to use for the consumer REST transport, such as jetty, servlet, undertow. If no component has been explicitly configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestApiConsumerFactory is registered in the registry. If either one is found, then that is being used.||string|
+|autowiredEnabled|Whether autowiring is enabled.
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|path|The base path||string| +|apiComponentName|The Camel Rest API component to use for generating the API of the REST services, such as openapi.||string| +|consumerComponentName|The Camel Rest component to use for the consumer REST transport, such as jetty, servlet, undertow. If no component has been explicitly configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestConsumerFactory is registered in the registry. If either one is found, then that is being used.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
diff --git a/camel-rest-openapi.md b/camel-rest-openapi.md
new file mode 100644
index 0000000000000000000000000000000000000000..4287d260caa278f8d55e52ebaa3bd212ea7d27f0
--- /dev/null
+++ b/camel-rest-openapi.md
@@ -0,0 +1,221 @@
+# Rest-openapi
+
+**Since Camel 3.1**
+
+**Both producer and consumer are supported**
+
+The REST OpenApi component configures REST producers from an
+[OpenApi](https://www.openapis.org/) (Open API) specification document
+and delegates to a component implementing the *RestProducerFactory*
+interface. Currently, known working components are:
+
+- [http](#http-component.adoc)
+
+- [netty-http](#netty-http-component.adoc)
+
+- [undertow](#undertow-component.adoc)
+
+- [vertx-http](#vertx-http-component.adoc)
+
+Only OpenAPI spec version 3.x is supported. You cannot use the old
+Swagger 2.0 spec.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-rest-openapi</artifactId>
+        <version>x.x.x</version>
+        <!-- use the same version as your Camel core version -->
+    </dependency>
+
+# URI format
+
+    rest-openapi:[specificationPath#]operationId
+
+Where `operationId` is the ID of the operation in the OpenApi
+specification, and `specificationPath` is the path to the specification.
+If the `specificationPath` is not specified, it defaults to
+`openapi.json`. The lookup mechanism uses Camel’s `ResourceHelper` to
+load the resource, which means that you can use CLASSPATH resources
+(`classpath:my-specification.json`), files (`file:/some/path.json`), the
+web (`http://api.example.com/openapi.json`), a bean reference
+(`ref:nameOfBean`), or a method of a bean
+(`bean:nameOfBean.methodName`) to get the specification resource;
+failing that, OpenApi’s own resource loading support is used.
+
+This component does not act as an HTTP client. It delegates that to
+another component mentioned above.
The lookup mechanism searches for a
+single component that implements the *RestProducerFactory* interface and
+uses that. If the CLASSPATH contains more than one, then the property
+`componentName` should be set to indicate which component to delegate
+to.
+
+Most of the configuration is taken from the OpenApi specification, but
+it can be overridden by options specified on the component or on the
+endpoint. Typically, you would need to override the `host` or `basePath`
+if those differ from the specification.
+
+The `host` parameter should contain the absolute URI containing scheme,
+hostname and port number, for instance: `https://api.example.com`
+
+With `componentName` you specify which component is used to perform the
+requests. This named component needs to be present in the Camel context
+and implement the required *RestProducerFactory* interface — as do the
+components listed at the top.
+
+If you do not specify the *componentName* at either component or
+endpoint level, the CLASSPATH is searched for a suitable delegate. There
+should be only one component present on the CLASSPATH that implements
+the *RestProducerFactory* interface for this to work.
+
+This component’s endpoint URI is lenient, which means that in addition
+to message headers you can specify a REST operation’s parameters as
+endpoint parameters. These will be constant for all subsequent
+invocations, so it makes sense to use this feature only for parameters
+that are indeed constant for all invocations — for example, the API
+version in a path such as `/api/{version}/users/{id}`.
+
+# Example: PetStore
+
+Check out the `rest-openapi-simple` example project in the
+[https://github.com/apache/camel-spring-boot-examples](https://github.com/apache/camel-spring-boot-examples) repository.
+
+For example, suppose you want to use the
+[*PetStore*](https://petstore3.swagger.io/api/v3/) REST API. Simply
+reference the specification URI and the desired operation id from the
+OpenApi specification, or download the specification and store it as
+`openapi.json` in the root of the CLASSPATH so that it is picked up
+automatically. Let’s use the [HTTP](#http-component.adoc) component to
+perform all the requests, together with Camel’s excellent support for
+Spring Boot.
+
+Here are our dependencies defined in the Maven POM file:
+
+    <dependency>
+        <groupId>org.apache.camel.springboot</groupId>
+        <artifactId>camel-http-starter</artifactId>
+    </dependency>
+
+    <dependency>
+        <groupId>org.apache.camel.springboot</groupId>
+        <artifactId>camel-rest-openapi-starter</artifactId>
+    </dependency>
+
+Start by defining a *RestOpenApiComponent* bean:
+
+    @Bean
+    public Component petstore(CamelContext camelContext) {
+        RestOpenApiComponent petstore = new RestOpenApiComponent(camelContext);
+        petstore.setSpecificationUri("https://petstore3.swagger.io/api/v3/openapi.json");
+        petstore.setHost("https://petstore3.swagger.io");
+        return petstore;
+    }
+
+Camel’s support for Spring Boot will auto-create the `HttpComponent`
+Spring bean, and you can configure it in `application.properties` (or
+`application.yml`) using the prefix `camel.component.http.`. We are
+defining the `petstore` component here to have a named component in the
+Camel context that we can use to interact with the PetStore REST API. If
+this is the only `rest-openapi` component used, we might configure it in
+the same manner (using `application.properties`).
+
+In this example, there is no need to explicitly associate the `petstore`
+component with the `HttpComponent`, as Camel will use the first class on
+the CLASSPATH that implements `RestProducerFactory`. However, if a
+different component is required, then calling
+`petstore.setComponentName("http")` would use the named component from
+the Camel registry.
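For reference, configuring such a component through `application.properties` instead of a `@Bean` would plausibly look like the sketch below; the property keys follow the usual `camel.component.<name>.<option>` Spring Boot naming convention and are an assumption, not taken from this page:

```properties
# Hypothetical equivalent of the @Bean definition, assuming the standard
# camel.component.<name>.<option> Spring Boot property convention.
camel.component.rest-openapi.specification-uri = https://petstore3.swagger.io/api/v3/openapi.json
camel.component.rest-openapi.host = https://petstore3.swagger.io
```

Note that properties configure the shared `rest-openapi` component rather than a separately named bean such as `petstore`, so the Java bean remains the way to get a named component.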
+
+Now in our application we can simply use the `ProducerTemplate` to
+invoke PetStore REST methods:
+
+    @Autowired
+    ProducerTemplate template;
+
+    String getPetJsonById(int petId) {
+        return template.requestBodyAndHeader("petstore:getPetById", null, "petId", petId, String.class);
+    }
+
+# Request validation
+
+API requests can be validated against the configured OpenAPI
+specification before they are sent by setting the
+`requestValidationEnabled` option to `true`. Validation is provided by
+the
+[swagger-request-validator](https://bitbucket.org/atlassian/swagger-request-validator/src/master/).
+
+The validator checks for the following conditions:
+
+- request body - Checks whether the request body is required and
+  whether there is any body on the Camel Exchange.
+
+- valid json - Checks, when the content-type is `application/json`,
+  that the message body can be parsed as valid JSON.
+
+- content-type - Validates whether the `Content-Type` header for the
+  request is valid for the API operation. The value is taken from the
+  `Content-Type` Camel message exchange header.
+
+- request parameters - Validates whether an HTTP header required by
+  the API operation is present. The header is expected to be present
+  among the Camel message exchange headers.
+
+- query parameters - Validates whether an HTTP query parameter
+  required by the API operation is present. The query parameter is
+  expected to be present among the Camel message exchange headers.
+
+If any of the validation checks fail, then a
+`RestOpenApiValidationException` is thrown. The exception object has a
+`getValidationErrors` method that returns the error messages from the
+validator.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|basePath|API basePath, for example /v2. Default is unset, if set overrides the value present in OpenApi specification.||string|
+|specificationUri|Path to the OpenApi specification file.
The scheme, host and base path are taken from this specification, but these can be overridden with properties on the component or endpoint level. If not given, the component tries to load the openapi.json resource. Note that the host defined on the component and endpoint of this Component should contain the scheme, hostname and optionally the port in the URI syntax (i.e. https://api.example.com:8080). Can be overridden in endpoint configuration.|openapi.json|string|
+|apiContextPath|Sets the context-path to use for servicing the OpenAPI specification||string|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|clientRequestValidation|Whether to enable validation of the client request to check if the incoming request is valid according to the OpenAPI specification|false|boolean|
+|missingOperation|Whether the consumer should fail, ignore or return a mock response for OpenAPI operations that are not mapped to a corresponding route.|fail|string|
+|bindingPackageScan|Package name to use as base (offset) for classpath scanning of where POJO classes are located when binding mode is enabled for JSon or XML.
Multiple package names can be separated by comma.||string| +|consumerComponentName|Name of the Camel component that will service the requests. The component must be present in Camel registry and it must implement RestOpenApiConsumerFactory service provider interface. If not set CLASSPATH is searched for single component that implements RestOpenApiConsumerFactory SPI. Can be overridden in endpoint configuration.||string| +|mockIncludePattern|Used for inclusive filtering of mock data from directories. The pattern is using Ant-path style pattern. Multiple patterns can be specified separated by comma.|classpath:camel-mock/\*\*|string| +|restOpenapiProcessorStrategy|To use a custom strategy for how to process Rest DSL requests||object| +|host|Scheme hostname and port to direct the HTTP requests to in the form of https://hostname:port. Can be configured at the endpoint, component or in the corresponding REST configuration in the Camel Context. If you give this component a name (e.g. petstore) that REST configuration is consulted first, rest-openapi next, and global configuration last. If set overrides any value found in the OpenApi specification, RestConfiguration. Can be overridden in endpoint configuration.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|requestValidationEnabled|Enable validation of requests against the configured OpenAPI specification|false|boolean|
+|componentName|Name of the Camel component that will perform the requests. The component must be present in Camel registry and it must implement RestProducerFactory service provider interface. If not set CLASSPATH is searched for single component that implements RestProducerFactory SPI. Can be overridden in endpoint configuration.||string|
+|consumes|What payload type this component is capable of consuming. Could be one type, like application/json or multiple types as application/json, application/xml; q=0.5 according to the RFC7231. This equates to the value of Accept HTTP header. If set overrides any value found in the OpenApi specification. Can be overridden in endpoint configuration||string|
+|produces|What payload type this component is producing. For example application/json according to the RFC7231. This equates to the value of Content-Type HTTP header. If set overrides any value present in the OpenApi specification. Can be overridden in endpoint configuration.||string|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|sslContextParameters|Customize TLS parameters used by the component.
If not set, defaults to the TLS parameters set in the Camel context||object|
+|useGlobalSslContextParameters|Enable usage of global SSL context parameters.|false|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|specificationUri|Path to the OpenApi specification file. The scheme, host and base path are taken from this specification, but these can be overridden with properties on the component or endpoint level. If not given, the component tries to load the openapi.json resource from the classpath. Note that the host defined on the component and endpoint of this Component should contain the scheme, hostname and optionally the port in the URI syntax (i.e. http://api.example.com:8080). Overrides component configuration. The OpenApi specification can be loaded from different sources by prefixing with file: classpath: http: https:. Support for https is limited to using the JDK installed UrlHandler, and as such it can be cumbersome to setup TLS/SSL certificates for https (such as setting a number of javax.net.ssl JVM system properties). For how to do that, consult the JDK documentation for UrlHandler. Default value notice: By default loads openapi.json file|openapi.json|string|
+|operationId|ID of the operation from the OpenApi specification. This is required when using the producer.||string|
+|apiContextPath|Sets the context-path to use for servicing the OpenAPI specification||string|
+|clientRequestValidation|Whether to enable validation of the client request to check if the incoming request is valid according to the OpenAPI specification|false|boolean|
+|consumes|What payload type this component is capable of consuming. Could be one type, like application/json or multiple types as application/json, application/xml; q=0.5 according to the RFC7231. This equates to the value of Accept HTTP header.
If set overrides any value found in the OpenApi specification and in the component configuration||string|
+|missingOperation|Whether the consumer should fail, ignore or return a mock response for OpenAPI operations that are not mapped to a corresponding route.|fail|string|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|consumerComponentName|Name of the Camel component that will service the requests. The component must be present in Camel registry and it must implement RestOpenApiConsumerFactory service provider interface. If not set CLASSPATH is searched for a single component that implements RestOpenApiConsumerFactory SPI. Overrides component configuration.||string|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|mockIncludePattern|Used for inclusive filtering of mock data from directories. The pattern uses Ant-path style. 
Multiple patterns can be specified separated by comma.|classpath:camel-mock/\*\*|string| +|restOpenapiProcessorStrategy|To use a custom strategy for how to process Rest DSL requests||object| +|basePath|API basePath, for example /v3. Default is unset, if set overrides the value present in OpenApi specification and in the component configuration.||string| +|host|Scheme hostname and port to direct the HTTP requests to in the form of https://hostname:port. Can be configured at the endpoint, component or in the corresponding REST configuration in the Camel Context. If you give this component a name (e.g. petstore) that REST configuration is consulted first, rest-openapi next, and global configuration last. If set overrides any value found in the OpenApi specification, RestConfiguration. Overrides all other configuration.||string| +|produces|What payload type this component is producing. For example application/json according to the RFC7231. This equates to the value of Content-Type HTTP header. If set overrides any value present in the OpenApi specification. Overrides all other configuration.||string| +|requestValidationEnabled|Enable validation of requests against the configured OpenAPI specification|false|boolean| +|componentName|Name of the Camel component that will perform the requests. The component must be present in Camel registry and it must implement RestProducerFactory service provider interface. If not set CLASSPATH is searched for single component that implements RestProducerFactory SPI. Overrides component configuration.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-rest.md b/camel-rest.md new file mode 100644 index 0000000000000000000000000000000000000000..a9057afa6c7c55c9843af21eb5249da2017f885b --- /dev/null +++ b/camel-rest.md @@ -0,0 +1,200 @@ +# Rest + +**Since Camel 2.14** + +**Both producer and consumer are supported** + +The REST component allows defining REST endpoints (consumer) using the +Rest DSL and plugin to other Camel components as the REST transport. + +The REST component can also be used as a client (producer) to call REST +services. + +# URI format + + rest://method:path[:uriTemplate]?[options] + +# Supported REST components + +The following components support the REST consumer (Rest DSL): + +- camel-netty-http + +- camel-jetty + +- camel-servlet + +- camel-undertow + +- camel-platform-http + +The following components support the REST producer: + +- camel-http + +- camel-netty-http + +- camel-undertow + +- camel-vertx-http + +# Path and uriTemplate syntax + +The path and uriTemplate option is defined using a REST syntax where you +define the REST context path using support for parameters. + +If no uriTemplate is configured then `path` option works the same way. + +It does not matter if you configure only `path` or if you configure both +options. Though configuring both a path and uriTemplate is a more common +practice with REST. + +The following is a Camel route using a path only + + from("rest:get:hello") + .transform().constant("Bye World"); + +And the following route uses a parameter which is mapped to a Camel +header with the key "me". + + from("rest:get:hello/{me}") + .transform().simple("Bye ${header.me}"); + +The following examples have configured a base path as "hello" and then +have two REST services configured using uriTemplates. 
+
+    from("rest:get:hello:/{me}")
+        .transform().simple("Hi ${header.me}");
+
+    from("rest:get:hello:/french/{me}")
+        .transform().simple("Bonjour ${header.me}");
+
+# Rest producer examples
+
+You can use the REST component to call REST services like any other
+Camel component.
+
+For example, to call a REST service using `hello/{me}` you can do:
+
+    from("direct:start")
+        .to("rest:get:hello/{me}");
+
+The dynamic value `{me}` is mapped to a Camel message header with the
+same name. So to call this REST service, you can send an empty message
+body and a header as shown:
+
+    template.sendBodyAndHeader("direct:start", null, "me", "Donald Duck");
+
+The Rest producer needs to know the hostname and port of the REST
+service, which you can configure using the host option as shown:
+
+    from("direct:start")
+        .to("rest:get:hello/{me}?host=myserver:8080/foo");
+
+Instead of using the host option, you can configure the host on the
+`restConfiguration` as shown:
+
+    restConfiguration().host("myserver:8080/foo");
+
+    from("direct:start")
+        .to("rest:get:hello/{me}");
+
+You can use the `producerComponent` to select which Camel component to
+use as the HTTP client; for example, to use http, you can do:
+
+    restConfiguration().host("myserver:8080/foo").producerComponent("http");
+
+    from("direct:start")
+        .to("rest:get:hello/{me}");
+
+# Rest producer binding
+
+The REST producer supports binding using JSON or XML like the rest-dsl
+does. 
+
+For example, to use jetty with JSON binding mode turned on, you can
+configure this in the REST configuration:
+
+    restConfiguration().component("jetty").host("localhost").port(8080).bindingMode(RestBindingMode.json);
+
+    from("direct:start")
+        .to("rest:post:user");
+
+Then when calling the REST service using the REST producer, it will
+automatically bind any POJOs to JSON before calling the REST service:
+
+    UserPojo user = new UserPojo();
+    user.setId(123);
+    user.setName("Donald Duck");
+
+    template.sendBody("direct:start", user);
+
+In the example above, we send a `UserPojo` instance as the message
+body. Because JSON binding is turned on in the REST configuration,
+the POJO is marshalled to JSON before the REST service is called.
+
+However, if you also want binding for the response message (i.e., what
+the REST service sends back as the response), you need to configure the
+`outType` option with the class name of the POJO to unmarshal the JSON
+response into.
+
+For example, if the REST service returns a JSON payload that binds to
+`com.foo.MyResponsePojo` you can configure this as shown:
+
+    restConfiguration().component("jetty").host("localhost").port(8080).bindingMode(RestBindingMode.json);
+
+    from("direct:start")
+        .to("rest:post:user?outType=com.foo.MyResponsePojo");
+
+You must configure the `outType` option if you want POJO binding of the
+response messages received from calling the REST service.
+
+# More examples
+
+See Rest DSL, which offers more examples and shows how you can use the
+Rest DSL to define services in a nicer, RESTful way.
+
+There is a **camel-example-servlet-rest-tomcat** example in the Apache
+Camel distribution that demonstrates how to use the Rest DSL with
+Servlet as the transport, deployable on Apache Tomcat or similar
+web containers. 
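As a rough illustration of how the `{me}` placeholder in the producer examples above is filled in from a message header of the same name, here is a simplified, self-contained sketch (hypothetical helper code for illustration only, not Camel's actual implementation):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class UriTemplateSketch {

    private static final Pattern PLACEHOLDER = Pattern.compile("\\{(\\w+)\\}");

    // Replace each {name} segment of the template with the header of the same name.
    static String resolve(String template, Map<String, String> headers) {
        Matcher m = PLACEHOLDER.matcher(template);
        StringBuilder sb = new StringBuilder();
        while (m.find()) {
            String value = headers.get(m.group(1));
            if (value == null) {
                throw new IllegalArgumentException("Missing header: " + m.group(1));
            }
            m.appendReplacement(sb, Matcher.quoteReplacement(value));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        // Mirrors sendBodyAndHeader("direct:start", null, "me", "Donald Duck")
        // against the endpoint rest:get:hello/{me}
        System.out.println(resolve("hello/{me}", Map.of("me", "Donald Duck")));
        // prints: hello/Donald Duck
    }
}
```
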
+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|consumerComponentName|The Camel Rest component to use for the consumer REST transport, such as jetty, servlet, undertow. If no component has been explicitly configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestConsumerFactory is registered in the registry. If either one is found, then that is being used.||string| +|apiDoc|The swagger api doc resource to use. The resource is loaded from classpath by default and must be in JSON format.||string| +|host|Host and port of HTTP service to use (override host in swagger schema)||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|producerComponentName|The Camel Rest component to use for the producer REST transport, such as http, undertow. If no component has been explicitly configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestProducerFactory is registered in the registry. If either one is found, then that is being used.||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|headerFilterStrategy|To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message.||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|method|HTTP method to use.||string| +|path|The base path, can use \* as path suffix to support wildcard HTTP route matching.||string| +|uriTemplate|The uri template||string| +|consumes|Media type such as: 'text/xml', or 'application/json' this REST service accepts. By default we accept all kinds of types.||string| +|inType|To declare the incoming POJO binding type as a FQN class name||string| +|outType|To declare the outgoing POJO binding type as a FQN class name||string| +|produces|Media type such as: 'text/xml', or 'application/json' this REST service returns.||string| +|routeId|Name of the route this REST services creates||string| +|consumerComponentName|The Camel Rest component to use for the consumer REST transport, such as jetty, servlet, undertow. 
If no component has been explicitly configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestConsumerFactory is registered in the registry. If either one is found, then that is being used.||string| +|description|Human description to document this REST service||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|apiDoc|The openapi api doc resource to use. The resource is loaded from classpath by default and must be in JSON format.||string| +|bindingMode|Configures the binding mode for the producer. 
If set to anything other than 'off' the producer will try to convert the body of the incoming message from inType to the json or xml, and the response from json or xml to outType.||object| +|host|Host and port of HTTP service to use (override host in openapi schema)||string| +|producerComponentName|The Camel Rest component to use for the producer REST transport, such as http, undertow. If no component has been explicitly configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestProducerFactory is registered in the registry. If either one is found, then that is being used.||string| +|queryParameters|Query parameters for the HTTP service to call. The query parameters can contain multiple parameters separated by ampersand such such as foo=123\&bar=456.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-robotframework.md b/camel-robotframework.md new file mode 100644 index 0000000000000000000000000000000000000000..f200965a1b67ed40b3e5d5a95cb0700a86492400 --- /dev/null +++ b/camel-robotframework.md @@ -0,0 +1,227 @@ +# Robotframework + +**Since Camel 3.0** + +**Both producer and consumer are supported** + +The **robotframework:** component allows for processing camel exchanges +in acceptance test suites which are already implemented with its own +DSL. 
The keyword libraries that can be used inside test suites written in
+the Robot DSL may be implemented in either Java or Python.
+
+This component lets you execute the business logic of acceptance test
+cases written in the Robot language, passing parameters to feed them
+data via the power of Camel routes. However, there is no reverse
+binding of parameters back into the Camel exchange. For that reason,
+it effectively acts like a template language, binding data from Camel
+exchanges into the implemented test cases.
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-robotframework</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    robotframework:templateName[?options]
+
+Where **templateName** is the classpath-local URI of the template to
+invoke, or the complete URL of the remote template (e.g.:
+file://folder/myfile.robot).
+
+# Samples
+
+For example, you could use something like:
+
+    from("direct:setVariableCamelBody")
+        .to("robotframework:src/test/resources/org/apache/camel/component/robotframework/set_variable_camel_body.robot");
+
+to execute a Robot test case, collect the results, and pass them on,
+so that you can generate a custom report from them if needed.
+
+It’s possible to specify what template the component should use
+dynamically via a header, so for example:
+
+    from("direct:in")
+        .setHeader(RobotFrameworkCamelConstants.CAMEL_ROBOT_RESOURCE_URI).constant("path/to/my/template.robot")
+        .to("robotframework:dummy?allowTemplateFromHeader=true");
+
+The Robot Framework component helps you pass values into Robot test
+cases, similar to how you would pass values using the Camel Simple
+language. The component supports passing values in three different
+ways: exchange body, headers, and properties. 
+
+    from("direct:in")
+        .setBody(constant("Hello Robot"))
+        .setHeader(RobotFrameworkCamelConstants.CAMEL_ROBOT_RESOURCE_URI).constant("path/to/my/template.robot")
+        .to("robotframework:dummy?allowTemplateFromHeader=true");
+
+And the `template.robot` file:
+
+    *** Test Cases ***
+    Set Variable Camel Body Test Case
+        ${myvar} =    Set Variable    ${body}
+        Should Be True    ${myvar} == ${body}
+
+    from("direct:in")
+        .setHeader("testHeader", constant("testHeaderValue"))
+        .setHeader(RobotFrameworkCamelConstants.CAMEL_ROBOT_RESOURCE_URI).constant("path/to/my/template.robot")
+        .to("robotframework:dummy?allowTemplateFromHeader=true");
+
+And the `template.robot` file:
+
+    *** Test Cases ***
+    Set Variable Camel Header Test Case
+        ${myvar} =    Set Variable    ${headers.testHeader}
+        Should Be True    ${myvar} == ${headers.testHeader}
+
+    from("direct:in")
+        .setProperty("testProperty", constant("testPropertyValue"))
+        .setHeader(RobotFrameworkCamelConstants.CAMEL_ROBOT_RESOURCE_URI).constant("path/to/my/template.robot")
+        .to("robotframework:dummy?allowTemplateFromHeader=true");
+
+And the `template.robot` file:
+
+    *** Test Cases ***
+    Set Variable Camel Property Test Case
+        ${myvar} =    Set Variable    ${properties.testProperty}
+        Should Be True    ${myvar} == ${properties.testProperty}
+
+Please note that when you pass values through the Camel exchange to
+test cases, they will be available as the case-sensitive `body`,
+`headers.[yourHeaderName]` and `properties.[yourPropertyName]`.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|allowContextMapAll|Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. 
Doing so impose a potential security risk as this opens access to the full power of CamelContext API.|false|boolean| +|allowTemplateFromHeader|Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care.|false|boolean| +|argumentFiles|A text String to read more arguments from.||string| +|combinedTagStats|Creates combined statistics based on tags. Use the format tags:title List||string| +|criticalTags|Tests that have the given tags are considered critical. List||string| +|debugFile|A debug String that is written during execution.||string| +|document|Sets the documentation of the top-level tests suites.||string| +|dryrun|Sets dryrun mode on use. In the dry run mode tests are run without executing keywords originating from test libraries. Useful for validating test data syntax.|false|boolean| +|excludes|Selects the tests cases by tags. List||string| +|exitOnFailure|Sets robot to stop execution immediately if a critical test fails.|false|boolean| +|includes|Selects the tests cases by tags. List||string| +|listener|Sets a single listener for monitoring tests execution||string| +|listeners|Sets multiple listeners for monitoring tests execution. Use the format ListenerWithArgs:arg1:arg2 or simply ListenerWithoutArgs List||string| +|log|Sets the path to the generated log String.||string| +|logLevel|Sets the threshold level for logging.||string| +|logTitle|Sets a title for the generated tests log.||string| +|metadata|Sets free metadata for the top level tests suites. comma seperated list of string resulting as List||string| +|monitorColors|Using ANSI colors in console. Normally colors work in unixes but not in Windows. Default is 'on'. 
'on' - use colors in unixes but not in Windows 'off' - never use colors 'force' - always use colors (also in Windows)||string| +|monitorWidth|Width of the monitor output. Default is 78.|78|string| +|name|Sets the name of the top-level tests suites.||string| +|nonCriticalTags|Tests that have the given tags are not critical. List||string| +|noStatusReturnCode|If true, sets the return code to zero regardless of failures in test cases. Error codes are returned normally.|false|boolean| +|output|Sets the path to the generated output String.||string| +|outputDirectory|Configures where generated reports are to be placed.||string| +|randomize|Sets the test execution order to be randomized. Valid values are all, suite, and test||string| +|report|Sets the path to the generated report String.||string| +|reportBackground|Sets background colors for the generated report and summary.||string| +|reportTitle|Sets a title for the generated tests report.||string| +|runEmptySuite|Executes tests also if the top level test suite is empty. Useful e.g. with --include/--exclude when it is not an error that no test matches the condition.|false|boolean| +|runFailed|Re-run failed tests, based on output.xml String.||string| +|runMode|Sets the execution mode for this tests run. Note that this setting has been deprecated in Robot Framework 2.8. Use separate dryryn, skipTeardownOnExit, exitOnFailure, and randomize settings instead.||string| +|skipTeardownOnExit|Sets whether the teardowns are skipped if the test execution is prematurely stopped.|false|boolean| +|splitOutputs|Splits output and log files.||string| +|suites|Selects the tests suites by name. List||string| +|suiteStatLevel|Defines how many levels to show in the Statistics by Suite table in outputs.||string| +|summaryTitle|Sets a title for the generated summary report.||string| +|tagDocs|Adds documentation to the specified tags. List||string| +|tags|Sets the tags(s) to all executed tests cases. 
List||string| +|tagStatExcludes|Excludes these tags from the Statistics by Tag and Test Details by Tag tables in outputs. List||string| +|tagStatIncludes|Includes only these tags in the Statistics by Tag and Test Details by Tag tables in outputs. List||string| +|tagStatLinks|Adds external links to the Statistics by Tag table in outputs. Use the format pattern:link:title List||string| +|tests|Selects the tests cases by name. List||string| +|timestampOutputs|Adds a timestamp to all output files.|false|boolean| +|variableFiles|Sets variables using variables files. Use the format path:args List||string| +|variables|Sets individual variables. Use the format name:value List||string| +|warnOnSkippedFiles|Show a warning when an invalid String is skipped.|false|boolean| +|xunitFile|Sets the path to the generated XUnit compatible result String, relative to outputDirectory. The String is in xml format. By default, the String name is derived from the testCasesDirectory parameter, replacing blanks in the directory name by underscores.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|configuration|The configuration||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|resourceUri|Path to the resource. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod.||string| +|allowContextMapAll|Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so impose a potential security risk as this opens access to the full power of CamelContext API.|false|boolean| +|allowTemplateFromHeader|Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. 
However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care.|false|boolean| +|argumentFiles|A text String to read more arguments from.||string| +|combinedTagStats|Creates combined statistics based on tags. Use the format tags:title List||string| +|contentCache|Sets whether to use resource content cache or not|false|boolean| +|criticalTags|Tests that have the given tags are considered critical. List||string| +|debugFile|A debug String that is written during execution.||string| +|document|Sets the documentation of the top-level tests suites.||string| +|dryrun|Sets dryrun mode on use. In the dry run mode tests are run without executing keywords originating from test libraries. Useful for validating test data syntax.|false|boolean| +|excludes|Selects the tests cases by tags. List||string| +|exitOnFailure|Sets robot to stop execution immediately if a critical test fails.|false|boolean| +|includes|Selects the tests cases by tags. List||string| +|listener|Sets a single listener for monitoring tests execution||string| +|listeners|Sets multiple listeners for monitoring tests execution. Use the format ListenerWithArgs:arg1:arg2 or simply ListenerWithoutArgs List||string| +|log|Sets the path to the generated log String.||string| +|logLevel|Sets the threshold level for logging.||string| +|logTitle|Sets a title for the generated tests log.||string| +|metadata|Sets free metadata for the top level tests suites. comma seperated list of string resulting as List||string| +|monitorColors|Using ANSI colors in console. Normally colors work in unixes but not in Windows. Default is 'on'. 'on' - use colors in unixes but not in Windows 'off' - never use colors 'force' - always use colors (also in Windows)||string| +|monitorWidth|Width of the monitor output. Default is 78.|78|string| +|name|Sets the name of the top-level tests suites.||string| +|nonCriticalTags|Tests that have the given tags are not critical. 
List||string| +|noStatusReturnCode|If true, sets the return code to zero regardless of failures in test cases. Error codes are returned normally.|false|boolean| +|output|Sets the path to the generated output String.||string| +|outputDirectory|Configures where generated reports are to be placed.||string| +|randomize|Sets the test execution order to be randomized. Valid values are all, suite, and test||string| +|report|Sets the path to the generated report String.||string| +|reportBackground|Sets background colors for the generated report and summary.||string| +|reportTitle|Sets a title for the generated tests report.||string| +|runEmptySuite|Executes tests also if the top level test suite is empty. Useful e.g. with --include/--exclude when it is not an error that no test matches the condition.|false|boolean| +|runFailed|Re-run failed tests, based on output.xml String.||string| +|runMode|Sets the execution mode for this tests run. Note that this setting has been deprecated in Robot Framework 2.8. Use separate dryryn, skipTeardownOnExit, exitOnFailure, and randomize settings instead.||string| +|skipTeardownOnExit|Sets whether the teardowns are skipped if the test execution is prematurely stopped.|false|boolean| +|splitOutputs|Splits output and log files.||string| +|suites|Selects the tests suites by name. List||string| +|suiteStatLevel|Defines how many levels to show in the Statistics by Suite table in outputs.||string| +|summaryTitle|Sets a title for the generated summary report.||string| +|tagDocs|Adds documentation to the specified tags. List||string| +|tags|Sets the tags(s) to all executed tests cases. List||string| +|tagStatExcludes|Excludes these tags from the Statistics by Tag and Test Details by Tag tables in outputs. List||string| +|tagStatIncludes|Includes only these tags in the Statistics by Tag and Test Details by Tag tables in outputs. List||string| +|tagStatLinks|Adds external links to the Statistics by Tag table in outputs. 
Use the format pattern:link:title List||string| +|tests|Selects the tests cases by name. List||string| +|timestampOutputs|Adds a timestamp to all output files.|false|boolean| +|variableFiles|Sets variables using variables files. Use the format path:args List||string| +|variables|Sets individual variables. Use the format name:value List||string| +|warnOnSkippedFiles|Show a warning when an invalid String is skipped.|false|boolean| +|xunitFile|Sets the path to the generated XUnit compatible result String, relative to outputDirectory. The String is in xml format. By default, the String name is derived from the testCasesDirectory parameter, replacing blanks in the directory name by underscores.||string| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. 
When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| diff --git a/camel-rocketmq.md b/camel-rocketmq.md new file mode 100644 index 0000000000000000000000000000000000000000..fd4e2716a88d696b056bd55551b715b7c9a6e7fb --- /dev/null +++ b/camel-rocketmq.md @@ -0,0 +1,121 @@ +# Rocketmq + +**Since Camel 3.20** + +**Both producer and consumer are supported** + +The RocketMQ component allows you to produce and consume messages from +[RocketMQ](https://rocketmq.apache.org/) instances. 
Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-rocketmq</artifactId>
        <version>x.x.x</version>
    </dependency>

Since the RocketMQ 5.x API is compatible with 4.x, this component works
with both RocketMQ 4.x and 5.x. Users can manage the RocketMQ
dependency versions on their own.

# URI format

    rocketmq:topicName?[options]

The topic name determines the topic to which produced messages are
sent. For consumers, it determines the topic that is subscribed to.
This component uses the RocketMQ push consumer by default.

# InOut Pattern

The InOut pattern is based on the message key. When the producer sends
a message, a message key is generated and appended to the message's
keys.

After the message is sent, a consumer listens on the topic configured
by the `replyToTopic` parameter.

When a message from the `replyToTopic` topic contains the key, the
reply is considered received and routing continues.

If `requestTimeoutMillis` elapses and no reply is received, an
exception is thrown.

    from("rocketmq:START_TOPIC?producerGroup=p1&consumerGroup=c1")
        .to(ExchangePattern.InOut, "rocketmq:INTERMEDIATE_TOPIC"
                + "?producerGroup=intermediaProducer"
                + "&consumerGroup=intermediateConsumer"
                + "&replyToTopic=REPLY_TO_TOPIC"
                + "&replyToConsumerGroup=replyToConsumerGroup"
                + "&requestTimeoutMillis=30000")
        .to("log:InOutRoute?showAll=true");

# Examples

Receive messages from a topic named `from_topic`, route to `to_topic`.

    from("rocketmq:FROM_TOPIC?namesrvAddr=localhost:9876&consumerGroup=consumer")
        .to("rocketmq:TO_TOPIC?namesrvAddr=localhost:9876&producerGroup=producer");

Setting specific headers can change routing behaviour. For example, if
the header `RocketMQConstants.OVERRIDE_TOPIC_NAME` is set, the message
will be sent to `ACTUAL_TARGET` instead of `ORIGIN_TARGET`.
    from("rocketmq:FROM?consumerGroup=consumer")
        .process(exchange -> {
            exchange.getMessage().setHeader(RocketMQConstants.OVERRIDE_TOPIC_NAME, "ACTUAL_TARGET");
            exchange.getMessage().setHeader(RocketMQConstants.OVERRIDE_TAG, "OVERRIDE_TAG");
            exchange.getMessage().setHeader(RocketMQConstants.OVERRIDE_MESSAGE_KEY, "OVERRIDE_MESSAGE_KEY");
        })
        .to("rocketmq:ORIGIN_TARGET?producerGroup=producer")
        .to("log:RocketRoute?showAll=true");

## Component Configurations

|Name|Description|Default|Type|
|---|---|---|---|
|namesrvAddr|Name server address of RocketMQ cluster.|localhost:9876|string|
|sendTag|Each message would be sent with this tag.||string|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|consumerGroup|Consumer group name.||string|
|subscribeTags|Subscribe tags of the consumer. Multiple tags can be separated by a comma, such as TagA,TagB.|\*|string|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started.
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|producerGroup|Producer group name.||string|
|replyToConsumerGroup|Consumer group name used for receiving response.||string|
|replyToTopic|Topic used for receiving response when using in-out pattern.||string|
|waitForSendResult|Whether waiting for send result before routing to next endpoint.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
|requestTimeoutCheckerIntervalMillis|Check interval milliseconds of request timeout.|1000|integer|
|requestTimeoutMillis|Timeout milliseconds of receiving response when using in-out pattern.|10000|integer|
|accessKey|Access key for RocketMQ ACL.||string|
|secretKey|Secret key for RocketMQ ACL.||string|

## Endpoint Configurations

|Name|Description|Default|Type|
|---|---|---|---|
|topicName|Topic name of this endpoint.||string|
|namesrvAddr|Name server address of RocketMQ cluster.|localhost:9876|string|
|consumerGroup|Consumer group name.||string|
|subscribeTags|Subscribe tags of the consumer. Multiple tags can be separated by a comma, such as TagA,TagB.|\*|string|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler.
Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|producerGroup|Producer group name.||string| +|replyToConsumerGroup|Consumer group name used for receiving response.||string| +|replyToTopic|Topic used for receiving response when using in-out pattern.||string| +|sendTag|Each message would be sent with this tag.||string| +|waitForSendResult|Whether waiting for send result before routing to next endpoint.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|requestTimeoutCheckerIntervalMillis|Check interval milliseconds of request timeout.|1000|integer|
|requestTimeoutMillis|Timeout milliseconds of receiving response when using in-out pattern.|10000|integer|
|accessKey|Access key for RocketMQ ACL.||string|
|secretKey|Secret key for RocketMQ ACL.||string|
diff --git a/camel-rss.md b/camel-rss.md
new file mode 100644
index 0000000000000000000000000000000000000000..de9b6c7557bf535f8b61e8abf23a53b81a3767e7
--- /dev/null
+++ b/camel-rss.md
@@ -0,0 +1,129 @@

# Rss

**Since Camel 2.0**

**Only consumer is supported**

The RSS component is used for polling RSS feeds. By default, Camel
polls the feed every 60 seconds.

Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-rss</artifactId>
        <version>x.x.x</version>
    </dependency>

The component currently only supports consuming feeds.

# URI format

    rss:rssUri

Where `rssUri` is the URI to the RSS feed to poll.

# Exchange data types

Camel initializes the In body on the Exchange with a ROME `SyndFeed`.
Depending on the value of the `splitEntries` flag, Camel returns either
a `SyndFeed` with one `SyndEntry` or a `java.util.List` of `SyndEntry`
objects.
|Option|Value|Behavior|
|---|---|---|
|splitEntries|true|A single entry from the current feed is set in the exchange.|
|splitEntries|false|The entire list of entries from the current feed is set in the exchange.|
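To make the two body shapes concrete, here is a hedged, plain-Java sketch (not the component's code) of how a processor might branch on the body it receives for the two `splitEntries` settings; a plain `String` stands in for ROME's `SyndEntry` so the sketch stays self-contained:

```java
import java.util.List;

// Hedged sketch: with splitEntries=true the body is a single entry,
// with splitEntries=false it is a java.util.List of entries, so a
// processor has to branch on the body type.
class SplitEntriesSketch {

    // String stands in for ROME's SyndEntry to keep the sketch runnable.
    static String describe(Object body) {
        if (body instanceof List<?> entries) {
            return "batch of " + entries.size() + " entries";
        }
        return "single entry: " + body;
    }

    public static void main(String[] args) {
        System.out.println(describe("Camel release notes"));   // splitEntries=true case
        System.out.println(describe(List.of("a", "b", "c")));  // splitEntries=false case
    }
}
```

The same branching applies whether the feed comes from HTTP or, as in the filtering example below, from a local file.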
# Example

If the URL for the RSS feed uses query parameters, this component will
resolve them. For example, if the feed uses `alt=rss`, then the
following example will be resolved:

    from("rss:http://someserver.com/feeds/posts/default?alt=rss&splitEntries=false&delay=1000")
        .to("bean:rss");

# Filtering entries

You can filter out entries using XPath, or you can exploit Camel’s Bean
Integration to implement your own conditions. For instance, a filter
implemented with a custom bean could look like this:

    from("rss:file:src/test/data/rss20.xml?splitEntries=true&delay=100")
        .filter().method("myFilterBean", "titleContainsCamel")
        .to("mock:result");

The custom bean for this would be:

    public static class FilterBean {

        public boolean titleContainsCamel(@Body SyndFeed feed) {
            SyndEntry firstEntry = (SyndEntry) feed.getEntries().get(0);
            return firstEntry.getTitle().contains("Camel");
        }
    }

## Component Configurations

|Name|Description|Default|Type|
|---|---|---|---|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled.
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|feedUri|The URI to the feed to poll.||string| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|sortEntries|Sets whether to sort entries by published date. Only works when splitEntries = true.|false|boolean| +|splitEntries|Sets whether or not entries should be sent individually or whether the entire feed should be sent as a single message|true|boolean| +|throttleEntries|Sets whether all entries identified in a single feed poll should be delivered immediately. If true, only one entry is processed per delay. Only applicable when splitEntries = true.|true|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. 
Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|feedHeader|Sets whether to add the feed object as a header.|true|boolean| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. 
This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| diff --git a/camel-saga.md b/camel-saga.md new file mode 100644 index 0000000000000000000000000000000000000000..13e11945567a1ab9eddd3710d868de7a1faab143 --- /dev/null +++ b/camel-saga.md @@ -0,0 +1,34 @@ +# Saga + +**Since Camel 2.21** + +**Only producer is supported** + +The Saga component provides a bridge to execute custom actions within a +route using the Saga EIP. + +The component should be used for advanced tasks, such as deciding to +complete or compensate a Saga with completionMode set to **MANUAL**. + +Refer to the Saga EIP documentation for help on using sagas in common +scenarios. + +# URI format + + saga:action + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|action|Action to execute (complete or compensate)||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-salesforce.md b/camel-salesforce.md new file mode 100644 index 0000000000000000000000000000000000000000..2eb65db1fc91be10435903ba526d7313df60c082 --- /dev/null +++ b/camel-salesforce.md @@ -0,0 +1,4582 @@ +# Salesforce + +**Since Camel 2.12** + +**Both producer and consumer are supported** + +This component supports producer and consumer endpoints to communicate +with Salesforce using Java DTOs. There is a companion [maven +plugin](#MavenPlugin) that generates these DTOs. 
Developers wishing to contribute to the component should look at the
[README.md](https://github.com/apache/camel/tree/main/components/camel-salesforce/camel-salesforce-component/README.md)
file for instructions on how to get started and set up your environment
for running integration tests.

# Getting Started

Follow these steps to get started with the Salesforce component.

1. **Create a salesforce org**. If you don’t already have access to a
   salesforce org, you can create a [free developer
   org](https://developer.salesforce.com/signup).

2. **Create a Connected App**. In salesforce, go to Setup \> Apps
   \> App Manager, then click on **New Connected App**. Make sure to
   check **Enable OAuth Settings** and include relevant OAuth Scopes,
   including the scope called **Perform requests at any time**. Click
   **Save**.

3. **Get the Consumer Key and Consumer Secret**. You’ll need these to
   configure salesforce authentication. View your new connected app,
   then copy the key and secret and save them in a safe place.

4. **Add the Maven dependency**.

       <dependency>
           <groupId>org.apache.camel</groupId>
           <artifactId>camel-salesforce</artifactId>
       </dependency>

   Spring Boot users should use the starter instead.

       <dependency>
           <groupId>org.apache.camel.springboot</groupId>
           <artifactId>camel-salesforce-starter</artifactId>
       </dependency>

5. **Generate DTOs**. Optionally, generate Java DTOs to represent your
   salesforce objects. This step isn’t a hard requirement per se, but
   most use cases will benefit from the type safety and
   auto-completion. Use the [maven plugin](#MavenPlugin) to generate
   DTOs for the salesforce objects you’ll be working with.

6. **Configure authentication**. Using the OAuth key and secret you
   generated previously, configure salesforce
   [authentication](#AuthenticatingToSalesforce).

7. **Create routes**. Start creating routes that interact with
   salesforce!
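For quick experimentation with the Spring Boot starter, authentication can be wired up with a few properties. This is a hedged sketch: the `camel.component.salesforce.*` keys follow the starter's naming convention, all values are placeholders, and plain Camel users would set the corresponding component options instead:

```properties
# Hedged sketch: username-password flow via the Spring Boot starter.
# Placeholders only - keep real credentials out of source control.
camel.component.salesforce.client-id=YOUR_CONSUMER_KEY
camel.component.salesforce.client-secret=YOUR_CONSUMER_SECRET
camel.component.salesforce.user-name=user@example.com
camel.component.salesforce.password=YOUR_PASSWORD
# Use https://test.salesforce.com for sandbox orgs
camel.component.salesforce.login-url=https://login.salesforce.com
```

See the next section for the properties each OAuth flow requires.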
# Authenticating to Salesforce

The component supports three OAuth authentication flows:

- [OAuth 2.0 Username-Password
  Flow](https://help.salesforce.com/articleView?id=remoteaccess_oauth_username_password_flow.htm)

- [OAuth 2.0 Refresh Token
  Flow](https://help.salesforce.com/articleView?id=remoteaccess_oauth_refresh_token_flow.htm)

- [OAuth 2.0 JWT Bearer Token
  Flow](https://help.salesforce.com/articleView?id=remoteaccess_oauth_jwt_flow.htm)

For each of the flows, different sets of properties need to be set:
**Properties to set for each authentication flow**

|Property|Where to find it on Salesforce|Flow|
|---|---|---|
|clientId|Connected App, Consumer Key|All flows|
|clientSecret|Connected App, Consumer Secret|Username-Password, Refresh Token, Client Credentials|
|userName|Salesforce user username|Username-Password, JWT Bearer Token|
|password|Salesforce user password|Username-Password|
|refreshToken|From OAuth flow callback|Refresh Token|
|keystore|Connected App, Digital Certificate|JWT Bearer Token|
The component automatically determines which flow you’re trying to
configure. To be explicit, set the `authenticationType` property.

Using the Username-Password Flow in production is not encouraged.

The certificate used in the JWT Bearer Token Flow can be a self-signed
certificate. The KeyStore holding the certificate and the private key
must contain only a single certificate-private key entry.

# General Usage

## URI format

When used as a consumer, receiving streaming events, the URI scheme is:

    salesforce:subscribe:topic?options

When used as a producer, invoking the Salesforce REST APIs, the URI
scheme is:

    salesforce:operationName?options

As a general example of using the operations in this salesforce
component, the following producer endpoint uses the upsertSObject API,
with the sObjectIdName parameter specifying *Name* as the external id
field. The request message body should be an SObject DTO generated
using the maven plugin.

    ...to("salesforce:upsertSObject?sObjectIdName=Name")...

## Passing in Salesforce headers and fetching Salesforce response headers

There is support for passing [Salesforce
headers](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/headers.htm)
via inbound message headers: header names that start with `Sforce` or
`x-sfdc` on the Camel message will be passed on in the request, and
response headers that start with `Sforce` will be present in the
outbound message headers.

For example, to fetch API limits, you can specify:

    // in your Camel route set the header before Salesforce endpoint
    //...
+ .setHeader("Sforce-Limit-Info", constant("api-usage")) + .to("salesforce:getGlobalObjects") + .to(myProcessor); + + // myProcessor will receive `Sforce-Limit-Info` header on the outbound + // message + class MyProcessor implements Processor { + public void process(Exchange exchange) throws Exception { + Message in = exchange.getIn(); + String apiLimits = in.getHeader("Sforce-Limit-Info", String.class); + } + } + +In addition, HTTP response status code and text are available as headers +`Exchange.HTTP_RESPONSE_CODE` and `Exchange.HTTP_RESPONSE_TEXT`. + +## Sending null values to salesforce + +By default, SObject fields with null values are not sent to salesforce. +In order to send null values to salesforce, use the `fieldsToNull` +property, as follows: + + accountSObject.getFieldsToNull().add("Site"); + +# Supported Salesforce APIs + +Camel supports the following Salesforce APIs: + +- [REST API](#RESTAPI) + +- [Apex REST API](#ApexRESTAPI) + +- [Bulk 2 API](#Bulk2API) + +- [Bulk API](#BulkAPI) + +- [Pub/Sub API](#PubSubAPI) + +- [Streaming API](#StreamingAPI) + +- [Reports API](#ReportsAPI) + +## REST API + +The following operations are supported: + +- [getVersions](#getVersions) - Gets supported Salesforce REST API + versions. + +- [getResources](#getResources) - Gets available Salesforce REST + Resource endpoints. + +- [limits](#limits) - Lists information about limits in your org. + +- [recent](#recent) - Gets the most recently accessed items that were + viewed or referenced by the current user. + +- [getGlobalObjects](#getGlobalObjects) - Gets metadata for all + available SObject types. + +- [getBasicInfo](#getBasicInfo) - Gets basic metadata for a specific + SObject type. + +- [getDescription](#getDescription) - Gets comprehensive metadata for + a specific SObject type. + +- [getSObject](#getSObject) - Gets an SObject. + +- [getSObjectWithId](#getSObjectWithId) - Gets an SObject using an + External Id (user defined) field. 
- [getBlobField](#getBlobField) - Retrieves the specified blob
  field from an individual record.

- [createSObject](#createSObject) - Creates an SObject.

- [updateSObject](#updateSObject) - Updates an SObject.

- [deleteSObject](#deleteSObject) - Deletes an SObject.

- [upsertSObject](#upsertSObject) - Inserts or updates an SObject
  using an External Id.

- [deleteSObjectWithId](#deleteSObjectWithId) - Deletes an SObject
  using an External Id.

- [query](#query) - Runs a Salesforce SOQL query.

- [queryMore](#queryMore) - Retrieves more results (in case of a large
  number of results) using the result link returned from the *query*
  API.

- [queryAll](#queryAll) - Runs a SOQL query. Unlike the query
  operation, queryAll returns records that are deleted because of a
  merge or delete. queryAll also returns information about archived
  task and event records.

- [search](#sosl_search) - Runs a Salesforce SOSL query.

- [apexCall](#apexCall) - Executes a user-defined APEX REST API call.

- [approval](#approval) - Submits a record or records (batch) for
  approval process.

- [approvals](#approvals) - Fetches a list of all approval processes.

- [composite](#composite) - Executes up to 25 REST API requests in a
  single call. You can use the output of one request as the input to a
  subsequent request.

- [composite-tree](#composite-tree) - Creates up to 200 records with
  parent-child relationships (up to 5 levels) in one go.

- [composite-batch](#composite-batch) - Executes up to 25 sub-requests
  in a single request.

- [compositeRetrieveSObjectCollections](#compositeRetrieveSObjectCollections) -
  Retrieves one or more records of the same object type.

- [compositeCreateSObjectCollections](#compositeCreateSObjectCollections) -
  Creates up to 200 records.

- [compositeUpdateSObjectCollections](#compositeUpdateSObjectCollections) -
  Updates up to 200 records.
- [compositeUpsertSObjectCollections](#compositeUpsertSObjectCollections) -
  Creates or updates up to 200 records based on an External Id field.

- [compositeDeleteSObjectCollections](#compositeDeleteSObjectCollections) -
  Deletes up to 200 records.

- [getEventSchema](#getEventSchema) - Gets the event schema for
  Platform Events, Change Data Capture events, etc.

Unless otherwise specified, DTO types for the following options are from
`org.apache.camel.component.salesforce.api.dto` or one of its
sub-packages.

### Versions

`getVersions`

Lists summary information about each Salesforce version currently
available, including the version, label, and a link to each version’s
root.

**Output**

Type: `List`

### Resources by Version

`getResources`

Lists available resources for the current API version, including
resource name and URI.

**Output**

Type: `Map`

### Limits

`limits`

Lists information about limits in your org. For each limit, this
resource returns the maximum allocation and the remaining allocation
based on usage.

**Output**

Type: `Limits`

**Additional Usage Information**

With the `salesforce:limits` operation you can fetch API limits from
Salesforce and then act upon the data received. The result of the
`salesforce:limits` operation is mapped to the
`org.apache.camel.component.salesforce.api.dto.Limits` class and can be
used in custom processors or expressions.

For instance, consider that you need to limit the API usage of
Salesforce so that 10% of daily API requests is left for other routes.
The body of the output message contains an instance of
`org.apache.camel.component.salesforce.api.dto.Limits` that can be used
in conjunction with the Content Based Router and the [Spring Expression
Language (SpEL)](#languages:spel-language.adoc) to choose when to
perform queries.

Notice how multiplying `1.0` by the integer value held in `body.dailyApiRequests.remaining` forces the expression to be evaluated with floating-point arithmetic; without it, the expression would perform integer division and evaluate to either `0` (some API limits consumed) or `1` (no API limits consumed).

    from("direct:querySalesforce")
        .to("salesforce:limits")
        .choice()
        .when(spel("#{1.0 * body.dailyApiRequests.remaining / body.dailyApiRequests.max < 0.1}"))
            .to("salesforce:query?...")
        .otherwise()
            .setBody(constant("Used up Salesforce API limits, leaving 10% for critical routes"))
        .endChoice()

### Recently Viewed Items

`recent`

Gets the most recently accessed items that were viewed or referenced by the current user. Salesforce stores information about record views in the interface and uses it to generate a list of recently viewed and referenced records, such as in the sidebar and for the auto-complete options in search.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|limit|int|An optional limit that specifies the maximum number of records to be returned. If this parameter is not specified, the default maximum number of records returned is the maximum number of entries in RecentlyViewed, which is 200 records per object.|||


**Output**

Type: `List<RecentItem>`

**Additional Usage Information**

To fetch the recent items, use the `salesforce:recent` operation. This operation returns a `java.util.List` of `org.apache.camel.component.salesforce.api.dto.RecentItem` objects (`List<RecentItem>`) that in turn contain the `Id`, `Name` and `Attributes` (with `type` and `url` properties). You can limit the number of returned items by setting the `limit` parameter to the maximum number of records to return. For example:

    from("direct:fetchRecentItems")
        .to("salesforce:recent")
        .split().body()
        .log("${body.name} at ${body.attributes.url}");

### Describe Global

`getGlobalObjects`

Lists the available objects and their metadata for your organization's data. In addition, it provides the organization encoding, as well as the maximum batch size permitted in queries.

**Output**

Type: `GlobalObjects`

### sObject Basic Information

`getBasicInfo`

Describes the individual metadata for the specified object.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|sObjectName|String|Name of SObject, e.g. Account. Alternatively, can be supplied in Body.||x|


**Output**

Type: `SObjectBasicInfo`

### sObject Describe

`getDescription`

Completely describes the individual metadata at all levels for the specified object. For example, this can be used to retrieve the fields, URLs, and child relationships for the Account object.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|sObjectName|String|Name of SObject, e.g. Account. Alternatively, can be supplied in Body.||x|


**Output**

Type: `SObjectDescription`

### Retrieve SObject

`getSObject`

Accesses a record based on the specified object ID. This operation requires the `packages` option to be set.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|sObjectName|String|Name of SObject, e.g. Account||x|
|sObjectId|String|Id of record to retrieve.||x|
|sObjectFields|String|Comma-separated list of fields to retrieve|||
|Body|AbstractSObjectBase|Instance of SObject that is used to query salesforce. If supplied, overrides sObjectName and sObjectId parameters.|||


**Output**

Type: Subclass of `AbstractSObjectBase`

### Retrieve SObject by External Id

`getSObjectWithId`

Accesses a record based on an External ID value. This operation requires the `packages` option to be set.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|sObjectIdName|String|Name of External ID field||x|
|sObjectIdValue|String|External ID value||x|
|sObjectName|String|Name of SObject, e.g. Account||x|
|Body|AbstractSObjectBase|Instance of SObject that is used to query salesforce. If supplied, overrides sObjectName and sObjectIdValue parameters.|||


**Output**

Type: Subclass of `AbstractSObjectBase`

### sObject Blob Retrieve

`getBlobField`

Retrieves the specified blob field from an individual record.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|sObjectBlobFieldName|String|Name of the blob field to retrieve||x|
|sObjectName|String|Name of SObject, e.g., Account||Required if SObject not supplied in body|
|sObjectId|String|Id of SObject||Required if SObject not supplied in body|
|Body|AbstractSObjectBase|SObject to determine type and Id from. If not supplied, sObjectId and sObjectName parameters will be used.||Required if sObjectId and sObjectName are not supplied|


**Output**

Type: `InputStream`

### Create SObject

`createSObject`

Creates a record in salesforce.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|Body|AbstractSObjectBase or String|Instance of SObject to create.||x|
|sObjectName|String|Name of SObject, e.g. Account. Only used if Camel cannot determine from Body.||If Body is a String|


**Output**

Type: `CreateSObjectResult`

### Update SObject

`updateSObject`

Updates a record in salesforce.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|Body|AbstractSObjectBase or String|Instance of SObject to update.||x|
|sObjectName|String|Name of SObject, e.g. Account. Only used if Camel cannot determine from Body.||If Body is a String|
|sObjectId|String|Id of record to update. Only used if Camel cannot determine from Body.||If Body is a String|


### Upsert SObject

`upsertSObject`

Upserts a record by External ID.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|Body|AbstractSObjectBase or String|SObject to update.||x|
|sObjectIdName|String|External ID field name.||x|
|sObjectIdValue|String|External ID value||If Body is a String|
|sObjectName|String|Name of SObject, e.g. Account. Only used if Camel cannot determine from Body.||If Body is a String|


**Output**

Type: `UpsertSObjectResult`

### Delete SObject

`deleteSObject`

Deletes a record in salesforce.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|Body|AbstractSObjectBase|Instance of SObject to delete.|||
|sObjectName|String|Name of SObject, e.g. Account. Only used if Camel cannot determine from Body.||If Body is not an AbstractSObjectBase instance|
|sObjectId|String|Id of record to delete.||If Body is not an AbstractSObjectBase instance|


### Delete SObject by External Id

`deleteSObjectWithId`

Deletes a record in salesforce by External ID.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|Body|AbstractSObjectBase|Instance of SObject to delete.|||
|sObjectIdName|String|Name of External ID field||If Body is not an AbstractSObjectBase instance|
|sObjectIdValue|String|External ID value||If Body is not an AbstractSObjectBase instance|
|sObjectName|String|Name of SObject, e.g. Account. Only used if Camel cannot determine from Body.||If Body is not an AbstractSObjectBase instance|


### Query

`query`

Runs a Salesforce SOQL query. If neither `sObjectClass` nor `sObjectName` are set, Camel will attempt to determine the correct `AbstractQueryRecordsBase` subclass based on the response.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|Body or sObjectQuery|String|SOQL query||x|
|streamQueryResult|Boolean|If true, returns a streaming Iterator and transparently retrieves all pages as needed. The sObjectClass option must reference an AbstractQueryRecordsBase subclass.|false||
|sObjectClass|String|Fully qualified name of class to deserialize response to. Usually a subclass of AbstractQueryRecordsBase, e.g. org.my.dto.QueryRecordsAccount|||
|sObjectName|String|Simple name of class to deserialize response to. Usually a subclass of AbstractQueryRecordsBase, e.g. QueryRecordsAccount. Requires the package option be set.|||

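The SOQL string supplied in the Body (or the `sObjectQuery` option) is plain text, so it is often assembled in a processor before reaching the endpoint. A minimal sketch of such assembly is shown below; the `SoqlBuilder` class and the `Account`/`Name` identifiers are hypothetical illustrations, not part of the component:

```java
// Hypothetical helper that builds a SOQL string to pass as the message
// body (or the sObjectQuery option) of the query operation.
public class SoqlBuilder {
    public static String selectByName(String sObject, String name, String... fields) {
        // SOQL string literals use single quotes; escape any embedded quotes
        String safe = name.replace("'", "\\'");
        return "SELECT " + String.join(", ", fields)
                + " FROM " + sObject
                + " WHERE Name = '" + safe + "'";
    }

    public static void main(String[] args) {
        System.out.println(selectByName("Account", "Camel", "Id", "Name"));
    }
}
```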

**Output**

Type: Instance of the class supplied in `sObjectClass`, or `Iterator` if `streamQueryResult` is true. When `streamQueryResult` is true, the header `CamelSalesforceQueryResultTotalSize` is set to the number of records that matched the query.

### Query More

`queryMore`

Retrieves more results (in case of a large number of results) using the result link returned from the `query` and `queryAll` operations. If neither `sObjectClass` nor `sObjectName` are set, Camel will attempt to determine the correct `AbstractQueryRecordsBase` subclass based on the response.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|Body or sObjectQuery|String|nextRecords value. Can be found in a prior query result in the AbstractQueryRecordsBase.nextRecordsUrl property||x|
|sObjectClass|String|Fully qualified name of class to deserialize response to. Usually a subclass of AbstractQueryRecordsBase, e.g. org.my.dto.QueryRecordsAccount|||
|sObjectName|String|Simple name of class to deserialize response to. Usually a subclass of AbstractQueryRecordsBase, e.g. QueryRecordsAccount. Requires the package option be set.|||


**Output**

Type: Instance of the class supplied in `sObjectClass`

### Query All

`queryAll`

Executes the specified SOQL query. Unlike the `query` operation, `queryAll` returns records that are deleted because of a merge or delete. It also returns information about archived task and event records. If neither `sObjectClass` nor `sObjectName` are set, Camel will attempt to determine the correct `AbstractQueryRecordsBase` subclass based on the response.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|Body or sObjectQuery|String|SOQL query||x|
|streamQueryResult|Boolean|If true, returns a streaming Iterator and transparently retrieves all pages as needed. The sObjectClass option must reference an AbstractQueryRecordsBase subclass.|false||
|sObjectClass|String|Fully qualified name of class to deserialize response to. Usually a subclass of AbstractQueryRecordsBase, e.g. org.my.dto.QueryRecordsAccount|||
|sObjectName|String|Simple name of class to deserialize response to. Usually a subclass of AbstractQueryRecordsBase, e.g. QueryRecordsAccount. Requires the package option be set.|||


**Output**

Type: Instance of the class supplied in `sObjectClass`, or `Iterator` if `streamQueryResult` is true.

### Search

`search`

Runs a Salesforce SOSL search.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|Body or sObjectSearch|String|SOSL search string||x|

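Like SOQL, the SOSL string is plain text and can be assembled before the message reaches the endpoint. A small sketch is shown below; the `SoslBuilder` class and the search terms are hypothetical, only the `FIND {…} … RETURNING` shape is standard SOSL:

```java
// Hypothetical helper that builds a SOSL string to pass as the message
// body (or the sObjectSearch option) of the search operation. The braces
// around the search term are part of SOSL syntax.
public class SoslBuilder {
    public static String findInName(String term, String sObject, String... fields) {
        return "FIND {" + term + "} IN NAME FIELDS RETURNING "
                + sObject + "(" + String.join(", ", fields) + ")";
    }

    public static void main(String[] args) {
        System.out.println(findInName("Camel*", "Account", "Id", "Name"));
    }
}
```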

**Output**

Type: `SearchResult2`

### Submit Approval

`approval`

Submits a record or records (batch) for an approval process.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|Body|ApprovalRequest or List<ApprovalRequest>|ApprovalRequest(s) to process|||
|approval.*||Prefixed headers or endpoint options in lieu of passing an ApprovalRequest in the body.|||


**Output**

Type: `ApprovalResult`

**Additional usage information**

All the properties are named exactly the same as in the Salesforce REST API, prefixed with `approval.`. You can set approval properties on the endpoint by setting `approval.PropertyName`; these are used as a template, meaning that any property not present in either the body or a header is taken from the endpoint configuration. Alternatively, you can set the approval template on the endpoint by assigning the `approval` property to a reference to a bean in the Registry.

You can also provide header values using the same `approval.PropertyName` convention in the incoming message headers.

And finally, the body can contain one `ApprovalRequest` or an `Iterable` of `ApprovalRequest` objects to process as a batch.

The important thing to remember is the priority of the values specified in these three mechanisms:

1. a value in the body takes precedence over any other

2. a value in a message header takes precedence over a template value

3. a value in the template is used if no other value was given in a header or the body

For example, to send one record for approval using values in headers use:

Given a route:

    from("direct:example1")//
        .setHeader("approval.ContextId", simple("${body['contextId']}"))
        .setHeader("approval.NextApproverIds", simple("${body['nextApproverIds']}"))
        .to("salesforce:approval?"//
            + "approval.actionType=Submit"//
            + "&approval.comments=this is a test"//
            + "&approval.processDefinitionNameOrId=Test_Account_Process"//
            + "&approval.skipEntryCriteria=true");

You could send a record for approval using:

    final Map<String, Object> body = new HashMap<>();
    body.put("contextId", accountIds.iterator().next());
    body.put("nextApproverIds", userId);

    final ApprovalResult result = template.requestBody("direct:example1", body, ApprovalResult.class);

### Get Approvals

`approvals`

Returns a list of all approval processes.

**Output**

Type: `Approvals`

### Composite

`composite`

Executes up to 25 REST API requests in a single call. You can use the output of one request as the input to a subsequent request. The response bodies and HTTP statuses of the requests are returned in a single response body. The entire series of requests counts as a single call toward your API limits. Use the Salesforce Composite API to submit multiple chained requests. Individual requests and responses are linked with the provided *reference*.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|Body|SObjectComposite|Contains REST API sub-requests to be executed.||x|
|rawPayload|Boolean|Any (un)marshaling of requests and responses are assumed to be handled by the route|false|x|
|compositeMethod|String|HTTP method to use for rawPayload requests.|POST||


**Output**

Type: `SObjectCompositeResponse`

Composite API supports only JSON payloads.

As with the batch API, the results can vary from API to API, so the body of each `SObjectCompositeResult` instance is given as a `java.lang.Object`. In most cases the result will be a `java.util.Map` with string keys and values, or another `java.util.Map` as a value. Requests are made in JSON format and hold some type information (i.e., it is known what values are strings and what values are numbers).

Let's look at an example:

    SObjectComposite composite = new SObjectComposite("38.0", true);

    // first an update operation via an external id
    final Account updateAccount = new TestAccount();
    updateAccount.setName("Salesforce");
    updateAccount.setBillingStreet("Landmark @ 1 Market Street");
    updateAccount.setBillingCity("San Francisco");
    updateAccount.setBillingState("California");
    updateAccount.setIndustry(Account_IndustryEnum.TECHNOLOGY);
    composite.addUpdate("Account", "001xx000003DIpcAAG", updateAccount, "UpdatedAccount");

    final Contact newContact = new TestContact();
    newContact.setLastName("John Doe");
    newContact.setPhone("1234567890");
    composite.addCreate(newContact, "NewContact");

    final AccountContactJunction__c junction = new AccountContactJunction__c();
    junction.setAccount__c("001xx000003DIpcAAG");
    junction.setContactId__c("@{NewContact.id}");
    composite.addCreate(junction, "JunctionRecord");

    final SObjectCompositeResponse response = template.requestBody("salesforce:composite", composite, SObjectCompositeResponse.class);
    final List<SObjectCompositeResult> results = response.getCompositeResponse();

    final SObjectCompositeResult accountUpdateResult = results.stream().filter(r -> "UpdatedAccount".equals(r.getReferenceId())).findFirst().get();
    final int statusCode = accountUpdateResult.getHttpStatusCode(); // should be 200
    final Map accountUpdateBody = accountUpdateResult.getBody();

    final SObjectCompositeResult contactCreationResult =
        results.stream().filter(r -> "JunctionRecord".equals(r.getReferenceId())).findFirst().get();

**Using the `rawPayload` option**

It is possible to call Salesforce composite directly by preparing the Salesforce JSON request in the route, thanks to the `rawPayload` option.

For instance, you can have the following route:

    from("timer:fire?period=2000").setBody(constant("{\n" +
        "  \"allOrNone\" : true,\n" +
        "  \"records\" : [ { \n" +
        "    \"attributes\" : {\"type\" : \"FOO\"},\n" +
        "    \"Name\" : \"123456789\",\n" +
        "    \"FOO\" : \"XXXX\",\n" +
        "    \"ACCOUNT\" : 2100.0,\n" +
        "    \"ExternalID\" : \"EXTERNAL\"\n" +
        "  }]\n" +
        "}"))
        .to("salesforce:composite?rawPayload=true")
        .log("${body}");

The route creates the body directly as JSON and submits it to the salesforce endpoint using the `rawPayload=true` option.

With this approach, you have complete control over the Salesforce request.

`POST` is the default HTTP method used to send raw Composite requests to salesforce. Use the `compositeMethod` option to override it with the other supported value, `GET`, which returns a list of other available composite resources.

### Composite Tree

`composite-tree`

Creates up to 200 records with parent-child relationships (up to 5 levels) in one go.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|Body|SObjectTree|Contains REST API sub-requests to be executed.||x|


**Output**

Type: `SObjectTree`

To create up to 200 records including parent-child relationships, use the `salesforce:composite-tree` operation. This requires an instance of `org.apache.camel.component.salesforce.api.dto.composite.SObjectTree` in the input message and returns the same tree of objects in the output message. The `org.apache.camel.component.salesforce.api.dto.AbstractSObjectBase` instances within the tree get updated with the identifier values (`Id` property), or their corresponding `org.apache.camel.component.salesforce.api.dto.composite.SObjectNode` is populated with `errors` on failure.

Note that the operation can succeed for some records and fail for others, so you need to check for errors manually.

The easiest way to use this functionality is to use the DTOs generated by the `camel-salesforce-maven-plugin`, but you also have the option of customizing the references that identify each object in the tree, for instance primary keys from your database.

Let's look at an example:

    Account account = ...
    Contact president = ...
    Contact marketing = ...

    Account anotherAccount = ...
    Contact sales = ...
    Asset someAsset = ...

    // build the tree
    SObjectTree request = new SObjectTree();
    request.addObject(account).addChildren(president, marketing);
    request.addObject(anotherAccount).addChild(sales).addChild(someAsset);

    final SObjectTree response = template.requestBody("salesforce:composite-tree", request, SObjectTree.class);
    final Map<Boolean, List<SObjectNode>> result = response.allNodes()
        .collect(Collectors.groupingBy(SObjectNode::hasErrors));

    final List<SObjectNode> withErrors = result.get(true);
    final List<SObjectNode> succeeded = result.get(false);

    final String firstId = succeeded.get(0).getId();

### Composite Batch

`composite-batch`

Submits a composition of requests in batch. Executes up to 25 sub-requests in a single request.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|Body|SObjectBatch|Contains sub-requests to be executed.||x|


**Output**

Type: `SObjectBatchResponse`

The Composite API batch operation allows you to accumulate multiple requests in a batch and then submit them in one go, saving the round-trip cost of multiple individual requests. Each response is then received in a list of responses with the order preserved, so that the n-th request's response is in the n-th place of the response.

The results can vary from API to API, so the result of each sub-request (`SObjectBatchResult.result`) is given as a `java.lang.Object`. In most cases the result will be a `java.util.Map` with string keys and values, or another `java.util.Map` as a value. Requests are made in JSON format and hold some type information (i.e., it is known what values are strings and what values are numbers).

Let's look at an example:

    final String accountId = ...
    final SObjectBatch batch = new SObjectBatch("53.0");

    final Account updates = new Account();
    updates.setName("NewName");
    batch.addUpdate("Account", accountId, updates);

    final Account newAccount = new Account();
    newAccount.setName("Account created from Composite batch API");
    batch.addCreate(newAccount);

    batch.addGet("Account", accountId, "Name", "BillingPostalCode");

    batch.addDelete("Account", accountId);

    final SObjectBatchResponse response = template.requestBody("salesforce:composite-batch", batch, SObjectBatchResponse.class);

    boolean hasErrors = response.hasErrors(); // if any of the requests has resulted in either 4xx or 5xx HTTP status
    final List<SObjectBatchResult> results = response.getResults(); // results of the four operations sent in the batch

    final SObjectBatchResult updateResult = results.get(0); // update result
    final int updateStatus = updateResult.getStatusCode(); // probably 204
    final Object updateResultData = updateResult.getResult(); // probably null

    final SObjectBatchResult createResult = results.get(1); // create result
    @SuppressWarnings("unchecked")
    final Map<String, String> createData = (Map<String, String>)
        createResult.getResult();
    final String newAccountId = createData.get("id"); // id of the new account. This is for JSON; for XML it would be createData.get("Result").get("id")

    final SObjectBatchResult retrieveResult = results.get(2); // retrieve result
    @SuppressWarnings("unchecked")
    final Map<String, String> retrieveData = (Map<String, String>) retrieveResult.getResult();
    final String accountName = retrieveData.get("Name"); // Name of the retrieved account. This is for JSON; for XML it would be retrieveData.get("Account").get("Name")
    final String accountBillingPostalCode = retrieveData.get("BillingPostalCode"); // BillingPostalCode of the retrieved account. This is for JSON; for XML it would be retrieveData.get("Account").get("BillingPostalCode")

    final SObjectBatchResult deleteResult = results.get(3); // delete result
    final int deleteStatus = deleteResult.getStatusCode(); // probably 204
    final Object deleteResultData = deleteResult.getResult(); // probably null

### Retrieve Multiple Records with Fewer Round-Trips

`compositeRetrieveSObjectCollections`

Retrieves one or more records of the same object type.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|sObjectIds|List of String or comma-separated string|A list of one or more IDs of the objects to return. All IDs must belong to the same object type.||x|
|sObjectFields|List of String or comma-separated string|A list of fields to include in the response. The field names you specify must be valid, and you must have read-level permissions to each field.||x|
|sObjectName|String|Type of SObject, e.g. Account||x|
|sObjectClass|String|Fully qualified class name of DTO class to use for deserializing the response.||Required if sObjectName parameter does not resolve to a class that exists in the package specified by the package option.|


**Output**

Type: `List` of the class determined by the `sObjectName` or `sObjectClass` header

### Create SObject Collections

`compositeCreateSObjectCollections`

Adds up to 200 records. Mixed SObject types are supported.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|Body|List of SObject|A list of SObjects to create||x|
|allOrNone|boolean|Indicates whether to roll back the entire request when the creation of any object fails (true) or to continue with the independent creation of other objects in the request.|false||


**Output**

Type: `List`

### Update SObject Collections

`compositeUpdateSObjectCollections`

Updates up to 200 records. Mixed SObject types are supported.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|Body|List of SObject|A list of SObjects to update||x|
|allOrNone|boolean|Indicates whether to roll back the entire request when the update of any object fails (true) or to continue with the independent update of other objects in the request.|false||


**Output**

Type: `List`

### Upsert SObject Collections

`compositeUpsertSObjectCollections`

Creates or updates (upserts) up to 200 records based on an external ID field. Mixed SObject types are not supported.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|Body|List of SObject|A list of SObjects to upsert||x|
|allOrNone|boolean|Indicates whether to roll back the entire request when the upsert of any object fails (true) or to continue with the independent upsert of other objects in the request.|false||
|sObjectName|String|Type of SObject, e.g. Account||x|
|sObjectIdName|String|Name of External ID field||x|


**Output**

Type: `List`

### Delete SObject Collections

`compositeDeleteSObjectCollections`

Deletes up to 200 records. Mixed SObject types are supported.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|sObjectIds or request body|List of String or comma-separated string|A list of up to 200 IDs of objects to be deleted.||x|
|allOrNone|boolean|Indicates whether to roll back the entire request when the deletion of any object fails (true) or to continue with the independent deletion of other objects in the request.|false||

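Since `sObjectIds` also accepts a comma-separated string, a list of record IDs can be flattened before being set as a header. A minimal sketch is shown below; the `IdJoiner` class and the record IDs are made-up illustrations:

```java
import java.util.List;

// Hypothetical helper that turns a list of record IDs into the
// comma-separated form accepted by the sObjectIds option.
public class IdJoiner {
    public static String toCommaSeparated(List<String> ids) {
        return String.join(",", ids);
    }

    public static void main(String[] args) {
        System.out.println(toCommaSeparated(List.of("001xx000003DGb1AAG", "001xx000003DGb2AAG")));
    }
}
```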

**Output**

Type: `List`

### Get Event Schema

`getEventSchema`

Gets the definition of a Platform Event in JSON format. Other types of events, such as Change Data Capture events or custom events, are also supported. This operation is available in REST API version 40.0 and later.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|eventName|String|Name of event||eventName or eventSchemaId is required|
|eventSchemaId|String|ID of a schema||eventName or eventSchemaId is required|
|eventSchemaFormat|EventSchemaFormatEnum|EXPANDED: Apache Avro format but doesn't strictly adhere to the record complex type. COMPACT: Apache Avro, adheres to the specification for the record complex type. This parameter is available in API version 43.0 and later.|EXPANDED||


**Output**

Type: `InputStream`

## Apex REST API

### Invoke an Apex REST Web Service method

`apexCall`

You can [expose your Apex class and methods](https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_rest_intro.htm) so that external applications can access your code and your application through the REST architecture.

The URI format for invoking Apex REST is:

    salesforce:apexCall[/yourApexRestUrl][?options]

You can supply the apexUrl either in the endpoint (see above) or as the `apexUrl` option as listed in the table below. In either case, the Apex URL can contain placeholders in the format `{headerName}`. E.g., for the Apex URL `MyApexClass/{id}`, the value of the header named `id` will be used to replace the placeholder. If `rawPayload` is false and neither `sObjectClass` nor `sObjectName` are set, Camel will attempt to determine the correct `AbstractQueryRecordsBase` subclass based on the response.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|Body|Map<String, Object> if GET, otherwise String or InputStream|In the case of a GET, the body (Map instance) is transformed into query parameters. For other HTTP methods, the body is used for the HTTP body.|||
|apexUrl|String|The portion of the endpoint URL after https://instance.salesforce.com/services/apexrest/, e.g., MyApexClass/||Yes, unless supplied in endpoint|
|apexMethod|String|The HTTP method (e.g. GET, POST) to use.|GET||
|rawPayload|Boolean|If true, Camel will not serialize the request or response bodies.|false||
|Header: apexQueryParam.[paramName]|Object|Headers that override apex parameters passed in the endpoint.|||
|sObjectName|String|Name of sObject (e.g. Merchandise__c) used to deserialize the response|||
|sObjectClass|String|Fully qualified class name used to deserialize the response|||

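The `{headerName}` placeholder convention described above can be sketched as a simple string substitution: each placeholder in the Apex URL is replaced with the value of the message header of the same name. This is only an illustration of the convention, not the component's actual implementation:

```java
import java.util.Map;

// Hypothetical sketch of the {headerName} placeholder substitution used
// in Apex URLs such as MyApexClass/{id}.
public class ApexUrlTemplate {
    public static String expand(String template, Map<String, Object> headers) {
        String result = template;
        for (Map.Entry<String, Object> e : headers.entrySet()) {
            // replace each {name} placeholder with the header's value
            result = result.replace("{" + e.getKey() + "}", String.valueOf(e.getValue()));
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(expand("MyApexClass/{id}", Map.of("id", "001xx000003DGb1AAG")));
    }
}
```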

**Output**

Type: Instance of the class supplied in the `sObjectClass` input header.

## Bulk 2.0 API

The Bulk 2.0 API has a simplified model over the original Bulk API. Use it to quickly load a large amount of data into salesforce, or query a large amount of data out of salesforce. Data must be provided in CSV format, usually via an `InputStream` instance. PK chunking is performed automatically. The minimum API version for Bulk 2.0 is v41.0. The minimum API version for Bulk Queries is v47.0. DTO classes mentioned below are from the `org.apache.camel.component.salesforce.api.dto.bulkv2` package. The following operations are supported:

- [bulk2CreateJob](#bulk2CreateJob) - Creates a bulk ingest job.

- [bulk2CreateBatch](#bulk2CreateBatch) - Adds a batch of data to an ingest job.

- [bulk2CloseJob](#bulk2CloseJob) - Closes an ingest job.

- [bulk2AbortJob](#bulk2AbortJob) - Aborts an ingest job.

- [bulk2DeleteJob](#bulk2DeleteJob) - Deletes an ingest job.

- [bulk2GetSuccessfulResults](#bulk2GetSuccessfulResults) - Gets successful results for an ingest job.

- [bulk2GetFailedResults](#bulk2GetFailedResults) - Gets failed results for an ingest job.

- [bulk2GetUnprocessedRecords](#bulk2GetUnprocessedRecords) - Gets unprocessed records for an ingest job.

- [bulk2GetJob](#bulk2GetJob) - Gets an ingest Job.

- [bulk2GetAllJobs](#bulk2GetAllJobs) - Gets all ingest jobs.

- [bulk2CreateQueryJob](#bulk2CreateQueryJob) - Creates a query job.

- [bulk2GetQueryJobResults](#bulk2GetQueryJobResults) - Gets query job results.

- [bulk2AbortQueryJob](#bulk2AbortQueryJob) - Aborts a query job.

- [bulk2DeleteQueryJob](#bulk2DeleteQueryJob) - Deletes a query job.

- [bulk2GetQueryJob](#bulk2GetQueryJob) - Gets a query job.

- [bulk2GetAllQueryJobs](#bulk2GetAllQueryJobs) - Gets all query jobs.

### Create a Job

`bulk2CreateJob`

Creates a bulk ingest job.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|Body|Job|Job to create||x|


**Output**

Type: `Job`

### Upload a Batch of Job Data

`bulk2CreateBatch`

Adds a batch of data to an ingest job.


|Parameter|Type|Description|Default|Required|
|---|---|---|---|---|
|Body|InputStream or String|CSV data. The first row must contain headers.||Required if jobId not supplied|
|jobId|String|Id of Job to create batch under||x|

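The CSV body expected by this operation starts with a header row naming the columns, followed by one row per record. A minimal sketch of assembling such a body is shown below; the `BulkCsv` class and the `Name`/`Industry` columns are illustrative assumptions, not requirements of the API:

```java
import java.util.List;

// Hypothetical sketch of preparing a CSV body for bulk2CreateBatch:
// first the column headers, then one comma-separated row per record.
public class BulkCsv {
    public static String toCsv(List<String> header, List<List<String>> rows) {
        StringBuilder sb = new StringBuilder(String.join(",", header)).append('\n');
        for (List<String> row : rows) {
            sb.append(String.join(",", row)).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String csv = toCsv(List.of("Name", "Industry"),
                List.of(List.of("Acme", "Manufacturing")));
        System.out.print(csv);
    }
}
```

The resulting string (or an `InputStream` over it) can be set as the message body of the route invoking `salesforce:bulk2CreateBatch`.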

### Close a Job

`bulk2CloseJob`

Closes an ingest job. You must close the job in order for it to be processed or aborted/deleted.


Parameter

Type

Description

Default

Required

jobId

String

Id of Job to close

x

+ +**Output** + +Type: `Job` + +### Abort a Job + +`bulk2AbortJob` + +Aborts an ingest job. + + +++++++ + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

jobId

String

Id of Job to abort

x

+ +**Output** + +Type: `Job` + +### Delete a Job + +`bulk2DeleteJob` + +Deletes an ingest job. + + +++++++ + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

jobId

String

Id of Job to delete

x

+ +### Get Job Successful Record Results + +`bulk2GetSuccessfulResults` + +Gets successful results for an ingest job. + + +++++++ + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

jobId

String

Id of Job to get results for

x

+ +**Output** + +Type: `InputStream` +Contents: CSV data + +### Get Job Failed Record Results + +`bulk2GetFailedResults` + +Gets failed results for an ingest job. + + +++++++ + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

jobId

String

Id of Job to get results for

x

+ +**Output** + +Type: `InputStream` +Contents: CSV data + +### Get Job Unprocessed Record Results + +`bulk2GetUnprocessedRecords` + +Gets unprocessed records for an ingest job. + + +++++++ + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

jobId

String

Id of Job to get records for

x

+ +**Output** + +Type: `InputStream` Contents: CSV data + +### Get Job Info + +`bulk2GetJob` + +Gets an ingest Job. + + +++++++ + + + + + + + + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

Body

Job

Will use Id of supplied Job to retrieve +Job

Required if jobId not +supplied

jobId

String

Id of Job to retrieve

Required if Job not +supplied in body

+ +**Output** + +Type: `Job` + +### Get All Jobs + +`bulk2GetAllJobs` + +Gets all ingest jobs. + + +++++++ + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

queryLocator

String

Used in subsequent calls if results +span multiple pages

+ +**Output** + +Type: `Jobs` + +If the `done` property of the `Jobs` instance is false, there are +additional pages to fetch, and the `nextRecordsUrl` property contains +the value to be set in the `queryLocator` parameter on subsequent calls. + +### Create a Query Job + +`bulk2CreateQueryJob` + +Gets a query job. + + +++++++ + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

Body

QueryJob

QueryJob to create

x

+ +**Output** + +Type: `QueryJob` + +### Get Results for a Query Job + +`bulk2GetQueryJobResults` + +Get bulk query job results. `jobId` parameter is required. Accepts +`maxRecords` and `locator` parameters. + + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

jobId

String

Id of Job to get results for

x

maxRecords

Integer

The maximum number of records to +retrieve per set of results for the query. The request is still subject +to the size limits. If you are working with a very large number of query +results, you may experience a timeout before receiving all the data from +Salesforce. To prevent a timeout, specify the maximum number of records +your client is expecting to receive in the maxRecords parameter. This +splits the results into smaller sets with this value as the maximum +size.

locator

locator

A string that identifies a specific set +of query results. Providing a value for this parameter returns only that +set of results. Omitting this parameter returns the first set of +results.

+ +**Output** + +Type: `InputStream` Contents: CSV data + +Response message headers include `Sforce-NumberOfRecords` and +`Sforce-Locator` headers. The value of `Sforce-Locator` can be passed +into subsequent calls via the `locator` parameter. + +### Abort a Query Job + +`bulk2AbortQueryJob` + +Aborts a query job. + + +++++++ + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

jobId

String

Id of Job to abort

x

+ +**Output** + +Type: `QueryJob` + +### Delete a Query Job + +`bulk2DeleteQueryJob` + +Deletes a query job. + + +++++++ + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

jobId

String

Id of Job to delete

x

+ +### Get Information About a Query Job + +`bulk2GetQueryJob` + +Gets a query job. + + +++++++ + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

jobId

String

Id of Job to retrieve

x

+ +**Output** + +Type: `QueryJob` + +### Get Information About All Query Jobs + +`bulk2GetAllQueryJobs` + +Gets all query jobs. + + +++++++ + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

queryLocator

String

Used in subsequent calls if results +span multiple pages

+ +**Output** + +Type: `QueryJobs` + +If the `done` property of the `QueryJobs` instance is false, there are +additional pages to fetch, and the `nextRecordsUrl` property contains +the value to be set in the `queryLocator` parameter on subsequent calls. + +## Bulk (original) API + +Producer endpoints can use the following APIs. All Job data formats, +i.e. xml, csv, zip/xml, and zip/csv are supported. +The request and response have to be marshalled/unmarshalled by the +route. Usually the request will be some stream source like a CSV file, +and the response may also be saved to a file to be correlated with the +request. + +The following operations are supported: + +- [createJob](#createJob) - Creates a Salesforce Bulk Job. + +- [getJob](#getJob) - Gets a Job using its Salesforce Id + +- [closeJob](#closeJob) - Closes a Job + +- [abortJob](#abortJob) - Aborts a Job + +- [createBatch](#createBatch) - Submits a Batch within a Bulk Job + +- [getBatch](#getBatch) - Gets a Batch using Id + +- [getAllBatches](#getAllBatches) - Gets all Batches for a Bulk Job Id + +- [getRequest](#getRequest) - Gets Request data (XML/CSV) for a Batch + +- [getResults](#getResults) - Gets the results of the Batch when its + complete + +- [createBatchQuery](#createBatchQuery) - Creates a Batch from an SOQL + query + +- [getQueryResultIds](#getQueryResultIds) - Gets a list of Result Ids + for a Batch Query + +- [getQueryResult](#getQueryResult) - Gets results for a Result Id + +### Create a Job + +`createJob` + +Creates a Salesforce Bulk Job. PK Chunking is supported via the +pkChunking\* options. See an explanation +[here](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/async_api_headers_enable_pk_chunking.htm). + + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

Body

JobInfo

Job to create

x

pkChunking

Boolean

Whether to use PK Chunking

false

pkChunkingChunkSize

Integer

pkChunkingStartRow

Integer

pkChunkingParent

String

+ +**Output** + +Type: `JobInfo` + +### Get Job Details + +`getJob` + +Gets a Job + + +++++++ + + + + + + + + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

jobId

String

Id of Job to get

Required if body not supplied

Body

JobInfo

JobInfo instance from +which Id will be used

Required if jobId not +supplied

+ +**Output** + +Type: `JobInfo` + +### Close a Job + +`closeJob` + +Closes a Job + + +++++++ + + + + + + + + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

jobId

String

Id of Job

Required if body not supplied

Body

JobInfo

JobInfo instance from +which Id will be used

Required if jobId not +supplied

+ +**Output** + +Type: `JobInfo` + +### Abort a Job + +`abortJob` + +Aborts a Job + + +++++++ + + + + + + + + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

jobId

String

Id of Job

Required if body not supplied

Body

JobInfo

JobInfo instance from +which Id will be used

Required if jobId not +supplied

+ +**Output** + +Type: `JobInfo` + +### Add a Batch to a Job + +`createBatch` + +Submits a Batch within a Bulk Job + + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

jobId

String

Id of Job

x

contentType

String

Content type of body. Can be XML, CSV, +ZIP_XML or ZIP_CSV

x

Body

InputStream or +String

Batch data

x

+ +**Output** + +Type: `BatchInfo` + +### Get Information for a Batch + +`getBatch` + +Get a Batch + + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

jobId

String

Id of Job

Required if body not supplied

batchId

String

Id of Batch

Required if body not supplied

Body

BatchInfo

JobInfo instance from +which jobId and batchId will be used

Required if jobId and +BatchId not supplied

+ +**Output** + +Type: `BatchInfo` + +### Get Information for All Batches in a Job + +`getAllBatches` + +Gets all Batches for a Bulk Job Id + + +++++++ + + + + + + + + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

jobId

String

Id of Job

Required if body not supplied

Body

JobInfo

JobInfo instance from +which Id will be used

Required if jobId not +supplied

+ +**Output** + +Type: `List` + +### Get a Batch Request + +`getRequest` + +Gets Request data (XML/CSV) for a Batch + + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

jobId

String

Id of Job

Required if body not supplied

batchId

String

Id of Batch

Required if body not supplied

Body

BatchInfo

JobInfo instance from +which jobId and batchId will be used

Required if jobId and +BatchId not supplied

+ +**Output** + +Type: `InputStream` + +### Get Batch Results + +`getResults` + +Gets the results of the Batch when it’s complete + + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

jobId

String

Id of Job

Required if body not supplied

batchId

String

Id of Batch

Required if body not supplied

Body

BatchInfo

JobInfo instance from +which jobId and batchId will be used

Required if jobId and +BatchId not supplied

+ +**Output** + +Type: `InputStream` + +### Create Bulk Query Batch + +`createBatchQuery` + +Creates a Batch from an SOQL query + + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

jobId

String

Id of Job

Required if body not supplied

contentType

String

Content type of body. Can be XML, CSV, +ZIP_XML or ZIP_CSV

Required if JobInfo +instance not supplied in body

sObjectQuery

String

SOQL query to be used for this +batch

Required if not supplied in +body

Body

JobInfo or +String

Either JobInfo instance +from which jobId and contentType will be used, +or String to be used as the Batch query

Required JobInfo if +jobId and contentType not supplied.

+ +**Output** + +Type: `BatchInfo` + +### Get Batch Results + +`getQueryResultIds` + +Gets a list of Result Ids for a Batch Query + + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

jobId

String

Id of Job

Required if body not supplied

batchId

String

Id of Batch

Required if body not supplied

Body

BatchInfo

JobInfo instance from +which jobId and batchId will be used

Required if jobId and +BatchId not supplied

+ +**Output** + +Type: `List` + +### Get Bulk Query Results + +`getQueryResult` + +Gets results for a Result Id + + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

jobId

String

Id of Job

Required if body not supplied

batchId

String

Id of Batch

Required if body not supplied

resultId

String

Id of Result

If not passed in body

Body

BatchInfo or +String

JobInfo instance from +which jobId and batchId will be used. +Otherwise, can be a String containing the +resultId

BatchInfo Required if +jobId and BatchId not supplied

+ +**Output** + +Type: `InputStream` + +For example, the following producer endpoint uses the createBatch API to +create a Job Batch. The in message must contain a body that can be +converted into an `InputStream` (usually UTF-8 CSV or XML content from a +file, etc.) and header fields *jobId* for the Job and *contentType* for +the Job content type, which can be XML, CSV, ZIP\_XML or ZIP\_CSV. The +put message body will contain `BatchInfo` on success, or throw a +`SalesforceException` on error. + + ...to("salesforce:createBatch").. + +## Pub/Sub API + +The Pub/Sub API allows you to publish and subscribe to platform events, +including real-time event monitoring events, and change data capture +events. This API is based on gRPC and HTTP/2, and event payloads are +delivered in Apache Avro format. + +### Publishing Events + +The URI format for publishing events is: + + salesforce:pubSubPublish: + +For example: + + .to("salesforce:pubsubPublish:/event/MyCustomPlatformEvent__e") + +### Publish an Event + +`pubSubPublish` + + +++++++ + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

Body

List. List can contained +mixed types (see description below).

Event payloads to be published

+ +Because The Pub/Sub API requires that event payloads be serialized in +Apache Avro format, Camel will attempt to serialize event payloads from +the following input types: + +- Avro `GenericRecord`. Camel fetches the Avro schema in order to + serialize `GenericRecord` instances. This option doesn’t require + ahead-of-time generation of Event classes. + +- Avro `SpecificRecord`. Subclasses of `SpecificRecord` contain + properties that are specific to an event type. The [maven + plugin](#MavenPlugin) can generate the subclasses automatically. + +- POJO. Camel fetches the Avro schema in order to serialize POJO + instances. The POJO’s field names must match event field names + exactly, including case. + +- `String`. Camel will treat the `String` value as JSON and serialize + to Avro. Note that the JSON value does not have to be Avro-encoded + JSON. It can be arbitrary JSON, but it must be serializable to Avro + based on the Schema associated with the topic you’re publishing to. + The JSON object’s field names must match event field names exactly, + including case. + +- `byte[]`. Camel will not perform any serialization. Value must be + the Avro-encoded event payload. + +- `com.salesforce.eventbus.protobuf.ProducerEvent`. Providing a + `ProducerEvent` allows full control, e.g., setting the `id` + property, which can be tied back to the + `PublishResult.CorrelationKey`. + +**Output** + +Type: +`List` + +The order of the items in the returned `List` correlates to the order of +the items in the input `List`. + +### Subscribing + +The URI format for subscribing to a Pub/Sub topic is: + + salesforce:pubSubSubscribe: + +For example: + + from("salesforce:pubSubSubscribe:/event/BatchApexErrorEvent") + + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

replayPreset

ReplayPreset

Values: LATEST, +EARLIEST, CUSTOM.

LATEST

pubSubReplayId

String

When replayPreset is set +to CUSTOM, the replayId to use when subscribing to a +topic.

pubSubBatchSize

int

Max number of events to receive at a +time. Values >100 will be normalized to 100 by salesforce.

100

X

pubSubDeserializeType

PubSubDeserializeType

Values: AVRO, +SPECIFIC_RECORD, GENERIC_RECORD, +POJO, JSON. AVRO will try a +SpecificRecord subclass if found, otherwise +GenericRecord

AVRO

X

pubSubPojoClass

Fully qualified class name to +deserialize Pub/Sub API event to.

If pubSubDeserializeType +is POJO

+ +**Output** + +Type: Determined by the `pubSubDeserializeType` option. + +Headers: `CamelSalesforcePubSubReplayId` + +## Streaming API + +The Streaming API enables streaming of events using push technology and +provides a subscription mechanism for receiving events in near real +time. The Streaming API subscription mechanism supports multiple types +of events, including PushTopic events, generic events, platform events, +and Change Data Capture events. + +### Push Topics + +The URI format for consuming Push Topics is: + + salesforce:subscribe:[?options] + +To create and subscribe to a topic + + from("salesforce:subscribe:CamelTestTopic?notifyForFields=ALL¬ifyForOperations=ALL&sObjectName=Merchandise__c&updateTopic=true&sObjectQuery=SELECT Id, Name FROM Merchandise__c")... + +To subscribe to an existing topic + + from("salesforce:subscribe:CamelTestTopic&sObjectName=Merchandise__c")... + + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

sObjectName

String

SObject to monitor

x

sObjectQuery

String

SOQL query used to create Push +Topic

Required for creating new +topics

updateTopic

Boolean

Whether to update an existing Push +Topic if exists

false

notifyForFields

NotifyForFieldsEnum

Specifies how the record is evaluated +against the PushTopic query.

Referenced

notifyForOperationCreate

Boolean

Whether a create operation should +generate a notification.

false

notifyForOperationDelete

Boolean

Whether a delete operation should +generate a notification.

false

notifyForOperationUndelete

Boolean

Whether an undelete operation should +generate a notification.

false

notifyForOperationUpdate

Boolean

Whether an update operation should +generate a notification.

false

notifyForOperations

NotifyForOperationsEnum

Whether an update operation should +generate a notification. Only for use in API version < 29.0

All

replayId

int

The replayId value to use when +subscribing.

defaultReplayId

int

Default replayId setting if no value is +found in initialReplayIdMap.

-1

fallBackReplayId

int

ReplayId to fall back to after an +Invalid Replay Id response.

-1

+ +**Output** + +Type: Class passed via `sObjectName` parameter + +### Platform Events + +To emit a platform event use the [createSObject](#createSObject) +operation, passing an instance of a platform event, e.g. +`Order_Event__e`. + +The URI format for consuming platform events is: + + salesforce:subscribe:event/ + +For example, to receive platform events use for the event type +`Order_Event__e`: + + from("salesforce:subscribe:event/Order_Event__e") + + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

rawPayload

Boolean

If false, operation returns a +PlatformEvent, otherwise returns the raw Bayeux +Message

false

replayId

int

The replayId value to use when +subscribing.

defaultReplayId

int

Default replayId setting if no value is +found in initialReplayIdMap.

-1

fallBackReplayId

int

ReplayId to fall back to after an +Invalid Replay Id response.

-1

+ +**Output** + +Type: `PlatformEvent` or `org.cometd.bayeux.Message` + +### Change Data Capture Events + +Change Data Capture (CDC) allows you to receive near-real-time changes +of Salesforce records, and synchronize corresponding records in an +external data store. Change Data Capture publishes change events, which +represent changes to Salesforce records. Changes include the creation of +a new record, updates to an existing record, deletion of a record, and +undeletion of a record. + +The URI format to consume CDC events is as follows: + +All Selected Entities + + salesforce:subscribe:data/ChangeEvents + +Standard Objects + + salesforce:subscribe:data/ChangeEvent + +Custom Objects + + salesforce:subscribe:data/__ChangeEvent + +Here are a few examples + + from("salesforce:subscribe:data/ChangeEvents?replayId=-1").log("being notified of all change events") + from("salesforce:subscribe:data/AccountChangeEvent?replayId=-1").log("being notified of change events for Account records") + from("salesforce:subscribe:data/Employee__ChangeEvent?replayId=-1").log("being notified of change events for Employee__c custom object") + +More details about how to use the Camel Salesforce component change data +capture capabilities could be found in the +[ChangeEventsConsumerIntegrationTest](https://github.com/apache/camel/tree/main/components/camel-salesforce/camel-salesforce-component/src/test/java/org/apache/camel/component/salesforce/ChangeEventsConsumerIntegrationTest.java). + +The [Salesforce developer +guide](https://developer.salesforce.com/docs/atlas.en-us.change_data_capture.meta/change_data_capture/cdc_intro.htm) +is a good fit to better know the subtleties of implementing a change +data capture integration application. The dynamic nature of change event +body fields, high level replication steps as well as security +considerations could be of interest. + + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

rawPayload

Boolean

If false, operation returns a +Map<String, Object>, otherwise returns the raw Bayeux +Message

false

replayId

int

The replayId value to use when +subscribing.

defaultReplayId

int

Default replayId setting if no value is +found in initialReplayIdMap.

-1

fallBackReplayId

int

ReplayId to fall back to after an +Invalid Replay Id response.

-1

+ +**Output** + +Type: `Map` or `org.cometd.bayeux.Message` + +Headers + + ++++ + + + + + + + + + + +

Name

Description

CamelSalesforceChangeType

CREATE, +UPDATE, DELETE or +UNDELETE

+ +## Reports API + +- [getRecentReports](#getRecentReports) - Gets up to 200 of the + reports you most recently viewed. + +- [getReportDescription](#getReportDescription) - Retrieves report + description. + +- [executeSyncReport](#executeSyncReport) - Runs a report + synchronously. + +- [executeAsyncReport](#executeAsyncReport) - Runs a report + asynchronously. + +- [getReportInstances](#getReportInstances) - Returns a list of + instances for a report that you requested to be run asynchronously. + +- [getReportResults](#getReportResults) - Retrieves results for an + instance of a report run asynchronously. + +### Report List + +`getRecentReports` + +Gets up to 200 of the reports you most recently viewed. + +**Output** + +Type: `List` + +### Describe Report + +`getReportDescription` + +Retrieves the report, report type, and related metadata for a report, +either in a tabular or summary or matrix format. + + +++++++ + + + + + + + + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

reportId

String

Id of Report

Required if not supplied in +body

Body

String

Id of Report

Required if not supplied in +reportId parameter

+ +**Output** + +Type: `ReportDescription` + +### Execute Sync + +`executeSyncReport` + +Runs a report synchronously with or without changing filters and returns +the latest summary data. + + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

reportId

String

Id of Report

Required if not supplied in +body

includeDetails

Boolean

Whether to include details

false

reportMetadata

ReportMetadata

Optionally, pass ReportMetadata here +instead of body

Body

ReportMetadata

If supplied, will use instead of +reportId

Required if not supplied in +reportId parameter

+ +**Output** + +Type: `AbstractReportResultsBase` + +### Execute Async + +`executeAsyncReport` + +Runs an instance of a report asynchronously with or without filters and +returns the summary data with or without details. + + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

reportId

String

Id of Report

Required if not supplied in +body

includeDetails

Boolean

Whether to include details

false

reportMetadata

ReportMetadata

Optionally, pass ReportMetadata here +instead of body

Body

ReportMetadata

If supplied, will use instead of +reportId parameter

Required if not supplied in +reportId parameter

+ +**Output** + +Type: `ReportInstance` + +### Instances List + +`getReportInstances` + +Returns a list of instances for a report that you requested to be run +asynchronously. Each item in the list is treated as a separate instance +of the report. + + +++++++ + + + + + + + + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

reportId

String

Id of Report

Required if not supplied in +body

Body

String

If supplied, will use instead of +reportId parameter

Required if not supplied in +reportId parameter

+ +**Output** + +Type: `List` + +### Instance Results + +`getReportResults` + +Contains the results of running a report. + + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

reportId

String

Id of Report

Required if not supplied in +body

instanceId

String

Id of Report instance

x

Body

String

If supplied, will use instead of +reportId parameter

Required if not supplied in +reportId parameter

+ +**Output** + +Type: `AbstractReportResultsBase` + +# Miscellaneous Operations + +- [raw](#raw) - Send requests to salesforce and have full, raw control + over endpoint, parameters, body, etc. + +## Raw + +`raw` + +Sends HTTP requests to salesforce with full, raw control of all aspects +of the call. Any serialization or deserialization of request and +response bodies must be performed in the route. The `Content-Type` HTTP +header will be automatically set based on the `format` option, but this +can be overridden with the `rawHttpHeaders` option. + + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Parameter

Type

Description

Default

Required

Body

String or +InputStream

Body of the HTTP request

rawPath

String

The portion of the endpoint URL after +the domain name, e.g., +/services/data/v51.0/sobjects/Account/

x

rawMethod

String

The HTTP method

x

rawQueryParameters

String

Comma separated list of message headers +to include as query parameters. Do not url-encode values as this will be +done automatically.

rawHttpHeaders

String

Comma separated list of message headers +to include as HTTP headers

+ +**Output** + +Type: `InputStream` + +### Query example + +In this example we’ll send a query to the REST API. The query must be +passed in a URL parameter called "q", so we’ll create a message header +called q and tell the raw operation to include that message header as a +URL parameter: + + from("direct:queryExample") + .setHeader("q", "SELECT Id, LastName FROM Contact") + .to("salesforce:raw?format=JSON&rawMethod=GET&rawQueryParameters=q&rawPath=/services/data/v51.0/query") + // deserialize JSON results or handle in some other way + +### SObject example + +In this example, we’ll pass a Contact the REST API in a `create` +operation. Since the `raw` operation does not perform any serialization, +we make sure to pass XML in the message body + + from("direct:createAContact") + .setBody(constant("TestLast")) + .to("salesforce:raw?format=XML&rawMethod=POST&rawPath=/services/data/v51.0/sobjects/Contact") + +The response is: + + + + 0034x00000RnV6zAAF + true + + +# Uploading a document to a ContentWorkspace + +Create the ContentVersion in Java, using a Processor instance: + + public class ContentProcessor implements Processor { + public void process(Exchange exchange) throws Exception { + Message message = exchange.getIn(); + + ContentVersion cv = new ContentVersion(); + ContentWorkspace cw = getWorkspace(exchange); + cv.setFirstPublishLocationId(cw.getId()); + cv.setTitle("test document"); + cv.setPathOnClient("test_doc.html"); + byte[] document = message.getBody(byte[].class); + ObjectMapper mapper = new ObjectMapper(); + String enc = mapper.convertValue(document, String.class); + cv.setVersionDataUrl(enc); + message.setBody(cv); + } + + protected ContentWorkspace getWorkSpace(Exchange exchange) { + // Look up the content workspace somehow, maybe use enrich() to add it to a + // header that can be extracted here + .... 
+ } + } + +Give the output from the processor to the Salesforce component: + + from("file:///home/camel/library") + .to(new ContentProcessor()) // convert bytes from the file into a ContentVersion SObject + // for the salesforce component + .to("salesforce:createSObject"); + +# Generating SOQL query strings + +`org.apache.camel.component.salesforce.api.utils.QueryHelper` contains +helper methods to generate SOQL queries. For instance to fetch all +custom fields from *Account* SObject you can simply generate the SOQL +SELECT by invoking: + + String allCustomFieldsQuery = QueryHelper.queryToFetchFilteredFieldsOf(new Account(), SObjectField::isCustom); + +# Camel Salesforce Maven Plugin + +The Maven plugin generates Java DTOs to represent salesforce objects. + +Please refer to +[README.md](https://github.com/apache/camel/tree/main/components/camel-salesforce/camel-salesforce-maven-plugin) +for details on how to use the plugin. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|apexMethod|APEX method name||string| +|apexQueryParams|Query params for APEX method||object| +|apiVersion|Salesforce API version.|56.0|string| +|backoffIncrement|Backoff interval increment for Streaming connection restart attempts for failures beyond CometD auto-reconnect.|1000|duration| +|batchId|Bulk API Batch ID||string| +|contentType|Bulk API content type, one of XML, CSV, ZIP\_XML, ZIP\_CSV||object| +|defaultReplayId|Default replayId setting if no value is found in initialReplayIdMap|-1|integer| +|fallBackReplayId|ReplayId to fall back to after an Invalid Replay Id response|-1|integer| +|format|Payload format to use for Salesforce API calls, either JSON or XML, defaults to JSON. 
As of Camel 3.12, this option only applies to the Raw operation.||object| +|httpClient|Custom Jetty Http Client to use to connect to Salesforce.||object| +|httpClientConnectionTimeout|Connection timeout used by the HttpClient when connecting to the Salesforce server.|60000|integer| +|httpClientIdleTimeout|Timeout used by the HttpClient when waiting for response from the Salesforce server.|10000|integer| +|httpMaxContentLength|Max content length of an HTTP response.||integer| +|httpRequestBufferSize|HTTP request buffer size. May need to be increased for large SOQL queries.|8192|integer| +|httpRequestTimeout|Timeout value for HTTP requests.|60000|integer| +|includeDetails|Include details in Salesforce1 Analytics report, defaults to false.||boolean| +|initialReplayIdMap|Replay IDs to start from per channel name.||object| +|instanceId|Salesforce1 Analytics report execution instance ID||string| +|jobId|Bulk API Job ID||string| +|limit|Limit on number of returned records. Applicable to some of the API, check the Salesforce documentation.||integer| +|locator|Locator provided by salesforce Bulk 2.0 API for use in getting results for a Query job.||string| +|maxBackoff|Maximum backoff interval for Streaming connection restart attempts for failures beyond CometD auto-reconnect.|30000|duration| +|maxRecords|The maximum number of records to retrieve per set of results for a Bulk 2.0 Query. The request is still subject to the size limits. If you are working with a very large number of query results, you may experience a timeout before receiving all the data from Salesforce. To prevent a timeout, specify the maximum number of records your client is expecting to receive in the maxRecords parameter. This splits the results into smaller sets with this value as the maximum size.||integer| +|notFoundBehaviour|Sets the behaviour of 404 not found status received from Salesforce API. 
Should the body be set to NULL NotFoundBehaviour#NULL or should a exception be signaled on the exchange NotFoundBehaviour#EXCEPTION - the default.|EXCEPTION|object| +|notifyForFields|Notify for fields, options are ALL, REFERENCED, SELECT, WHERE||object| +|notifyForOperationCreate|Notify for create operation, defaults to false (API version \>= 29.0)||boolean| +|notifyForOperationDelete|Notify for delete operation, defaults to false (API version \>= 29.0)||boolean| +|notifyForOperations|Notify for operations, options are ALL, CREATE, EXTENDED, UPDATE (API version \< 29.0)||object| +|notifyForOperationUndelete|Notify for un-delete operation, defaults to false (API version \>= 29.0)||boolean| +|notifyForOperationUpdate|Notify for update operation, defaults to false (API version \>= 29.0)||boolean| +|objectMapper|Custom Jackson ObjectMapper to use when serializing/deserializing Salesforce objects.||object| +|packages|In what packages are the generated DTO classes. Typically the classes would be generated using camel-salesforce-maven-plugin. Set it if using the generated DTOs to gain the benefit of using short SObject names in parameters/header values. Multiple packages can be separated by comma.||string| +|pkChunking|Use PK Chunking. Only for use in original Bulk API. Bulk 2.0 API performs PK chunking automatically, if necessary.||boolean| +|pkChunkingChunkSize|Chunk size for use with PK Chunking. If unspecified, salesforce default is 100,000. Maximum size is 250,000.||integer| +|pkChunkingParent|Specifies the parent object when you're enabling PK chunking for queries on sharing objects. The chunks are based on the parent object's records rather than the sharing object's records. For example, when querying on AccountShare, specify Account as the parent object. 
PK chunking is supported for sharing objects as long as the parent object is supported.||string| +|pkChunkingStartRow|Specifies the 15-character or 18-character record ID to be used as the lower boundary for the first chunk. Use this parameter to specify a starting ID when restarting a job that failed between batches.||string| +|queryLocator|Query Locator provided by salesforce for use when a query results in more records than can be retrieved in a single call. Use this value in a subsequent call to retrieve additional records.||string| +|rawPayload|Use raw payload String for request and response (either JSON or XML depending on format), instead of DTOs, false by default|false|boolean| +|reportId|Salesforce1 Analytics report Id||string| +|reportMetadata|Salesforce1 Analytics report metadata for filtering||object| +|resultId|Bulk API Result ID||string| +|sObjectBlobFieldName|SObject blob field name||string| +|sObjectClass|Fully qualified SObject class name, usually generated using camel-salesforce-maven-plugin||string| +|sObjectFields|SObject fields to retrieve||string| +|sObjectId|SObject ID if required by API||string| +|sObjectIdName|SObject external ID field name||string| +|sObjectIdValue|SObject external ID field value||string| +|sObjectName|SObject name if required or supported by API||string| +|sObjectQuery|Salesforce SOQL query string||string| +|sObjectSearch|Salesforce SOSL search string||string| +|streamQueryResult|If true, streams SOQL query result and transparently handles subsequent requests if there are multiple pages. Otherwise, results are returned one page at a time.|false|boolean| +|updateTopic|Whether to update an existing Push Topic when using the Streaming API, defaults to false|false|boolean| +|config|Global endpoint configuration - use to set values that are common to all endpoints||object| +|httpClientProperties|Used to set any properties that can be configured on the underlying HTTP client. 
Have a look at properties of SalesforceHttpClient and the Jetty HttpClient for all available options.||object|
+|longPollingTransportProperties|Used to set any properties that can be configured on the LongPollingTransport used by the BayeuxClient (CometD) used by the streaming api||object|
+|workerPoolMaxSize|Maximum size of the thread pool used to handle HTTP responses.|20|integer|
+|workerPoolSize|Size of the thread pool used to handle HTTP responses.|10|integer|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|pubSubBatchSize|Max number of events to receive in a batch from the Pub/Sub API.|100|integer|
+|pubSubDeserializeType|How to deserialize events consumed from the Pub/Sub API. 
AVRO will try a SpecificRecord subclass if found, otherwise GenericRecord.|AVRO|object| +|pubSubPojoClass|Fully qualified class name to deserialize Pub/Sub API event to.||string| +|replayPreset|Replay preset for Pub/Sub API.|LATEST|object| +|allOrNone|Composite API option to indicate to rollback all records if any are not successful.|false|boolean| +|apexUrl|APEX method URL||string| +|compositeMethod|Composite (raw) method.||string| +|eventName|Name of Platform Event, Change Data Capture Event, custom event, etc.||string| +|eventSchemaFormat|EXPANDED: Apache Avro format but doesn't strictly adhere to the record complex type. COMPACT: Apache Avro, adheres to the specification for the record complex type. This parameter is available in API version 43.0 and later.||object| +|eventSchemaId|The ID of the event schema.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|rawHttpHeaders|Comma separated list of message headers to include as HTTP parameters for Raw operation.||string| +|rawMethod|HTTP method to use for the Raw operation||string| +|rawPath|The portion of the endpoint URL after the domain name. E.g., '/services/data/v52.0/sobjects/Account/'||string| +|rawQueryParameters|Comma separated list of message headers to include as query parameters for Raw operation. Do not url-encode values as this will be done automatically.||string| +|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|httpProxyExcludedAddresses|A list of addresses for which HTTP proxy server should not be used.||object|
+|httpProxyHost|Hostname of the HTTP proxy server to use.||string|
+|httpProxyIncludedAddresses|A list of addresses for which HTTP proxy server should be used.||object|
+|httpProxyPort|Port number of the HTTP proxy server to use.||integer|
+|httpProxySocks4|If set to true, configures the HTTP proxy to be used as a SOCKS4 proxy.|false|boolean|
+|authenticationType|Explicit authentication method to be used, one of USERNAME\_PASSWORD, REFRESH\_TOKEN, CLIENT\_CREDENTIALS, or JWT. Salesforce component can auto-determine the authentication method to use from the properties set; set this property to eliminate any ambiguity.||object|
+|clientId|OAuth Consumer Key of the connected app configured in the Salesforce instance setup. 
Typically a connected app needs to be configured but one can be provided by installing a package.||string|
+|clientSecret|OAuth Consumer Secret of the connected app configured in the Salesforce instance setup.||string|
+|httpProxyAuthUri|Used in authentication against the HTTP proxy server, needs to match the URI of the proxy server in order for the httpProxyUsername and httpProxyPassword to be used for authentication.||string|
+|httpProxyPassword|Password to use to authenticate against the HTTP proxy server.||string|
+|httpProxyRealm|Realm of the proxy server, used in preemptive Basic/Digest authentication methods against the HTTP proxy server.||string|
+|httpProxySecure|If set to false disables the use of TLS when accessing the HTTP proxy.|true|boolean|
+|httpProxyUseDigestAuth|If set to true, Digest authentication will be used when authenticating to the HTTP proxy; otherwise the Basic authorization method will be used|false|boolean|
+|httpProxyUsername|Username to use to authenticate against the HTTP proxy server.||string|
+|instanceUrl|URL of the Salesforce instance used after authentication, by default received from Salesforce on successful authentication||string|
+|jwtAudience|Value to use for the Audience claim (aud) when using OAuth JWT flow. If not set, the login URL will be used, which is appropriate in most cases.||string|
+|keystore|KeyStore parameters to use in OAuth JWT flow. The KeyStore should contain only one entry with private key and certificate. Salesforce does not verify the certificate chain, so this can easily be a self-signed certificate. Make sure that you upload the certificate to the corresponding connected app.||object|
+|lazyLogin|If set to true prevents the component from authenticating to Salesforce at the start of the component. You would generally set this to the (default) false and authenticate early and be immediately aware of any authentication issues. 
Lazy login is not supported by salesforce consumers.|false|boolean|
+|loginConfig|All authentication configuration in one nested bean, all properties set there can be set directly on the component as well||object|
+|loginUrl|URL of the Salesforce instance used for authentication, by default set to https://login.salesforce.com|https://login.salesforce.com|string|
+|password|Password used in OAuth flow to gain access to access token. It's easy to get started with password OAuth flow, but in general one should avoid it as it is deemed less secure than other flows. Make sure that you append security token to the end of the password if using one.||string|
+|pubSubHost|Pub/Sub host|api.pubsub.salesforce.com|string|
+|pubSubPort|Pub/Sub port|7443|integer|
+|refreshToken|Refresh token already obtained in the refresh token OAuth flow. One needs to set up a web application and configure a callback URL to receive the refresh token, or configure using the builtin callback at https://login.salesforce.com/services/oauth2/success or https://test.salesforce.com/services/oauth2/success and then retrieve the refresh\_token from the URL at the end of the flow. Note that in development organizations Salesforce allows hosting the callback web application at localhost.||string|
+|sslContextParameters|SSL parameters to use, see SSLContextParameters class for all available options.||object|
+|useGlobalSslContextParameters|Enable usage of global SSL context parameters|false|boolean|
+|userName|Username used in OAuth flow to gain access to access token. 
It's easy to get started with password OAuth flow, but in general one should avoid it as it is deemed less secure than other flows.||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|operationName|The operation to use||object| +|topicName|The name of the topic/channel to use||string| +|apexMethod|APEX method name||string| +|apexQueryParams|Query params for APEX method||object| +|apiVersion|Salesforce API version.|56.0|string| +|backoffIncrement|Backoff interval increment for Streaming connection restart attempts for failures beyond CometD auto-reconnect.|1000|duration| +|batchId|Bulk API Batch ID||string| +|contentType|Bulk API content type, one of XML, CSV, ZIP\_XML, ZIP\_CSV||object| +|defaultReplayId|Default replayId setting if no value is found in initialReplayIdMap|-1|integer| +|fallBackReplayId|ReplayId to fall back to after an Invalid Replay Id response|-1|integer| +|format|Payload format to use for Salesforce API calls, either JSON or XML, defaults to JSON. As of Camel 3.12, this option only applies to the Raw operation.||object| +|httpClient|Custom Jetty Http Client to use to connect to Salesforce.||object| +|includeDetails|Include details in Salesforce1 Analytics report, defaults to false.||boolean| +|initialReplayIdMap|Replay IDs to start from per channel name.||object| +|instanceId|Salesforce1 Analytics report execution instance ID||string| +|jobId|Bulk API Job ID||string| +|limit|Limit on number of returned records. Applicable to some of the API, check the Salesforce documentation.||integer| +|locator|Locator provided by salesforce Bulk 2.0 API for use in getting results for a Query job.||string| +|maxBackoff|Maximum backoff interval for Streaming connection restart attempts for failures beyond CometD auto-reconnect.|30000|duration| +|maxRecords|The maximum number of records to retrieve per set of results for a Bulk 2.0 Query. The request is still subject to the size limits. 
If you are working with a very large number of query results, you may experience a timeout before receiving all the data from Salesforce. To prevent a timeout, specify the maximum number of records your client is expecting to receive in the maxRecords parameter. This splits the results into smaller sets with this value as the maximum size.||integer|
+|notFoundBehaviour|Sets the behaviour of 404 not found status received from Salesforce API. Should the body be set to NULL NotFoundBehaviour#NULL, or should an exception be signaled on the exchange NotFoundBehaviour#EXCEPTION (the default).|EXCEPTION|object|
+|notifyForFields|Notify for fields, options are ALL, REFERENCED, SELECT, WHERE||object|
+|notifyForOperationCreate|Notify for create operation, defaults to false (API version \>= 29.0)||boolean|
+|notifyForOperationDelete|Notify for delete operation, defaults to false (API version \>= 29.0)||boolean|
+|notifyForOperations|Notify for operations, options are ALL, CREATE, EXTENDED, UPDATE (API version \< 29.0)||object|
+|notifyForOperationUndelete|Notify for un-delete operation, defaults to false (API version \>= 29.0)||boolean|
+|notifyForOperationUpdate|Notify for update operation, defaults to false (API version \>= 29.0)||boolean|
+|objectMapper|Custom Jackson ObjectMapper to use when serializing/deserializing Salesforce objects.||object|
+|pkChunking|Use PK Chunking. Only for use in original Bulk API. Bulk 2.0 API performs PK chunking automatically, if necessary.||boolean|
+|pkChunkingChunkSize|Chunk size for use with PK Chunking. If unspecified, salesforce default is 100,000. Maximum size is 250,000.||integer|
+|pkChunkingParent|Specifies the parent object when you're enabling PK chunking for queries on sharing objects. The chunks are based on the parent object's records rather than the sharing object's records. For example, when querying on AccountShare, specify Account as the parent object. 
PK chunking is supported for sharing objects as long as the parent object is supported.||string|
+|pkChunkingStartRow|Specifies the 15-character or 18-character record ID to be used as the lower boundary for the first chunk. Use this parameter to specify a starting ID when restarting a job that failed between batches.||string|
+|queryLocator|Query Locator provided by salesforce for use when a query results in more records than can be retrieved in a single call. Use this value in a subsequent call to retrieve additional records.||string|
+|rawPayload|Use raw payload String for request and response (either JSON or XML depending on format), instead of DTOs, false by default|false|boolean|
+|reportId|Salesforce1 Analytics report Id||string|
+|reportMetadata|Salesforce1 Analytics report metadata for filtering||object|
+|resultId|Bulk API Result ID||string|
+|sObjectBlobFieldName|SObject blob field name||string|
+|sObjectClass|Fully qualified SObject class name, usually generated using camel-salesforce-maven-plugin||string|
+|sObjectFields|SObject fields to retrieve||string|
+|sObjectId|SObject ID if required by API||string|
+|sObjectIdName|SObject external ID field name||string|
+|sObjectIdValue|SObject external ID field value||string|
+|sObjectName|SObject name if required or supported by API||string|
+|sObjectQuery|Salesforce SOQL query string||string|
+|sObjectSearch|Salesforce SOSL search string||string|
+|streamQueryResult|If true, streams SOQL query result and transparently handles subsequent requests if there are multiple pages. Otherwise, results are returned one page at a time.|false|boolean|
+|updateTopic|Whether to update an existing Push Topic when using the Streaming API, defaults to false|false|boolean|
+|pubSubBatchSize|Max number of events to receive in a batch from the Pub/Sub API.|100|integer|
+|pubSubDeserializeType|How to deserialize events consumed from the Pub/Sub API. 
AVRO will try a SpecificRecord subclass if found, otherwise GenericRecord.|AVRO|object| +|pubSubPojoClass|Fully qualified class name to deserialize Pub/Sub API event to.||string| +|pubSubReplayId|The replayId value to use when subscribing to the Pub/Sub API.||string| +|replayId|The replayId value to use when subscribing to the Streaming API.||integer| +|replayPreset|Replay preset for Pub/Sub API.|LATEST|object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|allOrNone|Composite API option to indicate to rollback all records if any are not successful.|false|boolean| +|apexUrl|APEX method URL||string| +|compositeMethod|Composite (raw) method.||string| +|eventName|Name of Platform Event, Change Data Capture Event, custom event, etc.||string| +|eventSchemaFormat|EXPANDED: Apache Avro format but doesn't strictly adhere to the record complex type. 
COMPACT: Apache Avro, adheres to the specification for the record complex type. This parameter is available in API version 43.0 and later.||object|
+|eventSchemaId|The ID of the event schema.||string|
+|rawHttpHeaders|Comma separated list of message headers to include as HTTP parameters for Raw operation.||string|
+|rawMethod|HTTP method to use for the Raw operation||string|
+|rawPath|The portion of the endpoint URL after the domain name. E.g., '/services/data/v52.0/sobjects/Account/'||string|
+|rawQueryParameters|Comma separated list of message headers to include as query parameters for Raw operation. Do not url-encode values as this will be done automatically.||string|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-sap-netweaver.md b/camel-sap-netweaver.md
new file mode 100644
index 0000000000000000000000000000000000000000..0e67d5e76d2ae70393db4ab04920dcc95834e61b
--- /dev/null
+++ b/camel-sap-netweaver.md
@@ -0,0 +1,123 @@
+# Sap-netweaver
+
+**Since Camel 2.12**
+
+**Only producer is supported**
+
+The SAP NetWeaver component integrates with the [SAP NetWeaver
+Gateway](http://scn.sap.com/community/developer-center/netweaver-gateway)
+using HTTP transports.
+
+This Camel component supports only producer endpoints. 
+ +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-sap-netweaver + x.x.x + + + +# URI format + +The URI scheme for a sap netweaver gateway component is as follows + + sap-netweaver:https://host:8080/path?username=foo&password=secret + +# Prerequisites + +You would need to have an account on the SAP NetWeaver system to be able +to leverage this component. SAP provides a [demo +setup](http://scn.sap.com/docs/DOC-31221#section6) where you can request +an account. + +This component uses the basic authentication scheme for logging into SAP +NetWeaver. + +# Examples + +This example is using the flight demo example from SAP, which is +available online over the internet +[here](http://scn.sap.com/docs/DOC-31221). + +In the route below we request the SAP NetWeaver demo server using the +following url + + https://sapes4.sapdevcenter.com/sap/opu/odata/IWFND/RMTSAMPLEFLIGHT + +And we want to execute the following command + + FlightCollection(carrid='AA',connid='0017',fldate=datetime'2016-04-20T00%3A00%3A00') + +To get flight details for the given flight. The command syntax is in [MS +ADO.Net Data +Service](http://msdn.microsoft.com/en-us/library/cc956153.aspx) format. + +We have the following Camel route + + from("direct:start") + .setHeader(NetWeaverConstants.COMMAND, constant(command)) + .toF("sap-netweaver:%s?username=%s&password=%s", url, username, password) + .to("log:response") + .to("velocity:flight-info.vm") + +Where `url`, `username`, `password` and `command` are defined as: + + private String username = "P1909969254"; + private String password = "TODO"; + private String url = "https://sapes4.sapdevcenter.com/sap/opu/odata/IWFND/RMTSAMPLEFLIGHT"; + private String command = "FlightCollection(carrid='AA',connid='0017',fldate=datetime'2016-04-20T00%3A00%3A00')"; + +The password is invalid. You would need to create an account at SAP +first to run the demo. 
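Under the covers the endpoint performs an HTTP GET against the service URL with the command appended to the path, authenticating with the basic scheme described above. A plain-JDK sketch of that request construction (the class and method names here are illustrative, not part of the component API):

```java
import java.util.Base64;

// Illustrative helpers mirroring roughly what the endpoint does:
// append the (already URL-encoded) OData command to the service URL
// and build an HTTP basic-auth header from the credentials.
class SapRequestSketch {

    static String requestUrl(String serviceUrl, String command) {
        return serviceUrl + "/" + command;
    }

    static String basicAuthHeader(String username, String password) {
        String token = Base64.getEncoder()
                .encodeToString((username + ":" + password).getBytes());
        return "Basic " + token;
    }
}
```

For the values above this yields the demo service URL with the `FlightCollection(...)` command appended as the request path.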
+
+The velocity template is used for formatting the response to a basic
+HTML page:
+
+    <html>
+      <body>
+      Flight information:
+
+      <p/>
+      <br/>Airline ID: $body["AirLineID"]
+      <br/>Aircraft Type: $body["AirCraftType"]
+      <br/>Departure city: $body["FlightDetails"]["DepartureCity"]
+      <br/>Departure airport: $body["FlightDetails"]["DepartureAirPort"]
+      <br/>Destination city: $body["FlightDetails"]["DestinationCity"]
+      <br/>Destination airport: $body["FlightDetails"]["DestinationAirPort"]
+      </body>
+    </html>
+
+When running the application, you get sample output:
+
+    Flight information:
+    Airline ID: AA
+    Aircraft Type: 747-400
+    Departure city: new york
+    Departure airport: JFK
+    Destination city: SAN FRANCISCO
+    Destination airport: SFO
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|url|URL to the SAP NetWeaver gateway server.||string|
+|flatternMap|If the JSON Map contains only a single entry, then flatten it by storing that single entry value as the message body.|true|boolean|
+|json|Whether to return data in JSON format. 
If this option is false, then XML is returned in Atom format.|true|boolean| +|jsonAsMap|To transform the JSON from a String to a Map in the message body.|true|boolean| +|password|Password for account.||string| +|username|Username for account.||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-scheduler.md b/camel-scheduler.md new file mode 100644 index 0000000000000000000000000000000000000000..caade8a5caa150962d69977ace94a0638b998f6c --- /dev/null +++ b/camel-scheduler.md @@ -0,0 +1,146 @@ +# Scheduler + +**Since Camel 2.15** + +**Only consumer is supported** + +The Scheduler component is used to generate message exchanges when a +scheduler fires. This component is similar to the +[Timer](#timer-component.adoc) component, but it offers more +functionality in terms of scheduling. Also, this component uses JDK +`ScheduledExecutorService`, whereas the timer uses a JDK `Timer`. + +You can only consume events from this endpoint. + +# URI format + + scheduler:name[?options] + +Where `name` is the name of the scheduler, which is created and shared +across endpoints. So if you use the same name for all your scheduler +endpoints, only one scheduler thread pool and thread will be used - but +you can configure the thread pool to allow more concurrent threads. + +**Note:** The IN body of the generated exchange is `null`. So +`exchange.getIn().getBody()` returns `null`. 
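As noted above, this component builds on the JDK `ScheduledExecutorService`. A minimal plain-JDK sketch of the fixed-delay scheduling it relies on (class and method names are illustrative, not part of Camel):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of the JDK facility the scheduler component uses:
// a small scheduled thread pool firing a task with a fixed delay.
class SchedulerSketch {

    // Fires a task every 10 ms until it has run at least n times,
    // then shuts the pool down and reports the fire count.
    static int fireAtLeast(int n) {
        AtomicInteger fired = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(n);
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);
        pool.scheduleWithFixedDelay(() -> {
            fired.incrementAndGet();   // the component would create and route an Exchange here
            done.countDown();
        }, 0, 10, TimeUnit.MILLISECONDS);
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        pool.shutdownNow();
        return fired.get();
    }
}
```

Sharing one scheduler name across endpoints corresponds to sharing one such pool; the `poolSize` option controls its number of core threads.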
+
+# More information
+
+This component is a scheduler [Polling
+Consumer](http://camel.apache.org/polling-consumer.html) where you can
+find more information about the options above, and examples at the
+[Polling Consumer](http://camel.apache.org/polling-consumer.html) page.
+
+# Exchange Properties
+
+When the timer is fired, it adds the following information as properties
+to the `Exchange`:
+
+|Name|Type|Description|
+|---|---|---|
+|Exchange.TIMER_NAME|String|The value of the name option.|
+|Exchange.TIMER_FIRED_TIME|Date|The time when the consumer fired.|
+
+# Sample
+
+To set up a route that generates an event every 60 seconds:
+
+    from("scheduler://foo?delay=60000").to("bean:myBean?method=someMethodName");
+
+The above route will generate an event and then invoke the
+`someMethodName` method on the bean called `myBean` in the Registry such
+as JNDI or Spring.
+
+And the route in Spring DSL:
+
+    <route>
+      <from uri="scheduler://foo?delay=60000"/>
+      <to uri="bean:myBean?method=someMethodName"/>
+    </route>
+
+# Forcing the scheduler to trigger immediately when completed
+
+To let the scheduler trigger as soon as the previous task is complete,
+you can set the option `greedy=true`. But beware that the scheduler will
+then keep firing all the time. So use this with caution.
+
+# Forcing the scheduler to be idle
+
+There can be use cases where you want the scheduler to trigger and be
+greedy. But sometimes you want to "tell the scheduler" that there was no
+task to poll, so the scheduler can change into idle mode using the
+backoff options. To do this, you would need to set a property on the
+exchange with the key `Exchange.SCHEDULER_POLLED_MESSAGES` to a boolean
+value of false. This will cause the consumer to indicate that there were
+no messages polled.
+
+Otherwise, the consumer will by default return 1 message polled to the
+scheduler every time it has completed processing an exchange.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|includeMetadata|Whether to include metadata in the exchange such as fired time, timer name, timer count etc.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|poolSize|Number of core threads in the thread pool used by the scheduling thread pool. Is by default using a single thread|1|integer| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|name|The name of the scheduler||string| +|includeMetadata|Whether to include metadata in the exchange such as fired time, timer name, timer count etc.|false|boolean| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object|
+|synchronous|Sets whether synchronous processing should be strictly used|false|boolean|
+|backoffErrorThreshold|The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick-in.||integer|
+|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier should kick-in.||integer|
+|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. 
When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|poolSize|Number of core threads in the thread pool used by the scheduling thread pool. Is by default using a single thread|1|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. 
See ScheduledExecutorService in JDK for details.|true|boolean|

diff --git a/camel-schematron.md b/camel-schematron.md
new file mode 100644
index 0000000000000000000000000000000000000000..d4ccd20855281028e2004efa06152d3756296cd6
--- /dev/null
+++ b/camel-schematron.md
@@ -0,0 +1,173 @@

# Schematron

**Since Camel 2.15**

**Only producer is supported**

[Schematron](http://www.schematron.com/index.html) is an XML-based language for validating XML instance documents. It is used to make assertions about data in an XML document, and it is also used to express operational and business rules. Schematron is an [ISO Standard](http://standards.iso.org/ittf/PubliclyAvailableStandards/index.html). The schematron component uses the leading [implementation](http://www.schematron.com/implementation.html) of ISO schematron, which is XSLT based. The schematron rules are run through [four XSLT pipelines](http://www.schematron.com/implementation.html), which generate a final XSLT that is used as the basis for running the assertions against the XML document. The component loads the Schematron rules only once, when the endpoint starts, to minimize the overhead of instantiating the Java Templates object that represents the rules.

# URI format

    schematron://path?[options]

# Headers
|Name|Description|Type|In/Out|
|---|---|---|---|
|CamelSchematronValidationStatus|The schematron validation status: SUCCESS / FAILED|String|IN|
|CamelSchematronValidationReport|The schematron report body in XML format. See an example below|String|IN|

# URI and path syntax

The following example shows how to invoke the schematron processor in Java DSL. The schematron rules file is sourced from the class path:

    from("direct:start").to("schematron://sch/schematron.sch").to("mock:result")

The following example shows how to invoke the schematron processor in XML DSL. The schematron rules file is sourced from the file system:

    <route>
       <from uri="direct:start" />
       <to uri="schematron:///usr/local/sch/schematron.sch" />
       <choice>
          <when>
             <simple>${in.header.CamelSchematronValidationStatus} == 'SUCCESS'</simple>
             <to uri="mock:success" />
          </when>
          <otherwise>
             <log message="Failed schematron validation" />
             <setBody>
                <header>CamelSchematronValidationReport</header>
             </setBody>
             <to uri="mock:failure" />
          </otherwise>
       </choice>
    </route>
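The status header can also be used for routing in Java DSL. Below is a minimal sketch; the rules path and the mock endpoint names are illustrative placeholders, not part of the component:

```java
// Inside a RouteBuilder.configure() method: validate the body against the
// schematron rules, then branch on the CamelSchematronValidationStatus header.
// The rules path and the mock endpoints are illustrative placeholders.
from("direct:start")
    .to("schematron://sch/schematron.sch")
    .choice()
        .when(simple("${header.CamelSchematronValidationStatus} == 'SUCCESS'"))
            .to("mock:success")
        .otherwise()
            // make the XML validation report the message body
            .setBody(header("CamelSchematronValidationReport"))
            .to("mock:failure");
```

Note that with `abort=false` (the default) a failed validation does not throw; the component only sets the status and report headers, which is what makes this content-based routing possible.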
**Where to store schematron rules?**

Schematron rules can change with business requirements; as such, it is recommended to store these rules somewhere in the file system. When the schematron component endpoint is started, the rules are compiled into XSLT as a Java Templates object. This is done only once, to minimize the overhead of instantiating the Java Templates object, which can be an expensive operation for a large set of rules, given that the process goes through four pipelines of [XSLT transformations](http://www.schematron.com/implementation.html). So if you store the rules in the file system, in the event of an update all you need to do is restart the route or the component. There is no harm in storing these rules in the class path, but you will have to build and deploy the component to pick up the changes.

# Schematron rules and report samples

Here is an example of schematron rules:

    <?xml version="1.0" encoding="UTF-8"?>
    <schema xmlns="http://purl.oclc.org/dsdl/schematron">
       <title>Check Sections 12/07</title>
       <pattern name="Check sections">
          <rule context="section">
             <assert test="title">This section has no title</assert>
             <assert test="para">This section has no paragraphs</assert>
          </rule>
       </pattern>
    </schema>

Here is an example of a schematron report:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <svrl:schematron-output xmlns:svrl="http://purl.oclc.org/dsdl/svrl"
                            xmlns:iso="http://purl.oclc.org/dsdl/schematron">
       <svrl:fired-rule context="chapter"/>
       <svrl:failed-assert test="title" location="/doc[1]/chapter[1]">
          <svrl:text>A chapter should have a title</svrl:text>
       </svrl:failed-assert>
       <svrl:fired-rule context="chapter"/>
       <svrl:failed-assert test="title" location="/doc[1]/chapter[2]">
          <svrl:text>A chapter should have a title</svrl:text>
       </svrl:failed-assert>
    </svrl:schematron-output>

**Useful Links and resources**

- [Introduction to Schematron](http://www.mulberrytech.com/papers/schematron-Philly.pdf) by Mulberry Technologies. An excellent document in PDF to get you started on Schematron.

- [Schematron official site](http://www.schematron.com). This contains links to other resources.

## Component Configurations

|Name|Description|Default|Type|
|---|---|---|---|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the message.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|

## Endpoint Configurations

|Name|Description|Default|Type|
|---|---|---|---|
|path|The path to the schematron rules file. Can either be a location in the class path or in the file system.||string|
|abort|Flag to abort the route and throw a schematron validation exception.|false|boolean|
|rules|To use the given schematron rules instead of loading them from the path.||object|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the message.|false|boolean|
|uriResolver|Set the URIResolver to be used for resolving schematron includes in the rules file.||object|

diff --git a/camel-scp.md b/camel-scp.md
new file mode 100644
index 0000000000000000000000000000000000000000..4a51818142c805323bbd0b824cc199da7933c950
--- /dev/null
+++ b/camel-scp.md
@@ -0,0 +1,78 @@

# Scp

**Since Camel 2.10**

**Only producer is supported**

The Camel Jsch component supports the [SCP protocol](http://en.wikipedia.org/wiki/Secure_copy) using the Client API of the [Jsch](http://www.jcraft.com/jsch/) project. Jsch is already used in Camel by the [FTP](#ftp-component.adoc) component for the **sftp:** protocol.

Maven users will need to add the following dependency to their `pom.xml` for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-jsch</artifactId>
        <version>x.x.x</version>
    </dependency>

# URI format

    scp://host[:port]/destination[?options]

The file name can be specified either in the path part of the URI or as a "CamelFileName" header on the message (`Exchange.FILE_NAME` if used in code).

# Limitations

Currently, camel-jsch only supports a [Producer](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Producer.html) (i.e., copy files to another host).

## Component Configurations

|Name|Description|Default|Type|
|---|---|---|---|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the message.|false|boolean|
|verboseLogging|JSCH has verbose logging enabled out of the box. Therefore we turn the logging down to DEBUG level by default. Setting this option to true turns the verbose logging back on.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component.|true|boolean|
|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean|

## Endpoint Configurations

|Name|Description|Default|Type|
|---|---|---|---|
|host|Hostname of the FTP server||string|
|port|Port of the FTP server||integer|
|directoryName|The starting directory||string|
|chmod|Allows you to set chmod on the stored file. For example chmod=664.|664|string|
|disconnect|Whether or not to disconnect from the remote FTP server right after use. Disconnect will only disconnect the current connection to the FTP server. If you have a consumer which you want to stop, then you need to stop the consumer/route instead.|false|boolean|
|checksumFileAlgorithm|If provided, then Camel will write a checksum file when the original file has been written. The checksum file will contain the checksum created with the provided algorithm for the original file.
The checksum file will always be written in the same folder as the original file.||string|
|fileName|Use Expression such as File Language to dynamically set the filename. For consumers, it's used as a filename filter. For producers, it's used to evaluate the filename to write. If an expression is set, it takes precedence over the CamelFileName header. (Note: The header itself can also be an Expression). The expression options support both String and Expression types. If the expression is a String type, it is always evaluated using the File Language. If the expression is an Expression type, the specified Expression type is used - this allows you, for instance, to use OGNL expressions. For the consumer, you can use it to filter filenames, so you can for instance consume today's file using the File Language syntax: mydata-${date:now:yyyyMMdd}.txt. The producers support the CamelOverruleFileName header which takes precedence over any existing CamelFileName header; the CamelOverruleFileName is a header that is used only once, and makes it easier as this avoids having to temporarily store CamelFileName and restore it afterwards.||string|
|flatten|Flatten is used to flatten the file name path to strip any leading paths, so it's just the file name. This allows you to consume recursively into sub-directories, but when you e.g. write the files to another directory they will be written in a single directory. Setting this to true on the producer enforces that any file name in the CamelFileName header will be stripped of any leading paths.|false|boolean|
|jailStartingDirectory|Used for jailing (restricting) writing files to the starting directory (and sub-directories) only. This is enabled by default to not allow Camel to write files to outside directories (to be more secure out of the box).
You can turn this off to allow writing files to directories outside the starting directory, such as parent or root folders.|true|boolean|
|strictHostKeyChecking|Sets whether to use strict host key checking. Possible values are: no, yes|no|string|
|allowNullBody|Used to specify if a null body is allowed during file writing. If set to true then an empty file will be created. When set to false, attempting to send a null body to the file component throws a GenericFileWriteException of 'Cannot write null body to file.'. If the fileExist option is set to 'Override', then the file will be truncated, and if set to append the file will remain unchanged.|false|boolean|
|disconnectOnBatchComplete|Whether or not to disconnect from the remote FTP server right after a Batch upload is complete. disconnectOnBatchComplete will only disconnect the current connection to the FTP server.|false|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the message.|false|boolean|
|moveExistingFileStrategy|Strategy (Custom Strategy) used to move a file with a special naming token when fileExist=Move is configured. By default, a built-in implementation is used if no custom strategy is provided.||object|
|connectTimeout|Sets the connect timeout for waiting for a connection to be established. Used by both FTPClient and JSCH.|10000|duration|
|soTimeout|Sets the SO timeout for FTP and FTPS. This is the SocketOptions.SO\_TIMEOUT value in millis.
The recommended option is to set this to 300000 so as not to have a hanging connection. On SFTP this option is set as the timeout on the JSCH Session instance.|300000|duration|
|timeout|Sets the data timeout for waiting for a reply. Used only by FTPClient.|30000|duration|
|knownHostsFile|Sets the known\_hosts file, so that the jsch endpoint can do host key verification. You can prefix with classpath: to load the file from the classpath instead of the file system.||string|
|password|Password to use for login||string|
|preferredAuthentications|Set a comma separated list of authentications that will be used in order of preference. Possible authentication methods are defined by JCraft JSCH. Some examples include: gssapi-with-mic,publickey,keyboard-interactive,password. If not specified the JSCH and/or system defaults will be used.||string|
|privateKeyBytes|Set the private key bytes so that the endpoint can do private key verification. This must be used only if privateKeyFile wasn't set. Otherwise the file will have the priority.||string|
|privateKeyFile|Set the private key file so that the endpoint can do private key verification. You can prefix with classpath: to load the file from the classpath instead of the file system.||string|
|privateKeyFilePassphrase|Set the private key file passphrase so that the endpoint can do private key verification.||string|
|username|Username to use for login||string|
|useUserKnownHostsFile|If knownHostsFile has not been explicitly configured, then use the host file from System.getProperty(user.home)/.ssh/known\_hosts|true|boolean|
If not specified the default list from JSCH will be used.||string|

diff --git a/camel-seda.md b/camel-seda.md
new file mode 100644
index 0000000000000000000000000000000000000000..0aa7868bf30b168ba6fb790e039e15b2965f21ed
--- /dev/null
+++ b/camel-seda.md
@@ -0,0 +1,236 @@

# Seda

**Since Camel 1.1**

**Both producer and consumer are supported**

The SEDA component provides asynchronous [SEDA](https://en.wikipedia.org/wiki/Staged_event-driven_architecture) behavior, so that messages are exchanged on a [BlockingQueue](http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/BlockingQueue.html) and consumers are invoked in a separate thread from the producer.

Note that queues are only visible within the same CamelContext.

This component does not implement any kind of persistence or recovery if the JVM terminates while messages are yet to be processed. If you need persistence, reliability or distributed SEDA, try using [JMS](#jms-component.adoc).

**Synchronous**

The [Direct](#direct-component.adoc) component provides synchronous invocation of any consumers when a producer sends a message exchange.

# URI format

    seda:someId[?options]

Where *someId* can be any string that uniquely identifies the endpoint within the current CamelContext.

# Choosing BlockingQueue implementation

By default, the SEDA component always instantiates a `LinkedBlockingQueue`, but you can use a different implementation: you can reference your own `BlockingQueue` implementation, in which case the size option is not used:

    <bean id="arrayQueue" class="java.util.concurrent.ArrayBlockingQueue">
       <constructor-arg index="0" value="10"/><!-- size -->
       <constructor-arg index="1" value="true"/><!-- fairness -->
    </bean>

    seda:array?queue=#arrayQueue

You can also reference a `BlockingQueueFactory` implementation.
Three implementations are provided:

- `LinkedBlockingQueueFactory`

- `ArrayBlockingQueueFactory`

- `PriorityBlockingQueueFactory`

    <bean id="priorityQueueFactory" class="org.apache.camel.component.seda.PriorityBlockingQueueFactory">
       <property name="comparator">
          <bean class="org.apache.camel.demo.MyExchangeComparator" />
       </property>
    </bean>

    seda:priority?queueFactory=#priorityQueueFactory&size=100

# Use of Request Reply

The [SEDA](#seda-component.adoc) component supports using Request Reply, where the caller will wait for the Async route to complete. For instance:

    from("mina:tcp://0.0.0.0:9876?textline=true&sync=true").to("seda:input");

    from("seda:input").to("bean:processInput").to("bean:createResponse");

In the route above, we have a TCP listener on port 9876 that accepts incoming requests. The request is routed to the `seda:input` queue. As it is a Request Reply message, we wait for the response. When the consumer on the `seda:input` queue is complete, it copies the response to the original message response.

# Concurrent consumers

By default, the SEDA endpoint uses a single consumer thread, but you can configure it to use concurrent consumer threads. So instead of thread pools, you can use:

    from("seda:stageName?concurrentConsumers=5").process(...)

As for the difference between the two, note a *thread pool* can increase/shrink dynamically at runtime depending on load, whereas the number of concurrent consumers is always fixed.

# Thread pools

Be aware that adding a thread pool to a SEDA endpoint by doing something like:

    from("seda:stageName").threads(5).process(...)

can wind up with two `BlockingQueues`: one from the SEDA endpoint, and one from the work queue of the thread pool, which may not be what you want. Instead, you might wish to configure a [Direct](#direct-component.adoc) endpoint with a thread pool, which can process messages both synchronously and asynchronously. For example:

    from("direct:stageName").threads(5).process(...)

You can also directly configure the number of threads that process messages on a SEDA endpoint using the `concurrentConsumers` option.
+ +# Sample + +In the route below, we use the SEDA queue to send the request to this +async queue. As such, it is able to send a *fire-and-forget* message for +further processing in another thread, and return a constant reply in +this thread to the original caller. + +We send a *Hello World* message and expect the reply to be *OK*. + + @Test + public void testSendAsync() throws Exception { + MockEndpoint mock = getMockEndpoint("mock:result"); + mock.expectedBodiesReceived("Hello World"); + + // START SNIPPET: e2 + Object out = template.requestBody("direct:start", "Hello World"); + assertEquals("OK", out); + // END SNIPPET: e2 + + MockEndpoint.assertIsSatisfied(context); + } + + @Override + protected RouteBuilder createRouteBuilder() throws Exception { + return new RouteBuilder() { + // START SNIPPET: e1 + public void configure() throws Exception { + from("direct:start") + // send it to the seda queue that is async + .to("seda:next") + // return a constant response + .transform(constant("OK")); + + from("seda:next").to("mock:result"); + } + // END SNIPPET: e1 + }; + } + +The *Hello World* message will be consumed from the SEDA queue from +another thread for further processing. Since this is from a unit test, +it will be sent to a `mock` endpoint where we can do assertions in the +unit test. + +# Using multipleConsumers + +In this example, we have defined two consumers. 
    @Test
    public void testSameOptionsProducerStillOkay() throws Exception {
        getMockEndpoint("mock:foo").expectedBodiesReceived("Hello World");
        getMockEndpoint("mock:bar").expectedBodiesReceived("Hello World");

        template.sendBody("seda:foo", "Hello World");

        MockEndpoint.assertIsSatisfied(context);
    }

    @Override
    protected RouteBuilder createRouteBuilder() throws Exception {
        return new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from("seda:foo?multipleConsumers=true").routeId("foo").to("mock:foo");
                from("seda:foo?multipleConsumers=true").routeId("bar").to("mock:bar");
            }
        };
    }

Since we have specified `multipleConsumers=true` on the seda `foo` endpoint, we can have those two consumers receive their own copy of the message, as a kind of *publish/subscribe* style messaging.

As the beans are part of a unit test, they simply send the message to a mock endpoint.

# Extracting queue information

If needed, information such as queue size, etc. can be obtained without using JMX in this fashion:

    SedaEndpoint seda = context.getEndpoint("seda:xxxx", SedaEndpoint.class);
    int size = seda.getExchanges().size();

## Component Configurations

|Name|Description|Default|Type|
|---|---|---|---|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible in future releases.
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
|concurrentConsumers|Sets the default number of concurrent threads processing exchanges.|1|integer|
|defaultPollTimeout|The timeout (in milliseconds) used when polling. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown.|1000|integer|
|defaultBlockWhenFull|Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will instead block and wait until the message can be accepted.|false|boolean|
|defaultDiscardWhenFull|Whether a thread that sends messages to a full SEDA queue will be discarded. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will give up sending and continue, meaning that the message was not sent to the SEDA queue.|false|boolean|
|defaultOfferTimeout|Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, a configured timeout can be added to the block case, utilizing the .offer(timeout) method of the underlying java queue.||integer|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|defaultQueueFactory|Sets the default queue factory.||object| +|queueSize|Sets the default maximum capacity of the SEDA queue (i.e., the number of messages it can hold).|1000|integer| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|name|Name of queue||string| +|size|The maximum capacity of the SEDA queue (i.e., the number of messages it can hold). Will by default use the defaultSize set on the SEDA component.|1000|integer| +|concurrentConsumers|Number of concurrent threads processing exchanges.|1|integer| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice that if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored.||object|
|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
|limitConcurrentConsumers|Whether to limit the number of concurrentConsumers to the maximum of 500. By default, an exception will be thrown if an endpoint is configured with a greater number. You can disable that check by turning this option off.|true|boolean|
|multipleConsumers|Specifies whether multiple consumers are allowed. If enabled, you can use SEDA for Publish-Subscribe messaging. That is, you can send a message to the SEDA queue and have each consumer receive a copy of the message. When enabled, this option should be specified on every consumer endpoint.|false|boolean|
|pollTimeout|The timeout (in milliseconds) used when polling. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown.|1000|integer|
|purgeWhenStopping|Whether to purge the task queue when stopping the consumer/route. This allows a faster stop, as any pending messages on the queue are discarded.|false|boolean|
|blockWhenFull|Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full.
By enabling this option, the calling thread will instead block and wait until the message can be accepted.|false|boolean| +|discardIfNoConsumers|Whether the producer should discard the message (do not add the message to the queue), when sending to a queue with no active consumers. Only one of the options discardIfNoConsumers and failIfNoConsumers can be enabled at the same time.|false|boolean| +|discardWhenFull|Whether a thread that sends messages to a full SEDA queue will be discarded. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will give up sending and continue, meaning that the message was not sent to the SEDA queue.|false|boolean| +|failIfNoConsumers|Whether the producer should fail by throwing an exception, when sending to a queue with no active consumers. Only one of the options discardIfNoConsumers and failIfNoConsumers can be enabled at the same time.|false|boolean| +|offerTimeout|Offer timeout (in milliseconds) can be added to the block case when queue is full. You can disable timeout by using 0 or a negative value.||duration| +|timeout|Timeout (in milliseconds) before a SEDA producer will stop waiting for an asynchronous task to complete. You can disable timeout by using 0 or a negative value.|30000|duration| +|waitForTaskToComplete|Option to specify whether the caller should wait for the async task to complete or not before continuing. The following three options are supported: Always, Never or IfReplyExpected. The first two values are self-explanatory. The last value, IfReplyExpected, will only wait if the message is Request Reply based. The default option is IfReplyExpected.|IfReplyExpected|object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|queue|Define the queue instance which will be used by the endpoint||object| diff --git a/camel-service.md b/camel-service.md new file mode 100644 index 0000000000000000000000000000000000000000..40e94879dfcf728c88e8d5f42cb2e1922e6f6b27 --- /dev/null +++ b/camel-service.md @@ -0,0 +1,39 @@ +# Service + +**Since Camel 2.22** + +**Only consumer is supported** + +# URI format + + service:serviceName:endpoint[?options] + +# Implementations + +Camel provides the following ServiceRegistry implementations: + +- camel-consul + +- camel-zookeeper + +- camel-spring-cloud + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|service|Inject the service to use.||object| +|serviceSelector|Inject the service selector used to lookup the ServiceRegistry to use.||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|delegateUri|The endpoint uri to expose as service||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
diff --git a/camel-servicenow.md b/camel-servicenow.md
new file mode 100644
index 0000000000000000000000000000000000000000..61bce6ba4f3d411a14022fac8423362b3d95ffc7
--- /dev/null
+++ b/camel-servicenow.md
@@ -0,0 +1,457 @@
+# Servicenow
+
+**Since Camel 2.18**
+
+**Only producer is supported**
+
+The ServiceNow component provides access to the ServiceNow platform
+through its REST API.
+
+The component supports multiple versions of the ServiceNow platform,
+defaulting to Helsinki.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-servicenow</artifactId>
+        <version>${camel-version}</version>
+    </dependency>
+
+# URI format
+
+    servicenow://instanceName?[options]
+
+**API Mapping**
+
+|CamelServiceNowResource|CamelServiceNowAction|Method|API URI|
+|---|---|---|---|
+|TABLE|RETRIEVE|GET|/api/now/v1/table/{table_name}/{sys_id}|
+|TABLE|CREATE|POST|/api/now/v1/table/{table_name}|
+|TABLE|MODIFY|PUT|/api/now/v1/table/{table_name}/{sys_id}|
+|TABLE|DELETE|DELETE|/api/now/v1/table/{table_name}/{sys_id}|
+|TABLE|UPDATE|PATCH|/api/now/v1/table/{table_name}/{sys_id}|
+|AGGREGATE|RETRIEVE|GET|/api/now/v1/stats/{table_name}|
+|IMPORT|RETRIEVE|GET|/api/now/import/{table_name}/{sys_id}|
+|IMPORT|CREATE|POST|/api/now/import/{table_name}|
+
+[Fuji REST API
+Documentation](http://wiki.servicenow.com/index.php?title=REST_API#Available_APIs)
+
+**API Mapping**
+
+|CamelServiceNowResource|CamelServiceNowAction|CamelServiceNowActionSubject|Method|API URI|
+|---|---|---|---|---|
+|TABLE|RETRIEVE||GET|/api/now/v1/table/{table_name}/{sys_id}|
+|TABLE|CREATE||POST|/api/now/v1/table/{table_name}|
+|TABLE|MODIFY||PUT|/api/now/v1/table/{table_name}/{sys_id}|
+|TABLE|DELETE||DELETE|/api/now/v1/table/{table_name}/{sys_id}|
+|TABLE|UPDATE||PATCH|/api/now/v1/table/{table_name}/{sys_id}|
+|AGGREGATE|RETRIEVE||GET|/api/now/v1/stats/{table_name}|
+|IMPORT|RETRIEVE||GET|/api/now/import/{table_name}/{sys_id}|
+|IMPORT|CREATE||POST|/api/now/import/{table_name}|
+|ATTACHMENT|RETRIEVE||GET|/api/now/attachment/{sys_id}|
+|ATTACHMENT|CONTENT||GET|/api/now/attachment/{sys_id}/file|
+|ATTACHMENT|UPLOAD||POST|/api/now/attachment/file|
+|ATTACHMENT|DELETE||DELETE|/api/now/attachment/{sys_id}|
+|SCORECARDS|RETRIEVE|PERFORMANCE_ANALYTICS|GET|/api/now/pa/scorecards|
+|MISC|RETRIEVE|USER_ROLE_INHERITANCE|GET|/api/global/user_role_inheritance|
+|MISC|CREATE|IDENTIFY_RECONCILE|POST|/api/now/identifyreconcile|
+|SERVICE_CATALOG|RETRIEVE||GET|/sn_sc/servicecatalog/catalogs/{sys_id}|
+|SERVICE_CATALOG|RETRIEVE|CATEGORIES|GET|/sn_sc/servicecatalog/catalogs/{sys_id}/categories|
+|SERVICE_CATALOG_ITEMS|RETRIEVE||GET|/sn_sc/servicecatalog/items/{sys_id}|
+|SERVICE_CATALOG_ITEMS|RETRIEVE|SUBMIT_GUIDE|POST|/sn_sc/servicecatalog/items/{sys_id}/submit_guide|
+|SERVICE_CATALOG_ITEMS|RETRIEVE|CHECKOUT_GUIDE|POST|/sn_sc/servicecatalog/items/{sys_id}/checkout_guide|
+|SERVICE_CATALOG_ITEMS|CREATE|SUBJECT_CART|POST|/sn_sc/servicecatalog/items/{sys_id}/add_to_cart|
+|SERVICE_CATALOG_ITEMS|CREATE|SUBJECT_PRODUCER|POST|/sn_sc/servicecatalog/items/{sys_id}/submit_producer|
+|SERVICE_CATALOG_CARTS|RETRIEVE||GET|/sn_sc/servicecatalog/cart|
+|SERVICE_CATALOG_CARTS|RETRIEVE|DELIVERY_ADDRESS|GET|/sn_sc/servicecatalog/cart/delivery_address/{user_id}|
+|SERVICE_CATALOG_CARTS|RETRIEVE|CHECKOUT|POST|/sn_sc/servicecatalog/cart/checkout|
+|SERVICE_CATALOG_CARTS|UPDATE||POST|/sn_sc/servicecatalog/cart/{cart_item_id}|
+|SERVICE_CATALOG_CARTS|UPDATE|CHECKOUT|POST|/sn_sc/servicecatalog/cart/submit_order|
+|SERVICE_CATALOG_CARTS|DELETE||DELETE|/sn_sc/servicecatalog/cart/{sys_id}/empty|
+|SERVICE_CATALOG_CATEGORIES|RETRIEVE||GET|/sn_sc/servicecatalog/categories/{sys_id}|
+
+[Helsinki REST API
+Documentation](https://docs.servicenow.com/bundle/helsinki-servicenow-platform/page/integrate/inbound-rest/reference/r_RESTResources.html)
+
+# Usage examples
+
+**Retrieve 10 Incidents**
+
+    context.addRoutes(new RouteBuilder() {
+        public void configure() {
+            from("direct:servicenow")
+                .to("servicenow:{{env:SERVICENOW_INSTANCE}}"
+                    + "?userName={{env:SERVICENOW_USERNAME}}"
+                    + "&password={{env:SERVICENOW_PASSWORD}}"
+                    + "&oauthClientId={{env:SERVICENOW_OAUTH2_CLIENT_ID}}"
+                    + "&oauthClientSecret={{env:SERVICENOW_OAUTH2_CLIENT_SECRET}}")
+                .to("mock:servicenow");
+        }
+    });
+
+    FluentProducerTemplate.on(context)
+        .withHeader(ServiceNowConstants.RESOURCE, "table")
+        .withHeader(ServiceNowConstants.ACTION, ServiceNowConstants.ACTION_RETRIEVE)
+        .withHeader(ServiceNowConstants.SYSPARM_LIMIT.getId(), "10")
+        .withHeader(ServiceNowConstants.TABLE, "incident")
+        .withHeader(ServiceNowConstants.MODEL, Incident.class)
+        .to("direct:servicenow")
+        .send();
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|configuration|Component configuration||object|
+|display|Set this parameter to true to return only scorecards where the indicator Display field is selected. Set this parameter to all to return scorecards with any Display field value. This parameter is true by default.|true|string|
+|displayValue|Return the display value (true), actual value (false), or both (all) for reference fields (default: false)|false|string|
+|excludeReferenceLink|True to exclude Table API links for reference fields (default: false)||boolean|
+|favorites|Set this parameter to true to return only scorecards that are favorites of the querying user.||boolean|
+|includeAggregates|Set this parameter to true to always return all available aggregates for an indicator, including when an aggregate has already been applied.
If a value is not specified, this parameter defaults to false and returns no aggregates.||boolean| +|includeAvailableAggregates|Set this parameter to true to return all available aggregates for an indicator when no aggregate has been applied. If a value is not specified, this parameter defaults to false and returns no aggregates.||boolean| +|includeAvailableBreakdowns|Set this parameter to true to return all available breakdowns for an indicator. If a value is not specified, this parameter defaults to false and returns no breakdowns.||boolean| +|includeScoreNotes|Set this parameter to true to return all notes associated with the score. The note element contains the note text as well as the author and timestamp when the note was added.||boolean| +|includeScores|Set this parameter to true to return all scores for a scorecard. If a value is not specified, this parameter defaults to false and returns only the most recent score value.||boolean| +|inputDisplayValue|True to set raw value of input fields (default: false)||boolean| +|key|Set this parameter to true to return only scorecards for key indicators.||boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|models|Defines both request and response models||object| +|perPage|Enter the maximum number of scorecards each query can return. 
By default this value is 10, and the maximum is 100.|10|integer| +|release|The ServiceNow release to target, default to Helsinki See https://docs.servicenow.com|HELSINKI|object| +|requestModels|Defines the request model||object| +|resource|The default resource, can be overridden by header CamelServiceNowResource||string| +|responseModels|Defines the response model||object| +|sortBy|Specify the value to use when sorting results. By default, queries sort records by value.||string| +|sortDir|Specify the sort direction, ascending or descending. By default, queries sort records in descending order. Use sysparm\_sortdir=asc to sort in ascending order.||string| +|suppressAutoSysField|True to suppress auto generation of system fields (default: false)||boolean| +|suppressPaginationHeader|Set this value to true to remove the Link header from the response. The Link header allows you to request additional pages of data when the number of records matching your query exceeds the query limit||boolean| +|table|The default table, can be overridden by header CamelServiceNowTable||string| +|target|Set this parameter to true to return only scorecards that have a target.||boolean| +|topLevelOnly|Gets only those categories whose parent is a catalog.||boolean| +|apiVersion|The ServiceNow REST API version, default latest||string| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|dateFormat|The date format used for Json serialization/deserialization|yyyy-MM-dd|string| +|dateTimeFormat|The date-time format used for Json serialization/deserialization|yyyy-MM-dd HH:mm:ss|string| +|httpClientPolicy|To configure http-client||object| +|instanceName|The ServiceNow instance name||string| +|mapper|Sets Jackson's ObjectMapper to use for request/reply||object| +|proxyAuthorizationPolicy|To configure proxy authentication||object| +|retrieveTargetRecordOnImport|Set this parameter to true to retrieve the target record when using import set api. The import set result is then replaced by the target record|false|boolean| +|timeFormat|The time format used for Json serialization/deserialization|HH:mm:ss|string| +|proxyHost|The proxy host name||string| +|proxyPort|The proxy port number||integer| +|apiUrl|The ServiceNow REST API url||string| +|oauthClientId|OAuth2 ClientID||string| +|oauthClientSecret|OAuth2 ClientSecret||string| +|oauthTokenUrl|OAuth token Url||string| +|password|ServiceNow account password, MUST be provided||string| +|proxyPassword|Password for proxy authentication||string| +|proxyUserName|Username for proxy authentication||string| +|sslContextParameters|To configure security using SSLContextParameters. See http://camel.apache.org/camel-configuration-utilities.html||object| +|useGlobalSslContextParameters|Enable usage of global SSL context parameters.|false|boolean| +|userName|ServiceNow user account name, MUST be provided||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|instanceName|The ServiceNow instance name||string| +|display|Set this parameter to true to return only scorecards where the indicator Display field is selected. Set this parameter to all to return scorecards with any Display field value. 
This parameter is true by default.|true|string| +|displayValue|Return the display value (true), actual value (false), or both (all) for reference fields (default: false)|false|string| +|excludeReferenceLink|True to exclude Table API links for reference fields (default: false)||boolean| +|favorites|Set this parameter to true to return only scorecards that are favorites of the querying user.||boolean| +|includeAggregates|Set this parameter to true to always return all available aggregates for an indicator, including when an aggregate has already been applied. If a value is not specified, this parameter defaults to false and returns no aggregates.||boolean| +|includeAvailableAggregates|Set this parameter to true to return all available aggregates for an indicator when no aggregate has been applied. If a value is not specified, this parameter defaults to false and returns no aggregates.||boolean| +|includeAvailableBreakdowns|Set this parameter to true to return all available breakdowns for an indicator. If a value is not specified, this parameter defaults to false and returns no breakdowns.||boolean| +|includeScoreNotes|Set this parameter to true to return all notes associated with the score. The note element contains the note text as well as the author and timestamp when the note was added.||boolean| +|includeScores|Set this parameter to true to return all scores for a scorecard. If a value is not specified, this parameter defaults to false and returns only the most recent score value.||boolean| +|inputDisplayValue|True to set raw value of input fields (default: false)||boolean| +|key|Set this parameter to true to return only scorecards for key indicators.||boolean| +|models|Defines both request and response models||object| +|perPage|Enter the maximum number of scorecards each query can return. 
By default this value is 10, and the maximum is 100.|10|integer| +|release|The ServiceNow release to target, default to Helsinki See https://docs.servicenow.com|HELSINKI|object| +|requestModels|Defines the request model||object| +|resource|The default resource, can be overridden by header CamelServiceNowResource||string| +|responseModels|Defines the response model||object| +|sortBy|Specify the value to use when sorting results. By default, queries sort records by value.||string| +|sortDir|Specify the sort direction, ascending or descending. By default, queries sort records in descending order. Use sysparm\_sortdir=asc to sort in ascending order.||string| +|suppressAutoSysField|True to suppress auto generation of system fields (default: false)||boolean| +|suppressPaginationHeader|Set this value to true to remove the Link header from the response. The Link header allows you to request additional pages of data when the number of records matching your query exceeds the query limit||boolean| +|table|The default table, can be overridden by header CamelServiceNowTable||string| +|target|Set this parameter to true to return only scorecards that have a target.||boolean| +|topLevelOnly|Gets only those categories whose parent is a catalog.||boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|apiVersion|The ServiceNow REST API version, default latest||string| +|dateFormat|The date format used for Json serialization/deserialization|yyyy-MM-dd|string| +|dateTimeFormat|The date-time format used for Json serialization/deserialization|yyyy-MM-dd HH:mm:ss|string| +|httpClientPolicy|To configure http-client||object| +|mapper|Sets Jackson's ObjectMapper to use for request/reply||object| +|proxyAuthorizationPolicy|To configure proxy authentication||object| +|retrieveTargetRecordOnImport|Set this parameter to true to retrieve the target record when using import set api. The import set result is then replaced by the target record|false|boolean| +|timeFormat|The time format used for Json serialization/deserialization|HH:mm:ss|string| +|proxyHost|The proxy host name||string| +|proxyPort|The proxy port number||integer| +|apiUrl|The ServiceNow REST API url||string| +|oauthClientId|OAuth2 ClientID||string| +|oauthClientSecret|OAuth2 ClientSecret||string| +|oauthTokenUrl|OAuth token Url||string| +|password|ServiceNow account password, MUST be provided||string| +|proxyPassword|Password for proxy authentication||string| +|proxyUserName|Username for proxy authentication||string| +|sslContextParameters|To configure security using SSLContextParameters. 
See http://camel.apache.org/camel-configuration-utilities.html||object|
+|userName|ServiceNow user account name, MUST be provided||string|
diff --git a/camel-servlet.md b/camel-servlet.md
new file mode 100644
index 0000000000000000000000000000000000000000..6815bf72a865ffc0ab2a2303c609f629d73ed68a
--- /dev/null
+++ b/camel-servlet.md
@@ -0,0 +1,248 @@
+# Servlet
+
+**Since Camel 2.0**
+
+**Only consumer is supported**
+
+The Servlet component provides HTTP-based endpoints for consuming HTTP
+requests that arrive at an HTTP endpoint bound to a published Servlet.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-servlet</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+**Stream**
+
+Servlet is stream-based, which means the input it receives is submitted
+to Camel as a stream. That means you will only be able to read the
+content of the stream **once**. If you find a situation where the
+message body appears to be empty, or you need to access the data
+multiple times (e.g., doing multicasting or redelivery error handling),
+you should use Stream caching or convert the message body to a `String`,
+which is safe to read multiple times.
+
+# URI format
+
+    servlet://relative_path[?options]
+
+# Message Headers
+
+Camel will apply the same Message Headers as the
+[HTTP](#http-component.adoc) component.
+
+Camel will also populate **all** `request.parameter` and
+`request.headers`. For example, if a client request has the URL
+[http://myserver/myserver?orderid=123](http://myserver/myserver?orderid=123), the exchange will contain a
+header named `orderid` with the value `123`.
+
+# Usage
+
+You can consume only `from` endpoints generated by the Servlet
+component. Therefore, it should be used only as input into your Camel
+routes. To issue HTTP requests against other HTTP endpoints, use the
+[HTTP Component](#http-component.adoc).
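The query-parameter-to-header mapping described under Message Headers can be pictured with a small sketch. This is illustrative only — the class and method names below are invented for the example, and Camel's real binding (`DefaultHttpBinding`) does considerably more work:

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class QueryParamHeaders {

    // Simplified sketch of how request parameters such as
    // "orderid=123" become individual message headers.
    static Map<String, String> toHeaders(String query) {
        Map<String, String> headers = new LinkedHashMap<>();
        for (String pair : query.split("&")) {
            int eq = pair.indexOf('=');
            headers.put(
                URLDecoder.decode(pair.substring(0, eq), StandardCharsets.UTF_8),
                URLDecoder.decode(pair.substring(eq + 1), StandardCharsets.UTF_8));
        }
        return headers;
    }

    public static void main(String[] args) {
        Map<String, String> headers = toHeaders("orderid=123&customer=Jane%20Doe");
        System.out.println(headers.get("orderid"));  // 123
        System.out.println(headers.get("customer")); // Jane Doe
    }
}
```

In a real route you would simply read the header, e.g. `message.getHeader("orderid", String.class)`.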
+
+# Example `CamelHttpTransportServlet` configuration
+
+## Camel Spring Boot / Camel Quarkus
+
+When running camel-servlet on the Spring Boot or Camel Quarkus runtimes,
+`CamelHttpTransportServlet` is configured for you automatically and is
+driven by configuration properties. Refer to the camel-servlet
+configuration documentation for these runtimes.
+
+## Servlet container / application server
+
+If you’re running Camel standalone on a Servlet container or application
+server, you can use `web.xml` to configure `CamelHttpTransportServlet`.
+
+For example, to define a route that exposes an HTTP service under the
+path `/services`:
+
+    <web-app>
+        <servlet>
+            <servlet-name>CamelServlet</servlet-name>
+            <servlet-class>org.apache.camel.component.servlet.CamelHttpTransportServlet</servlet-class>
+        </servlet>
+
+        <servlet-mapping>
+            <servlet-name>CamelServlet</servlet-name>
+            <url-pattern>/services/*</url-pattern>
+        </servlet-mapping>
+    </web-app>
+
+# Example route
+
+    from("servlet:hello").process(new Processor() {
+        public void process(Exchange exchange) throws Exception {
+            // Access HTTP headers sent by the client
+            Message message = exchange.getMessage();
+            String contentType = message.getHeader(Exchange.CONTENT_TYPE, String.class);
+            String httpUri = message.getHeader(Exchange.HTTP_URI, String.class);
+
+            // Set the response body
+            message.setBody("Got Content-Type: " + contentType + ", URI: " + httpUri);
+        }
+    });
+
+# Camel Servlet HTTP endpoint path
+
+The full path where the camel-servlet HTTP endpoint is published depends
+on:
+
+- The Servlet application context path
+
+- The configured Servlet mapping URL patterns
+
+- The camel-servlet endpoint URI context path
+
+For example, if the application context path is `/camel` and
+`CamelHttpTransportServlet` is configured with a URL mapping of
+`/services/*`, then a Camel route like `from("servlet:hello")` would be
+published at [http://localhost:8080/camel/services/hello](http://localhost:8080/camel/services/hello).
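The composition of those three parts can be sketched in plain Java. This is a minimal illustration (the class and method are invented for this example, not Camel API): context path, then the servlet URL mapping with its trailing `/*` removed, then the endpoint's own path.

```java
public class ServletPathExample {

    // Illustrative composition of the published endpoint path:
    // application context path + servlet URL mapping (minus the
    // trailing "/*") + the camel-servlet endpoint's context path.
    static String publishedPath(String contextPath, String urlMapping, String endpointPath) {
        String mappingPrefix = urlMapping.endsWith("/*")
                ? urlMapping.substring(0, urlMapping.length() - 2)
                : urlMapping;
        return contextPath + mappingPrefix + "/" + endpointPath;
    }

    public static void main(String[] args) {
        // from("servlet:hello") with context path /camel and mapping /services/*
        System.out.println(publishedPath("/camel", "/services/*", "hello"));
        // -> /camel/services/hello
    }
}
```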
+
+# Servlet asynchronous support
+
+To enable Camel to benefit from Servlet asynchronous support, you must
+enable the `async` boolean init parameter by setting it to `true`.
+
+By default, the servlet thread pool is used for exchange processing.
+However, to use a custom thread pool, you can configure an init
+parameter named `executorRef` with the String value set to the name of a
+bean bound to the Camel registry of type `Executor`. If no bean was
+found in the Camel registry, the Servlet component will attempt to fall
+back on an executor policy or default executor service.
+
+If you want to force exchange processing to wait in another container
+background thread, you can set the `forceAwait` boolean init parameter
+to `true`.
+
+On the Camel Quarkus runtime, these init parameters can be set via
+configuration properties. Refer to the Camel Quarkus Servlet extension
+documentation for more information.
+
+On other runtimes you can configure these parameters in `web.xml` as
+follows:
+
+    <web-app>
+        <servlet>
+            <servlet-name>CamelServlet</servlet-name>
+            <servlet-class>org.apache.camel.component.servlet.CamelHttpTransportServlet</servlet-class>
+            <init-param>
+                <param-name>async</param-name>
+                <param-value>true</param-value>
+            </init-param>
+            <init-param>
+                <param-name>executorRef</param-name>
+                <param-value>my-custom-thread-pool</param-value>
+            </init-param>
+        </servlet>
+
+        <servlet-mapping>
+            <servlet-name>CamelServlet</servlet-name>
+            <url-pattern>/services/*</url-pattern>
+        </servlet-mapping>
+    </web-app>
+
+# Camel JARs on an application server boot classpath
+
+If you deploy into an application server / servlet container and choose
+to have Camel JARs such as `camel-core`, `camel-servlet`, etc. on the
+boot classpath, then the servlet mapping list will be shared between
+multiple deployed Camel applications in the app server.
+
+Having Camel JARs on the boot classpath of the application server is not
+best practice.
+
+In this scenario, you **must** define a custom and unique servlet name
+in each of your Camel applications.
For example, in `web.xml`:
+
+    <web-app>
+        <servlet>
+            <servlet-name>MyServlet</servlet-name>
+            <servlet-class>org.apache.camel.component.servlet.CamelHttpTransportServlet</servlet-class>
+            <load-on-startup>1</load-on-startup>
+        </servlet>
+
+        <servlet-mapping>
+            <servlet-name>MyServlet</servlet-name>
+            <url-pattern>/*</url-pattern>
+        </servlet-mapping>
+    </web-app>
+
+In your Camel servlet endpoints, include the servlet name:
+
+    from("servlet://foo?servletName=MyServlet")
+
+Camel detects duplicate Servlet names and will fail to start the
+application. You can control and ignore such duplicates by setting the
+servlet init parameter `ignoreDuplicateServletName` to `true` as
+follows:
+
+    <servlet>
+        <servlet-name>CamelServlet</servlet-name>
+        <display-name>Camel Http Transport Servlet</display-name>
+        <servlet-class>org.apache.camel.component.servlet.CamelHttpTransportServlet</servlet-class>
+        <init-param>
+            <param-name>ignoreDuplicateServletName</param-name>
+            <param-value>true</param-value>
+        </init-param>
+    </servlet>
+
+But it is **strongly advised** to use a unique `servlet-name` for each
+Camel application to avoid this duplication clash, as well as any
+unforeseen side effects.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|muteException|If enabled and an Exchange failed processing on the consumer side the response's body won't contain the exception's stack trace.|true|boolean|
+|servletName|Default name of servlet to use.
The default name is CamelServlet.|CamelServlet|string| +|attachmentMultipartBinding|Whether to automatic bind multipart/form-data as attachments on the Camel Exchange. The options attachmentMultipartBinding=true and disableStreamCache=false cannot work together. Remove disableStreamCache to use AttachmentMultipartBinding. This is turn off by default as this may require servlet specific configuration to enable this when using Servlet's.|false|boolean| +|fileNameExtWhitelist|Whitelist of accepted filename extensions for accepting uploaded files. Multiple extensions can be separated by comma, such as txt,xml.||string| +|httpRegistry|To use a custom org.apache.camel.component.servlet.HttpRegistry.||object| +|allowJavaSerializedObject|Whether to allow java serialization when a request uses context-type=application/x-java-serialized-object. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|httpBinding|To use a custom HttpBinding to control the mapping between Camel message and HttpClient.||object| +|httpConfiguration|To use the shared HttpConfiguration as base configuration.||object| +|headerFilterStrategy|To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message.||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|contextPath|The context-path to use||string| +|disableStreamCache|Determines whether or not the raw input stream from Servlet is cached or not (Camel will read the stream into a in memory/overflow to file, Stream caching) cache. By default Camel will cache the Servlet input stream to support reading it multiple times to ensure it Camel can retrieve all data from the stream. However you can set this option to true when you for example need to access the raw stream, such as streaming it directly to a file or other persistent store. DefaultHttpBinding will copy the request input stream into a stream cache and put it into message body if this option is false to support reading the stream multiple times. If you use Servlet to bridge/proxy an endpoint then consider enabling this option to improve performance, in case you do not need to read the message payload multiple times. The http producer will by default cache the response body stream. 
If setting this option to true, then the producers will not cache the response body stream but use the response stream as-is as the message body.|false|boolean| +|headerFilterStrategy|To use a custom HeaderFilterStrategy to filter header to and from Camel message.||object| +|httpBinding|To use a custom HttpBinding to control the mapping between Camel message and HttpClient.||object| +|chunked|If this option is false the Servlet will disable the HTTP streaming and set the content-length header on the response|true|boolean| +|transferException|If enabled and an Exchange failed processing on the consumer side, and if the caused Exception was send back serialized in the response as a application/x-java-serialized-object content type. On the producer side the exception will be deserialized and thrown as is, instead of the HttpOperationFailedException. The caused exception is required to be serialized. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk.|false|boolean| +|async|Configure the consumer to work in async mode|false|boolean| +|httpMethodRestrict|Used to only allow consuming if the HttpMethod matches, such as GET/POST/PUT etc. 
Multiple methods can be specified separated by comma.||string| +|logException|If enabled and an Exchange failed processing on the consumer side the exception's stack trace will be logged when the exception stack trace is not sent in the response's body.|false|boolean| +|matchOnUriPrefix|Whether or not the consumer should try to find a target consumer by matching the URI prefix if no exact match is found.|false|boolean| +|muteException|If enabled and an Exchange failed processing on the consumer side the response's body won't contain the exception's stack trace.|false|boolean| +|responseBufferSize|To use a custom buffer size on the jakarta.servlet.ServletResponse.||integer| +|servletName|Name of the servlet to use|CamelServlet|string| +|attachmentMultipartBinding|Whether to automatic bind multipart/form-data as attachments on the Camel Exchange. The options attachmentMultipartBinding=true and disableStreamCache=false cannot work together. Remove disableStreamCache to use AttachmentMultipartBinding. This is turn off by default as this may require servlet specific configuration to enable this when using Servlet's.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|eagerCheckContentAvailable|Whether to eager check whether the HTTP requests has content if the content-length header is 0 or not present. This can be turned on in case HTTP clients do not send streamed data.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|fileNameExtWhitelist|Whitelist of accepted filename extensions for accepting uploaded files. Multiple extensions can be separated by comma, such as txt,xml.||string| +|mapHttpMessageBody|If this option is true then IN exchange Body of the exchange will be mapped to HTTP body. Setting this to false will avoid the HTTP mapping.|true|boolean| +|mapHttpMessageFormUrlEncodedBody|If this option is true then IN exchange Form Encoded body of the exchange will be mapped to HTTP. Setting this to false will avoid the HTTP Form Encoded body mapping.|true|boolean| +|mapHttpMessageHeaders|If this option is true then IN exchange Headers of the exchange will be mapped to HTTP headers. Setting this to false will avoid the HTTP Headers mapping.|true|boolean| +|optionsEnabled|Specifies whether to enable HTTP OPTIONS for this Servlet consumer. By default OPTIONS is turned off.|false|boolean| +|traceEnabled|Specifies whether to enable HTTP TRACE for this Servlet consumer. 
By default TRACE is turned off.|false|boolean|
diff --git a/camel-sftp.md b/camel-sftp.md
new file mode 100644
index 0000000000000000000000000000000000000000..96855ad0fe16d20fc3c32c98ce89449308e8d020
--- /dev/null
+++ b/camel-sftp.md
@@ -0,0 +1,192 @@
+# Sftp
+
+**Since Camel 1.1**
+
+**Both producer and consumer are supported**
+
+This component provides access to remote file systems over the SFTP
+protocol.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-ftp</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# Restoring Deprecated Key Types and Algorithms
+
+As of Camel 3.17.0, key types and algorithms that use SHA1 have been
+deprecated. These can be restored, if necessary, by setting JSch
+configuration directly. E.g.:
+
+    JSch.setConfig("server_host_key", JSch.getConfig("server_host_key") + ",ssh-rsa");
+    JSch.setConfig("PubkeyAcceptedAlgorithms", JSch.getConfig("PubkeyAcceptedAlgorithms") + ",ssh-rsa");
+    JSch.setConfig("kex", JSch.getConfig("kex") + ",diffie-hellman-group1-sha1,diffie-hellman-group14-sha1");
+
+Note that the key types and algorithms your server supports may differ
+from the above example. You can use the following command to inspect
+your server’s configuration:
+
+    ssh -vvv <hostname>
+
+# More Information
+
+For more information, you can look at the [FTP
+component](#ftp-component.adoc).
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown.
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible in future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions; these will be logged at WARN or ERROR level and ignored.|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazily (on the first message). Starting lazily can allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route startup to fail. By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component.|true|boolean|
+|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks.
You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean|
+
+## Endpoint Configurations
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|host|Hostname of the FTP server||string|
+|port|Port of the FTP server||integer|
+|directoryName|The starting directory||string|
+|binary|Specifies the file transfer mode, BINARY or ASCII. Default is ASCII (false).|false|boolean|
+|charset|This option is used to specify the encoding of the file. You can use this on the consumer to specify the encoding of the files, which allows Camel to know the charset it should use when loading the file content. Likewise when writing a file, you can use this option to specify which charset to write the file in. Bear in mind that when writing the file, Camel may have to read the message content into memory to be able to convert the data into the configured charset, so do not use this if you have big messages.||string|
+|disconnect|Whether or not to disconnect from the remote FTP server right after use. Disconnect will only disconnect the current connection to the FTP server. If you have a consumer which you want to stop, then you need to stop the consumer/route instead.|false|boolean|
+|doneFileName|Producer: If provided, then Camel will write a 2nd done file when the original file has been written. The done file will be empty. This option configures what file name to use. You can either specify a fixed name, or use dynamic placeholders. The done file will always be written in the same folder as the original file. Consumer: If provided, Camel will only consume files if a done file exists. This option configures what file name to use. You can either specify a fixed name, or use dynamic placeholders. The done file is always expected in the same folder as the original file.
Only ${file.name} and ${file.name.next} are supported as dynamic placeholders.||string|
+|fileName|Use an Expression such as File Language to dynamically set the filename. For consumers, it's used as a filename filter. For producers, it's used to evaluate the filename to write. If an expression is set, it takes precedence over the CamelFileName header. (Note: The header itself can also be an Expression). The expression options support both String and Expression types. If the expression is a String type, it is always evaluated using the File Language. If the expression is an Expression type, the specified Expression type is used - this allows you, for instance, to use OGNL expressions. For the consumer, you can use it to filter filenames, so you can for instance consume today's file using the File Language syntax: mydata-${date:now:yyyyMMdd}.txt. The producers support the CamelOverruleFileName header which takes precedence over any existing CamelFileName header; the CamelOverruleFileName is a header that is used only once, which makes things easier as it avoids having to temporarily store CamelFileName and restore it afterwards.||string|
+|jschLoggingLevel|The logging level to use for JSCH activity logging. As JSCH is verbose by default at INFO level, the threshold is WARN by default.|WARN|object|
+|passiveMode|Sets passive mode connections. Default is active mode connections.|false|boolean|
+|separator|Sets the path separator to be used. UNIX = Uses unix style path separator. Windows = Uses windows style path separator. Auto (default) = Use the existing path separator in the file name.|UNIX|object|
+|fastExistsCheck|If this option is set to true, camel-ftp will list the file directly to check if it exists. Since some FTP servers may not support listing a file directly, if the option is false, camel-ftp will use the old way of listing the directory and checking if the file exists.
This option also influences readLock=changed to control whether it performs a fast check to update file information or not. This can be used to speed up the process if the FTP server has a lot of files.|false|boolean|
+|delete|If true, the file will be deleted after it is processed successfully.|false|boolean|
+|moveFailed|Sets the move failure expression based on Simple language. For example, to move files into a .error subdirectory use: .error. Note: When moving the files to the fail location Camel will handle the error and will not pick up the file again.||string|
+|noop|If true, the file is not moved or deleted in any way. This option is good for readonly data, or for ETL type requirements. If noop=true, Camel will set idempotent=true as well, to avoid consuming the same files over and over again.|false|boolean|
+|preMove|Expression (such as File Language) used to dynamically set the filename when moving it before processing. For example to move in-progress files into the order directory set this value to order.||string|
+|preSort|When pre-sort is enabled, the consumer will sort the file and directory names retrieved from the file system during polling. You may want to do this in case you need to operate on the files in a sorted order. The pre-sort is executed before the consumer starts to filter and accept the files to be processed by Camel. This option is disabled by default.|false|boolean|
+|recursive|If a directory, will look for files in all the sub-directories as well.|false|boolean|
+|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean|
+|streamDownload|Sets the download method to use when not using a local working directory. If set to true, the remote files are streamed to the route as they are read. When set to false, the remote files are loaded into memory before being sent into the route.
If this option is enabled, you must set stepwise=false, as both cannot be enabled at the same time.|false|boolean|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible in future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions; these will be logged at WARN or ERROR level and ignored.|false|boolean|
+|download|Whether the FTP consumer should download the file. If this option is set to false, then the message body will be null, but the consumer will still trigger a Camel Exchange that has details about the file such as file name, file size, etc. It's just that the file will not be downloaded.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Note that if the bridgeErrorHandler option is enabled, this option is not in use. By default the consumer will deal with exceptions; these will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|ignoreFileNotFoundOrPermissionError|Whether to ignore errors when trying to list files in a directory, or when downloading a file, that does not exist or cannot be accessed due to a permission error. By default, when a directory or file does not exist or there is insufficient permission, an exception is thrown.
Setting this option to true allows you to ignore this instead.|false|boolean|
+|inProgressRepository|A pluggable in-progress repository org.apache.camel.spi.IdempotentRepository. The in-progress repository keeps track of the files currently being consumed. By default a memory based repository is used.||object|
+|localWorkDirectory|When consuming, a local work directory can be used to store the remote file content directly in local files, to avoid loading the content into memory. This is beneficial if you consume very big remote files, as it conserves memory.||string|
+|onCompletionExceptionHandler|To use a custom org.apache.camel.spi.ExceptionHandler to handle any thrown exceptions that happen during the file on-completion process, where the consumer does either a commit or rollback. The default implementation will log any exception at WARN level and ignore it.||object|
+|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and routed in Camel.||object|
+|processStrategy|A pluggable org.apache.camel.component.file.GenericFileProcessStrategy allowing you to implement your own readLock option or similar. Can also be used when special conditions must be met before a file can be consumed, such as a special ready file exists. If this option is set then the readLock option does not apply.||object|
+|useList|Whether to allow using the LIST command when downloading a file. Default is true. In some use cases you may want to download a specific file and are not allowed to use the LIST command, and therefore you can set this option to false.
Note that when using this option, the specific file to download does not include metadata such as file size, timestamp, or permissions, because that information can only be retrieved when the LIST command is in use.|true|boolean|
+|checksumFileAlgorithm|If provided, then Camel will write a checksum file when the original file has been written. The checksum file will contain the checksum created with the provided algorithm for the original file. The checksum file will always be written in the same folder as the original file.||string|
+|fileExist|What to do if a file already exists with the same name. Override, which is the default, replaces the existing file. - Append - adds content to the existing file. - Fail - throws a GenericFileOperationException, indicating that there is already an existing file. - Ignore - silently ignores the problem and does not override the existing file, but assumes everything is okay. - Move - requires the moveExisting option to be configured as well. The option eagerDeleteTargetFile can be used to control what to do if, when moving the file, a file with the target name already exists, which would otherwise cause the move operation to fail. The Move option will move any existing files, before writing the target file. - TryRename is only applicable if the tempFileName option is in use. This tries renaming the file from the temporary name to the actual name, without doing any exists check. This check may be faster on some file systems and especially FTP servers.|Override|object|
+|flatten|Flatten is used to flatten the file name path to strip any leading paths, so it's just the file name. This allows you to consume recursively into sub-directories, but when you e.g. write the files to another directory they will be written in a single directory.
Setting this to true on the producer enforces that any file name in the CamelFileName header will be stripped of any leading paths.|false|boolean|
+|jailStartingDirectory|Used for jailing (restricting) writing files to the starting directory (and sub-directories) only. This is enabled by default to not allow Camel to write files to outside directories (to be more secure out of the box). You can turn this off to allow writing files to directories outside the starting directory, such as parent or root folders.|true|boolean|
+|moveExisting|Expression (such as File Language) used to compute the file name to use when fileExist=Move is configured. To move files into a backup subdirectory just enter backup. This option only supports the following File Language tokens: file:name, file:name.ext, file:name.noext, file:onlyname, file:onlyname.noext, file:ext, and file:parent. Notice the file:parent is not supported by the FTP component, as the FTP component can only move any existing files to a relative directory based on the current dir as base.||string|
+|tempFileName|The same as the tempPrefix option but offering more fine-grained control on the naming of the temporary filename as it uses the File Language. The location for tempFilename is relative to the final file location in the option 'fileName', not the target directory in the base uri. For example if option fileName includes a directory prefix: dir/finalFilename then tempFileName is relative to that subdirectory dir.||string|
+|tempPrefix|This option is used to write the file using a temporary name and then, after the write is complete, rename it to the real name. Can be used to identify files being written and also avoid consumers (not using exclusive read locks) reading in-progress files. Is often used by FTP when uploading big files.||string|
+|allowNullBody|Used to specify if a null body is allowed during file writing.
If set to true, an empty file will be created; when set to false, attempting to send a null body to the file component will throw a GenericFileWriteException of 'Cannot write null body to file.'. If the fileExist option is set to 'Override', the file will be truncated; if set to 'Append', the file will remain unchanged.|false|boolean|
+|chmod|Allows you to set chmod on the stored file. For example chmod=640.||string|
+|chmodDirectory|Allows you to set chmod during path creation. For example chmod=640.||string|
+|disconnectOnBatchComplete|Whether or not to disconnect from the remote FTP server right after a batch upload is complete. disconnectOnBatchComplete will only disconnect the current connection to the FTP server.|false|boolean|
+|eagerDeleteTargetFile|Whether or not to eagerly delete any existing target file. This option only applies when you use fileExist=Override and the tempFileName option as well. You can use this to disable (set it to false) deleting the target file before the temp file is written. For example you may write big files and want the target file to exist while the temp file is being written. This ensures the target file is only deleted at the very last moment, just before the temp file is renamed to the target filename. This option is also used to control whether to delete any existing files when fileExist=Move is enabled and an existing file exists. If copyAndDeleteOnRenameFail is false, then an exception will be thrown if an existing file exists; if it is true, then the existing file is deleted before the move operation.|true|boolean|
+|keepLastModified|Will keep the last modified timestamp from the source file (if any). Will use the FileConstants.FILE\_LAST\_MODIFIED header to locate the timestamp. This header can contain either a java.util.Date or a long with the timestamp. If the timestamp exists and the option is enabled it will set this timestamp on the written file.
Note: This option only applies to the file producer. You cannot use this option with any of the ftp producers.|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazily (on the first message). Starting lazily can allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route startup to fail. By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time.|false|boolean|
+|moveExistingFileStrategy|Strategy (custom strategy) used to move a file with a special naming token when fileExist=Move is configured. By default, a built-in implementation is used if no custom strategy is provided.||object|
+|sendNoop|Whether to send a noop command as a pre-write check before uploading files to the FTP server. This is enabled by default to validate that the connection is still alive, which allows Camel to silently re-connect to be able to upload the file. However if this causes problems, you can turn this option off.|true|boolean|
+|autoCreate|Automatically create missing directories in the file's pathname. For the file consumer, that means creating the starting directory. For the file producer, it means the directory the files should be written to.|true|boolean|
+|bindAddress|Specifies the address of the local interface against which the connection should bind.||string|
+|bulkRequests|Specifies how many requests may be outstanding at any one time. Increasing this value may slightly improve file transfer speed but will increase memory usage.||integer|
+|compression|To use compression. Specify a level from 1 to 10.
Important: You must manually add the needed JSCH zlib JAR to the classpath for compression support.||integer|
+|connectTimeout|Sets the connect timeout for waiting for a connection to be established. Used by both FTPClient and JSCH.|10000|duration|
+|existDirCheckUsingLs|Whether to check for an existing directory using the LS command or CD. By default LS is used, which is safer as otherwise Camel needs to change the directory back after checking. However LS has been reported to cause a problem on Windows systems in some situations, and therefore you can disable this option to use CD.|true|boolean|
+|filenameEncoding|Encoding to use for the FTP client when parsing filenames. By default, UTF-8 is used.||string|
+|maximumReconnectAttempts|Specifies the maximum reconnect attempts Camel performs when it tries to connect to the remote FTP server. Use 0 to disable this behavior.||integer|
+|proxy|To use a custom configured com.jcraft.jsch.Proxy. This proxy is used to consume/send messages from the target SFTP host.||object|
+|reconnectDelay|Delay in millis Camel will wait before performing a reconnect attempt.|1000|duration|
+|serverAliveCountMax|Sets the number of keep-alive messages which may be sent without receiving any messages back from the server. If this threshold is reached while keep-alive messages are being sent, the connection will be disconnected. The default value is one.|1|integer|
+|serverAliveInterval|Sets the interval (millis) to send a keep-alive message. If zero is specified, no keep-alive messages will be sent. The default interval is zero.||integer|
+|serverMessageLoggingLevel|The logging level used for various human intended log messages from the FTP server. This can be used during troubleshooting to raise the logging level and inspect the logs received from the FTP server.|DEBUG|object|
+|soTimeout|Sets the socket timeout. For FTP and FTPS this is the SocketOptions.SO\_TIMEOUT value in millis. The recommended option is to set this to 300000 so as not to have a hung connection.
On SFTP this option is set as the timeout on the JSCH Session instance.|300000|duration|
+|stepwise|Sets whether we should stepwise change directories while traversing file structures when downloading files, or as well when uploading a file to a directory. You can disable this if you for example are in a situation where you cannot change directory on the FTP server due to security reasons. Stepwise cannot be used together with streamDownload.|true|boolean|
+|throwExceptionOnConnectFailed|Should an exception be thrown if the connection failed (exhausted)? By default an exception is not thrown and a WARN is logged. You can use this to enable exceptions being thrown, and handle the thrown exception from the org.apache.camel.spi.PollingConsumerPollStrategy rollback method.|false|boolean|
+|timeout|Sets the data timeout for waiting for a reply. Used only by FTPClient.|30000|duration|
+|antExclude|Ant style filter exclusion. If both antInclude and antExclude are used, antExclude takes precedence over antInclude. Multiple exclusions may be specified in comma-delimited format.||string|
+|antFilterCaseSensitive|Sets the case sensitive flag on the ant filter.|true|boolean|
+|antInclude|Ant style filter inclusion. Multiple inclusions may be specified in comma-delimited format.||string|
+|eagerMaxMessagesPerPoll|Allows for controlling whether the limit from maxMessagesPerPoll is eager or not. If eager, the limit is applied during the scanning of files, whereas false would scan all files and then perform sorting. Setting this option to false allows for sorting all files first, and then limiting the poll. Mind that this requires higher memory usage as all file details are kept in memory to perform the sorting.|true|boolean|
+|exclude|Is used to exclude files, if the filename matches the regex pattern (matching is case-insensitive). Note that if you use symbols such as the plus sign and others you would need to configure this using the RAW() syntax if configuring this as an endpoint uri.
See more details at configuring endpoint uris.||string|
+|excludeExt|Is used to exclude files matching file extension name (case insensitive). For example to exclude bak files, then use excludeExt=bak. Multiple extensions can be separated by comma, for example to exclude bak and dat files, use excludeExt=bak,dat. Note that the file extension includes all parts, for example a file named mydata.tar.gz will have the extension tar.gz. For more flexibility, use the include/exclude options.||string|
+|filter|Pluggable filter as a org.apache.camel.component.file.GenericFileFilter class. Will skip files if the filter returns false in its accept() method.||object|
+|filterDirectory|Filters the directory based on Simple language. For example to filter on the current date, you can use a simple date pattern such as ${date:now:yyyyMMdd}||string|
+|filterFile|Filters the file based on Simple language. For example to filter on file size, you can use ${file:size} > 5000||string|
+|idempotent|Option to use the Idempotent Consumer EIP pattern to let Camel skip already processed files. Will by default use a memory based LRUCache that holds 1000 entries. If noop=true then idempotent will be enabled as well to avoid consuming the same files over and over again.|false|boolean|
+|idempotentEager|Option to use the Idempotent Consumer EIP pattern to let Camel skip already processed files. Will by default use a memory based LRUCache that holds 1000 entries. If noop=true then idempotent will be enabled as well to avoid consuming the same files over and over again.|false|boolean|
+|idempotentKey|To use a custom idempotent key. By default the absolute path of the file is used.
You can use the File Language, for example to use the file name and file size, you can do: idempotentKey=${file:name}-${file:size}||string|
+|idempotentRepository|A pluggable repository org.apache.camel.spi.IdempotentRepository which by default uses MemoryIdempotentRepository if none is specified and idempotent is true.||object|
+|include|Is used to include files, if the filename matches the regex pattern (matching is case-insensitive). Note that if you use symbols such as the plus sign and others you would need to configure this using the RAW() syntax if configuring this as an endpoint uri. See more details at configuring endpoint uris.||string|
+|includeExt|Is used to include files matching file extension name (case insensitive). For example to include txt files, then use includeExt=txt. Multiple extensions can be separated by comma, for example to include txt and xml files, use includeExt=txt,xml. Note that the file extension includes all parts, for example a file named mydata.tar.gz will have the extension tar.gz. For more flexibility, use the include/exclude options.||string|
+|maxDepth|The maximum depth to traverse when recursively processing a directory.|2147483647|integer|
+|maxMessagesPerPoll|To define the maximum number of messages to gather per poll. By default no maximum is set. Can be used to set a limit of e.g. 1000 to avoid thousands of files being picked up when starting the server. Set a value of 0 or negative to disable it. Notice: If this option is in use then the File and FTP components will limit before any sorting. For example if you have 100000 files and use maxMessagesPerPoll=500, then only the first 500 files will be picked up, and then sorted. You can use the eagerMaxMessagesPerPoll option and set this to false to scan all files first and then sort afterwards.||integer|
+|minDepth|The minimum depth to start processing when recursively processing a directory. Using minDepth=1 means the base directory.
Using minDepth=2 means the first sub-directory.||integer|
+|move|Expression (such as Simple Language) used to dynamically set the filename when moving it after processing. To move files into a .done subdirectory just enter .done.||string|
+|exclusiveReadLockStrategy|Pluggable read-lock as a org.apache.camel.component.file.GenericFileExclusiveReadLockStrategy implementation.||object|
+|readLock|Used by the consumer to only poll files for which it has an exclusive read-lock (i.e. the file is not in-progress or being written). Camel will wait until the file lock is granted. This option provides the built-in strategies: - none - No read lock is in use. - markerFile - Camel creates a marker file (fileName.camelLock) and then holds a lock on it. This option is not available for the FTP component. - changed - changed uses the file length/modification timestamp to detect whether the file is currently being copied or not. It will use at least 1 sec to determine this, so this option cannot consume files as fast as the others, but can be more reliable, as the JDK IO API cannot always determine whether a file is currently being used by another process. The option readLockCheckInterval can be used to set the check frequency. - fileLock - is for using java.nio.channels.FileLock. This option is not available for Windows OS and the FTP component. This approach should be avoided when accessing a remote file system via a mount/share unless that file system supports distributed file locks. - rename - rename attempts to rename the file as a test of whether we can get an exclusive read-lock. - idempotent - (only for the file component) idempotent is for using an idempotentRepository as the read-lock. This allows using read locks that support clustering if the idempotent repository implementation supports that. - idempotent-changed - (only for the file component) idempotent-changed is for using an idempotentRepository and changed as the combined read-lock.
This allows using read locks that support clustering if the idempotent repository implementation supports that. - idempotent-rename - (only for the file component) idempotent-rename is for using an idempotentRepository and rename as the combined read-lock. This allows using read locks that support clustering if the idempotent repository implementation supports that. Notice: The various read locks are not all suited to work in clustered mode, where concurrent consumers on different nodes are competing for the same files on a shared file system. markerFile uses a close-to-atomic operation to create the empty marker file, but it is not guaranteed to work in a cluster. fileLock may work better, but then the file system needs to support distributed file locks, and so on. Using the idempotent read lock can support clustering if the idempotent repository supports clustering, such as the Hazelcast Component or Infinispan.|none|string|
+|readLockCheckInterval|Interval in millis for the read-lock, if supported by the read lock. This interval is used for sleeping between attempts to acquire the read lock. For example when using the changed read lock, you can set a higher interval period to cater for slow writes. The default of 1 sec. may be too fast if the producer is very slow at writing the file. Notice: For FTP the default readLockCheckInterval is 5000. The readLockTimeout value must be higher than readLockCheckInterval, but a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that ample time is allowed for the read lock process to try to grab the lock before the timeout is hit.|1000|integer|
+|readLockDeleteOrphanLockFiles|Whether or not the read lock with marker files should, upon startup, delete any orphan read lock files which may have been left on the file system if Camel was not properly shut down (such as a JVM crash).
If this option is set to false, any orphaned lock file will cause Camel to not attempt to pick up that file; this could also be because another node is concurrently reading files from the same shared directory.|true|boolean|
+|readLockIdempotentReleaseAsync|Whether the delayed release task should be synchronous or asynchronous. See more details at the readLockIdempotentReleaseDelay option.|false|boolean|
+|readLockIdempotentReleaseAsyncPoolSize|The number of threads in the scheduled thread pool when using asynchronous release tasks. The default of 1 core thread should be sufficient in almost all use-cases; only set this to a higher value if either updating the idempotent repository is slow, or there are a lot of files to process. This option is not in use if you use a shared thread pool by configuring the readLockIdempotentReleaseExecutorService option. See more details at the readLockIdempotentReleaseDelay option.||integer|
+|readLockIdempotentReleaseDelay|Whether to delay the release task for a period of millis. This can be used to delay the release tasks to expand the window when a file is regarded as read-locked, in an active/active cluster scenario with a shared idempotent repository, to ensure other nodes cannot potentially scan and acquire the same file, due to race-conditions. Expanding the time-window of the release tasks helps prevent these situations. Note that delaying is only needed if you have configured readLockRemoveOnCommit to true.||integer|
+|readLockIdempotentReleaseExecutorService|To use a custom and shared thread pool for asynchronous release tasks. See more details at the readLockIdempotentReleaseDelay option.||object|
+|readLockLoggingLevel|Logging level used when a read lock could not be acquired. By default a DEBUG is logged. You can change this level, for example to OFF to not have any logging.
This option is only applicable for readLock of types: changed, fileLock, idempotent, idempotent-changed, idempotent-rename, rename.|DEBUG|object|
+|readLockMarkerFile|Whether to use a marker file with the changed, rename, or exclusive read lock types. By default a marker file is used as well, to guard against other processes picking up the same files. This behavior can be turned off by setting this option to false, for example if you do not want the Camel application to write marker files to the file system.|true|boolean|
+|readLockMinAge|This option is applied only for readLock=changed. It allows you to specify a minimum age the file must be before attempting to acquire the read lock. For example use readLockMinAge=300s to require the file to be at least 5 minutes old. This can speed up the changed read lock, as it will only attempt to acquire files which are at least that given age.|0|integer|
+|readLockMinLength|This option is applied only for readLock=changed. It allows you to configure a minimum file length. By default Camel expects the file to contain data, and thus the default value is 1. You can set this option to zero, to allow consuming zero-length files.|1|integer|
+|readLockRemoveOnCommit|This option is applied only for readLock=idempotent. It allows you to specify whether to remove the file name entry from the idempotent repository when processing the file succeeds and a commit happens. By default the file is not removed, which ensures that race conditions do not occur, so another active node may attempt to grab the file. Instead the idempotent repository may support eviction strategies that you can configure to evict the file name entry after X minutes - this ensures no problems with race conditions. See more details at the readLockIdempotentReleaseDelay option.|false|boolean|
+|readLockRemoveOnRollback|This option is applied only for readLock=idempotent.
It allows to specify whether to remove the file name entry from the idempotent repository when processing the file failed and a rollback happens. If this option is false, then the file name entry is confirmed (as if the file did a commit).|true|boolean| +|readLockTimeout|Optional timeout in millis for the read-lock, if supported by the read-lock. If the read-lock could not be granted and the timeout triggered, then Camel will skip the file. At next poll Camel, will try the file again, and this time maybe the read-lock could be granted. Use a value of 0 or lower to indicate forever. Currently fileLock, changed and rename support the timeout. Notice: For FTP the default readLockTimeout value is 20000 instead of 10000. The readLockTimeout value must be higher than readLockCheckInterval, but a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that ample time is allowed for the read lock process to try to grab the lock before the timeout was hit.|10000|integer| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. 
When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| +|autoCreateKnownHostsFile|If knownHostFile does not exist, then attempt to auto-create the path and file (beware that the file will be created by the current user of the running Java process, which may not have file permission).|false|boolean| +|ciphers|Set a comma separated list of ciphers that will be used in order of preference. Possible cipher names are defined by JCraft JSCH. 
Some examples include: aes128-ctr,aes128-cbc,3des-ctr,3des-cbc,blowfish-cbc,aes192-cbc,aes256-cbc. If not specified the default list from JSCH will be used.||string| +|keyExchangeProtocols|Set a comma separated list of key exchange protocols that will be used in order of preference. Possible cipher names are defined by JCraft JSCH. Some examples include: diffie-hellman-group-exchange-sha1,diffie-hellman-group1-sha1,diffie-hellman-group14-sha1, diffie-hellman-group-exchange-sha256,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521. If not specified the default list from JSCH will be used.||string| +|keyPair|Sets a key pair of the public and private key so to that the SFTP endpoint can do public/private key verification.||object| +|knownHosts|Sets the known\_hosts from the byte array, so that the SFTP endpoint can do host key verification.||string| +|knownHostsFile|Sets the known\_hosts file, so that the SFTP endpoint can do host key verification.||string| +|knownHostsUri|Sets the known\_hosts file (loaded from classpath by default), so that the SFTP endpoint can do host key verification.||string| +|password|Password to use for login||string| +|preferredAuthentications|Set the preferred authentications which SFTP endpoint will used. Some example include:password,publickey. If not specified the default list from JSCH will be used.||string| +|privateKey|Set the private key as byte so that the SFTP endpoint can do private key verification.||string| +|privateKeyFile|Set the private key file so that the SFTP endpoint can do private key verification.||string| +|privateKeyPassphrase|Set the private key file passphrase so that the SFTP endpoint can do private key verification.||string| +|privateKeyUri|Set the private key file (loaded from classpath by default) so that the SFTP endpoint can do private key verification.||string| +|publicKeyAcceptedAlgorithms|Set a comma separated list of public key accepted algorithms. 
Some examples include: ssh-dss,ssh-rsa,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521. If not specified the default list from JSCH will be used.||string| +|serverHostKeys|Set a comma separated list of algorithms supported for the server host key. Some examples include: ssh-dss,ssh-rsa,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521. If not specified the default list from JSCH will be used.||string| +|strictHostKeyChecking|Sets whether to use strict host key checking.|no|string| +|username|Username to use for login||string| +|useUserKnownHostsFile|If knownHostFile has not been explicit configured then use the host file from System.getProperty(user.home)/.ssh/known\_hosts|true|boolean| +|shuffle|To shuffle the list of files (sort in random order)|false|boolean| +|sortBy|Built-in sort by using the File Language. Supports nested sorts, so you can have a sort by file name and as a 2nd group sort by modified date.||string| +|sorter|Pluggable sorter as a java.util.Comparator class.||object| diff --git a/camel-sjms.md b/camel-sjms.md new file mode 100644 index 0000000000000000000000000000000000000000..81034af6f4527445c1491fe8ec03dc4ff2f361a4 --- /dev/null +++ b/camel-sjms.md @@ -0,0 +1,277 @@ +# Sjms + +**Since Camel 2.11** + +**Both producer and consumer are supported** + +The Simple JMS Component is a JMS component that only uses JMS APIs and +no third-party framework such as Spring JMS. + +The component was reworked from Camel 3.8 onwards to be similar to the +existing Camel JMS component that is based on Spring JMS. + +The reason is to offer many of the same features and functionality from +the JMS component, but for users that require lightweight without having +to include the Spring Framework. + +There are some advanced features in the Spring JMS component that has +been omitted, such as shared queues for request/reply. 
Spring JMS offers
+fine-grained tuning of concurrency settings, which can be tweaked for
+dynamic scaling up and down depending on load. This is a special feature
+in Spring JMS that would require substantial code to implement in SJMS.
+
+The SJMS component does not support Spring or JTA transactions; however,
+internal local transactions are supported using JMS Transacted or Client
+Acknowledge mode. See further details below.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-sjms</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    sjms:[queue:|topic:]destinationName[?options]
+
+Where `destinationName` is a JMS queue or topic name. By default, the
+`destinationName` is interpreted as a queue name. For example, to
+connect to the queue `FOO.BAR`, use:
+
+    sjms:FOO.BAR
+
+You can include the optional `queue:` prefix, if you prefer:
+
+    sjms:queue:FOO.BAR
+
+To connect to a topic, you *must* include the `topic:` prefix. For
+example, to connect to the topic `Stocks.Prices`, use:
+
+    sjms:topic:Stocks.Prices
+
+# Reuse endpoint and send to different destinations computed at runtime
+
+If you need to send messages to many different JMS destinations, it
+makes sense to reuse an SJMS endpoint and specify the real destination in
+a message header. This allows Camel to reuse the same endpoint, but send
+to different destinations. This greatly reduces the number of endpoints
+created and economizes on memory and thread resources.
+
+Using [toD](#eips:toD-eip.adoc) is easier than specifying the dynamic
+destination with a header.
+
+You can specify the destination in the following headers:
+
+|Header|Type|Description|
+|---|---|---|
+|CamelJmsDestinationName|String|The destination name.|
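Conceptually, the producer prefers this header over the destination configured on the endpoint URI. A minimal plain-Java sketch of that precedence (illustrative only; `resolveDestination` is a hypothetical helper, not a Camel API):

```java
import java.util.HashMap;
import java.util.Map;

public class DestinationResolutionSketch {
    // Prefer the CamelJmsDestinationName header; otherwise fall back to
    // the destination configured on the endpoint URI.
    static String resolveDestination(Map<String, Object> headers, String endpointDestination) {
        Object override = headers.get("CamelJmsDestinationName");
        return override != null ? override.toString() : endpointDestination;
    }

    public static void main(String[] args) {
        Map<String, Object> headers = new HashMap<>();
        // No header set: the endpoint's own destination is used.
        System.out.println(resolveDestination(headers, "dummy"));   // dummy
        // Header set: it overrides the endpoint destination.
        headers.put("CamelJmsDestinationName", "order:2");
        System.out.println(resolveDestination(headers, "dummy"));   // order:2
    }
}
```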
+
+For example, the following route shows how you can compute a destination
+at run time and use it to override the destination appearing in the JMS
+URL:
+
+    from("file://inbox")
+        .to("bean:computeDestination")
+        .to("sjms:queue:dummy");
+
+The queue name, `dummy`, is just a placeholder. It must be provided as
+part of the JMS endpoint URL, but it will be ignored in this example.
+
+In the `computeDestination` bean, specify the real destination by
+setting the `CamelJmsDestinationName` header as follows:
+
+    public void setJmsHeader(Exchange exchange) {
+        String id = ....
+        exchange.getIn().setHeader("CamelJmsDestinationName", "order:" + id);
+    }
+
+Then Camel will read this header and use it as the destination instead
+of the one configured on the endpoint. So, in this example, Camel sends
+the message to `sjms:queue:order:2`, assuming the `id` value was 2.
+
+Keep in mind that the JMS producer removes the `CamelJmsDestinationName`
+header from the exchange and does not propagate it to the created JMS
+message, to avoid accidental loops in the routes (in scenarios when the
+message will be forwarded to another JMS endpoint).
+
+# Using toD
+
+If you need to send messages to many different JMS destinations, it
+makes sense to reuse an SJMS endpoint and specify the dynamic
+destinations with the simple language using [toD](#eips:toD-eip.adoc).
+
+For example, suppose you need to send messages to queues per order
+type; using toD this could be done as follows:
+
+    from("direct:order")
+        .toD("sjms:order-${header.orderType}");
+
+# Additional Notes
+
+## Local transactions
+
+When using `transacted=true`, JMS Transacted Acknowledge mode is in
+use. The SJMS component supports this for both consumers and
+producers. If a consumer is transacted, then the active JMS Session will
+commit or rollback at the end of processing the message.
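The consumer-side commit/rollback semantics described above can be modelled in a few lines of plain Java (an illustrative sketch of the behaviour, not the component's implementation): on success the message is acknowledged and removed from the queue; on an exception it is rolled back and stays available for redelivery.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;

public class TransactedConsumeSketch {
    // Model of a transacted consume: peek the message, process it, and only
    // remove it from the queue (commit) if processing completed; on an
    // exception, leave it in place (rollback) so it can be redelivered.
    static boolean consumeTransacted(Deque<String> queue, Consumer<String> processor) {
        String message = queue.peek();
        if (message == null) return false;
        try {
            processor.accept(message);
            queue.poll();          // commit: message is gone
            return true;
        } catch (RuntimeException e) {
            return false;          // rollback: message stays for redelivery
        }
    }

    public static void main(String[] args) {
        Deque<String> queue = new ArrayDeque<>();
        queue.add("HELLO");
        // First attempt fails -> rollback, message still queued.
        consumeTransacted(queue, m -> { throw new RuntimeException("boom"); });
        System.out.println(queue.size()); // 1
        // Second attempt succeeds -> commit, queue drained.
        consumeTransacted(queue, m -> {});
        System.out.println(queue.size()); // 0
    }
}
```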
+
+SJMS producers that are `transacted=true` will also defer until the end
+of processing the message before the active JMS Session will commit or
+rollback.
+
+You can combine consumer and producer, such as:
+
+    from("sjms:cheese?transacted=true")
+        .to("bean:foo")
+        .to("sjms:foo?transacted=true")
+        .to("bean:bar");
+
+Here the consumer and producer are both transacted, which means that
+both will commit (or rollback, in case of an exception during routing)
+only at the end of processing the message.
+
+## Message Header Format
+
+The SJMS Component uses the same header format strategy used in the
+Camel JMS Component. This pluggable strategy ensures that messages sent
+over the wire conform to the JMS Message spec.
+
+For the `exchange.in.header`, the following rules apply to the header
+keys:
+
+- Keys starting with `JMS` or `JMSX` are reserved.
+
+- `exchange.in.headers` keys must be literals and all be valid Java
+  identifiers (do not use dots in the key name).
+
+- Camel replaces dots \& hyphens and the reverse when consuming JMS
+  messages:
+
+  - `.` is replaced by `_DOT_` and the reverse replacement happens when
+    Camel consumes the message.
+
+  - `-` is replaced by `_HYPHEN_` and the reverse replacement happens
+    when Camel consumes the message. See also the option
+    `jmsKeyFormatStrategy`, which allows use of your own custom
+    strategy for formatting keys.
+
+## Message Content
+
+To deliver content over the wire, we must ensure that the body of the
+message that is being delivered adheres to the JMS Message
+Specification. Therefore, the values produced must either be
+primitives or their counterpart objects (such as `Integer`, `Long`,
+`Character`). The types `String`, `CharSequence`, `Date`, `BigDecimal`
+and `BigInteger` are all converted to their `toString()` representation.
+All other types are dropped.
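The dot/hyphen substitution described under Message Header Format can be sketched in plain Java. This is an approximation of what the default `jmsKeyFormatStrategy` does, not the component's actual code (the pluggable interface is `org.apache.camel.component.jms.JmsKeyFormatStrategy`):

```java
public class KeyFormatSketch {
    // Encode a Camel header key into a JMS-safe identifier and back again,
    // mirroring the documented dot/hyphen substitution (illustrative only).
    static String encodeKey(String key) {
        return key.replace(".", "_DOT_").replace("-", "_HYPHEN_");
    }

    static String decodeKey(String key) {
        return key.replace("_DOT_", ".").replace("_HYPHEN_", "-");
    }

    public static void main(String[] args) {
        String camelKey = "org.apache.camel.MyKey-1";
        String jmsKey = encodeKey(camelKey);
        System.out.println(jmsKey);            // org_DOT_apache_DOT_camel_DOT_MyKey_HYPHEN_1
        System.out.println(decodeKey(jmsKey)); // org.apache.camel.MyKey-1
    }
}
```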
+ +## Clustering + +When using *InOut* with SJMS in a clustered environment, you must either +use TemporaryQueue destinations or use a unique reply to destination per +InOut producer endpoint. The producer handles message correlation is +handled, not with message selectors at the broker. + +You should only use queues as reply-to destination types, topics are not +recommended or fully supported. + +Currently, the only correlation strategy is to use the +`JMSCorrelationId`. The *InOut* Consumer uses this strategy as well +ensuring that all response messages to the included `JMSReplyTo` +destination also have the `JMSCorrelationId` copied from the request as +well. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|connectionFactory|The connection factory to be use. A connection factory must be configured either on the component or endpoint.||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|destinationCreationStrategy|To use a custom DestinationCreationStrategy.||object| +|exceptionListener|Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions.||object| +|jmsKeyFormatStrategy|Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides one implementation out of the box: default. The default strategy will safely marshal dots and hyphens (. and -). Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation.||object| +|messageCreatedStrategy|To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of jakarta.jms.Message objects when Camel is sending a JMS message.||object| +|recoveryInterval|Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. 
The default is 5000 ms, that is, 5 seconds.|5000|duration| +|replyToOnTimeoutMaxConcurrentConsumers|Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS.|1|integer| +|requestTimeoutCheckerInterval|Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout.|1000|duration| +|headerFilterStrategy|To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message.||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|destinationType|The kind of destination to use|queue|string| +|destinationName|DestinationName is a JMS queue or topic name. By default, the destinationName is interpreted as a queue name.||string| +|acknowledgementMode|The JMS acknowledgement name, which is one of: SESSION\_TRANSACTED, CLIENT\_ACKNOWLEDGE, AUTO\_ACKNOWLEDGE, DUPS\_OK\_ACKNOWLEDGE|AUTO\_ACKNOWLEDGE|object| +|connectionFactory|The connection factory to be use. A connection factory must be configured either on the component or endpoint.||object| +|disableReplyTo|Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. 
You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another.|false|boolean| +|replyTo|Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer).||string| +|testConnectionOnStartup|Specifies whether to test the connection on startup. This ensures that when Camel starts that all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers is tested as well.|false|boolean| +|asyncConsumer|Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the next message from the JMS queue, while the previous message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pickup the next message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions).|false|boolean| +|autoStartup|Specifies whether the consumer container should auto-startup.|true|boolean| +|clientId|Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions. If using Apache ActiveMQ you may prefer to use Virtual Topics instead.||string| +|concurrentConsumers|Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 
When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener.|1|integer| +|durableSubscriptionName|The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well.||string| +|replyToDeliveryPersistent|Specifies whether to use persistent delivery by default for replies.|true|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|eagerLoadingOfProperties|Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody.|false|boolean| +|eagerPoisonBody|If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison are already stored as exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. 
See also the option eagerLoadingOfProperties.|Poison JMS message due to ${exception.message}|string| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|messageSelector|Sets the JMS Message selector syntax.||string| +|replyToSameDestinationAllowed|Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself.|false|boolean| +|deliveryMode|Specifies the delivery mode to be used. Possible values are those defined by jakarta.jms.DeliveryMode. NON\_PERSISTENT = 1 and PERSISTENT = 2.||integer| +|deliveryPersistent|Specifies whether persistent delivery is used by default.|true|boolean| +|priority|Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect.|4|integer| +|replyToConcurrentConsumers|Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads.|1|integer| +|replyToOverride|Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination.||string| +|replyToType|Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary or Exclusive. By default Camel will use temporary queues. 
However if replyTo has been configured, then Exclusive is used.||object| +|requestTimeout|The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option.|20000|duration| +|timeToLive|When sending messages, specifies the time-to-live of the message (in milliseconds).|-1|integer| +|allowNullBody|Whether to allow sending messages with no body. If this option is false and the message body is null, then an JMSException is thrown.|true|boolean| +|disableTimeToLive|Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to archive. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details.|false|boolean| +|explicitQosEnabled|Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|preserveMessageQos|Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header.|false|boolean| +|asyncStartListener|Whether to startup the consumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or fail over. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. 
If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry.|false|boolean| +|asyncStopListener|Whether to stop the consumer message listener asynchronously, when stopping a route.|false|boolean| +|destinationCreationStrategy|To use a custom DestinationCreationStrategy.||object| +|exceptionListener|Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions.||object| +|headerFilterStrategy|To use a custom HeaderFilterStrategy to filter header to and from Camel message.||object| +|includeAllJMSXProperties|Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply.|false|boolean| +|jmsKeyFormatStrategy|Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation.||object| +|mapJmsMessage|Specifies whether Camel should auto map the received JMS message to a suited payload type, such as jakarta.jms.TextMessage to a String etc. 
See section about how mapping works below for more details.|true|boolean| +|messageCreatedStrategy|To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of jakarta.jms.Message objects when Camel is sending a JMS message.||object| +|recoveryInterval|Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds.|5000|duration| +|synchronous|Sets whether synchronous processing should be strictly used|false|boolean| +|transferException|If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be send back in response as a jakarta.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the received to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumer!|false|boolean| +|transacted|Specifies whether to use transacted mode|false|boolean| diff --git a/camel-sjms2.md b/camel-sjms2.md new file mode 100644 index 0000000000000000000000000000000000000000..8101bf9c3c016cf3e26f21bcd17acaa9eec47eda --- /dev/null +++ b/camel-sjms2.md @@ -0,0 +1,283 @@ +# Sjms2 + +**Since Camel 2.19** + +**Both producer and consumer are supported** + +The Simple JMS Component is a JMS component that only uses JMS APIs and +no third-party framework such as Spring JMS. 
+
+The component was reworked from Camel 3.8 onwards to be similar to the
+existing Camel JMS component that is based on Spring JMS.
+
+The reason is to offer many of the same features and functionality from
+the JMS component, but for users that require a lightweight component
+without having to include the Spring Framework.
+
+There are some advanced features in the Spring JMS component that have
+been omitted, such as shared queues for request/reply. Spring JMS offers
+fine-grained tuning of concurrency settings, which can be tweaked for
+dynamic scaling up and down depending on load. This is a special feature
+in Spring JMS that would require substantial code to implement in SJMS2.
+
+The SJMS2 component does not support Spring or JTA transactions; however,
+internal local transactions are supported using JMS Transacted or Client
+Acknowledge mode. See further details below.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-sjms2</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    sjms2:[queue:|topic:]destinationName[?options]
+
+Where `destinationName` is a JMS queue or topic name. By default, the
+`destinationName` is interpreted as a queue name. For example, to
+connect to the queue `FOO.BAR`, use:
+
+    sjms2:FOO.BAR
+
+You can include the optional `queue:` prefix, if you prefer:
+
+    sjms2:queue:FOO.BAR
+
+To connect to a topic, you *must* include the `topic:` prefix. For
+example, to connect to the topic `Stocks.Prices`, use:
+
+    sjms2:topic:Stocks.Prices
+
+You append query options to the URI using the following format:
+`?option=value&option=value&...`
+
+# Reuse endpoint and send to different destinations computed at runtime
+
+If you need to send messages to many different JMS destinations, it
+makes sense to reuse an SJMS2 endpoint and specify the real destination
+in a message header. This allows Camel to reuse the same endpoint, but
+send to different destinations.
This greatly reduces the number of endpoints
+created and economizes on memory and thread resources.
+
+Using [toD](#eips:toD-eip.adoc) is easier than specifying the dynamic
+destination with a header.
+
+You can specify the destination in the following headers:
+
|Header|Type|Description|
|---|---|---|
|CamelJmsDestinationName|String|The destination name.|
+
+For example, the following route shows how you can compute a destination
+at run time and use it to override the destination appearing in the JMS
+URL:
+
+    from("file://inbox")
+        .to("bean:computeDestination")
+        .to("sjms2:queue:dummy");
+
+The queue name, `dummy`, is just a placeholder. It must be provided as
+part of the JMS endpoint URL, but it will be ignored in this example.
+
+In the `computeDestination` bean, specify the real destination by
+setting the `CamelJmsDestinationName` header as follows:
+
+    public void setJmsHeader(Exchange exchange) {
+        String id = ....
+        exchange.getIn().setHeader("CamelJmsDestinationName", "order:" + id);
+    }
+
+Then Camel will read this header and use it as the destination instead
+of the one configured on the endpoint. So, in this example, Camel sends
+the message to `sjms2:queue:order:2`, assuming the `id` value was 2.
+
+Keep in mind that the JMS producer removes the `CamelJmsDestinationName`
+header from the exchange and does not propagate it to the created JMS
+message, to avoid accidental loops in the routes (in scenarios where
+the message is forwarded to another JMS endpoint).
+
+# Using toD
+
+If you need to send messages to a lot of different JMS destinations, it
+makes sense to reuse an SJMS2 endpoint and specify the dynamic
+destinations with the simple language using [toD](#eips:toD-eip.adoc).
+
+For example, suppose you need to send messages to queues based on order
+types, then using toD could be done as follows:
+
+    from("direct:order")
+        .toD("sjms2:order-${header.orderType}");
+
+# Additional Notes
+
+## Local transactions
+
+When using `transacted=true`, the JMS Transacted Acknowledge Mode is in
+use. The SJMS2 component supports this for both consumers and
+producers. If a consumer is transacted, then the active JMS Session
+will commit or rollback at the end of processing the message.
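The commit-or-rollback-at-the-end behaviour described above can be sketched as follows. This is an illustrative model only, not Camel's implementation, and the `Session` interface is a hypothetical stand-in for `jakarta.jms.Session`:

```java
// Hypothetical stand-in for jakarta.jms.Session (not the real JMS API).
interface Session {
    void commit();
    void rollback();
}

public class TransactedConsumerSketch {
    static String outcome;

    // The active session commits or rolls back only once the whole
    // message has been processed (routed), successfully or not.
    static void onMessage(Session session, Runnable route) {
        try {
            route.run();
            session.commit();
        } catch (RuntimeException e) {
            session.rollback(); // the broker can then redeliver the message
        }
    }

    public static void main(String[] args) {
        Session session = new Session() {
            public void commit() { outcome = "commit"; }
            public void rollback() { outcome = "rollback"; }
        };
        onMessage(session, () -> { });
        System.out.println(outcome); // commit
        onMessage(session, () -> { throw new RuntimeException("routing failed"); });
        System.out.println(outcome); // rollback
    }
}
```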
+
+SJMS2 producers with `transacted=true` will likewise defer the commit
+or rollback of the active JMS Session until the end of processing the
+message.
+
+You can combine consumer and producer, such as:
+
+    from("sjms2:cheese?transacted=true")
+        .to("bean:foo")
+        .to("sjms2:foo?transacted=true")
+        .to("bean:bar");
+
+Here the consumer and producer are both transacted, which means that
+both the consumer and the producer commit (or rollback, in case of an
+exception during routing) only at the end of processing the message.
+
+## Message Header Format
+
+The SJMS2 Component uses the same header format strategy used in the
+Camel JMS Component. This pluggable strategy ensures that messages sent
+over the wire conform to the JMS Message spec.
+
+For the `exchange.in.header`, the following rules apply for the header
+keys:
+
+- Keys starting with `JMS` or `JMSX` are reserved.
+
+- `exchange.in.headers` keys must be literals and all be valid Java
+  identifiers (do not use dots in the key name).
+
+- Camel replaces dots and hyphens when sending JMS messages, and
+  applies the reverse replacement when consuming them:
+
+  - a dot (`.`) is replaced by `_DOT_`, and the reverse replacement is
+    applied when Camel consumes the message.
+
+  - a hyphen (`-`) is replaced by `_HYPHEN_`, and the reverse
+    replacement is applied when Camel consumes the message.
+
+See also the option `jmsKeyFormatStrategy`, which allows use of your
+own custom strategy for formatting keys.
+
+## Message Content
+
+To deliver content over the wire, we must ensure that the body of the
+message that is being delivered adheres to the JMS Message
+Specification. Therefore, all message bodies produced must be either
+primitives or their object counterparts (such as `Integer`, `Long`,
+`Character`). The types `String`, `CharSequence`, `Date`, `BigDecimal`,
+and `BigInteger` are all converted to their `toString()`
+representation. All other types are dropped.
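The dot and hyphen marshalling described in the Message Header Format section can be sketched like this. The `_DOT_` and `_HYPHEN_` tokens are the ones used by the default strategy; the sketch is a simplification, not the actual `org.apache.camel.component.jms.JmsKeyFormatStrategy` implementation:

```java
public class JmsKeyFormatSketch {
    // Encode a Camel header key into a JMS-safe key: '.' becomes "_DOT_"
    // and '-' becomes "_HYPHEN_". Simplified: keys that already contain
    // the tokens are not handled here.
    static String encode(String key) {
        return key.replace(".", "_DOT_").replace("-", "_HYPHEN_");
    }

    // Reverse replacement, applied when a JMS message is consumed.
    static String decode(String key) {
        return key.replace("_DOT_", ".").replace("_HYPHEN_", "-");
    }

    public static void main(String[] args) {
        String key = "my.header-name";
        String wire = encode(key);
        System.out.println(wire);         // my_DOT_header_HYPHEN_name
        System.out.println(decode(wire)); // my.header-name
    }
}
```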
+
+## Clustering
+
+When using *InOut* with SJMS2 in a clustered environment, you must
+either use TemporaryQueue destinations or a unique reply-to destination
+per InOut producer endpoint. Message correlation is handled by the
+producer, not by message selectors at the broker.
+
+You should only use queues as reply-to destination types; topics are
+not recommended or fully supported.
+
+Currently, the only correlation strategy is to use the
+`JMSCorrelationId`. The *InOut* Consumer uses this strategy as well,
+ensuring that all response messages sent to the included `JMSReplyTo`
+destination have the `JMSCorrelationId` copied from the request.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|connectionFactory|The connection factory to be used. A connection factory must be configured either on the component or endpoint.||object|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started.
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|destinationCreationStrategy|To use a custom DestinationCreationStrategy.||object| +|exceptionListener|Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions.||object| +|jmsKeyFormatStrategy|Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides one implementation out of the box: default. The default strategy will safely marshal dots and hyphens (. and -). Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation.||object| +|messageCreatedStrategy|To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of jakarta.jms.Message objects when Camel is sending a JMS message.||object| +|recoveryInterval|Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. 
The default is 5000 ms, that is, 5 seconds.|5000|duration|
+|replyToOnTimeoutMaxConcurrentConsumers|Specifies the maximum number of concurrent consumers to continue routing when a timeout occurred when using request/reply over JMS.|1|integer|
+|requestTimeoutCheckerInterval|Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout.|1000|duration|
+|headerFilterStrategy|To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter headers to and from the Camel message.||object|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|destinationType|The kind of destination to use|queue|string|
+|destinationName|DestinationName is a JMS queue or topic name. By default, the destinationName is interpreted as a queue name.||string|
+|acknowledgementMode|The JMS acknowledgement name, which is one of: SESSION\_TRANSACTED, CLIENT\_ACKNOWLEDGE, AUTO\_ACKNOWLEDGE, DUPS\_OK\_ACKNOWLEDGE|AUTO\_ACKNOWLEDGE|object|
+|connectionFactory|The connection factory to be used. A connection factory must be configured either on the component or endpoint.||object|
+|disableReplyTo|Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message.
You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route messages from one system to another.|false|boolean|
+|replyTo|Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer).||string|
+|testConnectionOnStartup|Specifies whether to test the connection on startup. This ensures that, when Camel starts, all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers are tested as well.|false|boolean|
+|asyncConsumer|Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the next message from the JMS queue, while the previous message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pickup the next message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions).|false|boolean|
+|autoStartup|Specifies whether the consumer container should auto-startup.|true|boolean|
+|clientId|Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions. If using Apache ActiveMQ you may prefer to use Virtual Topics instead.||string|
+|concurrentConsumers|Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads.
When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener.|1|integer| +|durable|Sets the topic to be durable|false|boolean| +|durableSubscriptionName|The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well.||string| +|replyToDeliveryPersistent|Specifies whether to use persistent delivery by default for replies.|true|boolean| +|shared|Sets the topic to be shared|false|boolean| +|subscriptionId|Sets the topic subscription id, required for durable or shared topics.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|eagerLoadingOfProperties|Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. 
See also the option eagerPoisonBody.|false|boolean| +|eagerPoisonBody|If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison are already stored as exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties.|Poison JMS message due to ${exception.message}|string| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|messageSelector|Sets the JMS Message selector syntax.||string| +|replyToSameDestinationAllowed|Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself.|false|boolean| +|deliveryMode|Specifies the delivery mode to be used. Possible values are those defined by jakarta.jms.DeliveryMode. NON\_PERSISTENT = 1 and PERSISTENT = 2.||integer| +|deliveryPersistent|Specifies whether persistent delivery is used by default.|true|boolean| +|priority|Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect.|4|integer| +|replyToConcurrentConsumers|Specifies the default number of concurrent consumers when doing request/reply over JMS. 
See also the maxMessagesPerTask option to control dynamic scaling up/down of threads.|1|integer|
+|replyToOverride|Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination.||string|
+|replyToType|Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Exclusive is used.||object|
+|requestTimeout|The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option.|20000|duration|
+|timeToLive|When sending messages, specifies the time-to-live of the message (in milliseconds).|-1|integer|
+|allowNullBody|Whether to allow sending messages with no body. If this option is false and the message body is null, then a JMSException is thrown.|true|boolean|
+|disableTimeToLive|Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to achieve. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details.|false|boolean|
+|explicitQosEnabled|Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages.
This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|preserveMessageQos|Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header.|false|boolean| +|asyncStartListener|Whether to startup the consumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or fail over. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. 
If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry.|false|boolean| +|asyncStopListener|Whether to stop the consumer message listener asynchronously, when stopping a route.|false|boolean| +|destinationCreationStrategy|To use a custom DestinationCreationStrategy.||object| +|exceptionListener|Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions.||object| +|headerFilterStrategy|To use a custom HeaderFilterStrategy to filter header to and from Camel message.||object| +|includeAllJMSXProperties|Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply.|false|boolean| +|jmsKeyFormatStrategy|Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation.||object| +|mapJmsMessage|Specifies whether Camel should auto map the received JMS message to a suited payload type, such as jakarta.jms.TextMessage to a String etc. 
See section about how mapping works below for more details.|true|boolean|
+|messageCreatedStrategy|To use the given MessageCreatedStrategy which is invoked when Camel creates new instances of jakarta.jms.Message objects when Camel is sending a JMS message.||object|
+|recoveryInterval|Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds.|5000|duration|
+|synchronous|Sets whether synchronous processing should be strictly used|false|boolean|
+|transferException|If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be sent back in response as a jakarta.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producer and consumer!|false|boolean|
+|transacted|Specifies whether to use transacted mode|false|boolean|
diff --git a/camel-slack.md b/camel-slack.md
new file mode 100644
index 0000000000000000000000000000000000000000..e19af969293d7585a68cf670f884280ba9802f8e
--- /dev/null
+++ b/camel-slack.md
@@ -0,0 +1,222 @@
+# Slack
+
+**Since Camel 2.16**
+
+**Both producer and consumer are supported**
+
+The Slack component allows you to connect to an instance of
+[Slack](http://www.slack.com/) and to send and receive messages.
+
+To send a message contained in the message body, a pre-established
+[Slack incoming webhook](https://api.slack.com/incoming-webhooks) must
+be configured in Slack.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-slack</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+To send a message to a channel:
+
+    slack:#channel[?options]
+
+To send a direct message to a Slack user:
+
+    slack:@userID[?options]
+
+# Configuring in Spring XML
+
+The SlackComponent with XML must be configured as a Spring or Blueprint
+bean that contains the incoming webhook URL or the app token for the
+integration as a parameter, for example:
+
+    <bean id="slack" class="org.apache.camel.component.slack.SlackComponent">
+        <property name="webhookUrl" value="https://hooks.slack.com/services/T0000000/B0000000/XXXXXXXXXX"/>
+    </bean>
+
+For Java, you can configure this using Java code.
+
+# Example
+
+A CamelContext with Blueprint could be as:
+
+    <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
+        <bean id="slack" class="org.apache.camel.component.slack.SlackComponent">
+            <property name="webhookUrl" value="https://hooks.slack.com/services/T0000000/B0000000/XXXXXXXXXX"/>
+        </bean>
+        <camelContext xmlns="http://camel.apache.org/schema/blueprint">
+            <route>
+                <from uri="direct:test"/>
+                <to uri="slack:#channel"/>
+            </route>
+        </camelContext>
+    </blueprint>
+
+# Producer
+
+You can now use a token to send a message instead of a webhook URL:
+
+    from("direct:test")
+        .to("slack:#random?token=RAW()");
+
+You can now use the Slack API model to create blocks. You can read more
+about it here [https://api.slack.com/block-kit](https://api.slack.com/block-kit)
+
+    public void testSlackAPIModelMessage() {
+        Message message = new Message();
+        message.setBlocks(Collections.singletonList(SectionBlock
+                .builder()
+                .text(MarkdownTextObject
+                        .builder()
+                        .text("*Hello from Camel!*")
+                        .build())
+                .build()));
+
+        template.sendBody(test, message);
+    }
+
+You’ll need to create a Slack app and use it in your workspace.
+
+For token usage, set the *OAuth Token*.
+
+Add the corresponding (`channels:history`, `chat:write`) user token
+scopes to your app to grant it permission to write messages in the
+corresponding channel. You’ll also need to invite the Bot or User to
+the corresponding channel.
+
+For Bot tokens, you’ll need the following permissions:
+
+- channels:history
+
+- chat:write
+
+For User tokens, you’ll need the following permissions:
+
+- channels:history
+
+- chat:write
+
+# Consumer
+
+You can also use a consumer to receive messages from a channel:
+
+    from("slack://general?token=RAW()&maxResults=1")
+        .to("mock:result");
+
+This way you’ll get the last message from the `general` channel. The
+consumer will track the timestamp of the last message consumed, and in
+the next poll it will consume only newer messages in the channel.
+
+You’ll need to create a Slack app and use it in your workspace.
+
+Use the *User OAuth Token* as the token for the consumer endpoint.
+
+Add the corresponding history (`channels:history`, `groups:history`,
+`mpim:history` and `im:history`) and read (`channels:read`,
+`groups:read`, `mpim:read` and `im:read`) user token scopes to your app
+to grant it permission to view messages in the corresponding channel.
+
+For Bot tokens, you’ll need the following permissions:
+
+- channels:history
+
+- groups:history
+
+- im:history
+
+- mpim:history
+
+- channels:read
+
+- groups:read
+
+- im:read
+
+- mpim:read
+
+For User tokens, you’ll need the following permissions:
+
+- channels:history
+
+- groups:history
+
+- im:history
+
+- mpim:history
+
+- channels:read
+
+- groups:read
+
+- im:read
+
+- mpim:read
+
+The `naturalOrder` option allows consuming messages from the oldest to
+the newest. By default, you would get the newest message first and
+consume backward (`message 3 -> message 2 -> message 1`).
+
+The channel / conversation doesn’t need to be public to read the
+history and messages. Use the `conversationType` option to specify the
+type of the conversation (`PUBLIC_CHANNEL`, `PRIVATE_CHANNEL`, `MPIM`,
+`IM`).
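The consumer's timestamp tracking described above can be modelled roughly as follows. This is an illustrative sketch rather than the component's actual code, with message timestamps represented as plain `double` values:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class SlackPollSketch {
    // Timestamp of the last message consumed; only newer messages are
    // returned on subsequent polls.
    private double lastTimestamp = 0;

    List<Double> poll(List<Double> channelHistory, boolean naturalOrder) {
        List<Double> fresh = new ArrayList<>();
        for (double ts : channelHistory) {
            if (ts > lastTimestamp) {
                fresh.add(ts);
            }
        }
        // naturalOrder=true delivers oldest to newest; otherwise newest first.
        fresh.sort(naturalOrder ? Comparator.<Double>naturalOrder() : Comparator.<Double>reverseOrder());
        if (!fresh.isEmpty()) {
            lastTimestamp = Collections.max(fresh);
        }
        return fresh;
    }

    public static void main(String[] args) {
        SlackPollSketch consumer = new SlackPollSketch();
        System.out.println(consumer.poll(List.of(3.0, 1.0, 2.0), true));      // [1.0, 2.0, 3.0]
        System.out.println(consumer.poll(List.of(3.0, 1.0, 2.0, 4.0), true)); // [4.0]
    }
}
```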
+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean|
+|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean|
+|token|The token to access Slack. This app needs to have channels:history, groups:history, im:history, mpim:history, channels:read, groups:read, im:read and mpim:read permissions. The User OAuth Token is the kind of token needed.||string|
+|webhookUrl|The incoming webhook URL||string|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|channel|The channel name (syntax #name) or Slack user (syntax @userID) to send a message directly to a user.||string|
+|token|The token to access Slack. This app needs to have channels:history, groups:history, im:history, mpim:history, channels:read, groups:read, im:read and mpim:read permissions.
The User OAuth Token is the kind of token needed.||string| +|conversationType|Type of conversation|PUBLIC\_CHANNEL|object| +|maxResults|The Max Result for the poll|10|string| +|naturalOrder|Create exchanges in natural order (oldest to newest) or not|false|boolean| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|serverUrl|The Server URL of the Slack instance|https://slack.com|string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and routed in Camel.||object|
+|iconEmoji|Use a Slack emoji as an avatar||string|
+|iconUrl|The avatar that the component will use when sending messages to a channel or user.||string|
+|username|This is the username that the bot will have when sending messages to a channel or user.||string|
+|webhookUrl|The incoming webhook URL||string|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|backoffErrorThreshold|The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in.||integer|
+|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier should kick in.||integer|
+|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again.
When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|10000|duration| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| diff --git a/camel-smb.md b/camel-smb.md new file mode 100644 index 0000000000000000000000000000000000000000..9d7dcce9c0239de914f527d0e038a43f268710d6 --- /dev/null +++ b/camel-smb.md @@ -0,0 +1,91 @@ +# Smb + +**Since Camel 4.3** + +**Both producer and consumer are supported** + +The Server Message Block (SMB) component provides a way to connect +natively to SMB file shares, such as those provided by Microsoft Windows +or [Samba](https://www.samba.org/). 
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+ + + org.apache.camel + camel-smb + x.x.x + + + +# URI format + + smb:address[:port]/shareName[?options] + +# Examples + +For instance, polling all the files from an SMB file share and reading +their contents would look like this: + + private void process(Exchange exchange) throws IOException { + final File file = exchange.getMessage().getBody(File.class); + try (InputStream inputStream = file.getInputStream()) { + LOG.debug("Read exchange: {}, with contents: {}", file.getFileInformation(), new String(inputStream.readAllBytes())); + } + } + + public void configure() { + fromF("smb:%s/%s?username=%s&password=%s&path=/", service.address(), service.shareName(), service.userName(), service.password()) + .process(this::process) + .to("mock:result"); + } + +Beware that the File object provided is not a `java.io.File` instance, +but, instead a `com.hierynomus.smbj.share.File` instance. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|hostname|The share hostname or IP address||string| +|port|The share port number|445|integer| +|shareName|The name of the share to connect to.||string| +|path|The path, within the share, to consume the files from||string| +|searchPattern|The search pattern used to list the files|\*.txt|string| +|recursive|If a directory, will look for files in all the sub-directories as well.|false|boolean| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. 
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object|
+|autoCreate|Whether to create parent directory if it does not exist|false|boolean|
+|fileExist|What action to take if the SMB file already exists|Ignore|object|
+|readBufferSize|Read buffer size for the file being produced|2048|integer|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|idempotentRepository|A pluggable repository org.apache.camel.spi.IdempotentRepository which by default use MemoryIdempotentRepository if none is specified.||object| +|smbConfig|An optional SMB client configuration, can be used to configure client specific configurations, like timeouts||object| +|smbIoBean|An optional SMB I/O bean to use to setup the file access attributes when reading/writing a file||object| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. 
This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| +|domain|The user domain||string| +|password|The password to access the share||string| +|username|The username required to access the share||string| diff --git a/camel-smooks.md b/camel-smooks.md new file mode 100644 index 0000000000000000000000000000000000000000..e300bb2949660d8d38cd5ed0a7da0d4ecf1464a6 --- /dev/null +++ b/camel-smooks.md @@ -0,0 +1,69 @@ +# Smooks + +**Since Camel 4.7** + +**Both producer and consumer are supported** + +The Camel Smooks component uses [Smooks](https://www.smooks.org/) to +break up the structured data (EDI, CSV, POJO, etc…) of a Camel message +body into fragments. These fragments can be processed independently of +one another from within Smooks. + +Common applications of Smooks include: + +- transformation (e.g., EDI to CSV, POJO to EDI, POJO to XML) + +- routing (e.g., split, transform, and route fragments to destinations + such as JMS queues, file systems, and databases) + +- enrichment (e.g., enriching a fragment with data from a database). + +Maven users will need to add the following dependency to their +`pom.xml`. 
+ + + org.apache.camel + camel-smooks + x.x.x + + + +# URI Format + + smooks://smooks-config-path[?options] + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|smooksConfig|Smooks XML configuration file||string| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. 
By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| diff --git a/camel-smpp.md b/camel-smpp.md new file mode 100644 index 0000000000000000000000000000000000000000..541d9320b70b33156d0e296f57ae88941bbd4da0 --- /dev/null +++ b/camel-smpp.md @@ -0,0 +1,348 @@ +# Smpp + +**Since Camel 2.2** + +**Both producer and consumer are supported** + +This component provides access to an SMSC (Short Message Service Center) +over the [SMPP](http://smsforum.net/SMPP_v3_4_Issue1_2.zip) protocol to +send and receive SMS. The [JSMPP](http://jsmpp.org) library is used for +the protocol implementation. + +The version of the SMPP protocol specification is 3.4 by default and can +be set using the component configuration options (field +"interfaceVersion"). + +The Camel component currently operates as an +[ESME](http://en.wikipedia.org/wiki/ESME) (External Short Messaging +Entity) and not as an SMSC itself. + +You are also able to execute `ReplaceSm`, `QuerySm`, `SubmitMulti`, +`CancelSm`, and `DataSm`. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-smpp + x.x.x + + + +# SMS limitations + +SMS is neither reliable nor secure. Users who require reliable and +secure delivery may want to consider using the XMPP or SIP components +instead, combined with a smartphone app supporting the chosen protocol. 
+ +- Reliability: although the SMPP standard offers a range of feedback + mechanisms to indicate errors, non-delivery and confirmation of + delivery, it is not uncommon for mobile networks to hide or simulate + these responses. For example, some networks automatically send a + delivery confirmation for every message even if the destination + number is invalid or not switched on. Some networks silently drop + messages if they think they are spam. Spam detection rules in the + network may be very crude, sometimes more than 100 messages per day + from a single sender may be considered spam. + +- Security: there is basic encryption for the last hop from the radio + tower down to the recipient handset. SMS messages are not encrypted + or authenticated in any other part of the network. Some operators + allow staff in retail outlets or call centres to browse through the + SMS message histories of their customers. Message sender identity + can be easily forged. Regulators and even the mobile telephone + industry itself have cautioned against the use of SMS in two-factor + authentication schemes and other purposes where security is + important. + +While the Camel component makes it as easy as possible to send messages +to the SMS network, it cannot offer an easy solution to these problems. + +# Data coding, alphabet and international character sets + +Data coding and alphabet can be specified on a per-message basis. +Default values can be specified for the endpoint. It is important to +understand the relationship between these options and the way the +component acts when more than one value is set. + +Data coding is an 8 bit field in the SMPP wire format. + +The alphabet corresponds to bits 0–3 of the data coding field. For some +types of message, where a message class is used (by setting bit 5 of the +data coding field), the lower two bits of the data coding field are not +interpreted as alphabet. Only bits 2 and 3 impact the alphabet. 
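The bit layout just described can be sketched in code (an illustration only, with our own helper and constant names rather than a Camel or JSMPP API; the constant values are the data coding values from SMPP 3.4, section 5.2.19):

```java
// Illustrative sketch of the SMPP data_coding octet (SMPP 3.4, section 5.2.19).
// Names are ours, not part of Camel or JSMPP.
public class DataCodingSketch {
    // Alphabet values occupy the low bits of data_coding when no message class is set.
    static final byte ALPHA_DEFAULT = 0x00; // SMSC default alphabet (usually GSM 3.38)
    static final byte ALPHA_8BIT    = 0x04; // 8-bit data
    static final byte ALPHA_UCS2    = 0x08; // UCS-2 (ISO/IEC 10646)

    // When bit 5 (0x20) is set, bits 0-1 carry the message class and only
    // bits 2-3 still select the alphabet -- which is why JSMPP cannot
    // express alphabet value 3 (binary 0011, ISO-8859-1) in that mode.
    static byte withMessageClass(byte alphabet, int messageClass) {
        return (byte) (0x20 | (alphabet & 0x0C) | (messageClass & 0x03));
    }

    public static void main(String[] args) {
        System.out.println(withMessageClass(ALPHA_UCS2, 1)); // 0x29 = 41
    }
}
```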
+ +Furthermore, the current version of the JSMPP library only seems to +support bits 2 and 3, assuming that bits 0 and 1 are used for message +class. This is why the Alphabet class in JSMPP doesn’t support the value +3 (binary 0011) which indicates ISO-8859-1. + +Although JSMPP provides a representation of the message class parameter, +the Camel component doesn’t currently provide a way to set it other than +manually setting the corresponding bits in the data coding field. + +When setting the data coding field in the outgoing message, the Camel +component considers the following values and uses the first one it can +find: + +- the data coding specified in a header + +- the alphabet specified in a header + +- the data coding specified in the endpoint configuration (URI + parameter) + +In addition to trying to send the data coding value to the SMSC, the +Camel component also tries to analyze the message body, converts it to a +Java String (Unicode) and converts that to a byte array in the +corresponding alphabet. When deciding which alphabet to use in the byte +array, the Camel SMPP component does not consider the data coding value +(header or configuration), it only considers the specified alphabet +(from either the header or endpoint parameter). + +If some characters in the String cannot be represented in the chosen +alphabet, they may be replaced by the question mark (`?`) symbol. Users +of the API may want to consider checking if their message body can be +converted to ISO-8859-1 before passing it to the component and if not, +setting the alphabet header to request UCS-2 encoding. If the alphabet +and data coding options are not specified at all, then the component may +try to detect the required encoding and set the data coding for you. + +The list of alphabet codes is specified in the SMPP specification v3.4, +section 5.2.19. 
One notable limitation of the SMPP specification is that
+there is no alphabet code for explicitly requesting use of the GSM 3.38
+(7-bit) character set. Choosing `0` for the alphabet selects the SMSC
+*default* alphabet; this usually means GSM 3.38, but it is not
+guaranteed. The SMPP gateway Nexmo [actually allows the default to be
+mapped to any other character set with a control panel
+option](https://help.nexmo.com/hc/en-us/articles/204015813-How-to-change-the-character-encoding-in-SMPP-).
+It is suggested that users check with their SMSC operator to confirm
+exactly which character set is being used as the default.
+
+# Message splitting and throttling
+
+After transforming a message body from a String to a byte array, the
+Camel component is also responsible for splitting the message into parts
+(within the 140-byte SMS size limit) before passing it to JSMPP. This is
+completed automatically.
+
+If the GSM 3.38 alphabet is used, the component will pack up to 160
+characters into the 140-byte message body. If an 8-bit character set is
+used (e.g., ISO-8859-1 for Western Europe), then 140 characters will be
+allowed within the 140-byte message body. If 16-bit UCS-2 encoding is
+used, then just 70 characters fit into each 140-byte message.
+
+Some SMSC providers implement throttling rules. Each part of a message
+that has been split may be counted separately by the provider’s
+throttling mechanism. The Camel Throttler component can be useful for
+throttling messages in the SMPP route before handing them to the SMSC.
+
+# URI format
+
+    smpp://[username@]hostname[:port][?options]
+    smpps://[username@]hostname[:port][?options]
+
+If no **username** is provided, then Camel will provide the default
+value `smppclient`.
+If no **port** number is provided, then Camel will provide the default
+value `2775`.
+If the protocol name is "smpps", camel-smpp will try to use SSLSocket to
+initiate a connection to the server.
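The per-alphabet capacities quoted in the message splitting section above can be sanity-checked with a small estimator (a sketch of our own, not a Camel API; it also ignores the few bytes per part that real multipart SMS loses to the UDH concatenation header, so the component may produce slightly more parts):

```java
// Rough per-part count from the figures above:
// 160 GSM 3.38 / 140 eight-bit / 70 UCS-2 characters per 140-byte body.
public class SmsSegmentEstimate {

    // Ceiling division: how many parts a text of the given length
    // needs at the given per-part character capacity.
    static int parts(int chars, int charsPerPart) {
        return Math.max(1, (chars + charsPerPart - 1) / charsPerPart);
    }

    public static void main(String[] args) {
        System.out.println(parts(160, 160)); // GSM 3.38: fits in one part -> 1
        System.out.println(parts(200, 160)); // GSM 3.38: two parts -> 2
        System.out.println(parts(200, 70));  // UCS-2: three parts -> 3
    }
}
```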
+
+**JSMPP library**
+
+See the documentation of the [JSMPP Library](http://jsmpp.org) for more
+details about the underlying library.
+
+# Exception handling
+
+This component supports the general Camel exception handling
+capabilities.
+
+When an error occurs sending a message with SubmitSm (the default
+action), the org.apache.camel.component.smpp.SmppException is thrown
+with a nested exception, org.jsmpp.extra.NegativeResponseException. Call
+NegativeResponseException.getCommandStatus() to obtain the exact SMPP
+negative response code; the values are explained in the SMPP
+specification 3.4, section 5.1.3.
+When the SMPP consumer receives a `DeliverSm` or `DataSm` short message
+and the processing of these messages fails, you can also throw a
+`ProcessRequestException` instead of handling the failure. In this case,
+this exception is forwarded to the underlying [JSMPP
+library](http://jsmpp.org) which will return the included error code to
+the SMSC. This feature is useful, e.g., to instruct the SMSC to resend
+the short message at a later time. This could be done with the following
+lines of code:
+
+    from("smpp://smppclient@localhost:2775?password=password&enquireLinkTimer=3000&transactionTimer=5000&systemType=consumer")
+        .doTry()
+            .to("bean:dao?method=updateSmsState")
+        .doCatch(Exception.class)
+            .throwException(new ProcessRequestException("update of sms state failed", 100))
+        .end();
+
+Please refer to the [SMPP
+specification](http://smsforum.net/SMPP_v3_4_Issue1_2.zip) for the
+complete list of error codes and their meanings.
+
+# Samples
+
+A route which sends an SMS using the Java DSL:
+
+    from("direct:start")
+        .to("smpp://smppclient@localhost:2775?
+ password=password&enquireLinkTimer=3000&transactionTimer=5000&systemType=producer"); + +A route which sends an SMS using the Spring XML DSL: + + + + + + +A route which receives an SMS using the Java DSL: + + from("smpp://smppclient@localhost:2775?password=password&enquireLinkTimer=3000&transactionTimer=5000&systemType=consumer") + .to("bean:foo"); + +A route which receives an SMS using the Spring XML DSL: + + + + + + +An example of using transceiver (TRX) binding type: + + from("direct:start") + .to("smpp://j@localhost:8056?password=jpwd&systemType=producer" + + "&messageReceiverRouteId=sampleMessageReceiverRouteId"); + + from("direct:messageReceiver").id("sampleMessageReceiverRouteId") + .to("bean:foo"); + +Please note that with TRX binding type, you wouldn’t define a +corresponding redundant SMPP consumer. Camel will use the specified +route by `messageReceiverRouteId` as the corresponding consumer. +Internally, it uses one and same SmppSession as producer for the +provided consumer. + +When the SMPP Server doesn’t support TRX, then you have to define +separate producer (TX by default) and consumer (RX by default). + +**SMSC simulator** + +If you need an SMSC simulator for your test, you can use the simulator +provided by +[JSMPP](https://github.com/opentelecoms-org/jsmpp/wiki/GettingStarted#running-smpp-server). + +# Debug logging + +This component has log level **DEBUG**, which can be helpful in +debugging problems. 
If you use log4j, you can add the following line to
+your configuration:
+
+    log4j.logger.org.apache.camel.component.smpp=DEBUG
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|initialReconnectDelay|Defines the initial delay in milliseconds after the consumer/producer tries to reconnect to the SMSC, after the connection was lost.|5000|integer|
+|maxReconnect|Defines the maximum number of attempts to reconnect to the SMSC, if the SMSC returns a negative bind response.|2147483647|integer|
+|reconnectDelay|Defines the interval in milliseconds between the reconnect attempts, if the connection to the SMSC was lost and the previous attempt did not succeed.|5000|integer|
+|splittingPolicy|You can specify a policy for handling long messages: ALLOW - the default, long messages are split to 140 bytes per message. TRUNCATE - long messages are split and only the first fragment will be sent to the SMSC. Some carriers drop subsequent fragments, so this reduces load on the SMPP connection sending parts of a message that will never be delivered. REJECT - if a message would need to be split, it is rejected with an SMPP NegativeResponseException and the reason code signifying the message is too long.|ALLOW|object|
+|systemType|This parameter is used to categorize the type of ESME (External Short Message Entity) that is binding to the SMSC (max. 13 characters).||string|
+|addressRange|You can specify the address range for the SmppConsumer as defined in section 5.2.7 of the SMPP 3.4 specification. The SmppConsumer will receive messages only from SMSC's which target an address (MSISDN or IP address) within this range.||string|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler.
Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|destAddr|Defines the destination SME address. For mobile terminated messages, this is the directory number of the recipient MS. Only for SubmitSm, SubmitMulti, CancelSm and DataSm.|1717|string| +|destAddrNpi|Defines the type of number (TON) to be used in the SME destination address parameters. Only for SubmitSm, SubmitMulti, CancelSm and DataSm. The following NPI values are defined: 0: Unknown 1: ISDN (E163/E164) 2: Data (X.121) 3: Telex (F.69) 6: Land Mobile (E.212) 8: National 9: Private 10: ERMES 13: Internet (IP) 18: WAP Client Id (to be defined by WAP Forum)||integer| +|destAddrTon|Defines the type of number (TON) to be used in the SME destination address parameters. Only for SubmitSm, SubmitMulti, CancelSm and DataSm. The following TON values are defined: 0: Unknown 1: International 2: National 3: Network Specific 4: Subscriber Number 5: Alphanumeric 6: Abbreviated||integer| +|lazySessionCreation|Sessions can be lazily created to avoid exceptions, if the SMSC is not available when the Camel producer is started. Camel will check the in message headers 'CamelSmppSystemId' and 'CamelSmppPassword' of the first exchange. If they are present, Camel will use these data to connect to the SMSC.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|messageReceiverRouteId|Set this on the producer in order to benefit from transceiver (TRX) binding type. So once set, you don't need to define an 'SMPP consumer' endpoint anymore. You would set this to a 'Direct consumer' endpoint instead. DISCLAIMER: This feature is only tested with 'Direct consumer' endpoint. The behavior with any other consumer type is unknown and not tested.||string|
+|numberingPlanIndicator|Defines the numeric plan indicator (NPI) to be used in the SME. The following NPI values are defined: 0: Unknown 1: ISDN (E163/E164) 2: Data (X.121) 3: Telex (F.69) 6: Land Mobile (E.212) 8: National 9: Private 10: ERMES 13: Internet (IP) 18: WAP Client Id (to be defined by WAP Forum)||integer|
+|priorityFlag|Allows the originating SME to assign a priority level to the short message. Only for SubmitSm and SubmitMulti. Four Priority Levels are supported: 0: Level 0 (lowest) priority 1: Level 1 priority 2: Level 2 priority 3: Level 3 (highest) priority||integer|
+|protocolId|The protocol id||integer|
+|registeredDelivery|Is used to request an SMSC delivery receipt and/or SME originated acknowledgements. The following values are defined: 0: No SMSC delivery receipt requested. 1: SMSC delivery receipt requested where final delivery outcome is success or failure.
2: SMSC delivery receipt requested where the final delivery outcome is delivery failure.||integer| +|replaceIfPresentFlag|Used to request the SMSC to replace a previously submitted message, that is still pending delivery. The SMSC will replace an existing message provided that the source address, destination address and service type match the same fields in the new message. The following replace if present flag values are defined: 0: Don't replace 1: Replace||integer| +|serviceType|The service type parameter can be used to indicate the SMS Application service associated with the message. The following generic service\_types are defined: CMT: Cellular Messaging CPT: Cellular Paging VMN: Voice Mail Notification VMA: Voice Mail Alerting WAP: Wireless Application Protocol USSD: Unstructured Supplementary Services Data||string| +|sourceAddr|Defines the address of SME (Short Message Entity) which originated this message.|1616|string| +|sourceAddrNpi|Defines the numeric plan indicator (NPI) to be used in the SME originator address parameters. The following NPI values are defined: 0: Unknown 1: ISDN (E163/E164) 2: Data (X.121) 3: Telex (F.69) 6: Land Mobile (E.212) 8: National 9: Private 10: ERMES 13: Internet (IP) 18: WAP Client Id (to be defined by WAP Forum)||integer| +|sourceAddrTon|Defines the type of number (TON) to be used in the SME originator address parameters. The following TON values are defined: 0: Unknown 1: International 2: National 3: Network Specific 4: Subscriber Number 5: Alphanumeric 6: Abbreviated||integer| +|typeOfNumber|Defines the type of number (TON) to be used in the SME. The following TON values are defined: 0: Unknown 1: International 2: National 3: Network Specific 4: Subscriber Number 5: Alphanumeric 6: Abbreviated||integer| +|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|configuration|To use the shared SmppConfiguration as configuration.||object|
+|enquireLinkTimer|Defines the interval in milliseconds between the confidence checks. The confidence check is used to test the communication path between an ESME and an SMSC.|60000|integer|
+|interfaceVersion|Defines the interface version to be used in the binding request with the SMSC. The following values are allowed, as defined in the SMPP protocol (and the underlying implementation using the jSMPP library, respectively): legacy (0x00), 3.3 (0x33), 3.4 (0x34), and 5.0 (0x50). The default (fallback) value is version 3.4.|3.4|string|
+|pduProcessorDegree|Sets the number of threads which can read PDUs and process them in parallel.|3|integer|
+|pduProcessorQueueCapacity|Sets the capacity of the working queue for PDU processing.|100|integer|
+|sessionStateListener|You can refer to an org.jsmpp.session.SessionStateListener in the Registry to receive callbacks when the session state changes.||object|
+|singleDLR|When true, the SMSC delivery receipt will be requested only for the last segment of a multi-segment (long) message. For short messages, with only one segment, the behaviour is unchanged.|false|boolean|
+|transactionTimer|Defines the maximum period of inactivity allowed after a transaction, after which an SMPP entity may assume that the session is no longer active. This timer may be active on either communicating SMPP entity (i.e. SMSC or ESME).|10000|integer|
+|alphabet|Defines the encoding of data according to the SMPP 3.4 specification, section 5.2.19. 
0: SMSC Default Alphabet 4: 8 bit Alphabet 8: UCS2 Alphabet||integer|
+|dataCoding|Defines the data coding according to the SMPP 3.4 specification, section 5.2.19. Example data encodings are: 0: SMSC Default Alphabet 3: Latin 1 (ISO-8859-1) 4: Octet unspecified (8-bit binary) 8: UCS2 (ISO/IEC-10646) 13: Extended Kanji JIS(X 0212-1990)||integer|
+|encoding|Defines the encoding scheme of the short message user data. Only for SubmitSm, ReplaceSm and SubmitMulti.|ISO-8859-1|string|
+|httpProxyHost|If you need to tunnel SMPP through an HTTP proxy, set this attribute to the hostname or IP address of your HTTP proxy.||string|
+|httpProxyPassword|If your HTTP proxy requires basic authentication, set this attribute to the password required for your HTTP proxy.||string|
+|httpProxyPort|If you need to tunnel SMPP through an HTTP proxy, set this attribute to the port of your HTTP proxy.|3128|integer|
+|httpProxyUsername|If your HTTP proxy requires basic authentication, set this attribute to the username required for your HTTP proxy.||string|
+|proxyHeaders|These headers will be passed to the proxy server while establishing the connection.||object|
+|password|The password for connecting to the SMSC server.||string|
+|systemId|The system id (username) for connecting to the SMSC server.|smppclient|string|
+|usingSSL|Whether to use SSL with the smpps protocol.|false|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|host|Hostname for the SMSC server to use.|localhost|string|
+|port|Port number for the SMSC server to use.|2775|integer|
+|initialReconnectDelay|Defines the initial delay in milliseconds after the consumer/producer tries to reconnect to the SMSC, after the connection was lost.|5000|integer|
+|maxReconnect|Defines the maximum number of attempts to reconnect to the SMSC, if the SMSC returns a negative bind response.|2147483647|integer|
+|reconnectDelay|Defines the interval in milliseconds between the reconnect attempts, if the connection to the 
SMSC was lost and the previous attempt did not succeed.|5000|integer|
+|splittingPolicy|You can specify a policy for handling long messages: ALLOW - the default; long messages are split to 140 bytes per message. TRUNCATE - long messages are split and only the first fragment will be sent to the SMSC. Some carriers drop subsequent fragments, so this reduces the load on the SMPP connection by not sending parts of a message that will never be delivered. REJECT - if a message would need to be split, it is rejected with an SMPP NegativeResponseException and a reason code signifying that the message is too long.|ALLOW|object|
+|systemType|This parameter is used to categorize the type of ESME (External Short Message Entity) that is binding to the SMSC (max. 13 characters).||string|
+|addressRange|You can specify the address range for the SmppConsumer as defined in section 5.2.7 of the SMPP 3.4 specification. The SmppConsumer will receive messages only from SMSC's which target an address (MSISDN or IP address) within this range.||string|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|destAddr|Defines the destination SME address. For mobile terminated messages, this is the directory number of the recipient MS. Only for SubmitSm, SubmitMulti, CancelSm and DataSm.|1717|string|
+|destAddrNpi|Defines the numbering plan indicator (NPI) to be used in the SME destination address parameters. Only for SubmitSm, SubmitMulti, CancelSm and DataSm. The following NPI values are defined: 0: Unknown 1: ISDN (E163/E164) 2: Data (X.121) 3: Telex (F.69) 6: Land Mobile (E.212) 8: National 9: Private 10: ERMES 13: Internet (IP) 18: WAP Client Id (to be defined by WAP Forum)||integer|
+|destAddrTon|Defines the type of number (TON) to be used in the SME destination address parameters. Only for SubmitSm, SubmitMulti, CancelSm and DataSm. The following TON values are defined: 0: Unknown 1: International 2: National 3: Network Specific 4: Subscriber Number 5: Alphanumeric 6: Abbreviated||integer|
+|lazySessionCreation|Sessions can be lazily created to avoid exceptions, if the SMSC is not available when the Camel producer is started. Camel will check the in message headers 'CamelSmppSystemId' and 'CamelSmppPassword' of the first exchange. If they are present, Camel will use this data to connect to the SMSC.|false|boolean|
+|messageReceiverRouteId|Set this on the producer in order to benefit from the transceiver (TRX) binding type. Once set, you no longer need to define an 'SMPP consumer' endpoint; set this to a 'Direct consumer' endpoint instead. DISCLAIMER: This feature is only tested with the 'Direct consumer' endpoint. The behavior with any other consumer type is unknown and not tested.||string|
+|numberingPlanIndicator|Defines the numeric plan indicator (NPI) to be used in the SME. 
The following NPI values are defined: 0: Unknown 1: ISDN (E163/E164) 2: Data (X.121) 3: Telex (F.69) 6: Land Mobile (E.212) 8: National 9: Private 10: ERMES 13: Internet (IP) 18: WAP Client Id (to be defined by WAP Forum)||integer| +|priorityFlag|Allows the originating SME to assign a priority level to the short message. Only for SubmitSm and SubmitMulti. Four Priority Levels are supported: 0: Level 0 (lowest) priority 1: Level 1 priority 2: Level 2 priority 3: Level 3 (highest) priority||integer| +|protocolId|The protocol id||integer| +|registeredDelivery|Is used to request an SMSC delivery receipt and/or SME originated acknowledgements. The following values are defined: 0: No SMSC delivery receipt requested. 1: SMSC delivery receipt requested where final delivery outcome is success or failure. 2: SMSC delivery receipt requested where the final delivery outcome is delivery failure.||integer| +|replaceIfPresentFlag|Used to request the SMSC to replace a previously submitted message, that is still pending delivery. The SMSC will replace an existing message provided that the source address, destination address and service type match the same fields in the new message. The following replace if present flag values are defined: 0: Don't replace 1: Replace||integer| +|serviceType|The service type parameter can be used to indicate the SMS Application service associated with the message. The following generic service\_types are defined: CMT: Cellular Messaging CPT: Cellular Paging VMN: Voice Mail Notification VMA: Voice Mail Alerting WAP: Wireless Application Protocol USSD: Unstructured Supplementary Services Data||string| +|sourceAddr|Defines the address of SME (Short Message Entity) which originated this message.|1616|string| +|sourceAddrNpi|Defines the numeric plan indicator (NPI) to be used in the SME originator address parameters. 
The following NPI values are defined: 0: Unknown 1: ISDN (E163/E164) 2: Data (X.121) 3: Telex (F.69) 6: Land Mobile (E.212) 8: National 9: Private 10: ERMES 13: Internet (IP) 18: WAP Client Id (to be defined by WAP Forum)||integer| +|sourceAddrTon|Defines the type of number (TON) to be used in the SME originator address parameters. The following TON values are defined: 0: Unknown 1: International 2: National 3: Network Specific 4: Subscriber Number 5: Alphanumeric 6: Abbreviated||integer| +|typeOfNumber|Defines the type of number (TON) to be used in the SME. The following TON values are defined: 0: Unknown 1: International 2: National 3: Network Specific 4: Subscriber Number 5: Alphanumeric 6: Abbreviated||integer| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|enquireLinkTimer|Defines the interval in milliseconds between the confidence checks. The confidence check is used to test the communication path between an ESME and an SMSC.|60000|integer| +|interfaceVersion|Defines the interface version to be used in the binding request with the SMSC. The following values are allowed, as defined in the SMPP protocol (and the underlying implementation using the jSMPP library, respectively): legacy (0x00), 3.3 (0x33), 3.4 (0x34), and 5.0 (0x50). 
The default (fallback) value is version 3.4.|3.4|string| +|pduProcessorDegree|Sets the number of threads which can read PDU and process them in parallel.|3|integer| +|pduProcessorQueueCapacity|Sets the capacity of the working queue for PDU processing.|100|integer| +|sessionStateListener|You can refer to a org.jsmpp.session.SessionStateListener in the Registry to receive callbacks when the session state changed.||object| +|singleDLR|When true, the SMSC delivery receipt would be requested only for the last segment of a multi-segment (long) message. For short messages, with only 1 segment the behaviour is unchanged.|false|boolean| +|transactionTimer|Defines the maximum period of inactivity allowed after a transaction, after which an SMPP entity may assume that the session is no longer active. This timer may be active on either communicating SMPP entity (i.e. SMSC or ESME).|10000|integer| +|alphabet|Defines encoding of data according the SMPP 3.4 specification, section 5.2.19. 0: SMSC Default Alphabet 4: 8 bit Alphabet 8: UCS2 Alphabet||integer| +|dataCoding|Defines the data coding according the SMPP 3.4 specification, section 5.2.19. Example data encodings are: 0: SMSC Default Alphabet 3: Latin 1 (ISO-8859-1) 4: Octet unspecified (8-bit binary) 8: UCS2 (ISO/IEC-10646) 13: Extended Kanji JIS(X 0212-1990)||integer| +|encoding|Defines the encoding scheme of the short message user data. 
Only for SubmitSm, ReplaceSm and SubmitMulti.|ISO-8859-1|string|
+|httpProxyHost|If you need to tunnel SMPP through an HTTP proxy, set this attribute to the hostname or IP address of your HTTP proxy.||string|
+|httpProxyPassword|If your HTTP proxy requires basic authentication, set this attribute to the password required for your HTTP proxy.||string|
+|httpProxyPort|If you need to tunnel SMPP through an HTTP proxy, set this attribute to the port of your HTTP proxy.|3128|integer|
+|httpProxyUsername|If your HTTP proxy requires basic authentication, set this attribute to the username required for your HTTP proxy.||string|
+|proxyHeaders|These headers will be passed to the proxy server while establishing the connection.||object|
+|password|The password for connecting to the SMSC server.||string|
+|systemId|The system id (username) for connecting to the SMSC server.|smppclient|string|
+|usingSSL|Whether to use SSL with the smpps protocol.|false|boolean|
diff --git a/camel-snmp.md b/camel-snmp.md
new file mode 100644
index 0000000000000000000000000000000000000000..db3c0a6499fa650b34f45de0b46336cb503aba5f
--- /dev/null
+++ b/camel-snmp.md
@@ -0,0 +1,160 @@
+# Snmp
+
+**Since Camel 2.1**
+
+**Both producer and consumer are supported**
+
+The SNMP component gives you the ability to poll SNMP-capable devices
+or receive traps.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-snmp</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    snmp://hostname[:port][?Options]
+
+The component supports polling OID values from an SNMP enabled device
+and receiving traps.
+
+# Snmp Producer
+
+The producer can also be used to request information using the GET method.
+
+The response body type is `org.apache.camel.component.snmp.SnmpMessage`. 
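The `String` form of that body can be picked apart with plain JDK tooling. The sketch below assumes the `snmp` root and `entry`/`oid`/`value` element layout shown in "The result of a poll" section; verify those names against the output of your Camel version before relying on this.

```java
import java.io.StringReader;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

/** Collects the OID/value pairs of a polled SNMP result into a map. */
public class SnmpResultParser {

    public static Map<String, String> parse(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        Map<String, String> result = new LinkedHashMap<>();
        NodeList entries = doc.getElementsByTagName("entry");
        for (int i = 0; i < entries.getLength(); i++) {
            Element entry = (Element) entries.item(i);
            result.put(
                entry.getElementsByTagName("oid").item(0).getTextContent(),
                entry.getElementsByTagName("value").item(0).getTextContent());
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical sample in the shape of the poll result shown below.
        String sample = "<snmp><entry><oid>1.3.6.1.2.1.1.3.0</oid>"
                + "<value>6 days, 21:14:28.00</value></entry></snmp>";
        System.out.println(parse(sample));
    }
}
```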
+
+# The result of a poll
+
+Say that we poll for the following OIDs:
+
+**OIDs**
+
+    1.3.6.1.2.1.1.3.0
+    1.3.6.1.2.1.25.3.2.1.5.1
+    1.3.6.1.2.1.25.3.5.1.1.1
+    1.3.6.1.2.1.43.5.1.1.11.1
+
+The result will be the following:
+
+**Result of toString conversion**
+
+    <?xml version="1.0" encoding="UTF-8"?>
+    <snmp>
+      <entry>
+        <oid>1.3.6.1.2.1.1.3.0</oid>
+        <value>6 days, 21:14:28.00</value>
+      </entry>
+      <entry>
+        <oid>1.3.6.1.2.1.25.3.2.1.5.1</oid>
+        <value>2</value>
+      </entry>
+      <entry>
+        <oid>1.3.6.1.2.1.25.3.5.1.1.1</oid>
+        <value>3</value>
+      </entry>
+      <entry>
+        <oid>1.3.6.1.2.1.43.5.1.1.11.1</oid>
+        <value>6</value>
+      </entry>
+      <entry>
+        <oid>1.3.6.1.2.1.1.1.0</oid>
+        <value>My Very Special Printer Of Brand Unknown</value>
+      </entry>
+    </snmp>
+
+As you may have noticed, there is one more result than requested:
+`1.3.6.1.2.1.1.1.0`. The device fills in this one automatically in this
+special case. So it may absolutely happen that you receive more than
+you requested. Be prepared.
+
+**OID starting with dot representation**
+
+    .1.3.6.1.4.1.6527.3.1.2.21.2.1.50
+
+Note that the default `snmpVersion` is 0, which means SNMPv1, if it is
+not set explicitly on the endpoint. Make sure you set `snmpVersion`
+explicitly when you need to query SNMP tables with a different version.
+Other possible values are 1 (SNMPv2c) and 3 (SNMPv3).
+
+# Examples
+
+Polling a remote device:
+
+    snmp:192.168.178.23:161?protocol=udp&type=POLL&oids=1.3.6.1.2.1.1.5.0
+
+Setting up a trap receiver (**Note that no OID info is needed here!**):
+
+    snmp:127.0.0.1:162?protocol=udp&type=TRAP
+
+You can get the community of the SNMP TRAP with the message header
+`securityName`, and the peer address of the SNMP TRAP with the message
+header `peerAddress`.
+
+Routing example in Java (converts the SNMP PDU to an XML String):
+
+    from("snmp:192.168.178.23:161?protocol=udp&type=POLL&oids=1.3.6.1.2.1.1.5.0").
+        convertBodyTo(String.class). 
+        to("activemq:snmp.states");
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component.|true|boolean|
+|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|host|Hostname of the SNMP enabled device||string|
+|port|Port number of the SNMP enabled device||integer|
+|oids|Defines which values you are interested in. Please have a look at Wikipedia to get a better understanding. You may provide a single OID or a comma-separated list of OIDs. Example: oids=1.3.6.1.2.1.1.3.0,1.3.6.1.2.1.25.3.2.1.5.1,1.3.6.1.2.1.25.3.5.1.1.1,1.3.6.1.2.1.43.5.1.1.11.1||object|
+|protocol|Here you can select which protocol to use. You can use either udp or tcp.|udp|string|
+|retries|Defines how often a retry is made before canceling the request.|2|integer|
+|snmpCommunity|Sets the community octet string for the snmp request.|public|string|
+|snmpContextEngineId|Sets the context engine ID field of the scoped PDU.||string|
+|snmpContextName|Sets the context name field of this scoped PDU.||string|
+|snmpVersion|Sets the snmp version for the request. 
The value 0 means SNMPv1, 1 means SNMPv2c, and 3 means SNMPv3.|0|integer|
+|timeout|Sets the timeout value for the request in millis.|1500|integer|
+|type|Which operation to perform such as poll, trap, etc.||object|
+|delay|Milliseconds before the next poll.|60000|duration|
+|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean|
+|treeList|Sets the flag whether the scoped PDU will be displayed as the list if it has child elements in its tree.|false|boolean|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and routed in Camel.||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|backoffErrorThreshold|The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in.||integer|
+|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier should kick in.||integer|
+|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. 
When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer|
+|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean|
+|initialDelay|Milliseconds before the first poll starts.|1000|integer|
+|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer|
+|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object|
+|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object|
+|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for the built-in scheduler.|none|object|
+|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object|
+|startScheduler|Whether the scheduler should be auto started.|true|boolean|
+|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object|
+|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean|
+|authenticationPassphrase|The authentication passphrase. If not null, authenticationProtocol must also be not null. RFC3414 11.2 requires passphrases to have a minimum length of 8 bytes. If the length of authenticationPassphrase is less than 8 bytes an IllegalArgumentException is thrown.||string|
+|authenticationProtocol|Authentication protocol to use if security level is set to enable authentication. The possible values are: MD5, SHA1||string|
+|privacyPassphrase|The privacy passphrase. 
If not null, privacyProtocol must also be not null. RFC3414 11.2 requires passphrases to have a minimum length of 8 bytes. If the length of privacyPassphrase is less than 8 bytes an IllegalArgumentException is thrown.||string|
+|privacyProtocol|The privacy protocol ID to be associated with this user. If set to null, this user only supports unencrypted messages.||string|
+|securityLevel|Sets the security level for this target. The supplied security level must be supported by the security model dependent information associated with the security name set for this target. The value 1 means: No authentication and no encryption. Anyone can create and read messages with this security level. The value 2 means: Authentication and no encryption. Only the one with the right authentication key can create messages with this security level, but anyone can read the contents of the message. The value 3 means: Authentication and encryption. Only the one with the right authentication key can create messages with this security level, and only the one with the right encryption/decryption key can read the contents of the message.|3|integer|
+|securityName|Sets the security name to be used with this target.||string|
diff --git a/camel-solr.md b/camel-solr.md
new file mode 100644
index 0000000000000000000000000000000000000000..210d4843d9c2727a8973c1dd0de86245640e1846
--- /dev/null
+++ b/camel-solr.md
@@ -0,0 +1,189 @@
+# Solr
+
+**Since Camel 4.8**
+
+**Only producer is supported**
+
+The Solr component allows you to interface with an [Apache
+Solr](https://solr.apache.org/) server.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-solr</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    solr://host[:port]/solr?[options]
+    solrs://host[:port]/solr?[options]
+    solrCloud://host[:port]/solr?[options]
+
+# Message Operations
+
+The following Solr operations are currently supported. 
Simply set an +exchange header with a key of "SolrOperation" and a value set to one of +the following. Some operations also require the message body to be set. + +- INSERT + +- INSERT\_STREAMING + + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OperationMessage bodyDescription

INSERT/INSERT_STREAMING

n/a

adds an index using message headers +(must be prefixed with "SolrField.")

INSERT/INSERT_STREAMING

File

adds an index using the given File +(using ContentStreamUpdateRequest)

INSERT/INSERT_STREAMING

SolrInputDocument

updates index based on the given +SolrInputDocument

INSERT/INSERT_STREAMING

String XML

updates index based on the given XML +(must follow SolrInputDocument format)

ADD_BEAN

bean instance

adds an index based on values in an annotated +bean

ADD_BEANS

collection<bean>

adds index based on a collection of annotated +bean

DELETE_BY_ID

index id to delete

delete a record by ID

DELETE_BY_QUERY

query string

delete a record by a query

COMMIT

n/a

performs a commit on any pending index +changes

SOFT_COMMIT

n/a

performs a soft commit +(without guarantee that Lucene index files are written to stable +storage; useful for Near Real Time operations) on any pending index +changes

ROLLBACK

n/a

performs a rollback on any pending +index changes

OPTIMIZE

n/a

performs a commit on any pending index +changes and then runs the optimize command (This command reorganizes the +Solr index and might be a heavy task)

+
+# Example
+
+Below is a simple INSERT, DELETE and COMMIT example:
+
+    from("direct:insert")
+        .setHeader(SolrConstants.OPERATION, constant(SolrConstants.OPERATION_INSERT))
+        .setHeader(SolrConstants.FIELD + "id", body())
+        .to("solr://localhost:8983/solr");
+
+    from("direct:delete")
+        .setHeader(SolrConstants.OPERATION, constant(SolrConstants.OPERATION_DELETE_BY_ID))
+        .to("solr://localhost:8983/solr");
+
+    from("direct:commit")
+        .setHeader(SolrConstants.OPERATION, constant(SolrConstants.OPERATION_COMMIT))
+        .to("solr://localhost:8983/solr");
+
+The same routes in XML DSL:
+
+    <route>
+        <from uri="direct:insert"/>
+        <setHeader name="SolrOperation">
+            <constant>INSERT</constant>
+        </setHeader>
+        <setHeader name="SolrField.id">
+            <simple>${body}</simple>
+        </setHeader>
+        <to uri="solr://localhost:8983/solr"/>
+    </route>
+
+    <route>
+        <from uri="direct:delete"/>
+        <setHeader name="SolrOperation">
+            <constant>DELETE_BY_ID</constant>
+        </setHeader>
+        <to uri="solr://localhost:8983/solr"/>
+    </route>
+
+    <route>
+        <from uri="direct:commit"/>
+        <setHeader name="SolrOperation">
+            <constant>COMMIT</constant>
+        </setHeader>
+        <to uri="solr://localhost:8983/solr"/>
+    </route>
+
+A client would simply need to pass a body message to the insert or
+delete routes and then call the commit route.
+
+    template.sendBody("direct:insert", "1234");
+    template.sendBody("direct:commit", null);
+    template.sendBody("direct:delete", "1234");
+    template.sendBody("direct:commit", null);
+
+# Querying Solr
+
+The component provides a producer operation to query Solr.
+
+For more information:
+
+[Solr Query
+Syntax](https://solr.apache.org/guide/solr/latest/query-guide/standard-query-parser.html)
+
+## Component Configurations
+
+There are no configurations for this component.
+
+## Endpoint Configurations
+
+There are no configurations for this component.
diff --git a/camel-splunk-hec.md b/camel-splunk-hec.md
new file mode 100644
index 0000000000000000000000000000000000000000..aeb8183d0f8fc814183a0f5c4ec9eba030e20050
--- /dev/null
+++ b/camel-splunk-hec.md
@@ -0,0 +1,81 @@
+# Splunk-hec
+
+**Since Camel 3.3**
+
+**Only producer is supported**
+
+The Splunk HEC component allows sending data to Splunk using the [HTTP
+Event
+Collector](https://dev.splunk.com/enterprise/docs/dataapps/httpeventcollector/). 
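Under the hood, the HTTP Event Collector receives JSON events over HTTP(S) with an `Authorization: Splunk <token>` header. The following stdlib-only sketch shows roughly what such an event payload looks like; the field layout is illustrative of the HEC protocol, not taken from the component's source.

```java
/**
 * Illustrative sketch of a Splunk HEC event payload. The metadata fields
 * (index, sourcetype, time) mirror the endpoint options of the same names;
 * the exact JSON the component emits may differ.
 */
public class HecPayload {

    // Builds the JSON object HEC expects at /services/collector/event.
    public static String event(String index, String sourceType, long epochSeconds, String message) {
        return "{\"index\":\"" + index + "\","
                + "\"sourcetype\":\"" + sourceType + "\","
                + "\"time\":" + epochSeconds + ","
                + "\"event\":{\"message\":\"" + message + "\"}}";
    }

    public static void main(String[] args) {
        // Would be POSTed to https://<splunk-host>:8088/services/collector/event
        // with the header: Authorization: Splunk <token>
        System.out.println(event("camel", "camel", 1700000000L, "HELLO from Camel!"));
    }
}
```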
+
+Maven users will need to add the following dependency to their pom.xml
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-splunk-hec</artifactId>
+        <version>${camel-version}</version>
+    </dependency>
+
+# URI format
+
+    splunk-hec:[splunkURL]?[options]
+
+# Message body
+
+The body must be serializable to JSON, so it may be sent to Splunk.
+
+A `String` or a `java.util.Map` object can be serialized easily.
+
+# Use Cases
+
+The Splunk HEC endpoint may be used to stream data to Splunk for
+ingestion.
+
+It is meant for high-volume ingestion of machine data.
+
+# Configuring the index time
+
+By default, the index time for an event is when the event makes it to
+the Splunk server.
+
+    from("direct:start")
+        .to("splunk-hec://localhost:8080?token=token");
+
+If you are ingesting a large enough dataset with a big enough lag, then
+the time the event hits the server and when that event actually happened
+could be skewed. If you want to override the index time, you can do so.
+
+    from("kafka:logs")
+        .setHeader(SplunkHECConstants.INDEX_TIME, simple("${headers[kafka.HEADERS].lastKey('TIME')}"))
+        .to("splunk-hec://localhost:8080?token=token");
+
+    from("kafka:logs")
+        .toD("splunk-hec://localhost:8080?token=token&time=${headers[kafka.HEADERS].lastKey('TIME')}");
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|

## Endpoint Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|splunkURL|Splunk Host and Port (example: my\_splunk\_server:8089)||string|
|bodyOnly|Send only the message body|false|boolean|
|headersOnly|Send only message headers|false|boolean|
|host|Splunk host field of the event message. This is not the Splunk host to connect to.||string|
|index|Splunk index to write to|camel|string|
|source|Splunk source argument|camel|string|
|sourceType|Splunk sourcetype argument|camel|string|
|splunkEndpoint|Splunk endpoint. Defaults to /services/collector/event. To write RAW data such as JSON, use /services/collector/raw. For a list of all endpoints, refer to the Splunk documentation (HTTP Event Collector REST API endpoints). Example for Splunk 8.2.x: https://docs.splunk.com/Documentation/SplunkCloud/8.2.2203/Data/HECRESTendpoints To extract timestamps in Splunk 8.0 use /services/collector/event?auto\_extract\_timestamp=true Remember to utilize RAW{} for question marks or slashes in parameters.|/services/collector/event|string|
|time|Time this event occurred. By default, the time will be when this event hits the Splunk server.||integer|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|https|Contact HEC over https.|true|boolean|
|skipTlsVerify|Splunk HEC TLS verification.|false|boolean|
|token|Splunk HEC token (this is the token created for HEC and not the user's token)||string|
diff --git a/camel-splunk.md b/camel-splunk.md
new file mode 100644
index 0000000000000000000000000000000000000000..cec5c847ca5065aef93a74dad8fc89a97d739fb9
--- /dev/null
+++ b/camel-splunk.md
@@ -0,0 +1,232 @@
# Splunk

**Since Camel 2.13**

**Both producer and consumer are supported**

The Splunk component provides access to
[Splunk](http://docs.splunk.com/Documentation/Splunk/latest) using the
Splunk-provided [client](https://github.com/splunk/splunk-sdk-java) API,
and it enables you to publish and search for events in Splunk.

Maven users will need to add the following dependency to their pom.xml
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-splunk</artifactId>
        <version>${camel-version}</version>
    </dependency>

# URI format

    splunk://[endpoint]?[options]

# Producer Endpoints:
|Endpoint|Description|
|---|---|
|stream|Streams data to a named index, or the default if not specified. Be aware that in stream mode Splunk keeps an internal buffer (about 1 MB) before events reach the index. If you need realtime behavior, use submit or tcp mode instead.|
|submit|Uses the Splunk REST API to publish events to a named index, or the default if not specified.|
|tcp|Streams data to a TCP port. Requires an open receiver port in Splunk.|
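The tcp mode above, for instance, could be sketched in the Spring XML DSL as follows. This is an illustrative fragment, not an endorsed configuration: the credentials, index name, and receiver port are placeholders, and it assumes a Splunk TCP receiver is listening on the given `tcpReceiverPort`.

```xml
<route>
    <from uri="direct:log"/>
    <!-- convert the body to the key/value event type used by camel-splunk -->
    <convertBodyTo type="org.apache.camel.component.splunk.event.SplunkEvent"/>
    <!-- stream the event to the Splunk TCP receiver (placeholder options) -->
    <to uri="splunk://tcp?username=user&amp;password=123&amp;tcpReceiverPort=9997&amp;index=myindex"/>
</route>
```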
When publishing events, the message body should contain a SplunkEvent.
See the comment under Message body.

**Example**

    from("direct:start").convertBodyTo(SplunkEvent.class)
        .to("splunk://submit?username=user&password=123&index=myindex&sourceType=someSourceType&source=mySource")...

In this example, a converter is required to convert to a SplunkEvent
class.

# Consumer Endpoints:
|Endpoint|Description|
|---|---|
|normal|Performs a normal search and requires a search query in the search option.|
|realtime|Performs a realtime search on live data and requires a search query in the search option.|
|savedsearch|Performs a search based on a search query saved in Splunk and requires the name of the query in the savedSearch option.|
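A savedsearch consumer could likewise be sketched in the XML DSL. This is an illustrative fragment: the credentials and the `mySavedSearch` name are placeholders for a saved search that already exists on the Splunk server.

```xml
<route>
    <!-- poll the saved search every 10 seconds (placeholder credentials) -->
    <from uri="splunk://savedsearch?username=user&amp;password=123&amp;delay=10000&amp;savedSearch=mySavedSearch"/>
    <!-- one exchange is created per search result, with a SplunkEvent body -->
    <to uri="direct:search-result"/>
</route>
```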
+ +**Example** + + from("splunk://normal?delay=5000&username=user&password=123&initEarliestTime=-10s&search=search index=myindex sourcetype=someSourcetype") + .to("direct:search-result"); + +camel-splunk creates a route exchange per search result with a +SplunkEvent in the body. + +# Message body + +Splunk operates on data in key/value pairs. The SplunkEvent class is a +placeholder for such data, and should be in the message body for the +producer. Likewise, it will be returned in the body per search result +for the consumer. + +You can send raw data to Splunk by setting the raw option on the +producer endpoint. This is useful for e.g., json/xml and other payloads +where Splunk has built in support. + +# Use Cases + +Search Twitter for tweets with music and publish events to Splunk + + from("twitter://search?type=polling&keywords=music&delay=10&consumerKey=abc&consumerSecret=def&accessToken=hij&accessTokenSecret=xxx") + .convertBodyTo(SplunkEvent.class) + .to("splunk://submit?username=foo&password=bar&index=camel-tweets&sourceType=twitter&source=music-tweets"); + +To convert a Tweet to a `SplunkEvent`, you could use a converter like: + + @Converter + public class Tweet2SplunkEvent { + @Converter + public static SplunkEvent convertTweet(Status status) { + SplunkEvent data = new SplunkEvent("twitter-message", null); + //data.addPair("source", status.getSource()); + data.addPair("from_user", status.getUser().getScreenName()); + data.addPair("in_reply_to", status.getInReplyToScreenName()); + data.addPair(SplunkEvent.COMMON_START_TIME, status.getCreatedAt()); + data.addPair(SplunkEvent.COMMON_EVENT_ID, status.getId()); + data.addPair("text", status.getText()); + data.addPair("retweet_count", status.getRetweetCount()); + if (status.getPlace() != null) { + data.addPair("place_country", status.getPlace().getCountry()); + data.addPair("place_name", status.getPlace().getName()); + data.addPair("place_street", status.getPlace().getStreetAddress()); + } + if 
(status.getGeoLocation() != null) { + data.addPair("geo_latitude", status.getGeoLocation().getLatitude()); + data.addPair("geo_longitude", status.getGeoLocation().getLongitude()); + } + return data; + } + } + +Search Splunk for tweets: + + from("splunk://normal?username=foo&password=bar&initEarliestTime=-2m&search=search index=camel-tweets sourcetype=twitter") + .log("${body}"); + +# Other comments + +Splunk comes with a variety of options for leveraging machine generated +data with prebuilt apps for analyzing and displaying this. For example, +the jmx app could be used to publish jmx attributes, e.g., route and jvm +metrics to Splunk, and display this on a dashboard. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|splunkConfigurationFactory|To use the SplunkConfigurationFactory||object| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|name|Name has no purpose||string| +|app|Splunk app||string| +|connectionTimeout|Timeout in MS when connecting to Splunk server|5000|integer| +|host|Splunk host.|localhost|string| +|owner|Splunk owner||string| +|port|Splunk port|8089|integer| +|scheme|Splunk scheme|https|string| +|count|A number that indicates the maximum number of entities to return.||integer| +|earliestTime|Earliest time of the search time window.||string| +|initEarliestTime|Initial start offset of the first search||string| +|latestTime|Latest time of the search time window.||string| +|savedSearch|The name of the query saved in Splunk to run||string| +|search|The Splunk query to run||string| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|streaming|Sets 
streaming mode. Streaming mode sends exchanges as they are received, rather than in a batch.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|eventHost|Override the default Splunk event host field||string| +|index|Splunk index to write to||string| +|raw|Should the payload be inserted raw|false|boolean| +|source|Splunk source argument||string| +|sourceType|Splunk sourcetype argument||string| +|tcpReceiverLocalPort|Splunk tcp receiver port defined locally on splunk server. 
(For example if splunk port 9997 is mapped to 12345, tcpReceiverLocalPort has to be 9997)||integer| +|tcpReceiverPort|Splunk tcp receiver port||integer| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. 
This option allows you to configure the logging level for that.|TRACE|object|
|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object|
|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object|
|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object|
|startScheduler|Whether the scheduler should be auto started.|true|boolean|
|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object|
|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean|
|password|Password for Splunk||string|
|sslProtocol|Set the ssl protocol to use|TLSv1.2|object|
|token|User's token for Splunk. This takes precedence over password when both are set||string|
|username|Username for Splunk||string|
|useSunHttpsHandler|Use sun.net.www.protocol.https.Handler Https handler to establish the Splunk Connection. Can be useful when running in application servers to avoid app. server https handling.|false|boolean|
diff --git a/camel-spring-batch.md b/camel-spring-batch.md
new file mode 100644
index 0000000000000000000000000000000000000000..6d4b7126f3346f0f2c377f934e8323c22de0c496
--- /dev/null
+++ b/camel-spring-batch.md
@@ -0,0 +1,213 @@
# Spring-batch

**Since Camel 2.10**

**Only producer is supported**

The Spring Batch component and support classes provide an integration
bridge between Camel and the [Spring
Batch](http://www.springsource.org/spring-batch) infrastructure.
Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-spring-batch</artifactId>
        <version>x.x.x</version>
    </dependency>

# URI format

    spring-batch:jobName[?options]

Where **jobName** represents the name of the Spring Batch job located in
the Camel registry. Alternatively, if a JobRegistry is provided, it will
be used to locate the job instead.

This component can only be used to define producer endpoints, which
means that you cannot use the Spring Batch component in a `from()`
statement.

# Usage

When the Spring Batch component receives the message, it triggers the
job execution. The job will be executed using the
`org.springframework.batch.core.launch.JobLauncher` instance resolved
according to the following algorithm:

- if a `JobLauncher` is manually set on the component, then use it.

- if the `jobLauncherRef` option is set on the component, then search
  the Camel Registry for the `JobLauncher` with the given name.
  **Deprecated and will be removed in Camel 3.0!**

- if there is a `JobLauncher` registered in the Camel Registry under
  the **jobLauncher** name, then use it.

- if none of the steps above allow resolving the `JobLauncher` and
  there is exactly one `JobLauncher` instance in the Camel Registry,
  then use it.

All headers found in the message are passed to the `JobLauncher` as job
parameters. `String`, `Long`, `Double` and `java.util.Date` values are
copied to the `org.springframework.batch.core.JobParametersBuilder`;
other data types are converted to Strings.

# Examples

Triggering the Spring Batch job execution:

    from("direct:startBatch").to("spring-batch:myJob");

Triggering the Spring Batch job execution with the `JobLauncher` set
explicitly:

    from("direct:startBatch").to("spring-batch:myJob?jobLauncherRef=myJobLauncher");

A `JobExecution` instance returned by the `JobLauncher` is forwarded by
the `SpringBatchProducer` as the output message.
You can use the
`JobExecution` instance to perform some operations using the Spring
Batch API directly.

    from("direct:startBatch").to("spring-batch:myJob").to("mock:JobExecutions");
    ...
    MockEndpoint mockEndpoint = ...;
    JobExecution jobExecution = mockEndpoint.getExchanges().get(0).getIn().getBody(JobExecution.class);
    BatchStatus currentJobStatus = jobExecution.getStatus();

# Support classes

Apart from the component, Camel Spring Batch also provides support
classes, which can be used to hook into the Spring Batch infrastructure.

## CamelItemReader

`CamelItemReader` can be used to read batch data directly from the Camel
infrastructure.

For example, the snippet below configures Spring Batch to read data
from a JMS queue.

    <bean id="camelReader" class="org.apache.camel.component.spring.batch.support.CamelItemReader">
        <constructor-arg ref="consumerTemplate"/>
        <constructor-arg value="jms:dataQueue"/>
    </bean>

    <batch:job id="myJob">
        <batch:step id="step">
            <batch:tasklet>
                <batch:chunk reader="camelReader" writer="someWriter" commit-interval="100"/>
            </batch:tasklet>
        </batch:step>
    </batch:job>

## CamelItemWriter

`CamelItemWriter` serves a similar purpose to `CamelItemReader`, but it
is dedicated to writing chunks of the processed data.

For example, the snippet below configures Spring Batch to write data
to a JMS queue.

    <bean id="camelWriter" class="org.apache.camel.component.spring.batch.support.CamelItemWriter">
        <constructor-arg ref="producerTemplate"/>
        <constructor-arg value="jms:dataQueue"/>
    </bean>

    <batch:job id="myJob">
        <batch:step id="step">
            <batch:tasklet>
                <batch:chunk reader="someReader" writer="camelWriter" commit-interval="100"/>
            </batch:tasklet>
        </batch:step>
    </batch:job>

## CamelItemProcessor

`CamelItemProcessor` is the implementation of the Spring Batch
`org.springframework.batch.item.ItemProcessor` interface. This
implementation relies on the [Request Reply
pattern](http://camel.apache.org/request-reply.html) to delegate the
processing of the batch item to the Camel infrastructure. The item to
process is sent to the Camel endpoint as the body of the message.

For example, the snippet below performs simple processing of the batch
item using the [Direct endpoint](http://camel.apache.org/direct.html)
and the [Simple expression
language](http://camel.apache.org/simple.html).

    <camel:camelContext>
        <camel:route>
            <camel:from uri="direct:processor"/>
            <camel:setExchangePattern pattern="InOut"/>
            <camel:setBody>
                <camel:simple>Processed ${body}</camel:simple>
            </camel:setBody>
        </camel:route>
    </camel:camelContext>

    <bean id="camelProcessor" class="org.apache.camel.component.spring.batch.support.CamelItemProcessor">
        <constructor-arg ref="producerTemplate"/>
        <constructor-arg value="direct:processor"/>
    </bean>

    <batch:job id="myJob">
        <batch:step id="step">
            <batch:tasklet>
                <batch:chunk reader="someReader" processor="camelProcessor" writer="someWriter" commit-interval="100"/>
            </batch:tasklet>
        </batch:step>
    </batch:job>

## CamelJobExecutionListener

`CamelJobExecutionListener` is the implementation of the
`org.springframework.batch.core.JobExecutionListener` interface, sending
job execution events to the Camel endpoint.
The `org.springframework.batch.core.JobExecution` instance produced by
Spring Batch is sent as the body of the message. To distinguish between
before- and after-callbacks, the `SPRING_BATCH_JOB_EVENT_TYPE` header is
set to the `BEFORE` or `AFTER` value.

The example snippet below sends Spring Batch job execution events to a
JMS queue.

    <bean id="camelJobExecutionListener" class="org.apache.camel.component.spring.batch.support.CamelJobExecutionListener">
        <constructor-arg ref="producerTemplate"/>
        <constructor-arg value="jms:batchEventsQueue"/>
    </bean>

    <batch:job id="myJob">
        <batch:step id="step">
            <batch:tasklet>
                <batch:chunk reader="someReader" writer="someWriter" commit-interval="100"/>
            </batch:tasklet>
        </batch:step>
        <batch:listeners>
            <batch:listener ref="camelJobExecutionListener"/>
        </batch:listeners>
    </batch:job>

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|jobLauncher|Explicitly specifies a JobLauncher to be used.||object|
|jobRegistry|Explicitly specifies a JobRegistry to be used.||object|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled.
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|jobName|The name of the Spring Batch job located in the registry.||string| +|jobFromHeader|Explicitly defines if the jobName should be taken from the headers instead of the URI.|false|boolean| +|jobLauncher|Explicitly specifies a JobLauncher to be used.||object| +|jobRegistry|Explicitly specifies a JobRegistry to be used.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-spring-event.md b/camel-spring-event.md new file mode 100644 index 0000000000000000000000000000000000000000..905efea20714f9a8b013bddb66d6624d210bfb45 --- /dev/null +++ b/camel-spring-event.md @@ -0,0 +1,39 @@ +# Spring-event + +**Since Camel 1.4** + +**Both producer and consumer are supported** + +The Spring Event component provides access to the Spring +`ApplicationEvent` objects. This allows you to publish +`ApplicationEvent` objects to a Spring `ApplicationContext` or to +consume them. You can then use [Enterprise Integration +Patterns](#eips:enterprise-integration-patterns.adoc) to process them, +such as [Message Filter](#eips:filter-eip.adoc). + +# URI format + + spring-event://default[?options] + +At the moment, there are no options for this component. That may change +in future releases, so please check back. 
+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|name|Name of endpoint||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-spring-jdbc.md b/camel-spring-jdbc.md
new file mode 100644
index 0000000000000000000000000000000000000000..81bd63b03365d65084fcd2a4361787e3883a978f
--- /dev/null
+++ b/camel-spring-jdbc.md
@@ -0,0 +1,52 @@
# Spring-jdbc

**Since Camel 3.10**

**Only producer is supported**

The Spring JDBC component is an extension of the JDBC component with one
additional feature: integration with the Spring Transaction Manager.

For general use of this component, see the [JDBC
Component](#jdbc-component.adoc).

Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-spring-jdbc</artifactId>
        <version>x.x.x</version>
    </dependency>

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|dataSource|To use the DataSource instance instead of looking up the data source by name from the registry.||object|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component.
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|connectionStrategy|To use a custom strategy for working with connections. Do not use a custom strategy when using the spring-jdbc component because a special Spring ConnectionStrategy is used by default to support Spring Transactions.||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|dataSourceName|Name of DataSource to lookup in the Registry. If the name is dataSource or default, then Camel will attempt to lookup a default DataSource from the registry, meaning if there is a only one instance of DataSource found, then this DataSource will be used.||string| +|allowNamedParameters|Whether to allow using named parameters in the queries.|true|boolean| +|outputClass|Specify the full package and class name to use as conversion when outputType=SelectOne or SelectList.||string| +|outputType|Determines the output the producer should use.|SelectList|object| +|parameters|Optional parameters to the java.sql.Statement. For example to set maxRows, fetchSize etc.||object| +|readSize|The default maximum number of rows that can be read by a polling query. The default value is 0.||integer| +|resetAutoCommit|Camel will set the autoCommit on the JDBC connection to be false, commit the change after executed the statement and reset the autoCommit flag of the connection at the end, if the resetAutoCommit is true. If the JDBC connection doesn't support to reset the autoCommit flag, you can set the resetAutoCommit flag to be false, and Camel will not try to reset the autoCommit flag. When used with XA transactions you most likely need to set it to false so that the transaction manager is in charge of committing this tx.|true|boolean| +|transacted|Whether transactions are in use.|false|boolean| +|useGetBytesForBlob|To read BLOB columns as bytes instead of string data. 
This may be needed for certain databases such as Oracle where you must read BLOB columns as bytes.|false|boolean| +|useHeadersAsParameters|Set this option to true to use the prepareStatementStrategy with named parameters. This allows to define queries with named placeholders, and use headers with the dynamic values for the query placeholders.|false|boolean| +|useJDBC4ColumnNameAndLabelSemantics|Sets whether to use JDBC 4 or JDBC 3.0 or older semantic when retrieving column name. JDBC 4.0 uses columnLabel to get the column name where as JDBC 3.0 uses both columnName or columnLabel. Unfortunately JDBC drivers behave differently so you can use this option to work out issues around your JDBC driver if you get problem using this component This option is default true.|true|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|beanRowMapper|To use a custom org.apache.camel.component.jdbc.BeanRowMapper when using outputClass. The default implementation will lower case the row names and skip underscores, and dashes. For example CUST\_ID is mapped as custId.||object| +|connectionStrategy|To use a custom strategy for working with connections. 
Do not use a custom strategy when using the spring-jdbc component because a special Spring ConnectionStrategy is used by default to support Spring Transactions.||object|
+|prepareStatementStrategy|Allows the plugin to use a custom org.apache.camel.component.jdbc.JdbcPrepareStatementStrategy to control preparation of the query and prepared statement.||object|
diff --git a/camel-spring-ldap.md b/camel-spring-ldap.md
new file mode 100644
index 0000000000000000000000000000000000000000..b83b82725bd7d7fa47ca283638ba255bab5e8eb7
--- /dev/null
+++ b/camel-spring-ldap.md
@@ -0,0 +1,125 @@
+# Spring-ldap
+
+**Since Camel 2.11**
+
+**Only producer is supported**
+
+The Spring LDAP component provides a Camel wrapper for [Spring
+LDAP](http://www.springsource.org/ldap).
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-spring-ldap</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    spring-ldap:springLdapTemplate[?options]
+
+Where **springLdapTemplate** is the name of the [Spring LDAP Template
+bean](http://static.springsource.org/spring-ldap/site/apidocs/org/springframework/ldap/core/LdapTemplate.html).
+In this bean, you configure the URL and the credentials for your LDAP
+access.
+
+# Usage
+
+The component supports producer endpoints only. An attempt to create a
+consumer endpoint will result in an `UnsupportedOperationException`.
+The body of the message must be a map (an instance of `java.util.Map`).
+Unless a base DN is specified in the configuration of your
+`ContextSource`, this map must contain at least an entry with the key
+**`dn`** (not needed for function\_driven operation) that specifies the
+root node for the LDAP operation to be performed. Other entries of the
+map are operation-specific (see below).
+
+The body of the message remains unchanged for the `bind` and `unbind`
+operations.
For the `search` and `function_driven` operations, the body
+is set to the result of the search, see
+[http://static.springsource.org/spring-ldap/site/apidocs/org/springframework/ldap/core/LdapTemplate.html#search%28java.lang.String,%20java.lang.String,%20int,%20org.springframework.ldap.core.AttributesMapper%29](http://static.springsource.org/spring-ldap/site/apidocs/org/springframework/ldap/core/LdapTemplate.html#search%28java.lang.String,%20java.lang.String,%20int,%20org.springframework.ldap.core.AttributesMapper%29).
+
+## Search
+
+The message body must have an entry with the key **`filter`**. The value
+must be a `String` representing a valid LDAP filter, see
+[http://en.wikipedia.org/wiki/Lightweight\_Directory\_Access\_Protocol#Search\_and\_Compare](http://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol#Search_and_Compare).
+
+## Bind
+
+The message body must have an entry with the key **`attributes`**. The
+value must be an instance of
+[javax.naming.directory.Attributes](http://docs.oracle.com/javase/6/docs/api/javax/naming/directory/Attributes.html).
+This entry specifies the LDAP node to be created.
+
+## Unbind
+
+No further entries are necessary; the node with the specified **`dn`**
+is deleted.
+
+## Authenticate
+
+The message body must have entries with the keys **`filter`** and
+**`password`**. The values must be `String` instances representing a
+valid LDAP filter and a user password, respectively.
+
+## Modify Attributes
+
+The message body must have an entry with the key
+**`modificationItems`**. The value must be an array of
+[javax.naming.directory.ModificationItem](http://docs.oracle.com/javase/6/docs/api/javax/naming/directory/ModificationItem.html)
+elements.
+
+## Function-Driven
+
+The message body must have entries with the keys **`function`** and
+**`request`**. The **`function`** value must be of type
+`java.util.function.BiFunction`.
The `L` type parameter must be
+of type `org.springframework.ldap.core.LdapOperations`. The
+**`request`** value must be the same type as the `Q` type parameter in
+the **`function`**, and it must encapsulate the parameters expected by
+the LdapTemplate method being invoked within the **`function`**. The `S`
+type parameter represents the response type as returned by the
+LdapTemplate method being invoked. This operation allows dynamic
+invocation of LdapTemplate methods that are not covered by the
+operations mentioned above.
+
+**Key definitions**
+
+To avoid spelling errors, the following constants are defined in
+`org.apache.camel.component.springldap.SpringLdapProducer`:
+
+- `public static final String DN = "dn";`
+
+- `public static final String FILTER = "filter";`
+
+- `public static final String ATTRIBUTES = "attributes";`
+
+- `public static final String PASSWORD = "password";`
+
+- `public static final String MODIFICATION_ITEMS = "modificationItems";`
+
+- `public static final String FUNCTION = "function";`
+
+- `public static final String REQUEST = "request";`
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled.
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|templateName|Name of the Spring LDAP Template bean.||string|
+|operation|The LDAP operation to be performed.||object|
+|scope|The scope of the search operation.|subtree|string|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-spring-rabbitmq.md b/camel-spring-rabbitmq.md
new file mode 100644
index 0000000000000000000000000000000000000000..d26eb7b3babc0cba5060371886236df2b33da39a
--- /dev/null
+++ b/camel-spring-rabbitmq.md
@@ -0,0 +1,402 @@
+# Spring-rabbitmq
+
+**Since Camel 3.8**
+
+**Both producer and consumer are supported**
+
+The Spring RabbitMQ component allows you to produce and consume messages
+from [RabbitMQ](http://www.rabbitmq.com/) instances, using the Spring
+RabbitMQ client.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-spring-rabbitmq</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# URI format
+
+    spring-rabbitmq:exchangeName[?options]
+
+The exchange name determines the exchange to which the produced messages
+will be sent.
In the case of consumers, the exchange name determines +the exchange the queue will be bound to. + +# Using a connection factory + +To connect to RabbitMQ, you need to set up a `ConnectionFactory` (same +as with JMS) with the login details such as: + +It is recommended to use `CachingConnectionFactory` from spring-rabbit +as it comes with connection pooling out of the box. + + + + + +The `ConnectionFactory` is auto-detected by default, so you can do: + + + + + + + + +# Default Exchange Name + +To use default exchange name (which would be an empty exchange name in +RabbitMQ) then you should use `default` as name in the endpoint uri, +such as: + + to("spring-rabbitmq:default?routingKey=foo") + +# Auto declare exchanges, queues and bindings + +Before you can send or receive messages from RabbitMQ, then exchanges, +queues and bindings must be setup first. + +In development mode, it may be desirable to let Camel automatic do this. +You can enable this by setting `autoDeclare=true` on the +`SpringRabbitMQComponent`. + +Then Spring RabbitMQ will automatically declare the necessary elements +and set up the binding between the exchange, queue and routing keys. + +The elements can be configured using the multivalued `args` option. + +For example, to specify the queue as durable and exclusive, you can +configure the endpoint uri with +`arg.queue.durable=true&arg.queue.exclusive=true`. + +**Exchanges** + + ++++++ + + + + + + + + + + + + + + + + + + + + + + +
OptionTypeDescriptionDefault

autoDelete

boolean

True if the server should delete the +exchange when it is no longer in use (if all bindings are +deleted).

false

durable

boolean

A durable exchange will survive a +server restart.

true

+ +You can also configure any additional `x-` arguments. See details in the +RabbitMQ documentation. + +**Queues** + + ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OptionTypeDescriptionDefault

autoDelete

boolean

True if the server should delete the +exchange when it is no longer in use (if all bindings are +deleted).

false

durable

boolean

A durable queue will survive a server +restart.

false

exclusive

boolean

Whether the queue is exclusive

false

x-dead-letter-exchange

String

The name of the dead letter exchange. +If none configured, then the component configured value is +used.

x-dead-letter-routing-key

String

The routing key for the dead letter +exchange. If none configured, then the component configured value is +used.

+ +You can also configure any additional `x-` arguments, such as the +message time to live, via `x-message-ttl`, and many others. See details +in the RabbitMQ documentation. + +# Mapping from Camel to RabbitMQ + +The message body is mapped from Camel Message body to a `byte[]` which +is the type that RabbitMQ uses for message body. Camel will use its type +converter to convert the message body to a byte array. + +Spring Rabbit comes out of the box with support for mapping Java +serialized objects, but Camel Spring RabbitMQ does **not** support this +due to security vulnerabilities and using Java objects is a bad design +as it enforces strong coupling. + +Custom message headers are mapped from Camel Message headers to RabbitMQ +headers. This behaviour can be customized by configuring a new +implementation of `HeaderFilterStrategy` on the Camel component. + +# Request / Reply + +Request and reply messaging is supported using [RabbitMQ direct +reply-to](https://www.rabbitmq.com/direct-reply-to.html). + +The example below will do request/reply, where the message is sent via +the cheese exchange name and routing key `foo.bar`, which is being +consumed by the second Camel route, that prepends the message with +\`Hello \`, and then sends back the message. + +So if we send `World` as message body to *direct:start* then, we will se +the message being logged + +- `log:request -> World` + +- `log:input -> World` + +- `log:response -> Hello World` + + + + from("direct:start") + .to("log:request") + .to(ExchangePattern.InOut, "spring-rabbitmq:cheese?routingKey=foo.bar") + .to("log:response"); + + from("spring-rabbitmq:cheese?queues=myqueue&routingKey=foo.bar") + .to("log:input") + .transform(body().prepend("Hello ")); + +# Reuse endpoint and send to different destinations computed at runtime + +If you need to send messages to a lot of different RabbitMQ exchanges, +it makes sense to reuse an endpoint and specify the real destination in +a message header. 
This allows Camel to reuse the same endpoint, but send +to different exchanges. This greatly reduces the number of endpoints +created and economizes on memory and thread resources. + +Using [toD](#eips:toD-eip.adoc) is easier than specifying the dynamic +destination with headers + +You can specify using the following headers: + + +++++ + + + + + + + + + + + + + + + + + + + +
HeaderTypeDescription

CamelSpringRabbitmqExchangeOverrideName

String

The exchange name.

CamelSpringRabbitmqRoutingOverrideKey

String

The routing key.

+ +For example, the following route shows how you can compute a destination +at run time and use it to override the exchange appearing in the +endpoint URL: + + from("file://inbox") + .to("bean:computeDestination") + .to("spring-rabbitmq:dummy"); + +The exchange name, `dummy`, is just a placeholder. It must be provided +as part of the RabbitMQ endpoint URL, but it will be ignored in this +example. + +In the `computeDestination` bean, specify the real destination by +setting the `CamelRabbitmqExchangeOverrideName` header as follows: + + public void setExchangeHeader(Exchange exchange) { + String region = .... + exchange.getIn().setHeader("CamelSpringRabbitmqExchangeOverrideName", "order-" + region); + } + +Then Camel will read this header and use it as the exchange name instead +of the one configured on the endpoint. So, in this example Camel sends +the message to `spring-rabbitmq:order-emea`, assuming the `region` value +was `emea`. + +Keep in mind that the producer removes both +`CamelSpringRabbitmqExchangeOverrideName` and +`CamelSpringRabbitmqRoutingOverrideKey` headers from the exchange and do +not propagate them to the created Rabbitmq message to avoid the +accidental loops in the routes (in scenarios when the message will be +forwarded to another RabbitMQ endpoint). + +# Using toD + +If you need to send messages to a lot of different exchanges, it makes +sense to reuse an endpoint and specify the dynamic destinations with +simple language using [toD](#eips:toD-eip.adoc). 
+
+For example, suppose you need to send messages to exchanges with order
+types, then using toD could, for example, be done as follows:
+
+    from("direct:order")
+        .toD("spring-rabbitmq:order-${header.orderType}");
+
+# Manual Acknowledgement
+
+If we need to manually acknowledge a message for some use case, we can
+do it by setting acknowledgeMode to MANUAL and using the snippet of
+code below to get the Channel and delivery tag to manually acknowledge
+the message:
+
+    from("spring-rabbitmq:%s?queues=%s&acknowledgeMode=MANUAL")
+        .process(exchange -> {
+            Channel channel = exchange.getProperty(SpringRabbitMQConstants.CHANNEL, Channel.class);
+            long deliveryTag = exchange.getMessage().getHeader(SpringRabbitMQConstants.DELIVERY_TAG, long.class);
+            channel.basicAck(deliveryTag, false);
+        });
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|amqpAdmin|Optional AMQP Admin service to use for auto declaring elements (queues, exchanges, bindings)||object|
+|connectionFactory|The connection factory to be used. A connection factory must be configured either on the component or endpoint.||object|
+|testConnectionOnStartup|Specifies whether to test the connection on startup. This ensures that when Camel starts all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers are tested as well.|false|boolean|
+|autoDeclare|Specifies whether the consumer should auto declare binding between exchange, queue and routing key when starting.
Enabling this can be good for development to make it easy to standup exchanges, queues and bindings on the broker.|true|boolean| +|autoStartup|Specifies whether the consumer container should auto-startup.|true|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|deadLetterExchange|The name of the dead letter exchange||string| +|deadLetterExchangeType|The type of the dead letter exchange|direct|string| +|deadLetterQueue|The name of the dead letter queue||string| +|deadLetterRoutingKey|The routing key for the dead letter exchange||string| +|maximumRetryAttempts|How many times a Rabbitmq consumer will retry the same message if Camel failed to process the message|5|integer| +|rejectAndDontRequeue|Whether a Rabbitmq consumer should reject the message without requeuing. 
This enables failed messages to be sent to a Dead Letter Exchange/Queue, if the broker is so configured.|true|boolean| +|retryDelay|Delay in msec a Rabbitmq consumer will wait before redelivering a message that Camel failed to process|1000|integer| +|concurrentConsumers|The number of consumers|1|integer| +|errorHandler|To use a custom ErrorHandler for handling exceptions from the message listener (consumer)||object| +|listenerContainerFactory|To use a custom factory for creating and configuring ListenerContainer to be used by the consumer for receiving messages||object| +|maxConcurrentConsumers|The maximum number of consumers (available only with SMLC)||integer| +|messageListenerContainerType|The type of the MessageListenerContainer|DMLC|string| +|prefetchCount|Tell the broker how many messages to send to each consumer in a single request. Often this can be set quite high to improve throughput.|250|integer| +|retry|Custom retry configuration to use. If this is configured then the other settings such as maximumRetryAttempts for retry are not in use.||object| +|shutdownTimeout|The time to wait for workers in milliseconds after the container is stopped. If any workers are active when the shutdown signal comes they will be allowed to finish processing as long as they can finish within this timeout.|5000|duration| +|allowNullBody|Whether to allow sending messages with no body. If this option is false and the message body is null, then an MessageConversionException is thrown.|false|boolean| +|autoDeclareProducer|Specifies whether the producer should auto declare binding between exchange, queue and routing key when starting. Enabling this can be good for development to make it easy to standup exchanges, queues and bindings on the broker.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|replyTimeout|Specify the timeout in milliseconds to be used when waiting for a reply message when doing request/reply messaging. The default value is 5 seconds. A negative value indicates an indefinite timeout.|5000|duration| +|args|Specify arguments for configuring the different RabbitMQ concepts, a different prefix is required for each element: consumer. exchange. queue. binding. dlq.exchange. dlq.queue. dlq.binding. For example to declare a queue with message ttl argument: queue.x-message-ttl=60000||object| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|ignoreDeclarationExceptions|Switch on ignore exceptions such as mismatched properties when declaring|false|boolean| +|messageConverter|To use a custom MessageConverter so you can be in control how to map to/from a org.springframework.amqp.core.Message.||object| +|messagePropertiesConverter|To use a custom MessagePropertiesConverter so you can be in control how to map to/from a org.springframework.amqp.core.MessageProperties.||object| +|headerFilterStrategy|To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message.||object| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|exchangeName|The exchange name determines the exchange to which the produced messages will be sent to. In the case of consumers, the exchange name determines the exchange the queue will be bound to. Note: to use default exchange then do not use empty name, but use default instead.||string| +|connectionFactory|The connection factory to be use. A connection factory must be configured either on the component or endpoint.||object| +|deadLetterExchange|The name of the dead letter exchange||string| +|deadLetterExchangeType|The type of the dead letter exchange|direct|string| +|deadLetterQueue|The name of the dead letter queue||string| +|deadLetterRoutingKey|The routing key for the dead letter exchange||string| +|disableReplyTo|Specifies whether Camel ignores the ReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the ReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. 
You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another.|false|boolean| +|queues|The queue(s) to use for consuming or producing messages. Multiple queue names can be separated by comma. If none has been configured then Camel will generate an unique id as the queue name.||string| +|routingKey|The value of a routing key to use. Default is empty which is not helpful when using the default (or any direct) exchange, but fine if the exchange is a headers exchange for instance.||string| +|testConnectionOnStartup|Specifies whether to test the connection on startup. This ensures that when Camel starts that all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers is tested as well.|false|boolean| +|acknowledgeMode|Flag controlling the behaviour of the container with respect to message acknowledgement. The most common usage is to let the container handle the acknowledgements (so the listener doesn't need to know about the channel or the message). Set to AcknowledgeMode.MANUAL if the listener will send the acknowledgements itself using Channel.basicAck(long, boolean). Manual acks are consistent with either a transactional or non-transactional channel, but if you are doing no other work on the channel at the same other than receiving a single message then the transaction is probably unnecessary. Set to AcknowledgeMode.NONE to tell the broker not to expect any acknowledgements, and it will assume all messages are acknowledged as soon as they are sent (this is autoack in native Rabbit broker terms). 
If AcknowledgeMode.NONE then the channel cannot be transactional (so the container will fail on start up if that flag is accidentally set).||object| +|asyncConsumer|Whether the consumer processes the Exchange asynchronously. If enabled then the consumer may pickup the next message from the queue, while the previous message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the consumer will pickup the next message from the queue.|false|boolean| +|autoDeclare|Specifies whether the consumer should auto declare binding between exchange, queue and routing key when starting.|true|boolean| +|autoStartup|Specifies whether the consumer container should auto-startup.|true|boolean| +|exchangeType|The type of the exchange|direct|string| +|exclusive|Set to true for an exclusive consumer|false|boolean| +|maximumRetryAttempts|How many times a Rabbitmq consumer will try the same message if Camel failed to process the message (The number of attempts includes the initial try)|5|integer| +|noLocal|Set to true for an no-local consumer|false|boolean| +|rejectAndDontRequeue|Whether a Rabbitmq consumer should reject the message without requeuing. This enables failed messages to be sent to a Dead Letter Exchange/Queue, if the broker is so configured.|true|boolean| +|retryDelay|Delay in millis a Rabbitmq consumer will wait before redelivering a message that Camel failed to process|1000|integer| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. 
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|concurrentConsumers|The number of consumers||integer| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|maxConcurrentConsumers|The maximum number of consumers (available only with SMLC)||integer| +|messageListenerContainerType|The type of the MessageListenerContainer|DMLC|string| +|prefetchCount|Tell the broker how many messages to send in a single request. Often this can be set quite high to improve throughput.||integer| +|retry|Custom retry configuration to use. If this is configured then the other settings such as maximumRetryAttempts for retry are not in use.||object| +|allowNullBody|Whether to allow sending messages with no body. If this option is false and the message body is null, then an MessageConversionException is thrown.|false|boolean| +|autoDeclareProducer|Specifies whether the producer should auto declare binding between exchange, queue and routing key when starting.|false|boolean| +|confirm|Controls whether to wait for confirms. The connection factory must be configured for publisher confirms and this method. auto = Camel detects if the connection factory uses confirms or not. disabled = Confirms is disabled. 
enabled = Confirms is enabled.|auto|string| +|confirmTimeout|Specify the timeout in milliseconds to be used when waiting for a message sent to be confirmed by RabbitMQ when doing send only messaging (InOnly). The default value is 5 seconds. A negative value indicates an indefinite timeout.|5000|duration| +|replyTimeout|Specify the timeout in milliseconds to be used when waiting for a reply message when doing request/reply (InOut) messaging. The default value is 30 seconds. A negative value indicates an indefinite timeout (Beware that this will cause a memory leak if a reply is not received).|30000|duration| +|usePublisherConnection|Use a separate connection for publishers and consumers|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|args|Specify arguments for configuring the different RabbitMQ concepts, a different prefix is required for each element: arg.consumer. arg.exchange. arg.queue. arg.binding. arg.dlq.exchange. arg.dlq.queue. arg.dlq.binding. 
For example to declare a queue with message ttl argument: args=arg.queue.x-message-ttl=60000||object| +|messageConverter|To use a custom MessageConverter so you can be in control how to map to/from a org.springframework.amqp.core.Message.||object| +|messagePropertiesConverter|To use a custom MessagePropertiesConverter so you can be in control how to map to/from a org.springframework.amqp.core.MessageProperties.||object| +|synchronous|Sets whether synchronous processing should be strictly used|false|boolean| diff --git a/camel-spring-redis.md b/camel-spring-redis.md new file mode 100644 index 0000000000000000000000000000000000000000..289bb890d3d0b5563a00c52b8ba44e4686f27a8e --- /dev/null +++ b/camel-spring-redis.md @@ -0,0 +1,1236 @@ +# Spring-redis + +**Since Camel 2.11** + +**Both producer and consumer are supported** + +This component allows sending and receiving messages from +[Redis](https://redis.io/). Redis is an advanced key-value store where +keys can contain strings, hashes, lists, sets and sorted sets. In +addition, it provides pub/sub functionality for inter-app +communications. Camel provides a producer for executing commands, +consumer for subscribing to pub/sub messages an idempotent repository +for filtering out duplicate messages. + +**Prerequisites** + +To use this component, you must have a Redis server running. + +# URI Format + + spring-redis://host:port[?options] + +# Usage + +See also the unit tests available at +[https://github.com/apache/camel/tree/main/components/camel-spring-redis/src/test/java/org/apache/camel/component/redis](https://github.com/apache/camel/tree/main/components/camel-spring-redis/src/test/java/org/apache/camel/component/redis). + +## Message headers evaluated by the Redis producer + +The producer issues commands to the server, and each command has a +different set of parameters with specific types. The result from the +command execution is returned in the message body. 
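For instance, an `HSET` is expressed entirely through headers. The sketch below is a hedged illustration, not taken from the Camel docs: the `RedisHeaders` helper is hypothetical, and its constants are assumed to match the literal `"CamelRedis.*"` string values of `RedisConstants` shown in the tables that follow. It builds the header map such a producer call would carry, so the idea can be followed without a running Redis server or Camel context:

```java
import java.util.HashMap;
import java.util.Map;

public class RedisHeaders {
    // Assumed string values of the RedisConstants headers read by the producer,
    // mirroring the "CamelRedis.*" names listed in the command tables.
    static final String COMMAND = "CamelRedis.Command";
    static final String KEY = "CamelRedis.Key";
    static final String FIELD = "CamelRedis.Field";
    static final String VALUE = "CamelRedis.Value";

    // Builds the headers for an HSET command; the producer evaluates these
    // headers and places the command's result in the message body.
    static Map<String, Object> hset(String key, String field, Object value) {
        Map<String, Object> headers = new HashMap<>();
        headers.put(COMMAND, "HSET");
        headers.put(KEY, key);
        headers.put(FIELD, field);
        headers.put(VALUE, value);
        return headers;
    }

    public static void main(String[] args) {
        // Headers a route would attach before calling the spring-redis endpoint
        System.out.println(hset("user:1", "name", "Alice"));
    }
}
```

In a route, the equivalent is setting the same headers with `setHeader(...)` before sending to a `spring-redis://host:port` endpoint (host and port here are placeholders for your Redis server).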
+ + ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|Hash Commands|Description|Parameters|Result|
|---|---|---|---|
|HSET|Set the string value of a hash field|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.FIELD/"CamelRedis.Field" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object)|void|
|HGET|Get the value of a hash field|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.FIELD/"CamelRedis.Field" (String)|String|
|HSETNX|Set the value of a hash field only if the field does not exist|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.FIELD/"CamelRedis.Field" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object)|void|
|HMSET|Set multiple hash fields to multiple values|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUES/"CamelRedis.Values" (Map<String, Object>)|void|
|HMGET|Get the values of all the given hash fields|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.FIELDS/"CamelRedis.Fields" (Collection<String>)|Collection<Object>|
|HINCRBY|Increment the integer value of a hash field by the given number|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.FIELD/"CamelRedis.Field" (String), RedisConstants.VALUE/"CamelRedis.Value" (Long)|Long|
|HEXISTS|Determine if a hash field exists|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.FIELD/"CamelRedis.Field" (String)|Boolean|
|HDEL|Delete one or more hash fields|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.FIELD/"CamelRedis.Field" (String)|void|
|HLEN|Get the number of fields in a hash|RedisConstants.KEY/"CamelRedis.Key" (String)|Long|
|HKEYS|Get all the fields in a hash|RedisConstants.KEY/"CamelRedis.Key" (String)|Set<String>|
|HVALS|Get all the values in a hash|RedisConstants.KEY/"CamelRedis.Key" (String)|Collection<Object>|
|HGETALL|Get all the fields and values in a hash|RedisConstants.KEY/"CamelRedis.Key" (String)|Map<String, Object>|

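The hash commands above follow the same header convention; a minimal sketch of HSET, assuming a `redisTemplate` bean in the registry and illustrative key/field names:

```java
// Hash sketch: HSET needs the KEY, FIELD and VALUE headers (see the
// Hash commands table). Bean name, host and key are assumptions.
from("direct:hset")
    .setHeader(RedisConstants.KEY, constant("user:1"))
    .setHeader(RedisConstants.FIELD, constant("name"))
    .setHeader(RedisConstants.VALUE, constant("Alice"))
    .to("spring-redis://localhost:6379?command=HSET&redisTemplate=#redisTemplate");
```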
|List Commands|Description|Parameters|Result|
|---|---|---|---|
|RPUSH|Append one or multiple values to a list|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object)|Long|
|RPUSHX|Append a value to a list only if the list exists|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object)|Long|
|LPUSH|Prepend one or multiple values to a list|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object)|Long|
|LLEN|Get the length of a list|RedisConstants.KEY/"CamelRedis.Key" (String)|Long|
|LRANGE|Get a range of elements from a list|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.START/"CamelRedis.Start" (Long), RedisConstants.END/"CamelRedis.End" (Long)|List<Object>|
|LTRIM|Trim a list to the specified range|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.START/"CamelRedis.Start" (Long), RedisConstants.END/"CamelRedis.End" (Long)|void|
|LINDEX|Get an element from a list by its index|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.INDEX/"CamelRedis.Index" (Long)|String|
|LINSERT|Insert an element before or after another element in a list|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object), RedisConstants.PIVOT/"CamelRedis.Pivot" (String), RedisConstants.POSITION/"CamelRedis.Position" (String)|Long|
|LSET|Set the value of an element in a list by its index|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object), RedisConstants.INDEX/"CamelRedis.Index" (Long)|void|
|LREM|Remove elements from a list|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object), RedisConstants.COUNT/"CamelRedis.Count" (Long)|Long|
|LPOP|Remove and get the first element in a list|RedisConstants.KEY/"CamelRedis.Key" (String)|Object|
|RPOP|Remove and get the last element in a list|RedisConstants.KEY/"CamelRedis.Key" (String)|String|
|RPOPLPUSH|Remove the last element in a list, append it to another list and return it|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.DESTINATION/"CamelRedis.Destination" (String)|Object|
|BRPOPLPUSH|Pop a value from a list, push it to another list and return it; or block until one is available|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.DESTINATION/"CamelRedis.Destination" (String), RedisConstants.TIMEOUT/"CamelRedis.Timeout" (Long)|Object|
|BLPOP|Remove and get the first element in a list, or block until one is available|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.TIMEOUT/"CamelRedis.Timeout" (Long)|Object|
|BRPOP|Remove and get the last element in a list, or block until one is available|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.TIMEOUT/"CamelRedis.Timeout" (Long)|String|

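A possible work-queue sketch using the list commands above, with an assumed `redisTemplate` bean and an illustrative list name:

```java
// List sketch: RPUSH appends the VALUE header to the list named by the
// KEY header and returns the new length in the message body.
from("direct:enqueue")
    .setHeader(RedisConstants.KEY, constant("jobs"))
    .setHeader(RedisConstants.VALUE, body())
    .to("spring-redis://localhost:6379?command=RPUSH&redisTemplate=#redisTemplate");
```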
|Set Commands|Description|Parameters|Result|
|---|---|---|---|
|SADD|Add one or more members to a set|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object)|Boolean|
|SMEMBERS|Get all the members in a set|RedisConstants.KEY/"CamelRedis.Key" (String)|Set<Object>|
|SREM|Remove one or more members from a set|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object)|Boolean|
|SPOP|Remove and return a random member from a set|RedisConstants.KEY/"CamelRedis.Key" (String)|String|
|SMOVE|Move a member from one set to another|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object), RedisConstants.DESTINATION/"CamelRedis.Destination" (String)|Boolean|
|SCARD|Get the number of members in a set|RedisConstants.KEY/"CamelRedis.Key" (String)|Long|
|SISMEMBER|Determine if a given value is a member of a set|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object)|Boolean|
|SINTER|Intersect multiple sets|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.KEYS/"CamelRedis.Keys" (String)|Set<Object>|
|SINTERSTORE|Intersect multiple sets and store the resulting set in a key|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.KEYS/"CamelRedis.Keys" (String), RedisConstants.DESTINATION/"CamelRedis.Destination" (String)|void|
|SUNION|Add multiple sets|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.KEYS/"CamelRedis.Keys" (String)|Set<Object>|
|SUNIONSTORE|Add multiple sets and store the resulting set in a key|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.KEYS/"CamelRedis.Keys" (String), RedisConstants.DESTINATION/"CamelRedis.Destination" (String)|void|
|SDIFF|Subtract multiple sets|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.KEYS/"CamelRedis.Keys" (String)|Set<Object>|
|SDIFFSTORE|Subtract multiple sets and store the resulting set in a key|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.KEYS/"CamelRedis.Keys" (String), RedisConstants.DESTINATION/"CamelRedis.Destination" (String)|void|
|SRANDMEMBER|Get one or multiple random members from a set|RedisConstants.KEY/"CamelRedis.Key" (String)|String|

|Ordered set Commands|Description|Parameters|Result|
|---|---|---|---|
|ZADD|Add one or more members to a sorted set, or update its score if it already exists|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object), RedisConstants.SCORE/"CamelRedis.Score" (Double)|Boolean|
|ZRANGE|Return a range of members in a sorted set, by index|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.START/"CamelRedis.Start" (Long), RedisConstants.END/"CamelRedis.End" (Long), RedisConstants.WITHSCORE/"CamelRedis.WithScore" (Boolean)|Object|
|ZREM|Remove one or more members from a sorted set|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object)|Boolean|
|ZINCRBY|Increment the score of a member in a sorted set|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object), RedisConstants.INCREMENT/"CamelRedis.Increment" (Double)|Double|
|ZRANK|Determine the index of a member in a sorted set|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object)|Long|
|ZREVRANK|Determine the index of a member in a sorted set, with scores ordered from high to low|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object)|Long|
|ZREVRANGE|Return a range of members in a sorted set, by index, with scores ordered from high to low|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.START/"CamelRedis.Start" (Long), RedisConstants.END/"CamelRedis.End" (Long), RedisConstants.WITHSCORE/"CamelRedis.WithScore" (Boolean)|Object|
|ZCARD|Get the number of members in a sorted set|RedisConstants.KEY/"CamelRedis.Key" (String)|Long|
|ZCOUNT|Count the members in a sorted set with scores within the given values|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.MIN/"CamelRedis.Min" (Double), RedisConstants.MAX/"CamelRedis.Max" (Double)|Long|
|ZRANGEBYSCORE|Return a range of members in a sorted set, by score|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.MIN/"CamelRedis.Min" (Double), RedisConstants.MAX/"CamelRedis.Max" (Double)|Set<Object>|
|ZREVRANGEBYSCORE|Return a range of members in a sorted set, by score, with scores ordered from high to low|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.MIN/"CamelRedis.Min" (Double), RedisConstants.MAX/"CamelRedis.Max" (Double)|Set<Object>|
|ZREMRANGEBYRANK|Remove all members in a sorted set within the given indexes|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.START/"CamelRedis.Start" (Long), RedisConstants.END/"CamelRedis.End" (Long)|void|
|ZREMRANGEBYSCORE|Remove all members in a sorted set within the given scores|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.START/"CamelRedis.Start" (Long), RedisConstants.END/"CamelRedis.End" (Long)|void|
|ZUNIONSTORE|Add multiple sorted sets and store the resulting sorted set in a new key|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.KEYS/"CamelRedis.Keys" (String), RedisConstants.DESTINATION/"CamelRedis.Destination" (String)|void|
|ZINTERSTORE|Intersect multiple sorted sets and store the resulting sorted set in a new key|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.KEYS/"CamelRedis.Keys" (String), RedisConstants.DESTINATION/"CamelRedis.Destination" (String)|void|

|String Commands|Description|Parameters|Result|
|---|---|---|---|
|SET|Set the string value of a key|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object)|void|
|GET|Get the value of a key|RedisConstants.KEY/"CamelRedis.Key" (String)|Object|
|STRLEN|Get the length of the value stored in a key|RedisConstants.KEY/"CamelRedis.Key" (String)|Long|
|APPEND|Append a value to a key|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (String)|Integer|
|SETBIT|Sets or clears the bit at offset in the string value stored at key|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.OFFSET/"CamelRedis.Offset" (Long), RedisConstants.VALUE/"CamelRedis.Value" (Boolean)|void|
|GETBIT|Returns the bit value at offset in the string value stored at key|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.OFFSET/"CamelRedis.Offset" (Long)|Boolean|
|SETRANGE|Overwrite part of a string at key starting at the specified offset|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object), RedisConstants.OFFSET/"CamelRedis.Offset" (Long)|void|
|GETRANGE|Get a substring of the string stored at a key|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.START/"CamelRedis.Start" (Long), RedisConstants.END/"CamelRedis.End" (Long)|String|
|SETNX|Set the value of a key only if the key does not exist|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object)|Boolean|
|SETEX|Set the value and expiration of a key|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object), RedisConstants.TIMEOUT/"CamelRedis.Timeout" (Long), SECONDS|void|
|DECRBY|Decrement the integer value of a key by the given number|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Long)|Long|
|DECR|Decrement the integer value of a key by one|RedisConstants.KEY/"CamelRedis.Key" (String)|Long|
|INCRBY|Increment the integer value of a key by the given amount|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Long)|Long|
|INCR|Increment the integer value of a key by one|RedisConstants.KEY/"CamelRedis.Key" (String)|Long|
|MGET|Get the values of all the given keys|RedisConstants.FIELDS/"CamelRedis.Fields" (Collection<String>)|List<Object>|
|MSET|Set multiple keys to multiple values|RedisConstants.VALUES/"CamelRedis.Values" (Map<String, Object>)|void|
|MSETNX|Set multiple keys to multiple values only if none of the keys exist|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object)|void|
|GETSET|Set the string value of a key and return its old value|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object)|Object|

|Key Commands|Description|Parameters|Result|
|---|---|---|---|
|EXISTS|Determine if a key exists|RedisConstants.KEY/"CamelRedis.Key" (String)|Boolean|
|DEL|Delete a key|RedisConstants.KEYS/"CamelRedis.Keys" (String)|void|
|TYPE|Determine the type stored at key|RedisConstants.KEY/"CamelRedis.Key" (String)|DataType|
|KEYS|Find all keys matching the given pattern|RedisConstants.PATTERN/"CamelRedis.Pattern" (String)|Collection<String>|
|RANDOMKEY|Return a random key from the keyspace|RedisConstants.PATTERN/"CamelRedis.Pattern" (String), RedisConstants.VALUE/"CamelRedis.Value" (String)|String|
|RENAME|Rename a key|RedisConstants.KEY/"CamelRedis.Key" (String)|void|
|RENAMENX|Rename a key, only if the new key does not exist|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (String)|Boolean|
|EXPIRE|Set a key’s time to live in seconds|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.TIMEOUT/"CamelRedis.Timeout" (Long)|Boolean|
|SORT|Sort the elements in a list, set or sorted set|RedisConstants.KEY/"CamelRedis.Key" (String)|List<Object>|
|PERSIST|Remove the expiration from a key|RedisConstants.KEY/"CamelRedis.Key" (String)|Boolean|
|EXPIREAT|Set the expiration for a key as a UNIX timestamp|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.TIMESTAMP/"CamelRedis.Timestamp" (Long)|Boolean|
|PEXPIRE|Set a key’s time to live in milliseconds|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.TIMEOUT/"CamelRedis.Timeout" (Long)|Boolean|
|PEXPIREAT|Set the expiration for a key as a UNIX timestamp specified in milliseconds|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.TIMESTAMP/"CamelRedis.Timestamp" (Long)|Boolean|
|TTL|Get the time to live for a key|RedisConstants.KEY/"CamelRedis.Key" (String)|Long|
|MOVE|Move a key to another database|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.DB/"CamelRedis.Db" (Integer)|Boolean|

|Geo Commands|Description|Parameters|Result|
|---|---|---|---|
|GEOADD|Adds the specified geospatial items (latitude, longitude, name) to the specified key|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.LATITUDE/"CamelRedis.Latitude" (Double), RedisConstants.LONGITUDE/"CamelRedis.Longitude" (Double), RedisConstants.VALUE/"CamelRedis.Value" (Object)|Long|
|GEODIST|Return the distance between two members in the geospatial index for the specified key|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUES/"CamelRedis.Values" (Object[])|Distance|
|GEOHASH|Return valid Geohash strings representing the position of an element in the geospatial index for the specified key|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object)|List<String>|
|GEOPOS|Return the positions (longitude, latitude) of an element in the geospatial index for the specified key|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object)|List<Point>|
|GEORADIUS|Return the elements in the geospatial index for the specified key which are within the borders of the area specified with the center location and the maximum distance from the center (the radius)|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.LATITUDE/"CamelRedis.Latitude" (Double), RedisConstants.LONGITUDE/"CamelRedis.Longitude" (Double), RedisConstants.RADIUS/"CamelRedis.Radius" (Double), RedisConstants.COUNT/"CamelRedis.Count" (Integer)|GeoResults|
|GEORADIUSBYMEMBER|This command is exactly like GEORADIUS with the sole difference that, instead of taking a longitude and latitude value as the center of the area to query, it takes the name of a member already existing inside the geospatial index for the specified key|RedisConstants.KEY/"CamelRedis.Key" (String), RedisConstants.VALUE/"CamelRedis.Value" (Object), RedisConstants.RADIUS/"CamelRedis.Radius" (Double), RedisConstants.COUNT/"CamelRedis.Count" (Integer)|GeoResults|


|Other Command|Description|Parameters|Result|
|---|---|---|---|
|MULTI|Mark the start of a transaction block|none|void|
|DISCARD|Discard all commands issued after MULTI|none|void|
|EXEC|Execute all commands issued after MULTI|none|void|
|WATCH|Watch the given keys to determine execution of the MULTI/EXEC block|RedisConstants.KEYS/"CamelRedis.Keys" (String)|void|
|UNWATCH|Forget about all watched keys|none|void|
|ECHO|Echo the given string|RedisConstants.VALUE/"CamelRedis.Value" (String)|String|
|PING|Ping the server|none|String|
|QUIT|Close the connection|none|void|
|PUBLISH|Post a message to a channel|RedisConstants.CHANNEL/"CamelRedis.Channel" (String), RedisConstants.MESSAGE/"CamelRedis.Message" (Object)|void|
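A possible pub/sub sketch combining the PUBLISH command above with a subscribing consumer route; the channel name, host/port, and `redisTemplate` bean name are assumptions:

```java
// PUBLISH posts the MESSAGE header to the CHANNEL header's channel;
// a separate consumer route subscribes to the same channel.
from("direct:publish")
    .setHeader(RedisConstants.CHANNEL, constant("news"))
    .setHeader(RedisConstants.MESSAGE, body())
    .to("spring-redis://localhost:6379?command=PUBLISH&redisTemplate=#redisTemplate");

from("spring-redis://localhost:6379?command=SUBSCRIBE&channels=news")
    .log("Got pub/sub message: ${body}");
```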

+ +# Dependencies + +Maven users will need to add the following dependency to their pom.xml. + +**pom.xml** + + + org.apache.camel + camel-spring-redis + ${camel-version} + + +where `${camel-version`} must be replaced by the actual version of +Camel. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|redisTemplate|Reference to a pre-configured RedisTemplate instance to use.||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|host|The host where Redis server is running.||string| +|port|Redis server port number||integer| +|channels|List of topic names or name patterns to subscribe to. Multiple names can be separated by comma.||string| +|command|Default command, which can be overridden by message header. Notice the consumer only supports the following commands: PSUBSCRIBE and SUBSCRIBE|SET|object| +|connectionFactory|Reference to a pre-configured RedisConnectionFactory instance to use.||object| +|redisTemplate|Reference to a pre-configured RedisTemplate instance to use.||object| +|serializer|Reference to a pre-configured RedisSerializer instance to use.||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. 
Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|listenerContainer|Reference to a pre-configured RedisMessageListenerContainer instance to use.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-spring-ws.md b/camel-spring-ws.md new file mode 100644 index 0000000000000000000000000000000000000000..d258178395933728baa64b762b81142ec8219edc --- /dev/null +++ b/camel-spring-ws.md @@ -0,0 +1,413 @@ +# Spring-ws + +**Since Camel 2.6** + +**Both producer and consumer are supported** + +The Spring WS component allows you to integrate with [Spring Web +Services](http://static.springsource.org/spring-ws/sites/1.5/). It +offers both *client*-side support, for accessing web services, and +*server*-side support for creating your own contract-first web services. 
+ +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-spring-ws + x.x.x + + + +**Be aware** Spring WS version 4.x does not support Axiom anymore +(because Axiom does not support Jakarta JEE 9) + +# URI format + +The URI scheme for this component is as follows + + spring-ws:[mapping-type:]address[?options] + +To expose a web service **mapping-type** needs to be set to any of the +following: + + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|Mapping type|Description|
|---|---|
|rootqname|Offers the option to map web service requests based on the qualified name of the root element contained in the message.|
|soapaction|Used to map web service requests based on the SOAP action specified in the header of the message.|
|uri|To map web service requests that target a specific URI.|
|xpathresult|Used to map web service requests based on the evaluation of an XPath expression against the incoming message. The result of the evaluation should match the XPath result specified in the endpoint URI.|
|beanname|Allows you to reference an org.apache.camel.component.spring.ws.bean.CamelEndpointDispatcher object to integrate with existing (legacy) endpoint mappings like PayloadRootQNameEndpointMapping, SoapActionEndpointMapping, etc.|
As a consumer, the **address** should contain a value relevant to the
specified mapping-type (e.g. a SOAP action, XPath expression). As a
producer, the address should be set to the URI of the web service you're
calling upon.

# Accessing web services

To call a web service at `http://foo.com/bar` simply define a route:

    from("direct:example").to("spring-ws:http://foo.com/bar")

And send a message:

    template.requestBody("direct:example", "test message");

Remember that if it's a SOAP service you're calling, you don't have to
include SOAP tags. Spring-WS will perform the XML-to-SOAP marshaling.

# Sending SOAP and WS-Addressing action headers

When a remote web service requires a SOAP action or use of the
WS-Addressing standard, you define your route as:

    from("direct:example")
        .to("spring-ws:http://foo.com/bar?soapAction=http://foo.com&wsAddressingAction=http://bar.com")

Optionally, you can override the endpoint options with header values:

    template.requestBodyAndHeader("direct:example",
        "test message",
        SpringWebserviceConstants.SPRING_WS_SOAP_ACTION, "http://baz.com");

# Using SOAP headers

You can provide the SOAP header(s) as a Camel Message header when
sending a message to a spring-ws endpoint. For example, given the
following SOAP header in a String:

    String body = ...
    String soapHeader = "12345678901111";

We can set the body and header on the Camel Message as follows:

    exchange.getIn().setBody(body);
    exchange.getIn().setHeader(SpringWebserviceConstants.SPRING_WS_SOAP_HEADER, soapHeader);

And then send the Exchange to a `spring-ws` endpoint to call the Web
Service.

Likewise, the spring-ws consumer will also enrich the Camel Message with
the SOAP header.

For example, see this [unit
test](https://svn.apache.org/repos/asf/camel/trunk/components/camel-spring-ws/src/test/java/org/apache/camel/component/spring/ws/SoapHeaderTest.java).
# The header and attachment propagation

Spring WS Camel supports propagation of headers and attachments into the
Spring-WS WebServiceMessage response. The endpoint uses a so-called
"hook", the MessageFilter (the default implementation is provided by
BasicMessageFilter), to propagate the exchange headers and attachments
into the WebServiceMessage response. Now you can use:

    exchange.getOut().getHeaders().put("myCustom","myHeaderValue")
    exchange.getIn().addAttachment("myAttachment", new DataHandler(...))

If an exchange header in the pipeline contains text, it generates a
Qname(key)=value attribute in the SOAP header. It is recommended to
create a QName class directly and put any key into the header.

# How to transform the soap header using a stylesheet

The header transformation filter
(HeaderTransformationMessageFilter.java) can be used to transform the
SOAP header of a SOAP request. If you want to use the header
transformation filter, see the below example:

Use the bean defined above in the camel endpoint

# The custom header and attachment filtering

If you need to provide your custom processing of either headers or
attachments, extend the existing BasicMessageFilter and override the
appropriate methods, or write a brand-new implementation of the
MessageFilter interface.
To use your custom filter, add this into your spring context:

You can specify either a global or a local message filter as follows:

- the global custom filter that provides the global configuration for
  all Spring-WS endpoints

- the local messageFilter directly on the endpoint as follows:

      to("spring-ws:http://yourdomain.com?messageFilter=#myEndpointSpecificMessageFilter");

For more information, see
[CAMEL-5724](https://issues.apache.org/jira/browse/CAMEL-5724)

If you want to create your own `MessageFilter`, consider overriding the
following methods in the default implementation of `MessageFilter` in
class `BasicMessageFilter`:

    protected void doProcessSoapHeader(Message inOrOut, SoapMessage soapMessage)
    { /* your code, no need to call super */ }

    protected void doProcessSoapAttachements(Message inOrOut, SoapMessage response)
    { /* your code, no need to call super */ }

# Using a custom MessageSender and MessageFactory

A custom message sender or factory in the registry can be referenced
like this:

    from("direct:example")
        .to("spring-ws:http://foo.com/bar?messageFactory=#messageFactory&messageSender=#messageSender")

Spring configuration:

# Exposing web services

To expose a web service using this component, you first need to set up a
[MessageDispatcher](http://static.springsource.org/spring-ws/sites/1.5/reference/html/server.html)
to look for endpoint mappings in a Spring XML file. If you plan on
running inside a servlet container, you probably want to use a
`MessageDispatcherServlet` configured in `web.xml`.

By default, the `MessageDispatcherServlet` will look for a Spring XML
file named `/WEB-INF/spring-ws-servlet.xml`. To use Camel with
Spring-WS, the only mandatory bean in that XML file is
`CamelEndpointMapping`. This bean allows the `MessageDispatcher` to
dispatch web service requests to your routes.
*web.xml*

    <web-app>
      <servlet>
        <servlet-name>spring-ws</servlet-name>
        <servlet-class>org.springframework.ws.transport.http.MessageDispatcherServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
      </servlet>
      <servlet-mapping>
        <servlet-name>spring-ws</servlet-name>
        <url-pattern>/*</url-pattern>
      </servlet-mapping>
    </web-app>

*spring-ws-servlet.xml*

More information on setting up Spring-WS can be found in [Writing
Contract-First Web
Services](http://static.springsource.org/spring-ws/sites/1.5/reference/html/tutorial.html).
Basically, paragraph 3.6 "Implementing the Endpoint" is handled by this
component (specifically, paragraph 3.6.2 "Routing the Message to the
Endpoint" is where `CamelEndpointMapping` comes in). Also remember to
check out the Spring Web Services Example included in the Camel
distribution.

# Endpoint mapping in routes

With the XML configuration in place, you can now use Camel's DSL to
define what web service requests are handled by your endpoint.

The following route will receive all web service requests that have a
root element named "GetFoo" within the `http://example.com/` namespace:

    from("spring-ws:rootqname:{http://example.com/}GetFoo?endpointMapping=#endpointMapping")
        .convertBodyTo(String.class).to("mock:example")

The following route will receive web service requests containing the
`http://example.com/GetFoo` SOAP action:

    from("spring-ws:soapaction:http://example.com/GetFoo?endpointMapping=#endpointMapping")
        .convertBodyTo(String.class).to("mock:example")

The following route will receive all requests sent to
`http://example.com/foobar`:

    from("spring-ws:uri:http://example.com/foobar?endpointMapping=#endpointMapping")
        .convertBodyTo(String.class).to("mock:example")

The route below will receive requests that contain the element
`abc` anywhere inside the message (and the default
namespace).
+ + from("spring-ws:xpathresult:abc?expression=//foobar&endpointMapping=#endpointMapping") + .convertBodyTo(String.class).to("mock:example") + +# Alternative configuration, using existing endpoint mappings + +For every endpoint with mapping-type `beanname`, one bean of type +`CamelEndpointDispatcher` with a corresponding name is required in the +Registry/ApplicationContext. This bean acts as a bridge between the +Camel endpoint and an existing [endpoint +mapping](http://static.springsource.org/spring-ws/sites/1.5/reference/html/server.html#server-endpoint-mapping) +like `PayloadRootQNameEndpointMapping`. + +The use of the `beanname` mapping-type is primarily meant for (legacy) +situations where you’re already using Spring-WS and have endpoint +mappings defined in a Spring XML file. The `beanname` mapping-type +allows you to wire your Camel route into an existing endpoint mapping. +When you’re starting from scratch, it’s recommended to define your +endpoint mappings as Camel URIs (as illustrated above with +`endpointMapping`) since that requires less configuration and is more +expressive. Alternatively, you could use vanilla Spring-WS with the help +of annotations. + +An example of a route using `beanname`: + + FutureEndpointDispatcher + QuoteEndpointDispatcher + +# POJO (un)marshalling + +Camel’s pluggable data formats offer support for POJO/XML marshalling +using libraries such as JAXB. You can use these data formats in your +route to send and receive POJOs, to and from web services.
+ +When *accessing* web services, you can marshal the request and unmarshal +the response message: + + JaxbDataFormat jaxb = new JaxbDataFormat(false); + jaxb.setContextPath("com.example.model"); + + from("direct:example").marshal(jaxb).to("spring-ws:http://foo.com/bar").unmarshal(jaxb); + +Similarly, when *providing* web services, you can unmarshal XML requests +to POJOs and marshal the response message back to XML: + + from("spring-ws:rootqname:{http://example.com/}GetFoo?endpointMapping=#endpointMapping").unmarshal(jaxb) + .to("mock:example").marshal(jaxb); + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible in future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|useGlobalSslContextParameters|Enable usage of global SSL context parameters.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|type|Endpoint mapping type if endpoint mapping is used. rootqname - Offers the option to map web service requests based on the qualified name of the root element contained in the message. soapaction - Used to map web service requests based on the SOAP action specified in the header of the message. uri - In order to map web service requests that target a specific URI. xpathresult - Used to map web service requests based on the evaluation of an XPath expression against the incoming message. The result of the evaluation should match the XPath result specified in the endpoint URI. beanname - Allows you to reference an org.apache.camel.component.spring.ws.bean.CamelEndpointDispatcher object in order to integrate with existing (legacy) endpoint mappings like PayloadRootQNameEndpointMapping, SoapActionEndpointMapping, etc||object| +|lookupKey|Endpoint mapping key if endpoint mapping is used||string| +|webServiceEndpointUri|The default Web Service endpoint uri to use for the producer.||string| +|messageFilter|Option to provide a custom MessageFilter. 
For example, when you want to process your headers or attachments on your own.||object| +|messageIdStrategy|Option to provide a custom MessageIdStrategy to control generation of WS-Addressing unique message ids.||object| +|endpointDispatcher|Spring org.springframework.ws.server.endpoint.MessageEndpoint for dispatching messages received by Spring-WS to a Camel endpoint, to integrate with existing (legacy) endpoint mappings like PayloadRootQNameEndpointMapping, SoapActionEndpointMapping, etc.||object| +|endpointMapping|Reference to an instance of org.apache.camel.component.spring.ws.bean.CamelEndpointMapping in the Registry/ApplicationContext. Only one bean is required in the registry to serve all Camel/Spring-WS endpoints. This bean is auto-discovered by the MessageDispatcher and used to map requests to Camel endpoints based on characteristics specified on the endpoint (like root QName, SOAP action, etc.)||object| +|expression|The XPath expression to use when type=xpathresult. This option is required in that case.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible in future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler.
Notice that if the option bridgeErrorHandler is enabled, then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|allowResponseAttachmentOverride|Option to override SOAP response attachments in the in/out exchange with attachments from the actual service layer. If the invoked service appends or rewrites the SOAP attachments, this option, when set to true, allows the modified SOAP attachments to overwrite the in/out message attachments|false|boolean| +|allowResponseHeaderOverride|Option to override the SOAP response header in the in/out exchange with header info from the actual service layer. If the invoked service appends or rewrites the SOAP header, this option, when set to true, allows the modified SOAP header to overwrite the in/out message headers|false|boolean| +|faultAction|Signifies the value for the faultAction response WS-Addressing Fault Action header that is provided by the method. See org.springframework.ws.soap.addressing.server.annotation.Action annotation for more details.||string| +|faultTo|Signifies the value for the FaultTo response WS-Addressing header that is provided by the method. See org.springframework.ws.soap.addressing.server.annotation.Action annotation for more details.||string| +|messageFactory|Option to provide a custom WebServiceMessageFactory.||object| +|messageSender|Option to provide a custom WebServiceMessageSender. For example, to perform authentication or use alternative transports.||object| +|outputAction|Signifies the value for the response WS-Addressing Action header that is provided by the method. See org.springframework.ws.soap.addressing.server.annotation.Action annotation for more details.||string| +|replyTo|Signifies the value for the replyTo response WS-Addressing ReplyTo header that is provided by the method.
See org.springframework.ws.soap.addressing.server.annotation.Action annotation for more details.||string| +|soapAction|SOAP action to include inside a SOAP request when accessing remote web services||string| +|timeout|Sets the socket read timeout (in milliseconds) while invoking a webservice using the producer, see URLConnection.setReadTimeout() and CommonsHttpMessageSender.setReadTimeout(). This option works when using the built-in message sender implementations: CommonsHttpMessageSender and HttpUrlConnectionMessageSender. One of these implementations will be used by default for HTTP based services unless you customize the Spring WS configuration options supplied to the component. If you are using a non-standard sender, it is assumed that you will handle your own timeout configuration. The built-in message sender HttpComponentsMessageSender is considered instead of CommonsHttpMessageSender which has been deprecated, see HttpComponentsMessageSender.setReadTimeout().||integer| +|webServiceTemplate|Option to provide a custom WebServiceTemplate. This allows for full control over client-side web services handling; like adding a custom interceptor or specifying a fault resolver, message sender or message factory.||object| +|wsAddressingAction|WS-Addressing 1.0 action header to include when accessing web services. The To header is set to the address of the web service as specified in the endpoint URI (default Spring-WS behavior).||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed, then creating and starting the producer may take a little time and prolong the total processing time.|false|boolean| +|sslContextParameters|To configure security using SSLContextParameters||object| diff --git a/camel-sql-stored.md b/camel-sql-stored.md new file mode 100644 index 0000000000000000000000000000000000000000..19ab892d2f9ab14404115156f49f50af9f15059a --- /dev/null +++ b/camel-sql-stored.md @@ -0,0 +1,209 @@ +# Sql-stored + +**Since Camel 2.17** + +**Only producer is supported** + +The SQL Stored component allows you to work with databases using JDBC +Stored Procedure queries. This component is an extension to the [SQL +Component](#sql-component.adoc) but specialized for calling stored +procedures. + +This component uses `spring-jdbc` behind the scenes for the actual SQL +handling. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-sql + x.x.x + + + +# URI format + +The SQL Stored component uses the following endpoint URI notation: + + sql-stored:template[?options] + +Where `template` is the stored procedure template, in which you declare the +name of the stored procedure and the IN, INOUT, and OUT arguments. + +You can also refer to the template in an external file on the file +system or classpath such as: + + sql-stored:classpath:sql/myprocedure.sql[?options] + +Where `sql/myprocedure.sql` is a plain text file in the classpath with the +template, as shown: + + SUBNUMBERS( + INTEGER ${headers.num1}, + INTEGER ${headers.num2}, + INOUT INTEGER ${headers.num3} out1, + OUT INTEGER out2 + ) + +# Declaring the stored procedure template + +The template is declared using a syntax similar to a Java +method signature: the name of the stored procedure, followed by the +arguments enclosed in parentheses.
An example explains this well: + +The arguments are declared by a type and then a mapping to the Camel +message using a Simple expression. So, in this example, the first two +parameters are IN values of INTEGER type, mapped to the message headers. +The third parameter is INOUT, meaning it accepts an INTEGER and then +returns a different INTEGER result. The last parameter is the OUT value, +also an INTEGER type. + +In SQL terms, the stored procedure could be declared as: + + CREATE PROCEDURE STOREDSAMPLE(VALUE1 INTEGER, VALUE2 INTEGER, INOUT RESULT1 INTEGER, OUT RESULT2 INTEGER) + +## IN Parameters + +IN parameters take four parts separated by a space: parameter name, SQL +type (with scale), type name, and value source. + +The parameter name is optional and will be auto-generated if not provided. +It must be given between single quotes ('). + +The SQL type is required and can be an integer (positive or negative) or a +reference to an integer field in some class. If the SQL type contains a dot, +then the component tries to resolve that class and read the given field. +For example, the SQL type `com.Foo.INTEGER` is read from the field INTEGER +of class `com.Foo`. If the type doesn’t contain a dot, then the class used to +resolve the integer value is `java.sql.Types`. The type can be +postfixed by a scale; for example, DECIMAL(10) would mean +`java.sql.Types.DECIMAL` with scale 10. + +The type name is optional and must be given between single quotes ('). + +The value source is required. The value source populates the parameter value +from the Exchange. It can be either a Simple expression or a header +location, i.e. `:#headerName`. For example, the Simple expression +`${header.val}` would mean that the parameter value is read from the +header `val`. The header location expression `:#val` would have the identical +effect. + +This URI means that the stored procedure will be called with the parameter name +*param1*, its SQL type is read from the field INTEGER of class +`org.example.Types`, and the scale is set to 10. The input value for the +parameter is passed from the header *srcValue*. + +This URI is identical to the previous one except that the SQL type is 100 and the type name is +*mytypename*. + +The actual call will be done using +org.springframework.jdbc.core.SqlParameter. + +## OUT Parameters + +OUT parameters work similarly to IN parameters and contain three parts: SQL +type (with scale), type name, and output parameter name. + +The SQL type works the same as for IN parameters. + +The type name is optional and also works the same as for IN parameters. + +The output parameter name is used for the OUT parameter name, as well as the +header name where the result will be stored. + +This URI means that the OUT parameter’s name is `outheader1` and the result will +be put into the header `outheader1`. + +This is identical to the previous one, but the type name will be `mytype`. + +The actual call will be done using +`org.springframework.jdbc.core.SqlOutParameter`. + +## INOUT Parameters + +INOUT parameters are a combination of all of the above. They receive a +value from the exchange, as well as store a result as a message header. +The only caveat is that the IN parameter’s "name" is skipped. Instead, +the OUT parameter’s *name* defines both the SQL parameter name and the +result header name. + +The actual call will be done using +org.springframework.jdbc.core.SqlInOutParameter. + +## Query Timeout + +You can configure the query timeout (via `template.queryTimeout`) on +statements used for query processing, as shown: + +This will be overridden by the remaining transaction timeout when +executing within a transaction that has a timeout specified at the +transaction level.
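Putting the template pieces above together, a route that invokes a stored-procedure template might look like the following sketch, to be placed inside a `RouteBuilder.configure()` method. This is illustrative only: the `direct:subtract` endpoint and the header values are assumptions, the template is the SUBNUMBERS example from earlier, and a `DataSource` must be configured on the component.

```java
// Illustrative route calling the SUBNUMBERS template shown earlier.
// num1/num2/num3 feed the IN and INOUT parameters; per the OUT/INOUT
// sections above, out1 and out2 come back as message headers.
from("direct:subtract")
    .setHeader("num1", constant(10))
    .setHeader("num2", constant(4))
    .setHeader("num3", constant(0))
    .to("sql-stored:classpath:sql/myprocedure.sql")
    .log("out1 = ${header.out1}, out2 = ${header.out2}");
```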
+ +# Camel SQL Starter + +A starter module is available to spring-boot users. When using the +starter, the `DataSource` can be directly configured using spring-boot +properties. + + # Example for a mysql datasource + spring.datasource.url=jdbc:mysql://localhost/test + spring.datasource.username=dbuser + spring.datasource.password=dbpass + spring.datasource.driver-class-name=com.mysql.jdbc.Driver + +To use this feature, add the following dependencies to your spring boot +pom.xml file: + + + org.apache.camel.springboot + camel-sql-starter + ${camel.version} + + + + org.springframework.boot + spring-boot-starter-jdbc + ${spring-boot-version} + + +You should also include the specific database driver, if needed. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|dataSource|Sets the DataSource to use to communicate with the database.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|serviceLocationEnabled|Whether to detect the network address location of the JMS broker on startup. 
This information is gathered via reflection on the ConnectionFactory, and is vendor specific. This option can be used to turn this off.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|template|Sets the stored procedure template to perform. You can externalize the template by using file: or classpath: as prefix and specify the location of the file.||string| +|batch|Enables or disables batch mode|false|boolean| +|dataSource|Sets the DataSource to use to communicate with the database.||object| +|function|Whether this call is for a function.|false|boolean| +|noop|If set, will ignore the results of the stored procedure template and use the existing IN message as the OUT message for the continuation of processing|false|boolean| +|outputHeader|Store the template result in a header instead of the message body. By default, outputHeader == null and the template result is stored in the message body, any existing content in the message body is discarded. If outputHeader is set, the value is used as the name of the header to store the template result and the original message body is preserved.||string| +|useMessageBodyForTemplate|Whether to use the message body as the stored procedure template and then headers for parameters. If this option is enabled then the template in the uri is not used.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed, then creating and starting the producer may take a little time and prolong the total processing time.|false|boolean| +|templateOptions|Configures the Spring JdbcTemplate with the key/values from the Map||object| diff --git a/camel-sql.md b/camel-sql.md new file mode 100644 index 0000000000000000000000000000000000000000..11cda99c408b24a744a1843736ba51a0bf350a8a --- /dev/null +++ b/camel-sql.md @@ -0,0 +1,868 @@ +# Sql + +**Since Camel 1.4** + +**Both producer and consumer are supported** + +The SQL component allows you to work with databases using JDBC queries. +The difference between this component and the [JDBC](#jdbc-component.adoc) +component is that with SQL, the query is a property of the +endpoint, and the message payload is used as the parameters passed to the query. + +This component uses `spring-jdbc` behind the scenes for the actual SQL +handling. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-sql + x.x.x + + + +The SQL component also supports: + +- A JDBC-based repository for the Idempotent Consumer EIP pattern. See + further details below. + +- A JDBC-based repository for the Aggregator EIP pattern. See further + details below. + +# URI format + +This component can be used as a [Transactional +Client](#eips:transactional-client.adoc). + +The SQL component uses the following endpoint URI notation: + + sql:select * from table where id=# order by name[?options] + +You can use named parameters by using the `:#name_of_the_parameter` style as +shown: + + sql:select * from table where id=:#myId order by name[?options] + +When using named parameters, Camel will look up the names in the given +precedence: + +1. from a [Simple](#languages:simple-language.adoc) expression + +2. from the message body if it is a `java.util.Map` + +3. from message headers + +4.
from exchange variables + +If a named parameter cannot be resolved, then an exception is thrown. + +You can use [Simple](#languages:simple-language.adoc) expressions as +parameters, as shown: + + sql:select * from table where id=:#${exchangeProperty.myId} order by name[?options] + +The [Simple](#languages:simple-language.adoc) language can also be used with +POJO message bodies, to use getters for SQL parameters, such as: + + sql:insert into project (FIRST, LAST, CONTACT_MAIL) + values (:#${body.firstName}, :#${body.lastName}, :#${body.email}) + +See [Simple](#languages:simple-language.adoc) for more +complex syntax that can be used in SQL queries than the small +example above. + +Notice that the standard `?` symbol that denotes the parameters to an +SQL query is substituted with the `#` symbol, because the `?` symbol is +used to specify options for the endpoint. The `?` symbol replacement can +be configured on an endpoint basis. + +You can externalize your SQL queries to files in the classpath or file +system, as shown: + + sql:classpath:sql/myquery.sql[?options] + +And the `myquery.sql` file in the classpath is just plain text: + + -- this is a comment + select * + from table + where + id = :#${exchangeProperty.myId} + order by + name + +In the file, the SQL can span multiple lines and be formatted as you wish. +You can also use comments, such as the `--` dash line. + +# Treatment of the message body + +The SQL component tries to convert the message body to an object of +`java.util.Iterator` type and then uses this iterator to fill the query +parameters (where each query parameter is represented by a `#` symbol +(or configured placeholder) in the endpoint URI). If the message body is +not an array or collection, the conversion results in an iterator that +iterates over only one object, which is the body itself.
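As a minimal sketch of this positional behaviour (the `direct:insert` endpoint and the `projects` table columns here are hypothetical), for use inside a `RouteBuilder.configure()` method:

```java
// A List body fills the positional '#' placeholders in order:
// "Camel" goes to the first '#', "ASF" to the second.
from("direct:insert")
    .setBody(constant(java.util.Arrays.asList("Camel", "ASF")))
    .to("sql:insert into projects (project, license) values (#, #)");
```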
+ +For example, if the message body is an instance of `java.util.List`, the +first item in the list is substituted into the first occurrence of `#` +in the SQL query, the second item in the list is substituted into the +second occurrence of `#`, and so on. + +If `batch` is set to `true`, then the interpretation of the inbound +message body changes slightly: instead of an iterator of parameters, +the component expects an iterator that contains the parameter iterators; +the size of the outer iterator determines the batch size. + +You can use the option `useMessageBodyForSql` to use the +message body as the SQL statement; the SQL parameters must then be +provided in a header with the key `SqlConstants.SQL_PARAMETERS`. This +allows the SQL component to work more dynamically, as the SQL query comes +from the message body. Use templating (such as +[Velocity](#components::velocity-component.adoc) or +[Freemarker](#components::freemarker-component.adoc)) for conditional +processing, e.g., to include or exclude `where` clauses depending on the +presence of query parameters. + +# Result of the query + +For `select` operations, the result is an instance of +`List<Map<String, Object>>` type, as returned by the +`JdbcTemplate.queryForList()` +method. For `update` operations, a `NULL` body is returned, as the result of the +`update` operation is only set as a header and never as a body. + +By default, the result is placed in the message body. If the +outputHeader parameter is set, the result is placed in the header. This +is an alternative to using a full message enrichment pattern to add +headers; it provides a concise syntax for querying a sequence or some +other small value into a header.
It is convenient to use outputHeader +and outputType together: + + from("jms:order.inbox") + .to("sql:select order_seq.nextval from dual?outputHeader=OrderId&outputType=SelectOne") + .to("jms:order.booking"); + +# Using StreamList + +The producer supports `outputType=StreamList` that uses an iterator to +stream the output of the query. This allows processing the data in a +streaming fashion which, for example, can be used by the Splitter EIP to +process each row one at a time, and load data from the database as +needed. + + from("direct:withSplitModel") + .to("sql:select * from projects order by id?outputType=StreamList&outputClass=org.apache.camel.component.sql.ProjectModel") + .to("log:stream") + .split(body()).streaming() + .to("log:row") + .to("mock:result") + .end(); + +# Generated keys + +If you insert data using SQL INSERT, then the RDBMS may support auto +generated keys. You can instruct the SQL producer to return the +generated keys in headers. To do that set the header +`CamelSqlRetrieveGeneratedKeys=true`. Then the generated keys will be +provided as headers with the keys listed in the table above. + +To specify which generated columns should be retrieved, set the header +`CamelSqlGeneratedColumns` to a `String[]` or `int[]`, indicating the +column names or indexes, respectively. Some databases require this, such +as Oracle. It may also be necessary to use the `parametersCount` option +if the driver cannot correctly determine the number of parameters. + +You can see more details in this [unit +test](https://gitbox.apache.org/repos/asf?p=camel.git;a=blob_plain;f=components/camel-sql/src/test/java/org/apache/camel/component/sql/SqlGeneratedKeysTest.java;hb=HEAD). + +# DataSource + +You can set a reference to a `DataSource` in the URI directly: + + sql:select * from table where id=# order by name?dataSource=#myDS + +# Using named parameters + +In the given route below, we want to get all the projects from the +`projects` table. 
Notice the SQL query has two named parameters, `:#lic` +and `:#min`. Camel will look up these parameters from the +message body, message headers, and exchange variables. + +In the example below, we set two headers with constant values for +the named parameters: + + from("direct:projects") + .setHeader("lic", constant("ASF")) + .setHeader("min", constant(123)) + .to("sql:select * from projects where license = :#lic and id > :#min order by id") + +If the message body is a `java.util.Map`, then the named +parameters will instead be taken from the body: + + from("direct:projects") + .to("sql:select * from projects where license = :#lic and id > :#min order by id") + +# Using expression parameters in producers + +In the route below, we want to get all the projects from the +database. It uses the body of the exchange for defining the license and +uses the value of a property as the second parameter. + + from("direct:projects") + .setBody(constant("ASF")) + .setProperty("min", constant(123)) + .to("sql:select * from projects where license = :#${body} and id > :#${exchangeProperty.min} order by id") + +## Using expression parameters in consumers + +When using the SQL component as a consumer, you can also use +expression parameters (Simple language) to build dynamic query +parameters, such as calling a method on a bean to retrieve an id, a date, +or similar. + +For example, in the sample below we call the `nextId` method on the bean +`myIdGenerator`: + + from("sql:select * from projects where id = :#${bean:myIdGenerator.nextId}") + .to("mock:result"); + +And the bean has the following method: + + public static class MyIdGenerator { + + private int id = 1; + + public int nextId() { + return id++; + } + + } + +Notice that there is no existing `Exchange` with a message body and +headers, so the Simple expressions you can use in the consumer are mostly +useful for calling bean methods, as in this example.
+ +# Using IN queries with dynamic values + +The SQL producer allows using SQL queries with `IN` statements where the +`IN` values are dynamically computed, for example, from the message body +or a header. + +To use IN you need to: + +- prefix the parameter name with `in:` + +- add `( )` around the parameter + +An example explains this better. The following query is used: + + -- this is a comment + select * + from projects + where project in (:#in:names) + order by id + +In the following route: + + from("direct:query") + .to("sql:classpath:sql/selectProjectsIn.sql") + .to("log:query") + .to("mock:query"); + +Then the IN query can use a header with the key `names` with the dynamic +values, such as: + + // use an array + template.requestBodyAndHeader("direct:query", "Hi there!", "names", new String[]{"Camel", "AMQ"}); + + // use a list + List<String> names = new ArrayList<>(); + names.add("Camel"); + names.add("AMQ"); + + template.requestBodyAndHeader("direct:query", "Hi there!", "names", names); + + // use comma separated values in a string + template.requestBodyAndHeader("direct:query", "Hi there!", "names", "Camel,AMQ"); + +The query can also be specified in the endpoint instead of being +externalized (notice that externalizing makes maintaining the SQL +queries easier): + + from("direct:query") + .to("sql:select * from projects where project in (:#in:names) order by id") + .to("log:query") + .to("mock:query"); + +If the dynamic list of values is stored in the message body, you can use +`(:#in:${body})` to refer to the message body, such as: + + -- this is a comment + select * + from projects + where project in (:#in:${body}) + order by id + +# Using the JDBC-based idempotent repository + +In this section, we will use the JDBC-based idempotent repository. + +**Abstract class** + +There is an abstract class +`org.apache.camel.processor.idempotent.jdbc.AbstractJdbcMessageIdRepository` +you can extend to build a custom JDBC idempotent repository.
+ +First, we have to create the database table which will be used by the +idempotent repository. We use the following schema: + + CREATE TABLE CAMEL_MESSAGEPROCESSED ( processorName VARCHAR(255), + messageId VARCHAR(100), createdAt TIMESTAMP, PRIMARY KEY (processorName, messageId) ) + +The SQL Server **TIMESTAMP** type is a fixed-length binary-string type. +It does not map to any of the JDBC time types: **DATE**, **TIME**, or +**TIMESTAMP**. + +The above SQL is consistent with most popular SQL vendors. + +When working with concurrent consumers, it is crucial to create a unique +constraint on the column combination of processorName and messageId. +This constraint prevents multiple consumers from adding the +same key to the repository and allows only one consumer to handle the +message. + +The SQL above includes the constraint by creating a primary key. If you +prefer to use a different constraint, or your SQL server uses a +different syntax for table creation, you can create the table yourself +using the above schema as a starting point. + +## Customize the JDBC idempotency repository + +You have a few options to tune the +`org.apache.camel.processor.idempotent.jdbc.JdbcMessageIdRepository` for +your needs:

Parameter

Default Value

Description

createTableIfNotExists

true

Defines whether Camel should try to +create the table if it doesn’t exist.

tableName

CAMEL_MESSAGEPROCESSED

To use a custom table name instead of +the default name: CAMEL_MESSAGEPROCESSED.

tableExistsString

SELECT 1 FROM CAMEL_MESSAGEPROCESSED WHERE 1 = 0

This query is used to figure out +whether the table already exists or not. It must throw an exception to +indicate the table doesn’t exist.

createString

CREATE TABLE CAMEL_MESSAGEPROCESSED (processorName VARCHAR(255),messageId VARCHAR(100), createdAt TIMESTAMP)

The statement which is used to create +the table.

queryString

SELECT COUNT(*) FROM CAMEL_MESSAGEPROCESSED WHERE processorName = ? AND messageId = ?

The query which is used to figure out +whether the message already exists in the repository (the result is not +equals to 0). It takes two parameters. This first one is the +processor name (String) and the second one is the message +id (String).

insertString

INSERT INTO CAMEL_MESSAGEPROCESSED (processorName, messageId, createdAt) VALUES (?, ?, ?)

The statement which is used to add the +entry into the table. It takes three parameters. The first one is the +processor name (String), the second one is the message id +(String) and the third one is the timestamp +(java.sql.Timestamp) when this entry was added to the +repository.

deleteString

DELETE FROM CAMEL_MESSAGEPROCESSED WHERE processorName = ? AND messageId = ?

The statement which is used to delete +the entry from the database. It takes two parameters. This first one is +the processor name (String) and the second one is the +message id (String).

The option `tableName` can be used to keep the default SQL queries but
with a different table name. However, if you want to customize the SQL
queries, then you can configure each of them individually.

## Orphan Lock aware Jdbc IdempotentRepository

One of the limitations of
`org.apache.camel.processor.idempotent.jdbc.JdbcMessageIdRepository` is
that it does not handle orphan locks resulting from a JVM crash or
non-graceful shutdown. This can result in unprocessed files/messages if
this implementation is used with camel-file, camel-ftp, etc. If you need
orphan lock handling, use
`org.apache.camel.processor.idempotent.jdbc.JdbcOrphanLockAwareIdempotentRepository`.
This repository keeps track of the locks held by an instance of the
application. For each lock held, the application sends keep-alive
signals to the lock repository, updating the createdAt column with the
current timestamp. When an application instance tries to acquire a lock,
there are three possibilities:

- The lock entry does not exist: the lock is provided using the base
  implementation of `JdbcMessageIdRepository`.

- The lock already exists and `createdAt` \>=
  `System.currentTimeMillis() - lockMaxAgeMillis`. In this case, it is
  assumed that an active instance holds the lock, and the lock is not
  provided to the new instance requesting the lock.

- The lock already exists and `createdAt` \<
  `System.currentTimeMillis() - lockMaxAgeMillis`. In this case, it is
  assumed that there is no active instance holding the lock, and the
  lock is provided to the requesting instance. The reasoning is that if
  the instance which originally held the lock were still running, its
  keep-alive mechanism would have updated the createdAt timestamp.

This repository has two additional configuration parameters:

|Parameter|Description|
|---|---|
|lockMaxAgeMillis|The duration after which the lock is considered orphaned, i.e., if currentTimestamp - createdAt >= lockMaxAgeMillis then the lock is orphaned.|
|lockKeepAliveIntervalMillis|The frequency at which keep-alive updates are made to the createdAt timestamp column.|
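The lock-takeover rule can be sketched in plain Java (illustrative; the method name is not part of the Camel API):

```java
public class OrphanLockCheck {
    /**
     * Illustrative sketch of the takeover rule: an existing lock may be
     * acquired only once its createdAt timestamp is older than
     * lockMaxAgeMillis, i.e. the keep-alive updates have stopped.
     */
    static boolean canAcquireExistingLock(long createdAt, long now, long lockMaxAgeMillis) {
        // createdAt >= now - lockMaxAgeMillis => keep-alive is recent, holder assumed alive
        return createdAt < now - lockMaxAgeMillis;
    }
}
```

For example, with `lockMaxAgeMillis` of 30 seconds, a lock refreshed one second ago is still held, while a lock last refreshed a minute ago is treated as orphaned.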
## Caching Jdbc IdempotentRepository

Some SQL implementations are not fast on a per-query basis. The
`JdbcMessageIdRepository` implementation does its idempotent checks
individually within SQL transactions; checking a mere 100 keys can take
minutes. The `JdbcCachedMessageIdRepository` preloads an in-memory cache
on startup with the entire list of keys. This cache is then checked
first before passing through to the original implementation.

As with all cache implementations, consider the implications of stale
data for your specific usage.

# Using the JDBC-based aggregation repository

`JdbcAggregationRepository` is an `AggregationRepository` which persists
the aggregated messages on the fly. This ensures that you will not lose
messages, as the default aggregator uses an in-memory-only
`AggregationRepository`. Together with Camel, the
`JdbcAggregationRepository` provides persistent support for the
Aggregator.

Only when an Exchange has been successfully processed will it be marked
as complete, which happens when the `confirm` method is invoked on the
`AggregationRepository`. This means that if the same Exchange fails
again, it will keep being retried until it succeeds.

You can use the option `maximumRedeliveries` to limit the maximum number
of redelivery attempts for a given recovered Exchange. You must also set
the `deadLetterUri` option so Camel knows where to send the Exchange
when the `maximumRedeliveries` limit is hit.

You can see some examples in the unit tests of camel-sql, for example
`JdbcAggregateRecoverDeadLetterChannelTest.java`.

## Database

To be operational, each aggregator uses two tables: the aggregation
table and the completed table. By convention, the completed table has
the same name as the aggregation table, suffixed with `"_COMPLETED"`.
The name must be configured in the Spring bean with the `RepositoryName`
property. In the following example, `aggregation` will be used.
The table structure definition of both tables is identical: in both
cases, a String value is used as the key (**id**) whereas a Blob
contains the exchange serialized to a byte array. However, one
difference should be remembered: the content of the **id** field depends
on the table. In the aggregation table, **id** holds the correlation id
used by the component to aggregate the messages. In the completed table,
**id** holds the id of the exchange stored in the corresponding blob
field.

Here is the SQL query used to create the tables. Replace `"aggregation"`
with your aggregator repository name.

    CREATE TABLE aggregation (
        id varchar(255) NOT NULL,
        exchange blob NOT NULL,
        version BIGINT NOT NULL,
        constraint aggregation_pk PRIMARY KEY (id)
    );
    CREATE TABLE aggregation_completed (
        id varchar(255) NOT NULL,
        exchange blob NOT NULL,
        version BIGINT NOT NULL,
        constraint aggregation_completed_pk PRIMARY KEY (id)
    );

## Storing body and headers as text

You can configure the `JdbcAggregationRepository` to store the message
body and selected headers as String in separate columns.
For example, to store the body and the two headers `companyName` and
`accountName`, use the following SQL:

    CREATE TABLE aggregationRepo3 (
        id varchar(255) NOT NULL,
        exchange blob NOT NULL,
        version BIGINT NOT NULL,
        body varchar(1000),
        companyName varchar(1000),
        accountName varchar(1000),
        constraint aggregationRepo3_pk PRIMARY KEY (id)
    );
    CREATE TABLE aggregationRepo3_completed (
        id varchar(255) NOT NULL,
        exchange blob NOT NULL,
        version BIGINT NOT NULL,
        body varchar(1000),
        companyName varchar(1000),
        accountName varchar(1000),
        constraint aggregationRepo3_completed_pk PRIMARY KEY (id)
    );

And then configure the repository to enable this behavior as shown
below:

    <!-- bean id and wiring are illustrative; the relevant options are
         storeBodyAsText and headersToStoreAsText -->
    <bean id="repo3"
          class="org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository">
      <property name="repositoryName" value="aggregationRepo3"/>
      <property name="storeBodyAsText" value="true"/>
      <property name="headersToStoreAsText">
        <list>
          <value>companyName</value>
          <value>accountName</value>
        </list>
      </property>
    </bean>

## Codec (Serialization)

Since they can contain any type of payload, Exchanges are not
serializable by design. Each exchange is therefore converted into a byte
array to be stored in a database BLOB field. All those conversions are
handled by the `JdbcCodec` class. One detail of the code requires your
attention: the `ClassLoadingAwareObjectInputStream`.

The `ClassLoadingAwareObjectInputStream` has been reused from the
[Apache ActiveMQ](http://activemq.apache.org/) project. It wraps an
`ObjectInputStream` and uses it with the `ContextClassLoader` rather
than the `currentThread` one. The benefit is being able to load classes
exposed by other bundles. This allows the exchange body and headers to
reference custom-typed objects.

While deserializing, note that the decode function and the
unmarshallExchange method only allow classes from the `java` and
`org.apache.camel` packages and their subpackages; all remaining classes
are blocked. If you need to allow other classes, change the filter by
setting the `deserializationFilter` field on the repository.
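The default allow-list described above behaves roughly like the following predicate (a plain-Java sketch; the real filtering is done through the repository's deserializationFilter):

```java
public class DeserializationAllowList {
    /**
     * Sketch of the default rule: only classes under the java.* and
     * org.apache.camel.* package trees may be deserialized.
     */
    static boolean allowed(String className) {
        return className.startsWith("java.")
                || className.startsWith("org.apache.camel.");
    }
}
```

Anything outside those two package trees, such as an application-specific payload class, would be rejected until the filter is widened.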
## Transaction

A Spring `PlatformTransactionManager` is required to orchestrate
transactions.

### Service (Start/Stop)

The `start` method verifies the database connection and the presence of
the required tables. If anything is wrong, it will fail during startup.

## Aggregator configuration

Depending on the targeted environment, the aggregator might need some
configuration. As you already know, each aggregator should have its own
repository (with the corresponding pair of tables created in the
database) and a data source. If the default lobHandler is not adapted to
your database system, it can be injected with the `lobHandler` property.

Here is a declaration for Oracle (bean ids and wiring are illustrative):

    <bean id="lobHandler" class="org.springframework.jdbc.support.lob.OracleLobHandler"/>

    <bean id="repo"
          class="org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository">
      <property name="transactionManager" ref="txManager"/>
      <property name="repositoryName" value="aggregation"/>
      <property name="dataSource" ref="dataSource"/>
      <property name="lobHandler" ref="lobHandler"/>
    </bean>

## Optimistic locking

You can turn on `optimisticLocking` and use this JDBC-based aggregation
repository in a clustered environment where multiple Camel applications
share the same database for the aggregation repository. If there is a
race condition, the JDBC driver will throw a vendor-specific exception,
which the `JdbcAggregationRepository` can react upon. To know which
exceptions thrown by the JDBC driver should be regarded as optimistic
locking errors, a mapper is needed. The interface
`org.apache.camel.processor.aggregate.jdbc.JdbcOptimisticLockingExceptionMapper`
allows you to implement custom logic if needed. The default
implementation,
`org.apache.camel.processor.aggregate.jdbc.DefaultJdbcOptimisticLockingExceptionMapper`,
performs the following checks:

- If the caused exception is an `SQLException`, the SQLState is checked
  for whether it starts with 23.

- If the caused exception is a `DataIntegrityViolationException`.

- If the class name of the caused exception has *ConstraintViolation*
  in its name.

- Optional checking for FQN class name matches, if any class names have
  been configured.
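The SQLState and class-name checks can be sketched in plain Java (illustrative; the real logic lives in `DefaultJdbcOptimisticLockingExceptionMapper`):

```java
import java.sql.SQLException;

public class OptimisticLockErrorSketch {
    /** Sketch of the default mapper's SQLState and class-name checks. */
    static boolean isOptimisticLockingError(Throwable cause) {
        if (cause instanceof SQLException) {
            String state = ((SQLException) cause).getSQLState();
            // SQLState class 23 = integrity constraint violation
            if (state != null && state.startsWith("23")) {
                return true;
            }
        }
        // covers exception types whose class name contains "ConstraintViolation",
        // such as Spring's DataIntegrityViolationException subclasses
        return cause.getClass().getSimpleName().contains("ConstraintViolation");
    }
}
```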
You can, in addition, add FQN class names, and if any of the caused
exceptions (or any nested exception) equals any of those FQN class
names, then it is regarded as an optimistic locking error.

Here is an example where we define two extra FQN class names from the
JDBC vendor (the bean id is illustrative):

    <bean id="myExceptionMapper"
          class="org.apache.camel.processor.aggregate.jdbc.DefaultJdbcOptimisticLockingExceptionMapper">
      <property name="classNames">
        <set>
          <value>com.foo.sql.MyViolationException</value>
          <value>com.foo.sql.MyOtherViolationException</value>
        </set>
      </property>
    </bean>

## Propagation behavior

`JdbcAggregationRepository` uses two distinct *transaction templates*
from Spring-TX: one is read-only and one is used for read-write
operations.

However, when `JdbcAggregationRepository` is used within a route that
itself uses `<transacted />` and a common `PlatformTransactionManager`
is used, there may be a need to configure the *propagation behavior*
used by the transaction templates inside `JdbcAggregationRepository`.

Here's a way to do it (the bean id and propagation value are examples):

    <bean id="repo"
          class="org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository">
      <property name="propagationBehaviorName" value="PROPAGATION_NESTED"/>
    </bean>

Propagation is specified by constants of the
`org.springframework.transaction.TransactionDefinition` interface, so
`propagationBehaviorName` is a convenience setter that allows using the
names of those constants.

## Clustering

`JdbcAggregationRepository` does not provide recovery in a clustered
environment.

You may use `ClusteredJdbcAggregationRepository`, which provides limited
support for recovery in a clustered environment: recovery is handled
separately by each member of the cluster, i.e., a member may only
recover exchanges that it completed itself.

To enable this behavior, the property `recoverByInstance` must be set to
true, and the `instanceId` property must be set to a unique identifier
(a string) for each member of the cluster.

In addition, the completed table must have an
`instance_id VARCHAR(255)` column.

Since each member is solely responsible for the recovery of its own
completed exchanges, if a member is stopped, its completed exchanges
will not be recovered until it is restarted, unless you update the
completed table to assign them to another member (by changing
`instance_id` for those completed exchanges).
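The per-instance recovery rule can be pictured with a small plain-Java sketch (class and column names mirror the description above; this is not the Camel implementation): a member scanning the completed table only considers rows whose `instance_id` matches its own `instanceId`.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class RecoveryByInstance {
    /**
     * Illustrative filter: keep only completed exchanges owned by this
     * cluster member, identified by its instanceId.
     */
    static List<Map<String, String>> recoverable(List<Map<String, String>> completedRows,
                                                 String instanceId) {
        return completedRows.stream()
                .filter(row -> instanceId.equals(row.get("instance_id")))
                .collect(Collectors.toList());
    }
}
```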
## PostgreSQL case

There is one database that may cause problems with the optimistic
locking used by `JdbcAggregationRepository`: PostgreSQL marks the
connection as invalid in case of a data integrity violation exception
(the one with SQLState 23505). This makes the connection effectively
unusable within a nested transaction. Details can be found [in this
document](https://www.postgresql.org/message-id/200609241203.59292.ralf.wiebicke%40exedio.com).

`org.apache.camel.processor.aggregate.jdbc.PostgresAggregationRepository`
extends `JdbcAggregationRepository` and uses a special
`INSERT .. ON CONFLICT ..` statement to provide optimistic locking
behavior.

This statement is (with the default aggregation table definition):

    INSERT INTO aggregation (id, exchange) values (?, ?) ON CONFLICT DO NOTHING

Details can be found [in PostgreSQL
documentation](https://www.postgresql.org/docs/9.5/sql-insert.html).

When this clause is used, the `java.sql.PreparedStatement.executeUpdate()`
call returns `0` instead of throwing an SQLException with
SQLState=23505. Further handling is exactly the same as with the generic
`JdbcAggregationRepository`, but without marking the PostgreSQL
connection as invalid.

# Camel Sql Starter

A starter module is available for spring-boot users. When using the
starter, the `DataSource` can be directly configured using spring-boot
properties:

    # Example for a MySQL datasource
    spring.datasource.url=jdbc:mysql://localhost/test
    spring.datasource.username=dbuser
    spring.datasource.password=dbpass
    spring.datasource.driver-class-name=com.mysql.jdbc.Driver

To use this feature, add the following dependencies to your Spring Boot
pom.xml file:

    <dependency>
      <groupId>org.apache.camel.springboot</groupId>
      <artifactId>camel-sql-starter</artifactId>
      <version>${camel.version}</version>
    </dependency>

    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-jdbc</artifactId>
      <version>${spring-boot-version}</version>
    </dependency>

You should also include the specific database driver, if needed.
+ +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|dataSource|Sets the DataSource to use to communicate with the database.||object| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
|rowMapperFactory|Factory for creating RowMapper||object|
|serviceLocationEnabled|Whether to detect the network address location of the JMS broker on startup. This information is gathered via reflection on the ConnectionFactory, and is vendor specific. This option can be used to turn this off.|true|boolean|
|usePlaceholder|Sets whether to use placeholder and replace all placeholder characters with the ? sign in the SQL queries. This option is enabled by default.|true|boolean|
|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean|
|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean|

## Endpoint Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|query|Sets the SQL query to perform. You can externalize the query by using file: or classpath: as prefix and specify the location of the file.||string|
|allowNamedParameters|Whether to allow using named parameters in the queries.|true|boolean|
|dataSource|Sets the DataSource to use to communicate with the database at endpoint level.||object|
|outputClass|Specify the full package and class name to use as conversion when outputType=SelectOne.||string|
|outputHeader|Store the query result in a header instead of the message body. By default, outputHeader == null and the query result is stored in the message body; any existing content in the message body is discarded. If outputHeader is set, the value is used as the name of the header to store the query result, and the original message body is preserved.||string|
|outputType|Make the output of the consumer or producer SelectList, as a List of Map, or SelectOne, as a single Java object, in the following way: a) If the query has only a single column, then that JDBC column object is returned (such as SELECT COUNT(*) FROM PROJECT, which will return a Long object). b) If the query has more than one column, then it will return a Map of that result. c) If outputClass is set, then it will convert the query result into a Java bean object by calling all the setters that match the column names. It assumes your class has a default constructor to create instances with. d) If the query resulted in more than one row, it throws a non-unique result exception. StreamList streams the result of the query using an Iterator. This can be used with the Splitter EIP in streaming mode to process the ResultSet in streaming fashion.|SelectList|object|
|separator|The separator to use when parameter values are taken from the message body (if the body is a String type), to be inserted at # placeholders. Notice if you use named parameters, then a Map type is used instead. The default value is comma.|,|string|
|breakBatchOnConsumeFail|Sets whether to break batch if onConsume failed.|false|boolean|
|expectedUpdateCount|Sets an expected update count to validate when using onConsume.|-1|integer|
|maxMessagesPerPoll|Sets the maximum number of messages to poll||integer|
|onConsume|After processing each row, this query can be executed if the Exchange was processed successfully, for example to mark the row as processed. The query can have parameters.||string|
|onConsumeBatchComplete|After processing the entire batch, this query can be executed to bulk update rows etc. The query cannot have parameters.||string|
|onConsumeFailed|After processing each row, this query can be executed if the Exchange failed, for example to mark the row as failed. The query can have parameters.||string|
|routeEmptyResultSet|Sets whether an empty resultset should be allowed to be sent to the next hop. Defaults to false, so an empty resultset is filtered out.|false|boolean|
|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean|
|transacted|Enables or disables transactions. If enabled and processing an exchange fails, the consumer breaks out of processing any further exchanges to cause an eager rollback.|false|boolean|
|useIterator|Sets how the resultset should be delivered to the route: as a list or as individual objects. Defaults to true.|true|boolean|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurring during the poll operation, before an Exchange has been created and routed in Camel.||object|
|processingStrategy|Allows plugging in a custom org.apache.camel.component.sql.SqlProcessingStrategy to execute queries when the consumer has processed the rows/batch.||object|
|batch|Enables or disables batch mode|false|boolean|
|noop|If set, will ignore the results of the SQL query and use the existing IN message as the OUT message for the continuation of processing|false|boolean|
|useMessageBodyForSql|Whether to use the message body as the SQL and then headers for parameters. If this option is enabled then the SQL in the uri is not used. Note that query parameters in the message body are represented by a question mark instead of a # symbol.|false|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|alwaysPopulateStatement|If enabled then the populateStatement method from org.apache.camel.component.sql.SqlPrepareStatementStrategy is always invoked, also if there are no expected parameters to be prepared. When this is false then populateStatement is only invoked if there are 1 or more expected parameters to be set; for example, this avoids reading the message body/headers for SQL queries with no parameters.|false|boolean|
|parametersCount|If set greater than zero, then Camel will use this count value of parameters to replace instead of querying via the JDBC metadata API. This is useful if the JDBC vendor does not return the correct parameters count; the user may then override it.||integer|
|placeholder|Specifies a character that will be replaced by ? in the SQL query. Notice that it is a simple String.replaceAll() operation and no SQL parsing is involved (quoted strings will also change).|#|string|
|prepareStatementStrategy|Allows plugging in a custom org.apache.camel.component.sql.SqlPrepareStatementStrategy to control preparation of the query and prepared statement.||object|
|rowMapperFactory|Factory for creating RowMapper||object|
|templateOptions|Configures the Spring JdbcTemplate with the key/values from the Map||object|
|usePlaceholder|Sets whether to use placeholder and replace all placeholder characters with the ? sign in the SQL queries.|true|boolean|
|backoffErrorThreshold|The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in.||integer|
|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier should kick in.||integer|
|backoffMultiplier|To let the scheduled polling consumer back off if there have been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt happens again.
When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| diff --git a/camel-ssh.md b/camel-ssh.md new file mode 100644 index 0000000000000000000000000000000000000000..c8adcb6c34ee53adcd6e1f0c566b64281495eb24 --- /dev/null +++ b/camel-ssh.md @@ -0,0 +1,185 @@ +# Ssh + +**Since Camel 2.10** + +**Both producer and consumer are supported** + +The SSH component enables access to SSH servers such that you can send +an SSH command and process the response. 
Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-ssh</artifactId>
      <version>x.x.x</version>
    </dependency>

# URI format

    ssh:[username[:password]@]host[:port][?options]

# Usage as a Producer endpoint

When the SSH Component is used as a Producer (`.to("ssh://...")`), it
will send the message body as the command to execute on the remote SSH
server.

Here is an example of this within the XML DSL. Note that the command has
an XML-encoded newline (`&#10;`). The endpoint URIs shown are
illustrative:

    <route>
      <from uri="direct:exec"/>
      <setBody>
        <constant>features:list&#10;</constant>
      </setBody>
      <to uri="ssh://user:password@localhost:8101"/>
    </route>

# Authentication

The SSH Component can authenticate against the remote SSH server using
one of two mechanisms: Public Key certificate or username/password.
Configuring how the SSH Component does authentication is based on how
and which options are set.

1. First, it will look to see if the `certResource` option has been
   set, and if so, use it to locate the referenced Public Key
   certificate and use that for authentication.

2. If `certResource` is not set, it will look to see if a
   `keyPairProvider` has been set, and if so, it will use that for
   certificate-based authentication.

3. If neither `certResource` nor `keyPairProvider` are set, it will use
   the `username` and `password` options for authentication. If the
   `username` and `password` are provided both in the endpoint
   configuration and in headers set with `SshConstants.USERNAME_HEADER`
   (`CamelSshUsername`) and `SshConstants.PASSWORD_HEADER`
   (`CamelSshPassword`), the endpoint configuration is overridden and
   the credentials set in the headers are used.

The following route fragment shows an SSH polling consumer using a
certificate from the classpath.

In the XML DSL:

    <route>
      <from uri="ssh://scott@localhost:8101?certResource=classpath:test_rsa&amp;useFixedDelay=true&amp;delay=5000&amp;pollCommand=features:list%0A"/>
      <log message="${body}"/>
    </route>

In the Java DSL:

    from("ssh://scott@localhost:8101?certResource=classpath:test_rsa&useFixedDelay=true&delay=5000&pollCommand=features:list%0A")
        .log("${body}");

An example of using Public Key authentication is provided in
`examples/camel-example-ssh-security`.
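The selection order described above can be sketched as plain Java (illustrative only; this is not the component's code):

```java
public class SshAuthSelection {
    enum Mechanism { CERT_RESOURCE, KEY_PAIR_PROVIDER, USERNAME_PASSWORD }

    /**
     * Sketch of the precedence: certResource first, then keyPairProvider,
     * then username/password as the fallback.
     */
    static Mechanism select(String certResource, Object keyPairProvider) {
        if (certResource != null) {
            return Mechanism.CERT_RESOURCE;
        }
        if (keyPairProvider != null) {
            return Mechanism.KEY_PAIR_PROVIDER;
        }
        return Mechanism.USERNAME_PASSWORD;
    }
}
```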
# Certificate Dependencies

You will need to add some additional runtime dependencies if you use
certificate-based authentication. You may need to use later versions
depending on what version of Camel you are using.

The component uses the `sshd-core` library, which is based on either the
`bouncycastle` or `eddsa` security provider. `camel-ssh` explicitly
picks `bouncycastle` as the security provider.

    <dependency>
      <groupId>org.apache.sshd</groupId>
      <artifactId>sshd-core</artifactId>
      <version>2.8.0</version>
    </dependency>
    <dependency>
      <groupId>org.bouncycastle</groupId>
      <artifactId>bcpg-jdk18on</artifactId>
      <version>1.71</version>
    </dependency>
    <dependency>
      <groupId>org.bouncycastle</groupId>
      <artifactId>bcpkix-jdk18on</artifactId>
      <version>1.71</version>
    </dependency>

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|failOnUnknownHost|Specifies whether a connection to an unknown host should fail or not. This value is only checked when the property knownHosts is set.|false|boolean|
|knownHostsResource|Sets the resource path for a known\_hosts file||string|
|timeout|Sets the timeout in milliseconds to wait in establishing the remote SSH server connection. Defaults to 30000 milliseconds.|30000|integer|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|pollCommand|Sets the command string to send to the remote SSH server during every poll cycle. Only works with camel-ssh component being used as a consumer, i.e.
from(ssh://...) You may need to end your command with a newline, and that must be URL encoded %0A||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|channelType|Sets the channel type to pass to the Channel as part of command execution. Defaults to exec.|exec|string| +|clientBuilder|Instance of ClientBuilder used by the producer or consumer to create a new SshClient||object| +|compressions|Whether to use compression, and if so which.||string| +|configuration|Component configuration||object| +|shellPrompt|Sets the shellPrompt to be dropped when response is read after command execution||string| +|sleepForShellPrompt|Sets the sleep period in milliseconds to wait reading response from shell prompt. Defaults to 100 milliseconds.|100|integer| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. 
Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|certResource|Sets the resource path of the certificate to use for Authentication. Will use ResourceHelperKeyPairProvider to resolve file based certificate, and depends on keyType setting.||string| +|certResourcePassword|Sets the password to use in loading certResource, if certResource is an encrypted key.||string| +|ciphers|Comma-separated list of allowed/supported ciphers in their order of preference.||string| +|kex|Comma-separated list of allowed/supported key exchange algorithms in their order of preference.||string| +|keyPairProvider|Sets the KeyPairProvider reference to use when connecting using Certificates to the remote SSH Server.||object| +|keyType|Sets the key type to pass to the KeyPairProvider as part of authentication. KeyPairProvider.loadKey(...) will be passed this value. From Camel 3.0.0 / 2.25.0, by default Camel will select the first available KeyPair that is loaded. Prior to this, a KeyType of 'ssh-rsa' was enforced by default.||string| +|macs|Comma-separated list of allowed/supported message authentication code algorithms in their order of preference. The MAC algorithm is used for data integrity protection.||string| +|password|Sets the password to use in connecting to remote SSH server. Requires keyPairProvider to be set to null.||string| +|signatures|Comma-separated list of allowed/supported signature algorithms in their order of preference.||string| +|username|Sets the username to use in logging into the remote SSH server.||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|host|Sets the hostname of the remote SSH server.||string| +|port|Sets the port number for the remote SSH server.|22|integer| +|failOnUnknownHost|Specifies whether a connection to an unknown host should fail or not. 
This value is only checked when the property knownHosts is set.|false|boolean| +|knownHostsResource|Sets the resource path for a known\_hosts file||string| +|timeout|Sets the timeout in milliseconds to wait in establishing the remote SSH server connection. Defaults to 30000 milliseconds.|30000|integer| +|pollCommand|Sets the command string to send to the remote SSH server during every poll cycle. Only works with camel-ssh component being used as a consumer, i.e. from(ssh://...) You may need to end your command with a newline, and that must be URL encoded %0A||string| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|channelType|Sets the channel type to pass to the Channel as part of command execution. Defaults to exec.|exec|string| +|clientBuilder|Instance of ClientBuilder used by the producer or consumer to create a new SshClient||object| +|compressions|Whether to use compression, and if so which.||string| +|shellPrompt|Sets the shellPrompt to be dropped when response is read after command execution||string| +|sleepForShellPrompt|Sets the sleep period in milliseconds to wait reading response from shell prompt. 
Defaults to 100 milliseconds.|100|integer|
|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultiplier should kick-in.||integer|
|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultiplier should kick-in.||integer|
|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer|
|delay|Milliseconds before the next poll.|500|integer|
|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean|
|initialDelay|Milliseconds before the first poll starts.|1000|integer|
|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer|
|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object|
|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object|
|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component.
Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean| +|certResource|Sets the resource path of the certificate to use for Authentication. Will use ResourceHelperKeyPairProvider to resolve file based certificate, and depends on keyType setting.||string| +|certResourcePassword|Sets the password to use in loading certResource, if certResource is an encrypted key.||string| +|ciphers|Comma-separated list of allowed/supported ciphers in their order of preference.||string| +|kex|Comma-separated list of allowed/supported key exchange algorithms in their order of preference.||string| +|keyPairProvider|Sets the KeyPairProvider reference to use when connecting using Certificates to the remote SSH Server.||object| +|keyType|Sets the key type to pass to the KeyPairProvider as part of authentication. KeyPairProvider.loadKey(...) will be passed this value. From Camel 3.0.0 / 2.25.0, by default Camel will select the first available KeyPair that is loaded. Prior to this, a KeyType of 'ssh-rsa' was enforced by default.||string| +|macs|Comma-separated list of allowed/supported message authentication code algorithms in their order of preference. The MAC algorithm is used for data integrity protection.||string| +|password|Sets the password to use in connecting to remote SSH server. 
Requires keyPairProvider to be set to null.||string|
|signatures|Comma-separated list of allowed/supported signature algorithms in their order of preference.||string|
|username|Sets the username to use in logging into the remote SSH server.||string|
diff --git a/camel-stax.md b/camel-stax.md
new file mode 100644
index 0000000000000000000000000000000000000000..dc9ea59dd0b57216650140db833ac75b0db3cdd9
--- /dev/null
+++ b/camel-stax.md
@@ -0,0 +1,181 @@
# Stax

**Since Camel 2.9**

**Only producer is supported**

The StAX component allows messages to be processed through a SAX
[ContentHandler](http://download.oracle.com/javase/6/docs/api/org/xml/sax/ContentHandler.html).
Another feature of this component is the ability to iterate over JAXB
records using StAX, for example, using the [Split
EIP](#eips:split-eip.adoc).

Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-stax</artifactId>
        <version>x.x.x</version>
    </dependency>

# URI format

    stax:content-handler-class

For example:

    stax:org.superbiz.FooContentHandler

You can look up an `org.xml.sax.ContentHandler` bean from the Registry
using the # syntax as shown:

    stax:#myHandler

# Usage of a content handler as StAX parser

The message body after the handling is the handler itself.

Here is an example:

    from("file:target/in")
        .to("stax:org.superbiz.handler.CountingHandler")
        // CountingHandler implements org.xml.sax.ContentHandler or extends org.xml.sax.helpers.DefaultHandler
        .process(new Processor() {
            @Override
            public void process(Exchange exchange) throws Exception {
                CountingHandler handler = exchange.getIn().getBody(CountingHandler.class);
                // do some great work with the handler
            }
        });

# Iterate over a collection using JAXB and StAX

First, we suppose you have JAXB objects.

For instance, a list of records in a wrapper object:

    import java.util.ArrayList;
    import java.util.List;
    import javax.xml.bind.annotation.XmlAccessType;
    import javax.xml.bind.annotation.XmlAccessorType;
    import javax.xml.bind.annotation.XmlElement;
    import javax.xml.bind.annotation.XmlRootElement;

    @XmlAccessorType(XmlAccessType.FIELD)
    @XmlRootElement(name = "records")
    public class Records {
        @XmlElement(required = true)
        protected List<Record> record;

        public List<Record> getRecord() {
            if (record == null) {
                record = new ArrayList<>();
            }
            return record;
        }
    }

and

    import javax.xml.bind.annotation.XmlAccessType;
    import javax.xml.bind.annotation.XmlAccessorType;
    import javax.xml.bind.annotation.XmlAttribute;
    import javax.xml.bind.annotation.XmlType;

    @XmlAccessorType(XmlAccessType.FIELD)
    @XmlType(name = "record", propOrder = { "key", "value" })
    public class Record {
        @XmlAttribute(required = true)
        protected String key;

        @XmlAttribute(required = true)
        protected String value;

        public String getKey() {
            return key;
        }

        public void setKey(String key) {
            this.key = key;
        }

        public String getValue() {
            return value;
        }

        public void setValue(String value) {
            this.value = value;
        }
    }

Then you get an XML file to process:

    <records>
      <record value="v0" key="0"/>
      <record value="v1" key="1"/>
      <record value="v2" key="2"/>
      <record value="v3" key="3"/>
      <record value="v4" key="4"/>
      <record value="v5" key="5"/>
    </records>

The StAX component provides a `StAXBuilder` which can be used when
iterating XML elements with the Camel Splitter:

    from("file:target/in")
        .split(stax(Record.class)).streaming()
        .to("mock:records");

Where `stax` is a static method on
`org.apache.camel.component.stax.StAXBuilder` which you can statically
import in the Java code. The StAX builder is by default namespace aware
on the XMLReader it uses.
You can turn this off by setting the boolean
parameter to false, as shown below:

    from("file:target/in")
        .split(stax(Record.class, false)).streaming()
        .to("mock:records");

## The previous example with XML DSL

The example above could be implemented as follows in Spring XML:

    <bean id="staxRecord"
          class="org.apache.camel.component.stax.StAXJAXBIteratorExpression">
        <constructor-arg index="0" value="org.superbiz.Record"/>
    </bean>

    <camelContext xmlns="http://camel.apache.org/schema/spring">
        <route>
            <from uri="file:target/in"/>
            <split streaming="true">
                <ref>staxRecord</ref>
                <to uri="mock:records"/>
            </split>
        </route>
    </camelContext>

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|

## Endpoint Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|contentHandlerClass|The FQN class name for the ContentHandler implementation to use.||string|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started.
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
diff --git a/camel-stitch.md b/camel-stitch.md
new file mode 100644
index 0000000000000000000000000000000000000000..b2777f8ac63cdc856df31789cf9090b6c60a52c2
--- /dev/null
+++ b/camel-stitch.md
@@ -0,0 +1,255 @@
# Stitch

**Since Camel 3.8**

**Only producer is supported**

Stitch is a cloud ETL service: a developer-focused platform for rapidly
moving and replicating data from more than 90 applications and
databases. It integrates various data sources into a central data
warehouse. Stitch has integrations for many enterprise software data
sources, and can receive data via WebHooks and an API (the Stitch
Import API), which the Camel Stitch component uses to produce data to
Stitch ETL.

For more info, feel free to visit their website:
[https://www.stitchdata.com/](https://www.stitchdata.com/)

Prerequisites

You must have a valid Stitch account, and you will need to enable the
Stitch Import API and generate a token for the integration. For more
info, please see the [quick start
guide](https://www.stitchdata.com/docs/developers/import-api/guides/quick-start).

Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-stitch</artifactId>
        <version>x.x.x</version>
    </dependency>

# URI format

    stitch:[tableName]//[?options]

# Async Producer

This component implements an async producer.

This allows Camel routes to produce events asynchronously without
blocking any threads.
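The non-blocking hand-off can be pictured with plain `CompletableFuture`s: the calling thread submits the batch and registers a callback instead of waiting for the HTTP round trip to finish. This is a conceptual sketch only; `sendBatch` below is a hypothetical stand-in, not the component's or Stitch's API.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class AsyncProducerSketch {

    // Hypothetical stand-in for posting one batch to the Import API.
    static CompletableFuture<String> sendBatch(String payload) {
        return CompletableFuture.supplyAsync(() -> {
            // Imagine the HTTP round trip happening here, off the caller's thread.
            return "accepted:" + payload.length() + " bytes";
        });
    }

    public static void main(String[] args) throws Exception {
        // The caller registers a callback and is free immediately;
        // a Camel route would complete the Exchange inside that callback.
        CompletableFuture<String> reply = sendBatch("{\"table_name\":\"table_1\"}")
                .thenApply(status -> "batch " + status);

        System.out.println(reply.get(5, TimeUnit.SECONDS));
    }
}
```

The same pattern is what lets a single Camel thread keep routing other exchanges while batches are in flight.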
+ +# Usage + +For example, to produce data to Stitch from a custom processor: + + from("direct:sendStitch") + .process(exchange -> { + final StitchMessage stitchMessage = StitchMessage.builder() + .withData("field_1", "stitchMessage2-1") + .build(); + + final StitchRequestBody stitchRequestBody = StitchRequestBody.builder() + .addMessage(stitchMessage) + .withSchema(StitchSchema.builder().addKeyword("field_1", "string").build()) + .withTableName("table_1") + .withKeyNames(Collections.singleton("field_1")) + .build(); + + exchange.getMessage().setBody(stitchRequestBody); + }) + .to("stitch:table_1?token=RAW({{token}})"); + +## Message body type + +Currently, the component supports the following types for the body +message on the producer side when producing a message to Stitch +component: + +- `org.apache.camel.component.stitch.client.models.StitchRequestBody`: + This represents this Stitch [JSON + Message](https://www.stitchdata.com/docs/developers/import-api/api#batch-data—arguments). + However, `StitchRequestBody` includes a type safe builder that helps + on building the request body. Please note that, `tableName`, + `keyNames` and `schema` options are no longer required if you send + the data with `StitchRequestBody`, if you still set these options, + they override whatever being set in message body + `StitchRequestBody`. + +- `org.apache.camel.component.stitch.client.models.StitchMessage`: + This represents [this Stitch message + structure](https://www.stitchdata.com/docs/developers/import-api/api#message-object). + If you choose to send your message as `StitchMessage`, **you will + need** to add `tableName`, `keyNames` and `schema` options to either + the Exchange headers or through the endpoint options. 

- `Map`: You can also send the data as `Map`, the data structure must
  follow this [JSON
  Message](https://www.stitchdata.com/docs/developers/import-api/api#batch-data--arguments)
  structure, which is similar to `StitchRequestBody` but with the
  drawback of losing the type-safe builder that comes with
  `StitchRequestBody`.

- `Iterable`: You can send multiple Stitch messages that are
  aggregated by Camel or aggregated through a custom processor. These
  aggregated messages can be of type `StitchMessage`,
  `StitchRequestBody` or `Map`, but the Map here is similar to
  `StitchMessage`.

## Examples

Here is a list of examples of data that can be produced to Stitch:

### Input body type `org.apache.camel.component.stitch.client.models.StitchRequestBody`:

    from("direct:sendStitch")
        .process(exchange -> {
            final StitchMessage stitchMessage = StitchMessage.builder()
                .withData("field_1", "stitchMessage2-1")
                .build();

            final StitchRequestBody stitchRequestBody = StitchRequestBody.builder()
                .addMessage(stitchMessage)
                .withSchema(StitchSchema.builder().addKeyword("field_1", "string").build())
                .withTableName("table_1")
                .withKeyNames(Collections.singleton("field_1"))
                .build();

            exchange.getMessage().setBody(stitchRequestBody);
        })
        .to("stitch:table_1?token=RAW({{token}})");

### Input body type `org.apache.camel.component.stitch.client.models.StitchMessage`:

    from("direct:sendStitch")
        .process(exchange -> {
            exchange.getMessage().setHeader(StitchConstants.SCHEMA, StitchSchema.builder().addKeyword("field_1", "string").build());
            exchange.getMessage().setHeader(StitchConstants.KEY_NAMES, "field_1");
            exchange.getMessage().setHeader(StitchConstants.TABLE_NAME, "table_1");

            final StitchMessage stitchMessage = StitchMessage.builder()
                .withData("field_1", "stitchMessage2-1")
                .build();

            exchange.getMessage().setBody(stitchMessage);
        })
        .to("stitch:table_1?token=RAW({{token}})");

### Input body type `Map`:

    from("direct:sendStitch")
        .process(exchange -> {
            final Map<String, Object> properties = new LinkedHashMap<>();
            properties.put("id", Collections.singletonMap("type", "integer"));
            properties.put("name", Collections.singletonMap("type", "string"));
            properties.put("age", Collections.singletonMap("type", "integer"));
            properties.put("has_magic", Collections.singletonMap("type", "boolean"));

            final Map<String, Object> data = new LinkedHashMap<>();
            data.put(StitchRequestBody.TABLE_NAME, "my_table");
            data.put(StitchRequestBody.SCHEMA, Collections.singletonMap("properties", properties));
            data.put(StitchRequestBody.MESSAGES,
                    Collections.singletonList(Collections.singletonMap("data", Collections.singletonMap("id", 2))));
            data.put(StitchRequestBody.KEY_NAMES, "test_key");

            exchange.getMessage().setBody(data);
        })
        .to("stitch:table_1?token=RAW({{token}})");

### Input body type `Iterable`:

    from("direct:sendStitch")
        .process(exchange -> {
            exchange.getMessage().setHeader(StitchConstants.SCHEMA, StitchSchema.builder().addKeyword("field_1", "string").build());
            exchange.getMessage().setHeader(StitchConstants.KEY_NAMES, "field_1");
            exchange.getMessage().setHeader(StitchConstants.TABLE_NAME, "table_1");

            final StitchMessage stitchMessage1 = StitchMessage.builder()
                .withData("field_1", "stitchMessage1")
                .build();

            final StitchMessage stitchMessage2 = StitchMessage.builder()
                .withData("field_1", "stitchMessage2-1")
                .build();

            final StitchRequestBody stitchMessage2RequestBody = StitchRequestBody.builder()
                .addMessage(stitchMessage2)
                .withSchema(StitchSchema.builder().addKeyword("field_1", "integer").build())
                .withTableName("table_1")
                .withKeyNames(Collections.singleton("field_1"))
                .build();

            final Map<String, Object> stitchMessage3 = new LinkedHashMap<>();
            stitchMessage3.put(StitchMessage.DATA, Collections.singletonMap("field_1", "stitchMessage3"));

            final StitchMessage stitchMessage4 = StitchMessage.builder()
                .withData("field_1", "stitchMessage4")
                .build();

            final Exchange stitchMessage4Exchange = new DefaultExchange(context);
            stitchMessage4Exchange.getMessage().setBody(stitchMessage4);

            final StitchMessage stitchMessage5 = StitchMessage.builder()
                .withData("field_1", "stitchMessage5")
                .build();

            final Message stitchMessage5Message = new DefaultExchange(context).getMessage();
            stitchMessage5Message.setBody(stitchMessage5);

            final List<Object> inputMessages = new LinkedList<>();
            inputMessages.add(stitchMessage1);
            inputMessages.add(stitchMessage2RequestBody);
            inputMessages.add(stitchMessage3);
            inputMessages.add(stitchMessage4Exchange);
            inputMessages.add(stitchMessage5Message);

            exchange.getMessage().setBody(inputMessages);
        })
        .to("stitch:table_1?token=RAW({{token}})");

## Development Notes (Important)

When developing on this component, you will need to obtain your Stitch
token to run the integration tests. In addition to the mocked unit
tests, you **will need to run the integration tests with every change
you make**. To run the integration tests, in this component directory,
run the following maven command:

    mvn verify -Dtoken=stitchToken

Whereby `token` is your Stitch token generated for Stitch Import API
integration.

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|configuration|The component configurations||object|
|keyNames|A collection of comma separated strings representing the Primary Key fields in the source table. Stitch uses these Primary Keys to de-dupe data during loading. If not provided, the table will be loaded in an append-only manner.||string|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started.
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|region|Stitch account region, e.g: europe|EUROPE|object| +|stitchSchema|A schema that describes the record(s)||object| +|connectionProvider|ConnectionProvider contain configuration for the HttpClient like Maximum connection limit .. etc, you can inject this ConnectionProvider and the StitchClient will initialize HttpClient with this ConnectionProvider||object| +|httpClient|Reactor Netty HttpClient, you can injected it if you want to have custom HttpClient||object| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|stitchClient|Set a custom StitchClient that implements org.apache.camel.component.stitch.client.StitchClient interface||object| +|token|Stitch access token for the Stitch Import API||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|tableName|The name of the destination table the data is being pushed to. Table names must be unique in each destination schema, or loading issues will occur. Note: The number of characters in the table name should be within the destination's allowed limits or data will rejected.||string| +|keyNames|A collection of comma separated strings representing the Primary Key fields in the source table. 
Stitch use these Primary Keys to de-dupe data during loading If not provided, the table will be loaded in an append-only manner.||string| +|region|Stitch account region, e.g: europe|EUROPE|object| +|stitchSchema|A schema that describes the record(s)||object| +|connectionProvider|ConnectionProvider contain configuration for the HttpClient like Maximum connection limit .. etc, you can inject this ConnectionProvider and the StitchClient will initialize HttpClient with this ConnectionProvider||object| +|httpClient|Reactor Netty HttpClient, you can injected it if you want to have custom HttpClient||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|stitchClient|Set a custom StitchClient that implements org.apache.camel.component.stitch.client.StitchClient interface||object|
|token|Stitch access token for the Stitch Import API||string|
diff --git a/camel-stomp.md b/camel-stomp.md
new file mode 100644
index 0000000000000000000000000000000000000000..c2eceb9e035f9df7c50ff3b2701beaab5bb1cb38
--- /dev/null
+++ b/camel-stomp.md
@@ -0,0 +1,107 @@
# Stomp

**Since Camel 2.12**

**Both producer and consumer are supported**

The Stomp component is used for communicating with
[Stomp](http://stomp.github.io/) compliant message brokers, like [Apache
ActiveMQ](http://activemq.apache.org) or [ActiveMQ
Apollo](http://activemq.apache.org/apollo/).

Since the STOMP specification is not actively maintained, please note
that the [STOMP JMS
client](https://github.com/fusesource/stompjms/tree/master/stompjms-client)
is likewise not actively maintained. However, we hope the community
will step up to help maintain the STOMP JMS project in the near
future.

Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-stomp</artifactId>
        <version>x.x.x</version>
    </dependency>

# URI format

    stomp:queue:destination[?options]

Where **destination** is the name of the queue.

# Samples

Sending messages:

    from("direct:foo").to("stomp:queue:test");

Consuming messages:

    from("stomp:queue:test").transform(body().convertToString()).to("mock:result");

# Endpoints

Camel supports the Message Endpoint pattern using the
[Endpoint](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Endpoint.html)
interface. Endpoints are usually created by a Component, and Endpoints
are usually referred to in the DSL via their URIs.
+ +From an Endpoint you can use the following methods + +- [createProducer()](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Endpoint.html#createProducer--) + will create a + [Producer](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Producer.html) + for sending message exchanges to the endpoint + +- [createConsumer()](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Endpoint.html#createConsumer-org.apache.camel.Processor-) + implements the Event Driven Consumer pattern for consuming message + exchanges from the endpoint via a + [Processor](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Processor.html) + when creating a + [Consumer](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Consumer.html) + +- [createPollingConsumer()](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Endpoint.html#createPollingConsumer--) + implements the Polling Consumer pattern for consuming message + exchanges from the endpoint via a + [PollingConsumer](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/PollingConsumer.html) + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|brokerURL|The URI of the Stomp broker to connect to|tcp://localhost:61613|string| +|customHeaders|To set custom headers||object| +|host|The virtual host name||string| +|version|The stomp version (1.1, or 1.2)||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. 
Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|configuration|Component configuration.||object| +|headerFilterStrategy|To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message.||object| +|login|The username||string| +|passcode|The password||string| +|sslContextParameters|To configure security using SSLContextParameters||object| +|useGlobalSslContextParameters|Enable usage of global SSL context parameters.|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|destination|Name of the queue||string| +|brokerURL|The URI of the Stomp broker to connect to|tcp://localhost:61613|string| +|customHeaders|To set custom headers||object| +|host|The virtual host name||string| +|version|The stomp version (1.1, or 1.2)||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|headerFilterStrategy|To use a custom HeaderFilterStrategy to filter header to and from Camel message.||object| +|login|The username||string| +|passcode|The password||string| +|sslContextParameters|To configure security using SSLContextParameters||object| diff --git a/camel-stream.md b/camel-stream.md new file mode 100644 index 0000000000000000000000000000000000000000..ac05d6c76e28b18f3549eb8fc99ddc6eac165c1f --- /dev/null +++ b/camel-stream.md @@ -0,0 +1,124 @@ +# Stream + +**Since Camel 1.3** + +**Both producer and consumer are supported** + +The Stream component provides access to the `System.in`, `System.out` +and `System.err` streams as well as allowing streaming of file. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-stream + x.x.x + + + +# URI format + + stream:in[?options] + stream:out[?options] + stream:err[?options] + stream:header[?options] + stream:file?fileName=/foo/bar.txt + stream:http?httpUrl=http:myserver:8080/data + +If the `stream:header` URI is specified, the `stream` header is used to +find the stream to write to. 
This option is available only for stream
producers (that is, it cannot appear in `from()`).

# Message content

The Stream component supports either `String` or `byte[]` for writing to
streams. Just add either `String` or `byte[]` content to the
`message.in.body`. Messages sent to the **stream:** producer in binary
mode are not followed by the newline character (as opposed to `String`
messages). A message with a `null` body is not appended to the output
stream.
The special `stream:header` URI is used for custom output streams. Just
add a `java.io.OutputStream` object to `message.in.header` under the key
`header`.
See the samples for an example.

# Samples

In the following sample we route messages from the `direct:in` endpoint
to the `System.out` stream:

    // Route messages to the standard output.
    from("direct:in").to("stream:out");

    // Send String payload to the standard output.
    // Message will be followed by the newline.
    template.sendBody("direct:in", "Hello Text World");

    // Send byte[] payload to the standard output.
    // No newline will be added after the message.
    template.sendBody("direct:in", "Hello Bytes World".getBytes());

The following sample demonstrates how the header type can be used to
determine which stream to use. In the sample we use our own output
stream, `MyOutputStream`.

The following sample demonstrates how to continuously read a file stream
(analogous to the UNIX `tail` command):

    from("stream:file?fileName=/server/logs/server.log&scanStream=true&scanStreamDelay=1000")
        .to("bean:logService?method=parseLogLine");

If the file is rolled over or rewritten and you want it to be re-loaded,
you should also turn on the `fileWatcher` and `retry` options.
+ + from("stream:file?fileName=/server/logs/server.log&scanStream=true&scanStreamDelay=1000&retry=true&fileWatcher=true") + .to("bean:logService?method=parseLogLine"); + +# Reading HTTP server side streaming + +The camel-stream component has basic support for connecting to a remote +HTTP server and read streaming data (chunk of data separated by +new-line). + + from("stream:http?scanStream=true&httpUrl=http://localhost:8500") + .to("log:input"); + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|kind|Kind of stream to use such as System.in, System.out, a file, or a http url.||string| +|encoding|You can configure the encoding (is a charset name) to use text-based streams (for example, message body is a String object). If not provided, Camel uses the JVM default Charset.||string| +|fileName|When using the stream:file URI format, this option specifies the filename to stream to/from.||string| +|fileWatcher|To use JVM file watcher to listen for file change events to support re-loading files that may be overwritten, somewhat like tail --retry|false|boolean| +|groupLines|To group X number of lines in the consumer. For example to group 10 lines and therefore only spit out an Exchange with 10 lines, instead of 1 Exchange per line.||integer| +|groupStrategy|Allows to use a custom GroupStrategy to control how to group lines.||object| +|httpHeaders|When using stream:http format, this option specifies optional http headers, such as Accept: application/json. Multiple headers can be separated by comma. The format of headers can be either HEADER=VALUE or HEADER:VALUE. In accordance with the HTTP/1.1 specification, leading and/or trailing whitespace is ignored||string| +|httpUrl|When using stream:http format, this option specifies the http url to stream from.||string| +|initialPromptDelay|Initial delay in milliseconds before showing the message prompt. This delay occurs only once. 
Can be used during system startup to avoid message prompts being written while other logging is done to the system out.|2000|integer| +|promptDelay|Optional delay in milliseconds before showing the message prompt.||integer| +|promptMessage|Message prompt to use when reading from stream:in; for example, you could set this to Enter a command:||string| +|readLine|Whether to read the input stream in line mode (terminate by line breaks). Setting this to false, will instead read the entire stream until EOL.|true|boolean| +|retry|Will retry opening the stream if it's overwritten, somewhat like tail --retry If reading from files then you should also enable the fileWatcher option, to make it work reliable.|false|boolean| +|scanStream|To be used for continuously reading a stream such as the unix tail command.|false|boolean| +|scanStreamDelay|Delay in milliseconds between read attempts when using scanStream.||integer| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|appendNewLine|Whether to append a new line character at end of output.|true|boolean| +|autoCloseCount|Number of messages to process before closing stream on Producer side. Never close stream by default (only when Producer is stopped). If more messages are sent, the stream is reopened for another autoCloseCount batch.||integer| +|closeOnDone|This option is used in combination with Splitter and streaming to the same file. The idea is to keep the stream open and only close when the Splitter is done, to improve performance. Mind this requires that you only stream to the same file, and not 2 or more files.|false|boolean| +|delay|Initial delay in milliseconds before producing the stream.||integer| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|readTimeout|Sets the read timeout to a specified timeout, in milliseconds. A non-zero value specifies the timeout when reading from Input stream when a connection is established to a resource. If the timeout expires before there is data available for read, a java.net.SocketTimeoutException is raised. 
A timeout of zero is interpreted as an infinite timeout.||integer| diff --git a/camel-string-template.md b/camel-string-template.md new file mode 100644 index 0000000000000000000000000000000000000000..116c9e6bd6ca13ad7f5fee0051dfba2b3c181aba --- /dev/null +++ b/camel-string-template.md @@ -0,0 +1,161 @@ +# String-template + +**Since Camel 1.2** + +**Only producer is supported** + +The String Template component allows you to process a message using a +[String Template](http://www.stringtemplate.org/). This can be ideal +when using Templating to generate responses for requests. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-stringtemplate + x.x.x + + + +# URI format + + string-template:templateName[?options] + +Where **templateName** is the classpath-local URI of the template to +invoke; or the complete URL of the remote template. + +# Headers + +Camel will store a reference to the resource in the message header with +key, `org.apache.camel.stringtemplate.resource`. The Resource is an +`org.springframework.core.io.Resource` object. + +# String Template Context + +Camel will provide exchange information in the String Template context +(just a `Map`). The `Exchange` is transferred as: + + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|key|value|
|---|---|
|exchange|The Exchange itself.|
|exchange.properties|The Exchange properties.|
|variables|The variables|
|headers|The headers of the In message.|
|camelContext|The Camel Context.|
|request|The In message.|
|body|The In message body.|
|response|The Out message (only for InOut message exchange pattern).|
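As an illustration of how these entries are referenced, a hypothetical
template `com/acme/MyResponse.tm` (names such as `headers.name` are
assumptions for this sketch, using the default `<` and `>` delimiters)
could look like:

    Dear <headers.name>,

    We received your message: <body>

The `headers` and `body` entries come from the context map above, so the
same template works for any exchange routed to the endpoint.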
+ +# Hot reloading + +The string template resource is by default hot-reloadable for both file +and classpath resources (expanded jar). If you set `contentCache=true`, +Camel loads the resource only once and hot-reloading is not possible. +This scenario can be used in production when the resource never changes. + +# Dynamic templates + +Camel provides two headers by which you can define a different resource +location for a template or the template content itself. If any of these +headers is set, then Camel uses this over the endpoint configured +resource. This allows you to provide a dynamic template at runtime. + +# StringTemplate Attributes + +You can define the custom context map by setting the message header +"**CamelStringTemplateVariableMap**" just like the below code. + + Map variableMap = new HashMap(); + Map headersMap = new HashMap(); + headersMap.put("name", "Willem"); + variableMap.put("headers", headersMap); + variableMap.put("body", "Monday"); + variableMap.put("exchange", exchange); + exchange.getIn().setHeader("CamelStringTemplateVariableMap", variableMap); + +# Samples + +For example, you could use a string template as follows in order to +formulate a response to a message: + + from("activemq:My.Queue"). + to("string-template:com/acme/MyResponse.tm"); + +# The Email Sample + +In this sample, we want to use a string template to send an order +confirmation email. The email template is laid out in `StringTemplate` +as: + + Dear , + + Thanks for the order of . + + Regards Camel Riders Bookstore + + +And the java code is as follows: + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|allowContextMapAll|Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. 
Doing so impose a potential security risk as this opens access to the full power of CamelContext API.|false|boolean| +|allowTemplateFromHeader|Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|resourceUri|Path to the resource. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod.||string| +|allowContextMapAll|Sets whether the context map should allow access to all details. 
By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so impose a potential security risk as this opens access to the full power of CamelContext API.|false|boolean| +|allowTemplateFromHeader|Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care.|false|boolean| +|contentCache|Sets whether to use resource content cache or not|false|boolean| +|delimiterStart|The variable start delimiter|\<|string| +|delimiterStop|The variable end delimiter|\>|string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| diff --git a/camel-stub.md b/camel-stub.md new file mode 100644 index 0000000000000000000000000000000000000000..cd2d0a6c067873ebda6a6630e875d43205769f90 --- /dev/null +++ b/camel-stub.md @@ -0,0 +1,75 @@ +# Stub + +**Since Camel 2.10** + +**Both producer and consumer are supported** + +The Stub component provides a simple way to stub out any physical +endpoints while in development or testing, allowing you, for example, to +run a route without needing to actually connect to a specific +[SMTP](#mail-component.adoc) or [Http](#http-component.adoc) endpoint. 
+Add **stub:** in front of any endpoint URI to stub out the endpoint. + +Internally, the Stub component creates [Seda](#seda-component.adoc) +endpoints. The main difference between [Stub](#stub-component.adoc) and +[Seda](#seda-component.adoc) is that [Seda](#seda-component.adoc) will +validate the URI and parameters you give it, so putting seda: in front +of a typical URI with query arguments will usually fail. Stub won’t, +though, as it basically ignores all query parameters to let you quickly +stub out one or more endpoints in your route temporarily. + +# URI format + + stub:someUri + +Where **`someUri`** can be any URI with any query parameters. + +# Examples + +Here are a few samples of stubbing endpoint uris + + stub:smtp://somehost.foo.com?user=whatnot&something=else + stub:http://somehost.bar.com/something + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|shadow|If shadow is enabled then the stub component will register a shadow endpoint with the actual uri that refers to the stub endpoint, meaning you can lookup the endpoint via both stub:kafka:cheese and kafka:cheese.|false|boolean| +|shadowPattern|If shadow is enabled then this pattern can be used to filter which components to match. Multiple patterns can be separated by comma.||string| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|concurrentConsumers|Sets the default number of concurrent threads processing exchanges.|1|integer| +|defaultPollTimeout|The timeout (in milliseconds) used when polling. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown.|1000|integer| +|defaultBlockWhenFull|Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will instead block and wait until the message can be accepted.|false|boolean| +|defaultDiscardWhenFull|Whether a thread that sends messages to a full SEDA queue will be discarded. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will give up sending and continue, meaning that the message was not sent to the SEDA queue.|false|boolean| +|defaultOfferTimeout|Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, where a configured timeout can be added to the block case. Utilizing the .offer(timeout) method of the underlining java queue||integer| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|defaultQueueFactory|Sets the default queue factory.||object| +|queueSize|Sets the default maximum capacity of the SEDA queue (i.e., the number of messages it can hold).|1000|integer| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|name|Name of queue||string| +|size|The maximum capacity of the SEDA queue (i.e., the number of messages it can hold). Will by default use the defaultSize set on the SEDA component.|1000|integer| +|concurrentConsumers|Number of concurrent threads processing exchanges.|1|integer| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|limitConcurrentConsumers|Whether to limit the number of concurrentConsumers to the maximum of 500. By default, an exception will be thrown if an endpoint is configured with a greater number. You can disable that check by turning this option off.|true|boolean| +|multipleConsumers|Specifies whether multiple consumers are allowed. If enabled, you can use SEDA for Publish-Subscribe messaging. That is, you can send a message to the SEDA queue and have each consumer receive a copy of the message. When enabled, this option should be specified on every consumer endpoint.|false|boolean| +|pollTimeout|The timeout (in milliseconds) used when polling. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown.|1000|integer| +|purgeWhenStopping|Whether to purge the task queue when stopping the consumer/route. This allows to stop faster, as any pending messages on the queue is discarded.|false|boolean| +|blockWhenFull|Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. 
By enabling this option, the calling thread will instead block and wait until the message can be accepted.|false|boolean| +|discardIfNoConsumers|Whether the producer should discard the message (do not add the message to the queue), when sending to a queue with no active consumers. Only one of the options discardIfNoConsumers and failIfNoConsumers can be enabled at the same time.|false|boolean| +|discardWhenFull|Whether a thread that sends messages to a full SEDA queue will be discarded. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will give up sending and continue, meaning that the message was not sent to the SEDA queue.|false|boolean| +|failIfNoConsumers|Whether the producer should fail by throwing an exception, when sending to a queue with no active consumers. Only one of the options discardIfNoConsumers and failIfNoConsumers can be enabled at the same time.|false|boolean| +|offerTimeout|Offer timeout (in milliseconds) can be added to the block case when queue is full. You can disable timeout by using 0 or a negative value.||duration| +|timeout|Timeout (in milliseconds) before a SEDA producer will stop waiting for an asynchronous task to complete. You can disable timeout by using 0 or a negative value.|30000|duration| +|waitForTaskToComplete|Option to specify whether the caller should wait for the async task to complete or not before continuing. The following three options are supported: Always, Never or IfReplyExpected. The first two values are self-explanatory. The last value, IfReplyExpected, will only wait if the message is Request Reply based. The default option is IfReplyExpected.|IfReplyExpected|object| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|queue|Define the queue instance which will be used by the endpoint||object| diff --git a/camel-telegram.md b/camel-telegram.md new file mode 100644 index 0000000000000000000000000000000000000000..96386b0d9ecdfa6e17ca8a002862e14861c181c6 --- /dev/null +++ b/camel-telegram.md @@ -0,0 +1,447 @@ +# Telegram + +**Since Camel 2.18** + +**Both producer and consumer are supported** + +The Telegram component provides access to the [Telegram Bot +API](https://core.telegram.org/bots/api). It allows a Camel-based +application to send and receive messages by acting as a Bot, +participating in direct conversations with normal users, private and +public groups or channels. + +A Telegram Bot must be created before using this component, following +the instructions at the [Telegram Bot developers +home](https://core.telegram.org/bots#3-how-do-i-create-a-bot). When a +new Bot is created, the [BotFather](https://telegram.me/botfather) +provides an **authorization token** corresponding to the Bot. The +authorization token is a mandatory parameter for the camel-telegram +endpoint. + +To allow the Bot to receive all messages exchanged within a group or +channel (not just the ones starting with a */* character), ask the +BotFather to **disable the privacy mode**, using the **/setprivacy** +command. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-telegram + x.x.x + + + +# URI format + + telegram:type[?options] + +# Usage + +The Telegram component supports both consumer and producer endpoints. It +can also be used in **reactive chatbot mode** (to consume, then produce +messages). 
+ +# Producer Example + +The following is a basic example of how to send a message to a Telegram +chat through the Telegram Bot API. + +in Java DSL + + from("direct:start").to("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere"); + +or in Spring XML + + + + + + +The code `123456789:insertYourAuthorizationTokenHere` is the +**authorization token** corresponding to the Bot. + +When using the producer endpoint without specifying the **chat id** +option, the target chat will be identified using information contained +in the body or headers of the message. The following message bodies are +allowed for a producer endpoint (messages of type `OutgoingXXXMessage` +belong to the package `org.apache.camel.component.telegram.model`) + + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|Java Type|Description|
|---|---|
|OutgoingTextMessage|To send a text message to a chat|
|OutgoingPhotoMessage|To send a photo (JPG, PNG) to a chat|
|OutgoingAudioMessage|To send an MP3 audio to a chat|
|OutgoingVideoMessage|To send an MP4 video to a chat|
|OutgoingDocumentMessage|To send a file to a chat (any media type)|
|OutgoingStickerMessage|To send a sticker (WEBP) to a chat|
|OutgoingAnswerInlineQuery|To send answers to an inline query|
|EditMessageTextMessage|To edit text and game messages (editMessageText)|
|EditMessageCaptionMessage|To edit captions of messages (editMessageCaption)|
|EditMessageMediaMessage|To edit animation, audio, document, photo, or video messages (editMessageMedia)|
|EditMessageReplyMarkupMessage|To edit only the reply markup of a message (editMessageReplyMarkup)|
|EditMessageDelete|To delete a message, including service messages (deleteMessage)|
|SendLocationMessage|To send a location (sendLocation)|
|EditMessageLiveLocationMessage|To send changes to a live location (editMessageLiveLocation)|
|StopMessageLiveLocationMessage|To stop updating a live location message sent by the bot, or via the bot (for inline bots), before live_period expires (stopMessageLiveLocation)|
|SendVenueMessage|To send information about a venue (sendVenue)|
|byte[]|To send any supported media type. It requires the CamelTelegramMediaType header to be set to the appropriate media type|
|String|To send a text message to a chat. It gets converted automatically into an OutgoingTextMessage|
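As noted in the table above, a raw `byte[]` body needs the `CamelTelegramMediaType` header so the component knows which media type it is sending. The following is an illustrative sketch only (the route endpoints and the `PHOTO_PNG` media-type value are assumptions — check the `org.apache.camel.component.telegram` package of your Camel version for the exact media type constants):

```java
// Hypothetical sketch: send raw PNG bytes as a photo.
// The "PHOTO_PNG" value is an assumption; verify it against the
// TelegramMediaType values shipped with your Camel version.
from("file:photos?noop=true")
    .convertBodyTo(byte[].class)
    // Tell the component how to interpret the raw bytes
    .setHeader("CamelTelegramMediaType", constant("PHOTO_PNG"))
    .to("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere&chatId=123456");
```

Without the header, the component cannot tell whether the bytes are a photo, audio, or a generic document.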
# Consumer Example

The following is a basic example of how to receive all messages that
Telegram users are sending to the configured Bot. In Java DSL:

    from("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere")
        .bean(MyBean.class)

or in Spring XML

The `MyBean` is a simple bean that will receive the messages

    public class MyBean {

        public void process(String message) {
            // or Exchange, or org.apache.camel.component.telegram.model.IncomingMessage (or both)

            // do process
        }

    }

Supported types for incoming messages are:

|Java Type|Description|
|---|---|
|IncomingMessage|The full object representation of an incoming message|
|String|The content of the message, for text messages only|
# Reactive Chat-Bot Example

The reactive chatbot mode is a simple way of using the Camel component
to build a chatbot that replies directly to chat messages received from
Telegram users.

The following is a basic configuration of the chatbot in Java DSL:

    from("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere")
        .bean(ChatBotLogic.class)
        .to("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere");

or in Spring XML

The `ChatBotLogic` is a simple bean that implements a generic
String-to-String method.

    public class ChatBotLogic {

        public String chatBotProcess(String message) {
            if( "do-not-reply".equals(message) ) {
                return null; // no response in the chat
            }

            return "echo from the bot: " + message; // echoes the message
        }

    }

Every non-null string returned by the `chatBotProcess` method is
automatically routed to the chat that originated the request (as the
`CamelTelegramChatId` header is used to route the message).

# Getting the Chat ID

If you want to push messages to a specific Telegram chat when an event
occurs, you need to retrieve the corresponding chat ID. The chat ID is
not currently shown in the Telegram client, but you can obtain it using
a simple route.

First, add the bot to the chat where you want to push messages, then run
a route like the following one.

    from("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere")
        .to("log:INFO?showHeaders=true");

Any message received by the bot will be dumped to your log together with
information about the chat (the `CamelTelegramChatId` header).

Once you get the chat ID, you can use the following sample route to push
a message to it.

    from("timer:tick")
        .setBody().constant("Hello")
        .to("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere&chatId=123456");

Note that the corresponding URI parameter is simply `chatId`.
# Customizing keyboard

You can customize the user keyboard instead of asking the user to type
an option. `OutgoingTextMessage` has the property `replyMarkup`, which
can be used for this purpose.

    from("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere")
        .process(exchange -> {

            OutgoingTextMessage msg = new OutgoingTextMessage();
            msg.setText("Choose one option!");

            InlineKeyboardButton buttonOptionOneI = InlineKeyboardButton.builder()
                .text("Option One - I").build();

            InlineKeyboardButton buttonOptionOneII = InlineKeyboardButton.builder()
                .text("Option One - II").build();

            InlineKeyboardButton buttonOptionTwoI = InlineKeyboardButton.builder()
                .text("Option Two - I").build();

            ReplyKeyboardMarkup replyMarkup = ReplyKeyboardMarkup.builder()
                .keyboard()
                .addRow(Arrays.asList(buttonOptionOneI, buttonOptionOneII))
                .addRow(Arrays.asList(buttonOptionTwoI))
                .close()
                .oneTimeKeyboard(true)
                .build();

            msg.setReplyMarkup(replyMarkup);

            exchange.getIn().setBody(msg);
        })
        .to("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere");

If you want to disable the keyboard, the next message must have the
`removeKeyboard` property set on the `ReplyKeyboardMarkup` object.

    from("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere")
        .process(exchange -> {

            OutgoingTextMessage msg = new OutgoingTextMessage();
            msg.setText("Your answer was accepted!");

            ReplyKeyboardMarkup replyMarkup = ReplyKeyboardMarkup.builder()
                .removeKeyboard(true)
                .build();

            msg.setReplyMarkup(replyMarkup);

            exchange.getIn().setBody(msg);
        })
        .to("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere");

# Webhook Mode

The Telegram component supports usage in the **webhook mode** using the
**camel-webhook** component.

To enable webhook mode, users need first to add a REST implementation to
their application.
Maven users, for example, can add **netty-http** to +their `pom.xml` file: + + + org.apache.camel + camel-netty-http + x.x.x + + + +Once done, you need to prepend the webhook URI to the telegram URI you +want to use. + +In Java DSL: + + from("webhook:telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere").to("log:info"); + +Some endpoints will be exposed by your application and Telegram will be +configured to send messages to them. You need to ensure that your server +is exposed to the internet and to pass the right value of the +**camel.component.webhook.configuration.webhook-external-url** property. + +Refer to the **camel-webhook** component documentation for instructions +on how to set it. + +## Component Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|baseUri|Can be used to set an alternative base URI, e.g. when you want to test the component against a mock Telegram API|https://api.telegram.org|string| +|client|To use a custom java.net.http.HttpClient||object| +|healthCheckConsumerEnabled|Used for enabling or disabling all consumer based health checks from this component|true|boolean| +|healthCheckProducerEnabled|Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.|true|boolean| +|authorizationToken|The default Telegram authorization token to be used when the information is not provided in the endpoints.||string| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|type|The endpoint type. Currently, only the 'bots' type is supported.||string| +|limit|Limit on the number of updates that can be received in a single polling request.|100|integer| +|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean| +|timeout|Timeout in seconds for long polling. 
Put 0 for short polling or a bigger number for long polling. Long polling produces shorter response time.|30|integer| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object| +|chatId|The identifier of the chat that will receive the produced messages. Chat ids can be first obtained from incoming messages (eg. when a telegram user starts a conversation with a bot, its client sends automatically a '/start' message containing the chat id). 
It is an optional parameter, as the chat id can be set dynamically for each outgoing message (using body or headers).||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|baseUri|Can be used to set an alternative base URI, e.g. when you want to test the component against a mock Telegram API||string| +|bufferSize|The initial in-memory buffer size used when transferring data between Camel and AHC Client.|1048576|integer| +|client|To use a custom HttpClient||object| +|proxyHost|HTTP proxy host which could be used when sending out the message.||string| +|proxyPort|HTTP proxy port which could be used when sending out the message.||integer| +|proxyType|HTTP proxy type which could be used when sending out the message.|HTTP|object| +|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer| +|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer| +|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. 
When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer| +|delay|Milliseconds before the next poll.|500|integer| +|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean| +|initialDelay|Milliseconds before the first poll starts.|1000|integer| +|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer| +|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object| +|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object| +|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object| +|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object| +|startScheduler|Whether the scheduler should be auto started.|true|boolean| +|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object| +|useFixedDelay|Controls if fixed delay or fixed rate is used. 
See ScheduledExecutorService in JDK for details.|true|boolean| +|authorizationToken|The authorization token for using the bot (ask the BotFather)||string| diff --git a/camel-thrift.md b/camel-thrift.md new file mode 100644 index 0000000000000000000000000000000000000000..fc499374c8cb0c894b95b89da397553a680a0754 --- /dev/null +++ b/camel-thrift.md @@ -0,0 +1,115 @@ +# Thrift + +**Since Camel 2.20** + +**Both producer and consumer are supported** + +The Thrift component allows you to call or expose Remote Procedure Call +(RPC) services using [Apache Thrift](https://thrift.apache.org/) binary +communication protocol and serialization mechanism. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-thrift + x.x.x + + + +# URI format + + thrift://service[?options] + +# Thrift method parameters mapping + +Parameters in the called procedure must be passed as a list of objects +inside the message body. The primitives are converted from the objects +on the fly. To correctly find the corresponding method, all types must +be transmitted regardless of the values. 
Please see the example below showing how to pass different parameters
to a method through the Camel body:

    List requestBody = new ArrayList();

    requestBody.add((boolean)true);
    requestBody.add((byte)THRIFT_TEST_NUM1);
    requestBody.add((short)THRIFT_TEST_NUM1);
    requestBody.add((int)THRIFT_TEST_NUM1);
    requestBody.add((long)THRIFT_TEST_NUM1);
    requestBody.add((double)THRIFT_TEST_NUM1);
    requestBody.add("empty"); // String parameter
    requestBody.add(ByteBuffer.allocate(10)); // binary parameter
    requestBody.add(new Work(THRIFT_TEST_NUM1, THRIFT_TEST_NUM2, Operation.MULTIPLY)); // Struct parameter
    requestBody.add(new ArrayList()); // list parameter
    requestBody.add(new HashSet()); // set parameter
    requestBody.add(new HashMap()); // map parameter

    Object responseBody = template.requestBody("direct:thrift-alltypes", requestBody);

Incoming parameters in the service consumer will also be passed to the
message body as a list of objects.

# Examples

Below is a simple synchronous method invocation with host and port
parameters:

    from("direct:thrift-calculate")
        .to("thrift://localhost:1101/org.apache.camel.component.thrift.generated.Calculator?method=calculate&synchronous=true");

Below is the same synchronous method invocation in XML DSL
configuration:

Thrift service consumer with asynchronous communication:

    from("thrift://localhost:1101/org.apache.camel.component.thrift.generated.Calculator")
        .to("direct:thrift-service");

It’s possible to automate Java code generation for .thrift files using
**thrift-maven-plugin**, but the Thrift compiler binary distribution for
your operating system must first be present on the running host.
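The exact-type requirement described above (all parameter types must match regardless of values) can be illustrated with plain JDK reflection. This is an illustrative sketch only, not Camel or Thrift code: a method is looked up by the runtime classes of the boxed arguments, so passing a `Short` where an `int` is expected finds no match.

```java
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.List;

// Plain-JDK sketch of signature-based method resolution: the target
// method is resolved from the runtime classes of the boxed arguments,
// which is why each parameter must carry its exact type.
public class SignatureLookup {

    // A service with a primitive signature, similar to a generated client.
    public static class Calculator {
        public long calculate(int a, int b) { return (long) a + b; }
    }

    // Map wrapper classes back to the primitives the service declares.
    static Class<?> unbox(Class<?> c) {
        if (c == Boolean.class) return boolean.class;
        if (c == Byte.class)    return byte.class;
        if (c == Short.class)   return short.class;
        if (c == Integer.class) return int.class;
        if (c == Long.class)    return long.class;
        if (c == Double.class)  return double.class;
        return c;
    }

    // Resolve a method from the runtime classes of the arguments.
    static Method resolve(Class<?> service, String name, List<?> args)
            throws NoSuchMethodException {
        Class<?>[] types = args.stream()
                .map(a -> unbox(a.getClass()))
                .toArray(Class<?>[]::new);
        return service.getMethod(name, types);
    }

    public static void main(String[] args) throws Exception {
        // The int-typed values matter: Arrays.asList((short) 1, (short) 2)
        // would throw NoSuchMethodException, as calculate(short, short)
        // does not exist.
        List<Object> body = Arrays.asList(1, 2);
        Method m = resolve(Calculator.class, "calculate", body);
        System.out.println(m.invoke(new Calculator(), body.toArray())); // prints 3
    }
}
```

The same principle is why the Camel body list above carries explicit casts such as `(short)THRIFT_TEST_NUM1`.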
# For more information, see these resources

[Thrift project GitHub](https://github.com/apache/thrift/)

[Apache Thrift Java tutorial](https://thrift.apache.org/tutorial/java)

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component.
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean| +|useGlobalSslContextParameters|Determine if the thrift component is using global SSL context parameters|false|boolean| + +## Endpoint Configurations + + +|Name|Description|Default|Type| +|---|---|---|---| +|host|The Thrift server host name. This is localhost or 0.0.0.0 (if not defined) when being a consumer or remote server host name when using producer.||string| +|port|The Thrift server port||integer| +|service|Fully qualified service name from the thrift descriptor file (package dot service definition name)||string| +|compressionType|Protocol compression mechanism type|NONE|object| +|exchangeProtocol|Exchange protocol serialization type|BINARY|object| +|clientTimeout|Client timeout for consumers||integer| +|maxPoolSize|The Thrift server consumer max thread pool size|10|integer| +|poolSize|The Thrift server consumer initial thread pool size|1|integer| +|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean| +|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object| +|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object| +|method|The Thrift invoked method name||string| +|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean| +|synchronous|Sets whether synchronous processing should be strictly used|false|boolean| +|negotiationType|Security negotiation type|PLAINTEXT|object| +|sslParameters|Configuration parameters for SSL/TLS security negotiation||object| diff --git a/camel-thymeleaf.md b/camel-thymeleaf.md new file mode 100644 index 0000000000000000000000000000000000000000..dac9a18b74be5f66e1b1aa0716c3fa8d3b315e64 --- /dev/null +++ b/camel-thymeleaf.md @@ -0,0 +1,269 @@ +# Thymeleaf + +**Since Camel 4.1** + +**Only producer is supported** + +The Thymeleaf component allows you to process a message using a +[Thymeleaf](https://www.thymeleaf.org/) template. This can be very +powerful when using Templating to generate responses for requests. + +Maven users will need to add the following dependency to their `pom.xml` +for this component: + + + org.apache.camel + camel-thymeleaf + x.x.x + + + +# URI format + + thymeleaf:templateName[?options] + +Where **templateName** is the classpath-local URI of the template to +invoke; or the complete URL of the remote template (e.g.: +`\file://folder/myfile.html`). 
+ +Headers set during the Thymeleaf evaluation are returned to the message +and added as headers, thus making it possible to return values from +Thymeleaf to the Message. + +For example, to set the header value of `fruit` in the Thymeleaf +template `fruit-template.html`: + + $in.setHeader("fruit", "Apple") + +The `fruit` header is now accessible from the `message.out.headers`. + +# Thymeleaf Context + +Camel will provide exchange information in the Thymeleaf context (just a +`Map`). The `Exchange` is transferred as: + + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|key|value|
|---|---|
|exchange|The Exchange itself.|
|exchange.properties|The Exchange properties.|
|headers|The headers of the In message.|
|camelContext|The Camel Context instance.|
|request|The In message.|
|in|The In message.|
|body|The In message body.|
|out|The Out message (only for InOut message exchange pattern).|
|response|The Out message (only for InOut message exchange pattern).|
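The context keys listed above can be referenced directly in a template with Thymeleaf's expression syntax. A minimal sketch (the file name and header names are hypothetical; the inline-text syntax matches the email sample later on this page):

```html
<!-- greeting.html: a sketch referencing the context keys listed above -->
<p>Dear [(${headers.firstName})] [(${headers.lastName})]</p>
<p>[(${body})]</p>
<p>Handled by exchange [(${exchange.exchangeId})]</p>
```

Each key resolves against the map Camel populates before evaluating the template, so `headers.firstName` reads the In message header named `firstName`.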
+ +You can set up a custom Thymeleaf Context yourself by setting property +`allowTemplateFromHeader=true` and setting the message header +`CamelThymeleafContext` like this + + EngineContext engineContext = new EngineContext(variableMap); + exchange.getIn().setHeader("CamelThymeleafContext", engineContext); + +# Hot reloading + +The Thymeleaf template resource is, by default, hot reloadable for both +file and classpath resources (expanded jar). If you set +`contentCache=true`, Camel will only load the resource once, and thus +hot reloading is not possible. This scenario can be used in production +when the resource never changes. + +# Dynamic templates + +Camel provides two headers by which you can define a different resource +location for a template or the template content itself. If any of these +headers is set, then Camel uses this over the endpoint configured +resource. This allows you to provide a dynamic template at runtime. + + +++++ + + + + + + + + + + + + + + + + + + + +
|Header|Type|Description|
|---|---|---|
|CamelThymeleafResourceUri|String|A URI for the template resource to use instead of the endpoint configured.|
|CamelThymeleafTemplate|String|The template to use instead of the endpoint configured.|
# Samples

For a simple use case, you could use something like:

    from("activemq:My.Queue")
        .to("thymeleaf:com/acme/MyResponse.html");

This uses a Thymeleaf template to formulate a response to a message for
InOut message exchanges (where there is a `JMSReplyTo` header).

If you want to use InOnly and consume the message and send it to another
destination, you could use the following route:

    from("activemq:My.Queue")
        .to("thymeleaf:com/acme/MyResponse.html")
        .to("activemq:Another.Queue");

And to use the content cache, e.g., for use in production, where the
`.html` template never changes:

    from("activemq:My.Queue")
        .to("thymeleaf:com/acme/MyResponse.html?contentCache=true")
        .to("activemq:Another.Queue");

And a file-based resource:

    from("activemq:My.Queue")
        .to("thymeleaf:file://myfolder/MyResponse.html?contentCache=true")
        .to("activemq:Another.Queue");

It’s possible to specify what template the component should use
dynamically via a header, so for example:

    from("direct:in")
        .setHeader("CamelThymeleafResourceUri").constant("path/to/my/template.html")
        .to("thymeleaf:dummy?allowTemplateFromHeader=true");

It’s also possible to provide the template content itself via a header,
for example:

    from("direct:in")
        .setHeader("CamelThymeleafTemplate").constant("Hi this is a thymeleaf template that can do templating ${body}")
        .to("thymeleaf:dummy?allowTemplateFromHeader=true");

# The Email Sample

In this sample, we want to use Thymeleaf templating for an order
confirmation email. The email template is laid out in Thymeleaf as:

**letter.html**

    Dear [(${headers.lastName})], [(${headers.firstName})]

    Thanks for the order of [(${headers.item})].
    Regards Camel Riders Bookstore
    [(${body})]

And the Java code (from a unit test):

    private Exchange createLetter() {
        Exchange exchange = context.getEndpoint("direct:a").createExchange();
        Message msg = exchange.getIn();
        msg.setHeader("firstName", "Claus");
        msg.setHeader("lastName", "Ibsen");
        msg.setHeader("item", "Camel in Action");
        msg.setBody("PS: Next beer is on me, James");
        return exchange;
    }

    @Test
    public void testThymeleafLetter() throws Exception {
        MockEndpoint mock = getMockEndpoint("mock:result");
        mock.expectedMessageCount(1);
        mock.message(0).body(String.class).contains("Thanks for the order of Camel in Action");

        template.send("direct:a", createLetter());

        mock.assertIsSatisfied();
    }

    @Override
    protected RouteBuilder createRouteBuilder() {
        return new RouteBuilder() {
            public void configure() {
                from("direct:a")
                    .to("thymeleaf:org/apache/camel/component/thymeleaf/letter.html")
                    .to("mock:result");
            }
        };
    }

## Component Configurations


|Name|Description|Default|Type|
|---|---|---|---|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component.
This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS clients, etc.|true|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|resourceUri|Path to the resource. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http load the resource using these protocols (classpath is the default). ref will look up the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after a dot, e.g. bean:myBean.myMethod.||string|
+|allowContextMapAll|Sets whether the context map should allow access to all details. By default, only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext, but doing so imposes a potential security risk as it opens access to the full power of the CamelContext API.|false|boolean|
+|cacheable|Whether templates are to be considered cacheable.||boolean|
+|cacheTimeToLive|The cache time-to-live for templates, expressed in milliseconds.||integer|
+|checkExistence|Whether template resources will be checked for existence before being returned.||boolean|
+|contentCache|Sets whether to use the resource content cache.|false|boolean|
+|templateMode|The template mode to be applied to templates.|HTML|string|
+|lazyStartProducer|Whether the producer should be started lazily (on the first message). By starting lazily, you can allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail to start. By deferring this startup to be lazy, the startup failure can instead be handled during routing of messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time.|false|boolean|
+|encoding|The character encoding to be used for reading template resources.||string|
+|order|The order in which this template will be resolved as part of the resolver chain.||integer|
+|prefix|An optional prefix added to template names to convert them into resource names.||string|
+|resolver|The type of resolver to be used by the template engine.|CLASS\_LOADER|object|
+|suffix|An optional suffix added to template names to convert them into resource names.||string|
diff --git a/camel-tika.md b/camel-tika.md
new file mode 100644
index 0000000000000000000000000000000000000000..c721e1158f5fd51719da8e7ba288b916fb1feddf
--- /dev/null
+++ b/camel-tika.md
@@ -0,0 +1,56 @@
+# Tika
+
+**Since Camel 2.19**
+
+**Only producer is supported**
+
+The Tika component provides the ability to detect and parse documents
+with Apache Tika. This component uses [Apache
+Tika](https://tika.apache.org/) as the underlying library to work with
+documents.
+
+To use the Tika component, Maven users will need to add the following
+dependency to their `pom.xml`:
+
+**pom.xml**
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-tika</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# To Detect a file’s MIME Type
+
+The file should be placed in the message body.
+
+    from("direct:start")
+        .to("tika:detect");
+
+# To Parse a File
+
+The file should be placed in the message body.
+
+    from("direct:start")
+        .to("tika:parse");
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazily (on the first message). By starting lazily, you can allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail to start. By deferring this startup to be lazy, the startup failure can instead be handled during routing of messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring of options (the option must be marked as autowired) by looking up the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS clients, etc.|true|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|operation|The operation type.||object|
+|tikaParseOutputEncoding|Tika parse output encoding.||string|
+|tikaParseOutputFormat|Tika output format. Supported output formats: xml (returns parsed content as XML), html (returns parsed content as HTML), text (returns parsed content as text), textMain (uses the boilerpipe library to automatically extract the main content from a web page).|xml|object|
+|lazyStartProducer|Whether the producer should be started lazily (on the first message). By starting lazily, you can allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail to start. By deferring this startup to be lazy, the startup failure can instead be handled during routing of messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time.|false|boolean|
+|tikaConfig|Tika config.||object|
+|tikaConfigUri|Tika config URI.||string|
diff --git a/camel-timer.md b/camel-timer.md
new file mode 100644
index 0000000000000000000000000000000000000000..cc33ef98b49b77eb0aa0133e730b3c705703b11c
--- /dev/null
+++ b/camel-timer.md
@@ -0,0 +1,157 @@
+# Timer
+
+**Since Camel 1.0**
+
+**Only consumer is supported**
+
+The Timer component is used to generate message exchanges when a timer
+fires. You can only consume events from this endpoint.
+
+# URI format
+
+    timer:name[?options]
+
+Where `name` is the name of the `Timer` object, which is created and
+shared across endpoints. So if you use the same name for all your timer
+endpoints, only one `Timer` object and thread will be used.
+
+The *IN* body of the generated exchange is `null`. Therefore, calling
+`exchange.getIn().getBody()` returns `null`.
+
+**Advanced Scheduler**
+
+See also the [Quartz](#quartz-component.adoc) component, which supports
+much more advanced scheduling.
+
+# Exchange Properties
+
+When the timer is fired, it adds the following information as properties
+to the `Exchange`:
+
+|Name|Type|Description|
+|---|---|---|
+|Exchange.TIMER_NAME|String|The value of the name option.|
+|Exchange.TIMER_TIME|Date|The value of the time option.|
+|Exchange.TIMER_PERIOD|long|The value of the period option.|
+|Exchange.TIMER_FIRED_TIME|Date|The time when the consumer fired.|
+|Exchange.TIMER_COUNTER|Long|The current fire counter. Starts from 1.|
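+
+Note that these properties are only populated when the endpoint is
+configured with `includeMetadata=true` (see the endpoint options below).
+As a minimal Spring DSL sketch (the timer name, period, and log message
+here are illustrative), the following route logs the fire counter by its
+property name `CamelTimerCounter`, the value of `Exchange.TIMER_COUNTER`:
+
+    <route>
+      <from uri="timer:metadataDemo?period=5000&amp;includeMetadata=true"/>
+      <log message="Timer fired ${exchangeProperty.CamelTimerCounter} time(s)"/>
+    </route>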
+
+# Sample
+
+To set up a route that generates an event every 60 seconds:
+
+    from("timer://foo?fixedRate=true&period=60000").to("bean:myBean?method=someMethodName");
+
+The above route will generate an event and then invoke the
+`someMethodName` method on the bean called `myBean` in the Registry.
+
+And the route in Spring DSL:
+
+    <route>
+      <from uri="timer://foo?fixedRate=true&amp;period=60000"/>
+      <to uri="bean:myBean?method=someMethodName"/>
+    </route>
+
+# Firing as soon as possible
+
+You may want to fire messages in a Camel route as soon as possible; to
+do that, you can use a negative delay:
+
+    <route>
+      <from uri="timer://foo?delay=-1"/>
+      <to uri="bean:myBean"/>
+    </route>
+
+In this way, the timer will fire messages immediately.
+
+You can also specify a `repeatCount` parameter in conjunction with a
+negative delay to stop firing messages after a fixed number has been
+reached.
+
+If you don’t specify a `repeatCount`, then the timer will continue
+firing messages until the route is stopped.
+
+# Firing only once
+
+You may want to fire a message in a Camel route only once, such as when
+starting the route. To do that, you use the `repeatCount` option as
+shown:
+
+    <route>
+      <from uri="timer:foo?repeatCount=1"/>
+      <to uri="bean:myBean"/>
+    </route>
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: this is only possible if the 3rd party component allows Camel to be alerted when an exception is thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible in future releases. By default, the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
+|includeMetadata|Whether to include metadata in the exchange, such as fired time, timer name, timer count, etc.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring of options (the option must be marked as autowired) by looking up the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS clients, etc.|true|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|timerName|The name of the timer.||string|
+|delay|The number of milliseconds to wait before the first event is generated. Should not be used in conjunction with the time option. The default value is 1000.|1000|duration|
+|fixedRate|Events take place at approximately regular intervals, separated by the specified period.|false|boolean|
+|includeMetadata|Whether to include metadata in the exchange, such as fired time, timer name, timer count, etc.|false|boolean|
+|period|Generate periodic events every period. Must be zero or a positive value. The default value is 1000.|1000|duration|
+|repeatCount|Specifies a maximum limit on the number of fires. So if you set it to 1, the timer will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.||integer|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: this is only possible if the 3rd party component allows Camel to be alerted when an exception is thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible in future releases. By default, the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice that if the option bridgeErrorHandler is enabled, then this option is not in use. By default, the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|daemon|Specifies whether or not the thread associated with the timer endpoint runs as a daemon. The default value is true.|true|boolean|
+|pattern|Allows you to specify a custom Date pattern to use for setting the time option using URI syntax.||string|
+|synchronous|Sets whether synchronous processing should be strictly used.|false|boolean|
+|time|A java.util.Date at which the first event should be generated. If using the URI, the expected pattern is: yyyy-MM-dd HH:mm:ss or yyyy-MM-dd'T'HH:mm:ss.||string|
+|timer|To use a custom Timer.||object|
+|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object|
diff --git a/camel-twilio.md b/camel-twilio.md
new file mode 100644
index 0000000000000000000000000000000000000000..79eeb444dbd7284934240048859e27c1ad853961
--- /dev/null
+++ b/camel-twilio.md
@@ -0,0 +1,127 @@
+# Twilio
+
+**Since Camel 2.20**
+
+**Both producer and consumer are supported**
+
+The Twilio component provides access to Version 2010-04-01 of the
+Twilio REST APIs using the [Twilio Java
+SDK](https://github.com/twilio/twilio-java).
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-twilio</artifactId>
+        <version>${camel-version}</version>
+    </dependency>
+
+# Producer Endpoints
+
+Producer endpoints can use endpoint prefixes followed by endpoint names
+and associated options described next. A shorthand alias can be used for
+all the endpoints. The endpoint URI MUST contain a prefix.
+
+Any of the endpoint options can be provided in either the endpoint URI,
+or dynamically in a message header. The message header name must be of
+the format **`CamelTwilio.