Performance hints
This page offers several performance-tuning hints that are worth keeping in mind:
OpenHub = Apache Camel
There are many resources (articles, discussion threads, etc.) related to Apache Camel performance.
Interesting ones include:
- Performance Tuning Ideas for Apache Camel
- Insights about tuning an Apache Camel application deployed into Spring Boot
Database connection
OpenHub uses a database for saving messages and their states (shared across several nodes in the cluster):
- use a DB connection pool, for example HikariCP (we have very good experience with it)
- the DB connection is configured via Spring Boot "spring.datasource" properties (see the Spring Boot documentation), either as a direct database connection or via JNDI
The number of database connections must correlate with:
- the number of concurrent consumers for processing asynchronous messages, see the configuration parameter ohf.asynch.concurrentConsumers
- the number of incoming asynchronous requests being processed
- the number of incoming synchronous requests, if OpenHub is configured to save requests/responses
The number of database connections must be at least the number of concurrently incoming asynchronous requests plus the number of concurrent consumers.
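A minimal sketch of such a configuration in application.properties (the URL, credentials and pool sizes below are only illustrative; the maximum pool size follows the rule above, i.e. concurrent consumers plus concurrently incoming requests):

# illustrative direct connection, configured via Spring Boot "spring.datasource" properties
spring.datasource.url=jdbc:postgresql://localhost:5432/openhub
spring.datasource.username=openhub
spring.datasource.password=changeit
# HikariCP sizing, e.g. 30 concurrent consumers + 20 concurrently incoming asynchronous requests
spring.datasource.hikari.minimum-idle=10
spring.datasource.hikari.maximum-pool-size=50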
SEDA configuration
The crucial component for processing incoming requests is the SEDA component.
There are several points to keep in mind:
- we use the following configuration (see org.openhubframework.openhub.core.common.asynch.AsynchMessageRoute#URI_ASYNC_PROCESSING_MSG)
- ASYNCH_CONCURRENT_CONSUMERS corresponds to the parameter ohf.asynch.concurrentConsumers, see OpenHub configuration
- PRIORITY_QUEUE_FACTORY refers to Spring bean with the name "priorityQueueFactory"
public static final String URI_ASYNC_PROCESSING_MSG = "seda:asynch_message_route"
        + "?concurrentConsumers={{" + ASYNCH_CONCURRENT_CONSUMERS + "}}&waitForTaskToComplete=Never"
        + "&blockWhenFull=true&queueFactory=#" + PRIORITY_QUEUE_FACTORY;
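With the placeholders resolved (assuming, for example, ohf.asynch.concurrentConsumers = 30 and the "priorityQueueFactory" bean name mentioned above), the endpoint URI looks like this:

seda:asynch_message_route?concurrentConsumers=30&waitForTaskToComplete=Never&blockWhenFull=true&queueFactory=#priorityQueueFactory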
- the default value of size (queueSize), i.e. the maximum capacity of the SEDA queue (the number of messages it can hold), is Integer.MAX_VALUE
- the queue must be big enough to hold incoming requests, otherwise the thread that adds a new message will block (blockWhenFull=true)
- SEDA uses a PriorityBlockingQueueFactory (not the default LinkedBlockingQueue implementation)
- the queue is sorted (see org.openhubframework.openhub.core.common.asynch.msg.MsgPriorityComparator) by the processing priority attribute defined on the Message object
- incoming (new) requests have a higher priority than messages being re-processed
@Bean(name = AsynchConstants.PRIORITY_QUEUE_FACTORY)
public PriorityBlockingQueueFactory priorityQueueFactory() {
    PriorityBlockingQueueFactory<Exchange> queueFactory = new PriorityBlockingQueueFactory<>();
    queueFactory.setComparator(new MsgPriorityComparator());
    return queueFactory;
}
- watch the logs and search for the following log message, which shows how long an incoming message waited in the SEDA queue before processing. If the time is high, messages wait too long in the queue for further processing => increase the queue size or the number of consumers (see ohf.asynch.concurrentConsumers)
public void logStartProcessing(@Body Message msg,
        @Nullable @Header(AsynchConstants.MSG_QUEUE_INSERT_HEADER) Long msgInsertTime) {
    LOG.debug("Starts processing of the message {}, waited in queue for {} ms",
            msg.toHumanString(),
            msgInsertTime != null ? (System.currentTimeMillis() - msgInsertTime) : "-");
}
- when a message stays in the queue for a long time, it can happen that the repairing process converts the message back to the PARTLY_FAILED state and the message eventually starts duplicate processing. In that case the following log message occurs:
- the repairing process is configured mainly by two parameters, ohf.asynch.repairRepeatTimeSec and ohf.asynch.partlyFailedIntervalSec (see the sample configuration after the log example below)
2019-01-14 19:52:48 [Camel (camelContext) thread #20 - seda://asynch_message_route, ALTA, ID:95fc3756-182d-11e9-b9c2-005056010488] WARN syncProcessOut_route - Message (msg_id = 37357, correlationId = ID:95fc3756-182d-11e9-b9c2-005056010488) was obsolete, stopped further processing.
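For illustration, the asynchronous-processing parameters mentioned above could be set in application.properties like this (the values are examples only; see the OpenHub configuration page for their exact meaning and defaults):

# number of concurrent consumers of the SEDA queue
ohf.asynch.concurrentConsumers = 30
# repairing process parameters (values in seconds)
ohf.asynch.repairRepeatTimeSec = 300
ohf.asynch.partlyFailedIntervalSec = 60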
Thread pools
- see the threading model in Apache Camel or the threading model in JBoss FUSE (note: JBoss FUSE is the enterprise version of Apache Camel, with better documentation)
- see Design Notes for ThreadPool Configuration
- the default ThreadPoolProfile is as follows (defined in org.apache.camel.impl.DefaultExecutorServiceManager):
- pool size - the minimum number of threads to keep in the pool
- max pool size - the maximum pool size
- max queue size - the maximum number of tasks in the work queue
defaultProfile = new ThreadPoolProfile(defaultThreadPoolProfileId);
defaultProfile.setDefaultProfile(true);
defaultProfile.setPoolSize(10);
defaultProfile.setMaxPoolSize(20);
defaultProfile.setKeepAliveTime(60L);
defaultProfile.setTimeUnit(TimeUnit.SECONDS);
defaultProfile.setMaxQueueSize(1000);
defaultProfile.setAllowCoreThreadTimeOut(false);
defaultProfile.setRejectedPolicy(ThreadPoolRejectedPolicy.CallerRuns);
- we make a few small changes in our custom configuration (see org.openhubframework.openhub.core.config.CamelConfig)
- MAX_THREAD_POOL_SIZE = 30
ThreadPoolProfile threadPoolProfile = camelContext.getExecutorServiceManager().getDefaultThreadPoolProfile();
threadPoolProfile.setId(DEFAULT_THREAD_PROFILE);
threadPoolProfile.setMaxPoolSize(MAX_THREAD_POOL_SIZE);
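If some routes need different sizing than the default profile, Camel also allows registering an additional named ThreadPoolProfile. A minimal sketch using org.apache.camel.builder.ThreadPoolProfileBuilder (this is not OpenHub code; the profile name and sizes are illustrative):

// illustrative profile, registered next to the default one
ThreadPoolProfile bigPoolProfile = new ThreadPoolProfileBuilder("bigPoolProfile")
        .poolSize(20)
        .maxPoolSize(50)
        .maxQueueSize(2000)
        .build();
camelContext.getExecutorServiceManager().registerThreadPoolProfile(bigPoolProfile);

EIPs that support the executorServiceRef option (multicast, wireTap, threads, ...) can then refer to such a profile by its id.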
Throttling
Throttling can be disabled by setting: ohf.disable.throttling = true
Caching
Use cache / memory-grid functionality to increase performance when data is read frequently.
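One possible approach is the Spring cache abstraction, which can be backed by an in-memory grid or any other cache provider. A minimal sketch (the service, method and cache names are hypothetical; @EnableCaching must be switched on in the application):

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

// hypothetical service: frequently read, rarely changed data is a good caching candidate
@Service
public class CodebookService {

    // the result for a given code is stored in the "codebooks" cache,
    // so repeated reads do not hit the database again
    @Cacheable("codebooks")
    public String getCodebookValue(String code) {
        return loadFromDatabase(code);   // placeholder for the real (expensive) lookup
    }

    private String loadFromDatabase(String code) {
        return code;
    }
}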