Scheduled jobs
There are two types of scheduled jobs that OpenHub supports:
- jobs that can run on any node and can be executed in parallel (the jobs do not have to start at the same time)
- jobs that can run only once in the whole cluster, no matter on which node (note: we do not consider the case where a scheduled job has to run on one specific node only)
If there is only one OpenHub instance, it does not matter which type of scheduled job is used.
API
- the Quartz configuration API is defined in org.openhubframework.openhub.core.config.QuartzConfig
- jobs and triggers are defined with the following annotations (package org.openhubframework.openhub.api.common.quartz):
    - QuartzJob - defines one Quartz job with a simple trigger (QuartzSimpleTrigger) or a cron trigger (QuartzCronTrigger)
        - defines the execute type in the cluster (JobExecuteTypeInCluster):
            - CONCURRENT - the job can be executed in parallel on several nodes
            - NOT_CONCURRENT - the job can run on one node only
    - QuartzSimpleTrigger - this type of trigger is used to fire a job repeatedly at a specified interval
    - QuartzCronTrigger - this type of trigger is used to fire a job at a given time, defined by Unix 'cron-like' schedule definitions
Solution
- we use the Quartz job scheduler directly, not the Camel-quartz component
- implementation classes are in the org.openhubframework.openhub.core.common.quartz package
- the basic implementation is the class org.openhubframework.openhub.core.common.quartz.scheduler.DefaultScheduler, which creates all defined scheduled jobs
- information about concurrent jobs (CONCURRENT) is stored in memory
- information about not concurrent jobs (NOT_CONCURRENT) is stored in the database - Quartz uses its default database schema for storing scheduled jobs (for the H2 database it is defined in db/db/migration/h2/V1_0_2__schema_quartz.sql in our project)
- the scheduler is started by org.openhubframework.openhub.core.common.quartz.scheduler.QuartzSchedulerLifecycle
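The database-backed (NOT_CONCURRENT) mode relies on Quartz's standard JDBC job store with clustering enabled. As a rough sketch of how such a setup is typically configured (these are standard Quartz property names, not OpenHub-specific settings; the exact values OpenHub uses may differ):

```properties
# Persist job/trigger state in the shared database so that
# only one node in the cluster fires a given job
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.isClustered = true
# Each node must have a unique instance id for clustering
org.quartz.scheduler.instanceId = AUTO
```

With isClustered = true, the nodes coordinate through row locks on the shared Quartz tables, which is why the database schema from the migration script above must exist.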
If you use a database other than H2, then you must create the database structure manually from the GitHub scripts for not concurrent (NOT_CONCURRENT) scheduled jobs.
See Flyway - schema/data migration for more details.
Configuration examples
Job with one simple trigger that runs on one node only:

```java
@OpenHubQuartzJob(
        name = "AsyncPostponedJob",
        executeTypeInCluster = JobExecuteTypeInCluster.NOT_CONCURRENT,
        simpleTriggers = @QuartzSimpleTrigger(repeatIntervalMillis = 30000))
public void invokePostponedJob() {
}
```
Job with several cron and simple triggers that can run in parallel on several nodes:

```java
@OpenHubQuartzJob(
        name = "MoreTriggerJob",
        executeTypeInCluster = JobExecuteTypeInCluster.CONCURRENT,
        cronTriggers = {
                @QuartzCronTrigger(cronExpression = "0 00 23 ? * *",
                        name = "FirstTriggerForJob",
                        group = "MoreTriggerGroup"),
                @QuartzCronTrigger(cronExpression = "0 00 10 ? * *",
                        misfireInstruction = CronTriggerMisfireInstruction.FIRE_ONCE_NOW,
                        name = "SecondTriggerForJob",
                        group = "MoreTriggerGroup")},
        simpleTriggers = {
                @QuartzSimpleTrigger(repeatIntervalMillis = 10000,
                        repeatCount = 20,
                        name = "ThirdTriggerForJob",
                        group = "MoreTriggerGroup"),
                @QuartzSimpleTrigger(repeatIntervalProperty = ASYNCH_PARTLY_FAILED_REPEAT_TIME_SEC,
                        intervalPropertyUnit = SimpleTriggerPropertyUnit.SECONDS,
                        misfireInstruction = SimpleTriggerMisfireInstruction.FIRE_NOW,
                        name = "FourthTriggerForJob",
                        group = "MoreTriggerGroup")})
public void invokeJob() {
}
```
Running jobs
The following jobs run in the production profile (Spring profile value "prod"):

| Job name | Default configuration | Job description |
|---|---|---|
| confirmationPool | asynch.confirmation.repeatTimeSec = 60 | Job that polls failed confirmations for further processing. |
| core_FinalMessageProcessing | ohf.asynch.finalMessages.processingIntervalSec = 3600 | Job that invokes final messages handling. |
| partlyFailedPool | asynch.partlyFailedRepeatTimeSec = 60 | Job that polls the message queue (= database) and takes PARTLY_FAILED messages for further processing. |
| extCallRepair | asynch.repairRepeatTimeSec = 300 | Repairs external calls stuck in the PROCESSING state. |
| messageRepair | asynch.repairRepeatTimeSec = 300 | Repairs messages stuck in the PROCESSING state. |
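Since the intervals above are ordinary configuration properties, they can be overridden in the application configuration. A minimal sketch, assuming a standard Spring Boot application.properties file (the property keys are taken from the table above; the values are illustrative):

```properties
# Poll PARTLY_FAILED messages every 30 s instead of the default 60 s
asynch.partlyFailedRepeatTimeSec = 30
# Repair messages/external calls stuck in PROCESSING every 10 minutes
asynch.repairRepeatTimeSec = 600
```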
There are also default (technical) jobs from Quartz:
- MEMORY_SCHEDULER
- DATABASE_CLUSTER_SCHEDULER