Scheduled jobs


OpenHub supports two types of scheduled jobs:

  1. jobs that can run on any node and can be executed in parallel (they are not required to start at the same time)
  2. jobs that can run only once in the cluster, no matter on which node (note: we do not consider the possibility that a scheduled job has to run on a specific node only)

If there is only one OpenHub instance, then it does not matter which type of scheduled job is used.

API

  • the Quartz configuration API is in the org.openhubframework.openhub.core.config.QuartzConfig class
  • jobs and triggers are defined with the following annotations (package org.openhubframework.openhub.api.common.quartz):
    • OpenHubQuartzJob - defines one Quartz job with a simple trigger (QuartzSimpleTrigger) or a cron trigger (QuartzCronTrigger)
      • defines the execute type in the cluster (JobExecuteTypeInCluster):
        • CONCURRENT - the job can be executed in parallel on multiple nodes
        • NOT_CONCURRENT - the job can run on only one node at a time
    • QuartzSimpleTrigger - this type of trigger fires a job repeatedly at a specified interval
    • QuartzCronTrigger - this type of trigger fires a job at a given time, defined by Unix 'cron-like' schedule definitions

Solution

  • we use the Quartz job scheduler directly, not the Camel Quartz component
  • implementation classes are in the org.openhubframework.openhub.core.common.quartz package
  • the basic implementation is in the class org.openhubframework.openhub.core.common.quartz.scheduler.DefaultScheduler, which creates all defined scheduled jobs
    • information about concurrent jobs (CONCURRENT) is stored in memory
    • information about not concurrent jobs (NOT_CONCURRENT) is stored in the database - Quartz uses its default database schema for storing scheduled jobs (for H2 DB it is defined in db/db/migration/h2/V1_0_2__schema_quartz.sql in our project)
  • the scheduler is started by org.openhubframework.openhub.core.common.quartz.scheduler.QuartzSchedulerLifecycle
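Coordination of NOT_CONCURRENT jobs relies on the Quartz JDBC job store with clustering enabled: nodes share the database tables and only one node fires each trigger. A minimal sketch of the relevant Quartz settings, assuming a standard quartz.properties setup (the values below are illustrative, not OpenHub defaults):

```properties
# Persist jobs and triggers in the shared database instead of memory
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.tablePrefix = QRTZ_

# Enable clustering: nodes coordinate through the database rows
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000

# Each cluster node must have a unique instance id
org.quartz.scheduler.instanceId = AUTO
```

With isClustered = true, a trigger that misfires on one node can be picked up by another node after the check-in interval, which is what makes the NOT_CONCURRENT guarantee work across the cluster.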

If you use a different database than H2 DB, then you must create the database structure for not concurrent (NOT_CONCURRENT) scheduled jobs manually from the GitHub scripts.

See Flyway - schema/data migration for more details.

Configuration examples

@OpenHubQuartzJob(name = "AsyncPostponedJob", executeTypeInCluster = JobExecuteTypeInCluster.NOT_CONCURRENT,
        simpleTriggers = @QuartzSimpleTrigger(repeatIntervalMillis = 30000))
public void invokePostponedJob() {}

@OpenHubQuartzJob(name = "MoreTriggerJob", executeTypeInCluster = JobExecuteTypeInCluster.CONCURRENT,
        cronTriggers = {
                @QuartzCronTrigger(cronExpression = "0 00 23 ? * *",
                        name = "FirstTriggerForJob",
                        group = "MoreTriggerGroup"),
                @QuartzCronTrigger(cronExpression = "0 00 10 ? * *",
                        misfireInstruction = CronTriggerMisfireInstruction.FIRE_ONCE_NOW,
                        name = "SecondTriggerForJob",
                        group = "MoreTriggerGroup")},
        simpleTriggers = {
                @QuartzSimpleTrigger(repeatIntervalMillis = 10000,
                        repeatCount = 20,
                        name = "ThirdTriggerForJob",
                        group = "MoreTriggerGroup"),
                @QuartzSimpleTrigger(repeatIntervalProperty = ASYNCH_PARTLY_FAILED_REPEAT_TIME_SEC,
                        intervalPropertyUnit = SimpleTriggerPropertyUnit.SECONDS,
                        misfireInstruction = SimpleTriggerMisfireInstruction.FIRE_NOW,
                        name = "FourthTriggerForJob",
                        group = "MoreTriggerGroup")
        })
public void invokeJob() {}
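The cron expressions in the example above ("0 00 23 ? * *" and "0 00 10 ? * *") use the Quartz six-field format: second, minute, hour, day-of-month, month, day-of-week (so the first one fires daily at 23:00:00). The helper below is a self-contained illustration of that field layout only; it is not part of the OpenHub or Quartz API:

```java
// Illustrative only: labels the six space-separated fields of a
// Quartz-style cron expression (second minute hour day-of-month month day-of-week).
public class CronFields {
    static final String[] NAMES = {
        "second", "minute", "hour", "day-of-month", "month", "day-of-week", "year"
    };

    static String describe(String cron) {
        String[] parts = cron.trim().split("\\s+");
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) sb.append(", ");
            sb.append(NAMES[i]).append('=').append(parts[i]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // prints: second=0, minute=00, hour=23, day-of-month=?, month=*, day-of-week=*
        System.out.println(describe("0 00 23 ? * *"));
    }
}
```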

Running jobs

The following jobs run in the production profile (Spring profile value "prod"):

  • confirmationPool (default: asynch.confirmation.repeatTimeSec = 60) - job that polls failed confirmations for the next processing
  • core_FinalMessageProcessing (default: ohf.asynch.finalMessages.processingIntervalSec = 3600) - job that invokes final messages handling
  • partlyFailedPool (default: asynch.partlyFailedRepeatTimeSec = 60) - starts the job process that polls the message queue (= database) and takes PARTLY_FAILED messages for further processing
  • extCallRepair (default: asynch.repairRepeatTimeSec = 300) - repairs external calls hooked in the state PROCESSING
  • messageRepair (default: asynch.repairRepeatTimeSec = 300) - repairs messages hooked in the state PROCESSING
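Because the default intervals above come from configuration properties, they can be overridden in the application configuration. A sketch, assuming a standard Spring application.properties file (the values below are examples only, not recommended settings):

```properties
# Poll PARTLY_FAILED messages every 2 minutes instead of every 60 seconds
asynch.partlyFailedRepeatTimeSec = 120

# Run the repair jobs every 10 minutes instead of every 5
asynch.repairRepeatTimeSec = 600
```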


There are also default (technical) scheduler instances from Quartz:

  • MEMORY_SCHEDULER
  • DATABASE_CLUSTER_SCHEDULER