This is internal documentation. There is a good chance you’re looking for something else. See Disclaimer.

Quartz Scheduler

Migration of Batch-Job contributions

Like before, a batch job can be contributed in two ways: using either an annotation or a contribution. When contributions are used, there is a new class BatchJobContribution, which is similar to the old one (BatchJobContribution).

The following needs to be considered when migrating the contributions:

  • id, active and description are exactly the same

  • privileged has been removed; jobs no longer run privileged by default. This can be achieved programmatically using the SecurityManager

  • minutes, hours, months, daysOfMonth and daysOfWeek are replaced by the scheduleString attribute. To run a job every 5 minutes, for example, the string */5 * * * ? would be used. If a batch job is contributed using the @BatchJob annotation, the schedule string can be copied from there. However, it is necessary to set either daysOfWeek or daysOfMonth to ? (they cannot both be *).

  • callable and factory are replaced by the jobClass attribute, which specifies the class of the Quartz job implementation

  • maintenance is not used by any batch job at this time and will be removed

There is also a new annotation BatchJob which is similar to the old one (BatchJob).

The following needs to be considered when migrating the annotations:

  • id, active and description are exactly the same

  • privileged has been removed; jobs no longer run privileged by default. This can be achieved programmatically using the SecurityManager

  • schedule remains the same but be aware of the restrictions regarding daysOfWeek and daysOfMonth mentioned above
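Putting the points above together, a migrated annotation might look like the following sketch. The attribute names follow the migration notes above, but the job class and its id are made up for illustration, and a stub declaration of @BatchJob is included only so the example is self-contained (the real annotation lives in the project).

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stub standing in for the project's @BatchJob annotation (attribute names
// taken from the migration notes above; the defaults are assumptions).
@Retention(RetentionPolicy.RUNTIME)
@interface BatchJob {
    String id();
    boolean active() default true;
    String description() default "";
    String schedule();
}

// Hypothetical batch job running every 5 minutes. Note the trailing '?' in
// the schedule string: daysOfWeek and daysOfMonth must not both be '*'.
@BatchJob(id = "example.cleanupJob",
          description = "Removes stale entries",
          schedule = "*/5 * * * ?")
class CleanupJob {
}

public class AnnotationDemo {
    public static void main(String[] args) {
        BatchJob meta = CleanupJob.class.getAnnotation(BatchJob.class);
        System.out.println(meta.id() + " -> " + meta.schedule());
    }
}
```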

Migration of task callables

There is no longer a distinction between TaskFactory and TaskCallable: jobs are instantiated by Quartz and Spring, which makes the TaskFactory obsolete. Only the TaskCallable needs to be migrated.

All tasks that are executed by the task queue must extend AbstractJob. This base class provides access to the Progress and ProgressLog and handles the metadata provided by the JobDataMap (such as running the job in the given business unit with the given principal).

If the job may be cancelled (that is, it returns true from TaskCallable#mayBeCancelled() and implements cancellation properly), it should extend AbstractInterruptableJob. This base class provides an additional isCancelled() method to check whether the user has cancelled the task.
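As a sketch of that pattern, a cancellable job polls the flag between units of work. AbstractInterruptableJob is the project's base class; the stub below only mimics its isCancelled() contract so the example is self-contained, and ExportJob is a made-up job.

```java
import java.util.List;

// Stub mimicking the project's AbstractInterruptableJob: the scheduler calls
// interrupt() when the user cancels, and the job polls isCancelled().
abstract class AbstractInterruptableJob {
    private volatile boolean cancelled;
    protected boolean isCancelled() { return cancelled; }
    public void interrupt() { cancelled = true; }
}

// Hypothetical job that checks the cancellation flag between units of work.
class ExportJob extends AbstractInterruptableJob {
    int exported;

    void doExecute(List<String> rows) {
        for (String row : rows) {
            if (isCancelled()) {
                return; // stop cleanly; work done so far stays consistent
            }
            exported++;
        }
    }
}

public class CancellationDemo {
    public static void main(String[] args) {
        ExportJob job = new ExportJob();
        job.interrupt();                       // user cancels before any work
        job.doExecute(List.of("a", "b", "c"));
        System.out.println(job.exported);      // 0: nothing was exported
    }
}
```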

If TaskFactory#isConcurrentMode() is set to false, the annotation @DisallowConcurrentExecution should be used on the job class. All batch jobs should use this annotation.

Note

@DisallowConcurrentExecution only prevents concurrent execution of jobs with the same JobKey. In our case this only applies to batch jobs where the same job is called repeatedly by a cron trigger. Background actions (like sending mails) have a different job key for each execution (due to different task data parameters) and will always run concurrently. A manually executed batch job has a different job key as well and may run concurrently to the same job started by the cron trigger.

Some migration notes:

  • Methods getEventLogger() and getLog() on TaskContext were unused and have been removed

  • TaskContext#isCancelled() is now available as protected method isCancelled() in AbstractInterruptableJob

  • Methods getProgress() and getProgressLog() on TaskContext are now available as protected methods in AbstractJob. isProgressAvailable() and isProgressLogAvailable() have been removed, since the progress should always be available when a task has been scheduled manually.

  • TaskContext#getTaskData(): Task data can now be accessed through the JobDataMapReader, which is passed to AbstractJob#doExecute(JobExecutionContext context, JobDataMapReader jobDataMapReader). There are helper methods like getTaskId() for frequently used properties, as well as generic methods like getObject(String key)
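A minimal sketch of the new access pattern follows. JobDataMapReader is the project's class; the stub below only illustrates the call shape described above, and the keys and values are made up.

```java
import java.util.HashMap;
import java.util.Map;

// Stub standing in for the project's JobDataMapReader: typed helpers for
// frequently used properties plus generic access by key.
class JobDataMapReader {
    private final Map<String, Object> data;

    JobDataMapReader(Map<String, Object> data) { this.data = data; }

    String getTaskId() { return (String) data.get("taskId"); }
    Object getObject(String key) { return data.get(key); }
}

public class TaskDataDemo {
    // Hypothetical doExecute body: reads task data the way an AbstractJob
    // subclass would, instead of going through TaskContext#getTaskData().
    static String describe(JobDataMapReader reader) {
        return reader.getTaskId() + ": " + reader.getObject("recipient");
    }

    public static void main(String[] args) {
        Map<String, Object> data = new HashMap<>();
        data.put("taskId", "task-42");
        data.put("recipient", "mail@example.com");
        System.out.println(describe(new JobDataMapReader(data)));
    }
}
```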

The SendMailJob can also be used as reference as it uses most task features.

Manually scheduling a task

While batch-jobs are scheduled automatically, other background tasks (like sending mails) need to be scheduled using the TaskSchedulingService. The following parameters are passed:

  • jobClass is the class of the task that should be executed

  • taskName is the name of the job (used to be TaskData#setName())

  • taskType should be a valid Callable_type

  • jobData is the equivalent of TaskData. This data is persisted and available during task execution. The data is serialized using XStream (as before). It supports custom data (putString() and putObject()) and also contains methods to configure the environment (putPrincipal() and putBusinessUnit()).

  • executionDate is the point in time when the job should be started
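The call shape of the parameters listed above can be sketched as follows. TaskSchedulingService and the jobData container are the project's APIs; the stubs below only record what is submitted, and all names and values in main() are made up for illustration.

```java
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;

// Stub for the jobData container described above: custom data plus
// environment configuration (principal, business unit).
class JobData {
    final Map<String, Object> values = new HashMap<>();

    void putString(String key, String value) { values.put(key, value); }
    void putObject(String key, Object value) { values.put(key, value); }
    void putPrincipal(String principal) { values.put("principal", principal); }
    void putBusinessUnit(String unit) { values.put("businessUnit", unit); }
}

// Stub for TaskSchedulingService: records the submitted task name so the
// parameter list (jobClass, taskName, taskType, jobData, executionDate)
// is visible in one place.
class TaskSchedulingService {
    String submittedName;

    void schedule(Class<?> jobClass, String taskName, String taskType,
                  JobData jobData, Instant executionDate) {
        submittedName = taskName;
    }
}

public class SchedulingDemo {
    public static void main(String[] args) {
        JobData data = new JobData();
        data.putString("recipient", "mail@example.com"); // custom task data
        data.putPrincipal("example-principal");           // environment setup

        TaskSchedulingService service = new TaskSchedulingService();
        service.schedule(Object.class, "send mail", "send_mail",
                         data, Instant.now());
        System.out.println(service.submittedName);
    }
}
```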

Tests

Tests for batch-jobs can be easily migrated to EasyBatchjobTestCase.

See Batchjob Testing section.

Startup behaviour

ch.tocco.nice2.enableUpgradeMode=true

During the database upgrade, the scheduler is completely disabled. It is not possible to execute any tasks, and batch jobs will not be synchronized with the database.

UPDATE run environment

In the UPDATE run environment, the scheduler is started, but all batch-job triggers are paused. However, explicitly submitted jobs will still be executed. The same behaviour can be achieved using the ch.tocco.nice2.tasks.disable.persistent.task.scheduling property.