Monthly Archives: March 2015

Spring Batch run by Spring Batch Admin

The second way to run a Spring Batch job is via Spring Batch Admin (SBA), which is more powerful: with SBA we can execute, stop, and monitor our batch jobs from a web UI. I downloaded the SBA sample code from the Spring Batch website and simplified it.

I used MySQL to store the metadata. The metadata tables can be created in several ways. We can create them in a normal Spring Batch job by adding the code below to the config XML.

<jdbc:initialize-database data-source="dataSource">
      <jdbc:script location="org/springframework/batch/core/schema-drop-mysql.sql" />
      <jdbc:script location="org/springframework/batch/core/schema-mysql.sql" />
</jdbc:initialize-database>

Or we can manually copy the SQL from spring-batch-core.jar (/org/springframework/batch/core/schema-mysql.sql) and run it ourselves.
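If you take the manual route, the extracted script can be fed straight to the mysql command-line client; a sketch, assuming a local server and a schema named test (the same schema used in the dataSource below):

```
# after extracting schema-mysql.sql from spring-batch-core.jar
mysql -u root -p test < schema-mysql.sql
```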

Once we have the meta tables in the database, let me explain the following configuration.

# Placeholders batch.*
#    for MySQL:

# JDBC driver; remember to add the MySQL connector dependency in Maven
batch.jdbc.driver=com.mysql.jdbc.Driver

# Query used to test the database connection
batch.jdbc.validationQuery=SELECT 1

# Script that builds the meta tables
batch.schema.script=classpath*:/org/springframework/batch/core/schema-mysql.sql

# Script that wipes the meta tables
batch.drop.script=classpath*:/org/springframework/batch/core/schema-drop-mysql.sql

# Non-platform dependent settings that you might like to change
# Always keep the schema initializer set to false, or it will wipe and rebuild the meta tables on every start
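Since batch.jdbc.driver points at the MySQL driver, the connector jar has to be on the classpath. A sketch of the Maven dependency (the version shown is only an example; pick one matching your MySQL server):

```xml
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.25</version> <!-- example version -->
</dependency>
```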

The configuration file is different from the normal one. In the SBA config XML we should delete the jobRepository, transactionManager and jobLauncher beans and define only the job itself. Pay attention to the namespace versions and to the spring-batch-admin and Spring versions; sometimes they do not work together. In my experience, spring-batch-admin 1.2.1 and Spring 3.0.5 are a good combination.
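As a sketch, that version combination can be pinned in Maven like this (coordinates assumed to be the standard ones from Maven Central):

```xml
<dependency>
    <groupId>org.springframework.batch</groupId>
    <artifactId>spring-batch-admin-manager</artifactId>
    <version>1.2.1.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>3.0.5.RELEASE</version>
</dependency>
```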

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://www.springframework.org/schema/beans
         http://www.springframework.org/schema/beans/spring-beans.xsd">

   <job id="infinite" xmlns="http://www.springframework.org/schema/batch">
      <step id="step1" next="step1">
         <tasklet start-limit="100">
            <chunk commit-interval="1" reader="itemReader" writer="itemWriter" />
         </tasklet>
      </step>
   </job>

   <bean id="itemWriter" class="com.wfs.springbatch.springbatchadmin.helloworld.ExampleItemWriter"/>

   <bean id="itemReader" class="com.wfs.springbatch.springbatchadmin.helloworld.ExampleItemReader" scope="step"/>
</beans>


In my case I use MySQL, so before running the batch job we should set the -DENVIRONMENT=mysql parameter for the JBoss or Tomcat server. If you use another database, replace the properties file with batch-[other-database-name].properties.
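For Tomcat, one common way to pass that flag is through CATALINA_OPTS in bin/setenv.sh (an optional file that Tomcat's startup script sources if present); a sketch:

```shell
# bin/setenv.sh in the Tomcat installation; catalina.sh picks up CATALINA_OPTS
export CATALINA_OPTS="$CATALINA_OPTS -DENVIRONMENT=mysql"
```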

My PPT summary for Spring Batch Admin.

source code: link

Spring Batch run by Console

There are two ways to run your batch job.

The first is to use a console command. To do this, we need to compile the batch job and save all the dependency jars in a lib folder. We add the following to the Maven pom.xml (I referred to this good example from mkyong).
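The original pom snippet did not survive here; a sketch of the relevant piece, following mkyong's approach, uses maven-dependency-plugin to copy every dependency jar next to the built artifact (the dependency-jars folder name is an assumption matching the command below):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-dependency-plugin</artifactId>
      <executions>
        <execution>
          <id>copy-dependencies</id>
          <phase>package</phase>
          <goals>
            <goal>copy-dependencies</goal>
          </goals>
          <configuration>
            <!-- copies every dependency jar into target/dependency-jars/ -->
            <outputDirectory>${project.build.directory}/dependency-jars/</outputDirectory>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```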


After that, we go to the target folder and run the following command; the main class is Spring Batch's CommandLineJobRunner, which takes the job XML and the job name as arguments (on Linux, use : instead of ; as the classpath separator):
java -cp "dependency-jars/*;SpringBatchExample.jar" org.springframework.batch.core.launch.support.CommandLineJobRunner spring/batch/jobs/job-hello-world.xml pliJob

The other way to run the batch job is like running a normal main() in Java: we compile it, then use java -cp to call the App class in the jar file:
java -cp SpringBatchExample.jar com.pli.project.sba.App

Spring Batch Basic

Over the last two weeks I spent a lot of effort researching Spring Batch. This framework requires a lot of XML configuration. Below I attach the code that passed for me, with explanations.
My Spring Batch project is based on mkyong's.

If we want to automatically create the metadata tables in the database, we should add <jdbc:initialize-database>:

<beans xmlns="http://www.springframework.org/schema/beans"
   xmlns:jdbc="http://www.springframework.org/schema/jdbc"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://www.springframework.org/schema/beans
      http://www.springframework.org/schema/beans/spring-beans.xsd
      http://www.springframework.org/schema/jdbc
      http://www.springframework.org/schema/jdbc/spring-jdbc.xsd">

   <!-- connect to database -->
   <bean id="dataSource"
      class="org.springframework.jdbc.datasource.DriverManagerDataSource">
      <property name="driverClassName" value="com.mysql.jdbc.Driver" />
      <property name="url" value="jdbc:mysql://localhost:3306/test" />
      <property name="username" value="root" />
      <property name="password" value="" />
   </bean>

   <!-- create job-meta tables automatically -->
   <jdbc:initialize-database data-source="dataSource">
      <jdbc:script location="org/springframework/batch/core/schema-drop-mysql.sql" />
      <jdbc:script location="org/springframework/batch/core/schema-mysql.sql" />
   </jdbc:initialize-database>
</beans>

In order to launch a job from code, we need a jobRepository, a transactionManager and a jobLauncher. If a job doesn't set the job-repository attribute, it uses the default jobRepository (the bean named jobRepository).
The JobRepository is responsible for saving execution information. One choice is to keep it in memory; another is to store it in the database. The configuration differs for each option; both ways are shown below.

<beans xmlns="http://www.springframework.org/schema/beans"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://www.springframework.org/schema/beans
      http://www.springframework.org/schema/beans/spring-beans.xsd">

   <!-- stored job-meta in memory (keep only one of the two jobRepository definitions active) -->
   <bean id="jobRepository"
      class="org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean">
      <property name="transactionManager" ref="transactionManager" />
   </bean>

   <!-- stored job-meta in database -->
   <bean id="jobRepository"
      class="org.springframework.batch.core.repository.support.JobRepositoryFactoryBean">
      <property name="dataSource" ref="dataSource" />
      <property name="transactionManager" ref="transactionManager" />
      <property name="databaseType" value="mysql" />
   </bean>

   <bean id="transactionManager"
      class="org.springframework.batch.support.transaction.ResourcelessTransactionManager" />

   <bean id="jobLauncher"
      class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
      <property name="jobRepository" ref="jobRepository" />
   </bean>
</beans>


<beans xmlns="http://www.springframework.org/schema/beans"
   xmlns:batch="http://www.springframework.org/schema/batch"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://www.springframework.org/schema/batch
      http://www.springframework.org/schema/batch/spring-batch.xsd
      http://www.springframework.org/schema/beans
      http://www.springframework.org/schema/beans/spring-beans.xsd">

   <import resource="../config/context.xml" />
   <import resource="../config/database.xml" />

   <bean id="report" class="batch.entity.Report" scope="prototype" />
   <bean id="itemProcessor" class="batch.model.CustomItemProcessor" />

   <batch:job id="helloWorldJob">
      <batch:step id="step1">
         <batch:tasklet>
            <batch:chunk reader="cvsFileItemReader" writer="myWriter"
               processor="itemProcessor" commit-interval="10" />
         </batch:tasklet>
      </batch:step>
   </batch:job>

   <bean id="cvsFileItemReader" class="org.springframework.batch.item.file.FlatFileItemReader">
      <property name="resource" value="classpath:cvs/input/report.csv" />
      <property name="lineMapper">
         <bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
            <property name="lineTokenizer">
               <bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
                  <property name="names" value="id,sales,qty,staffName,date" />
               </bean>
            </property>
            <property name="fieldSetMapper">
               <bean class="batch.model.ReportFieldSetMapper" />
            </property>
         </bean>
      </property>
   </bean>

   <bean id="myWriter" class="batch.model.MyWriter"/>
</beans>

package batch.model;

import batch.entity.Report;
import org.springframework.batch.item.ItemWriter;
import java.io.PrintWriter;
import java.util.List;

public class MyWriter implements ItemWriter<Report> {
    public void write(List<? extends Report> items) throws Exception {
        // note: opening the file here rewrites it on every chunk;
        // fine for a demo, but use FlatFileItemWriter in real jobs
        PrintWriter writer = new PrintWriter("c:\\Users\\lipeng\\_Main\\output.csv", "UTF-8");
        try {
            for (Report item : items) {
                writer.println(item.toString());
            }
        } finally {
            writer.close();
        }
    }
}

package batch.model;
import batch.entity.Report;
import org.springframework.batch.item.ItemProcessor;

public class CustomItemProcessor implements ItemProcessor<Report, Report> {
   public Report process(Report item) throws Exception {
      System.out.println("Processing..." + item);
      return item;
   }
}
After reading from the reader, Spring Batch uses a FieldSetMapper to transform the read data into the expected bean.

package batch.model;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import org.springframework.batch.item.file.mapping.FieldSetMapper;
import org.springframework.batch.item.file.transform.FieldSet;
import org.springframework.validation.BindException;
import batch.entity.Report;

public class ReportFieldSetMapper implements FieldSetMapper<Report> {
   private SimpleDateFormat dateFormat = new SimpleDateFormat("dd/MM/yyyy");

   public Report mapFieldSet(FieldSet fieldSet) throws BindException {
      Report report = new Report();
      // map id, sales, qty and staffName here with the matching Report setters
      String date = fieldSet.readString(4);
      try {
         // the date column arrives as dd/MM/yyyy text; parse it into the entity
         report.setDate(dateFormat.parse(date));
      } catch (ParseException e) {
         e.printStackTrace();
      }
      return report;
   }
}
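Since the mapper relies on SimpleDateFormat with the pattern dd/MM/yyyy, here is a quick standalone check of how that pattern parses a CSV date value (the date string is a made-up example, not from the project's data):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;

public class DateParseDemo {
    // same pattern as ReportFieldSetMapper
    public static Date parseReportDate(String raw) throws ParseException {
        SimpleDateFormat dateFormat = new SimpleDateFormat("dd/MM/yyyy");
        return dateFormat.parse(raw);
    }

    public static void main(String[] args) throws ParseException {
        Calendar c = Calendar.getInstance();
        c.setTime(parseReportDate("31/03/2015")); // hypothetical CSV value
        System.out.println(c.get(Calendar.YEAR) + "-" + (c.get(Calendar.MONTH) + 1)
                + "-" + c.get(Calendar.DAY_OF_MONTH)); // prints 2015-3-31
    }
}
```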
When a job runs, the run is recorded in memory or in the metadata tables, and running the job a second time with the same parameters is not allowed. We should use JobParametersBuilder to add a parameter that discriminates each run; for example, we can use the timestamp as the parameter.

package batch;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class App {
   public static void main(String[] args) {

      // same job XML as in the console example
      String[] springConfig = { "spring/batch/jobs/job-hello-world.xml" };
      ApplicationContext context =
            new ClassPathXmlApplicationContext(springConfig);

      JobLauncher jobLauncher = (JobLauncher) context.getBean("jobLauncher");
      Job job = (Job) context.getBean("helloWorldJob");

      try {
         JobExecution execution = jobLauncher.run(job, new JobParameters());
         System.out.println("Exit Status : " + execution.getStatus());
      } catch (Exception e) {
         e.printStackTrace();
      }
   }
}
source code: link

Explanation for pom.xml


<modelVersion>4.0.0</modelVersion> <!-- the POM model version the file complies with; normally 4.0.0 -->

<groupId></groupId>  <!-- the group the project belongs to, usually a reversed domain name -->
<artifactId>MavenTestApp</artifactId>  <!-- the project (artifact) name -->
<packaging>jar</packaging>  <!-- the form of the compiled result; normally jar, war or ear -->
<version>1.0-SNAPSHOT</version>  <!-- the project version; SNAPSHOT means it is still under development -->
<name>Maven Quick Start</name>   <!-- the display name of the project -->