jBPM Developer Guide

A Java developer's guide to the JBoss Business
Process Management framework

Mauricio "Salaboy" Salatino

BIRMINGHAM - MUMBAI

This material is copyright and is licensed for the sole use by ALESSANDRO CAROLLO on 18th December 2009
6393 south jamaica court, , englewood, , 80111

jBPM Developer Guide

Copyright © 2009 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval
system, or transmitted in any form or by any means, without the prior written
permission of the publisher, except in the case of brief quotations embedded in
critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy
of the information presented. However, the information contained in this book is
sold without warranty, either express or implied. Neither the author, nor Packt
Publishing, and its dealers and distributors will be held liable for any damages
caused or alleged to be caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the
companies and products mentioned in this book by the appropriate use of capitals.
However, Packt Publishing cannot guarantee the accuracy of this information.

First published: December 2009

Production Reference: 1101209

Published by Packt Publishing Ltd.
32 Lincoln Road
Olton
Birmingham, B27 6PA, UK.
ISBN 978-1-847195-68-5
www.packtpub.com

Cover Image by Filippo Sarti (filosarti@tiscali.it)


Credits

Author
Mauricio "Salaboy" Salatino

Reviewers
Jeronimo Ginzburg
Federico Weisse

Acquisition Editor
David Barnes

Development Editor
Darshana S. Shinde

Technical Editors
Ishita Dhabalia
Charumathi Sankaran

Copy Editor
Sanchari Mukherjee

Editorial Team Leader
Gagandeep Singh

Project Team Leader
Priya Mukherji

Project Coordinator
Leena Purkait

Proofreader
Andie Scothern

Indexer
Rekha Nair

Graphics
Nilesh R. Mohite

Production Coordinator
Shantanu Zagade

Cover Work
Shantanu Zagade


About the Author
Mauricio Salatino (a.k.a. Salaboy) has been part of the Java and open source
software world for more than six years now. He has worked with several technologies
(such as PHP, JSP, Java SE, Java ME, and Java EE) during these years and is now
focused on JBoss frameworks. He got involved with the JBoss Drools project about
a year and a half ago as a contributor, gaining a lot of experience with the open
source community and with multiple technologies such as JBoss jBPM, JBoss
Drools, Apache RIO, Apache Mina, and JBoss Application Server.
During 2008 he taught the official jBPM courses for Red Hat Argentina several
times, and he was involved in several JBoss jBPM and JBoss Drools implementations
in Argentina. He was also part of the Research and Development team of one of the
biggest healthcare providers in Argentina, where he trained people in the BPM and
Business Rules field.
Mauricio is currently involved in different open source projects being created
by the company he co-founded, called Plug Tree (www.plugtree.com), which will
be released in 2010. Plug Tree is an open source-based company that creates open
source projects and provides consultancy, training, and support for different
open source projects.
Mauricio is an Argentinian/Italian citizen based in Argentina. In his free time
he gives talks for the JBoss User Group Argentina (www.jbug.com.ar), which he
co-founded with a group of local friends. He also runs a personal blog about
JBoss, jBPM, and JBoss Drools that was originally targeted at Spanish-speaking
audiences but is now aimed at an international audience and receives more than
five hundred questions per year.
I would like to thank my family for always being there to support
my decisions and adventures, my new and old friends who have
helped me during this process, all the Packt Publishing staff who
have guided me during these months of hard work; and last but
not least, the open source community guys who are always
creating new, interesting, and exciting projects.


About the Reviewers
Jeronimo Ginzburg has a degree in Computer Science from the Universidad de
Buenos Aires, Argentina. He has more than 10 years of experience in designing
and implementing Java Enterprise applications. He currently works at Red Hat as
a Middleware Consultant, specializing in JBoss SOA-P (jBPM, Rules, ESB, and JBoss
AS). During the last four years, Jeronimo has been researching Web Engineering,
and he has co-written articles published in journals, in proceedings, and as a
book chapter.

Federico Weisse was born in Buenos Aires, Argentina. He has over 10 years
of experience in the IT industry. During his career he has worked with several
technologies and programming languages, such as C, C++, ASP, and PHP; different
relational databases (Oracle, SQL Server, DB2, PostgreSQL); platforms (AS/400,
Unix, Linux); and mainframe technologies.
In 2002, he adopted Java as his main technology. He has been working with it since
then, becoming a specialist in this field. A couple of years later, he got involved
with BPM systems.
Nowadays, he is a J2EE architect of a BPM system based on OSWorkflow in one of
the most important healthcare providers of Argentina.
I want to thank Mauricio for choosing me to review his book, which
I think has great value for the developers who want to get to know
BPM theory and jBPM technology.
I also want to mention the effort and dedication of all the developers
around the world who provide open source software of excellent
quality, making it accessible for anyone eager to get new IT
knowledge.


Dedicated to my loving future wife Mariela, and especially to my mother
who helps me with the language impedance.


Table of Contents

Preface
Chapter 1: Why Developers Need BPM?
    Business Process, why should I know about that?
        "A sequence of tasks that happen in a repeatable order"
        "executed by humans and/or systems"
        "to achieve a business goal"
    I know what BPs are, but what about the final "M" in BPM?
        BPM stages
        BPM stages in a real-life scenario
        BPM improvements
            Global understanding of our processes
            Agile interaction between systems, people, and teams
            Reduce paperwork
            Real time process information
            Process information analysis
            Statistics and measures about each execution
    BPM and system integration "history"
    Some buzzwords that we are going to hear when people talk about BPM
        Theoretical definitions
            Integration (system integration)
            Workflow
            Service Oriented Architecture (SOA)
            Orchestration
        Technological terms
            Workflow
            Enterprise Service Bus (ESB)
            BPEL (WS-BPEL)

    Business Process Management Systems (BPMS), my tool and your tool from now on
    BPM systems versus BPM suites
    Why we really need to know BPM and BPMS, and how do they change/impact on our daily life
    New approach
    Homework
    Summary

Chapter 2: jBPM for Developers
    Graph Oriented Programming
        Common development process
            Database model
            Business logic
            User interfaces
        Decoupling processes from our applications
        Graph Oriented Programming on top of OOP
    Implementing Graph Oriented Programming on top of the Java language (finally Java code!)
        Modeling nodes in the object-oriented world
        Modeling a transition in the object-oriented world
        Expanding our language
        Process Definition: a node container
    Implementing our process definition
        The Node concept in Java
        The Transition concept in Java
        The Definition concept in Java
        Testing our brand new classes
    Process execution
        Wait states versus automatic nodes
        Asynchronous System Interactions
        Human tasks
        Creating the execution concept in Java
    Homework
        Creating a simple language
        Nodes description
        Stage one
        Stage two
        Stage three
    Homework solution


    Quick start guide to building Maven projects
    Summary
Chapter 3: Setting Up Our Tools
    Background about the jBPM project
        JBoss Drools
        JBoss ESB
        JBoss jBPM
            Supported languages
            Other modules
    Tools and software
        Maven—why do I need it?
            Standard structure for all your projects
            Centralized project and dependencies description
            Maven installation
        Installing MySQL
        Eclipse IDE
            Downloading MySQL JConnector
            Install Maven support for Eclipse
        SVN client
    Starting with jBPM
        Getting jBPM
            From binary
            From source code
        jBPM structure
            Core module
            DB module
            Distribution module
            Enterprise module
            Example module
            Identity module
            Simulation module
            User Guide module
    Building real world applications
        Eclipse Plugin Project/GPD Introduction
            GPD Project structure
            Graphical Process Editor
            Outcome
        Maven project
    Homework
    Summary


Chapter 4: jPDL Language
    jPDL introduction
    jPDL structure
    Process structure
        GraphElement information and behavior
        NodeCollection methods
        ProcessDefinition properties
        Functional capabilities
            Constructing a process definition
            Adding custom behavior (actions)
    Nodes inside our processes
        ProcessDefinition parsing process
    Base node
        Information that we really need to know about each node
        Node lifecycle (events)
        Constructors
        Managing transitions/relationships with other nodes
        Runtime behavior
    StartState: starting our processes
    EndState: finishing our processes
    State: wait for an external event
    Decision: making automatic decisions
    Transitions: joining all my nodes
    Executing our processes
    Summary
Chapter 5: Getting Your Hands Dirty with jPDL
    How is this example structured?
    Key points that you need to remember
    Analyzing business requirements
        Business requirements
        Analyzing the proposed formal definition
    Refactoring our previously defined process
        Describing how the job position is requested
    Environment possibilities
        Standalone application with jBPM embedded
        Web application with jBPM dependency
    Running the recruiting example
        Running our process without using any services
        Normal flow test
    Summary


Chapter 6: Persistence
    Why do we need persistence?
    Disambiguate an old myth
    Framework/process interaction
        Process and database perspective
        Different tasks, different sessions
    Configuring the persistence service
        How is the framework configured at runtime?
    Configuring transactions
        User Managed Transactions (UMT)
        What changes if we decide to use CMT?
    Some Hibernate configurations that can help you
        Hibernate caching strategies
    Two examples and two scenarios
        Running the example in EJB3 mode
    Summary
Chapter 7: Human Tasks
    Introduction
    What is a task?
    Task management module
    Handling human tasks in jBPM
    Task node and task behavior
        TaskNode.java
        Task.java
        TaskInstance.java
    Task node example
        Business scenario
    Assigning humans to tasks
    Managing our tasks
    Real-life scenario
    Users and tasks interaction model
    Practical example
        Setting up the environment (in the Administrator Screen)
        It's time to work
            userScreen.jsp
            UserScreenController.java
            taskCheckDeviceForm.jsp
            TaskFormController.java
    Summary
Chapter 8: Persistence and Human Tasks in the Real World
    Adding persistence configuration
    Using our new configurations


    Safe points
    Advantages of persisting our process during wait states
    Persistence in the Recruiting Process example
    Human tasks in our Recruiting Process
    Modifying our process definitions
        Analyzing which nodes will change
        Modified process definitions
        Variable mappings
        Task assignments
            Assignments in the Recruiting Process example
    Summary
Chapter 9: Handling Information
    Handling information in jBPM
    Two simple approaches to handle information
    Handling process variables through the API
        ContextInstance proposed APIs
        ExecutionContext proposed APIs
    Telephone company example
        Storing primitive types as process variables
        How and where is all this contextual information stored?
        How are the process variables persisted?
    Understanding the process information
        Types of information
        Variables hierarchy
        Accessing variables
    Testing our PhoneLineProcess example
        Storing Hibernate entities variables
    Homework
    Summary
Chapter 10: Going Deeply into the Advanced Features of jPDL
    Why do we need more nodes?
    Fork/join nodes
        The fork node
        The join node
        Modeling behavior
    Super state node
        Phase-to-node interaction
        Node in a phase-to-phase interaction
        Node-to-node interaction between phases
        Complex situations with super state nodes
        Navigation
    Process state node


        Mapping strategies
    The e-mail node
    Advanced configurations in jPDL
        Starting a process instance with a human task
        Reusing actions, decisions, and assignment handlers
            Properties
            Bean
            Constructor
            Compatibility
    Summary
Chapter 11: Advanced Topics in Practice
    Breaking our recruiting process into phases
    Keeping our process goal focused with process state nodes
        What exactly does this change mean?
        Sharing information between processes
            Create WorkStation binding
    Asynchronous executions
        Synchronous way of executing things
        The asynchronous approach
            How does this asynchronous approach work?
        What happens if our server crashes?
        Configuring and starting the asynchronous JobExecutor service
        Different situations where asynchronous nodes can be placed
    Summary
Chapter 12: Going Enterprise
    jBPM configurations for Java EE environments
    JBoss Application Server data source configurations
    Taking advantage of the JTA capabilities in JBoss
    Enterprise components architecture
    The CommandServiceBean
    JobExecutor service
    JobExecutor service for Java EE environments
    Timers and reminders
        Mail service
        Calendar
        Timers
        How do the timers and reminders work?
    Summary
Index


Preface
You are reading this because you are starting to get interested in the open source
world. This book is especially for Java architects and developers with an open
mind who want to learn about an open source project. The fact that jBPM is an
open source project gives us a lot of advantages, but it also comes with a big
responsibility. We will talk about both: all the features that this great framework
offers us, and all the characteristics it has as an open source project.
If you are not a Java developer you might find this book a bit harder, but it will
still give you all the points you need to understand how the open source
community works.
I would like to take you through my own history of how I discovered jBPM, so that
you can identify your current situation with mine. Take this preface as an
introduction to a new field: integration. It doesn't matter what your programming
skills, experience, and preferences (user interfaces, code logic, low-level code,
simple applications, enterprise applications, and so on) are; if you are a
courageous developer, you will want to tackle all kinds of situations at least once.
With the myriad of web technologies these days, it's no surprise that the new
generation of developers starts out building web applications. I have been working
in the software development field for approximately six years now. I used to spend
most of my time creating, developing, and designing web-based applications. I have
also learned more "low-level" languages such as C and C++, but in the beginning I could
not make money with that. So, PHP and JSP were my first options. Although it was
challenging, I realized that I could not create bigger projects with my knowledge
of JSP and PHP. The main reason, in my opinion, is that bigger projects become
unmanageable when your web pages contain all of your application logic. At that
point I recognized that I needed to learn new paradigms in order to create bigger,
scalable applications. That is when I switched to Java Enterprise Edition
(version 1.4), which provides a componentized way to build applications that can
scale, run on clusters, and offer high availability and fault tolerance. But I was
not interested in configuration and environment settings; I just wanted
to develop applications. An important point in my career was when I started
getting bored of spending hours on HTML and CSS frontend details that I did not
care about. So, I looked at other frameworks like JSF, which provides a
componentized way to build UIs, and newer frameworks like JBoss Seam/Web Beans
(JSR-299) that are intimately related to the EJB3 specification, but once again
I had to deal with HTML and CSS details for end users. I think that the fact that
I used to get bored with HTML and CSS is one of the biggest reasons why I got
interested in integration frameworks. When I use the word integration, I mean
making heterogeneous applications work together. Most of the time when you are
doing integrations, the user interfaces are already done and you only need to deal
with backends and communication. That was my first impression, but then I
discovered a new world behind these frameworks. At this point two things got
my attention: the open source community and the theoretical background of the
framework. These two things changed the way I think and the way I adapt to a new
open source framework. This book reflects exactly that. First we'll see how we can
adopt all the theoretical aspects included in the framework, and then move on to
how we can see all these concepts in the framework's code. This is extremely
important, because we will understand how the framework is built, the project's
direction, and, more importantly, how we can contribute to the project.
I have been involved with the open source community for two years now, working
with a lot of open source frameworks and standards that evolve every day. When
I got interested in jBPM, I discovered all the community work that is being done
to evolve this framework. I wanted to be part of this evolution and of this great
community that uses and creates open source frameworks. That is one of the main
reasons why I created a blog (http://salaboy.wordpress.com) and started writing
about jBPM. I also co-founded the JBoss User Group in Argentina
(http://www.jbug.com.ar) and now Plug Tree (http://www.plugtree.com), an
open source-based company. With these three ventures I encourage developers to
take an interest in new frameworks, new technologies, and, most importantly,
the community.

What this book covers

Chapter 1, Why Developers Need BPM? introduces you to the main theoretical
concepts of BPM. These concepts will lead you through the rest of the book. You
will get an idea of how all the concepts are implemented inside the jBPM framework,
in order to understand how it behaves in real project implementations.
Chapter 2, jBPM for Developers, introduces the jBPM framework in a
developer-oriented style. It discusses the project's main components and
gets you started with the code distribution.

Chapter 3, Setting Up Our Tools, teaches you to set up all the tools that you will
use during this book. Basic tools such as the Java Development Kit and the Eclipse
IDE will be discussed. It also provides a brief introduction to Maven2 to help you
understand how to build your projects and the framework itself. At the end of this
chapter you will see how to create simple applications that use the
jBPM framework.
Chapter 4, jPDL Language, introduces the formal language used to describe our
business processes. It gives you a deep insight into how this language is structured
and how the framework internally behaves when one of these formal definitions
is used.
Chapter 5, Getting Your Hands Dirty with jPDL, gets you started with working on
real-life projects. You will be able to create your first application that uses jBPM
and define simple processes, using the basic words in the jPDL language.
Chapter 6, Persistence, sheds light on the persistence service inside the jBPM
framework, one of the most important services to understand in order to create
real-life implementations with this framework. The persistence service is used to
support the execution of long-running processes, which represent 95% of
the situations.
Chapter 7, Human Tasks, describes the human interactions inside business processes,
which are very important because humans have specific requirements to interact
with systems and you need to understand how all this works inside the framework.
Chapter 8, Persistence and Human Tasks in the Real World, mainly covers configurations
to be done for real environments where you have long-running processes that
contain human interactions. If you think about it, almost all business processes
will have these requirements, so this is extremely important.
Chapter 9, Handling Information, helps you to understand how to handle all the
process information needed by human interactions inside the framework, as the
information from human interactions is vital to completing the activities inside
our business processes.
Chapter 10, Going Deeply into the Advanced Features of jPDL, analyzes the advanced
features of the jPDL language. This will help you improve your flexibility to model
and design business processes, covering more complex scenarios that require a more
advanced mechanism to reflect how the activities are done in real life.
Chapter 11, Advanced Topics in Practice, provides us with practical examples on the
topics discussed in the previous chapters. This will help you to understand how all
the advanced features can be used in real projects.


Chapter 12, Going Enterprise, introduces the main features provided by jBPM to run
in enterprise environments. This is very important when your projects are planned
for a large number of concurrent users.

Who this book is for

This book is mainly targeted at Java developers and Java architects who need
to have an in-depth understanding of how this framework (jBPM) behaves in real-life
implementations. The book assumes that you know the Java language well and also
know some of the widely-used frameworks such as Hibernate and Log4J. You should
also know the basics of relational databases and the Eclipse IDE. A brief introduction
to Maven2 is included in this book but prior experience might be needed for more
advanced usages.

Conventions

In this book, you will find a number of styles of text that distinguish between
different kinds of information. Here are some examples of these styles, and an
explanation of their meaning.
Code words in text are shown as follows: "As you can see, inside the <task-node>
tags different tasks (<task> tag) can be defined."
A block of code is set as follows:
public class MyAssignmentHandler implements AssignmentHandler {
    public void assign(Assignable assignable,
                       ExecutionContext executionContext) throws Exception {
        // Based on some policy, decides the actor that needs to be
        // assigned to this task instance
        assignable.setActorId("some actor id");
    }
}

When we wish to draw your attention to a particular part of a code block, the
relevant lines or items are set in bold:


One
Initial Interview




...


Any command-line input or output is written as follows:
mvn clean install -Dmaven.test.skip

New terms and important words are shown in bold. Words that you see on
the screen, in menus or dialog boxes for example, appear in the text like this:
"If we take a look at the Source tab, we can see the generated jPDL source code."
Warnings or important notes appear in a box like this.

Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about
this book—what you liked or may have disliked. Reader feedback is important for
us to develop titles that you really get the most out of.
To send us general feedback, simply send an email to feedback@packtpub.com,
and mention the book title via the subject of your message.
If there is a book that you need and would like to see us publish, please send
us a note in the SUGGEST A TITLE form on www.packtpub.com or e-mail
suggest@packtpub.com.
If there is a topic that you have expertise in and you are interested in either writing
or contributing to a book on, see our author guide on www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to
help you to get the most from your purchase.


Downloading the example code for the book
Visit http://www.packtpub.com/files/code/5685_Code.zip to
directly download the example code.

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes
do happen. If you find a mistake in one of our books—maybe a mistake in the text or
the code—we would be grateful if you would report this to us. By doing so, you can
save other readers from frustration, and help us to improve subsequent versions of this
book. If you find any errata, please report them by visiting
http://www.packtpub.com/support, selecting your book, clicking on the
let us know link, and entering the
details of your errata. Once your errata are verified, your submission will be accepted
and the errata added to any list of existing errata. Any existing errata can be viewed by
selecting your title from http://www.packtpub.com/support.

Piracy

Piracy of copyright material on the Internet is an ongoing problem across all media.
At Packt, we take the protection of our copyright and licenses very seriously. If you
come across any illegal copies of our works, in any form, on the Internet, please
provide us with the location address or web site name immediately so that we
can pursue a remedy.
Please contact us at copyright@packtpub.com with a link to the suspected
pirated material.
We appreciate your help in protecting our authors, and our ability to bring you
valuable content.

Questions

You can contact us at questions@packtpub.com if you are having a problem with
any aspect of the book, and we will do our best to address it.


Why Developers Need BPM?
I will start this book with a sentence that I repeat every time I talk about jBPM:
"jBPM is a framework, keep it in mind". That's it; this is all that developers and
architects want to know to be happy, and it keeps them excited during the talks. For
this reason, the aim of the book is to give all developers an in-depth understanding
of this excellent, widely-used, and mature framework.
In this chapter we will cover the following topics:
•

Business process definition and conceptual background

•

Business process management discipline and the stages inside it

•

Business process management systems

To give you a brief introduction to this chapter, we are going to explain why
developers need to know about BPM and when they should use it. Before reaching
this important conclusion, we are going to analyze some new concepts, such as
business process, business process management discipline, and business process
management systems, because it's important that developers handle the specific
terminology well. Bearing these new concepts in mind, you will be able to start
analyzing how your company handles everyday work, so you can rediscover your
environment with a fresh perspective.
This chapter deals with vital conceptual topics that you need to know in order
to start off on the right foot. So, do not get disappointed if you feel that this
is all theoretical stuff. Quite the opposite: with all this conceptual
introduction, you will know, even before using the framework, why and how it is
implemented, as well as the main concepts used to build it internally. I strongly
recommend reading this chapter even if you don't know anything about, or don't
feel confident with, the BPM discipline and its related concepts. If you are an
experienced BPM implementer, this chapter will help you to teach developers the
BPM concepts that they need in order to go ahead with the projects that will use
it. Also, if you are familiar with other BPM tools, this chapter will help you to
This material is copyright and is licensed for the sole use by ALESSANDRO CAROLLO on 18th December 2009
6393 south jamaica court, , englewood, , 80111

Why Developers Need BPM?

map your vision about BPM with the vision proposed by the jBPM team. Because it's
a vast theoretical topic, it is important for you to know the terminology adopted by
each project reducing and standardizing the vocabulary.
The moment you grasp the concepts that we are going to see in this chapter, you will
get a strange feeling telling you: "Go ahead, you know what you are doing". So you
can take a deep breath and brace yourself to get your hands on planning actions
using the concepts discussed here, because these concepts will guide you through
to the end of this book.
First things first: we will start with the business process definition, which is a main
concept that you will find in everyday situations. Please take your time to discuss
the following ideas with your colleagues to gain insight into these important concepts.

Business Process, why should I know about that?

As you can see, jBPM has Business Process (BP) right in the middle of its name, so
this must be something very important. Seriously though, the Business Process is
the first key concept for understanding what we are really going to do with the
framework. You must understand why and how you describe these Business
Processes, and discover the real application of this concept.
A common definition of Business Process is: Business Process is a sequence of tasks that
happen in a repeatable order, executed by humans and/or systems to achieve a business goal.
To understand this definition we need to split it into three pieces and contrast each
with a real example.

"A sequence of tasks that happen in a
repeatable order"
This part of the definition shows us two important points:

1. First of all, the word "task" sounds very abstract. For learning purposes, we
can say that a task is some kind of activity in the company, atomic in its
context, which contributes towards obtaining/completing some business
goal. In the real world, a task (or an activity, if you prefer) could be:
°	Signing a contract
°	Hiring a new employee
°	Reviewing a document
°	Paying a bill
°	Calculating a discount
°	Backing up a file
°	Filling in a form

As you can see, these examples are concrete, and the rule of thumb is
that these tasks should be described with an action (the verb in the sentence,
Reviewing for example) and a noun (document in this case) that represents
where the action is applied.
Developers and architects should avoid thinking that "call myMethod()"
would be a good example of a task in a Business Process, because it is not!
"call myMethod()" does not have anything to do with the business field.
Also remember that this is a conceptual definition and is not related to
any particular technology, language, or system.

The second important word that we notice is "sequence", which demands a logical
order in which the actions are executed. In a real scenario, such a sequence could
look like: Buyer buys an item -> Buyer pays the bill -> Dispatch the order to the buyer. An
important thing to note here is that this sequence does not change over a short period
of time. This means that we can recognize a pattern of work and an interaction that
always occurs in the same order in our company to achieve the same business goals.
This order could change, but only if our business goals change, or the way that we
accomplish them changes. You can also see that this sequence is not carried out by
one person; there is some kind of interaction/collaboration among a group of people
to complete all the activities.
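To make the idea of a fixed, repeatable sequence concrete in developer terms, here is a deliberately tiny Java sketch (purely illustrative, not jBPM code; the class and task names are invented) that models the buyer example as an ordered list of tasks executed the same way on every run:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: the buyer example as a fixed, repeatable
// sequence of tasks. Every execution performs the same tasks in the
// same order; the order only changes if the business goal changes.
public class BuyerProcess {

    // The process definition: order is data, defined once.
    static final List<String> SEQUENCE = List.of(
            "Buyer buys an item",
            "Buyer pays the bill",
            "Dispatch the order to the buyer");

    // Each run produces the same ordered trace of work.
    public static List<String> run() {
        List<String> trace = new ArrayList<>();
        for (String task : SEQUENCE) {
            trace.add(task); // a real engine would perform the activity here
        }
        return trace;
    }

    public static void main(String[] args) {
        run().forEach(System.out::println);
    }
}
```

The point is only that the sequence lives in the definition: changing the order means changing the process definition, not rewriting the code that executes it.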

"executed by humans and/or systems"

Here we will see who performs these activities. But you probably have a question:
why does the definition make this distinction between humans and systems? This is
because humans behave differently from systems. Human beings are slow (including
me) and machines are fast, so we need to handle this interaction and coordination
carefully. Why are humans slow? From the application's perspective, when a task is
assigned to a human being, let's say Paul, the task must wait for Paul to be ready to
do the job. If Paul is on vacation for a month, the task will have to wait a month to
begin. Systems (also called automatic procedures), on the other hand, just execute
the action as fast as they can, or whenever the action is required. These two opposite
behaviors are one of the principal influences on the design of the framework. For
this reason, we are going to see how these behaviors are implemented inside the
framework in the following chapters.
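The contrast between the two behaviors can be sketched in plain Java. This is not jBPM's API (the class, record, and method names are invented for illustration): system tasks execute immediately, while a human task leaves the process in a wait state until someone completes it.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch (not jBPM's API) of the two behaviors described above:
// system tasks execute immediately, a human task makes the process wait.
public class HumanVsSystem {

    record Task(String name, boolean human) {}

    private final Deque<Task> tasks = new ArrayDeque<>();
    private Task waitingOn; // non-null while the process waits for a person

    public HumanVsSystem(Task... definition) {
        for (Task t : definition) tasks.add(t);
    }

    // Advance through the process: system tasks run as fast as they can,
    // but execution stops at the first human task.
    public void signal() {
        while (waitingOn == null && !tasks.isEmpty()) {
            Task t = tasks.poll();
            if (t.human()) {
                waitingOn = t; // wait state: maybe minutes, maybe a month
            } else {
                System.out.println("executed automatically: " + t.name());
            }
        }
    }

    // Called when the person (for example, Paul) finally does the job.
    public void complete(String taskName) {
        if (waitingOn != null && waitingOn.name().equals(taskName)) {
            waitingOn = null;
            signal(); // resume the automatic part of the process
        }
    }

    public String waitingOn() {
        return waitingOn == null ? null : waitingOn.name();
    }

    public static void main(String[] args) {
        HumanVsSystem p = new HumanVsSystem(
                new Task("Calculate discount", false),
                new Task("Review document", true), // Paul's task
                new Task("Back up file", false));
        p.signal();
        System.out.println("waiting on: " + p.waitingOn());
        p.complete("Review document");
        System.out.println("waiting on: " + p.waitingOn());
    }
}
```

Synchronizing these two behaviors, waiting and immediate execution, is exactly what a process engine does for us.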
Another important thing to keep in mind about these two behaviors is that "waiting
for humans to respond" and "executing as fast as possible" (for systems) are the
behaviors that we will need to synchronize. In real-life situations, we will find them
in a random order (somewhat like: wait, execute as fast as you can, wait, wait,
execute as fast as you can, and so on). Let's clarify these points with a practical,
real-world example. Imagine that we are in a company called Recycling Things Co.
One branch of the company is in charge of recycling used paper, processing it, and
storing it until someone buys it. This process probably contains the following
activities/tasks:
1. Receiving X ton(s) of paper in the "Just received" warehouse: here, we
probably have a guy (the "Just received" guy) who, each time something is
received, fills in a form with specifics such as the type of paper, the weight,
and other technical details. When he finishes the form, he receives the paper
pile and stores it in the warehouse. Finally, he sends the form to the main
offices of Recycling Things Co.
2. The form filled in at the "Just received" warehouse arrives at the main office
of Recycling Things Co., and now we know that we can send X ton(s) of paper
to the recycling station. So, we send the X ton(s) of paper from the "Just
received" warehouse to the recycling station, probably by just making a call
to the warehouse or filling in another form.
3. When the pile of paper arrives at the recycling station, an enormous machine
starts the recycling process. Of course, we must wait until this big machine
finishes its job.
4. The moment the machine finishes, the guy in charge of controlling the
outcome of this machine (the Recycling Station guy) checks the status of the
just-recycled paper and, depending on the quality of the outcome, decides
whether to reinsert the paper into the machine or to move the finished paper
to the "Just finished" warehouse. Just after that, he fills in a form to report
to the main office of Recycling Things Co. that the X ton(s) of paper were
successfully recycled, also including in the form the number of iterations
needed to reach the required level of quality. The quality level of the
recycled paper will probably be included on the form too, because it is
valuable information.
5. When the guy from the "Just finished" warehouse receives the recycled
X ton(s) of paper, he also sends a form to the Recycling Things Co. main
offices to inform them that X ton(s) of paper are ready to sell.
To get a clear understanding of this example, we can draw it in some
non-technical representation that shows us all the steps/activities in our
just-described process. One of the ideas of this graph is that the client, in this
case the Recycling Things Co. manager, can understand and validate that we are
on the right track regarding what is happening in this branch of the company.
[Figure: a swimlane flowchart of the recycling process. In the "Just Received"
warehouse lane, used paper arrives, is stored, and a form is filled in and sent to the
Main Offices. In the Main Offices lane, the form arrives and a call is made to the
Recycling Station. In the Recycling Station lane, the paper is moved in and recycled
in the big machine; a "Quality Check Pass?" decision loops back to the machine on
No, and on Yes a form is filled in and sent to the Main Offices, which then call the
"Just Finished" warehouse. In the "Just Finished" warehouse lane, the paper is
moved and a final form is sent to the Main Offices.]

As you can see, the process looks simple. One important thing to notice is that
these chained steps/activities/tasks are described from a higher-level perspective
(the manager's level), not from the employee's perspective. It is also necessary to
clarify the employee's perspective and add that detail to each activity. We need a
clear understanding of the process as a whole, the process goal, and the details
of each activity.
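Such a process can eventually be described in a formal, executable language. As a preview, here is a rough sketch of how it might look in jPDL, the XML process definition language used by jBPM (covered in detail in later chapters). The node and task names are ours, taken from the figure; treat this as an illustration of the shape of a definition, not a complete, runnable one.

```xml
<process-definition name="recycle-paper">

  <start-state name="used paper arrives">
    <transition to="store used paper"/>
  </start-state>

  <!-- Human task: the "Just received" guy fills in and sends the form -->
  <task-node name="store used paper">
    <task name="fill and send reception form"/>
    <transition to="recycle paper in big machine"/>
  </task-node>

  <!-- Wait state: the process waits until the big machine finishes -->
  <state name="recycle paper in big machine">
    <transition to="quality check"/>
  </state>

  <!-- Human task: the Recycling Station guy decides pass or retry -->
  <task-node name="quality check">
    <task name="check recycled paper quality"/>
    <transition name="retry" to="recycle paper in big machine"/>
    <transition name="pass" to="store finished paper"/>
  </task-node>

  <task-node name="store finished paper">
    <task name="fill and send final form"/>
    <transition to="paper ready to sell"/>
  </task-node>

  <end-state name="paper ready to sell"/>

</process-definition>
```

Notice how the human tasks and the machine wait state map directly onto the "humans and/or systems" distinction discussed earlier.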
If you are trying to discover processes in your company, first ask at the
manager level; managers should have a high-level vision like the one in
the Recycling Things Co. example. Then you should find out what is going
on in everyday work. For that, ask every person involved in the process
about the activities that they are in charge of. In a lot of companies,
managers have a different vision of what is going on every day. So ask
both sides, and remember that employees have the real process in their
minds, but they don't have full visibility of the whole process. Pay close
attention to this. Also remember that this kind of process discovery and
modeling is a business analyst's job.

"to achieve a business goal"

This is the main goal of our jobs; without it we have done nothing. But be careful:
in most cases, inexperienced developers trying to make use of their new framework
forget this part of the definition. Please don't lose your focus, and remember why
you are trying to model and include your processes in your application. In our
previous example, Recycling Things Co., the business goal of the process (Recycle
paper) is to have all the recycled paper ready as soon as possible in order to sell it.
When the company sells this paper, probably with another process (Sell recycled
paper), the company will gain some important things: money, of course, but also a
standardized process, process formalization, process statistics, and so on. So, stay
focused on relevant processes that mean something to the company's main goal.
Try not to model processes everywhere just because you can. In other words, let the
tools help you; don't use a tool merely because you have it, because it is in these
situations that all the common mistakes happen.

I know what BPs are, but what about the final "M" in BPM?

If we have a lot of business processes, we will need to manage them over time. This
means that if we have many process definitions, and we have executed these
definitions, we will probably want some kind of administration that lets us store
these definitions as well as all the information about each execution. You may also
want to keep track of all the modifications and execution data throughout. This is
really necessary, because processes will surely change and we need to adapt to the
new requirements of a business that evolves every day. That is why Business
Process Management (BPM) emerges as a discipline to analyze, improve, automate,
and maintain our business processes. This discipline proposes four stages that
iteratively let us keep our business processes in perfect synchronization with
business reality.

BPM stages

Now we are going to analyze the stages that this discipline proposes:
1. Finding/discovering the real-life process: In this stage, business analysts try
to find the business processes in the company. Depending on the
methodology the analysts use to find a process, we will probably get some
description of the activities in the process. Most of the time, these processes
can be found by asking company managers and employees about the goals of
the various processes and the activities needed to fulfill them. This will also
give us a list of all the business roles involved in each process.
2. Designing/modeling the process: Starting from the description that the
business analysts produced in the previous stage, this stage tries to represent
that definition in some formal representation/language. Some in-vogue
languages for doing that are BPMN (Business Process Modeling Notation)
and XPDL (XML Process Definition Language); these languages are focused
on an easy graph representation of the processes and an intuitive, quick
understanding of what is happening in each of them. The goal of this stage
is that all the people who are in contact with this formal representation
understand the process well and know how the company achieves the
process business goal.
3. Executing the process: This is one of the most interesting stages of BPM (at
least for us), because here our process definitions come to life and run,
guiding the work that the company does every day. With this guidance, the
company gains all the advantages discussed in the next section. The goal of
this stage is to improve the process execution times and performance, and to
make the communication between people and systems smoother in order to
achieve the business goal.
4. Improving the process: At this point, with processes already executed, we
try to analyze them and find improvements. This is achieved by analyzing
the execution data and trying to reduce the gap between the formal
definition of the process and the way our company actually works. We also
try to find and reduce possible bottlenecks, and analyze whether some
activities can be done simultaneously, whether we have unnecessary
activities, or whether we need to add new activities to speed up our process.


As you can imagine, all these stages are repeated iteratively over time. Take a look
at the following figure, which shows all the BPM stages and also includes the most
common artifacts generated in each step.
[Figure: the BPM cycle. Discovering Process (business analysts working with
business users, managers, and employees) produces a textual process description;
Modeling Process (business analysts and developers) produces a formal process
description; Executing Process (developers, managers, and business users)
produces statistics and execution data; Improving Process (business analysts)
feeds back into discovery, closing the cycle.]


BPM stages in a real-life scenario

One entire cycle through all these stages is one step towards the well-defined
processes that will guide our company every day.
If we take the previous example of Recycling Things Co., we can say that BPM works
as follows:
1. A business analyst is hired to analyze the branch that is in charge of recycling
paper. He observes what happens in the branch and tries to describe its
activities in a textual description, very similar to the process we described in
the example. Here we see the first stage: discovering the process. This stage
could also start from a previous definition of the process; in that case, the
business analyst will need to update this definition with what is happening
now.
2. The business analyst translates the previously described process, with the
help of a developer, into some formal language. At this point, a validation
with the client (in this case the manager of Recycling Things Co.) would be
advisable.
3. Once we have the formal definition of the process validated, the developers
will analyze the environmental requirements of the process. Moreover, all
the technical details that the process needs to run will be added (this is our
job, the developer's job). When all of these details are set up, the process is
ready to run and guide the business users in their everyday work.
4. When the processes are running, the business analyst and developers need to
work together to analyze how each process is working, trying to improve the
process definition and all the settings to make it perform better.
At the end of stage four, another cycle begins: improving, adapting, and
rediscovering all the processes in the company, continuously.
In the next section we are going to discuss all the advantages that this
discipline gives us.
This description of BPM is incomplete, but for developers who want to use jBPM, it
is fine. If you are interested in learning more concepts and theoretical background
about this discipline, there is plenty of interesting literature out there. You just need
to search for BPM on the Internet and many pages and articles will appear. For example,
take a look at http://en.wikipedia.org/wiki/Business_process_management
and http://www.bpminstitute.org/index.php?id=112.


BPM improvements

BPM, like any other discipline, gives us a large number of practical advantages
that we want to know about before we adopt it. Here we are going to discuss some
of the most important advantages and improvements that we can gain if we adopt
BPM, and how they can benefit our company.

Global understanding of our processes

When we find a process in our company, we discuss it with the manager and the
business analysts. This process can now be formalized in some formal language.
(Formal means it has no ambiguous terms, and it says the same thing to everybody
who reads it.) If we achieve that, we gain two main things:
•	Now we know our process. This is important and no minor thing. Our
process is no longer something that we have a vague idea about; we now
know exactly what our process goal is and which business roles we require
to achieve this goal. This formalization and visibility is the first step to
improving our existing process, because now we can see the possible
points of failure and find the best solution to fix them.
•	All our managers and employees can see the process now. This is very
helpful in two areas:
°	New employees can be easily trained, because the process
will guide them through the activities that correspond to the
new employee's role.
°	Managers can make more accurate decisions, knowing exactly
what is going on in their processes. They have gained visibility
of the roles involved in each process and the number of tasks
performed by each role in a specific process.

Agile interaction between systems, people, and teams

When our process definitions are executed, all the employees will be guided through
their tasks, making the system integrations transparent to them and improving
people's communication. For example, say that in a post office we have a task called
Receive letter. The person at the front desk receives the letter and fills in all the
information about the letter's destination address on a form. When delivery time
arrives, some other task (say, Deliver letter) will use all this information. In this case,
the process itself will be in charge of moving this data from one activity to another,
taking that responsibility away from both users. The data will go from one task to
another, making the information available to everyone who needs it.
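As a rough sketch of this idea in code (illustrative Java only; the class, method, and variable names are invented, and a real engine would keep these values as process variables in its own store), the process instance, not the people, owns the data:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (not jBPM's API) of the post office example above:
// the process carries data from one activity to the next.
public class LetterProcess {

    // Process variables: information attached to one execution.
    private final Map<String, Object> variables = new HashMap<>();

    // "Receive letter" task: the front-desk person stores the data once.
    public void receiveLetter(String destinationAddress) {
        variables.put("destinationAddress", destinationAddress);
    }

    // "Deliver letter" task: a different person, later, reads the same
    // data without ever asking the front desk for it.
    public String deliverLetter() {
        return "delivering to " + variables.get("destinationAddress");
    }

    public static void main(String[] args) {
        LetterProcess p = new LetterProcess();
        p.receiveLetter("32 Lincoln Road, Birmingham");
        System.out.println(p.deliverLetter());
    }
}
```

The design choice to notice: neither task knows about the other; they only share the process instance that moves the information between them.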

Reduce paperwork

In all human tasks (tasks that need interaction with people), the most common
behaviors will be:
•	Read/insert/modify information: When people interact with activities of
the process, they commonly introduce or read information that will be used
in the following tasks by some other role. In our recycling paper example,
each form filled in can be considered information that belongs to the process.
So, we can reduce all the paperwork and translate it into digital forms, which
gives us two interesting advantages:
°	Reduction of paper storage in our company: There will be
no need to print forms and store them for future audits
or analysis.
°	Reduction of the time spent: The time spent moving the
information from one place to another.
This results in saving money and not having to wait for forms that
may not arrive or could be lost on their way.
•	Make a decision: Choose whether something is OK or not and take some
special path in the process (we will talk about different paths in our
processes later). In our recycling example, when the Recycling Station guy
checks the quality of the just-recycled paper, he needs to decide whether the
paper was recycled properly or whether it needs to be retried. Here, the
quality of the paper and the number of retries can be maintained as process
information, and do not need to be written down on paper or a form. The
machine can automatically inform our process about all this data. In these
cases, the advantage that BPM gives us is that we can make automatic
decisions based on the information that the process passes from one activity
to the next.
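A minimal sketch of such an automatic decision (illustrative Java, not jBPM's API; the class name, quality scale, and threshold value are all invented) could look like this:

```java
// Illustrative only: the machine reports quality as process data, and the
// process decides by itself which path to take. The threshold is invented.
public class QualityDecision {

    static final double MINIMUM_QUALITY = 0.8; // hypothetical threshold

    // Returns the name of the outgoing path the process should take.
    public static String decide(double measuredQuality) {
        return measuredQuality >= MINIMUM_QUALITY
                ? "move to just finished warehouse"
                : "reinsert into machine";
    }

    // Retry until quality passes; count the iterations, which is exactly
    // the kind of information the paper forms used to carry.
    public static int iterationsNeeded(double... measurements) {
        int iterations = 0;
        for (double q : measurements) {
            iterations++;
            if (decide(q).equals("move to just finished warehouse")) break;
        }
        return iterations;
    }

    public static void main(String[] args) {
        System.out.println(decide(0.9));
        System.out.println(iterationsNeeded(0.5, 0.7, 0.95));
    }
}
```

No human writes anything down: the decision and the retry count come straight from the data the machine feeds into the process.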

Real-time process information

In every process execution and at any moment, our managers can see in which
activity the process is currently waiting and who must complete it. With this
valuable information about the status of all the processes, a manager will know
whether the company is ready to make commitments about new projects. You can
also bring in other methodologies, such as BAM (Business Activity Monitoring), to
make a more in-depth and wider analysis of how your company's processes
are working.


Process information analysis

With the formal definition of our processes, we can start improving the way
information is treated in each of them. We can analyze whether we are asking for
unnecessary data, or whether we need more data to improve the performance of
our processes.

Statistics and measures about each execution

With audit logs of each execution, we can find out where the bottlenecks are, who is
a very efficient worker, and who spends too much time on an assigned task without
completing it.
As you can see, BPM has a lot of advantages, but we need to understand all the
concepts behind it in order to implement it well.
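As a sketch of the kind of measure involved (using an invented log format, not jBPM's real audit tables), totalling the hours logged per task points at bottleneck candidates:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch: from a log of (task, worker, hours) entries we can
// compute where the time goes. The log format here is invented.
public class AuditStats {

    record LogEntry(String task, String worker, double hours) {}

    // Total hours per task: the slowest tasks are bottleneck candidates.
    public static Map<String, Double> hoursPerTask(List<LogEntry> log) {
        Map<String, Double> totals = new HashMap<>();
        for (LogEntry e : log) {
            totals.merge(e.task(), e.hours(), Double::sum);
        }
        return totals;
    }

    public static void main(String[] args) {
        List<LogEntry> log = List.of(
                new LogEntry("quality check", "John", 2.0),
                new LogEntry("fill reception form", "Paul", 0.5),
                new LogEntry("quality check", "John", 3.5));
        System.out.println(hoursPerTask(log));
    }
}
```

The same grouping, by worker instead of by task, would answer the "who is efficient" question from the paragraph above.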

BPM and system integration "history"

We as developers see BPM as closely related to system integration, so when we see
BPM, in our heads we automatically merge the following concepts and disciplines:
•	Workflows: This branch was conceived for people-to-people interactions,
born in the mid 70s.
•	Business process improvements: These suggest methodologies to increase
the overall performance of the processes inside a company, born in the
mid 80s.
•	System integration: This branch is focused on achieving fluid
system-to-system interactions. This concept is newer than the other
two and is more technical.

This mix gives us important advantages and flexibility, letting us manage all the
interactions and information inside our company in a simple way. But this mix also
brings a lot of confusion about terminology in the market.
When BPM began to appear in the market, vendors and customers each had their
own definition of what BPM meant. Some customers just wanted BPM; it didn't
really matter what BPM really was, but they wanted it. Vendors, in turn, had
different types of technologies, which they claimed to be BPM just in order to
sell them.
When all this confusion lessened a bit, and not everyone wanted to buy or sell BPM
tools, the big boom of SOA (Service Oriented Architecture) began. SOA was born
to bring us new architectural paradigms and to give us more flexibility at design and
integration time. The main idea is that with SOA, each business unit will have a set of
services that can be easily integrated and have fluid interactions with the services of
other business units, and also with those of business partners.

At this point, the confusion about overloaded terms arose again. With the addition
of SOA, new languages came into play; one of the most fashionable was BPEL
(Business Process Execution Language, also known as WS-BPEL, Web Services
BPEL). BPEL is basically an integration language that allows us to communicate
with heterogeneous systems that all talk (communicate) using web services
standards. All of this is done in a workflow-oriented way (but only for systems,
not for people), so we can describe systems' interactions with a graph that shows
us the sequence of system calls.
ESB (Enterprise Service Bus) products also started gaining a lot of popularity
among vendors and customers. This kind of product proposes a bus that lets us
connect all our services, which speak different languages and protocols, and allows
them to communicate with each other.
But as you can see, SOA has two faces: one is technological, and the other
corresponds to architectural design patterns. This second face contributes a lot to
the enterprise architectural choices being made today by big companies around
the world.
In the next section we are going to see some brief definitions of all these
technologies that surround BPM, as they always bring confusion to all of us. If we
understand the focus of each technology, we will be able to think up and implement
software solutions that are flexible enough and have the right concepts behind
them. Do not confuse technical terms with theoretical definitions.

Some buzzwords that we are going to hear when people talk about BPM

In this short section we are going to discuss words that sometimes confuse us and
that we sometimes misuse as synonyms. This section aims to clarify some
ambiguous technical and theoretical terms that surround BPM. These terms will be
distinguished as theoretical definitions and technological terms. Sometimes you
will notice that different roles have different perspectives on the same term.

Theoretical definitions

These theoretical definitions try to clarify concepts in a technology-agnostic way,
so that we understand the cornerstones behind terms that are often used by
technical people. Feel free to consult other bibliographies about these terms to get
all the background that you need.


Integration (system integration)

We frequently hear about integration. BPM is about integration, but what exactly
should we understand when we hear that?
When someone says "I want to integrate my systems", we probably understand
that this person wants all of his or her company's systems to talk (communicate) to
each other in order to work together. That is the most common understanding,
but we also need to understand that this integration carries the following implicit
requisites out of the box:
•	Flexibility: The integration solution needs to be flexible enough to allow us
any kind of interaction
•	Extensibility: In the future, we need to be able to add other systems to the
newly-integrated solution
•	Maintainability: If changes emerge, the solution should let us adapt the
integration to these changes, and to future changes as well
•	Scalability: The solution should allow our applications to
grow transparently

Workflow

This is one of the most overloaded words in the market. When we hear
conversations about workflows, in most cases we are talking about situations
where only people are involved. Most workflows are related to documents that are
moved through business units inside the company, where these business units
modify the documents to achieve some business goal. Currently, in many
companies, the terms BPM and workflow are used as synonyms, but in this book
we are trying to make the distinction between them clear.
Here, when we talk about workflows, we refer to a set of steps within a specific
application domain. BPM is a more generic and extended set of tools, which lets us
represent situations that integrate heterogeneous systems and people's activities,
with fine-grained control.
However, workflows and BPM share the same theoretical nature; try to see
workflows as a domain-specific set of activities, and BPM as a set of tools to
integrate and communicate all the work that is being done in the company.


Service Oriented Architecture (SOA)

Here we will discuss the theoretical side of the term SOA. When people talk about
SOA, most of the time they are talking about specific architectural design patterns
that let our applications be designed as services communicating with each other.
In most cases, this means that our applications will be used across the company's
business units, which requires one application to interact with the services of each
unit. So, SOA advises us on how to build our applications to have flexibility and
fluid communication between the services of each business unit.

Orchestration

This term refers to the possibility of coordinating the interactions between system
calls. This coordination is always achieved by a director, which knows the next
system call in the chain. The term represents a logical sequence used to obtain a
business result through different calls to different systems in a specific order. It is
used very frequently in conjunction with BPEL; we'll discuss that in the
next section.

Technological terms

These technological terms, in contrast with the theory we have seen behind them,
give us the knowledge that we need to use the tools in the way they are intended.
Try to link all this technical information with the theory that we have seen before.
If you feel that something is missing, please read further until you feel confident
with it. But don't worry, I will do my best to help you.

Workflow

When developers talk about workflows, they are probably referring to some
framework, tool, or product that lets them define a sequence of steps that an
application will take; that is, some kind of state machine that will be embedded in
the application. As we mentioned, in most cases workflows are specific to one
domain, and probably to one application.

Enterprise Service Bus (ESB)

Enterprise service buses emerged as very flexible products that implement a lot of
connectors, letting us plug our heterogeneous applications into them so that they
can interact with each other. With an ESB, we no longer need to know which
protocol to use to talk to another application; we only need to know how to talk to
the bus. The bus is then in charge of the translation between the different protocols
and languages.

This material is copyright and is licensed for the sole use by ALESSANDRO CAROLLO on 18th December 2009
6393 south jamaica court, , englewood, , 80111

Why Developers Need BPM?

BPEL (WS-BPEL)

Business Process Execution Language (BPEL) is a language that defines how web
service calls are coordinated, one after the other, to obtain some business
information or to achieve some business action. This language lets us define how
and when web services from different applications need to be called and how data
should be passed between these calls.
One final thing to notice here is that BPM is a discipline. This means that
BPM is technology agnostic: you could implement this discipline in your
company with just pen and paper, but if you are a developer, I imagine
you wouldn't want to do that.

That is why BPMSs come to save us.

Business Process Management Systems (BPMS), my tool and your tool from now on

Now we know about BPM as a discipline, so we could implement it ourselves; but
wait a second, we don't need to, because jBPM is a framework that already lets us
implement the main stages of BPM (unless you really want to do it with pen and
paper!). A BPMS is a piece of software that lets us implement all the main stages
that the discipline describes. These tools are frameworks that provide us with
designing tools to describe our business processes, configurable execution
environments in which to run our designed processes, and tools to analyze and
audit the history of our process executions in order to improve our processes and
make more accurate business decisions. That is exactly what jBPM gives us: an
open source development framework integrated with nice tools to describe our
processes in a formal language (called jPDL, the jBPM Process Definition
Language), an execution environment in which to see how the processes live and
guide our company through its activities, and a set of best practices to analyze
our processes and improve the company's performance and income.

BPM systems versus BPM suites

There is a lot of noise in the market about this. If you have not heard about
either of these terms, you are lucky, because both are overused.

Chapter 1

BPM systems, as we have discussed earlier, are developer-oriented tools that let
us implement software solutions that follow the BPM approach (this approach will
be discussed later). jBPM is a BPM system; we are going to see that it is also
designed around a graph-oriented language that can be useful for fluid
communication with business analysts. However, business analysts are not supposed
to design and execute business processes on their own.
On the other hand, BPM suites are products oriented toward helping business
analysts design and implement fully functional processes on their own. These
products follow a "developer free" policy. Products of this kind are commonly
closed source and also come integrated with an application server.
As you can imagine, BPM suites are good tools, but their main problem is that
most of the time they do not have enough flexibility and expressiveness to adapt
to everyone's business needs.
To tackle these flexibility issues, developers using a BPM system can adapt and
modify the entire framework to fulfill the business requirements. Also, BPM
systems are, like any other framework, environment agnostic, so depending on the
business requirements, developers can choose to create a standalone application,
a web application, or a full enterprise application. This comes with another huge
advantage: we can choose any vendor's web or application server! We are not tied
to JBoss, nor to any license fees.

Why we really need to know BPM and BPMS, and how they impact our daily life

Enough theoretical chat; we want to see how these tools and concepts can be
applied in our everyday job, because these tools will change the way we think
about our applications, and in my experience, once you try them, you never want
to go back. A new development approach has arrived, one that will give our
applications flexibility and an easy way to adapt to the everyday changes that
our company requires.

New approach

Here we are going to see how our component-oriented paradigm is modified a little
to take advantage of what the BPM discipline proposes.
The main idea is to give developers more flexibility to adapt to future changes
and, of course, to improve the way they create applications that include/reflect
the company's business processes.

To achieve that, we must have a formal description of our processes, know in
which environment our processes should run, and then combine these two areas in
one implementation that business analysts understand and that has all the
technical details that allow the process to run in a production environment.
Running this process will guide all the employees involved in it through their
tasks/activities in their everyday work.
To make this implementation flexible and adaptable, we need a loosely coupled
design and relationship between our formal representation of the processes and
all the technical details. Remember that these two "artifacts" (the formal
process description and everything that involves technical details: Java code,
XML configuration files, and so on) are aimed at two different roles from the
BPM system's perspective. This is because our process definition must always be
understood by business analysts, while the technical details will always be
implemented by developers who know about environments and execution
configurations.
To get a clear vision of what I'm talking about, let's analyze the
following image:
[Figure: Business analysts define the business process definition (a graph of activities involving the business analyst, business users, and the business manager), while developers code, configure, and maintain the runtime environment that executes it (Java code, XMLs, and configuration files).]


Now our processes and applications can change smoothly to reflect the changes
that our company goes through day after day, and our work will just be to modify
the process definition to adopt the new requirements (for example, moving one
activity before another or adding a new activity), so that our process stays
synchronized with our company's changes. I don't want to lie here: some changes
to the technical details will still be needed when our process requires
modifications, but if you apply some basic principles, those changes will
remain minor.
Now it is time to get to know the jBPM implementation in depth, so in the next
chapter we are going to focus on a deep analysis of the framework's internal
component distribution and the basic principles that drive the real
implementation.
But wait a moment, before the next chapter I have some homework for you to do.

Homework

This is a special homework assignment; you can use the outcome to measure your
knowledge improvement throughout the rest of the book.
For this homework you will need to find one or two simple processes in your
company and describe them, like the example that we saw in this chapter.
Try to describe your processes using the following guidelines:
•	First look for business goals: Try to find something that is important to your company, not only to developers. You will probably find something that is not even close to systems or computers (like the recycling example).
•	Put a name to your process: When you find a business goal, it's easy to name your process after that goal, as follows:
	°	Goal: Recycle Paper
	°	Name: Recycling Paper process
•	Focus on the entire company, not only on your area: As we mentioned earlier, business processes are about integrating teams, business units, and also business partners. So, don't limit your processes to your area. Ask managers about tasks and whether those tasks are part of a bigger business scenario; 80% of tasks are part of bigger business scenarios. So bother your managers for five minutes, they will probably get interested in what you are doing with the company's processes.
•	Identify business roles: You will need to find out who performs each task inside your process. This is a very important point; try to also identify the nature of these roles, for example, whether some role in your process is irreplaceable, or whether there are a lot of people who can do the job.

•	Identify systems interactions: Try to identify the different systems that are used throughout the different business units, and try not to imagine improvements to those systems. Just write down what you see.
•	Not too macro and not too micro: Try to focus on processes that have concrete tasks, which people do every day. If you find that you are describing a really huge process, in which every activity is itself a complete process, that is not within the scope of this homework. On the other hand, you may find yourself describing very detailed tasks that in reality are just one task; for example, if you have a form divided into three sections, and these three sections are filled in all together by the same business role at the same time, you do not need to describe three separate tasks, one for each section, because one task (filling in the form) represents them all.
•	Do not rely on old documentation: Sometimes companies have their business processes documented (probably because the company was trying to achieve some quality assurance certification), but most of the time these documented processes are not reflected in the company's everyday work. So, try to look at a process that happens right near you.
•	Do not be creative: Do not add extra activities to your processes; there will be time to improve your current processes later. So, if you are not sure that a task is really being performed, throw it away.
•	A graph can help a lot: If you can draw your process as boxes and arrows, you are heading in the right direction. A graph can also help you get quick validation from your partners and managers.

As I mentioned before, this process discovery task is for business
analysts. But for learning purposes, it can help us understand and gain
expertise in recognizing real-life processes, and then have more fluid
conversations with business analysts.

Take your time to do this homework; later you will find that you are
learning how to improve your newly-found processes.


Summary

In this chapter, we learned the meaning of three key concepts that will underpin all
of our work with jBPM:
•	Business Process: A sequence of activities, carried out by humans and system interactions, performed in order to achieve some business goal. This is a key concept; keep it in mind.
•	Business Process Management: BPM is a discipline oriented toward analyzing, improving, and maintaining business processes in an iterative way over time. We also saw all its advantages and some of its history.
•	Business Process Management Systems: Software that lets us implement the main stages of BPM; a set of tools for designing, managing, and improving our business processes, designed for teams composed of developers and business analysts.

We also tried to understand why developers need to know all these concepts, and
we reached the conclusion that a new paradigm of software development can be
implemented using the BPM theory.
In the next chapter, we will learn about the jBPM framework. We will see how
it is composed and the set of tools that this framework provides us. The focus
of this book is to show how the framework is built internally, from a
developer's perspective, which will help you to:
•	Learn about any open source framework
•	Learn how to teach and guide the learning process of open source frameworks


jBPM for Developers
This chapter will give us a basic background in how the framework was built.
We will be fully focused on the approach used to implement jBPM. This approach
is called Graph Oriented Programming, and we will discuss it and implement a
basic solution with it. This will guide us through the framework internals with
a simplistic vision, which will give us the power to understand the main
guidelines used to build the entire framework.
During this chapter, the following key points will be covered:
•	Common development process
•	Decoupling processes from our applications
•	Graph Oriented Programming
	°	Modeling nodes
	°	Modeling transitions
	°	Expanding our language
•	Implementing our graph-oriented solution in Java
•	Wait states versus automatic nodes
•	Executing our processes
Let's get started with the main cornerstone behind the framework. This chapter
will show us how to represent our business processes using the Java language,
and all the points that you need to cover in order to be able to represent
real situations.


Graph Oriented Programming

We will start by talking about the two main concepts behind the framework's
internal implementation. The Graph Oriented Programming (GOP) approach is used
to gain some features that we will want when we need to represent business
processes inside our applications. Basically, Graph Oriented Programming gives
us the following features:
•	Easy and intuitive graphical representation
•	Multi-language support
These are the concepts that the jBPM implementers had in mind when they started
with the first version of the framework. We are going to take a quick look at
them and formulate some code in order to implement a minimal solution with these
features in mind.
Starting with GOP as the bigger concept, you will see that the official jBPM
documentation mentions this topic as one of the most important concepts behind
the framework. Here, we will reveal all of the advantages that implementing
this approach will give us. Basically, by knowing GOP, we will gain complete
knowledge about how processes are represented and how they are executed.
Therefore, a common question here is: why do we need a new approach (GOP) for
programming our processes when we have already learned the object-oriented
programming paradigm?

Common development process

In order to answer the previous question, we will quickly analyze the situation.
To achieve this, we need to understand the nature of our processes. We will
also analyze what kind of advantages developers gain when the business process
information is decoupled from the rest of the application code.
Let's clarify this point with a simple example. Imagine that we have to build an
entire application that represents the stages in the "Recycling Things Co."
example presented previously. The most common approach for a three-tier
application and its development process would be the following:

Chapter 2
[Figure: The traditional development process: a business analyst observes and inquires to gather requirements (use cases/text descriptions) and takes part in analysis and design (analysis and design class diagrams); developers produce the implementation (code/logic, DB structure/tables, and presentation code/user interface); testers run test cases (test scenarios).]

This is a traditional approach where all the stages are iteratively repeated for
each stage/module of the application.
One thing that we can notice here, and which happens in real software projects,
is that the business analyst's description gets lost in the design phase, because
the business analyst doesn't fully understand the design class diagrams, as these
diagrams are focused on implementation patterns and details. If we are lucky and
have a very good team of business analysts, they will understand the diagrams;
however, there is no way they could understand the code. So, in the best case,
the business analyst's description is lost in the code. This means that we cannot
show our clients how the stages of their processes are implemented in real
working code. That is why business analysts and clients (stakeholders) are blind:
they need to trust that we (the developers) know what we are doing and that we
understand 100 percent of the requirements that the business analysts collect.
Also, it is important to notice that in most cases the client validates the
business analyst's work; if changes emerge in the implementation phase, sometimes
these changes are not reflected in the business analyst's text, and the
clients/stakeholders never realize that some implementation aspect of their
software has changed.
Maybe they are not functional changes, but sometimes there are changes that
affect the behavior of the software or the way users will interact with it. This
uncertainty generated in the stakeholders causes some dependency and odd
situations in which a stakeholder thinks that if he/she can no longer count on us
(the team of developers and architects), nobody will be able to understand our
code (the code that we wrote). With the new approach, the clients/stakeholders
will be able to perceive, in a transparent way, the code that we write to
represent each business situation. This allows them (the stakeholders) to ask for
changes, which can be easily introduced to reflect everyday business
requirements. Let's be practical and recognize that, in most situations, if we
implement the application in a three-tier architecture, we will have the
following artifacts developed:

Database model

This includes logic tables used for calculations, UI tables that store UI
customizations or users' data about their custom interfaces, domain entities
(tables that represent the business entities, for example, Invoice, Customer,
and so on), and history logs, all together.

Business logic

If we are careful developers, here we are going to have all of the code related
to our business process logic. In the case of the example, in the best case we
will have all the stages represented in some kind of state machine. If we don't
have such a state machine, we will have a lot of if and switch statements
distributed throughout our code, each representing a stage in the process. For
example, if we have the same application for all the branches of a company, this
application will need to behave differently for a main-office employee than for
a warehouse employee, because the tasks they perform are very different in
nature. Imagine what would happen if we wanted to add an activity in the middle:
the world would probably collapse! Developers would need to know in some depth
how all the if and switch statements are distributed throughout the code in
order to add/insert the new activity. I don't want to be one of those
developers.
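To make this concrete, here is a hypothetical sketch of the anti-pattern just described: the process stages hardcoded as strings and a switch statement buried inside the business logic. All the class, stage, and role names are invented for illustration; they are not part of the book's example code.

```java
// Hypothetical sketch of process stages scattered through business logic.
public class RecyclingService {

    // Given the current stage and the role of the user, decide what comes next.
    // Every change to the process forces us to revisit this switch (and every
    // other one like it dispersed across the code base).
    public String nextStage(String currentStage, String userRole) {
        switch (currentStage) {
            case "RECEIVE_MATERIAL":
                return "CLASSIFY_MATERIAL";
            case "CLASSIFY_MATERIAL":
                // Role-specific behavior leaks into the process logic too.
                if ("WAREHOUSE".equals(userRole)) {
                    return "STORE_MATERIAL";
                }
                return "REPORT_TO_MAIN_OFFICE";
            case "STORE_MATERIAL":
            case "REPORT_TO_MAIN_OFFICE":
                return "DONE";
            default:
                throw new IllegalStateException("Unknown stage: " + currentStage);
        }
    }
}
```

Inserting a new activity between two existing stages means editing every such switch, which is exactly the maintenance problem described above.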

User interfaces

Once again, if we are lucky developers, the process stages will not be
represented here, but there will probably be many if and switch statements
dispersed in our UI code, deciding which screen is shown to each user in each
activity of the process. So, for each button and each form, we need to ask
whether we are at a specific stage in the process with a specific user.


Decoupling processes from our applications

By decoupling the business process from these models, we introduce an extra layer
(tier) with some extra artifacts, but this helps us to keep the application simple.
[Figure: In the traditional implementation artifacts produced by the developer (code/logic, DB structure/tables, and presentation code/user interface), there is no obvious place where the business processes fit.]

The new paradigm proposed here includes the two Business Process Management
(BPM) roles (business analysts and developers) in all the development and
execution stages of our processes. This is mainly achieved through the use of a
common language that both sides understand. It lets us represent both the
current process that the business analysts see in the everyday work of the
company, and all of the technical details that these processes need in order to
run in a production environment. As we can see in the next image, both roles
interact in the creation of these new artifacts.


We must not forget about the clients/managers/stakeholders, who can validate the
processes every day as they run. They can also ask for changes to improve the
performance of a process and the current way of achieving its business goal.
[Figure: In a BPM system, the developer contributes the technical details (code/logic, DB structure/tables, and presentation code), the business analyst contributes the process definitions, and the DBA maintains the DB structure for process runtime and versioning, without process details mixed into the code.]

Compared with the OOP paradigm, where class diagrams are commonly used to
represent static data but no executional behavior, these newly created artifacts
(process definitions) can easily be represented in a form that our
clients/stakeholders can validate. One of the main advantages of this approach
is that we gain visibility into how the processes are executed and which
activity they are in at any given moment. This requirement forces us to have a
simple way to represent our business processes: a way that can be drawn as a
graph. We need to be able to see, at all times, how our production processes
are running.

Graph Oriented Programming on top of OOP

Here, we will discuss the main points of the Graph Oriented Programming
paradigm. With this analysis, in the next section we will implement a basic
approach in order to understand how this paradigm is used on top of the
Java language.

In order to do that, we need to know the most important requisites that we have
to fulfill in order to integrate, maintain, and execute our business processes
in a real-world implementation:
•	Easy and intuitive graphical representation: To let business analysts and developers communicate smoothly and fully understand what is happening in the real process.
•	It must give us the possibility of seeing the processes' executions in real time: In order to know how our processes are doing, so that we can make more accurate business decisions.
•	It could be easily extended to provide extra functionality that fulfills all of the business situations.
•	It could be easily modified and adapted to everyday business (reality) changes: No more huge development projects for small changes and no more migrations.

Implementing Graph Oriented Programming on top of the Java language (finally Java code!)

With the requisites presented in the previous section in mind, we are able to
implement, on top of the Java language, a simple solution that follows this new
approach (called the Graph Oriented Programming paradigm). As the name of the
paradigm says, we are going to work with graphs, directed graphs to be
more precise.
A graph can be defined as a set of nodes linked to each other, as the following
image shows us:


If we are talking about directed graphs, we need to know that our nodes will be
linked using directed transitions. These transitions are directed because they
define a source node and a destination node. This means that a transition that
has node A as the source node and node B as the destination node is not the same
as one that has node B as the source node and node A as the destination node.
Take a look at the following image:

As in the object-oriented programming paradigm, we need a language with a
specific set of words (for example, object). We will need words to represent
our graphs, just as we can represent objects in the object-oriented paradigm.
Here we will try to expand on the official documentation proposed by the jBPM
team and guide the learning process for this important topic. We will see code
in this section, and I will ask you to try it at home: debug it and play with
this code until you feel confident about the internal behavior of this example.
Let's get started with the graph definition and with some of the rules that the
graph needs to implement in order to represent our business processes correctly.
Up until now, we have two concepts that will appear in our graph-oriented
programming language: Node and Transition. These two concepts need to be
implemented in two separate classes, but with a close relationship. Let's see a
class diagram of these two classes and make a short analysis of the attributes
and methods proposed in this example.
[Class diagram: Node (Long id, String name) with a leavingTransitions association to Transition (String label), which in turn has a destination association back to Node.]

Modeling nodes in the object-oriented world
This concept will be in charge of containing all of the information that we want
to know about each activity in our process. The idea here is to discover the
vital information that we need in order to represent a basic activity with a
Plain Old Java Object (POJO).

[Class diagram: Node, with attributes Long id, String name, and List<Transition> leavingTransitions. Note that we are using generics here, a feature available from the Java Language Specification 1.5; this lets us define a collection (a List in this case) that will only contain Transition objects.]

As you can see in the class diagram, this class contains the following
attributes, which store information about each activity of our process:
•	id (Long): A unique identification of the node
•	name (String): Describes the node's activity
•	leavingTransitions (List<Transition>): Represents all the transitions that leave the node

This concept will also need to implement some basic methods for its executional
behavior, but they will be discussed when we jump to the execution stage.
For now, we are only going to see how we can represent a static graph.

Modeling a transition in the object-oriented world

This concept is very simple: we want to know how the nodes are linked to each
other. This information will define the direction of our graph. For that, we
need to know the following information:
•	name (String): Must be unique for the same source node
•	source (Node): The source node from which the transition begins
•	destination (Node): The destination node at which the transition arrives
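To make these two concepts concrete, here is a minimal sketch of how Node and Transition could look as plain Java classes. The method names (addTransition, getLeavingTransitions, and so on) are illustrative choices for this sketch, not the real jBPM API; the downloadable classes that accompany the book may differ.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the Node concept: the generic data every activity shares.
class Node {
    private Long id;      // unique identification of the node
    private String name;  // describes the node's activity
    // Generics guarantee that this list only contains Transition objects.
    private List<Transition> leavingTransitions = new ArrayList<Transition>();

    Node(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    // Creates a directed transition from this node to the destination node.
    void addTransition(String transitionName, Node destination) {
        leavingTransitions.add(new Transition(transitionName, this, destination));
    }

    Long getId() { return id; }
    String getName() { return name; }
    List<Transition> getLeavingTransitions() { return leavingTransitions; }
}

// Minimal sketch of the Transition concept: a directed link between two nodes.
class Transition {
    private String name;       // unique for the same source node
    private Node source;       // node where the transition begins
    private Node destination;  // node where the transition arrives

    Transition(String name, Node source, Node destination) {
        this.name = name;
        this.source = source;
        this.destination = destination;
    }

    String getName() { return name; }
    Node getSource() { return source; }
    Node getDestination() { return destination; }
}
```

With these two classes, linking node A to node B creates a transition that is not interchangeable with one linking B to A, which is exactly the directed-graph property described earlier.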


With these two classes, we are able to represent our processes. But wait a
minute: if our processes only have nodes, we are hiding what is happening inside
them; we only see nodes, not what is happening in the process. So, we need more
expressiveness to describe our processes in a more striking way, so that those
who see the process can understand the process "direction" and the nature of
each activity inside it. Of course, the behavior of each activity will also be
clarified by a more specific language that provides us with other words, not
just nodes and transitions, to describe activities and the flow.

Expanding our language

The only thing that we need to do in order to have more expressive power in our
graphs is to increase the number of words in our language. At the moment we only
have Node and Transition, and these two words are not enough to create clear
graphs that can be understood by our stakeholders. To solve that, we can build a
hierarchy, from the most abstract concepts that represent a generic activity to
more concrete concepts related to specific activities such as human tasks,
automatic activities, decision activities, and so on.
In order to do this, you only need to extend the Node concept and add the
information that you want to store for a specific activity's behavior.
Basically, what I am saying here is that the Node concept holds the basic and
generic information that every activity will have, and each subclass adds the
specific information related to that specific activity's behavior in our
specific domain.
[Class diagram: ConcreteNode, MoreConcreteNode, and AnotherMoreConcreteNode extend Node; Node holds the abstract node functionality, and each subclass adds the concrete node functionality needed to achieve a concrete activity. Note that the Node class is not an abstract class: it represents an abstract concept, but it can be instantiated.]

As you can imagine, if we have different sets of nodes to represent different
domains, we can say that our Graph Oriented Programming solution supports
multiple languages.
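As a sketch of how a subclass adds specific information, here is a hypothetical HumanTaskNode (a pared-down Node is repeated so the snippet stands alone). Both the subclass name and the assignedRole attribute are invented for illustration; they are not jBPM classes.

```java
// A pared-down generic Node: the information every activity shares.
class Node {
    protected Long id;
    protected String name;

    Node(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    String getName() { return name; }
}

// One more "word" in our language: a node for tasks performed by people.
// It inherits the generic data and adds only what is specific to its behavior.
class HumanTaskNode extends Node {
    private String assignedRole; // the business role expected to do the task

    HumanTaskNode(Long id, String name, String assignedRole) {
        super(id, name);
        this.assignedRole = assignedRole;
    }

    String getAssignedRole() { return assignedRole; }
}
```

A decision node, an automatic node, and so on would follow the same pattern, each adding its own domain-specific attributes on top of Node.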


Another important thing to notice here is that the Node concept needs to be
easily drawable, so that we can have graphical views of our processes. In the
object-oriented paradigm, we achieve that by implementing the Graphicable
interface, because we want all the subclasses of Node to be graphicable as well
(this is not shown in the diagrams, but you can see it in the code). This
interface provides the signature and makes us implement the methods to draw
each node easily, giving us the flexibility to draw our process in different
formats. It is also important to note that each subclass can override the
graphing functionality in order to graphically represent its different behavior.
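As a sketch of this idea (the Graphicable interface is only described, not shown, in the text), the following hypothetical version lets each subclass override how it is drawn. Having graph() return a String is an assumption made here so that the sketch stays simple and testable; a real implementation could render to any format.

```java
// Hypothetical sketch of the Graphicable idea.
interface Graphicable {
    String graph();
}

class Node implements Graphicable {
    protected String name;

    Node(String name) { this.name = name; }

    // Generic rendering shared by every node: a plain box.
    public String graph() { return "[" + name + "]"; }
}

class DecisionNode extends Node {
    DecisionNode(String name) { super(name); }

    // A subclass overrides the rendering to express its different behavior.
    @Override
    public String graph() { return "<" + name + ">"; }
}
```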
The only thing that we need now is a container that represents the graph as a
whole. As we now have a collection of nodes and transitions, we need a place to
hold all of them together.

Process Definition: a node container

This will be a very simple concept too. We could also use the word 'graph' for
this container, but here we will choose 'definition', because it is more closely
related to the business process world. We can say that this class will represent
the formal definition of our real situation. It will only contain a list of the
nodes that represent our process. One interesting thing to see here is that this
concept/class will implement an interface called NodeContainer, which provides
us with methods to handle a collection of nodes. For this interface, we need to
implement the functionality of the methods addNode(Node n), getNode(long id),
getNodes(), and so on.
[Figure: class diagram — the <<interface>> Graphicable declares graph(); the
<<interface>> NodeContainer declares addNode(Node), getNode(Long), and
getNodes(); the Definition class (String name, List nodes) implements both
interfaces.]
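The NodeContainer interface is also only named in the text, not listed. A sketch under the assumption that it exposes exactly the three methods mentioned above could look like this; the tiny Node stand-in and the list-backed implementation are illustrative, not the book's code:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for the chapter's Node class, just enough to show
// the container contract.
class Node {
    private final Long id;

    Node(Long id) {
        this.id = id;
    }

    Long getId() {
        return id;
    }
}

// Sketch of the NodeContainer contract named in the text; the signatures
// are inferred from the methods the text mentions.
interface NodeContainer {
    void addNode(Node n);
    Node getNode(Long id);
    List<Node> getNodes();
}

class SimpleNodeContainer implements NodeContainer {
    private final List<Node> nodes = new ArrayList<>();

    public void addNode(Node n) {
        nodes.add(n);
    }

    public Node getNode(Long id) {
        // Linear search by id, mirroring what the chapter's Definition does.
        for (Node n : nodes) {
            if (n.getId().equals(id)) {
                return n;
            }
        }
        return null;
    }

    public List<Node> getNodes() {
        return nodes;
    }
}

public class NodeContainerDemo {
    public static void main(String[] args) {
        NodeContainer container = new SimpleNodeContainer();
        container.addNode(new Node(1L));
        container.addNode(new Node(2L));
        System.out.println("Stored nodes: " + container.getNodes().size());
    }
}
```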

In the next section, we will see how these concepts and class diagrams are translated
into Java classes.

Implementing our process definition

In the following section, we will implement our own simple solution to
represent our business processes. The main goal of this section is to
understand, from the source code, how the jBPM framework is built internally.
Understanding this simple implementation will help you to understand how the
framework represents our process definitions internally. That knowledge will
allow you to choose the best and most accurate way to represent your business
activities in a technical way.
We are ready to see how this is translated into Java classes. Please feel free to
implement your own solution to represent process definitions. If you don't feel
confident about the requirements, or if you are hesitant to implement your own
solution, in this section we are going to see all the extra details needed to
represent our definitions correctly. I also provide the Java classes to
download, play with, and test.
As in the previous section, we start with the Node class definition. In this class,
we are going to see details of how to implement the proposed concept in the
Java language.
Here we are just implementing our own solution in order to be able
to represent simple business processes. This is just for the learning
process. This implementation of GOP is similar to, but simpler than,
the real jBPM implementation.

The Node concept in Java

This class can be found in the code bundle for the book at
http://www.packtpub.com/files/code/5685_Code.zip (in the project called
chapter02.simpleGOPDefinition). We can find the Node class inside the package
called org.jbpm.examples.chapter02.simpleGOP.definition:
public class Node implements Graphicable {
    private Long id;
    private String name;
    private Map<String, Transition> leavingTransitions =
            new HashMap<String, Transition>();

    public Node(String name) {
        this.name = name;
    }

    public void addTransition(String label, Node destination) {
        leavingTransitions.put(label,
                new Transition(label, this, destination));
    }

    public void graph() {
        String padding = "";
        String token = "-";
        for (int i = 0; i < this.getName().length(); i++) {
            padding += token;
        }
        System.out.println("+---" + padding + "---+");
        System.out.println("|   " + this.getName() + "   |");
        System.out.println("+---" + padding + "---+");
        Iterator<Transition> transitionIt =
                getTransitions().values().iterator();
        while (transitionIt.hasNext()) {
            transitionIt.next().graph();
        }
    }
    ... (Setters and Getters of each property)
}

As you can see, there is nothing to worry about. It is a simple class that
contains properties. Here, you need to notice:

• Implementing the Graphicable interface forces us to define the graph() method.
• The graph() method's implementation, in this case, will print some ASCII
characters on the console.
• The use of a Map to store each transition with a name associated to it. The
idea here is to use the transitions based on names and not objects. In other
words, we can choose which transition to take using just a String.
• The addTransition(String, Node) method, which wraps the put method of the Map
and creates a new instance of the Transition class.

The Transition concept in Java

In the same package as the Node class, we can find the definition of the
Transition class:
public class Transition implements Graphicable {
    private Node source;
    private Node destination;
    private String label;

    public Transition(String label, Node source, Node destination) {
        this.source = source;
        this.destination = destination;
        this.label = label;
    }
    ... (Getters and Setters for all the properties)
}

Another very simple class. As you can see, here we have two Node properties
that allow us to define the direction of the transition. We also have the label
property, which allows us to identify the transition by name. This property is a
kind of ID and must be unique among all the leaving transitions of a particular
node, but it can be repeated in other nodes.

The Definition concept in Java

In the same package, we can find the Definition class. The idea of this class is to
store all the nodes that compose a process definition.
public class Definition implements Graphicable, NodeContainer {
    private String name;
    private List<Node> nodes;

    public Definition(String name) {
        this.name = name;
    }

    public void graph() {
        for (Node node : getNodes()) {
            node.graph();
        }
    }

    public void addNode(Node node) {
        if (getNodes() == null) {
            setNodes(new ArrayList<Node>());
        }
        getNodes().add(node);
    }

    public Node getNode(Long id) {
        for (Node node : getNodes()) {
            // Compare Long ids by value, not by reference
            if (node.getId().equals(id)) {
                return node;
            }
        }
        return null;
    }
    ... (Getters and Setters for all the properties)
}
The main things to notice here are:

• The name property will store the name of our process definition. This will be
important when we want to store these definitions in a persistent way.
• This class implements the Graphicable and NodeContainer interfaces. This
forces us to define the graph() method from the Graphicable interface, and the
addNode(Node) and getNode(Long) methods from the NodeContainer interface.
• The graph() method, as you can see, only iterates over all the nodes in the
list and graphs them, showing us the complete process graph in the console.
• The addNode(Node) method just inserts nodes into the list, and the
getNode(Long) method iterates over the nodes in the list until it finds a node
with the specified id.

Testing our brand new classes

At this point, we can create a new instance of the Definition class and start
adding nodes with the right transitions. Now we are going to be able to have a
graphical representation of our process.
All of these classes can be seen in the chapter02 code, in the
simpleGOPDefinition Maven project. You will see two different packages: one
shows a simple implementation of these concepts, and the other shows a more
expressive set of nodes overriding the basic node implementation, to give a more
realistic process representation.
If you don't know how to use maven, there is a quick start guide at the
end of this chapter. You will need to read it in order to compile and run
these tests.

In the test sources, you will find a test class called TestDefinition. It
contains two tests: one for the simple approach and the other for the more
expressive approach. Each of these test methods inside the TestDefinition class
uses the JUnit framework to run the defined tests. You can debug these tests or
just run them and see the output on the console (and yes, I'm an ASCII fan!).
Feel free to modify and play with this implementation. Always remember that here
we are just defining our processes, not running them.

If you look at the test code, you will see that it only shows how to create a
definition instance and then graph it.
public void testSimpleDefinition() {
    Definition definition = new Definition("myFirstProcess");
    System.out.println("########################################");
    System.out.println("  PROCESS: " + definition.getName() + "  ");
    System.out.println("########################################");
    Node firstNode = new Node("First Node");
    Node secondNode = new Node("Second Node");
    Node thirdNode = new Node("Third Node");
    firstNode.addTransition("to second node", secondNode);
    secondNode.addTransition("to third node", thirdNode);
    definition.addNode(firstNode);
    definition.addNode(secondNode);
    definition.addNode(thirdNode);
    definition.graph();
}

Process execution

Now that our definitions are ready, we can create an execution of our defined
processes. This can be achieved by creating a class where each instance
represents one execution of our process definition, bringing our processes to
life and guiding the company through its daily activities, and letting us see
how our processes move from one node to the next. With this concept of
execution, we gain the power to interact with and influence the process
execution by using the methods proposed by this class.
We are going to add all of the methods that we need to represent the executional
stage of the process, adding all the data and behavior needed to execute our
process definitions.

This process execution will only keep a pointer to the current node of the
process execution. This will let us query the process status whenever we want.
[Figure: the Execution class, with a Node currentNode and a Definition
definition, holding a reference to the process Definition.]

An important question comes to mind here: why do we need to interact with our
processes? Why doesn't the process flow to the end once we start it? The answer
to these important questions is: it depends. The important thing here is to
notice that there will be two main types of nodes:
• One type runs without external interaction (we can say that it is an automatic
node). These nodes represent automatic procedures that run without external
interaction.
• The second type is commonly named a wait state or event wait. The activity it
represents needs to wait for a human or a system interaction to complete. This
means that the system or the human needs to create/fire an event when the
activity is finished, in order to inform the process that it can continue to the
next node.

Wait states versus automatic nodes

The difference between them is basically the nature of the activity. We need to
recognize this nature in order to model our processes in the right way. As we
have seen before, a "wait state" or "event wait" situation occurs when, from the
point of view of the process, we need to wait for some event to take place.
These events are classified into two wide groups: Asynchronous System
Interactions and Human Tasks.

Asynchronous System Interactions

This refers to the situation where the process needs to interact with some other
system, but the operation will be executed asynchronously.
For less experienced developers, the word "asynchronous" may sound ambiguous or
meaningless. In this context, we can say that an asynchronous execution takes
place when two systems communicate with each other without blocking calls. This
is not the common mode of execution in our Java applications. When we call a
method in Java, the current thread of execution is blocked while the method code
executes inside the same thread. See the following example:
public class Main {
    public static void main(String[] args) {
        doSomething();
        doBackup(); // blocking call: waits until the backup finishes
        System.out.println("Backup Finished!");
    }
}
The doBackup() method will block until the backup is finished. When this
happens, the call stack will continue with the next line in the main class. This
blocking call is commonly known as a synchronous call.
On the other hand, we have non-blocking calls, where the method is called but we
(the application) are not going to wait for the execution to finish; the
execution will continue to the next line in the main class without waiting.
In order to achieve this behavior, we need to use another mechanism. One of the
most common mechanisms used for this is messages.
Let's see this concept in the following image:
[Figure: the main thread reaches doBackup(), creates a message (MSG), and
continues; an external thread takes the message and executes the backup.]

In this case, by using messages for asynchronous executions, the doBackup() call
is transformed into a message that will be taken by another thread (probably an
external system) in charge of the real execution of the doBackup() code. The
main class here will continue with the next line in the code. It's important to
notice that the main thread can end before the external system finishes doing
the backup. That's the expected behavior, because we are delegating the
responsibility of executing the backup code to the external system. But wait a
minute, how do we know if the doBackup() execution finished successfully? In
such cases, the main thread or any other thread should query the status of the
backup to know whether it is ready or not.
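This hand-off can be sketched in plain Java, with a BlockingQueue standing in for the messaging system and a worker thread playing the external system. All class and variable names here are illustrative, not part of the book's code:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the asynchronous interaction described above: the main thread
// drops a message on a queue instead of calling doBackup() directly, and
// an "external system" thread performs the real work.
public class AsyncBackupDemo {
    static final BlockingQueue<String> messages = new ArrayBlockingQueue<>(10);
    static final AtomicBoolean backupDone = new AtomicBoolean(false);

    public static void main(String[] args) throws InterruptedException {
        Thread externalSystem = new Thread(() -> {
            try {
                String msg = messages.take(); // block until a message arrives
                // ... the real backup work would happen here ...
                backupDone.set(true);
                System.out.println("External system executed: " + msg);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        externalSystem.start();

        messages.put("doBackup");             // non-blocking hand-off
        System.out.println("Main thread continues immediately");

        // The main thread cannot assume the backup is done just because
        // the call returned; it has to query the status explicitly.
        externalSystem.join();
        System.out.println("Backup finished? " + backupDone.get());
    }
}
```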

Human tasks

Human tasks are also asynchronous; we can see exactly the same behavior that we
saw before. However, in this case, the executing "thread" will be a human being,
and the message will be represented as a task in the person's task list.
[Figure: the main thread reaches doBackup(), creates a task, and continues; the
task is placed in an actor's task list, and the actor performs the "Do Backup"
task when he/she is able to.]

As we can see in this image, a task is created when the main thread's execution
reaches the doBackup() method. This task goes directly into the corresponding
user's task list. When the user has time or is able to do that task, he/she
completes it. In this case, the "Do Backup" activity is a manual task that needs
to be performed by a human being.
In both situations, we have the same asynchronous behavior, but the parties that
interact change, and this creates the need for different solutions.
For system-to-system interactions, we probably need to focus on the protocols
that the systems use for communication.
For human tasks, on the other hand, the main concern will probably be the user
interface that handles the human interaction.

How do we know if a node is a wait state node or an automatic node?
First of all, by its name. If the node represents an activity that is
done by humans, it will always wait. For system interactions, it is a
little more difficult to deduce this from the name (but, if we see an
automatic activity that we know takes a lot of time, it will probably
be an asynchronous activity that behaves as a wait state). A common
example could be a backup to tape, where the backup action is scheduled
in an external system. If we are not sure about the nature of an
activity, we need to ask our stakeholders about it.

We need to understand these two behaviors in order to know how to implement
each node's executional behavior, which will be related to the specific
node functionality.

Creating the execution concept in Java

With this class, we will represent each execution of our process, which means
that we could have a lot of instances running at the same time with the same
definition. This class can be found inside another project, called
chapter02.simpleGOPExecution. We have to separate the projects because, in this
one, all the classes that represent the processes also include the code related
to the execution of the process.
Inside the package called org.jbpm.examples.chapter02.simpleGOP.execution,
we will find the following class:
public class Execution {
    private Definition definition;
    private Node currentNode;

    public Execution(Definition definition) {
        this.definition = definition;
        // Setting the first Node as the current Node
        this.currentNode = definition.getNodes().get(0);
    }

    public void start() {
        // Here we start the flow leaving the currentNode.
        currentNode.leave(this);
    }
    ... (Getters and Setters methods)
}
As we can see, this class contains a Definition and a Node; the idea here is to
have a currentNode that represents the node inside the definition to which this
execution is currently "pointing". We can say that the currentNode is a pointer
to the current node inside a specific definition.
The real magic occurs inside each node. Each node now has the responsibility of
deciding whether it must continue the execution to the next node or not. In
order to achieve this, we need to add some methods (enter(), execute(),
leave()) that will define the internal executional behavior of each node. We do
this in the Node class to be sure that all the subclasses of the Node class will
inherit the generic way of execution. Of course, we can change this behavior by
overriding the enter(), execute(), and leave() methods.
We can define the Node.java class (which is also found in the
chapter02.simpleGOPExecution project) as follows:
...
public void enter(Execution execution) {
    execution.setCurrentNode(this);
    System.out.println("Entering " + this.getName());
    execute(execution);
}

public void execute(Execution execution) {
    System.out.println("Executing " + this.getName());
    if (actions.size() > 0) {
        Collection<Action> actionsToExecute = actions.values();
        Iterator<Action> it = actionsToExecute.iterator();
        while (it.hasNext()) {
            it.next().execute();
        }
        leave(execution);
    } else {
        leave(execution);
    }
}

public void leave(Execution execution) {
    System.out.println("Leaving " + this.getName());
    Collection<Transition> transitions =
            getLeavingTransitions().values();
    Iterator<Transition> it = transitions.iterator();
    if (it.hasNext()) {
        it.next().take(execution);
    }
}
...

As you can see in the Node class, which is the most basic and generic
implementation, three methods are defined to specify the executional behavior of
these nodes in our processes. If you look carefully at these three methods, you
will notice that they are chained: the enter() method will be the first to be
called, and it will in turn call the execute() method, which will call the
leave() method depending on the situation. The idea behind these chained methods
is to demarcate different phases inside the execution of the node.
All the subclasses of the Node class will inherit these methods, and with them
the executional behavior. Subclasses could also add other phases to demarcate a
more complex lifecycle inside each node's execution.
The next image shows how these phases are executed inside each node.
[Figure: two consecutive nodes, each running its Enter, Execute, and Leave
phases in order, with the transition between them running its Take phase.]

As you can see in the image, the three methods are executed when the execution
points to a specific node. It is also important to note that transitions have a
Take phase, which is executed to jump from one node to the next.
All these phases inside the nodes and transitions will let us hook in custom
blocks of code to be executed.
One example of what we could use these hooks for is auditing processes. In the
enter() method, which is the first method called in each node, we could add a
call to an audit system that takes the current timestamp, and then measure the
time the node uses until it finishes its execution, when the leave() method
is called.
[Figure: a node whose Enter and Leave phases are each hooked to an audit system
call, measuring the time spent inside the node.]

Another important thing to notice in the Node class is the code inside the
execute() method. A new concept appears here: the Action interface that we see
in that loop represents a pluggable way to include custom logic inside a node
without changing the node class. This allows us to extend the node functionality
without modifying the business process graph, which means that we can add a huge
amount of technical detail without increasing the complexity of the graph. For
example, imagine that in our business process, each time we change node, we need
to store the data collected in each node in a database. In most cases, this
requirement is purely technical, and the business users don't need to know about
it. With these actions, we achieve exactly that: we only need to create a class
with the custom logic that implements the Action interface, and then add it to
the node in which we want to execute the custom logic.
[Figure: a node whose Execute phase runs a chain of pluggable Action instances
between the Enter and Leave phases.]
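A minimal sketch of this pluggable mechanism could look like the following. The Action name comes from the text; the AuditAction class, its fields, and the demo loop are illustrative assumptions standing in for the book's CustomAction:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the pluggable Action mechanism described above.
interface Action {
    void execute();
}

// Illustrative custom logic: purely technical behavior (such as writing
// to an audit log) that business users never see in the process graph.
class AuditAction implements Action {
    boolean executed = false;

    public void execute() {
        executed = true;
        System.out.println("audit: node data stored");
    }
}

public class ActionDemo {
    public static void main(String[] args) {
        // Inside Node.execute(), every attached action runs before the
        // node calls leave(); this loop mimics that behavior.
        List<Action> actions = new ArrayList<>();
        actions.add(new AuditAction());
        for (Action action : actions) {
            action.execute();
        }
    }
}
```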

The best way to understand how the execution works is by playing with the code.
In the chapter02.simpleGOPExecution Maven project, we have another test that
shows us the behavior of the Execution class. This test class is called
TestExecution and contains two basic tests that show how the execution works.
If you don't know how to use maven, there is a quick start guide at the
end of this chapter. You will need to read it in order to compile and run
these tests.
public void testSimpleProcessExecution() {
    Definition definition = new Definition("myFirstProcess");
    System.out.println("########################################");
    System.out.println("  Executing PROCESS: " + definition.getName() + "  ");
    System.out.println("########################################");
    Node firstNode = new Node("First Node");
    Node secondNode = new Node("Second Node");
    Node thirdNode = new Node("Third Node");
    firstNode.addTransition("to second node", secondNode);
    secondNode.addTransition("to third node", thirdNode);
    // Add an action in the second node (CustomAction implements Action).
    secondNode.addAction(new CustomAction("First"));
    definition.addNode(firstNode);
    definition.addNode(secondNode);
    definition.addNode(thirdNode);
    // We can graph it if we want.
    // definition.graph();
    Execution execution = new Execution(definition);
    execution.start();
    // The execution has left the third node
    assertEquals("Third Node", execution.getCurrentNode().getName());
}

If you run this first test, it creates a process definition as in the definition
tests, and then, using that definition, it creates a new execution. This
execution lets us interact with the process. As this is a simple implementation,
we only have the start() method, which starts the execution of our process,
executing the logic inside each node. In this case, each node is responsible for
continuing the execution to the next node. This means that there are no wait
state nodes inside the example process. If we had a wait state, our process
would stop its execution at the first wait state, and we would need to interact
with the process again in order to continue the execution.
Feel free to debug this test to see how it works. Analyze the code and follow
the execution step by step. Try adding new actions to the nodes and analyze how
all the classes in the project behave.
Once you get the idea, the framework internals will be easy to digest.

Homework

We are ready to create our first simple GOP language. The idea here is to get
hands-on with the code and try to implement your own solution. Following the
guidelines proposed in this chapter, with minimal functionality but with the
full paradigm implemented, we will represent and execute our first process. We
could try to implement our example about "Recycling Thing Co.", but we will
start with something easier, so that you can debug it and play with it until you
get the main points of functionality. In the following sections, I will give you
all the information that you need in order to implement the new words of our
language and the behavior that the process will have. This is quite a lot of
homework but, trust me, it is really worth it. The idea of finishing this
homework is to feel comfortable with the code and the behavior of our defined
processes. You will also see how the methods are chained together in order to
move the process from one node to the next.

Creating a simple language

Our language will be composed of subclasses of our previous Node class. Each of
these subclasses will be a word in our new language. Take a look at the
ActivityNode proposed in the chapter02.simpleGOPExecution project, inside the
org.jbpm.examples.chapter02.simpleGOP.definition.more.expressive.power package.
When we try to represent processes with this language, we will have some kind of
sentence or paragraph expressed in our business process language. As in all
languages, these sentences and each word will have restrictions and correct ways
of use. We will see these restrictions in the Nodes description section of
this chapter.
So, here we must implement four basic words for our simple language. These words
will be start, action, human decision, and end, to model processes like this one:
[Figure: an example process — a Start node flows to an Action node, then to a
Human Decision node (?), which branches to two alternative Action nodes, both
leading to an End node.]

Actually, we can have any combination of different types of nodes mixed in our
processes, as long as we follow the rules/restrictions of the language that we
implement. These restrictions are always related to the words' meanings. For
example, if we have the word "start" (which will be a subclass of Node)
represented by the node StartNode, this node implementation cannot have arriving
transitions. This is because this node starts the process, and no other node can
be connected to the start node. We can also see a similar restriction in the end
node, represented in the implementation by the EndNode class: because it is the
last node in our processes, it cannot have any leaving transitions. With each
kind of node, we are going to see that they have different functionality and a
set of restrictions that we need to respect when we use these words in sentences
defining our business processes. These restrictions can be implemented as 'not
supported operations' and expressed with: throw new
UnsupportedOperationException("Some message here");. Take a look at the EndNode
class; you will see that the addTransition() method is overridden to
achieve that.
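The restriction can be sketched as follows. The stripped-down Node below is only an illustrative stand-in for the chapter's Node class; the part that matters is EndNode's override:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for the chapter's Node class, just enough to show
// how a word of the language enforces its restriction.
class Node {
    private final String name;
    private final Map<String, Node> leavingTransitions = new HashMap<>();

    Node(String name) {
        this.name = name;
    }

    public void addTransition(String label, Node destination) {
        leavingTransitions.put(label, destination);
    }

    public String getName() {
        return name;
    }
}

class EndNode extends Node {
    EndNode(String name) {
        super(name);
    }

    @Override
    public void addTransition(String label, Node destination) {
        // An end node is the last node of the process, so it must not
        // have any leaving transitions.
        throw new UnsupportedOperationException(
                "An EndNode cannot have leaving transitions");
    }
}

public class EndNodeDemo {
    public static void main(String[] args) {
        Node start = new Node("Start");
        start.addTransition("to end", new EndNode("End")); // allowed
        System.out.println("Process wired up from: " + start.getName());
    }
}
```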

Nodes description

In this section, we will see the functionality of each node. You can take this
functionality and follow it in order to implement each node. You could also
think of some other restrictions to apply to each node. Analyze the behavior of
each method and decide, for each specific node type, whether the method behavior
needs to be kept as it is in the superclass or whether it needs to be
overridden.
• StartNode: This will be the first node in all our processes. This node behaves
as a wait state, waiting for an external event/signal/trigger that starts the
process execution. When we create a new process execution instance, this node is
selected from the process definition and set as the current node of that
execution. The start() method in the Execution class represents the event that
moves the process from the first node to the second, starting the flow of
the process.
• EndNode: This will be the last node in all our processes. This node ends the
life of our process execution instance. As you can imagine, this node restricts
the possibility of adding leaving transitions to it.
• ActionNode: This node contains a reference to some technical code that
executes the custom actions we need to fulfill the process goal. It is a very
generic node to which we can add any kind of procedure. This node behaves as an
automatic activity: it executes all of its actions and then leaves the node.
• HumanDecisionNode: This is a very simple node that gives a human being some
information in order to decide which path the execution will continue through.
This node, which needs human interaction, behaves as a wait state, waiting for a
human to decide which transition the node must take (this means that the node
behaves as an OR decision, because with this node we cannot take two or more
paths at the same time).
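Since the chapter leaves the implementation as homework, the wait-state behavior of the human decision can only be sketched here. Every name below (the class, addPath, decide, and the Runnable continuations) is an assumption; only the OR-decision idea comes from the text:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a wait-state word: execute() deliberately does
// NOT continue the flow, so control returns to the caller, and a later
// decide(...) call resumes the process through exactly one path.
class HumanDecisionNode {
    private final Map<String, Runnable> paths = new HashMap<>();
    private String chosenPath;

    void addPath(String label, Runnable continuation) {
        paths.put(label, continuation);
    }

    void execute() {
        // Wait state: do nothing and hand control back to the caller.
        System.out.println("Waiting for a human decision...");
    }

    void decide(String label) {
        // OR semantics: the human picks exactly one leaving transition.
        chosenPath = label;
        paths.get(label).run();
    }

    String getChosenPath() {
        return chosenPath;
    }
}

public class HumanDecisionDemo {
    public static void main(String[] args) {
        HumanDecisionNode decision = new HumanDecisionNode();
        decision.addPath("approve", () -> System.out.println("Path: approve"));
        decision.addPath("reject", () -> System.out.println("Path: reject"));
        decision.execute();          // control returns here: process waits
        decision.decide("approve");  // later, the human decides
    }
}
```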

One last thing before you start with the real implementation of these nodes: we
need to understand what the expected results are in the execution stage of our
processes. The following images show how the resulting process must behave at
execution time. The whole execution of the process (from the start node to the
end node) is presented in three stages.

Stage one

The following image represents the first stage of execution of the process. In this
image, you will see the common concepts that appear in the execution stage. Every
time we create a new process execution, we will start with the first node, waiting for
an external signal that will move the process to the next node.
[Figure: stage one — a new Execution is created and its current node pointer is
set to the Start node, which waits for an external signal; the rest of the
process (Action, Human Decision, two Action nodes, End) has not been
reached yet.]

1. An execution is created using our process definition instance in order to
know which activities the process will have.
2. The start node is selected from the process definition and placed inside the
current node reference in the execution instance. This is represented by the
black arrow pointing to the Start node.

3. As the StartNode behaves as a wait state, it will wait for an external
trigger to start the process execution. We need to be aware of this, because we
might otherwise assume that creating an Execution instance automatically begins
the execution.

Stage two

The second stage of the execution of the process is represented by the
following image:
[Figure: stage two — start() is called on the Execution; the Start node is left,
the first Action node runs automatically, and the execution now waits at the
Human Decision node.]

1. We can start the execution by calling the start() method inside the
execution class. This will generate an event that will tell the process to start
flowing through the nodes.
2. The process starts by taking the first transition from the start node to the first
action node. This node only has an action hooked in, so it will execute this
action and then take the transition to the next node. This is represented by
the dashed line pointing to the Action node.
This node continues the execution and does not behave as a wait
state; the current node pointer held by the execution is updated
to this node.
3. The Human Decision node is reached; this means that some user must decide
which path the process will continue through. It also means that the
process must wait for a human being to be ready to decide. Obviously, the
node will behave as a wait state, updating the current node pointer to
this node.

But wait a second, another thing happens here: the process returns the
execution control to the main method. What exactly does this mean?
Up to this point, the execution goes from one wait state (the start node) to
another wait state (the human decision node) inside the method called start(),
enclosing all the automatic nodes' functionality and leaving the process
in a wait state. Let's analyze the method call stack trace:
• Execution.start()
• StartNode.leave(transition)
• ActionNode.enter()
• ActionNode.execute()
• CustomAction.execute()
• ActionNode.leave(transition)
• HumanDecisionNode.enter()
• HumanDecisionNode.execute()

When the process reaches the HumanDecisionNode.execute(), it doesn't need to
do anything more. It returns to the main() method and continues with the next line
after the Execution.start() call.
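The call stack above can be reproduced with a small toy model (hypothetical class names that mirror the text, not the book's actual sources); the automatic action node propagates the execution inside the same start() call, while the wait-state nodes simply return:

```java
import java.util.ArrayList;
import java.util.List;

// Toy graph: Start -> Action -> Human Decision. Wait-state nodes do nothing
// in execute() and hand control back; the automatic action node executes its
// hooked action and immediately leaves through its transition.
abstract class Node {
    final String name;
    Node next; // the single outgoing transition, enough for this sketch
    Node(String name) { this.name = name; }
    void enter(Execution ex) { ex.trace.add(name + ".enter"); ex.current = this; execute(ex); }
    void leave(Execution ex) { ex.trace.add(name + ".leave"); next.enter(ex); }
    abstract void execute(Execution ex);
}

class StartNode extends Node {
    StartNode() { super("StartNode"); }
    void execute(Execution ex) { /* wait state: nothing happens on entry */ }
}

class ActionNode extends Node {
    ActionNode() { super("ActionNode"); }
    void execute(Execution ex) {
        ex.trace.add("CustomAction.execute"); // the hooked action runs here
        leave(ex);                            // automatic node: keep flowing
    }
}

class HumanDecisionNode extends Node {
    HumanDecisionNode() { super("HumanDecisionNode"); }
    void execute(Execution ex) { /* wait state: return control to the caller */ }
}

class Execution {
    Node current;
    final List<String> trace = new ArrayList<String>();

    Execution() {
        StartNode start = new StartNode();
        ActionNode action = new ActionNode();
        start.next = action;
        action.next = new HumanDecisionNode();
        current = start; // the start node sits in the current-node reference
    }

    void start() { current.leave(this); } // fires StartNode.leave(transition)
}

public class CallStackDemo {
    public static void main(String[] args) {
        Execution execution = new Execution();
        execution.start(); // returns once the HumanDecisionNode is reached
        System.out.println(execution.trace);
        // → [StartNode.leave, ActionNode.enter, CustomAction.execute,
        //    ActionNode.leave, HumanDecisionNode.enter]
        System.out.println(execution.current.name); // → HumanDecisionNode
    }
}
```

Running it prints exactly the sequence of calls listed above, and then control falls out of start() back to main(), leaving the execution parked at the human decision node.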

Stage three

The following image represents the third stage of execution of the process:
[Figure: decide() is called on the Human Decision node; the execution takes the chosen transition through the third Action node and reaches the End node]


1. Now we are waiting for a human to make a decision. But wait a second: if the
thread that called the start() method on the execution instance dies, we
lose all the information about the current execution and cannot get it
back later. This means that we cannot restore this execution to continue from
the human decision node. On the other hand, we could just put the thread to
sleep while waiting for the human to be ready to make the decision, with
something like Thread.sleep(X), where X is expressed in milliseconds.
However, we really don't know how much time we must wait until the decision
is taken, so sleeping the thread is not a good option. We will need some kind
of mechanism that lets us persist the execution information and allows us to
restore this status when the user makes the decision. For this simple example,
we just suppose that the decision occurs right after the start() method
returns. So, we get the execution object and the current node (this will be the
human decision node), and execute the decide() method with the name of
the transition that we want to take as an argument.
Let's run ((HumanDecisionNode) execution.getCurrentNode()).decide("transition
to action three", execution). This is an ugly way to make the decision,
because we are accessing the current node from the execution. We could create
a method in the execution class that wraps this ugly call. However, for this
example, it is okay. You only need to understand that the call to the
decide() method is how the user interacts with the process.
2. When we make this decision, the next action node is an automatic node
like the first one, and the process will flow until the EndNode, which
ends the execution instance, because this is the last wait state (although
any action could be attached there to continue). As you can see, the wait
states will need some kind of persistence solution in order to actually be
able to wait for human or asynchronous system interactions. That is why
you need to continue your testing of the execution with
((HumanDecisionNode) execution.getCurrentNode()).decide("transition to
action three", execution), simulating the human interaction before the
current thread dies.
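Continuing the toy model from before (again with hypothetical class names, not the book's real sources), the decide() interaction can be sketched like this: start() leaves the execution waiting at the human decision node, and the simulated user then takes the named transition, which flows on through the third action node to the end node:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Nodes now hold named transitions, so the human decision node can expose
// a decide(transitionName, execution) method for the user interaction.
abstract class Node {
    final String name;
    final Map<String, Node> transitions = new LinkedHashMap<String, Node>();
    Node(String name) { this.name = name; }
    void enter(Execution ex) { ex.current = this; execute(ex); }
    void leave(Execution ex, String transitionName) { transitions.get(transitionName).enter(ex); }
    abstract void execute(Execution ex);
}

class StartNode extends Node {
    StartNode() { super("StartNode"); }
    void execute(Execution ex) { /* wait state */ }
}

class ActionNode extends Node {
    ActionNode(String name) { super(name); }
    void execute(Execution ex) {
        ex.log.add(name);     // stand-in for the hooked action
        leave(ex, "default"); // automatic node: keep flowing
    }
}

class HumanDecisionNode extends Node {
    HumanDecisionNode() { super("HumanDecisionNode"); }
    void execute(Execution ex) { /* wait state: a human must decide */ }
    public void decide(String transitionName, Execution ex) { leave(ex, transitionName); }
}

class EndNode extends Node {
    EndNode() { super("EndNode"); }
    void execute(Execution ex) { ex.finished = true; } // ends the execution instance
}

class Execution {
    Node current;
    boolean finished;
    final List<String> log = new ArrayList<String>();

    Execution() {
        StartNode start = new StartNode();
        ActionNode actionOne = new ActionNode("action one");
        HumanDecisionNode decision = new HumanDecisionNode();
        ActionNode actionThree = new ActionNode("action three");
        start.transitions.put("default", actionOne);
        actionOne.transitions.put("default", decision);
        decision.transitions.put("transition to action three", actionThree);
        actionThree.transitions.put("default", new EndNode());
        current = start;
    }

    void start() { current.leave(this, "default"); }
}

public class DecisionDemo {
    public static void main(String[] args) {
        Execution execution = new Execution();
        execution.start(); // returns while waiting at the HumanDecisionNode
        // simulate the human interaction before the current thread dies:
        ((HumanDecisionNode) execution.current).decide("transition to action three", execution);
        System.out.println(execution.log);      // → [action one, action three]
        System.out.println(execution.finished); // → true
    }
}
```

Without a persistence mechanism, this only works because decide() is called in the same thread, right after start() returns, exactly as the text describes.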

In the following chapters, you will see how this is implemented inside the
jBPM framework.


Homework solution

Don't worry, I have included a solution pack for this exercise, but I really encourage
you to try to do it on your own. Once you try it, you will feel that you really
understand what is going on here, and you will have a lot of fundamental concepts
ready to use.
You can download the solutions pack from http://www.packtpub.com/files/
code/5685_Code.zip; you will find the solution for this homework in the project
called chapter02.homeworkSolution.

Quick start guide to building Maven projects

A quick start guide for building Maven projects is as follows:
• Download and install Maven 2.x (http://maven.apache.org/)
• Append the Maven binaries to the PATH system variable
• Open a terminal/console
• Go to the chapter02 code directory and look for a file called pom.xml
• Type mvn clean install into the console; this will compile the code, run the tests, and package the project
• If you are using NetBeans, you can just open your project (with the Maven plugin activated)
• If you are using Eclipse, you need to run mvn eclipse:eclipse in the project directory in order to generate the files needed for the project; then you can just import the project into your workspace

Summary

In this chapter, we covered the main points that you will need in order to
understand how the framework works internally.
We analyzed why we need the Graph Oriented Programming approach to
represent and execute our business processes.
In the next chapter, we will focus on setting up our environment to get
started with jBPM; not for a simple "hello world", but for real application
development. We will also talk about the project source structure and how we can
create projects that use the framework in an embedded way.

Setting Up Our Tools
This chapter is about setting up the environment that we need in order to build
a rock-solid application that uses jBPM. In order to achieve this, we need to
download, install, and understand a common set of tools that will help us in the
development process.
This chapter will begin with a short introduction to the jBPM project so that we
can be organized and understand what we are doing. We will discuss the project's
position among all the JBoss projects. We will also talk about the project's main
modules and the features proposed by this framework.
All these tools are widely used in the software development market, but for those
who don't know about them, this chapter will briefly introduce these tools one
by one.
This chapter will also include two important steps. Firstly, a revision of the
framework source code structure. Secondly, how to set up this code in our
favorite IDE in order to have it handy when we are developing applications.
Once we have set up the whole environment, we will be able to create two
simple, real applications that will be our project templates for bigger applications.
At the end of this chapter, you will be able to create applications that use the
framework and also gain knowledge about how all of these tools interact in our
development environment.


The following topics will be covered throughout this chapter:
• Background about the jBPM project
• Tools and software
    o Maven
    o MySQL
    o Eclipse IDE
    o SVN client
• Starting with jBPM
    o Binary structure
    o Building jBPM from source
• First jBPM project
• First jBPM Maven Project

Background about the jBPM project

In this section, we will talk about where the jBPM framework is located inside the
JBoss projects. As we know, JBoss jBPM was created and is maintained by JBoss.
JBoss is in charge of developing middleware "enterprise" software in Java. It is
middleware because it is a type of software used to build or run other software, and
"enterprise" because it is focused on big scenarios. Enterprise here does not
necessarily mean Java EE. It is also interesting to know that JBoss was bought by
a company called Red Hat (famous for the Linux distribution with the same name,
and also in charge of the Fedora community distribution).
In order to get the right first impression about the framework, you will need to know
a little about other products that JBoss has developed and where this framework
is located and focused inside the company projects. At this moment, the only
entry point that we have is the JBoss community page, http://www.jboss.org/.
This page contains the information about all the middleware projects that JBoss is
developing (all open source). If we click on the Projects link in the top menu,
we are going to be redirected to a page that shows us the following image:

[Figure: the JBoss projects map: Dev Tools (JBoss Tools, JBoss Profiler, and test tools such as JSFUnit and JRunit), Web Interface (RichFaces/Ajax4jsf, Gravel), the JBoss Application Server block (JBoss Web, JBoss Web Services, JBoss EJB3, JBoss Messaging, Hibernate, Seam, JGroups, JNDI, JCA, IIOP, JBoss JMX, JBoss Transactions, JBoss Remoting, JBoss Cache, JBoss Serialization, JBoss Security & Identity Management, JBoss Federated SSO, JBoss AOP, and Javassist, on top of JBoss AS and the JBoss Microcontainer, plus Teiid), and, outside the application server boundary, the Portal (JBoss Portal), Integration (JBoss ESB, Drools, jBPM), and Telecom (Mobicents) blocks]

This image shows us one major central block, the JBoss Application Server, which
contains a lot of projects intended to run inside this application server.
The most representative modules are:
• JBoss Web: The web container based on the Tomcat Web Server
• JBoss EJB3: An EJB3 container that is standard EJB3 compliant for Java EE 5
• Hibernate: The world-renowned Object Relational Mapping (ORM) framework
• Seam: The new web framework to build rich Internet applications
• JBoss Messaging: The default JMS provider that enables high-performance, scalable, clustered messaging for Java

On top of that, we can see two frameworks for Web Interface design (RichFaces/
Ajax4jsf and Gravel), based on components, which can be used in any web
application that you code.
And then, on top of it all, we can see three important blocks—Portal, Integration,
and Telecom. As you can imagine, we are focused on the Integration block that
contains three projects inside it.
As you can see, this Integration block is also outside the JBoss Application Server
boundaries. Therefore, we might suppose that these three products will run without
any dependency from JBoss or any other application server.
Now we are going to talk about these three frameworks, which have different
focuses inside the integration field.

JBoss Drools

Drools is nowadays focused on business knowledge; because it was born as an
inference engine, it is in charge of using all that business knowledge in order
to take business actions for a specific situation. You can find out more
information about this framework (now redefined as the Business Logic
integration Platform) at http://www.drools.org.

JBoss ESB

It is a product focused on supplying an Enterprise Service Bus (ESB), which
allows us to use different connectors to communicate with heterogeneous
systems that were created in different languages and use different protocols
for communication. You can find out more information about this project at
http://www.jboss.org/jbossesb/.

JBoss jBPM

jBPM has a process-centric philosophy. This involves all the APIs and tools that are
related to the processes and how to manage them. As we are going to see in all the
chapters of this book, the framework perspective is always centered on the business
process that we describe. Also, the services available inside the framework are
only for manipulating the processes. All the other things that we want or need for
integration with our processes will be delegated to third-party frameworks or tools.

Now, if we visit the official page of jBPM (http://www.jbpm.org), we will see
all the official information and updates about the framework. It is important
to look at the home page, which shows us the following image:
[Figure: the jBPM component stack: the jPDL, BPEL, and Pageflow languages (plus a "..." slot for pluggable languages) and the GPD sitting on top of the Process Virtual Machine (PVM), next to the Identity, Task Mgmt, and Enterprise modules]

This is the first image that developers see when they get interested in jBPM. This
image shows us the component distribution inside the jBPM framework project.
Understanding these building blocks (components) will help us to understand the
code of the framework and each part's functionality. Most of the time, this image is
not clearly understood, so let's analyze it!

Supported languages

One of the important things that the image shows is the multi-language support
for modeling processes in different scenarios. We can see that three languages are
currently supported/proposed by the framework with the possibility to plug in
new languages that we need, in order to represent our business processes with
extra technology requirements.
These supported languages are selected according to our business scenario and the
technology that this scenario requires.
The most general and commonly used language is jBPM Process Definition
Language (jPDL). This language can be used in situations where we are defining the
project architecture and the technology that the project will use. In most of the cases,
jPDL will be the correct choice, because it brings the flexibility to model any kind
of situation, the extensibility to expand our process language with new words to
add extra functionality to the base implementation, and no technology pluggability
limitation, thereby allowing us to interact with any kind of external services and
systems. That is why jPDL can be used in almost all situations. If you don't have
any technology restriction in your requirements, this language is recommended.
jBPM also implements the Business Process Execution Language (BPEL),
which is broadly used to orchestrate web service interactions between different
systems. Currently, Versions 1.1 and 2.0 of the BPEL language are supported,
but this language is outside the scope of this book.


For business scenarios where all the interactions are between web services, I
recommend that you make use of this language only if you are restricted to using
a standard like BPEL to model your business process.
PageFlow is the last one shown in the image. This language will be used when
you use the JBoss Seam framework and want to describe how your web pages are
synchronized to fulfill some requirements. These kinds of flows are commonly used
to describe the navigation flow possibilities that a user will have in a website.
Web applications will benefit enormously from this, because the flow of the web
application will be decoupled from the web application code, letting us introduce
changes without modifying the web pages themselves. This language is also outside
the scope of this book.
At last, the language pluggability feature is represented with the ellipse (...). This will
be required in situations wherein the available languages are not enough to represent
our business scenarios. This could happen when a new standard like BPEL or BPMN
arises, or if our company has its own language to represent business processes. In
these kinds of situations, we will need to implement our custom language on top
of the Process Virtual Machine. It is important for you to know that implementing
an entire language is not a trivial task. So, here we
will be focused on learning jPDL in depth, to understand all of its features and how
to extend it in order to fulfill our requirements. Remember that jPDL is a generic
language that allows us to express almost every situation. In other words, the only
situation where jPDL doesn't fit is where the process definition syntax doesn't allow
us to represent our business process or where the syntax needs to follow a standard
format like BPMN or BPEL.
Also, it is important to notice that all these languages are separate from the Process
Virtual Machine (PVM), the block on the bottom-left of the image, which will
execute our defined process. PVM is like the core of the framework and understands
all the languages that are defined. This virtual machine will know how to execute
them and how to behave for each activity in different business scenarios. When we
begin to understand the jPDL language in depth, we will see how PVM behaves for
each activity described in our process definitions.

Other modules

Besides the PVM and all the languages, we can also see some other modules that
implement extra functionality, which will help us with different requirements. The
following list contains a brief description of each module, but there will be chapters
that detail each module:
• Graphical Process Designer (GPD) module: The graphical process designer module implemented as an Eclipse plugin.
• Identity module: This module is a proof-of-concept, out-of-the-box working module used to integrate business roles into our processes. It is focused on letting us represent people/users inside the process definition and execution, and shows us a simple structure for users and groups that can be used inside our processes. For real scenarios, this module will help us to understand how we will map our users' structures to the jBPM framework.
• Task ManaGeMenT (TaskMGMT) module: This module's functionality involves dealing with all the integration that the people/employees/business roles have with the processes. It will help us to manage all the necessary data to create application clients, which the business roles will use in their everyday work.
• Enterprise module: This module brings us extra functionality for enterprise environments.

Now that we know how the components are distributed inside the framework, we
can jump to the jPDL section of jBPM's official web page. Here we will find the third
image that all the developers will see when they get started with jBPM.

[Figure: a jPDL deployment example: processes designed in the Eclipse jPDL Editor are deployed inside a jbpm.war/.ear, which contains jbpm.jpdl.jar (the PVM plus the jPDL language definition), jbpm-identity.jar, and hibernate.jar, persisting to any supported database]


Let's analyze this image to understand why and how the framework can be used in
different platforms. This image tries to give us an example of how jBPM could be
deployed on a web server or an application server. Please, keep in mind that this is
not the only way that jBPM could be deployed on, or embedded in, an application,
because jBPM can also be used in a standalone application. In addition, this image
shows us some of the BPM stages that are implemented. For example, we can see
how the designed processes will be formalized in the jPDL XML syntax in Graphical
Process Designer (GPD)— here called the Eclipse jPDL Editor. On the other side
of the image, we can see the execution stage implemented inside a container that
could be an Enterprise Container (such as JBoss Application Server) or just a web
server (such as Tomcat or Jetty). This distinction is made with the extensions of the
deployed files (war, for Web Archives, and ear, for Enterprise Archives). In this
container, it is important to note the jpdl-jbpm.jar archive that contains the PVM
and the language definition, which lets us understand the process defined in jPDL.
Also, we have the jbpm-identity.jar as a result of the Identity Module that we
have seen in the other image. Besides, we have the hibernate.jar dependency. This
fact is very important to note, because our processes will be persisted with Hibernate
and we need to know how to adapt this to our needs.
The last thing that we need to see is the Firefox/Internet Explorer logo on top of the
image, which shows us how our clients (users), that is, all the people who interact
and perform activities in our processes, will communicate with the framework.
Once again, HTTP interaction is not the only way to interact with the processes;
we can implement any kind of interaction (such as JMS for enterprise messaging,
web services to communicate with heterogeneous systems, email for some extra
flexibility, SMS, and so on).
Now that we have a first impression of the framework, we are ready to go ahead
and install all the tools that we need in order to start building applications.

Tools and software

This section will guide us through the download and installation of all the software
that we will need in the rest of this book.
For common tools such as Java Development Kit, IDE installation, database
installation, and so on, only the key points will be discussed. Detailed explanation
about how to set up all these programs is beyond the scope of this book. You can
always query the official documentation for each specific product.
For the jBPM tooling, a detailed explanation will follow the download and
installation process. We will go into the structure and specification in depth,
covering how and why we are doing this installation.

If you are an experienced developer, you can skip this section and go directly to the
jBPM installation section. In order to go to the jBPM installation section straightaway,
you will need to have the following software installed correctly:
• Java Development Kit 1.5 or higher (this is the first thing that Java developers learn; if you don't know how to install it, please take a look at the following link: http://java.sun.com/javase/6/webnotes/install/index.html)
• Maven 2.0.9 or higher
• A Hibernate-supported database; here we will use MySQL
• The Java connector for your selected database, which you will need to have downloaded
• JBoss 5.0.1.GA installed (if you are thinking about creating enterprise applications, you will need JBoss AS installed; if you only want to create web applications, having Tomcat or Jetty installed will be fine. I've included some Java EE examples in the book, therefore JBoss AS is recommended)
• Eclipse IDE 3.4 Ganymede (the suggested version; you can try other versions, but this is the one tested in the book)
• An SVN client; here we will use TortoiseSVN (available for Windows only; you can also use a Subversion plugin for Eclipse or for your favorite IDE)

If you have all this software up and running, you can jump to the next section. If
not, here we will see a brief introduction of each one of them with some reasons
that explain why we need each of these tools.

Maven—why do I need it?

Maven is an Apache project that helps us to build, maintain, and manage our
Java Application projects. One of the main ideas behind Maven is to solve all the
dependency problems between our applications and all the framework libraries
that we use. If you read the What is Maven? page (http://maven.apache.org/
what-is-maven.html), you will find the key point behind this project.
The important things that we will use here and in your daily work will be:
• A standard structure for all your projects
• Centralized project and dependencies description


Standard structure for all your projects

Maven proposes a set of standard structures to build our Java projects. The project
descriptor that we need to write/create depends on the Java project type that we
want to build. The main idea behind it is to minimize the configuration files to build
our applications. A standard is proposed to build each type of application.
You can see all the suggested standard structures on the official Maven page:

http://maven.apache.org/guides/introduction/introduction-to-the-standard-directory-layout.html.
Don't worry too much about it! In the project examples at the end of the
chapter, you will see how all of these topics work. During the rest of this
book, we will be analyzing different projects, which will introduce you to
the main functionalities of Maven.

Centralized project and dependencies description

When we are using Maven, our way of building applications and managing the
dependencies needed by these applications changes a lot. In Maven, the concept
of Project Object Model (POM) is introduced. This POM will define our project
structure, dependencies, and outcome(s) in XML syntax. This means that we will
have just one file where we will define the type of project we are building, the first
order dependencies that the project will have, and the kind of outcome(s) that we
are expecting after we build our project.
You have already seen these POM files in the chapter02 examples where we built
jar files with Maven.
Take a look at the following pom.xml file:


<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.jbpm.examples</groupId>
  <artifactId>chapter02.homeworkSolution</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>chapter02.homeworkSolution</name>
  <url>http://maven.apache.org</url>
  <build>
    <plugins>
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.0.2</version>
        <configuration>
          <source>1.5</source>
          <target>1.5</target>
        </configuration>
      </plugin>
    </plugins>
  </build>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>

We are basically defining all the mentioned characteristics of our project. If you
take a look at the pom.xml file for the homeworkSolution project discussed in
Chapter 2, jBPM for Developers, you will realize that we haven't described how to
build this project. We also haven't specified where our sources are located and
where our compiled project will be placed after it is built.
All this information is deduced from the packaging attribute, which in this case is:

<packaging>jar</packaging>

The standard structure of directories will be used in order to know where the source
code is located and where the compiled outcome will be placed.
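Under that convention, a jar-packaged project such as homeworkSolution follows a layout roughly like this (a sketch of the Maven standard, not a listing of the actual solution pack):

```
chapter02.homeworkSolution/
|-- pom.xml              (the project descriptor)
|-- src/
|   |-- main/java/       (application sources)
|   |-- main/resources/  (configuration files, process definitions)
|   |-- test/java/       (unit tests, run by mvn clean install)
|   `-- test/resources/  (test-only resources)
`-- target/              (generated output: compiled classes and the packaged JAR)
```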

Maven installation

Getting Maven installed is a very simple task. You should download the Maven
binaries from the official page (http://maven.apache.org). This will be a .zip file,
or a .tar.gz file, which you will only need to uncompress into your programs
directory. You will also need to add the bin directory to the system PATH variable.
With that, you will be able to call the mvn command from the console.


To test whether Maven is working properly, you can open the Windows console and
type mvn. You should get something like this:

[Screenshot: the output of the mvn command, ending in a build error because no project descriptor is present]

This output shows us that Maven is correctly installed. However, as mvn was run
in C:\Documents and Settings\salaboy21\ (a directory that contains no project
descriptor), the build failed.
I strongly recommend that you read and understand the Getting Started section
in the official Maven documentation at http://maven.apache.org/guides/
getting-started/index.html.

Installing MySQL

In most situations, we will need to store the current state of our processes and all the
information that is being handled inside it. For these situations, we will need some
persistence solution. The most common one is a relational database. In this case, I
chose MySQL, as it's free and easy to install in most of the operating systems. Feel
free to try and test jBPM with any of the other Hibernate-supported databases.
Installing MySQL is very easy, you just need to download the binaries provided on
the official page (http://www.mysql.org), then run it and follow the instructions
that appear in the wizard window. The only non-default options that I chose were
the installation type (Developer Machine) and running MySQL as a Windows service.
This will help us to always have a running database for our processes, and it will
not be necessary to start or stop it at the beginning or end of our sessions.

Downloading MySQL JConnector

In order to connect every Java application to a relational database, we will need a
particular connector for each database. This connector, in this case, will be used by
Hibernate to create a JDBC connection to the database when it needs to store our
processes' data.
Almost all vendors provide this connector on their official pages; for MySQL,
you can look for it on the MySQL official page.
You will need to download it and then put it in your application class path, probably
right next to Hibernate.
Remember that this MySQL JConnector (JConnector is the name of the MySQL
JDBC driver) is only a Java library containing the classes that know how to create
and handle connections between our Java programs and the MySQL database
server. For this reason, like any other dependency, you can use JConnector with
Maven, as it's only a JAR file. Depending on your database and its version, you
will need to add the correct dependency to your project descriptor (in the
pom.xml file).
In this case, because I'm using MySQL 5, I will need to add the following
dependency to my project in the dependencies section:

<dependency>
  <groupId>mysql</groupId>
  <artifactId>mysql-connector-java</artifactId>
  <version>5.1.6</version>
</dependency>


Eclipse IDE

In these scenarios where we will handle a lot of different types of information—for
example Java code, XML files, business processes, configuration files, and so on—it's
very handy to have a single tool that handles all of these kinds of files and lets us
build/compile, run, and debug all our applications. For this, we have the Eclipse
IDE, which provides us with a lot of tools for our development work. Of course,
we can install other IDEs, such as NetBeans or IntelliJ IDEA, but in this case the
jBPM plugins are only available for Eclipse. Writing a plugin for the NetBeans IDE
(another open source IDE) could be a very interesting contribution, and the
community would be very happy. This doesn't mean that you can't use any other
IDE, but you will probably need to write your processes in XML format from scratch.


Installing the Eclipse IDE is also very easy: you just need to download the IDE binary
from the official page (http://www.eclipse.org/downloads/). Then uncompress
the ZIP file inside the programs directory; when that's done, you will find the
eclipse.exe executable that you can run. The only things that you need here are to
have your JAVA_HOME variable set and the Java JDK binaries in your PATH variable.
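As a quick sketch, on a Unix-like shell these two variables could be prepared as follows (the JDK path shown is only an assumption; point it at your actual installation):

```shell
# Hypothetical JDK location -- adjust to where your JDK is actually installed
JAVA_HOME=/usr/lib/jvm/java-6-sun
export JAVA_HOME

# Put the JDK binaries (java, javac) in front of the PATH
PATH="$JAVA_HOME/bin:$PATH"
export PATH
```

On Windows, the same variables are set through the System Properties | Environment Variables dialog.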

Install Maven support for Eclipse

This is an optional plugin that can make our life easier. It allows us to invoke and
use Maven inside the IDE. This means that our IDE will know about Maven and give
us different options to handle and manage our Maven projects.
This plugin, like any other plugin for Eclipse, needs to be installed with the
Software Update Manager. You will find instructions about how to do that on
each plugin's official page (http://code.google.com/p/q4e/wiki/Installation).
The quick way is to add the update site (go to Help | Software Update inside the
Eclipse IDE). In the Available Software tab, click on the Add Site button and enter
http://q4e.googlecode.com/svn/trunk/updatesite/. Then choose the site, wait
for Eclipse to get all the information about it, and choose the plugin from the list.
This will install all the components in the list, and all this functionality will be
available in the IDE.

SVN client

It's very important for you and your projects to have a proper source version control
system. In this case, we will not version our projects, but we will download the
source code of the framework from the official JBoss SVN repositories. For this task,
you could add an SVN plugin to the Eclipse IDE or use an external tool to do
the same.
For this book, I chose TortoiseSVN, a widely used SVN client that lets us manage
all our versioned projects by integrating with the Windows environment. You only
need to download the binaries from the official TortoiseSVN page, run the installer,
and then restart your machine. This is because some major changes are needed to
integrate the SVN client with the Windows Explorer.

Chapter 3

Starting with jBPM

At this point, we are ready to download and install the jBPM framework on our
machine and start building applications that use it.
As I said in the first chapter of this book, jBPM is a framework like Hibernate, and
Hibernate doesn't need an installation, so why would we need to install jBPM?
"Install jBPM" doesn't sound right at all. Like every other framework, jBPM is only a
JAR file that we need to include in our application classpath in order to use it.
So, the idea in this section is to find out why we download a jBPM installer from the
jBPM official page, what is included in this installer, and in which situations we need
to install all these tools.

Getting jBPM

Basically, as developers, we have two ways to get the framework up and working:
we can download the installer from the official jBPM site, or we can download
the source and build it ourselves. Neither of these two ways is complex,
but getting the source code of our frameworks will always help us to debug our
applications by looking at how the framework behaves at runtime. We will
analyze both ways in order to show which is the quickest way to have the
framework ready for production, and also what we need to do in order to extend
the framework's functionality.

From binary

Just download the framework binaries from the jBPM official page and run the
installer; you will see that the installer asks you about other installed software, such
as the JBoss Application Server and the database that you will use.
This binary file (the installer) contains the framework library archives, a graphical
process designer (an Eclipse plugin), and a JBoss profile that contains everything
you will need to start up a JBoss Application Server instance and start using jBPM
deployed inside it. It also contains the framework sources, which give us the
framework code that we need in order to debug our applications and see our
processes in action.


Here we will see the results of this installation and the directory structure that
contains jBPM files.

As you can see in the image, we will have the directories as described in the
following sections:

config directory

This directory will contain XML files with the configuration needed for the database
that we choose. We will use the file called hibernate.cfg.mysql.xml. You will
need to choose the file for your corresponding installed database. If you open this
file, you will find something like this:


This file, by default, configures a JDBC connection for jBPM persistence. I know
that persistence has not been introduced yet, but when I talk about it later, you will
know where the configuration is.
In the first section of the file, you will find a tag called <session-factory>. With
this information, jBPM can create two types of database connections—JDBC or
DataSource connections.
These two types of connections are used depending on the environment. If we are
building an application that will run outside a container, we need to configure the
JDBC connection as shown in the image, where we specify the dialect that Hibernate
will use and the driver corresponding to the database we chose.
Pay close attention to the com.mysql.jdbc.Driver class, because this class lives
inside MySQL's JConnector, so that JAR must be in the application classpath. The
other properties used here are the common ones needed to set up a connection to
a database.
For a DataSource connection, meaning that our application will run inside a
container like JBoss (an Enterprise Application container) or any other application
server, we will need to uncomment the line that specifies the DataSource connection
and comment the lines that specify the JDBC connection. With this DataSource
connection, all the access to the database will be handled by a connection pool that
will be inside the container that we will use—this is the reason why we only need
to specify the name of the DataSource, and then the container will handle all the
connection details for us.
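As a rough sketch of what those two alternatives look like in hibernate.cfg.mysql.xml (the property names are standard Hibernate ones, but the URL, username, password, and DataSource name below are illustrative assumptions, so check the values shipped in your file):

```xml
<!-- JDBC connection: used when the application runs outside a container -->
<property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
<property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
<property name="hibernate.connection.url">jdbc:mysql://localhost:3306/jbpm</property>
<property name="hibernate.connection.username">jbpm</property>
<property name="hibernate.connection.password">jbpm</property>

<!-- DataSource connection: comment the lines above and uncomment this
     one when running inside a container such as JBoss -->
<!-- <property name="hibernate.connection.datasource">java:JbpmDS</property> -->
```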

database directory

This directory is very useful because inside it we find the schema definitions that
jBPM uses—one generation script per supported database. If you open the file called
jbpm.jpdl.mysql.sql, you will find all the tables used by jBPM. The important
thing to know here is that all these scripts should be reviewed by your Database
Administrator, because these tables are not optimized for a production environment.
What does this mean exactly? It is only a recommendation: jBPM generates a lot of
data, and if you are in a production environment where 1,000 users have access to
your defined process, this structure will generate and log a lot of information by
default. So, take a look at this file; you could also execute it inside a MySQL database
to see the tables that are created.


designer directory

This directory contains just one ZIP file that has the structure of an Eclipse update
site and contains only the plugin that will turn our IDE into a Business Process
Designer. If you want to install this plugin in your current Eclipse installation, you
only need to go to Help | Software Updates. When the Software Updates and
Add-ons window opens, switch to the Available Software tab, click on the Add
Site… button followed by the Archive button, and locate the ZIP file placed in
this directory.

Then you should select the newly added feature, jBPM jPDL GPD, and click on
the Install... button. This will install all the software needed to start using the jBPM
plug-in inside Eclipse, but you will need to restart Eclipse before you can use it.

docs directory

This directory contains all the Javadoc for all the code of the jBPM different modules.
It also contains the HTML user guide. This is very important documentation that will
also help you to understand the framework.

examples directory

This directory contains a Maven project that includes a lot of tests that show us how
to use the jBPM APIs. If you are starting with jBPM, this example project will help
you to see the most common ways to use the jBPM APIs.

lib directory

This contains all the JARs needed by the framework. You will need to add these
JARs to every application that uses jBPM; they are necessary if we don't use Maven.
If you choose to build your applications with Maven, these dependencies will be
resolved automatically. These JARs need to be in our application classpath to
compile and run a project that uses the jBPM framework.

src directory

This directory contains JAR files that include the source code of the framework,
which is very useful for debugging our projects. Notice that this code cannot be
modified or rebuilt; these source JARs are only helpful to see what the currently
executed code is. To build the framework from scratch, you will need to download
the code from the official JBoss SVN repositories, which also contain all the project
descriptors needed to build the sources. This topic is described in the
following section.

From source code

This section focuses on introducing the community members' "common way of
doing things". You could take this way of building the source code downloaded
from an SVN server as one step towards becoming a part of the community.
Remember that jBPM, like all the projects hosted on JBoss's official community page,
is an open source project. So take advantage of that: as an open source developer,
you can get involved with the projects that you use and help the community to
improve them. This is not as difficult as you might assume.
That is why you always have the chance to get all the source code from the official
JBoss SVN repository. With these sources, you will be able to build, from scratch, the
same binaries that you can download from the jBPM official page.
You can take advantage of this when you need to modify or extend the base code
of the framework. This is not recommended for first-time users; modifications and
extensions need to be extremely well justified.
Here is a quick way to do it. First of all, we need to get the framework sources
from the official JBoss SVN repository server. We do that with the SVN client
(the TortoiseSVN that we downloaded and installed earlier); we just need to
check out the source code of our framework version.
The official JBoss SVN repository can be found at http://anonsvn.jboss.org/
repos/jbpm/jbpm3.
You can see three directories in this repository: branches, tags, and trunk.

In the branches directory, you will find different subdirectories that contain code
for testing new functionality that will be added to future framework releases.
Branches are also used to patch existing versions that are in maintenance mode.
In the tags directory, you will find all the released versions of jBPM, which you
will point to in order to check out the framework sources. In this case, you will
need to check out the sources located at http://anonsvn.jboss.org/repos/jbpm/
jbpm3/tags/jbpm-3.2.6.SP1/.
These sources were used to build all the binaries of the 3.2.6.SP1 version that is
uploaded to the official page.
Finally, in the trunk directory, you will find the latest code, where the community
members continuously add to and improve the framework.
In order to get the code from the official repository, you only need to do a checkout
with your SVN client, as shown in the following screenshot:

By right-clicking on every directory, you will see the Tortoise SVN options. You just
have to click on SVN Checkout. This will pop up a window like the following one:


You should set the repository URL to the official JBoss SVN repository and the
checkout directory to where your working copy will be located. I created a new
directory called projects inside the software directory. We will locate all our
projects there from now on, because the jBPM framework is just another project we
will put in there. When you click on OK, the checkout of the source code will begin.
When the checkout command finishes, you will have all the sources of a large
Maven project in the directory that you specified.
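If you are not on Windows, or simply prefer a command-line client, the equivalent checkout can be done with the svn tool (the target directory name is just a suggestion; a network connection and an installed svn client are assumed):

```shell
# Check out the 3.2.6.SP1 tag into a local working copy
svn checkout \
  http://anonsvn.jboss.org/repos/jbpm/jbpm3/tags/jbpm-3.2.6.SP1/ \
  jbpm-3.2.6.SP1
```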
Now you can build this project by running the following command from the console:
mvn clean install -Dmaven.test.skip

The -Dmaven.test.skip flag is only used here to make the build faster.
Assuming that the downloaded code is a stable release, all the tests
in the project should pass, but running all the tests for the whole
project can take a long time. You can try without this flag if
you want.

These Maven goals build the framework, skipping all the tests, and then install the
resulting JARs into the local Maven repository. The local Maven repository contains
all the compiled artifacts that you build. This means that we have a local directory
on our machine containing all the JARs that we compile and all the JARs needed by
our applications (dependencies). The default location of this repository is
~/.m2/repository/.


jBPM structure

Understanding the jBPM framework structure is an important task. We will find out
how the framework sources are divided by looking at similarities with the theoretical
background discussed in Chapter 2, jBPM for Developers.
This section is also very important for those programmers who want to be active
community developers, fixing issues and adding new functionality.
As we have already discussed, jBPM is built and managed with Maven. For this
reason, we find a file called pom.xml inside our working copy of the official JBoss
SVN repository that represents the project as a whole. If we run Maven goals against
this project, the whole framework is built. As we saw in the previous section, all the
project modules were built; the previous screenshot informs us that, by default, the
Core, Identity, Enterprise, Examples, and Simulation modules are built when we run
the clean install goals on the main project. With the install goal, the generated
JAR files are also copied to our local Maven repository, so we can use them in our
applications by referencing only the local Maven repository.
So, the idea here is to see in detail what exactly these modules include. If you open
the modules directory that is located inside your working copy, you will see the
following sub-directories:

In the next few sections, we will talk about the most important modules that
developers need to know in order to feel comfortable with the framework. Take
this as a quick, deep introduction to becoming a JBoss jBPM community member.


Core module

The most important module of the jBPM framework is the core module, which
contains all the framework functionality. Here we find the base classes that we
will use in our applications. If you open this directory, you will find the pom.xml
file that describes this project. The important things to take from this file are the
Maven artifactId and groupId. We will use this information when building our
applications, because we will need to specify the jBPM dependency in order to use
the framework classes.
The following image shows only the first section of the pom.xml file located
inside the modules/core/ directory. This file describes the project name,
the group ID that it belongs to, and also the relationship with its parent
(the main project).
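In our own applications, declaring this dependency in the pom.xml would look roughly like the following sketch (the groupId shown here is an assumption, so copy the exact groupId and artifactId values from the core module's own pom.xml):

```xml
<dependency>
    <!-- groupId/artifactId: verify against modules/core/pom.xml -->
    <groupId>org.jbpm.jbpm3</groupId>
    <artifactId>jbpm-jpdl</artifactId>
    <version>3.2.6.SP1</version>
</dependency>
```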

If you open this file, you will notice that the next section describes all the
dependencies that this project (JAR archive) needs in order to be built. This is also
interesting when you want to know exactly which libraries the framework needs in
the classpath in order to run. Remember that Maven takes care of all the transitive
dependencies, meaning that in this project file only the first-order dependencies are
described. So, for example, in the dependencies section of the pom.xml file, we will
see the Hibernate dependency, but you won't see all the artifacts needed to build and
run Hibernate; Maven takes care of all these second-order dependencies.


If we build only the Core module project by running the clean install goal
(mvn clean install -Dmaven.test.skip), we will get three new JAR archives in
the target directory. These archives will be:
•	jbpm-jpdl-3.2.6.SP1.jar: The core functionality of the framework. You will need this JAR in all your applications that use jBPM directly. Remember, if you are using Maven, you will need to add this artifact dependency to your project and not this archive.
•	jbpm-jpdl-3.2.6.SP1-config.jar: Some XML configurations that the framework needs. This configuration will be used if you need your process to persist in some relational database.
•	jbpm-jpdl-3.2.6.SP1-sources.jar: This JAR will contain all the sources that were used to build the main jar file. This can be helpful to debug our application and see how the core classes interact with each other when our processes are in the execution stage.

You will also find a few directories that were used as temporary directories to build
these three jar files.

DB module

This module is in charge of building the different database schemas needed to run
jBPM on the different database vendors that Hibernate supports. If you build this
module, the generated scripts will be placed in the target directory of the project
(generated by running the clean and install Maven goals).

Distribution module

This module exists only to build and create the binary installer that can be
downloaded from jBPM's official page. If you want to get a modified installer of
the framework, you will need to build this module, but it is rarely used by
development teams.

Enterprise module

This module will contain extra features for high-availability environments, including
a command service to interact with the framework's APIs, an enterprise messages
solution for asynchronous execution, and enterprise-ready timers.
If we build this module, we will get three JAR files: the main one,
jbpm-enterprise-3.2.6.SP1.jar, plus one with the source code and one with the
configuration files that these classes need.

A deep analysis about this topic will be done in Chapter 12, Going Enterprise.

Example module

This is a very interesting module because it contains some basic examples of how
the framework can be used. If you open it, you will find different packages with
JUnit tests that show us how to use the framework APIs in common situations.
These tests are intended only for learning purposes and introduce the most common
classes that all developers will use. Feel free to play with these tests, modify them,
and try to understand what is going on there.

Identity module

This module contains a proof-of-concept model to use out of the box when we start
creating applications that handle human interactions. The basic idea is to have a
simple model representing how the process users are structured in our company. As
you can imagine, depending on the company structure, we need to be able to adapt
this model to our business needs. This is just a basic implementation that you will
probably replace with your own customized implementation.

[Class diagram of the identity model: Entity (name: String) is the base type and holds permissions* of type Permission (from java::security); User and Group extend Entity; Membership links user* and group*; Group also has a groupType: String attribute and a parent/children hierarchy.]


Simulation module

This module includes some use cases for simulating our process executions. The
idea is to learn how to obtain reports that help us improve our processes by
measuring times and costs for each execution.

User Guide module

This module lets you build the official documentation from scratch. It is not built
when the main project is built, just to save time. You can build the documentation
in three formats: HTML separated into chapters, one single long HTML file, or PDF.
Knowing this structure will help us decide where to make changes and where to
look for specific functionality inside the framework sources. Try to go deep inside
the src directory of each project to see how the sources are distributed in
more detail.

Building real world applications

In this section, we are going to build two example applications, both similar in
content and functionality but built with different methodologies. The first will be
created using the Eclipse plugin provided by the jBPM framework. This approach
gives us a quick structure that lets us create our first application using jBPM.
For the second application, we will use Maven to describe and manage our project,
simulating a more realistic situation where complex applications are built by mixing
different frameworks.

Eclipse Plugin Project/GPD Introduction

In this section, we will build an example application using the Eclipse plugin that
the jBPM framework provides. The idea is to look at how these kinds of projects are
created and what structure the plugin proposes.
The outcome of this section will be a Process Archive (PAR) file generated by the
GPD plugin, which contains a process definition and all the technical details needed
to run it in an execution environment.
To achieve this, I have set up my workspace in the projects directory inside the
software directory. With the jBPM plugin installed, you will be able to create new
jBPM projects by going to File | New | Other and choosing the new type of project
called Process Project.


Then you must click on the Next button and assign a name to this project.
I chose FirstProcess for the project name (I know, a very original one!) and
clicked on Next again.
The first time that you create some of these projects, Eclipse will ask you to choose
the jBPM Runtime that you want. This means that you can have different runtimes
(different versions of jBPM to use with this plugin) installed.


To configure the correct runtime, you need to locate the directory that the installer
creates (it's called jbpm-3.2.6.SP1) and then assign a name to this runtime.
A common practice is to include the version in the name; this helps us identify
the runtime with which we configure our process projects.
Then click on the Finish button at the bottom of the window. This will
generate our first process project, called FirstProcess.
If you have problems creating a new jBPM project, you will notice a red
cross placed on your project name in the Project Explorer window. You
can see the current problems in the Problems window (Windows | Show
View | Problems). If the problem is that a JAR file called activation.jar
is missing, you should apply a workaround to fix this situation: go to
your jBPM installation directory (in this case, software/programs/
jbpm-3.2.6.SP1 on my desktop), then go to src/resources/gpd, open
the file called version.info.xml, and remove the line that references
the file called activation.jar. Then restart the IDE and the problem
will disappear.
If you create the process project and the sample process definition is
not created (under src/main/jpdl), you can use the project called
FirstProcess inside this chapter's code directory.

GPD Project structure

Once you have created the project, we can take a look at the structure
proposed by the plugin.


This image shows us the structure proposed by the GPD plugin. Four source
directories will contain the different types of resources that our project uses.
The first one, src/main/java, will contain all the Java sources that our process
uses in order to execute custom Java logic. Here we put all the classes that will be
used to achieve custom behavior at runtime. When you create a process project,
a sample process and some classes are generated. If you look inside this directory,
you will find a class called MessageActionHandler.java. This class represents a
technical detail that the process definition uses in order to execute custom code
while the process is being executed.
The src/main/config directory will contain all the resources that will be needed to
configure the framework.
In the src/main/jpdl directory, you will find all the defined processes. When you
create a sample process with your project, a process called simple is created. If you
create a project and the sample process is not created, just open the project called
FirstProcess inside the code directory of this chapter.
And in src/test/java, you will find all the tests created to ensure that our
processes behave in the right way when they get executed. When the sample
process is created, a test for this process is also created. It will give us a quick
preview of the APIs that we will use to run our processes.
For the sample process, a test called SimpleProcessTest is created. This test creates
a process execution and runs it to test whether the process will behave in the way in
which it is supposed to work. Be careful if you modify the process diagram, because
this test will fail. Feel free to play with the diagram and with this test to see what
happens. Here we will see a quick introduction about what this test does.

SimpleProcessTest

This test is automatically created when you create a jBPM process project with a
sample process. If you open this class, located in the src/test/java directory of the
project, you will notice that the behavior of the test is described with comments in
the code. Here we will see, step by step, what the test performs and how it uses the
jBPM APIs to interact with the process defined using the Graphical Process Editor.
This test class, like every JUnit test class (for JUnit 3.x), extends the TestCase class.
It then defines each test inside methods whose names start with the prefix test*. In
this case, the test is called testSimpleProcess(). Feel free to add your own tests in
other methods that use the test* prefix.


If we look at the testSimpleProcess() method, we will see that the first line of code
creates an object called processDefinition of the ProcessDefinition type using
the processdefinition.xml file.
ProcessDefinition processDefinition = ProcessDefinition
    .parseXmlResource("simple/processdefinition.xml");

At this point, we will have our process definition represented as an object. In other
words, the same structure that was represented in the XML file, is now represented
in the Java Object.
Using the APIs provided by JUnit, we will check that the ProcessDefinition object
is correctly created.
assertNotNull("Definition should not be null", processDefinition);

Then we need to create a process execution that will run based on the process
definition object. In the jBPM language, this concept of execution is represented with
the word instance. So, we must create a new ProcessInstance object that will
represent one execution of our defined process.
ProcessInstance instance = new ProcessInstance(processDefinition);

Then the only thing we need to do is interact with the process and tell the process
instance to jump from one node to the next using the concept of a signal, which
represents an external event telling the process that it needs to continue the
execution to the next node.
instance.signal();

If you take a look at all the assert methods used in the code, they only confirm that
the process is in the node in which it is supposed to be.
Another thing that this test checks is that the Actions attached to the process change
the value of a process variable. Try to figure out what is happening with that variable
and where the process definition changes this variable's value.
The following assert can give you a small clue about it:
assertEquals("Message variable contains message",
    instance.getContextInstance().getVariable("message"),
    "Going to the first state!");


To run this test, you just need to right-click on the source of this class, go to
Run As, and then choose JUnit Test.

You should check whether the test succeeded in the JUnit panel (a green light will be
shown if all goes well).

Graphical Process Editor

In this section, we will analyze the most used GPD windows, giving a brief
introduction to the functionality that this plugin provides. We have already seen
the project structure and the wizard for creating new jBPM projects.
The most frequently used window that GPD offers is the Graphical Process
Editor, which lets us draw our processes in a very intuitive way.
This editor contains four tabs that provide different functionality for different
users/roles.


Setting Up Our Tools

The Diagram tab

This tab will let us draw our business process. Basically, we only need to
drag-and-drop the basic nodes proposed in the palette and then join them with
transitions. This is a very useful tool for business analysts who want to express
business processes in a very intuitive way with help from a developer. This will
improve the way that business analysts communicate with developers when they
need to modify or create a new business process.

The Deployment tab

This tab is exclusively for developers who know and understand the environment
in which the process will be deployed. This tab will help us to create and deploy
process archives. These special archives will contain all the information and technical
details that the process will need in order to run. In order to get our process archive
ready for deployment, first we need to select the resources and the Java classes that
will be included in this process archive, and then check the Save Process Archive
locally option. Now, we provide a correct path to store it and then click on Save
Without Deploying. Feel free to inspect the process archive generated—it's just a
ZIP file with the extension changed to .par with all the classes compiled and ready
to run.

The Source tab

This tab shows us the jPDL XML generated by the Diagram tab and keeps
changes made in either tab in sync. Try not to change the graph structure from
this view, because you could break the parser that constantly checks this view
for changes in order to sync it with the diagram view. In the next chapter, we will
analyze each type of node, so you will probably feel comfortable writing jPDL
tags without using the GPD diagram view. In that situation, when you know what
you are doing, I don't recommend using the plugin; just create a new XML file
and edit it without opening the file as a process. This allows us to be IDE agnostic,
to know exactly what we are doing, and not to depend on any tool to understand
what is happening with our processes.


Properties panel

This panel is used in conjunction with the Graphical Process Editor window, as it gives
us information about all the elements displayed in our process definitions. If
you can't see this panel, go to Window | Show View | Properties.
This panel will help you add extra information to your diagrammed process and
customize each element inside it.
The panel shows different sections and information for each element type
inside our process. If you look at the following screenshot, you will see that it
represents all the properties that can be set on the process definition element. In
other words, these properties apply to the process definition and not to any
particular element defined inside the process. The selected element is shown at the
top of the panel with its element type and icon. In this case, the Process Definition
element is selected.

To see the global process definition information, you should click on the white
background of the GPD designer window that has no elements selected.
Also, you should notice the additional tabs for each element. In this case, we could
add Global Exceptions handlers, Global Tasks definitions, Global Actions, Swimlanes
and Global Events to our process. All these different technical details will be
discussed in the coming chapters.
If you select another element of the process, you will notice that this panel
changes, showing other information to be filled in and other tabs for
selection, depending on the selected element's type. The following image shows a
selected state node in our process. As we have discussed earlier, it is important
to notice that the tabs available for the state node let us add other types of
information, all of which relate only to the state node.


Outcome

As we mentioned before, the outcome of this section is only a PAR generated with
the Deployment tab of the GPD Designer window. Feel free to look at and modify
the sample process provided by the plugin and regenerate the PAR to see what
has changed.

Maven project

The idea of this section is to build an application using Maven to describe the project
structure, meaning that we will write the pom.xml file from scratch. For this
example, we will use the same code that tested our process defined with the GPD
plugin, in this case to create a console application that uses the jBPM API.
The outcome of this section will be a project template that you can use to
create any project you want, containing the jBPM library to manage
and use business processes.
The only things you will need for this are a text editor and a console to create the
directory structure. Maven needs this directory structure to know where our
project sources are located in order to build them.
If you prefer, you could also use the Eclipse Q4E Maven plugin and Maven
archetypes to create this structure. In both cases (using the plugin or working
by hand), we need the following layout in order to have a standard project structure:
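For reference, the conventional layout looks roughly like this. The directory names are Maven's defaults, and the project name FirstMavenProcess is the one used later in this chapter:

```
FirstMavenProcess/
    pom.xml
    src/
        main/
            java/        (application sources)
        test/
            java/        (test sources)
```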


Inside both src/main/java and src/test/java, you could start adding Java classes
structured in packages like you normally do in all of your projects.
The most common goals that you will use to build and manage your projects are:
•

clean: This goal will erase all the compiled code located in the
target directory.

•

package: This goal will build and package our project following the
packaging declared in the pom.xml file. The resulting JAR file will be
located in the target directory if the build succeeds. If the compilation
of just one class fails, the whole goal fails.

•

test: This goal will run all the tests inside our project (located in the
src/test/java directory). If just one of these tests fails, the test goal fails.
If this goal is called from another goal, for example the install goal, then
when just one test fails, the install goal will also fail.

•

install: This goal will run the build lifecycle up to package (which
includes running the tests), and if everything succeeds, the outcome located
in the target directory will be copied to the local Maven 2 repository.

It is very helpful to know that you can chain these goals together in order to run
them together, for example mvn clean install. This will clean your target
directory and then package and test the compiled code. If all these goals succeed,
the outcome will be copied to the local Maven2 repository.
Sometimes, in the development stage, you know that some tests are failing, but you
still want to compile and install your project for testing. In these situations, you can
use the following flag to skip the tests in the install goal: mvn clean install
-Dmaven.test.skip=true.
The main points to notice in the pom.xml file will be the project description with the
artifact ID, the group ID properties, and of course the packaging tag with the jar
value that specifies the outcome when we build our project.
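Putting those pieces together, a minimal pom.xml could look like the following sketch. The com.example groupId is purely illustrative (the book does not fix one); the artifactId, version, jar packaging, and the jBPM dependency coordinates match the values used in this section. Depending on your setup, you may also need to declare the JBoss Maven repository where the jBPM artifacts are hosted.

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>

  <!-- the groupId is illustrative; use your own organization's id -->
  <groupId>com.example</groupId>
  <artifactId>FirstMavenProcess</artifactId>
  <version>1.0-SNAPSHOT</version>
  <!-- the packaging tag with the jar value: the build outcome is a JAR -->
  <packaging>jar</packaging>

  <dependencies>
    <dependency>
      <groupId>org.jbpm.jbpm3</groupId>
      <artifactId>jbpm-jpdl</artifactId>
      <version>3.2.6.SP1</version>
    </dependency>
  </dependencies>
</project>
```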
With the Q4E Maven 2 plugin in Eclipse, we can create this project by
following the steps described in the rest of this section.
If you have installed the Q4E plugin in your Eclipse IDE, you should go to
the New Project wizard and then choose Maven 2 Project | Maven 2 Project
Creation Wizard, then click on Next and enter a new name for the project. I chose
FirstMavenProcess, because it is the name of the root directory shown in the
previous image. Inside this directory, all the standard directory structure needed by
Maven will be created. Once you enter the name and click on Next, the following
screen will appear:

In this screen, we will choose the Maven archetype that we want for our project.
These archetypes specify what kind of structure Maven will use to create our
project. The maven-archetype-quickstart archetype represents the most basic
structure for a JAR file.
If you browse the list, you will also find an archetype called
maven-archetype-webapp that is used for creating the standard structure for
web-based applications.
At this point, you can click on Finish, and the corresponding project structure and
the pom.xml file will be created.


If we create this file manually, or using the Q4E plugin, it should look like the
following image:

Now, to tell Maven that this project will use jBPM, we only need to add a
reference to the jBPM artifact in the dependencies section:

<dependency>
  <groupId>org.jbpm.jbpm3</groupId>
  <artifactId>jbpm-jpdl</artifactId>
  <version>3.2.6.SP1</version>
  <optional>false</optional>
</dependency>

This will ensure that Maven takes care of the jBPM artifact and all of its required
dependencies.
You can now open the project called FirstMavenProcess located in the chapter03
code directory. However, because Eclipse needs some special files to recognize this
directory as an Eclipse project, you will need to open a console, go to the directory
called FirstMavenProcess, check that the pom.xml file is inside the directory, and
then run the mvn eclipse:eclipse goal, which will generate all the files Eclipse
needs (the .project and .classpath files).
Then you will be able to open the project inside Eclipse. Take a look at the added
classes and the App.java class, which contains code similar to the tests generated by
the GPD plugin.
the GPD plugin.

The last thing that you need to understand here is how to package your project
source code into a binary JAR file for running and distributing your application.
You can do that by running the mvn package goal from the console—this will
generate a JAR file called FirstMavenProcess-1.0-SNAPSHOT.jar.
If you run the App.java class (with Run As... | Java Application), you will see the
following output on the console:
Process Instance Created
We are in the 'start' Node
We Signal the process instance to continue to the next Node
At this point the variable message value is = Going to the first state!
Now we are in the 'first' Node
We Signal the process instance to continue to the next Node
Now we are in the 'end' Node ending the process
Now the variable message value is = About to finish!

If you see some warnings before that output, don't worry about them.

Homework

For this chapter, there will be two homework activities. The first one is to
create your first project with the Eclipse IDE GPD plugin and modify the simple
process that it includes. Also, try to play with the sample test created: modify the
process diagram and still make the test pass. You should also explore all the
features of the plugin and learn how to use it; don't expect everything to be discussed
here. Feel free to test the plugin and give your feedback in the user forums.
The second part of the homework is to create a web application (WAR archive)
that includes jBPM with Maven, and change the output from the console to a
web browser. You can use any web framework you want; this will be useful
for getting involved with Maven and seeing how you can create simple and complex
applications with it. Follow the same steps used to create the simple JAR application,
but this time use the war packaging type. You can achieve this by using the
maven-archetype-webapp archetype to build your project from the Q4E Maven plugin.
I haven't provided a finished project for this homework, because the most valuable
part here is the investigation and getting our hands dirty with Maven and jBPM. Try to
spend at least a couple of hours playing with the project description, the Maven goals
(clean, package, install, and so on), and the IDE in order to fill in all the details
that have been omitted here.

Summary

In this chapter, we've learned about all the tooling that we will use every day in a
jBPM project implementation. At the end of this chapter, we saw how to create two
basic applications that include and use the jBPM framework, just to see how all
the pieces fit together. With Maven, we gained some important features,
including dependency management, the use of standard project structures, and IDE
independence.
The Graphical Process Designer was also introduced. This jBPM plugin for the
Eclipse IDE lets us draw our processes in a very intuitive way: just dragging and
dropping our process activities and then joining them with connections results
in our process definitions. The plugin also allows us to write all the technical details
that our process will need in order to run in a runtime environment.
In the next chapter, we will start to understand in depth the jPDL language, which
will show us exactly how our processes must be defined and implemented. This will
be very important, because we will be in charge of implementing our processes and
of knowing in detail how they and the framework will behave in the runtime
environment. We will probably also guide the learning process of our business
analysts, enabling them to understand this language in order to create more
accurate process definitions.


jPDL Language
By the end of this chapter, you will be able to use, with a deep understanding,
the basic nodes proposed by the jPDL language. These basic nodes will be fully
explained in order to take advantage of the possibilities and flexibility that the
language provides.
This chapter focuses on all the technical details behind basic process definitions,
letting us know how we can correctly model/diagram and build/run our processes.
The following topics will be covered in this chapter:
•

jPDL introduction

•

Process definition analysis

•

Base node analysis

jPDL introduction

As we saw in Chapter 3, Setting Up Our Tools, the Graph Process Designer (GPD)
gives us the power to draw our business processes by dragging and dropping nodes.
This is a very useful and quick way to get our processes defined and validated by
our business analysts and also by our stakeholders.
However, if you start modeling a process without understanding what exactly each
node in the palette means, and how these nodes will behave in the execution stage,
then when you want to run the modeled process, you will need to modify it to reflect
the desired behavior.


In this section, we will analyze each node type and discover how the graph
represented in GPD is translated into jPDL XML syntax. This XML syntax will
describe our formal language, which is flexible enough to represent and execute
our business processes.
On one hand, we will have the graphical representation that lets our business
analysts see what our processes look like at the definition stage; on the other
hand, we will have the same process represented in jPDL XML syntax, which allows
us (developers) to add all the technical information that the process needs in order
to run in a specific runtime environment.
It will be our responsibility to understand how processes are expressed in
this language and how the language works internally, in order to
guide correct implementations. This will also allow us to extend the language if
required, because we will be able to decide whether we need something completely
new based on the currently implemented behaviors.
This chapter is aimed at developers who will be in charge of the implementation of a
project that uses jBPM to manage the business processes for a company. If you don't
have this knowledge, you will probably have to make a lot of unnecessary changes
in your model and you will not get the desirable behavior in the execution stage,
causing a lot of confusion and frustration.
This deep analysis will help you to understand the code behind your processes and
how this code will behave to fulfill your requirements.
If you analyze the process created in Chapter 3, Setting Up Our Tools, you
will see that the graphical representation of the process is composed of two XML
files. The main one is called processdefinition.xml and contains the jPDL
definition of our process. jPDL is expressed in XML syntax; for this reason, the
processdefinition.xml file needs to follow this syntax to correctly define
our process graph.
In the next section, we will start analyzing how this file must be composed for it
to be a valid jPDL process definition. Once again, we need to understand how the
framework works and to know exactly what we are doing.
Another XML file, called GPD.xml, is also needed by GPD to draw our processes;
it only contains the positions of all the elements of our process
(graph geometry).
With this separation of responsibility (graph elements' positions and process graph
definition), we gain a loosely coupled design and two very readable artifacts.


Chapter 4

Something important to notice here is that the GPD plugin must keep
these two files in sync permanently, for obvious reasons. If one of
these files changes, the other needs to change in order to have a valid
representation. This introduces one disadvantage: if we modify the
processdefinition.xml file outside the IDE, we will need to modify
the GPD.xml file accordingly in order to keep these two files in sync,
and this is not an easy task.

It's important to note that the GPD.xml file is only necessary to represent our
process graphically; if we don't need that, or if we want to build our own custom
graphical representation of the process, the GPD.xml file can be discarded. In other
words, the GPD.xml file influences neither the formal description of the process nor
the process execution behavior.

jPDL structure

The main goal of this section is to learn how to write and express processes in
jPDL XML syntax. This is important because we will do a deep analysis of each
element that can be represented inside our process.
However, before that, we need to know where all these elements will be contained.
If you remember, in Chapter 2, jBPM for Developers, we discussed something
called a process definition (or just definition) that contains the list of all the
nodes that compose our process. In this case, we will have a similar object
to represent the same concept, but more powerful and with a complete set of
functionality to handle real scenarios. The idea is the same, but if jPDL uses XML
syntax, how are these XML tags translated into our Definition object?
This translation is achieved in the same way that we built the graph of our defined
process in our simple GOP implementation. However, in this case we will read our
process as described in the processdefinition.xml file in order to build
our object structure. To make this easier to understand,
see the following image:
[Diagram: in the model/design phase, the process definition (jPDL XML syntax) is
parsed and translated into jBPM framework objects: a ProcessDefinition containing
a list of Nodes, where each Node holds a list of leaving Transitions.]


We could say that the processdefinition.xml file needs to be translated into
objects in order to run.
In the rest of this chapter, we will see how all these "artifacts" (graph representation
of our process → jPDL XML description → object model) come into play, analyzing
the basic nodes from the design view and the Properties window, through the
translation to XML syntax, to how this XML becomes running objects.
It's necessary to understand this transformation in order to know how our processes
need to be designed. It will also help us understand the meaning of each property
of each particular node, showing us how each property influences the design
and execution stages of our process.

Process structure

It's important to note that one processdefinition.xml file, whether generated with
GPD or not, represents just one business process. There is no way to put more than one
process in one file, so do not try it at home.
The process designed with GPD will be automatically translated into jPDL XML
syntax, so if you don't have the plugin, you will need to write this jPDL XML
by hand (a common practice for advanced developers who know jPDL). This XML
will be the first technical artifact that we need to know in depth. So, here you will
find how this file is internally composed and structured.
If you create a new process and select the background (not an element), you will see
the following Properties window:

This panel will represent all of the global properties that can be set to a
process-definition element. Remember that this element will contain all the
nodes in our process, so all the global information must be placed here. As you can
see in the image, global exceptions, tasks, actions, and events can be configured here.


If you now switch to the Source tab, you will find that, basically, one process
definition represented in jPDL XML syntax needs to have the following structure:

<process-definition name="...">
  <node name="...">
    ...
    <transition to="..."/>
  </node>
  ...
</process-definition>

As you can notice, the root node will be a <process-definition> XML tag that will
accept a collection of nodes, where each of these nodes will accept a collection of
leaving transitions.
This structure will help us to quickly understand how the process flow works without
having a graphical representation. We only need a little bit of imagination.
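As an illustration, the node names below follow the sample process from Chapter 3; the exact node types and attributes in your generated file may differ, so treat this as a sketch rather than the generated output:

```xml
<process-definition name="FirstProcess">
  <start-state name="start">
    <transition to="first"/>
  </start-state>
  <state name="first">
    <transition to="end"/>
  </state>
  <end-state name="end"/>
</process-definition>
```

Reading top to bottom, you can already follow the flow: the start node leads to the first state, which leads to the end state.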
As you can imagine, this process definition tag and all of the elements inside it will
be parsed and transformed into objects by the jBPM framework. The Java class that
represents the <process-definition> tag is the ProcessDefinition class.
Here we will analyze this class, but only to see how the basic concepts are implemented.
The ProcessDefinition class can be found in the org.jbpm.graph.def package,
inside the core module's src/main/java directory.
Here we are talking about the checked out project from the SVN
repository, not the binary distribution.

This class is in charge of containing and representing all the data described in the
processdefinition.xml file. It also includes some extra metadata that will be
useful in the execution stage of our processes.
If you open this class (recommended, as you will learn a lot about the internal details
of the framework and you will also start feeling comfortable with the code), the first
thing you will notice is that the class inherits functionality from a class called
GraphElement and implements the NodeCollection interface:

public class ProcessDefinition extends GraphElement
    implements NodeCollection


GraphElement information and behavior

The GraphElement class will give the ProcessDefinition class all the information
needed to compose a graph, and also some common methods for the execution stage.
The most common properties that we, as developers, will use are the following:
•

long id = 0;

•

protected String name = null;

•

protected String description = null;

These properties are shared by all the elements that can be part of our
business process graph (the nodes and the process definition itself).
It is also important to look at the methods implemented inside the GraphElement
class, because they contain all the logic and exceptions related to events inside our
processes. Some of these concepts will be discussed later, in order not to confuse
you by mixing topics.

NodeCollection methods

The NodeCollection interface will force us to implement the following methods to
handle and manage collections of nodes:
List getNodes();
Map getNodesMap();
Node getNode(String name);
boolean hasNode(String name);
Node addNode(Node node);
Node removeNode(Node node);

Feel free to open the GraphElement class and the NodeCollection interface in order
to take a look at other implementations' details.

ProcessDefinition properties

Now it is time to continue with the ProcessDefinition properties.
Right after the class definition, you will see the property definition section; all these
properties represent information about the whole process. Remember that
the properties inherited from the GraphElement class are not shown here. The most
meaningful ones are shown in the following table. These properties represent core
information about a process that you will need to know in order to understand how
it works:

•

Node startState: It represents the node that will be the first node in our
process. As you can see, this property is not restricted to the StartState
type. It is this way because the Node class could be reused by another
language that defines another type of start node.

•

List nodes: It represents the collection of nodes included between the
<process-definition> tags in the processdefinition.xml file.

•

transient Map nodesMap: This property allows us to query all the nodes in
our process by name, without looping through all the nodes in the list. With
just one string, we can get the node that we are looking for. As this property
is transient, it will not be persisted with the process status. This means that
this property will be filled in only when the process is in the execution stage.

•

Map actions: It represents global actions (custom code) that will be bound
to a name (String) and can be reused in different nodes of our process. This
feature is very helpful for reusing code and configurations. It also keeps the
processdefinition.xml file as short as possible.

•

Map definitions: It represents the different internal/external services that
can be accessed by the process definition; we will learn more about these
modules later.

This is all that you need to know about the information maintained at the process
definition level.
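The role of the transient nodesMap can be sketched in a few self-contained lines. These are hypothetical classes, not the real jBPM implementation: the list is the persistent source of truth, while the map is just a lookup cache that can be rebuilt on demand and is never persisted.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the list-plus-transient-map pattern: nodes live
// in a list (persisted), while a transient map caches name-based lookups.
class SimpleNode {
    final String name;
    SimpleNode(String name) { this.name = name; }
}

class SimpleDefinition {
    private final List<SimpleNode> nodes = new ArrayList<>();
    private transient Map<String, SimpleNode> nodesMap; // never persisted

    SimpleNode addNode(SimpleNode node) {
        nodes.add(node);
        nodesMap = null; // invalidate the cache
        return node;
    }

    SimpleNode getNode(String name) {
        if (nodesMap == null) {
            nodesMap = new HashMap<>();
            for (SimpleNode n : nodes) {
                nodesMap.put(n.name, n);
            }
        }
        return nodesMap.get(name);
    }
}
```

Because the map is derived entirely from the list, discarding it on persistence loses nothing; it is simply rebuilt the first time a lookup happens in the execution stage.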

Functional capabilities

Now we need to see all the functionality that this class provides in order to handle
our process definitions.


An array of strings is defined to store the events supported by the process
definition itself:

// event types
public static final String[] supportedEventTypes = new String[]{
    Event.EVENTTYPE_PROCESS_START,
    Event.EVENTTYPE_PROCESS_END,
    Event.EVENTTYPE_NODE_ENTER,
    Event.EVENTTYPE_NODE_LEAVE,
    Event.EVENTTYPE_TASK_CREATE,
    Event.EVENTTYPE_TASK_ASSIGN,
    Event.EVENTTYPE_TASK_START,
    Event.EVENTTYPE_TASK_END,
    Event.EVENTTYPE_TRANSITION,
    Event.EVENTTYPE_BEFORE_SIGNAL,
    Event.EVENTTYPE_AFTER_SIGNAL,
    Event.EVENTTYPE_SUPERSTATE_ENTER,
    Event.EVENTTYPE_SUPERSTATE_LEAVE,
    Event.EVENTTYPE_SUBPROCESS_CREATED,
    Event.EVENTTYPE_SUBPROCESS_END,
    Event.EVENTTYPE_TIMER
};

public String[] getSupportedEventTypes() {
    return supportedEventTypes;
}

These events represent hook points where we can attach extra logic to our process
without modifying its graphical representation. They are commonly used for adding
technical details to our processes and have a tight relationship with the graph
concept, because each GraphElement has a defined life cycle during which events
will be fired. We will continue talking about events in the following chapters.
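The hook-point idea can be sketched with a few self-contained lines. Again, these are hypothetical classes, not jBPM's event implementation: actions are registered per event type and run when that event fires, leaving the process graph itself untouched.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of event hook points: extra logic is attached to an
// event type and executed when that event fires, without changing the graph.
class MiniEventSupport {
    private final Map<String, List<Runnable>> actions = new HashMap<>();

    void addAction(String eventType, Runnable action) {
        actions.computeIfAbsent(eventType, k -> new ArrayList<>()).add(action);
    }

    void fireEvent(String eventType) {
        for (Runnable action : actions.getOrDefault(eventType, new ArrayList<>())) {
            action.run();
        }
    }
}
```

An event type with no registered actions simply fires into the void, which is why attaching technical details this way never forces a change to the drawn process.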

Constructing a process definition

In this section, we will look at different ways to create and populate our process
definition objects. This section will not describe the ProcessDefinition constructors,
because they are rarely used; we will jump directly to the methods most commonly
used to create new ProcessDefinition instances.


In most cases, one of the parseXXX() methods will be used to create a
ProcessDefinition instance that contains the same structure and information
as a processdefinition.xml file.
Similar methods are provided to support different input types of process definitions,
such as the following ones:
•

parseXmlString(String)

•

parseXmlResource(String)

•

parseXmlReader(Reader)

•

parseXmlInputStream(InputStream)

•

parseParZipInputStream(ZipInputStream)

The only difference between these methods is the parameter that they receive.
All of them parse the resource they receive and create a new ProcessDefinition
object that represents the content of the jPDL processdefinition.xml file.
The simplest parse method takes a String representing the process definition
and returns a brand new ProcessDefinition object created from that string. The
String needs to contain a correct jPDL process definition in order to be parsed
correctly.
The most commonly used is the one that takes a path to locate the
processdefinition.xml file and creates a brand new ProcessDefinition object.
It is good to know how we construct a new process definition object that reflects
the process described in jPDL XML syntax, and also how this generation is done.
This helps us understand how the framework works internally and where all the
information from the XML process definition is stored in the object world.
When all the elements inside the XML file have been parsed, if our process definition
doesn't have any errors, a brand new ProcessDefinition object will be returned.
Note that in real applications this kind of parsing is performed only a few times.
As you can imagine, parsing could take a while when the process contains a large
number of nodes, but this is just a note; don't worry about it.


jPDL Language

Adding custom behavior (actions)

If you jump to the actions section in the ProcessDefinition class (marked with
the comment // actions), you will find methods for adding, removing, and getting
these custom technical details called actions. Actions can be used and linked
in different stages (graph events) and nodes of the current process. They
are global actions that must be registered with a name and then referenced by each
node that wants to use them. These process-defined actions are commonly used for
reusing code and configuration details, which also keeps your process definition
XML file clean and short. If you look at the code, you will find that a bi-directional
relationship is maintained between the action and the process definition.
public Action addAction(Action action) {
  if (action == null)
    throw new IllegalArgumentException("can't add a null action to an process definition");
  if (action.getName() == null)
    throw new IllegalArgumentException("can't add an unnamed action to an process definition");
  if (actions == null) actions = new HashMap();
  actions.put(action.getName(), action);
  action.processDefinition = this;
  return action;
}

The bi-directional relationship between actions and the process
definition allows us to find out how many action definitions the
process contains, and to dynamically define actions in different
places at runtime.
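The registration logic above can be sketched with minimal stand-in classes. Everything here (MiniAction, MiniProcessDefinition) is illustrative, not jBPM's real API; it only mirrors the shape of addAction():

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for an action definition (illustrative only).
class MiniAction {
    final String name;
    MiniProcessDefinition processDefinition; // back-reference, set on registration
    MiniAction(String name) { this.name = name; }
}

// Minimal stand-in for a process definition holding named actions.
class MiniProcessDefinition {
    private Map<String, MiniAction> actions;

    // Mirrors the addAction() shape: validate, store by name,
    // and wire the bidirectional reference.
    MiniAction addAction(MiniAction action) {
        if (action == null) throw new IllegalArgumentException("null action");
        if (action.name == null) throw new IllegalArgumentException("unnamed action");
        if (actions == null) actions = new HashMap<>();
        actions.put(action.name, action);
        action.processDefinition = this;
        return action;
    }

    MiniAction getAction(String name) {
        return actions == null ? null : actions.get(name);
    }
}
```

Both sides of the relationship are navigable afterwards: the definition can look actions up by name, and each action knows which definition it belongs to.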

That covers ProcessDefinition for now; let's jump to the basic nodes section.
Feel free to analyze the rest of the ProcessDefinition class code, though,
you will only find plain Java code, nothing to worry about.

Nodes inside our processes

Inside our <process-definition> tag, we will have a collection (set) of nodes.
These nodes can be of different types, with different functionalities. Remember
Chapter 2, jBPM for Developers, where we discussed GOP and created a new GOP
language. That custom language used a node hierarchy to achieve multiple
behaviors and functionalities, expanding the language's capabilities by adding
new words, each represented by a different type of node.
jPDL is basically that: a main node that implements the basic functionality and
then a set of subclasses that make up the language.

This version of the language (jPDL) contains 12 words/nodes (in the GPD palette).
These nodes implement basic and generic functionality that, in most cases, is
just logic deciding whether the process must continue the execution to the next
node or not. This logic is commonly called a propagation policy.
If we want to understand how each word behaves, how it is composed, and which
"parameters" need to be filled in for it to work correctly, we first need to
understand how the most basic and generic node behaves. This is because all of
the functionality inside this node is inherited, and in some cases overridden,
by the other words in the language.
(The node hierarchy figure shows Node as the base class, with StartState,
EndState, State, Fork, Join, Decision, SuperState, MailNode, ProcessState,
and TaskNode as its subclasses.)

For this reason, we will start with a deep analysis of how the Node class is
implemented, and then we will look at all the other nodes, mentioning only the
changes that each one introduces.
To complete this section, we will also mention some details about the
parsing process.

ProcessDefinition parsing process

This parsing process begins when we load the processdefinition.xml file
using one of the parseXXX() methods of the ProcessDefinition class. These
methods internally use the JpdlXmlReader class to parse all of the content of
the processdefinition.xml file. It's important to know that the JpdlXmlReader
class is designed to work with DOM elements. One of its core methods is the
following:
public ProcessDefinition readProcessDefinition()

This method is in charge of parsing all of the process definition XML elements and
creating all of the objects needed to represent the process structure.
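As an illustration of what a DOM-based reader does (this sketch uses the JDK's own parser, not jBPM's JpdlXmlReader), the following helper parses a jPDL-like string and lists the node elements that a readNodes() pass would iterate over:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

class JpdlSketch {
    // Parse a jPDL-like XML string and return the child element names,
    // roughly what a readNodes(root, ...) pass iterates over.
    static List<String> readNodeNames(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        Element root = doc.getDocumentElement();
        List<String> names = new ArrayList<>();
        NodeList children = root.getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            Node child = children.item(i);
            if (child instanceof Element) names.add(child.getNodeName());
        }
        return names;
    }
}
```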

In this method, we will find the section that will read each part of the process
definition shown as follows:

gpd.xml

Node




processdefinition.xml

readSwimlanes(root);
readActions(root, null, null);
readNodes(root, processDefinition);
readEvents(root, processDefinition);
readExceptionHandlers(root, processDefinition);
readTasks(root, null);






graph()

Node
Node

parseXXX()
ProcessDefinition
Listnodes

It is important to note that the graphical information stored in the gpd.xml
file is neither parsed nor stored in the ProcessDefinition object. In other
words, it is lost in this parsing process, and if you don't keep this file, the
elements' positions will be lost. That said, the absence of this file will not
influence the definition or the execution of our defined process.

Base node

As we have seen before, this node implements logic that will be used by all the
other words in our language; basically, this class represents the most common
lifecycle and properties that all the nodes have and share.
With this node hierarchy, our process definition will contain only nodes: every
element in the palette is of the Node type, because all of them are subclasses
of Node.


First of all, we will see how the node concept is represented in jPDL XML
syntax. If you have GPD running, create a new process definition file and drag
a node of the Node type into your process.

Note the icon inside the node rectangle: a gear. It represents the node's
functionality, meaning that the base functionality of a Node is generic work
to be done, which can probably be represented with a piece of Java code.
This means that something technical is needed in our business process. That is
why the gear appears there, just to represent that some "machine working" will
happen during this activity of our business process. As you will see a few
sections later, if the technical details are not important for the business
analysts, you can add them in other places, hidden from the graphical view.
This helps us avoid cluttering the graphical view with technical details that
don't mean anything to the real business process.
An example could be a backup situation: if one activity in our process backs up
some information, we will need to decide, depending on the context of the
process, whether the backup activity will be shown in the graphical
representation of our process (as a node), or whether it will be hidden in the
technical definitions behind the process graph.
In other words, you will only use this type of node if the Business Analyst
team tells you that some important technical details are part of the process
and need to be displayed in the process diagram as an activity.
In the jBPM context, we will use "technical detail" to refer to all of the code
needed to be able to run our process inside an execution environment. Do
not confuse this with something minimal or less important.


Let's analyze this node. If you have the GPD plugin installed, select the node
dropped before and go to the Properties panel. Here you will see that some
basic information can be entered, such as a name, a description, and so on.
Just add the basic information, save the process definition, and go to the
Source tab to see how this information is translated inside the node tag.

In order to see these basic node properties, you can open the Node class to
see how they are reflected in the Java code. As we discussed before, this
class represents the execution of technical details. So, if we select the node
in GPD and look at the Properties window, we will see an Action tab with a
checkbox to activate the configuration of this action. It represents the added
technical details that will be executed when the process execution enters this
node. These technical details can be anything you want. When you activate this
option, new fields appear asking for information that describes this action.


If you read some documentation about this, you will see that these actions are
called delegated actions. The name comes from the fact that these actions are
in some way delegated to external classes that contain the logic to execute.
These delegated classes are external to the framework. In other words, we
implement these delegated classes and just tell the process the name of the
class that contains the action; the framework then knows how to execute this
custom logic.

In order to achieve this functionality, the Command design pattern (see
http://en.wikipedia.org/wiki/Command_pattern for more information) is
applied. Therefore, we only need to implement a single-method interface called
ActionHandler in our class. We will see more about how to do this in the next
chapter, where we build two real, end-to-end applications. Keep in mind
that this action can include custom logic that you will need to write, by
implementing the ActionHandler interface that the framework knows
how to execute.
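The delegation idea can be sketched as follows; MiniActionHandler, BackupAction, and Delegation are illustrative stand-ins (not jBPM's real classes) showing how a framework can load a configured class name and invoke a single command method:

```java
// Single-method command interface, analogous in shape to jBPM's ActionHandler.
interface MiniActionHandler {
    void execute(StringBuilder log) throws Exception; // StringBuilder stands in for a context
}

// A delegated class we write ourselves; the process would only know its name.
class BackupAction implements MiniActionHandler {
    public void execute(StringBuilder log) {
        log.append("backup done");
    }
}

class Delegation {
    // What a framework does with a configured class name:
    // load it, instantiate it, and call the single command method.
    static void run(String className, StringBuilder log) throws Exception {
        MiniActionHandler handler = (MiniActionHandler)
                Class.forName(className).getDeclaredConstructor().newInstance();
        handler.execute(log);
    }
}
```

The process definition only carries the string "BackupAction"; the custom logic stays outside the framework.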
Up to this point, we have a node (of the Node type) drawn in GPD and also
expressed in jPDL XML syntax with the <node> tag, which the plugin keeps in
sync with the graphical diagram. When we load the processdefinition.xml file
into a new ProcessDefinition object, our node (written and formally described
in jPDL) will be transformed into an instance of the Node class. The same
happens with all of the other node types, because all of them are treated as
Node instances.
Here we will analyze some of the technical details implemented in the Node
class, which represent the generic node concepts and the implementation of the
nodes that can be used in our processes.
This class also implements the Parsable interface, which forces us to implement
the read() and write() methods in order to understand, and be able to write,
the jPDL XML syntax used in our process definitions.
public interface Parsable {
  void read(Element element, JpdlXmlReader jpdlReader);
  void write(Element element);
}


Information that we really need to know about each node
Leaving transitions are one of the most important properties that we need to
store and know about.
protected List<Transition> leavingTransitions = null;

Thanks to generics, this list is restricted to storing only Transition objects.
It stores all of the transitions that have the current node as their source node.
The action property is also important, because it stores the action that will
be executed when the current node is in the execution stage.
It is important to note that a public enum is defined here to store each type
of node that can be defined using this superclass.
public enum NodeType { Node, StartState, EndState, State, Task, Fork,
Join, Decision };

This enum specifies the built-in nodes inside the framework. If you create your own
type of node, you will need to override the getNodeType() method to return your
own custom type.
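A minimal sketch of that override, using stand-in classes rather than jBPM's real Node and enum (the Custom constant is purely illustrative):

```java
class MiniNode {
    // Stand-in for the node-type enum; Custom is added here only for the example.
    enum NodeType { Node, StartState, EndState, State, Task, Fork, Join, Decision, Custom }

    // Base default: a plain node reports itself as Node.
    NodeType getNodeType() {
        return NodeType.Node;
    }
}

// A custom node type identifies itself by overriding getNodeType().
class AuditNode extends MiniNode {
    @Override
    NodeType getNodeType() {
        return NodeType.Custom;
    }
}
```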

Node lifecycle (events)

The events section, marked with the comment //event types// in the Node
class, specifies the internal points that the node execution passes through.
These points represent hook points where we can add the custom logic that we
need. In this case, the base node supports events/hook points called
NODE_ENTER, NODE_LEAVE, BEFORE_SIGNAL, and AFTER_SIGNAL. This means that we
will be able to add custom logic at these four points inside the node
execution.
(The node lifecycle figure shows a token arriving over a transition: take()
triggers the node's enter() method, which fires NODE_ENTER; execute() then
runs; finally leave() fires NODE_LEAVE and calls take() on the outgoing
transition.)

The BEFORE_SIGNAL and AFTER_SIGNAL events will be described later when we
discuss external events/triggers that could influence the process execution.

Constructors

Instances of the Node class will rarely be constructed using the following
constructors:
public Node() { }
public Node(String name) {
  super(name);
}


In most cases, instances of the Node class will be created by the parseXXX()
methods, which read the whole process definition and all the nodes inside it.
So, in most cases we don't need to create nodes by hand. However, it is
important for us to know how this parsing process is done.







Managing transitions/relationships with
other nodes

If we look at the section delimited by the //leaving transitions// and
//arriving transitions// comments, we will find a few methods to
manage all of the transitions related to a node in our process.
As we have seen before, the transitions of a node are stored in two properties
of type List, called leavingTransitions and arrivingTransitions. We also have
a helper map to locate each transition of a particular node by name. In this
section of the Node class, we will find wrapper methods around these two lists
that also add some very important logic.


For example, if we take a look at the method called
addLeavingTransition(Transition), we can see the following piece of code:
public Transition addLeavingTransition(Transition leavingTransition) {
  if (leavingTransition == null)
    throw new IllegalArgumentException("can't add a null leaving transition to an node");
  if (leavingTransitions == null)
    leavingTransitions = new ArrayList();
  leavingTransitions.add(leavingTransition);
  leavingTransition.from = this;
  leavingTransitionMap = null;
  return leavingTransition;
}

The first few lines of this method check whether the list of
leavingTransitions is null; if so, a new ArrayList is created to store all the
transitions leaving this node. The new transition is then added to the list,
and a reference back to the node is set on it. Finally, leavingTransitionMap
is set to null so that it is regenerated the next time
getLeavingTransitionMap() is called, keeping the transition map up to date
with the recently added transition.
Another important method is getDefaultLeavingTransition(). Its logic is in
charge of defining which transition to take when we do not specify a
particular one. In other words, you must know how this code works in order to
know which transition will be taken.
public Transition getDefaultLeavingTransition() {
  Transition defaultTransition = null;
  if (leavingTransitions != null) {
    // Select the first unconditional transition
    for (Transition auxTransition : leavingTransitions) {
      if (auxTransition.getCondition() == null) {
        defaultTransition = auxTransition;
        break;
      }
    }
  } else if (superState != null) {
    defaultTransition = superState.getDefaultLeavingTransition();
  }
  return defaultTransition;
}

Looking at the code inside this method, you can see that the first
unconditional transition is chosen if no other transition is selected. It is
also important to see that, in a situation with nested nodes, the parent node
will also be queried for a default transition.
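The selection rule (the first transition without a condition wins) can be verified with illustrative stand-ins:

```java
import java.util.List;

// Minimal transition stand-in: only a name and an optional condition.
class MiniTransition {
    final String name;
    final String condition; // null means unconditional
    MiniTransition(String name, String condition) {
        this.name = name;
        this.condition = condition;
    }
}

class TransitionChooser {
    // Mirrors the getDefaultLeavingTransition() rule:
    // pick the first transition that has no condition.
    static MiniTransition defaultLeaving(List<MiniTransition> leaving) {
        if (leaving == null) return null;
        for (MiniTransition t : leaving) {
            if (t.condition == null) return t;
        }
        return null;
    }
}
```

Conditional transitions are skipped even if they appear first, so ordering in the process definition matters only among the unconditional ones.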

Runtime behavior

Up until this point, we have seen how and where the information is kept; from
now on, we will discuss how this node behaves in the execution stage of
our processes.
The first method in the section delimited by the //Behavior methods// comment
is enter(ExecutionContext).
The ExecutionContext class is used by the enter(), execute(), and
leave() methods in order to access all of the contextual information
needed to execute each phase inside the node.

We have already seen the Node lifecycle graph, where this method is the first
one called when the node is reached.
It's very important to read all the code in this method, because it gives us
the first phase in the execution lifecycle of our node.
public void enter(ExecutionContext executionContext) {
  Token token = executionContext.getToken();
  // update the runtime context information
  token.setNode(this);
  // fire the enter-node event for this node
  fireEvent(Event.EVENTTYPE_NODE_ENTER, executionContext);
  // keep track of node entrance in the token,
  // so that a node-log can be generated at node leave time.
  token.setNodeEnter(Clock.getCurrentTime());
  // remove the transition references from the runtime context
  executionContext.setTransition(null);
  executionContext.setTransitionSource(null);
  // execute the node
  if (isAsync) {
    ExecuteNodeJob job = createAsyncContinuationJob(token);
    MessageService messageService = (MessageService)
        Services.getCurrentService(Services.SERVICENAME_MESSAGE);
    messageService.send(job);
    token.lock(job.toString());
  } else {
    execute(executionContext);
  }
}

In the first lines of the method, the concept of a Token appears, representing
where the execution is at a specific moment in time. This concept is exactly
the same as the one that appeared in Chapter 2, jBPM for Developers.
That is why this method gets the Token from the ExecutionContext and changes
its reference to the current node. In the following lines, you can see how
this method tells everyone that it is in the first phase of the node lifecycle.
// update the runtime context information
token.setNode(this);
// fire the enter-node event for this node
fireEvent(Event.EVENTTYPE_NODE_ENTER, executionContext);

If you analyze the fireEvent() method, which belongs to the GraphElement
class, you will see that it checks whether any actions are registered for this
particular event; if there are, it fires them in the defined order.
As you can see at the end of enter(), the execute() method is called,
jumping to the next phase in the lifecycle of this node:
public void execute(ExecutionContext executionContext) {
  // if there is a custom action associated with this node
  if (action != null) {
    try {
      // execute the action
      executeAction(action, executionContext);
    } catch (Exception exception) {
      raiseException(exception, executionContext);
    }
  } else {
    // let this node handle the token
    // the default behaviour is to leave the node over the default transition.
    leave(executionContext);
  }
}


In this execute() method, the base node implements the following execution
policy: if there is an action assigned to this node, execute it; if not, leave
the node over the default transition.
This functionality looks simple, and if I ask you whether this node behaves as
a wait state, you will probably think that it never waits. Having seen the
code above, we can only affirm that, when no action is configured for this
type of node, the default behavior is to continue to the next node without
waiting. However, what happens if an action is configured? The behavior of the
node will then depend on the action. If the action contains a call to the
executionContext.leaveNode() method, the node will continue the execution to
the next node in the chain (passing, of course, through the leave() method).
But if the action does not include any call to the leave() method, the node as
a whole will behave like a wait state.
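The three cases (no action, an action that never calls leave(), and an action that does) can be sketched with illustrative stand-ins:

```java
// Command stand-in: an action may or may not propagate the execution.
interface NodeAction {
    void execute(WaitOrGoNode node);
}

class WaitOrGoNode {
    final NodeAction action; // may be null
    boolean left = false;    // has execution moved past this node?

    WaitOrGoNode(NodeAction action) { this.action = action; }

    // Mirrors the base node's execution policy: run the action if present,
    // otherwise leave over the default transition immediately.
    void execute() {
        if (action != null) {
            action.execute(this); // the node waits unless the action calls leave()
        } else {
            leave();
        }
    }

    void leave() { left = true; }
}
```

Whether the node waits is therefore decided by the action's body, not by the node itself.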
If this node does not behave like a wait state, the execution continues to the
next phase by calling the leave() method.
public void leave(ExecutionContext executionContext, Transition transition) {
  if (transition == null)
    throw new JbpmException("can't leave node '" + this + "' without leaving transition");
  Token token = executionContext.getToken();
  token.setNode(this);
  executionContext.setTransition(transition);
  // fire the leave-node event for this node
  fireEvent(Event.EVENTTYPE_NODE_LEAVE, executionContext);
  // log this node
  if (token.getNodeEnter() != null) {
    addNodeLog(token);
  }
  // update the runtime information for taking the transition
  // the transitionSource is used to calculate events on superstates
  executionContext.setTransitionSource(this);
  // take the transition
  transition.take(executionContext);
}

This method has two important lines:
• The first fires the NODE_LEAVE event, telling everyone that this node is in
the last phase before taking the transition out of it; actions can be
attached here, just like for the NODE_ENTER event.
• The second, at the end of the method, is where the execution leaves the
current node and enters the transition:
transition.take(executionContext);

This is the basic functionality of the Node class. If the subclasses of the Node class do
not override a method, the functionality discussed here will be executed.

StartState: starting our processes

As we have already seen in the GPD node palette, we have a node called Start
State. This will be the first node in all our processes. This is our first
rule, and we cannot break it. As we already know, jPDL is a formal language
whose syntax forces us to begin our process definition with one node of the
Start State type. It's important to note that the syntax also defines that
there can be only one start state per process. Also observe that this node
type is named Start State and not Start Node, because internally it behaves as
a wait state. Remember this important distinction: all the nodes that behave
as wait states have the word "State" in their name.

Here we will drop a Start State node with GPD and also inspect its properties.
We can see some of the same properties inherited from the GraphElement class,
such as name and description.

If we take a quick look at the tabs that are available in this node, we will notice
that there's no Action tab here. For this type of node, however, we can see a tab
called Task.
For now, we need to know that when jPDL uses the word Task, it always makes
reference to an Activity that needs to be done by a person. In other words, task
always means human interaction.
The idea of this task placed inside the Start State is used to represent the fact that the
process will need to be started by a person and not automatically by a system. We
will analyze where we may need to use this functionality later.
Now we will open the StartState class to see which methods were inherited from
the parent classes, GraphElement and Node, and which of them were overridden.
We will see a lot more about tasks in Chapter 7, Human Tasks.
These modifications to the inherited base functionality define the semantics
(in other words, the meaning) of each node/word in the jPDL language.
In the jPDL language, a Start State will look like the following (the node
name here is just the GPD default):
<start-state name="start-state1"></start-state>
In this case, because it is the first word that we will use to define all of
our processes, we need to define some restrictions, which we already discussed
when we talked about GOP in Chapter 2.
The first and obvious restriction is that the StartState node cannot have
arriving transitions. This is because it will be the first node in our
processes, selected when the process execution is created to start our
process. This restriction is implemented by simply overriding the
addArrivingTransition() method in the StartState class as follows:
public Transition addArrivingTransition(Transition t) {
  throw new UnsupportedOperationException(
      "illegal operation : its not possible to add a transition that is arriving in a start state");
}

Another characteristic of the Start State node is that it never gets executed.
That is why the execute() method is overridden with an empty body.
public void execute(ExecutionContext executionContext) { }

This also means that it behaves as a wait state, because the execution does
not immediately follow any transition.
We could say that the StartState node has a reduced lifecycle, because it only
executes the leave stage. The supported events defined for this class are
as follows:
public static final String[] supportedEventTypes = new String[]{
Event.EVENTTYPE_NODE_LEAVE,
Event.EVENTTYPE_AFTER_SIGNAL
};

The start node is never supposed to execute the enter stage, because it is
automatically selected when the process execution is created. Another reason
is that no transition will ever arrive at it.
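The same pattern can be sketched with stand-ins: a base class that accepts arriving transitions, and a start-state subclass that forbids them by throwing (both classes here are illustrative, not jBPM's):

```java
import java.util.ArrayList;
import java.util.List;

class BaseNodeSketch {
    final List<String> arriving = new ArrayList<>();

    // Base behavior: arriving transitions are accepted and stored.
    String addArrivingTransition(String t) {
        arriving.add(t);
        return t;
    }
}

// A start-state stand-in: arriving transitions are illegal by definition.
class StartStateSketch extends BaseNodeSketch {
    @Override
    String addArrivingTransition(String t) {
        throw new UnsupportedOperationException(
            "can't add an arriving transition to a start state");
    }
}
```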
Another overridden method is read(), which is in charge of reading the
non-generic information stored for this particular node type. This method is
called by the parseXXX() methods of the ProcessDefinition class.
public void read(Element startStateElement, JpdlXmlReader jpdlReader) {
  // if the start-state has a task specified,
  Element startTaskElement = startStateElement.element("task");
  if (startTaskElement != null) {
    // delegate the parsing of the start-state task to the jpdlReader
    jpdlReader.readStartStateTask(startTaskElement, this);
  }
}

In this case, the method is in charge of reading, from the XML, the task that
can be defined inside this node. This task represents the fact that the
process needs human interaction in order to start the process flow.


EndState: finishing our processes

The EndState node is used to define where and when our processes will end. The
basic implementation could make this node as restrictive as the StartState
node, but in the jPDL implementation the end word means a couple of things,
depending on the situation.
As we can imagine, the basic functionality of this node allows us to say where
the execution of our process ends. However, what exactly does the end of our
process execution mean?
Basically, our process ends when there are no more activities to perform. This
will probably happen when the business goal of the process is accomplished.
When our process ends, the execution stage also ends. That means that we
cannot interact with the process anymore; we can only query the process logs
and information, but only in the sense of history queries. We cannot modify
the current state of the process anymore.
So, drop an EndState in GPD and let's analyze what properties it has. It is
translated into jPDL XML syntax in the following way (the node name is just
the GPD default):
<end-state name="end-state1"></end-state>
This syntax also allows us to add some generic properties, such as a name and
a description.
The restrictions on this node are similar to those of the StartState node:
this node also has a minimal lifecycle, because it gets executed but is never
left. It only implements the NODE_ENTER stage.
public static final String[] supportedEventTypes =
new String[]{Event.EVENTTYPE_NODE_ENTER};

This is also reflected in the addLeavingTransition() method of the EndState
class, which cannot be used because it throws an exception.
public Transition addLeavingTransition(Transition t) {
  throw new UnsupportedOperationException("can't add a leaving transition to an end-state");
}

You can also see in the EndState class that there is a property representing
whether the End State stands for the real end of our process. As we can have
different ending situations, we need to express in each End State whether it
represents the real ending of the whole process execution. For now, it's okay
to think that this node always represents that, but you can see where this
property is used in the execute() method of this class:
public void execute(ExecutionContext executionContext) {
  if ((endCompleteProcess != null)
      && (endCompleteProcess.equalsIgnoreCase("true"))) {
    executionContext.getProcessInstance().end();
  } else {
    executionContext.getToken().end();
  }
}

As you can see, the endCompleteProcess property decides whether the whole
ProcessInstance needs to be ended, or just the current token, which represents
the current path of execution.
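A sketch of that decision with illustrative stand-in classes (these are not jBPM's ProcessInstance and Token, only minimal mirrors of the same behavior):

```java
// Stand-in for one path of execution (a token).
class TokenSketch {
    boolean ended = false;
    void end() { ended = true; }
}

// Stand-in for a process instance; ending it ends its root token too.
class ProcessInstanceSketch {
    boolean ended = false;
    final TokenSketch rootToken = new TokenSketch();
    void end() { ended = true; rootToken.end(); }
}

class EndStateSketch {
    final String endCompleteProcess; // "true" means: end the whole instance

    EndStateSketch(String endCompleteProcess) {
        this.endCompleteProcess = endCompleteProcess;
    }

    // Mirrors the EndState.execute() decision:
    // full process end versus ending just one token.
    void execute(ProcessInstanceSketch pi, TokenSketch current) {
        if (endCompleteProcess != null && endCompleteProcess.equalsIgnoreCase("true")) {
            pi.end();
        } else {
            current.end();
        }
    }
}
```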

State: wait for an external event

There are situations in which the process itself cannot continue the execution
on to the next activity, because the current activity needs to wait for
someone, or some system(s), to do an external activity before going on. This
very generic situation is represented with the State node.

In other words, with this node we are able to represent wait-state situations.
The node's functionality is reduced to the minimum: it only needs to wait for
an external event to come and tell the process that it can continue to the
next activity. This node has the full node lifecycle, but it doesn't implement
any logic in the execute() method. This node's only responsibility is to wait.

If we drag a State node using the GPD plugin and then select it, we will find
something like this in the Properties panel:

This is a very simple node, represented in jPDL as follows (the node name is
just the GPD default):
<state name="state1"></state>
If we open the State class, we will find a very reduced class. The most
important change compared with the base Node class is that the execute()
method has been overridden.
public class State extends Node {
  public State() {
    this(null);
  }
  public State(String name) {
    super(name);
  }
  @Override
  public NodeType getNodeType() {
    return NodeType.State;
  }
  public void execute(ExecutionContext executionContext) {
  }
}

This State node behaves in the same way as the base Node class, except for the
execute() method. That change alters the behavior of the node: the execution of
the whole process will wait in this node until some external event/signal comes in.
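To make the wait-state idea concrete, here is a minimal, self-contained sketch (these are not the real jBPM classes; the names are invented for illustration): because the node's execute() does nothing, the token stays parked in the node until an external signal moves it on.

```java
// Simplified illustration of a wait state; not jBPM code.
public class WaitStateDemo {

    static class Token {
        String currentNode;
        Token(String node) { this.currentNode = node; }
        // an external event tells the process it can continue
        void signal(String nextNode) { this.currentNode = nextNode; }
    }

    static class State {
        // like org.jbpm.graph.node.State, execute() is intentionally empty:
        // it never propagates the execution, so the token just waits here
        void execute(Token token) { }
    }

    public static void main(String[] args) {
        Token token = new Token("waitForApproval");
        new State().execute(token);
        System.out.println(token.currentNode); // still waitForApproval
        token.signal("end");
        System.out.println(token.currentNode); // end
    }
}
```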


Decision: making automatic decisions

The Decision node lets us choose between different, exclusive paths modeled
in our processes. As you can imagine, this node handles more than one leaving
transition, choosing just one in order to continue the process execution. The
decision, based on runtime information or a simple evaluation, is taken
automatically, and the process always continues its execution to the selected next
node. In other words, this node never behaves as a wait state. It is also important
to note that this node does not involve any human interaction. We will see how
to achieve human decisions later, when we talk about humans.

If we look at the decision node in GPD, we can see that the node lets us make our
decision with a simple expression evaluation (Expression Language) or by using
a compiled Java class that implements a single-method interface called
DecisionHandler. This interface forces us to implement the decide() method,
which must return a String representing the name of the chosen transition.

This lets us call whatever logic we want in order to choose the right
path/transition for a specific execution. In this DecisionHandler, we can call
external services, an inference engine that contains business rules, and so on, in
order to make our decisions.


In the code of the Decision class, we can see a kind of policy for taking a
transition based on different kinds of evaluations. Something important to notice
here is that if no transition is selected by the evaluations, the default transition is
taken using the getDefaultLeavingTransition() method defined in the base Node
class. If you look at the code that is used to get the default transition, you can see
that it is the first one defined in the list of transitions in the jPDL syntax. You need
to know this because, in some cases, your evaluations may not select any particular
transition, so the first one defined will be taken.
This extra behavior gives us a greater degree of flexibility when we make path
decisions in our processes.
The core functionality of this node is in the execute() method, and it can be
divided into three phases of analysis.
[Sequence diagram: execute() first instantiates the Decision's delegation class,
if any, and calls decide() to obtain the transition name; otherwise it gets the
decision expression and evaluates it; otherwise, for each leaving transition, it
gets the transition's condition, evaluates it, and takes the first transition whose
condition is true, using getLeavingTransition(String) and take(Transition).]

The first phase checks if there is a delegation class assigned to this decision. This
check is done using the following line:
if (decisionDelegation != null)


If the decisionDelegation variable is not null, the framework will be in charge of
creating a new instance of the delegated class that we specified, and then it will call
the decide() method defined by the DecisionHandler interface. This decide()
method will return a String with the name of the leaving transition chosen
by our delegated class.
If the decisionDelegation variable is null, the code jumps to the next decision
phase, where it checks for a decision expression with the following line:
else if (decisionExpression != null)

This decisionExpression variable makes reference to an expression described in
Expression Language (EL), the same language that is used in JavaServer Faces to
reference variables in custom JSP tags.
For more information about this language and how this expression can be built,
please visit the following page:
http://java.sun.com/j2ee/1.4/docs/tutorial/doc/JSPIntro7.html

If this decisionExpression is not null, the content of the expression is retrieved and
passed on to the Expression Language evaluator, which will decide what transition
to take.
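For example (a sketch, assuming a process variable named approvalPath holds the name of the transition to take; the node and transition names are illustrative), a decision using an expression could be declared in jPDL as:

```
<decision name="choose-path" expression="#{approvalPath}">
  <transition name="accept" to="accepted" />
  <transition name="reject" to="rejected" />
</decision>
```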
At last, if we don't specify a delegation class or a decision expression, the third
phase will check for conditions. These conditions live inside each transition
defined inside the decision node. If you look at the code that handles conditions,
you will find two blocks, because some backward compatibility needs to be
maintained. You should focus on the second one, where the following code
is used:
// new mode based on conditions in the transition itself
for (Transition candidate : leavingTransitions) {
    String conditionExpression = candidate.getCondition();
    if (conditionExpression != null) {
        Object result = JbpmExpressionEvaluator.evaluate(
                conditionExpression, executionContext);
        if (Boolean.TRUE.equals(result)) {
            transition = candidate;
            break;
        }
    }
}

Here, each leaving transition is analyzed to find out whether it has a condition
declared inside it. These conditions then get evaluated and, if one of them
evaluates to true, the loop stops and that transition is taken.

Summarizing the above, we can say that the decision priority will be:
•	Delegation class
•	Expression evaluation
•	Conditions inside the transitions (for backward compatibility)
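The priority order above can be sketched with a small, self-contained snippet. This is an illustration of the selection policy only, not jBPM's actual code; pickTransition and all the other names are invented for the example.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

// Illustration of the Decision node's selection policy; not jBPM API.
public class DecisionPriorityDemo {

    // Phase 1: delegation class (decide()); Phase 2: decision expression;
    // Phase 3: per-transition conditions; fallback: default transition.
    static String pickTransition(Supplier<String> delegation,
                                 Supplier<String> expression,
                                 Map<String, Boolean> conditions,
                                 String defaultTransition) {
        if (delegation != null) {
            return delegation.get();           // decide() wins outright
        }
        if (expression != null) {
            return expression.get();           // EL evaluation comes next
        }
        for (Map.Entry<String, Boolean> t : conditions.entrySet()) {
            if (Boolean.TRUE.equals(t.getValue())) {
                return t.getKey();             // first true condition wins
            }
        }
        return defaultTransition;              // first leaving transition
    }

    public static void main(String[] args) {
        Map<String, Boolean> conditions = new LinkedHashMap<>();
        conditions.put("toReject", false);
        conditions.put("toApprove", true);

        // No delegation or expression: the first true condition is taken.
        System.out.println(
            pickTransition(null, null, conditions, "toReject"));
        // A delegation class has priority over everything else.
        System.out.println(
            pickTransition(() -> "toEscalate", null, conditions, "toReject"));
    }
}
```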

Transitions: joining all my nodes

Last, but not least, we have the transition element, which joins our nodes in
the process definition. These transitions define the direction of our process
graph. As we discussed when we talked about GOP, transitions are stored in the
source node, in a list called leavingTransitions, and each of them holds a
reference to the destination node.

These transitions are represented by the class called Transition, which is also a
GraphElement, but not a subclass of Node.
As all transitions are GraphElements, they have a minimal lifecycle.
Transitions in particular have a single-stage lifecycle, represented by the take()
method, where technical details (custom behavior) can be added. This is important
because we get an extra stage (represented by this event) between the
leave() method of the source node and the enter() method of the
destination node, giving us extra flexibility for adding custom logic.
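The ordering of these stages can be illustrated with a tiny, self-contained sketch (invented names, not jBPM code): custom logic attached to the transition runs after the source node's leave stage and before the destination node's enter stage.

```java
import java.util.ArrayList;
import java.util.List;

// Illustration only: the three hooks fire in the order
// leave (source node) -> take (transition) -> enter (destination node).
public class TransitionStagesDemo {

    static final List<String> fired = new ArrayList<>();

    static void leaveSource()      { fired.add("node-leave"); }
    static void takeTransition()   { fired.add("transition-take"); } // custom logic hook
    static void enterDestination() { fired.add("node-enter"); }

    public static void main(String[] args) {
        leaveSource();
        takeTransition();
        enterDestination();
        System.out.println(fired); // [node-leave, transition-take, node-enter]
    }
}
```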


If we look at the code of this class, we will notice that Transition doesn't have a
hierarchical relationship with any other class (other than java.lang.Object, of
course). If you think about it, there is no need to extend this concept, because it is
generic and flexible enough to support any custom logic inside it.

If we use the DecisionHandler inside the decision node where we delegate
the decision to an external class, the conditions will be inside that class and the
transitions will be generated clean, as we can see in the following image:

Try to add some expressions and see how the code of the conditions is generated
inside the transition tag.

Executing our processes

Until now, we have discussed five basic nodes and how information is stored in
each of them. We have also seen how each node behaves in the execution
stage, but we haven't seen how to execute them.
Basically, a process execution is represented by an object called
ProcessInstance. This object contains all the information related to
just one execution of our process definition.
In order to create one process execution, we need to have the
ProcessDefinition object, which represents the process we want to
execute. We saw a little preview of this in the last section of Chapter 3,
Setting Up our Tools.


Now we are going to analyze some sections of the ProcessInstance class to see
some important concepts. These concepts will be extended when you start using the
framework and realize that you need to be sure about the runtime behavior of your
process and how to interact with this just-created execution.
Some interesting properties in the ProcessInstance class that we need to know are:
long id;
int version;
protected Date start;
protected Date end;
protected ProcessDefinition processDefinition;
protected Token rootToken;

As you can see here, two important properties are the Dates start and end. These
two properties store the specific points in time when the process starts and when it
ends. It is important to store this information to be able to analyze statistics about
the processes' execution, which will help us optimize them. You can also see that
the ProcessDefinition object is stored in order to know which process we
are executing.
Finally, the most important property here is the Token rootToken property. This
property represents the executional aspects of our process. The word token is
used to represent how the execution is passed from one node to the next until it
reaches a node of the End State type.
As we have seen in Chapter 3, the main constructors for the ProcessInstance
class are:
public ProcessInstance(ProcessDefinition processDefinition) {
    this(processDefinition, null, null);
}
public ProcessInstance(ProcessDefinition processDefinition,
                       Map variables) {
    this(processDefinition, variables, null);
}

Both of them call the real implementation of the constructor, which initializes
some internal variables in order to correctly start the process execution. In this
initialization, a new token is created, the StartState node is selected, and the
token is placed inside this StartState node, which behaves as a wait state until
someone starts the process flow.


In the second constructor, a Map is passed as the second argument.
This map contains all the variables needed before the process starts, for example
because some information is required in the early stages of the process.
If we look at the methods that the class exposes, we find that the most
important ones are wrapper methods that interact with this internal token. If
we jump to the operations section of the class, we find the first one: the
signal() method, which comes in three flavors:
// operations /////////////////////////////////////////////////////////

/**
 * instructs the main path of execution to continue by
 * taking the default transition on the current node.
 *
 * @throws IllegalStateException if the token is not active.
 */
public void signal() {
    if (hasEnded()) {
        throw new IllegalStateException(
            "couldn't signal token : token has ended");
    }
    rootToken.signal();
}

/**
 * instructs the main path of execution to continue by
 * taking the specified transition on the current node.
 *
 * @throws IllegalStateException if the token is not active.
 */
public void signal(String transitionName) {
    if (hasEnded()) {
        throw new IllegalStateException(
            "couldn't signal token : token has ended");
    }
    rootToken.signal(transitionName);
}

/**
 * instructs the main path of execution to continue by
 * taking the specified transition on the current node.
 *
 * @throws IllegalStateException if the token is not active.
 */
public void signal(Transition transition) {
    if (hasEnded()) {
        throw new IllegalStateException(
            "couldn't signal token : token has ended");
    }
    rootToken.signal(transition);
}

All of the methods in the operations section are in charge of letting us interact
with the process execution. The signal method informs the process that the
token needs to continue the execution to the next node when it is stopped
in a wait-state situation. This method, and the word signal itself, are used to
represent an external event that influences the current status of the process. As
we can see in the following image, the process execution will stop when it reaches
a node that behaves like a wait state. So, if we try to access the rootToken of the
ProcessInstance object and get the node where it is currently stopped, we will
find that this node is behaving as a wait state.
The next method inside the operations section is the end method, which terminates
our process execution by setting the end property. All the processes that have
this property set to a non-null value will be excluded from all the lists that show
the currently running process executions.
/**
 * ends (=cancels) this process instance and all the tokens in it.
 */
public void end() {
    // end the main path of execution
    rootToken.end();

    if (end == null) {
        // mark this process instance as ended
        setEnd(Clock.getCurrentTime());
        // fire the process-end event
        ExecutionContext executionContext =
            new ExecutionContext(rootToken);
        processDefinition.fireEvent(Event.EVENTTYPE_PROCESS_END,
            executionContext);
        // add the process instance end log
        rootToken.addLog(new ProcessInstanceEndLog());
        ...
        ExecutionContext superExecutionContext =
            new ExecutionContext(superProcessToken);
        superExecutionContext.setSubProcessInstance(this);
        superProcessToken.signal(superExecutionContext);
    }
    ...
}

As you can see, the end method first gets the current token and ends it. Then
it gets the current timestamp and sets the end date. Finally, it fires the
Event.EVENTTYPE_PROCESS_END event to advise everyone that the process instance
has been completed. It is also important to note the comment at the top of
the method, which tells us that ended can also be interpreted as canceled. This
comment matters because, in most cases, we don't need to end our process
instances explicitly: they end automatically when the execution reaches one of the
possible end states. This method is only used when we want to finish the process
at a point at which it is not supposed to end.
Basically, you only need your process definition object; then you create a new
ProcessInstance and call the signal method for the process to start.
We can see that in the following code snippet:
ProcessDefinition processDefinition =
    ProcessDefinition.parseXmlResource("simple/processdefinition.xml");
ProcessInstance instance = new ProcessInstance(processDefinition);
instance.signal();

Now you have all the tools required to start modeling the basic process. We are
now ready to continue. So, get your tools, because it's time to get your hands dirty
with code!


Summary

We have learned about the basic nodes in this chapter; you need to understand
how these nodes work in order to correctly model your business processes. It is
important to note that these nodes implement only the basic functionality; more
advanced nodes will be discussed in the following chapters.
In this chapter, we started with the most generic node, called Node. This node is
the main word in our jPDL language. By extending it, we define all the other
words, which gives us extreme flexibility to represent real business scenarios.
Then we analyzed the StartState and EndState nodes. Both nodes have very
similar implementations and lifecycles, and they allow us to create correct
definitions using the jPDL syntax.
Finally, the State and Decision nodes were analyzed. The first one, the State
node, has the most simplistic implementation, letting us describe/model a wait-state
situation, where the node's job is only to wait, doing nothing, until some external
event comes. The second one, the Decision node, has a little more logic inside it,
and it helps us in situations where we need to make an automatic decision inside
our process flow.
In the next chapter, we will build real business processes inside real applications
using these basic nodes. These real processes will help us to understand how our
designed process will behave in a real runtime environment.


Getting Your Hands Dirty
with jPDL
In this chapter, we will practice all the conceptual and theoretical points that we
have discussed in the previous chapters. Here we will cover the main points that
you need in order to start working with the jBPM framework.
This chapter tackles, in a tutorial fashion, the first steps that you need to know in
order to start using the framework on the right foot. We will follow a real example
and transform a real situation into requirements for a real jBPM implementation.
This example will introduce all the basic jPDL nodes used in common
situations for modeling real-world scenarios. That's why this chapter will cover
the following topics:
•	Introduction to the recruiting example
•	Analyzing the example requirements
•	Modeling a formal description
•	Adding technical details to our formal description
•	Running our processes

We have already seen all the basic nodes that the jPDL language provides. Now it's
time to see them all in action. It is very important for newcomers to see how
the concepts discussed in previous chapters are translated into running processes
using the jBPM framework.
The idea of this short chapter is to show you a real process implementation. We will
try to cover every technical aspect involved in development, in order to clarify not
only your doubts about modeling, but also about the framework's behavior.


How is this example structured?

In this chapter, we will see a real case where a company has some requirements to
improve an existing, but not automated, process.
The current process is handled without a software solution, so in practice we
need to see how the process works every day to find out the requirements for our
implementation. The textual/oral description of the process will be our first input,
and we will use it to discover and formalize our business process definition.
Once we have a clear view of the situation that we are modeling, we will draw
the process using GPD, and analyze the most important points of the modeling
phase. Once we have a valid jPDL process artifact, we will analyze what
steps are required for the process to run in an execution environment.
So, we will add all the technical details needed to allow our process to run.
At last, we will see how the process behaves at runtime, how we can improve
the described process, how we can adapt the current process to future changes,
and so on.

Key points that you need to remember

In this kind of example, you need to focus on the translation that occurs
from the business domain to the technical domain. You need to carefully analyze
how the business requirements are transformed into a formal model description that
can be optimized.
Another key point here is how this formal description of our business scenario
needs to be configured (by adding technical details) in order to run and guide the
organization through its processes.
I also want you to focus on the semantics of each node used to model our process. If
you don't know the exact meaning of the provided nodes, you will probably end up
describing your scenario with the wrong words.
You also need to be able to distinguish between a business analyst's model, drawn
without knowledge of the jPDL language semantics, and a formal jPDL process
definition. At the same time, you have to be able to do the translations needed
between these two worlds. If you have business analysts trained in jPDL, you will
not have to do this kind of translation and your life will be easier. Understanding
the nodes' semantics will help you teach the business analysts the correct meaning
of jPDL processes.


Analyzing business requirements

Here we will describe the requirements that need to be covered by the recruiting
team inside an IT company. These requirements will be the first input to be analyzed
in order to discover the business process behind them.
These requirements are expressed in natural language, just plain English. We will
get these requirements by talking to our clients; in this case, we will talk to the
manager of an IT company called MyIT Inc. in order to find out what is going
on in the company's recruiting process.
In most cases, this will be a business analyst's job, but as a developer you need to
be aware of the different situations that the business scenario can present. This is
very important, because if you don't understand how the real situation is subdivided
into different behavioral patterns, you will not be able to find the best way to model it.
You will also start to see how iterative this approach is. This means that you will
first get a big picture of what is going on in the company, and then, in order to
formalize this business knowledge, you will start adding details to represent the real
situation in an accurate way.

Business requirements

In this section, we will see a transcription of our talk with the MyIT Inc. manager.
First, however, we need to know the company's background and, specifically, how it
currently works. Just a few details to understand the context of our talk with the
company manager will be sufficient.
The recruiting department of MyIT Inc. is currently managed without any
information system. They just use some simple forms that candidates fill in at
different stages during the interviews. They don't have the recruiting process
formalized in any way, just an abstract description in their heads about what
tasks they need to complete in order to hire a new employee when needed.
In this case, the MyIT Inc. manager tells us the following functional requirements
about the recruiting process that is currently used in the company:


We have a lot of demanding projects; that's why we need to hire new employees on a
regular basis. We already have a common way to handle these requests, initiated by
project leaders who need to incorporate new members into their teams.
When a project leader notices that he/she needs a new team member, he/she will
generate a request to the human resources department of the company. In this
request, he/she will specify the main characteristics needed in the new team
member and the job position description.
When someone in the human resources team sees the request, they will start
looking for candidates to fulfill it. This team has two ways of looking
for new candidates:
•	By publishing the job position request in IT magazines
•	By searching the resume database that is available to the company

When a possible candidate is found through these methods, a set of interviews will
begin. The interviews are divided into four stages that the candidate needs to go
through in order to be hired.
These stages will contain the following activities that need to be performed in the
prescribed order:
•	Initial interview: The human resources team coordinates an initial interview
with each possible candidate found. In this interview, a basic questionnaire
about the candidate's previous jobs and some personal data is collected.
•	Technical interview: During the technical interview stage, each candidate is
evaluated only on the technical aspects required for this particular project.
That is why a project member will conduct this interview.
•	Medical checkups: Some physical and psychological examinations need to
be done in order to know that the candidate is healthy and capable of doing
the required job. This stage will include the multiple checkups that the
company needs to determine whether the candidate is apt for the required task.
•	Final acceptance: In this last phase, the candidate will meet the project
manager. The project manager is in charge of the final resolution. He will
decide if the candidate is the right one for that job position. If the outcome
of this interview is successful, the candidate is hired and all the information
needed for that candidate to start working is created.
If a candidate reaches the last phase and is successfully accepted, we need to
inform the recruiting team that all the other candidates' interviews need to be
aborted, because the job position is already filled.


At this point, we need to analyze and evaluate the manager's requirements and find
a graphical way to express the stages needed to hire a new employee. Our first
approach needs to be simple, and we need to validate it with the MyIT Inc. manager.
Let's see the first draft of our process:

[Hand sketch of the process: Initial Interview → Technical Interview →
Medical Check Ups → Final Acceptance]

With this image, we are able to describe the recruiting process. This first approach
can obviously be validated with the MyIT Inc. manager. This first draft tells us how
our process will look, and it's the first step towards defining which activities will be
included in our model and which will not. In real implementations, these graphs
can be drawn with Microsoft Visio, DIA (an open source project), or just by hand.
The main idea of this first approach is to have a description that can be validated
and understood by every MyIT Inc. employee.
This image is only a translation of the requirements that we heard from the manager,
using common sense and trying to represent how the situation looks in real life.
In this case, we can say that the manager of MyIT Inc. can be considered the
stakeholder and the Subject Matter Expert (SME), who knows how things happen
inside the company.
Once the graph is validated and understood by the stakeholder, we can use our
formal language, jPDL, to create a formal model of this discovered process.


The idea at this point is to create a jPDL process definition and discard the old
graph. From now on, we will continue with the jPDL graphical representation of the
process. Here you can explain to the manager that all new changes that affect
the process will go directly into the jPDL-defined process.
Until now, our artifact has suffered the following transformations:

[Artifact transformations: description given by the manager → transcript →
hand sketch of the process → formalization in jPDL/XML]

The final artifact (the jPDL process definition) will let us begin the
implementation of all the technical details needed by the process in order
to run in an execution environment.
So, let's analyze how the jPDL representation will look for this first approach in the
following figure:
[jPDL process diagram: «start-state» START → «state» Initial Interview →
«state» Technical Interview → «state» Medical Check Ups →
«state» Final Acceptance → «end-state» END]


At this point, we don't add any technical details; we just draw the process. One key
point to bear in mind in this phase is that we need to understand which node we will
use to represent each activity in our process definition.
Remember that each node provided by jPDL has its own semantics and meaning.
You also need to remember that this graph must be understood by the manager, so
you should use business language in the activity names. For this first approach, we
use state nodes to represent the fact that each activity happens outside the process
execution. In other words, we need to inform the process when each activity ends,
which means that the next activity in the chain will then be executed. From the
process perspective, it only needs to wait until the people in the company do
their tasks.
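A first-draft jPDL definition along these lines (a sketch; the node and transition names are illustrative) could look like the following, where each state simply waits until the corresponding real-world interview is reported as finished via signal():

```
<process-definition name="recruiting-draft">
  <start-state name="start">
    <transition to="initial-interview" />
  </start-state>
  <state name="initial-interview">
    <transition to="technical-interview" />
  </state>
  <state name="technical-interview">
    <transition to="medical-checkups" />
  </state>
  <state name="medical-checkups">
    <transition to="final-acceptance" />
  </state>
  <state name="final-acceptance">
    <transition to="end" />
  </state>
  <end-state name="end" />
</process-definition>
```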

Analyzing the proposed formal definition

Now that we have our first iteration, which defines some of the important aspects
described by the MyIT Inc. manager, some questions start to arise about whether
our first sketch is complete enough or not. We need to be sure that it represents
the whole situation and defines the activities that the candidates, and all the people
involved in the process, need in order to fill the job position with a new employee.
We will use the following set of questions and their answers as new requirements
to start the second iteration of improvement.
The idea is that each iteration brings our process one step closer
to reflecting the real situation in the company. To reach that
goal, we also need to focus on the people who complete each
activity inside our process. They will know whether the process is
complete or not. They know all the alternative activities that could
happen during the process execution.

If we look at the proposed jPDL definition, we can ask the following questions to
add more details to our definition:
•	How about the first part of the description? Where is it represented? The
proposed jPDL process just represents the interview activities, but it
doesn't represent the request and the creation of the new job position.
•	What happens if the candidate goes to the first interview and he/she doesn't
fulfill the requirements for that job position?
•	How many medical checkups are done for each candidate?
•	What happens if we fill the job position? What happens with the rest of
the candidates?


These questions will be answered by the MyIT Inc. manager and the people who
are involved in the process activities (business users). The information provided by
the following answers will be our second input, which will be transformed
into functional requirements. These requirements need to be reflected in our formal
description once again. Let's take a look at the answers:
How about the first part of the description? Where is it represented?
The proposed jPDL process just represents the activities of the
interviews, but it doesn't represent the request and the creation of the
new job position.
Yes, we will need to add that part too. It's very important for us to have
all the processes represented from the beginning to the end. Also, you
need to understand that the interviews are undertaken for each candidate
found, and the request to fulfill a new job position is created just once. So
the relationship between the interviews and the new employee request is
1 to N, because for one request, we can interview N candidates until the
job position is fulfilled.
What happens if the candidate goes to the first interview and he/she
doesn't fulfill the requirements for that job position?
The candidate that doesn't pass an interview is automatically discarded.
There is no need to continue with the following activities if one of the
interviews is not completed successfully.
How many medical checkups are done for each candidate?
All the candidates need to pass three examinations. The first one
will check the physical status of the candidate, the second will check
the psychological aspects, and the third one will be a detailed heart
examination.
What happens if a candidate fulfills the job position? What happens
with the rest of the candidates?
If one of the candidates is accepted, all the remaining interviews for all
the other candidates need to be aborted.


Refactoring our previously defined process

Now, with the answers to our questions, we can add some extra nodes to represent
the new information provided. Take a look at the following image of the process;
you will find new nodes added, and in this section we will discuss the reason for
each of them.
The new proposed process is much more complex than the first one, but the main idea is
still intact. You should be able to understand the process flow without problems. It's
more complex just because it represents the real situation more closely.
[Process diagram: Interview Possible Candidate (start) → Initial Interview → Initial Interview Passed? (decision; "No - Find a New Candidate") → Technical Interview → Technical Interview Passed? (decision; "No - Find a New Candidate") → fork ("to Physical Check Up", "to Psychological Check Up", "to Heart Check Up") → Physical Check Up / Psychological Check Up / Heart Check Up → join → Medical Exams passed? (decision; "No - Find a New Candidate") → Last Interview: Project leader Interview → Final Acceptance? (decision; "No - Find a New Candidate") → Create Workstation → Candidate Accepted (end). Rejected candidates go to the Candidate Discarded end state.]


In this section, we will review all the nodes added at each stage of the process.
With this, you will have a clear example of how a real situation is translated
to a specific node type in our process definition, and how we iteratively add
information as we discover more and more details about the situation.
[Diagram fragment: Interview Possible Candidate (start) → Initial Interview → Initial Interview Passed? (decision) — "Approved - Go to Technical Interview"]

If you take a look at the CandidateInterviews process (/RecruitingProcess/src/
main/resources/jpdl/CandidateInterviews/processdefinition.xml), you
will see that the first node (Start State node) doesn't have the default name Start/
start-state1. Here, I have chosen a more business-friendly name, Interview Possible
Candidate. This name looks long, but it says precisely what we are going to do
with the process. At this point, a possible candidate has been found, and the process will
interview him/her in order to decide if this candidate is the correct one for the job.
Using the business terminology will help us to have a more descriptive
process graph that can be easily validated by the stakeholders.

The second node, called Initial Interview, will represent the first interview for each
selected candidate. This means that someone in the recruiting team will schedule a
face-to-face meeting with the candidate for the first interview. If you take a close
look at the process definition graph or the jPDL process definition XML, you will
find that for this activity I have chosen to use a State node. I chose this type of node
because the activity of having an interview with the candidate is an external activity
that needs to be done by a person and not by the process. The process execution
must wait until this activity is completed by the candidate and the recruiting
team. In later chapters, we will see how to use a more specialized node to represent
these human activity situations. For now, we will use state nodes to represent all
the human activities in our processes.
Once the Initial Interview is completed, an automatic decision node will evaluate
the outcome of the interview to decide if the candidate must be discarded, or if
he/she should continue to the next stage of the process.

This will look like:

[Diagram: Initial Interview → Initial Interview Passed? (decision); "No - Find a new Candidate" → Candidate Discarded; "Approved - Go to Technical Interview" → Technical Interview]

Note that this is not the only way to model these kinds of situations; feel
free to try other combinations of nodes to represent the same behavior.
The decision node is used to decide through which transition the process will continue
its execution. A decision node can define N leaving transitions (in this case, only
two), but at runtime just one will be chosen to continue.
Remember that the Decision node takes a transition based on one of the following two
evaluation methods:

• Using an EL expression
• Using a delegated class that implements the DecisionHandler interface

No matter which method we choose, the information that is used to make the
decision needs to be set before the node is reached by the process execution.
In this situation, the information used in the evaluation method of the decision
node will be set in the state node called Initial Interview, as the interview outcome.
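To make the second evaluation method concrete, here is a minimal sketch of that decision logic in plain Java. The class and the variable name INITIAL_INTERVIEW_PASSED are illustrative assumptions; in the real project the logic lives in ApproveCandidateSkillsDecisionHandler, which implements jBPM's DecisionHandler interface and returns the name of the transition to take:

```java
import java.util.Map;

// Illustrative stand-in for a jBPM DecisionHandler: the decide logic
// inspects a process variable and returns the name of the leaving
// transition that the process execution should take.
public class InterviewDecisionSketch {

    // Hypothetical variable name; the real handler reads the interview
    // outcome stored by the previous state node's action handler.
    static final String OUTCOME_VAR = "INITIAL_INTERVIEW_PASSED";

    public static String decide(Map<String, Object> processVariables) {
        Object outcome = processVariables.get(OUTCOME_VAR);
        if (Boolean.TRUE.equals(outcome)) {
            return "Approved - Go to Technical Interview";
        }
        return "No - Find a new Candidate";
    }
}
```

The key point is that the handler does not move the execution itself; it only names the transition, and the framework follows it.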
Another way to model this situation is by defining
multiple leaving transitions from the state node:

[Diagram: state1 with two leaving transitions — "to state2" → state2, and "to state3" → state3]


This approach puts the following logic inside an action of the state node:

1. Determine which transition to take, based on the interview outcome.
2. Explicitly signal one of the N leaving transitions defined, based on the outcome of that logic.

This approach tends to be more difficult to maintain than a detached
decision node that handles all that logic. Basically, it is up to you to decide
how to model these kinds of situations.
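In jPDL, this alternative shape is just a state node with several named leaving transitions; the action attached to the state signals one of them by name. A sketch (node and transition names follow the process graph above; exact attributes per jPDL 3 syntax):

```xml
<state name="Initial Interview">
  <!-- an action attached to this state decides which named
       transition to signal, based on the interview outcome -->
  <transition name="Approved - Go to Technical Interview" to="Technical Interview"/>
  <transition name="No - Find a new Candidate" to="Candidate Discarded"/>
</state>
```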

The pattern of using a state node followed by a decision node, to decide whether the
previous activity was completed with the desired outcome, is applied throughout
all the process stages in order to decide if the candidate can continue or not,
based on each activity's outcome.
The next stage described in the process definition looks exactly the same as the first
one: the Technical Interview looks exactly like the Initial Interview stage.
It also includes a decision node to evaluate the outcome of this specific interview.
If the candidate passes the first two interviews, some medical
examinations need to be taken in the third stage.
As these checkups have to be done in different buildings across the city, and
all of them are independent of each other, a fork node is
used to represent this temporal independence. Take a look at the following image:

[Diagram: fork ("to Physical Check Up", "to Psychological Check Up", "to Heart Check Up") → Physical Check Up / Heart Check Up / Psychological Check Up → join]

Here we need to understand that the Fork and Join nodes are used to define behavior,
not to represent a specific activity by itself. In this situation, the candidate has the
possibility to choose which exam to take first. The only restriction that the candidate
has is that he/she needs to complete all the activities to continue to the next stage. It
is the responsibility of the Join node to wait for all the activities between the Fork and
Join nodes to complete before it can continue with the execution.

This section of the modeled process will behave as follows and represent the
following situations:

• The process execution arrives at the fork node. (Note that the fork node doesn't have a one-to-one relationship with any real-life activity. It is just used to represent the concurrent execution of the activities.)
• The fork triggers three activities, in this case represented by state nodes. This is because the checkups will be done by external actors in the process. In other words, each of these activities will represent a wait state that ends when each doctor finishes the candidate's checkup and notifies its outcome.
• When the three activities end, the process goes through the join node and propagates the execution to the decision node to evaluate the outcome of the three medical checkups. If the candidate doesn't have three successful outcomes, he/she will automatically be discarded.
We use the fork node because the situation's behavior can be
modeled as concurrent paths of execution. A detailed analysis of
the fork node will take place in the following chapters, but it's
important for you to play with it a little to start getting to know this
node type. Try to understand what we are doing with it here.
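The fork/join structure just described can be sketched in jPDL as follows. The node and transition names follow the process graph; the fork and join names are hypothetical, and the real processdefinition.xml in the project may differ in detail:

```xml
<fork name="Medical Checkups">
  <transition name="to Physical Check Up" to="Physical Check Up"/>
  <transition name="to Psychological Check Up" to="Psychological Check Up"/>
  <transition name="to Heart Check Up" to="Heart Check Up"/>
</fork>

<!-- each checkup is a wait state completed by an external actor -->
<state name="Physical Check Up">
  <transition to="join1"/>
</state>
<state name="Psychological Check Up">
  <transition to="join1"/>
</state>
<state name="Heart Check Up">
  <transition to="join1"/>
</state>

<!-- the join waits for all three child tokens before continuing -->
<join name="join1">
  <transition to="Medical Exams passed?"/>
</join>
```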

Describing how the job position is requested

In the previous section, we found the answers to most of our questions; however, a few
remain unanswered:

• How is the first part of the process represented? How can we track when the new job position is discovered, when the request for that job position is created, and when this job position is fulfilled?
• Why can't we add more activities to the currently defined process? What happens if we add the create request, find a candidate, and job position fulfilled activities inside the interview process?


The answers to these questions are simple. We cannot add these proposed nodes
to the same process definition, because the interview process needs to be carried
out (needs to be instantiated) once for each candidate that the recruiting team finds.
Basically, we need to decouple all these activities into two processes. As the MyIT
Inc. manager said, the relationship between these activities is that one job request
will be associated with N interview processes.
The other important thing to understand here is that both processes can be
decoupled without using a parent/child relationship. In this case, we need to create
a new interview process instance when a new candidate is found. In other words,
we don't know, when the request is created, how many interview process instances
will be created. Therefore, we need to be able to create them dynamically.
We will introduce a new process that will define these new activities. We need
a separate concept that creates a new candidate interviews
process on demand, based on the number of candidates found by the human resources team.
This new process will be called "Request Job Position" and will include the
following activities:
• Create job request: Different project leaders can create different job requests based on their needs. Each time a project leader needs to hire a new employee, a new instance of this process will be created, where the first activity is the creation of the request.
• Finding a candidate: This activity will cover the phase when the search starts. Each time the human resources team finds a new candidate inside this activity, they will create a new instance of the candidate interviews process. When an instance of the candidate interviews process finds a candidate who fulfills all the requirements for that job position, all the remaining interviews need to be aborted.

We can see the relationship between the two processes in the following figure:

[Diagram: Request Job Position (Create Job Request → Find Candidate) creates, for each possible candidate, a Candidate Interviews process instance; "Candidate Found" ends the loop]

If we express the Request Job Position process in jPDL, we will obtain something
like this:
[Diagram: Job Position Opened (start) → Create request → Finding Candidate (state) → Job Position Fulfilled (end)]
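Expressed in jPDL 3 syntax, this process might look roughly like the following sketch. The process name, the node types for each activity, and the handler's package (inferred from the project layout /src/test/org/jbpm/example/recruiting/) are assumptions; the book's chapter confirms only that CreateNewJobPositionRequestActionHandler runs on the node-enter event of Create request:

```xml
<process-definition name="RequestJobPosition">
  <start-state name="Job Position Opened">
    <transition to="Create request"/>
  </start-state>

  <state name="Create request">
    <event type="node-enter">
      <!-- package name assumed from the project layout -->
      <action class="org.jbpm.example.recruiting.CreateNewJobPositionRequestActionHandler"/>
    </event>
    <transition to="Finding Candidate"/>
  </state>

  <!-- wait state: HR creates Candidate Interviews instances from here -->
  <state name="Finding Candidate">
    <transition to="Job Position Fulfilled"/>
  </state>

  <end-state name="Job Position Fulfilled"/>
</process-definition>
```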

In the following section, we will see two different environments in which we can run
our process. We need to understand the differences between them in order to
know how the process will behave at runtime.

Environment possibilities

The configuration we need depends on the way we choose to embed the framework
in our application. We have three main possibilities:

• Standalone applications
• Web applications
• Enterprise applications (this will be discussed in Chapter 12, Going Enterprise)

Standalone application with jBPM embedded

In Java Standard Edition (J2SE) applications, we can embed jBPM and connect it
directly to a database in order to store our processes. This scenario will look like the
following image:

[Diagram: Standalone Application (containing jbpm-jpdl.jar, configured by jbpm.cfg.xml / hibernate.cfg.xml) connecting directly to a Database]


In this case, we need to include the jBPM jars in our application classpath in order
for it to work, because our application will use jBPM directly in our classes.
In this scenario, the end users will interact with a desktop application that includes
the jbpm-jpdl.jar file. This also means that, during the development process, the
developers will need to know the jBPM APIs in order to interact with the different
business processes.
It's important for you to know that the configuration files, such as
hibernate.cfg.xml and jbpm.cfg.xml, will be configured to access the
database with a direct JDBC connection.
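As an illustration, the JDBC portion of such a hibernate.cfg.xml could look roughly like the following. The driver, URL, dialect, and credentials here are placeholders (an in-memory HSQLDB is a common choice for examples); the actual file shipped with the project may differ:

```xml
<hibernate-configuration>
  <session-factory>
    <!-- direct JDBC connection; all values below are placeholders -->
    <property name="hibernate.connection.driver_class">org.hsqldb.jdbcDriver</property>
    <property name="hibernate.connection.url">jdbc:hsqldb:mem:jbpm</property>
    <property name="hibernate.connection.username">sa</property>
    <property name="hibernate.connection.password"></property>
    <property name="hibernate.dialect">org.hibernate.dialect.HSQLDialect</property>
  </session-factory>
</hibernate-configuration>
```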

Web application with jBPM dependency

This option varies, depending on whether your application will run on an
application server or just inside a servlet container. This scenario will look like:

[Diagram: Application running inside a Servlet Container, with Transactions, Services, and Libraries (jbpm-jpdl.jar, jbpm.cfg.xml / hibernate.cfg.xml); the application reaches the Database through a Data Source]

In this case, we can choose whether our application will include the jBPM jars
inside it, or whether the container will provide these libraries. But once again, our
application will use the jBPM APIs directly.
In this scenario, the end user will interact with the process using a web page, and the
application will be configured to access the database either by using a JDBC driver
directly or through a DataSource configuration.
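For the DataSource option, the direct JDBC properties in hibernate.cfg.xml are replaced by a JNDI lookup of the container-managed DataSource. A sketch (the JNDI name and dialect are placeholders specific to your container and database):

```xml
<!-- replaces the driver_class/url/username/password properties -->
<property name="hibernate.connection.datasource">java:comp/env/jdbc/JbpmDS</property>
<property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
```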


Running the recruiting example

In this section, we will cover the first configuration (the standalone one). This
configuration can also be used to develop a web application. We will execute and
test the whole process we have just defined in order to see how it behaves.
The process will live only in memory, and when the thread that starts it dies, all the
changes in that process will be lost. In other words, the process will start and end in
the same thread, without using database access to store the process status.

Running our process without using any services

In this section, we will see how our two processes will run using JUnit, so that we
can test their behavior. The idea is to know how to move the process from one state
to the other, and also to see what is really going on inside the framework.
Feel free to debug the source code provided here step by step, and also to step
into jBPM code to see how the jBPM classes interact in order to guide the
company's activities.
In this test, we will see how our two processes are chained logically in order to
simulate the real situation. By "logically", I mean that the two processes are manually
instantiated when they are needed. It is important to note this, because there are
situations where a process can be instantiated automatically, which is not the
case here.
Take a look at the project called /RecruitingProcess/. You will find both the
process definitions under /src/main/resources/jpdl. If you open the test called
RecruitingProcessWithOutServicesTestCase located inside /src/test/org/
jbpm/example/recruiting/, you will see a long test that shows how the process
behaves in a normal situation.
Here we will explain this execution in order to understand the expected behavior
and how this can be checked using JUnit asserts.


Normal flow test

If you take a look at the method called test_NormalFlowWithOneCandidate()
inside the test case, you will see that we are trying to execute and test the normal
flow of our defined process. We will simulate the situation where a new job position
request is created. Then in our test, a new candidate is found. This will mean that a
new candidate interview process will be created to evaluate if the candidate will get
the job or not.
This is a simple but large test, because the process has a lot of activities. I suggest you
take a look at the code and follow the comments inside it.
In a few lines, you will see the following behavior:
1. A new job position request is created. This will happen when a project
leader requires a new team member. This will be translated to an instance
of the Request Job Position process. Basically, we parse the jPDL XML
definition to obtain a ProcessDefinition object and then create a new
ProcessInstance from it.
2. Now we need to start this process. When we start it, the first
activity is to create the request. This means that someone needs to define
the requisites for the job position. These requisites will then be
matched against the candidate's resume to know if he/she has the required
skills. The requests (requisites) are created automatically to simulate the
developer job position. This is done inside the node-enter event of the
"Create Request" activity. You can take a look at the source code of the
CreateNewJobPositionRequestActionHandler class, where all this
magic occurs.
3. When this request is created, we need to continue the process to the next
activity. The next activity is called "Find Candidate". This activity will be
in charge of creating a new process instance for each candidate found by
the human resources team. In the test, you will see that a new candidate
is created and then a new instance of the Candidate Interviews process is
created. Also, in the test, some parameters/variables are initialized before
we start the process that we created. This is a common practice. You will
have a lot of situations like this one where you need to start a process, but
before you can start it, some variables need to be initialized. In this case, we
set the following three variables:


° REQUEST_TO_FULFILL: This variable will contain a reference to the process instance that was created to request a new job position.
° REQUEST_INFO: This variable will contain all the information that defines the job request. For example, this will contain the profile that the candidate's resume needs to fulfill in order to approve the first interview.
° CANDIDATE_INFO: This variable will contain all the candidate information needed by the process.

4. Once the variables are set with the correct information, the process can
be started. When the process is started, it stops in the "Initial Interview"
activity. In this activity, some data needs to be collected by the recruiting
team, and this again is simulated inside an action handler called
CollectCandidateDataActionHandler. In this action handler, you
will see that some information is added to the candidate object, which
is stored in the CANDIDATE_INFO process variable.
5. The information stored in the CANDIDATE_INFO process variable is analyzed
by the next node, called "Initial Interview Passed?", which uses a
decision handler (called ApproveCandidateSkillsDecisionHandler)
to decide whether the candidate will go to the next activity or he/she
will be discarded.
6. The same behavior is applied to the "Technical Interview" and "Technical
Interview Passed?" activities.
7. Once the technical interview is approved, the process goes directly to the
"Medical Checkups" stage where a fork node will split the path of execution
into three. At this point, three child tokens are created. We need to get each
of these tokens and signal them to end each activity.
8. When all the medical examinations are completed, the join node will
propagate the execution to the next decision node (called "Medical Exams
passed?"), which will evaluate whether the three medical check ups are
completed successfully.
9. If the medical exam evaluation indicates that the candidate is suitable
for the job, the process continues to the last stage. It goes directly to the
Project leader Interview, where it will be decided whether the candidate
is hired or not. The outcome of this interview is stored inside a
process variable called PROJECT_LEADER_INTERVIEW_OK inside
the ProjectLeaderInterviewActionHandler action handler.
That process variable is evaluated by the decision handler
(FinalAcceptanceDecisionHandler) placed inside the "Final
Acceptance?" activity.

10. If the outcome of the "Final Acceptance?" node is positive, then an automatic
activity is executed. This node called "Create WorkStation" will execute an
automatic activity, which will create the user in all the company systems. It
will generate a password for that user and finally, create the user's e-mail
account. It will then continue the execution to the "Candidate Accepted"
end state.
11. In the "Candidate Accepted" node, an action is executed to notify that the job
position is fulfilled. Basically, we end the other process using the reference of
the process stored in the variable called REQUEST_TO_FULFILL.
I strongly recommend that you open the project and debug all the tests to see exactly
what happens during the process execution. This will improve your understanding
of how the framework behaves. Feel free to add more candidates to the situation,
and more job requests, to see what happens.
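The first steps of the walkthrough can be sketched with the jBPM 3 API roughly as follows. This is a condensed, illustrative sketch, not the real test case: it assumes jbpm-jpdl.jar on the classpath, the resource paths shown in this chapter, a process name I made up for the request definition, and that the number of signal() calls matches the wait states in your definition:

```java
import org.jbpm.graph.def.ProcessDefinition;
import org.jbpm.graph.exe.ProcessInstance;
import org.jbpm.graph.exe.Token;

// Step 1: parse the jPDL XML into a ProcessDefinition, then create an instance.
ProcessDefinition requestDefinition = ProcessDefinition.parseXmlResource(
        "jpdl/RequestJobPosition/processdefinition.xml");
ProcessInstance request = new ProcessInstance(requestDefinition);

// Step 2: start it; the node-enter action creates the request, and the
// execution moves on to the "Finding Candidate" wait state.
request.signal();

// Step 3: a candidate is found, so create the Candidate Interviews
// instance and initialize its variables before starting it.
ProcessDefinition interviewsDefinition = ProcessDefinition.parseXmlResource(
        "jpdl/CandidateInterviews/processdefinition.xml");
ProcessInstance interviews = new ProcessInstance(interviewsDefinition);
interviews.getContextInstance().setVariable("REQUEST_TO_FULFILL", request);
// REQUEST_INFO and CANDIDATE_INFO would be set here as well.

// Steps 4-6: start the process and complete each interview wait state.
interviews.signal(); // reaches "Initial Interview"
interviews.signal(); // interview done; the decision node picks the transition

// Step 7: at the fork, signal each child token to complete the checkups.
Token root = interviews.getRootToken();
for (Object child : root.getChildren().values()) {
    ((Token) child).signal();
}
```

In the real test, JUnit asserts between these calls check which node each token is sitting in.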
When you read Chapter 6, Persistence, you will be able to configure this
process to use the persistence service that will store the process status inside
a relational database.

Summary

In this chapter, we saw a full test that runs two process definitions created from
real requisites. The important points covered in this chapter are:

• How to understand real-life processes and transform them into formal descriptions
• How this formal description behaves inside the framework
• How to test our process definitions

In the next chapter, we will cover the Persistence configurations that will let us
store our process executions inside the database. This will be very helpful in
situations where we need to wait for external actors or events to continue the
process execution.


Persistence
This chapter will talk about how the jBPM framework handles every task related
to the persistence and storage of our processes' information. When we use the word
persistence here, we are talking about storing the runtime status of our processes
outside RAM. We usually do that in order to have a faithful snapshot
of our process executions, which we can resume later when required. In the
particular case of jBPM, persistence means how the framework uses Hibernate
to store the process instances' status to support long-running processes. Here we
will discuss all the persistence mechanisms and all the persistence-related
framework configurations.
If you are planning an implementation that will use jBPM, this chapter is for
you, because there are many points/topics here that can affect your project structure,
and many advantages and design patterns that you can apply if you know
exactly how the framework works.
As you already know, the database structure of an application needs to represent
and store all the information for that application to work. If you are planning to
embed jBPM in your application, you also need to define where the framework
will put its own data.
In this chapter, we will cover the following topics:

• The reason for, and the importance of, persistence
• How persistence will affect our business process executions
• The APIs used to handle persistence in the jBPM framework
• Configuring persistence for your particular situation


Why do we need persistence?

Everybody knows that the most useful feature of persistence is the fact that it lets us
store all the information and status of each of our processes.
The question then is: when do we need to store all that information? The answer is
simple—every time the process needs to wait for an unknown period of time.
Remember that every time the process reaches a wait state, the
framework will check the persistence configuration (we will discuss this
configuration in the following sections). If the persistence service is correctly
configured, the framework will try to persist the current process status and all
the information contained inside it.
What is the idea behind that? More than an idea, it is a need: a strong requirement
to support long-running processes. Imagine that one of the activities in our process
could take an hour, a day, a month, even a year, or might never be completed.
Remember that we are translating our real processes into a formal specification,
and in reality these kinds of things happen. An example that clearly demonstrates
this situation is when someone is waiting for a payment. In times of economic crisis,
when people try to save money, we see that some payments are never made. Also,
in this kind of situation, where the activity must be carried out by a third party
outside the company boundaries, we don't have any control over the time needed
to complete the activity. If this payment never arrives and we don't use persistence,
our process will be using server resources just to wait.
If you think about it, we would consume server CPU cycles just waiting for something
to happen. This also means that in the case of a server crash, power down, or JVM
failure, our process status will be lost, because it only exists in the server RAM
where all our applications run.
That is why we need to talk about persistence: it is directly related to the support of
long-running processes. If you have some small and quick process
without wait states, you can just run it in memory without thinking about
persistence, but in all other cases you must analyze how persistence will
work for you.
As we have mentioned before, using persistence will also let us survive server
crashes without doing anything extra. Practically, we get fault tolerance for free,
just by configuring the persistence service. If our server crashes while our process
is stopped or waiting in a wait state activity, the stopped process will not be in the
server's RAM; it will be stored in the database, waiting for someone or some system
to bring it back up to complete the activity.


Dispelling an old myth

Another primary goal of this chapter is to give you a clear view of how the
framework works. This view is important because we hear a lot of confusing
terminology in the market and in the BPM field. In this section, we will attack the
term BPM Engine.
When we (especially I) hear BPM Engine, we automatically imagine a
service/application that is running all the time (24/7), holding your processes
(definitions and instances) and providing a way to interact with it. I imagine
some kind of server where I need to know which port I must use to be able to
communicate with it. This is not how it works in jBPM. We need to get the idea
of a big, heavy server dedicated to our processes out of our heads.
Remember that jBPM is a framework, and not a BPM engine. The way it works is very
different from that concept. And one of the reasons for this difference is the way the
framework uses persistence.

Framework/process interaction

When you first look at the official documentation of jBPM, it is very difficult
to see the stateless behavior between the framework and the database. Let's analyze
this with a simple process example. Imagine the following situation:

[Diagram: Sell Item → Wait For Payment → Dispatch Item]

In a situation like this, the process is started and runs until it reaches the
Wait For Payment node; at that point, the framework will take all the process instance
information and persist it, using Hibernate, in the configured database.

This will cause the process instance object to be removed from the server's RAM.
Basically, the following steps are executed from the beginning of the execution:
1. Get an already deployed process definition from the database. This will
be achieved using the framework APIs, which will generate a query
(first a Hibernate query that will be translated to a standard SQL query
by Hibernate) to retrieve the process definition and will populate a new
ProcessDefinition object.
2. Create a ProcessInstance instance using the retrieved ProcessDefinition
object. This step doesn't use the database at all, because it only creates a new
ProcessInstance object in memory. When the ProcessInstance object is
created, we can start calling the signal() method.
3. The process flows until it reaches a wait state. At this moment,
no matter whether the process flowed through one or one thousand nodes, the
process instance information is persisted for the first time in the database. In
this case, the framework will generate a set of insert and update queries to
persist all the ProcessInstance object information. As you may remember
from Chapter 4, jPDL Language, the ProcessInstance object holds all the
process status information, including the token information and all the
process variables in the ContextInstance object. When all the persistence is
done, these objects go out of the server memory and leave space for all
the other running instances.
4. When some external system completes the activity, it must inform the
process that it is ready to continue the execution. This is achieved by
retrieving the process information from the database and populating
a ProcessInstance object again with all the retrieved information.
In other words, when another system/thread wants to finish one
activity and continue to the next node, it retrieves the process status using
the process ID and signals the process to continue the execution.
5. The execution continues without using the database until the next wait state
is reached. In this case the end of the process is reached, and the process
instance status is persisted once again.
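The five steps above can be pictured with a small, self-contained Java sketch. The Map below stands in for the relational database, and WaitStateLifecycle, SimpleInstance, persist(), and load() are illustrative names only, not real jBPM classes or methods:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of the five steps above: execute in memory, persist at
// the wait state, restore by id, signal again, persist at the end.
public class WaitStateLifecycle {

    // Fake "database": process instance id -> node the instance is waiting in
    static final Map<Long, String> DB = new HashMap<>();
    static long nextId = 1;

    static class SimpleInstance {
        long id;
        String currentNode = "Sell Item";

        // Runs automatic nodes in memory until a wait state (or the end)
        void signal() {
            if (currentNode.equals("Sell Item")) {
                currentNode = "Wait For Payment"; // wait state: stop here
            } else if (currentNode.equals("Wait For Payment")) {
                currentNode = "Dispatch Item";    // automatic node
                currentNode = "end";              // end of the process
            }
        }
    }

    // Step 3: persist the instance status when a wait state is reached
    static void persist(SimpleInstance pi) {
        if (pi.id == 0) {
            pi.id = nextId++;
        }
        DB.put(pi.id, pi.currentNode);
    }

    // Step 4: repopulate an instance from the stored snapshot, by id
    static SimpleInstance load(long id) {
        SimpleInstance pi = new SimpleInstance();
        pi.id = id;
        pi.currentNode = DB.get(id);
        return pi;
    }

    public static void main(String[] args) {
        SimpleInstance pi = new SimpleInstance(); // step 2: created in memory
        pi.signal();                              // step 3: run to wait state
        persist(pi);                              // snapshot goes to the "DB"
        long id = pi.id;                          // the object leaves memory

        SimpleInstance restored = load(id);       // step 4: restore by id
        restored.signal();                        // step 5: run to the end
        persist(restored);
        System.out.println(DB.get(id));           // prints "end"
    }
}
```

The real ProcessInstance carries much more state (tokens, context variables, and so on), but the shape of the interaction is the same: nothing touches the database between wait states.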


Take a look at the following image to follow the steps mentioned:
[Figure: our application, using the jBPM APIs, gets the deployed process definition, creates a process instance, and starts/signals it (first stage of execution: Sell Item → Wait For Payment); later it gets the process instance back and signals it again (second stage: Dispatch Item → Pay Buyer); every step that touches the database (getting the definition, getting the instance, persisting the instance status) is an HQL named query]

It's important to note that every time the process needs to wait, it must be stored
in the database for that time. Using the framework APIs, you will then make
stateless calls to the database to retrieve and update the process status.
As you can see, the BPM Engine concept doesn't fit here at all. You are not querying
a BPM engine to store or retrieve the process status. You are just
generating custom Hibernate queries to retrieve and store data in a stateless way.
Here the word stateless is used to emphasize the way that the framework
communicates with the database. As you can see in the previous example,
the framework will generate a Hibernate query depending on the status
of the process. This will generate a request/response interaction with the
database, which is in contrast with the idea of a channel that is open all
the time where data flows when it's generated.


Process and database perspective

In this section we will see the execution from the process and database perspectives.
We need to understand that the database will hold snapshots of our processes'
executions. With these snapshots we can populate a full ProcessInstance Java
object and make it run again.
If we also include the process definition in the equation, we get
something like this:
[Figure: the jPDL XML artifact is parsed (parse()) into a ProcessDefinition Java object]

As we can see in the image, our process definition, represented in jPDL XML
syntax, is parsed and translated into a Java object. Because these objects are
Hibernate entities, they can be persisted in a relational database to store each
object's status.
[Figure: each class (ProcessDefinition, Node, and so on) has a Hibernate mapping (HBM) XML file that maps it to a database table, so object status can be persisted to the database]


Something similar happens with the process execution. Our executions will be
represented as Java objects and as row entries in a relational database structure.
But in contrast with the process definition, our executions will not be represented
in jPDL/XML format.
[Figure: the jPDL definition is parsed into a ProcessDefinition object, which Hibernate persists to the database using the process definition's Hibernate mapping (hbm.xml) file]
In the execution stage of our processes we have another kind of interaction. This is
because the process instance will be updated each time our process reaches a wait
state. We will see the following type of interaction when our processes are executed:
[Figure: (1) deploy/persist the process definition; (2) create/persist the process instance; (3) update the process instance status when execution reaches a wait state; (4) end the process instance; each step goes through Hibernate to the database]


The deploy/persist process definition step, in general, is done
just once. Then you can use that deployed definition to create unlimited
numbers of process instances. The only situation in which you will persist
the process definition again is when you need to use an updated
version of your process.

With this kind of interaction we can take advantage of database transactions to
commit the state of the process only if all the activities are successfully completed
up to the wait state that triggers the persistence.
If something goes wrong, the database transaction is rolled back and the changes
are never persisted. You need to be aware that this is what will happen if
something goes wrong.
In the Configuring transactions section, we will go in depth into this database
transaction topic and we will analyze the pros and cons of this type of interaction.
From the process perspective, the process lives in the server's memory until
it reaches a wait state. Basically, the process uses CPU cycles while it is
executing automatic activities, and when it reaches a wait state it goes directly
to the relational database. This approach gives us a performance boost, because
each process is in memory only for short periods of time, allowing us to have
multiple processes running without CPU bottlenecks.
The following image shows how the process goes out of memory, and how
the interaction with the database populates the object stored in a previous wait state:
[Figure: (1) get or create the process instance and signal it; (2) update the process instance status when the wait state is reached; (3) using the process ID, get the process instance and signal it to continue; all interactions go through Hibernate to the database]


In this case, the third step can be performed by another, decoupled thread, which
restores the process status using the process ID and then signals it to continue the
execution.

Different tasks, different sessions

The jBPM APIs are structured around the different functionalities they provide. In
this section we will see how the framework APIs use different sessions to achieve
different activities. The idea behind dividing the APIs is that different tasks have
different requirements, and if the framework handled all those requirements in a
generic way (with a single API), users would be confused.
If you take a look at the JbpmContext class inside the framework APIs, you will
notice that the following sessions are used to fulfill different types of requirements:
•	TaskMgmtSession: This session is focused on giving us the most generic
methods to handle human tasks in our processes. You will use this session
when you need to query the task instances created by your processes'
execution. If you take a look at the TaskMgmtSession you will find a lot of
methods that handle and execute Hibernate queries against the relational
database to retrieve TaskInstance objects using some kind of filter. Some
examples of these methods are:
°	findTaskInstances(String): This method retrieves all the task instances for the specified actorId without filtering by process. This method is useful when you need to create a task list for a specific user who is involved in more than one process.
°	findTaskInstancesByToken(long): This method retrieves all the task instances that were created in relation to a specific token. This is very helpful when you want to filter tasks within the scope of a process.
°	getTaskInstance(long): This method will retrieve a task instance with the specified ID.
°	Many more methods; check out the class to know them all.
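A minimal sketch of the kind of filtering these methods perform; TaskRow and the in-memory STORE are invented for the example, and only the two method names mirror the real TaskMgmtSession, which issues Hibernate queries instead of iterating a list:

```java
import java.util.ArrayList;
import java.util.List;

// Self-contained illustration of actor-based and token-based task filtering.
public class TaskFilters {

    static class TaskRow {
        long id;
        String actorId;
        long tokenId;

        TaskRow(long id, String actorId, long tokenId) {
            this.id = id;
            this.actorId = actorId;
            this.tokenId = tokenId;
        }
    }

    static final List<TaskRow> STORE = new ArrayList<>();

    // Mirrors findTaskInstances(String): one actor's tasks across all processes
    static List<TaskRow> findTaskInstances(String actorId) {
        List<TaskRow> result = new ArrayList<>();
        for (TaskRow t : STORE) {
            if (t.actorId.equals(actorId)) {
                result.add(t);
            }
        }
        return result;
    }

    // Mirrors findTaskInstancesByToken(long): tasks scoped to a single token
    static List<TaskRow> findTaskInstancesByToken(long tokenId) {
        List<TaskRow> result = new ArrayList<>();
        for (TaskRow t : STORE) {
            if (t.tokenId == tokenId) {
                result.add(t);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        STORE.add(new TaskRow(1, "john", 10));
        STORE.add(new TaskRow(2, "mary", 10));
        STORE.add(new TaskRow(3, "john", 20));
        System.out.println(findTaskInstances("john").size());    // prints 2
        System.out.println(findTaskInstancesByToken(10).size()); // prints 2
    }
}
```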


•	GraphSession: This session will be in charge of storing and retrieving the
information of process definitions and process instances. You will use this
session when you need to deploy a new process definition or to get an
already deployed process definition. You can also query and create new
process instances with this session. Some of the methods that you can
find in this class are:
°	deployProcessDefinition(ProcessDefinition): This method will be used when you get a process definition parsed from a jPDL XML file and you want to store that definition inside the relational database. In jBPM, deploying a process means just that: storing the process definition modeled in jPDL XML syntax in a relational way. This transformation between the XML syntax and the object world is achieved by parsing the XML file and populating objects that are finally persisted using Hibernate.
°	findLatestProcessDefinition(String): This method will let you find an already deployed process definition, filtering by the process name. As the name specifies, it will also filter by the version number of the process definition, returning only the process with the highest version number. You will use this method to obtain an already deployed process definition in order to instantiate a new process instance with the returned definition. Once you get the process definition deployed, you don't need the jPDL XML file anymore.
°	findProcessInstances(long): Using this method you will be able to find all the process instances created for a specific process definition, filtered by the process definition ID.
°	Take a look at the other methods in this class; you will probably use them to administer and instantiate your different processes.

Every time we deploy a process definition to the jBPM database schema,
the APIs automatically increment the version number of the deployed
definition. This procedure checks the name of the deployed process:
if there is another process with the same name, the deployment
procedure increases the version number of the process definition and
deploys it to the jBPM database schema.
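The versioning rule in the note above can be sketched like this; the Map stands in for the jBPM database schema, and deploy() is an illustrative method, not the real deployment code:

```java
import java.util.HashMap;
import java.util.Map;

// Deploying a process whose name already exists gets the highest stored
// version number plus one; a brand-new name starts at version 1.
public class DeploymentVersioning {

    // process name -> highest deployed version number
    static final Map<String, Integer> VERSIONS = new HashMap<>();

    static int deploy(String processName) {
        int next = VERSIONS.getOrDefault(processName, 0) + 1;
        VERSIONS.put(processName, next);
        return next;
    }

    public static void main(String[] args) {
        System.out.println(deploy("PhoneLine")); // prints 1
        System.out.println(deploy("PhoneLine")); // prints 2: same name, new version
    }
}
```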


•	LoggingSession: This session is in charge of letting you retrieve and handle
all the process logs generated when your process is executed. In this
class, you will find the following important methods:
°	findLogsByProcessInstance(ProcessInstance): This method will let you retrieve all the process logs generated by the process instance execution.
°	loadProcessLog(long): This method will let us get a specific process log using the process log ID.
•	JobSession: This session is related to jobs, and will let us handle our jobs in
jBPM. In jBPM terminology, the word job refers to asynchronous
automatic activities. So, for example, if you want to execute a method in an
external system, and you don't want to wait until the method is finished, you
can make an asynchronous call using some kind of messaging strategy like JMS or
the database messaging system provided by jBPM. We will address this topic
in the last chapter of this book.
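The fire-and-forget calling pattern that jobs enable can be illustrated with a plain thread pool; the real job executor relies on JMS or jBPM's database messaging rather than an ExecutorService, so this sketch only shows the shape of the call:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// The caller hands the job to another thread and continues immediately,
// just like an asynchronous jBPM job.
public class AsyncJobSketch {

    static boolean runJob() {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        AtomicBoolean done = new AtomicBoolean(false);

        // The "job": a call to an external system we don't want to wait for
        executor.submit(() -> done.set(true));

        // The process execution continues here without blocking on the job.
        // We only wait below so the demo can observe the job's completion.
        executor.shutdown();
        try {
            executor.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.get();
    }

    public static void main(String[] args) {
        System.out.println("job finished: " + runJob());
    }
}
```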

Configuring the persistence service

In this section we will discuss how to configure the persistence service in jBPM,
in order to understand how the configuration can change the runtime behavior of
our processes.
The main configuration files in jBPM are jbpm.cfg.xml and hibernate.cfg.xml.
These two files contain the core configuration for the jBPM context.
The first file (jbpm.cfg.xml) contains the configuration for all the services. If we
open it we will see something like this:
<jbpm-configuration>
  <jbpm-context>
    <service name="persistence"
             factory="org.jbpm.persistence.db.DbPersistenceServiceFactory" />
    <service name="tx" factory="org.jbpm.tx.TxServiceFactory" />
    <service name="message"
             factory="org.jbpm.msg.db.DbMessageServiceFactory" />
    <service name="scheduler"
             factory="org.jbpm.scheduler.db.DbSchedulerServiceFactory" />
    <service name="logging"
             factory="org.jbpm.logging.db.DbLoggingServiceFactory" />
    <service name="authentication"
             factory="org.jbpm.security.authentication.DefaultAuthenticationServiceFactory" />
  </jbpm-context>
  <!-- remaining default configuration omitted -->
</jbpm-configuration>

As we can see, this configuration includes the persistence service, which in this
case uses the DbPersistenceServiceFactory to create and configure the
persistence service.
This factory will use the resource.hibernate.cfg.xml property to locate the
hibernate.cfg.xml file. This property is also specified in the jbpm.cfg.xml file
with the following line:

<string name="resource.hibernate.cfg.xml" value="hibernate.cfg.xml" />
This file contains two main sections of configuration. The first one configures the
data source where the framework will store all the information about our processes.
We can choose to configure a direct JDBC connection or an already
existing data source. Take a look at the following XML configuration in
the hibernate.cfg.xml file:


<hibernate-configuration>
  <session-factory>
    <property name="hibernate.dialect">
      org.hibernate.dialect.HSQLDialect
    </property>
    <property name="hibernate.connection.driver_class">
      org.hsqldb.jdbcDriver
    </property>
    <property name="hibernate.connection.url">
      jdbc:hsqldb:mem:jbpm
    </property>
    <property name="hibernate.connection.username">sa</property>
    <property name="hibernate.connection.password"></property>
  </session-factory>
</hibernate-configuration>
As you can see, these properties define the database that jBPM will use to store
all our process data.


Let's see the meaning of each property in order to fully understand what we are
configuring here:
•	hibernate.dialect: One of the most important properties in this section.
This property will set the Hibernate dialect for each particular database
vendor. This dialect defines each vendor's specific way to create
the queries and the schemas in the database. In the previous code snippet,
this dialect is set to HSQLDialect. This is the dialect for the embedded
Hypersonic database, a commonly-used database in development
scenarios, because it does not require installation and can be created to work
in memory. It is a very quick and easy way to have a test relational
database for testing our applications. In the real world you will probably
choose proprietary vendors like Oracle or DB2, or some license-free vendors
like PostgreSQL or MySQL. In any of these cases you must change the
Hibernate dialect property to match your selected vendor. To know which
values this property can take, please refer to the Hibernate documentation,
where you can also find all the supported database vendors.
•	hibernate.connection.driver_class: This property will contain the
fully qualified name of the class (package name plus class name) that is
a JDBC driver for your specified vendor. This class needs to implement the
java.sql.Driver interface.
•	hibernate.connection.url: This property will let you set up where your
database is located. In this case, the URL specified is jdbc:hsqldb:mem:jbpm.
This URL tells us that Hypersonic is using an in-memory database to run.
If you take a look at the Hypersonic documentation you will see that you
can also configure it to work in a file mode, where a file is created to store all
the database information. In a production scenario, you will see a different
URL depending on the vendor driver you are using.
•	hibernate.connection.username and hibernate.connection.password:
These two properties are self-explained by their names.

Those are all the properties that you need to modify if you are planning to use jBPM
with a direct JDBC connection. In case you want to use an existing data source, you
will need to comment out all the previous properties except the dialect one, and
uncomment the following section:

<property name="hibernate.connection.datasource">java:JbpmDS</property>

With this property we will use a specified, existing data source, which already
has a database configuration. This data source is registered in a JNDI tree, and
that is why we can reference it as java:JbpmDS. If we are working with JBoss
Application Server, we will probably choose this data source approach. In those
cases, we have sample data source files for the most common vendors in the
config directory of the framework binaries.
For example, if you choose to use MySQL, the data source file will look like this:



<datasources>
  <xa-datasource>
    <jndi-name>JbpmDS</jndi-name>
    <xa-datasource-class>
      com.mysql.jdbc.jdbc2.optional.MysqlXADataSource
    </xa-datasource-class>
    <xa-datasource-property name="ServerName">
      localhost
    </xa-datasource-property>
    <xa-datasource-property name="PortNumber">
      3306
    </xa-datasource-property>
    <xa-datasource-property name="DatabaseName">
      jbpmtest
    </xa-datasource-property>
    <xa-datasource-property name="User">
      jbpmtest
    </xa-datasource-property>
    <transaction-isolation>
      TRANSACTION_READ_COMMITTED
    </transaction-isolation>
    <exception-sorter-class-name>
      com.mysql.jdbc.integration.jboss.ExtendedMysqlExceptionSorter
    </exception-sorter-class-name>
    <valid-connection-checker-class-name>
      com.mysql.jdbc.integration.jboss.MysqlValidConnectionChecker
    </valid-connection-checker-class-name>
    <metadata>
      <type-mapping>mySQL</type-mapping>
    </metadata>
  </xa-datasource>
</datasources>

This file comes with the jBPM binaries and can be found in the config directory
with the name jbpm-mysql-ds.xml. One important thing to notice here is that this
data source is an XA data source, which means that it is prepared to work with
distributed transactions; XA data sources support two-phase commits using an XA
driver. In other words, if we have multiple databases and our processes insert,
modify, or remove data from all those databases, we probably want to do all those
insertions, modifications, or removals in a single transaction. To achieve this, we
need to use distributed transactions, which hold all the modifications until all the
drivers guarantee that the operations in the different databases or resources have
completed successfully.
If you want to learn more about transaction configuration, you can take a look at:
http://www.jboss.org/community/wiki/ConfigDataSources

If we are just using a single database resource, you can configure this data source
as a <local-tx-datasource>. Please take a look at the JBoss Application Server
documentation to see how to do this correctly.
This file, whose name must match *-ds.xml (in this case jbpm-mysql-ds.xml), must
be placed inside the server/<configuration>/deploy directory of the
application server installation.
This data source approach delegates the database pooling strategy to the application
server, which in turn takes responsibility for administering the database
connections for us.
With this approach we can also change the database vendor without modifying the
jBPM configuration files.

How is the framework configured at runtime?
All of these configuration files let us create two objects that you already know:
JbpmConfiguration and JbpmContext.

JbpmConfiguration is a class with a single purpose: it parses and maintains all
the information stored in the XML configuration files, making it available in the
object world.

JbpmContext is created using the createJbpmContext() method in the
JbpmConfiguration class, because the context is based on the current
configuration. This context will contain all the configured services available to be
used by our processes' executions. It will also decide how our process is persisted,
using the configured persistence service. In other words, the JbpmContext defines
how each interaction with the database, the messaging service, the logging service,
and so on will take place.

Configuring transactions

As we discussed before, the idea of transactions is very important here. We don't
want corrupt data stored in our database. Also, we need to be sure that the status
details of our processes stored in the database are faithful snapshots of our
processes' status.
In the world of Java, when we talk about transactions we see two ways to
manage them:
1. User Managed Transactions
2. Container Managed Transactions
Depending on our environment, we can choose either of the approaches mentioned.
But why do we need transactions here?
There are a lot of reasons. The first is that we need to store only correct
information in our database. Let's imagine the following situation:
[Figure: a server crash occurs between two automatic nodes that follow a human task]

Our process is running in memory, an actor finishes a human task, and this task
completion triggers a signal that takes the transition to the next node. What
happens, and what is supposed to happen, if the server shuts down in the middle?
When the server comes up again, where will the process point? It is very difficult
to find a generic solution for situations like these.
Imagine the following situation in the new phone line process:
[Figure: a server crash in the new phone line process, between the Fill Required Client Info, Generate New Account, and Generate Bill nodes]


In this kind of situation, we need a mechanism to define boundaries where
the activities are done or undone in an atomic way. That mechanism is called
transactions. Back to the beginning of the section, we have two ways to define
these boundaries:

User Managed Transactions (UMT)

If we are in a standalone application, or we don't want to take advantage of the
container (Application Server) policies to define transactions, we need to specify
where a transaction begins and where it ends. Basically, we will define a segment
of our process execution that must be completed without errors or must be undone
(rolled back). If you are familiar with transactions, you probably already know the
terms begin, commit, and rollback.
If we choose to use user-managed transactions, we need to tell the process execution
where to start and where to end the current transaction. If we take a look at the
jbpm.cfg.xml file, inside the jbpm-context tag, we already have our
transaction service configured:

<service name="tx" factory="org.jbpm.tx.TxServiceFactory" />
This configuration is for user-managed transactions. If we have that service
configured, every time we create a new JbpmContext object a transaction will begin,
and when we close the JbpmContext, the transaction will be committed. If there is
a problem between the creation and the closing of our context, the transaction will
be rolled back and the changes will not be reflected in the database.
If our server crashes, there is no way for corrupt data to be persisted in our
database. It is also important to note that the transaction can also contain information
about the message queue status, and any other transaction-aware resource. We will
discuss other services in the What changes if we decide to use CMT? section.
User-managed transactions translated to the jBPM APIs will look like the
following code:
JbpmConfiguration config = JbpmConfiguration
        .parseResource("config/jbpm.cfg.xml");
JbpmContext context = config.createJbpmContext();
ProcessInstance processInstance = context
        .newProcessInstance("MySimpleProcess");
assertNotNull("Definition should not be null", processInstance);
processInstance.signal();
context.close();


If there are errors, the transaction will never be committed. All the code
between the createJbpmContext() and context.close() methods needs to be
executed without any exception; otherwise, context.close() will discard all the
changes. All the changes in the process status are rolled back and never sent to the
database. As you can imagine, the changes are never reflected in the database if an
error occurs in the process execution. Remember, each node change is not reflected
until the process reaches a wait state, where all the current progress and the
modified information are persisted.
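The commit-on-close behavior can be imitated with a small sketch; FakeContext is a made-up class that only mimics the JbpmContext semantics described above (buffer the work, commit on close, discard on failure):

```java
import java.util.ArrayList;
import java.util.List;

// Nothing reaches the "database" until close(); a flagged failure between
// creation and close discards everything, like a rolled-back transaction.
public class TxSketch {

    // Fake "database" table of committed rows
    static final List<String> DATABASE = new ArrayList<>();

    static class FakeContext {
        private final List<String> pending = new ArrayList<>();
        private boolean failed = false;

        void save(String row) {
            pending.add(row);          // buffered only, nothing committed yet
        }

        void markRollbackOnly() {
            failed = true;             // an exception occurred in between
        }

        // close() commits the buffered work, or discards it all on failure
        void close() {
            if (!failed) {
                DATABASE.addAll(pending);
            }
            pending.clear();
        }
    }

    public static void main(String[] args) {
        FakeContext context = new FakeContext();
        context.save("process-instance-1");
        context.close();                     // commit: the row reaches the DB
        System.out.println(DATABASE.size()); // prints 1
    }
}
```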

What changes if we decide to use CMT?

The idea of delegating the transaction demarcation boundaries to a container-based
approach gives us one less concern to worry about: the container itself will decide
when a transaction should begin and when the transaction must be committed.
This approach is commonly used in EJB3 containers where, by default, each method
represents a separate transaction.
Every time a method is called, a new transaction begins (by default). When the
logic of the method ends, the transaction is committed. As a result, we don't need
to specify the transaction boundaries, letting us write fewer lines of non-business
code.
If our code is running inside a container we can take advantage of these capabilities;
we only need to change the framework configuration to use the container policies
to manage all the transaction concerns.
This chapter provides two example projects with two different configurations, which
show you how the code behaves in these two kinds of environments.
The idea of these examples is to show you how the configurations look, and also
to show you how the interactions occur in a standalone application versus an
Enterprise Application (Java EE). Depending on your own situation you will
need to choose one of these two configurations.

Some Hibernate configurations that can help you

First, we will take a look at the basic ones. In the hibernate.cfg.xml file you can
specify the following properties:
•	hibernate.show_sql (true/false): This option will activate SQL logging
in the console. We will see all the Hibernate-generated SQL queries.
•	hibernate.format_sql (true/false): This option will activate the SQL
formatting option in the logger.

•	hibernate.hbm2ddl.auto (update/create/create-drop): This option will
let us update, create, or drop and recreate the database schema every time we
start up a Hibernate session factory. These options are widely used during the
development stage.
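For reference, the exact property names Hibernate expects are hibernate.show_sql, hibernate.format_sql, and hibernate.hbm2ddl.auto; together in hibernate.cfg.xml they look like this (the values shown are typical development-time choices, not requirements):

```xml
<property name="hibernate.show_sql">true</property>
<property name="hibernate.format_sql">true</property>
<property name="hibernate.hbm2ddl.auto">update</property>
```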

Hibernate caching strategies

Based on the nature of each piece of information, you can choose whether to
apply a caching policy or not. The idea behind using a cache is to avoid querying
the relational database all the time. This will probably give a performance boost to
your application, but you need to be careful when deciding which pieces of
information are the best candidates for caching.
The pieces of information that don't change frequently are the best candidates for
this kind of policy. For example, process definition objects usually do not change
every day, so they can probably be cached.
If you think about process instances, the situation becomes more complex. You need
to analyze whether your cached process instances change often, because if they
change a lot, you will make bad use of the caching policies and performance can
suffer. For applying caching strategies with Hibernate, please review the official
documentation.

Two examples and two scenarios

This section is about the code examples provided with this chapter, available for
download at http://www.packtpub.com/files/code/5865_Code.zip (look for
Chapter 6). There are two simple projects: one standalone (/standAloneExample/)
and one EJB3 module (/EJB3Example/).
Both the standalone and the EJB3 projects have the same functionality. The main
idea here is to see how persistence can be configured for a standalone application
and also for an enterprise environment. The standalone application is configured to
work with a direct JDBC connection to a local database. It is also configured to use
the default database transaction strategies.
This project will have four main functionalities:
•	Deploy a process definition
•	List the deployed process definitions
•	Create a new process instance
•	List all the process instances for a specific process definition


This project uses Swing technology for its UI, just to show you that you can use any
UI you want (not only web interfaces).
Basically, the standalone project contains the UI and also the code to work in
standalone mode. We can configure the application to work with the code that uses
a direct JDBC connection, or to use an Enterprise Java Bean that executes the logic
inside a JBoss Application Server.
In the standAloneExample project, you need to find the Launcher class that starts
up the Swing application.

This application will let us use the features mentioned before (deploy, list
process definitions, create new process instances, and list these instances filtered
by process definition).
All the magic happens inside the Main.java class (in the /standAloneExample/
src/main/java/org/jbpm/examples/standAloneExample/ui directory). This class
is just a Swing JFrame that contains all the logic for the actions behind the buttons
and tables of this simple application.
As you can see, there is a tab called Configuration. In this tab, you select the
execution mode of the application. By default, the StandAlone mode of execution
is used.

To use the EJB3 mode, you will need to have the /EJB3Example/ project deployed
inside an application server.
Let's see how the application works in StandAlone mode. If you open the
main class that contains the Swing form and inspect the code, you will see that
the actions bound to the buttons in the UI call the jBPM APIs to interact
with our processes.
In the code we will analyze the following section:
JbpmContext context = null;
///////////////////////////////////////////////////////
// First Action: Deploy process definition
///////////////////////////////////////////////////////
// If we are working in StandAlone mode
if (jRadioButton1.isSelected()) {
    try {
        context = conf.createJbpmContext();
        ProcessDefinition processDefinition = ProcessDefinition
                .parseXmlResource(jTextField1.getText());
        context.deployProcessDefinition(processDefinition);
    } catch (Exception ex) {
        ex.printStackTrace();
    } finally {
        context.close();
    }
// If we are working with the EJB3 module
} else if (jRadioButton2.isSelected()) {
    InputStream is = Main.class
            .getResourceAsStream("/" + jTextField1.getText());
    BufferedReader reader = new BufferedReader(
            new InputStreamReader(is));
    StringBuilder sb = new StringBuilder();
    String line = null;
    try {
        while ((line = reader.readLine()) != null) {
            sb.append(line + "\n");
        }
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        try {
            is.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    processSession.deployProcessDefinition(sb.toString());
    jTextPane1.setText(jTextPane1.getText() + "\n "
            + new Date(System.currentTimeMillis())
            + " Process Deployed successfully!");
    jButton3ActionPerformed(evt);
}

The first block of code, the one that starts after the comment If we are working
in StandAlone mode, represents the code used to work in the standalone mode. This
code only uses the normal jBPM APIs to deploy a process. To execute these lines, a
JbpmContext must be created. This context is created using the jbpm.cfg.xml
file and the hibernate.cfg.xml file provided in that project. The important thing
to know about these files is that they are both configured to work in standalone
mode, using a direct JDBC connection to the database. Remember to modify these
files to reflect your database installation parameters (such as host, username, and
password). Another thing that's important to note is the demarcation of transaction
boundaries using:
try {
    context = conf.createJbpmContext();
    // ... interact with the process here ...
} finally {
    context.close();
}

We need to do this so that the framework knows when, and with which services, to
reflect the process status in the relational database.
Every time you invoke a method here, you won't see any generated SQL until the
context.close() method is reached. This is where the magic occurs: the process
status is merged with the status already stored in the database.
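This flush-on-close idea can be pictured with a plain Java sketch. Note that this is only an analogy for illustration: BufferingContext is an invented name, not a jBPM class. Changes accumulate in memory and are only written to the "database" when close() runs, just as the SQL is only generated when context.close() is reached.

```java
import java.util.ArrayList;
import java.util.List;

// Toy analogy of JbpmContext's flush-on-close behavior: nothing reaches
// the "database" until close() is called, where the pending work is merged.
class BufferingContext implements AutoCloseable {
    private final List<String> pending = new ArrayList<>();
    private final List<String> database; // stands in for the relational DB

    BufferingContext(List<String> database) {
        this.database = database;
    }

    void save(String processState) {
        pending.add(processState); // no "SQL" is issued here
    }

    @Override
    public void close() {
        database.addAll(pending); // the merge with the already stored status
        pending.clear();
    }
}
```

Invoking save() repeatedly leaves the database untouched; only the close() call inside the finally block publishes the accumulated state, which is why the try/finally demarcation matters.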

The idea of these projects is that you can debug them and see what is going on in
each step of your program. Also if you have the framework sources available in your
IDE, you can step into the framework code and see what is happening inside it.

Running the example in EJB3 mode

We can also use the EJB3 execution mode in the example Swing application. This
mode uses an Enterprise Java Bean (EJB3) module, which contains a Stateless
Session Bean with our business logic. This component covers the same functionality
as the standalone application. This project demonstrates how we can use jBPM in
different environments: in this case we encapsulate the logic inside the Stateless
Session Bean hosted in the application server, removing the jBPM framework
dependency from our client application. If you take a look at the second block of
code after the comment:
//If we are working with the EJB3 module

we will see that no jBPM APIs are used there. The code just reads the file
containing the process definition that needs to be deployed and stores this
definition in a simple String.
Then we call our EJB Stateless Session Bean to execute the logic on the server side.
processSession.deployProcessDefinition(sb.toString());

This line requires us to have a JBoss Application Server instance up and running
with the EJB3Example project deployed inside it.
In the rest of this section we will see all the details needed to run the
provided code in a Java EE environment.
If you take a look at the jbpm.cfg.xml file, you will see that the persistence
service is now set to use a JTA approach.


Also, the hibernate.cfg.xml file is configured to use an already deployed data
source. This is achieved using JNDI to look up the data source by name in the
JNDI tree of the application server:

<property name="hibernate.connection.datasource">java:JbpmDS</property>

[ 181 ]

This material is copyright and is licensed for the sole use by ALESSANDRO CAROLLO on 18th December 2009
6393 south jamaica court, , englewood, , 80111

Persistence

This means that alongside our deployed application we need to deploy a data source
that has the same JNDI name specified in the hibernate.cfg.xml file.
In this case we are using MySQL, so we need to create a file called mysql-ds.xml
with the following content:


<datasources>
  <xa-datasource>
    <jndi-name>JbpmDS</jndi-name>
    <xa-datasource-property name="URL">
      jdbc:mysql://[your host]:3306/[your database]
    </xa-datasource-property>
    <xa-datasource-class>
      com.mysql.jdbc.jdbc2.optional.MysqlXADataSource
    </xa-datasource-class>
    <user-name>[your user]</user-name>
    <password>[your password]</password>
    <track-connection-by-tx>true</track-connection-by-tx>
    <min-pool-size>1</min-pool-size>
    <max-pool-size>10</max-pool-size>
    <idle-timeout-minutes>10</idle-timeout-minutes>
    <metadata>
      <type-mapping>mySQL</type-mapping>
    </metadata>
  </xa-datasource>
</datasources>




This data source needs to be deployed alongside our EJB3Example application,
inside the JBoss Application Server's /deploy/ directory. It's also important to
remember that we need to put the JDBC driver that will be used by the data source
inside the /lib/ directory of the application server instance.
The code provided for this chapter also includes a project called
CommonInterfacesExample, which contains the common code used by the
standalone application and by the EJB3 module. This project needs to be compiled,
and the resulting JAR file must be copied into the /lib/ directory of the
application server instance.
With all these configurations in place, you can select the EJB3 execution mode in
the Swing application and start using the remote services provided by the EJB3
module.
For more information about how to configure persistence and all the services for
enterprise environments, take a look at:
http://docs.jboss.com/jbpm/v3.3/userguide/ch08.html

Summary

This chapter described various topics related to the persistence of our processes.
Persistence is a very important topic that you need to master in order to lead
successful implementations. So take a look at the examples, debug them, and feel
free to play with them.
The idea here is to understand that jBPM is not a black box that you cannot
control. So, go ahead and step into the framework.
In the next chapter we will see how human actors become involved with
our business processes and how persistence plays an important role in
human interactions.

Human Tasks
In real business scenarios, human interaction is extremely important. If you take a
look at your own company, you will see that all the employees around you interact
with each other in order to make the company run on a day-to-day basis.
Up to this point, we have used the basic nodes from the jPDL language to represent
our business processes. However, if our business is heavily based on, or includes,
situations where human interaction is needed, the question is: how can we include
all the interaction that people have in our processes every day? In this chapter we
will learn all about human tasks (and not just automatic procedures), and how these
tasks let us represent the work that the company does daily. To accomplish this, we
will introduce a special node and discuss some extra features, so that you gain the
knowledge to comprehend and decide how to model and configure your
company-specific situations.
During this chapter the following topics will be covered:
• What is a task?
• Task management module
• Handling human tasks in jBPM
• Task node example
• Assigning humans to tasks
• Practical example

Introduction

As we have seen before, in jPDL we have a node called "State", which represents a
generic situation where the process waits. We call those situations wait states. We
also said that when a human needs to interact with an activity inside our process,
the process as a whole needs to wait. This waiting is needed because a human may
need long periods of time to complete a particular task.
We can say that a human task is a special kind of wait state because, besides
waiting, it also requires a real user who needs to complete the task, interact, and
manipulate the information in it.
Basically, we could use a node of type State to represent a human task activity
inside our process, but the task node adds new specifics and extended capabilities
to support the required human interaction.
For this reason, the human task node was created and is available in the jPDL
language. The task node allows us to represent human interaction in a more specific
way, giving us more features to represent these situations, which appear in real
scenarios, in a generic and comprehensible way.
This chapter will focus on analyzing all the aspects that need to be covered in
order to represent a real situation that includes very complex human interactions.
In jBPM there is a set of functionalities related to this topic, called the task
management module, so we, as developers, need to know how to implement real
scenarios that include human tasks using all of this module's features. You can
take a look at Chapter 2, jBPM for Developers, where this module is introduced.

What is a task?

In the jBPM language a task is always an activity that requires human interaction
to be completed. Every time you see the word task in the jBPM jPDL language, you
need to understand that we are talking about one or more people who need to
interact with the process to complete one or more specific activities.
If we try to describe this kind of activity in generic terms, we can say that a
generic task is composed of the following elements:
• Input data
• Assigned user
• Action
• Output data
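These four elements can be sketched in plain Java. The class and method names below are invented for illustration and are not part of the jBPM API:

```java
import java.util.Map;
import java.util.function.Function;

// Toy model of a generic task: input data plus an assigned user; the user's
// chosen action transforms the input data into the output data.
class GenericTask {
    private final Map<String, String> inputData;
    private final String assignedUser;

    GenericTask(Map<String, String> inputData, String assignedUser) {
        this.inputData = inputData;
        this.assignedUser = assignedUser;
    }

    String getAssignedUser() {
        return assignedUser;
    }

    // Completing the task applies the selected action to the input data
    // and returns the resulting output data.
    Map<String, String> complete(
            Function<Map<String, String>, Map<String, String>> action) {
        return action.apply(inputData);
    }
}
```

In the article example that follows, the input data would be the article content, the assigned user the reviewer, and the action the publish/rewrite decision.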

(Figure: a generic task, where the assigned user takes the INPUT DATA, performs an action such as add/update/remove/review, and produces the OUTPUT DATA)

In most situations we will have some input data that is needed by the assigned
actor to begin working. This data can be anything the user requires to work on
the activity. We can think of this information as contextual data needed by the
activity in order to be completed.
Imagine a situation where a user needs to review an article to be published on a
site. The article content will be the task's input data, which the user will review
to decide whether the article information is correct or not. This decision will be
the action listed earlier: for example, if the article information is correct, the
user can choose to publish the article; if it is not, he/she can send the article
to be rewritten. The output data, in this case, can be the same article content,
possibly modified by the reviewer if some minor things needed to be changed. This
output data will continue flowing through the process if needed.
(Figure: the review task, where the Reviewer takes the ARTICLE as input, reviews it, produces the REVIEWED ARTICLE, and chooses between the Publish and Rewrite actions)

As you can guess, this kind of situation has some requirements in order to have
an application interact with a human being and vice versa. One of the most basic
features that we need is a User Interface (UI) that lets the human interact with
the software. It is also important to note that most of the time human activities
handle and modify information; here it's the input data discussed before, so a
mechanism to handle all that information will be needed. Last, but not least, is
the fact that human interactions are carried out by people within our company,
which brings us to the concept of assigned users. This implies that some user/role
assignment policies need to be defined and configured.
We can say that this task management module needs to fulfill the
following requirements:
• Well-defined UIs
• A mechanism to handle information
• User/role policies to assign each task to the corresponding role

Task management module

In jBPM a whole module was created to handle this new concept of tasks (remember
that in jBPM, tasks always mean human tasks).
You may ask: when do I need to know about this module? The answer is simple: you
must know about this module if your process includes human activities, because it
will give you a lot of features to make your life easier.
This module introduces a new node called task node, which extends the basic node
functionality, allowing us to represent generic situations where human interaction
is required in our processes.
It also introduces a mechanism to expose all the data that needs to be shown to the
end user. With that information we can create our custom UIs. This mechanism also
hides all the technical details and all the process information that is not related
to the activity requiring human interaction.
This module formally introduces a set of tools that allow us to easily manage all
the tasks created within our processes. Basically, it gives us a methodology to
handle all the interactions needed by business roles to complete the defined human
activities in a specific process.


With the inclusion of human interaction in our process definitions, we obtain the
following advantages:
• Represent processes that include human interactions: This is an obvious point, but now we can easily represent real business scenarios that include activities requiring human intervention. In some way, we are describing all the human activities in our business processes. This will help us to know exactly what information is handled by each user in our business activities. In other languages, like BPEL, the human activity concept doesn't have any representation, which is why we cannot represent situations where humans are involved in such languages.
• Analyze and improve the way humans achieve business goals: When we have a description of how information flows in our company's processes, we can start analyzing and optimizing all the information handled by each activity. These kinds of improvements will help the overall process performance, letting all the managers know how to present the information to each user involved in the company. Also, with this kind of analysis we can discover silent failures or situations where information is missing.
• Humans will be guided by the processes: The other clear advantage is that all new employees will be guided throughout their activities. This is because the responsibility of figuring out each possible situation that the process may face is taken away from them and has become the process' responsibility.

Handling human tasks in jBPM

We need to represent human activities in our business processes, but how can we
achieve this?
In jPDL the task node gives us all that we want. However, we need to know in
detail how to configure it in order to represent our particular situation.
We are going to configure the task node for various real situations, so you can
choose the correct configuration for your particular scenario.
As we described earlier, human tasks are wait states by nature. We first need to
understand when to use human tasks, that is, whether the situation in our process
really needs a human being or whether the activity could be done automatically.


Once the real situation is understood, we need to know how to represent it in
our models. In jPDL, the task node represents situations where we have one or
more activities that need human interaction before the process can continue its
execution to the next activity.

Inside this task node, we can express each particular task that needs to be
completed to continue the execution to the next node. Basically, a task node is
a container of task definitions that will be instantiated at runtime, when the
process execution arrives at this node.
If we drag a new task node into our process in GPD, we can analyze the properties
this node accepts.

In this case, two internal tasks are defined inside this task node. These two tasks
represent two real activities that require human interaction to be completed.
It's very important to notice that each task defined here can be assigned to a
different user, and there are no restrictions about that. So, in this case, the
manager can sign the materials bill and his/her secretary can call the provider.
Also, it is important to note that the definition order of these two tasks does not
demarcate the order in which the work needs to be done. These two tasks can be done
in parallel or in a non-predefined order. If we really need to express an order
between tasks, two or more task nodes need to be created.
Remember that whenever we need to demarcate a sequence of tasks
within our processes, multiple nodes need to be used. If we decide
to define multiple tasks inside one task node, we are just defining a set of
tasks that together complete just one activity in our process graph.
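To force a sequence between two tasks, as the note suggests, separate task nodes can be chained. A hedged jPDL sketch of this (the node, task, and transition names below are invented for illustration) could look like this:

```
<!-- Sketch only: names are invented; each task node is one ordered step -->
<task-node name="sign bill">
  <task name="sign materials bill"/>
  <transition to="call provider"/>
</task-node>

<task-node name="call provider">
  <task name="call provider"/>
  <transition to="next activity"/>
</task-node>
```

Here the "call provider" task cannot be created until the "sign bill" node has been completed and left, which is exactly the ordering guarantee a single task node does not give.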


If we take a look at the Source tab, we can see the generated jPDL source code.
As you can see, inside the <task-node> tags, different tasks (<task> tags) can
be defined.

(Figure: a TASK NODE containing multiple TASK definitions)
These two concepts are reflected in two Java classes: TaskNode and Task.

Task node and task behavior

In this section we will see how the task node and task concepts are implemented in
jBPM, along with all the technical details that let our task definitions run. This
guides our company's roles/users in their everyday work, because now the concept
of a task will represent each user's activity. It is also very important to know
how these definitions will behave at runtime.
The task concept, as a whole, introduces additional execution stages and behavior
on top of the basic node functionality. All this additional functionality was
created to handle, in a generic way, a lot of different situations where human
interactions are needed.
This extra functionality is oriented to represent and store all the specific
information that we will need in order to fulfill these human interactions.
It is important to mention that the concept of task defines the static information
about our activities. The task node, on the other hand, defines an extension of the
base node, where the specific runtime behavior for this kind of node is defined.
But wait a second, there is something missing! If the <task> tags are only the
static way to define our tasks, how can we represent the tasks that are
currently running?

To represent the tasks running inside our processes, we have the concept of task
instances. This concept tells us whether the tasks we defined in our process are
instantiated and ready to be worked on. In other words, these task instances
represent the units of work that will be selected by the users in order to complete
each activity. These task instances can be used to build each user's task list,
because they represent all the active tasks that the user has been assigned
to complete.

(Figure: task definitions and the task instances created from them at runtime)
Each of these task instances will be exposed to the corresponding business role.
The specified business role will interact with these task instances through a UI,
which will let them review, add, remove, and modify information to complete each
task.
Each of these task instances also adds four more events to the node life cycle,
which are fired when a task instance is created, assigned, started, and
completed respectively:
• EVENTTYPE_TASK_CREATE
• EVENTTYPE_TASK_ASSIGN
• EVENTTYPE_TASK_START
• EVENTTYPE_TASK_END

Like every other event, these four new events let us hook up automatic actions,
giving us extra flexibility to run custom code before and after each task
completion.
These task instances are created inside the execute() method of the TaskNode
class, based on all the task elements defined or referenced inside the
<task-node> tags.
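The firing order of these events can be sketched with a toy class. The names below are invented (this is not the jBPM implementation), but the event type strings mirror the four events just listed:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Toy task instance that fires the four lifecycle events to a hooked-up
// action, mimicking actions bound to the EVENTTYPE_TASK_* events.
class ToyTaskInstance {
    private final Consumer<String> hookedAction; // stands in for an ActionHandler
    private final List<String> fired = new ArrayList<>();

    ToyTaskInstance(Consumer<String> hookedAction) {
        this.hookedAction = hookedAction;
    }

    private void fire(String eventType) {
        fired.add(eventType);
        hookedAction.accept(eventType); // custom code runs on every event
    }

    void create()             { fire("task-create"); }
    void assign(String actor) { fire("task-assign"); }
    void start()              { fire("task-start"); }
    void end()                { fire("task-end"); }

    List<String> firedEvents() {
        return fired;
    }
}
```

Driving the instance through its life cycle fires the events in the fixed order create, assign, start, end, which is where hooked-up actions get their chance to run.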

Basically, to handle all the human interactions, we have three classes that
interact in both phases (the definition phase and the execution phase) to fulfill
the most common requirements.

TaskNode.java

As you can imagine, the TaskNode.java class represents the node that contains a
set of task definitions. For this reason, this class extends the Node class:

public class TaskNode extends Node

If you take a look at the properties defined inside this class, you will find:
• Set<Task> tasks: This property will store all the tasks defined inside this task node.
• int signal (initialized with SIGNAL_LAST): This property will define the behavior of the task node. More about this in the example.
• boolean createTasks (initialized with true): This property will define whether the node needs to create the task instances automatically when it is reached by the process execution, or if another procedure will be in charge of that creation. We will probably override this value when we need to dynamically create task instances.
• boolean endTasks (initialized with false): This property will define whether the node must end all the task instances created by this node before it leaves the activity.

Task.java

The Task.java class contains all the information related to each task inside a
task node. This class is also in charge of deciding how the actor assignment
should be calculated at runtime.
The following three properties are used for this purpose:
• protected String actorIdExpression: This property will take and resolve the actorId expression if it is defined in the task configuration.
• protected String pooledActorsExpression: This property will take and resolve the pooledActors expression if it is defined in the task configuration.
• protected Delegation assignmentDelegation: This property will take and use the delegation class to perform the assignment in the runtime stage.
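As a rough illustration of how these three properties map back to the process definition, jPDL lets a task carry an assignment element. The attribute spellings below reflect jPDL 3 as I recall them, and the actor and class names are invented:

```
<task-node name="review article">
  <!-- direct assignment: resolved through actorIdExpression -->
  <task name="review article">
    <assignment actor-id="reviewer1"/>
  </task>
  <!-- pooled assignment: resolved through pooledActorsExpression -->
  <task name="approve publication">
    <assignment pooled-actors="editor1,editor2"/>
  </task>
  <!-- custom assignment: resolved through assignmentDelegation -->
  <task name="notify author">
    <assignment class="com.example.MyAssignmentHandler"/>
  </task>
  <transition to="next node"/>
</task-node>
```

Each form feeds one of the three properties above, so the same Task class can cover direct, pooled, and fully programmatic assignment.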


TaskInstance.java

The TaskInstance.java class represents our currently running tasks. If you take
a look at this class' source, you will see the following interesting properties
defined:
protected String name = null;
protected String description = null;
protected String actorId = null;
protected Date create = null;
protected Date start = null;
protected Date end = null;
protected Date dueDate = null;
protected int priority = Task.PRIORITY_NORMAL;
protected boolean isCancelled = false;
protected boolean isSuspended = false;
protected boolean isOpen = true;

As you can see, these properties contain all the information about which actor is
currently assigned to this task and when the task was created, started, and ended.
It also contains a field representing the priority of this specific task instance,
and the last three boolean properties let us express the status of the
task instance.

Task node example

To get a clear view of how this task node works, we will use a real example where
a task node needs to be modeled and configured to fulfill real scenarios and
different situations that commonly occur in real life. The code for this example
can be found at http://www.packtpub.com/files/code/5685_Code.zip
(look for Chapter 7). In this section only the most significant code will be shown,
so I encourage you to take a look at the full project for complete comprehension.

Business scenario

Imagine that a bank implements a process to withdraw money or jewels from the
vault. This process starts when someone needs to take something out of his/her
account. When this happens, the account manager has to fill in a request, which
has to be approved by the bank manager. When the operation is accepted, the
customer is informed of the date when he/she can do the transaction. When the
withdrawal is made, all the data about the customer and his/her assets in the
vault must be updated. This process is represented in the following graph:

(Figure: the withdrawal process: Fill in Request (Account Manager), Request Approval (Bank Manager), Transaction (Customer), and Update Data)

The first and most important key point in this situation, in my opinion, is to
identify the business actors that will interact to fulfill the business goal. In
the graph these actors are already identified, but in real situations we need to
be sure about their roles.
It is important for us to know whether, in the business scenario, each role is
fulfilled by one person or whether the task can be done by a group of prepared
people. This kind of detail will affect the way we relate the task to the correct
business user.
If you find that a group of people can be assigned to each activity, you
will need to use the concept of pooled actors, which represents a group
of people who have the knowledge to complete one specific activity.

The second key point is to know the nature of each activity. For example, if we
have to wait to get our withdrawal approved by different managers and also by the
bank security chief, we need to find out how to model this particular situation in
order to reflect the real activities. As you can imagine, this kind of situation
can be modeled in a variety of ways.
Common sense dictates that we can create one task for each of the
authorizations, but if we do that, which authorization will be the first one?
Which will be the last one?


Take a look at the following image:
(Figure: a sequential version of the process, where Bank Manager Approval, Account Advisor Approval, and Security Chief Approval happen one after another between Fill in Request and Transaction/Update Data)

In this particular situation, the process needs to wait for each participant to
approve the withdrawal sequentially. Obviously, this is not the optimal way to do
it, because if the first role is very busy, or for some reason (such as vacation
or sickness) cannot fulfill the activity, the other activities cannot be started.
If you think about it, there is no reason for this sequence, because in real
scenarios the transaction approval can be done in any order (in this situation).
The important thing to note here is that three similar activities/tasks need to
be done by these three roles.


So, for this situation we can enclose these three activities inside a task node that
describes the real scenario.
(Figure: the three approval tasks, Manager Approval, Security Chief Approval, and Bank Manager Approval, enclosed in a single task node)

Here, these tasks will be created and can be completed in no particular order,
because the three tasks do not depend on each other and no particular order
is needed.
Another key point here is that the three tasks have similar behaviors. In this
situation, the same review-and-approve task needs to be done by the three roles.
In other words, the three tasks will display information to each user, and the
user interface will let them decide whether they approve the transaction required
by the customer or not.
For this kind of situation we can define three different tasks inside the task
node, and then define some policies to tell the process when it must continue to
the next node. This is because we can now define whether all the tasks need to be
completed, or just one, or whether the process can continue without waiting.
In cases like this example, where the task node contains multiple task definitions,
it is important to know what kind of policies will be set for execution
propagation. The default behavior is to wait until all the tasks created inside
the task node are completed, and then continue the execution. This behavior is
implicitly defined inside the <task-node> tags: having a plain <task-node> is the
same as having <task-node signal="last">.


As you can see, a new property called signal appears. This property cannot be
specified using the GPD plugin property panels and needs to be set by hand in the
Source tab.
Other values that this property can take are:
• first: This value will let us continue the process execution to the next node when the first task is completed. Remember that inside the task node no order is specified, so the "first" task is not necessarily the one defined first in the XML file. When any of the defined tasks is completed, the execution will continue to the next node.
• never: The process execution never continues; it will always wait for some external signal that tells the token to continue to the next node.
• unsynchronized: The execution will always continue; it doesn't matter if the tasks are unfinished. The execution will create the tasks defined inside the node and then continue to the next node.

It is also important to know what happens if we define a task node without tasks
inside it, and also how we can create two or more identical tasks with the same
definition inside the same node.
If we don't define any tasks inside the task node tags, the default behavior (with
the signal property equal to last) is that the task node will not behave as a wait
state and will continue the execution to the next node.
We can also have other behaviors when we don't define any task inside the task
node. In this case, if we want the process execution to wait in the task node, we
have two more values for the signal property:
• last-wait: If no tasks are created, the execution will wait until tasks are created.
• first-wait: If no tasks are created, the execution will wait until tasks are created and the first task is completed.

If we don't use any of these flags when no tasks are defined inside the task node,
the task node will behave as an automatic node and continue the execution to the
next node in the process without waiting.
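The propagation rules described above, including the no-tasks cases, can be condensed into one hedged Java sketch. SignalPolicy is an invented helper name, not jBPM code; it only illustrates the documented semantics:

```java
// Toy model of the task node's signal policies: given how many of the
// node's tasks have been completed, decide whether execution may continue.
class SignalPolicy {
    static boolean canContinue(String signal, int totalTasks, int completedTasks) {
        switch (signal) {
            // default: wait for all tasks; with no tasks, behave as automatic node
            case "last":           return totalTasks == 0 || completedTasks == totalTasks;
            // continue as soon as any one task is done
            case "first":          return totalTasks == 0 || completedTasks >= 1;
            // always wait for an external signal
            case "never":          return false;
            // never wait, even with unfinished tasks
            case "unsynchronized": return true;
            // like last, but wait when no tasks exist yet
            case "last-wait":      return totalTasks > 0 && completedTasks == totalTasks;
            // like first, but wait when no tasks exist yet
            case "first-wait":     return totalTasks > 0 && completedTasks >= 1;
            default: throw new IllegalArgumentException("unknown signal: " + signal);
        }
    }
}
```

Comparing "last" with "last-wait" for an empty task node shows the difference between the automatic-node behavior and the forced wait state.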
In situations where we need more than one task created with the same task
definition, or where we need to create tasks based on runtime information, we
need to know how to create tasks programmatically.


Assigning humans to tasks

All the tasks in jPDL are related to real human beings. For this reason, a
relationship with our business users needs to be maintained. As we saw in
the previous example, business roles/users are related to each task defined in the
process definition. We will discuss how this works and all the features related to
these assignments.
jBPM has a very flexible way to maintain this relationship. If you look at the
TaskInstance class, where all the information about each particular task is stored,
you will see that it has two simple properties to maintain these relationships:
protected String actorId = null;
protected Set pooledActors = null;

Based on the values of these two properties, each task will be related either to one
actor who must complete the activity, or to a set of possible actors who can
voluntarily take the task to work on it.
The relationship between tasks and actors is described in the following image:
[Figure: one TaskInstance holds a String actorId and a Set of pooledActors.]

Basically, we say that the task is assigned to an actor if the actorId String is not
null. When the actorId has a non-null value, the task instance can only be worked
on by that actor.
If the actorId String is null and pooledActors has a non-null value, the task can
be taken by any of the users listed in the set of pooledActors. When the task is
claimed, the actorId property is filled in and the task can no longer be claimed by
the other actors inside the pooledActors property.


Human Tasks

All these claimed tasks can return to the pool if the actor who took the task decides
that he/she cannot complete it at the moment. The task returns to the pool when the
actorId property becomes null again. When this happens, all the actors in the set of
pooled actors can see the task again and are able to take it.
There are several ways of doing this kind of assignment; we will briefly discuss how
to do it in real scenarios. It's important to know that the next three sections will work
with the jBPM Identity module.
The jBPM Identity module provides a simple structure to support scenarios where
our users are stored in the database. This model lets us store users, groups, and the
memberships that link users to different groups. Keep in mind that this module is
only for simple scenarios; in most cases it is replaced by a directory service (for more
information, refer to http://en.wikipedia.org/wiki/Directory_service).

Expression assignments

If we decide to use direct assignments, we will directly set the Task's
actorIdExpression and pooledActorsExpression properties. These two
properties are used at runtime to fill the actorId or pooledActors properties of the
TaskInstance instance. In the following figure, we can see how to set the actorId
value in the task property panel inside the task node.

The next figure shows that if we need to assign a set of pooled actors, we just need
to insert them separated by commas and select the Pooled Actors option from the
drop-down menu.
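In the underlying jPDL, these panel settings correspond to the task's assignment element. A sketch (the task and actor names here are made up for illustration):

```xml
<task name="check device temperature">
  <!-- a single, direct actor assignment -->
  <assignment actor-id="john" />
</task>

<task name="modify fan velocity">
  <!-- a comma-separated list of pooled actors -->
  <assignment pooled-actors="mary,steve,laura" />
</task>
```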


If you take a look at the drop-down menu, you will see that there is also an option
called Expression. This option lets us insert a JSF-like expression that will be
evaluated at runtime, and the result will be added to the corresponding field. For
more information, take a look at the class called org.jbpm.identity.
assignment.ExpressionAssignmentHandler, which contains the current evaluator
for this type of expression. Just so you know, you can build expressions like the
one shown in the following figure:

As you can deduce, these expressions will be parsed and resolved by the
ExpressionAssignmentHandler class, which contains logic specific to the
Identity module provided by jBPM. In other words, if you replace the Identity
module with your company's directory service, you will probably need to provide a
different implementation of ExpressionAssignmentHandler.
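As a reference, the expression language accepted by ExpressionAssignmentHandler chains terms with "-->". The following is a sketch based on the jBPM 3 user guide's syntax; the group and role names in the example are made up:

```
first-term --> next-term --> ... --> last-term

where a term can be one of:
  previous | swimlane(name) | variable(name) | user(name) | group(name)
and a next-term can also navigate the identity model:
  group(group-type) | member(role-name)

example: previous --> group(hierarchy) --> member(boss)
```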

Delegated assignments

In the case of delegations, we just provide a class that decides, at runtime, which
actor will be in charge of each specific TaskInstance.
Here we fill the assignmentDelegation property of the Task class. This property
will contain a class that implements the AssignmentHandler interface and knows
how to assign users to each TaskInstance.
If you take a look at the AssignmentHandler interface, you will find that it is a very
simple interface that contains just one method:
public interface AssignmentHandler extends Serializable {
    void assign(Assignable assignable, ExecutionContext executionContext)
        throws Exception;
}


We need to provide the delegation property with the fully-qualified name of the
class that implements this interface, in order to be able to do an automatic
assignment at runtime.
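In jPDL this is done with the class attribute of the assignment element. A sketch (the node name and the package name are hypothetical):

```xml
<task-node name="review bill">
  <task name="review bill">
    <!-- fully-qualified name of our AssignmentHandler implementation -->
    <assignment class="org.example.MyAssignmentHandler" />
  </task>
  <transition to="next step" />
</task-node>
```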

In this case, the MyAssignmentHandler implementation is just a simple class that
implements the AssignmentHandler interface and decides which actor will be
assigned for each particular TaskInstance.
public class MyAssignmentHandler implements AssignmentHandler {
    public void assign(Assignable assignable,
                       ExecutionContext executionContext) throws Exception {
        // Based on some policy, decide the actor that needs to be
        // assigned to this task instance
        assignable.setActorId("some actor id");
    }
}

Just as a last detail, it's important to know that the TaskInstance class
implements the Assignable interface, which contains just two methods:
public interface Assignable extends Serializable {
    public void setActorId(String actorId);
    public void setPooledActors(String... pooledActors);
}

Managing our tasks

The most common way to see and organize these task instances is to use a task list
for each user involved in the process.
These task lists can be generated/populated using the task management APIs
provided by the task management module. These APIs let us query information
about all the task instances created in our processes, with the possibility of filtering
the retrieved information with different conditions. For example, we can query all
the tasks that a user is assigned to, and use that information to populate the
user's task list.
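In jBPM 3 these queries are usually issued through JbpmContext — for example, jbpmContext.getTaskList(actorId) for the personal list and jbpmContext.getGroupTaskList(actorIds) for the pooled list. The partitioning idea behind those two lists can be sketched with plain Java stand-ins (TaskEntry below is not a jBPM class):

```java
import java.util.ArrayList;
import java.util.List;

public class TaskLists {
    // Minimal stand-in for a TaskInstance: an assigned actor (null while the
    // task is still in the pool) and the pooled actors who may claim it.
    public static class TaskEntry {
        public final String name;
        public final String actorId;
        public final List<String> pooledActors;
        public TaskEntry(String name, String actorId, List<String> pooledActors) {
            this.name = name;
            this.actorId = actorId;
            this.pooledActors = pooledActors;
        }
    }

    // Personal task list: tasks directly assigned to the user.
    public static List<TaskEntry> personalList(List<TaskEntry> all, String user) {
        List<TaskEntry> result = new ArrayList<>();
        for (TaskEntry t : all) {
            if (user.equals(t.actorId)) {
                result.add(t);
            }
        }
        return result;
    }

    // Pooled task list: unassigned tasks where the user is a possible assignee.
    public static List<TaskEntry> pooledList(List<TaskEntry> all, String user) {
        List<TaskEntry> result = new ArrayList<>();
        for (TaskEntry t : all) {
            if (t.actorId == null && t.pooledActors.contains(user)) {
                result.add(t);
            }
        }
        return result;
    }
}
```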

In this section we will create some basic user interfaces to see how the task
management APIs are used, by implementing a real-life scenario that shows
how the task list behaves with multiple users.

Real-life scenario

Here, we will discuss a simple scenario in which some business roles interact with
their corresponding human activities.
A simple process with two task nodes and an automatic decision node will be used
to demonstrate how these tasks need to be handled in order to seamlessly achieve
business goals.
This simple process is described in the following image:

[Figure: process diagram — the Device Checker performs a Check Device
Temperature task; an OK? decision follows, where YES ends the process and NO
creates a Modify Fan Velocity task for the Cooling Expert.]

If we model this process in jPDL syntax we will get something like this:
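The listing was lost in extraction; it can be approximated as follows. The node names come from the diagram, while the decision expression, variable names, and the jPDL namespace/version are assumptions:

```xml
<process-definition xmlns="urn:jbpm.org:jpdl-3.2" name="FanControl">
  <start-state name="start">
    <transition to="check device temperature" />
  </start-state>

  <task-node name="check device temperature">
    <task name="check device temperature">
      <assignment pooled-actors="deviceChecker" />
    </task>
    <transition to="ok?" />
  </task-node>

  <!-- automatic decision: the expression is a sketch of the threshold check -->
  <decision name="ok?"
            expression="#{temperature + forecast &lt; threshold ? 'YES' : 'NO'}">
    <transition name="YES" to="end" />
    <transition name="NO" to="modify fan velocity" />
  </decision>

  <task-node name="modify fan velocity">
    <task name="modify fan velocity">
      <assignment pooled-actors="coolingExpert" />
    </task>
    <transition to="end" />
  </task-node>

  <end-state name="end" />
</process-definition>
```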
























This simple process stores some data about the current status of an electronic
device that needs to be cooled by a fan. When the temperature of the device goes
over a defined threshold, a process instance is created to control the situation.
The first task that appears in the process (Check device temperature) only checks the
current temperature and adds a value inside a process variable, which represents a
forecast prediction based on the weather of the day. If this forecast prediction plus
the current temperature is over the threshold, the process creates a new task for the
fan technician, who needs to correct the velocity of the fan manually to cool
the device.
In this process, the following two business roles interact:
• The person who checks the temperature (called deviceChecker in the process definition): This user role needs to be near the device and will always be responsible for checking the device status and adding the forecast prediction for the next few hours.
• The cooling expert (called coolingExpert in the process definition): This user role is miles away from the device and has the knowledge to modify the fan velocity to keep the device temperature under a defined threshold.

Here we have only one automatic node: the decision node, which automatically
checks whether the temperature plus the forecast prediction added by the reviewer
is over the threshold and, based on that, decides which path of the process to take.
Now that we have a real-life example and our process is modeled, we can start
analyzing how our users will interact with these tasks.


Users and tasks interaction model

As you can imagine, each task needs to be represented graphically for the end user
to interact with it. In this section we will describe the common model used to
organize and show these tasks to each end user involved in our processes. This
section has a tight relationship with how we assign tasks to our users, so the screens
discussed here will be for generic situations, where actor IDs and pooled actors can
be used.
In normal situations, all the tasks are presented to each user in the form of a task list.
This task list includes all the tasks created for the task list owner. In other words,
each user's task list will only display tasks that have the currently logged-in/selected
user assigned.
All the tasks listed inside a task list will be running tasks, so basically each
entry in the task list will be an instance of the TaskInstance class. Each of these
TaskInstance instances will need to be represented graphically in order for the
user to be able to interact with it.
Basically, we are organizing all the UIs for each TaskInstance.
[Figure: each TaskInstance has an assigned user and a corresponding
user interface.]

The following image has the sketch of how these generic task lists can be placed in
the user screen:


The following image demonstrates a particular task form:
[Figure: a task form showing the task name, several labeled fields, and SAVE and
COMPLETE buttons.]

You can see in these two images that these screens handle all the information for one
business role in a generic way. Let's analyze each of them a little.
The first screen shows two task lists. The one labeled MyTasks List / "username"
displays all the created tasks that have the username in the actorId field; in other
words, all the tasks that are currently assigned to the logged-in/selected user. As we
discussed in the Assigning humans to tasks section, in most situations we need a user
for each business role that interacts with the process.
The second task list, labeled Pooled Tasks List, contains all the pooled tasks that
have been created and have the currently logged-in user as a possible assignee.
These tasks need to be taken before anyone can work on them: in this second task
list we need to "claim" one of the pooled tasks, and it will automatically be assigned
to us and moved to our task list. When we take one of these pooled tasks, it becomes
our responsibility. A pooled task is displayed in the pooled task list of every user
whose name appears in the pooledActors field of the task.
In the second screen, we can see all the information of each task. Most of these
screens are forms where we can review, enter, modify, or remove information. They
will also contain some buttons that let us notify the process about the status of
each task.
Some common buttons used in most cases are:
• Finish/Complete: This informs the process that the particular task is over.
• Save: This saves the current information inside the task; however, the task remains unfinished.


Practical example

In this section we will analyze the code that comes with this chapter, in order to see
how the theoretical issues discussed in this chapter are applied to the process
example proposed before.
The proposed example is implemented as a web application that you can run
inside a servlet container such as Tomcat or Jetty.
This project is also created and built with Maven, so if you are not familiar with
Maven, take a look at the Maven introduction section in Chapter 3, Setting Up
Our Tools.
This project takes the proposed process definition and simulates that the device is
getting hotter, using a random value that the role/user called Device Checker will
need to check.
So, if you open and build the web application created with Maven, you will find four
different screens. The home page of the application contains two links that simply
redirect us to the Administrator Screen and the User Screen.
As you can imagine, the Administrator Screen only contains a few actions to
configure the environment so that it is ready for the process execution.
The User Screen, on the other hand, lets us interact with the human tasks that our
process creates. In real situations, the User Screen would take the logged-in user and
display only the data/tasks that this user can see. Here, the example implements a
drop-down list to select the user, without any security restrictions.
Let's take a look at the Administrator Screen functionality.

Setting up the environment (in the Administrator Screen)

As an administrator, you will need to set up some basic artifacts to be able to run the
defined process correctly.
It is important to follow the order of these actions, because there are some
dependencies between them. For example, we cannot create new users in our
database if we don't create the database structure first. So, you must follow the
order proposed in the user interface to configure the environment successfully.


As you can see in the Administrator Screen, the following actions are proposed:
1. Create DB schema: This action creates all the tables in the database
needed for our processes to work. The code for this action can be found
inside the AdminScreenController HttpServlet. If you take a look inside
the processRequest method of the HttpServlet, you will find the following
block of code:
if (action.equals("createSchema")) {
    conf = JbpmConfiguration.parseResource("jbpm.cfg.xml");
    conf.createSchema();
    request.setAttribute("message", "Schema Created!");
    RequestDispatcher rD = request
        .getRequestDispatcher("newProcessScreen.jsp");
    rD.forward(request, response);
}

It is important to look at the classes and the configuration files that interact in
order to create all the table definitions that will support our running processes.
As you can see, an important file here is the one called jbpm.cfg.xml. It contains
all the services available for our execution; for example, the persistence service,
which will be in charge of storing our processes in the database. If you open this
jbpm.cfg.xml file you will find an important line inside it:

<string name="resource.hibernate.cfg.xml" value="hibernate.cfg.xml" />
This line tells jBPM where to locate the file called hibernate.cfg.xml, which
contains all the information about how Hibernate will communicate with a
relational database in order to persist our processes. In this case, we obviously need
to know what kind of database we are using, which dialect this database speaks,
where it is located, and which user we connect with. So, if you open this
hibernate.cfg.xml file you will find something like this:


<property name="hibernate.dialect">
    org.hibernate.dialect.MySQL5InnoDBDialect
</property>
<property name="hibernate.connection.driver_class">
    com.mysql.jdbc.Driver
</property>
<property name="hibernate.connection.url">
    jdbc:mysql://[YOUR HOST]:3306/[YOUR DB NAME]
</property>


<property name="hibernate.connection.username">
    [YOUR USER]
</property>
<property name="hibernate.connection.password">
    [YOUR PASSWORD]
</property>
<property name="hibernate.query.substitutions">
    true 1, false 0
</property>













Chapter 9

As you can see, a converter is needed to transform the Integer object into a Long
object. This is because a single strategy is defined to store numbers, and that strategy
uses the VariableInstance subclass called LongInstance.
Another thing that you need to know is that the strategies defined in the
jbpm.varmapping.xml file are evaluated sequentially, in the order described in
the file. As a result, if you decide to implement your own strategy for your custom
type, you will need to be careful to put your strategy in the right place. A very
common mistake is to create your custom strategy and put it at the end: if you then
store an object that is of your custom type but is also Serializable, the
Serializable matcher will match first and your custom matcher will never
be executed.
Let's see a business example that shows us how variables are defined, which
configuration deals with those variables, and how our variables are stored
in the configured database.

Understanding the process information

We need to understand some important considerations and techniques to handle
information in our processes. These techniques will help you with the following topics:
• Analyzing and classifying each piece of information in order to know how to handle it
• Knowing some rules that are applied to the process variables when the process has nested paths
• Accessing each variable from the jPDL syntax to take advantage of dynamic evaluations at runtime
The following three short sections describe these topics, giving some examples
that you will find useful when you have to implement a real situation.


Handling Information

Types of information

When we are modeling a new business process, it is very important to understand
the nature of each piece of information that the process handles, in order to correctly
decide how to store and load this information when we need it. This information can
be split into three main groups:
• Information that we already have and will use only for queries: In our telephone company example, we are creating a new client account that doesn't exist in the company. In other situations, however, we will probably have this information already stored in a database. If we implement another business process in the telephone company, for example, a business process to offer promotions to our existing clients, the client information will already be stored and just used for queries. The key point here is to differentiate the information created by the process from the contextual information that already exists outside the process. In other words, you will need to formally separate the information that the process uses to control the process flow from all the contextual information that already exists in the company and needs to be retrieved from third-party systems.
• New information that is created by the activities in a process and is related to information that we already have: In our telephone company example, we can have all the information about the new bill created for a particular client. In the new phone lines process, the client information itself is created and stored by the process. This information needs to be maintained closely with the process information, because the process execution could create different information for different situations, and sometimes we will need to review the process execution to know why a specific piece of information was created.
• Temporary information, which may or may not be persisted, because it is only used to do calculations or to guide our process: In our telephone company example, the information in the approved or activate flags just controls the flow of the process and is not related to any business entity. This information could be transient or persisted for history analysis, but the key point is that it has a tight relationship with the process instance and not with the business entities used in the process.


For the first type of information, we only need to know how to get it, so a very
simple strategy can be used: store a business key (a fashionable way to say an ID
plus the context where this ID is valid, for example System ID + Table name +
Person ID). With this key we will be able to retrieve the information when we need
it. The Person, Customer, and Client entities are the most common examples where
we need to store only the ID value to find all the data that we need about them. It is
important to note that this information, in most cases, can be stored in other systems
or databases, and with this approach we keep the business process loosely coupled
with other applications. In the example, we create a Client and store it as a process
variable, because in this case we are collecting the client information and not using
an already existing structure/class.
For the second type of data, that is, the data that will be collected and needed by this
particular process, the handling will depend on the amount and the structure of the
data. You can choose to create and handle different process variables, or an object
that will contain all this information. If you choose to create an object, it will
probably contain the client ID, so we can relate the process instance information
and the client data very easily. In our case, the Client class is created to store all
the client information.
The third type of information needs to be analyzed in each situation. You can
include this information in your current model (in this case, the Client class), or, if
it is only used for some calculations or decisions, you can choose whether it needs to
be persisted or not. If you don't want to persist a process variable, you can always
use the setTransientVariable() method. This method lets you keep the
information in a separate Map of variables that is erased when the process instance
reaches a wait state. In other words, all the variables in the transient Map are
bypassed when persistence is applied.
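The split between persisted and transient variables can be pictured with a small stand-in for jBPM's ContextInstance (MiniContext below is an illustration, not the real class):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for jBPM's ContextInstance (illustration only):
// transient variables live in a separate map that is simply discarded
// when the process instance reaches a wait state and gets persisted.
public class MiniContext {
    public final Map<String, Object> variables = new HashMap<>();
    public final Map<String, Object> transientVariables = new HashMap<>();

    public void setVariable(String name, Object value) {
        variables.put(name, value);
    }

    public void setTransientVariable(String name, Object value) {
        transientVariables.put(name, value);
    }

    // Simulates reaching a wait state: persisted variables survive,
    // transient ones are bypassed by the persistence and erased.
    public void reachWaitState() {
        transientVariables.clear();
    }
}
```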
Note about information classification:
It is very important to note some not-so-intuitive aspects of how we
handle all the information in our processes. For example, if you only store
the client ID but make intensive use of some client information, and that
information is distributed across multiple tables, you will probably
generate a lot of queries and network traffic to get everything you need.
For each piece of information that you will consume, you need to carefully
analyze the possible impact and make decisions based on this analysis.
You also need to keep in mind that if you store objects in the process
variables, the information will be very easy to access, but the serialization
process for your objects can drastically decrease your system's performance.


Variables hierarchy

In a process instance, the variables hierarchy is used to specify different scopes in
which a variable can be accessed. This is a very useful feature when we need to
handle variables in nested paths of our processes. We create nested paths inside our
process executions when we use concurrent paths or subprocesses.
The most common example of these nested paths is when we use a fork node,
which creates a child token for each path it spawns. See the fork node definition in
Chapter 10, Going Deeply into the Advanced Features of jPDL.
Each of these child tokens handles its own Map of variables, which we can merge
with the root token's Map when all the child tokens reach the join node. Here it is
important to learn the rules applied to these variables. First of all, notice that the
variable Map is directly related to the token containing it.
Let's see an example of how this hierarchy works with a process that contains a
fork node representing concurrent paths.
[Figure: the root token starts with Age=30, Salary=USD 5000; after the fork, each
child token gets an independent copy of the variables, which it can modify (for
example Age=21/Salary=USD 3000 or Age=50/Salary=USD 9000). At the join node
we can use a node-leave action to merge the child token variables back into the
root token (for example Age=50, Salary=USD 9000).]


We can see that the variables Age and Salary are copied to the child tokens when the
fork node is reached by the process execution. These new variables in the child
tokens are independent of the variables in the root token: if we change the value of
a variable in one of the child tokens, neither the other child tokens nor the root token
will notice the change. This happens because each token has an independent copy
of the variables.
Depending on our situation, we can do a manual merge of these variable values in
the join node once all the modifications are done.
A key point to remember here is that if we have multiple nested tokens and we
access a process variable using the getVariable(String) method, a recursive
search over the parent tokens is done when the variable doesn't exist in the current
nested token.
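The recursive lookup can be sketched with a minimal stand-in for the token hierarchy (Token below is a simplified illustration, not the real jBPM class, which keeps its variables in the ContextInstance):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for jBPM's token-scoped variable lookup (illustration
// only): each token has its own variable map, and getVariable(String) falls
// back recursively to the parent token when the name is not found locally.
public class Token {
    public final Token parent;
    public final Map<String, Object> variables = new HashMap<>();

    public Token(Token parent) {
        this.parent = parent;
    }

    public Object getVariable(String name) {
        if (variables.containsKey(name)) {
            return variables.get(name);
        }
        return (parent != null) ? parent.getVariable(name) : null;
    }
}
```

Writing a variable on a child token shadows the parent's copy without modifying it, which is exactly the independence described above.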

Accessing variables

Until now we have seen how variables are stored in the contextual information of
each token in our processes and all its children. We have also analyzed how
variables can be accessed through the jBPM APIs using the getVariable() method.
In this section we will see how we can access process variables from the jPDL
process definition. This gives us automatic evaluations to make decisions, or just to
print information that is dynamically calculated during the process execution.
As you will remember, the process definition in jPDL is a static view of our process
that needs to be instantiated to create the execution context that will guide us
throughout the process.
We can take advantage of the information that we know will flow through the
process to add dynamic evaluations using EL (Expression Language) expressions.
Let's see an example of how these expressions can be used in our defined process.
In our telephone company example, we have a decision node that evaluates whether
the bill created for a client is correct or not. In this case, our decision node can use an
expression to read a process variable and, based on the variable's value, choose
a transition. This generic expression will be evaluated at runtime for each
process instance.
In this case, the expression looks like:
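The stripped snippet can be approximated like this; only the use of the approved variable comes from the text, while the node and transition names are assumptions:

```xml
<decision name="bill ok?" expression="#{approved ? 'YES' : 'NO'}">
  <transition name="YES" to="activate phone line" />
  <transition name="NO" to="review bill" />
</decision>
```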



Based on a process variable called approved, this decision node will choose
between two transitions. The expression is defined using EL, the same language
used in JSF. You can find more about this expression language at:
http://developers.sun.com/docs/jscreator/help/jsp-jsfel/
jsf_expression_language_intro.html

In these kinds of situations we know at definition time that the client information
will be needed in the process to make decisions or calculations. We can take this
information and start creating expressions that model generic situations.
In the same way, we can also use EL to define the name of a (human) task that is
dynamically created by the process execution.
In our example, the human task called Review Bill can be called #0012 - Review John
Smith Bill, where #0012 is the process instance ID.
This is possible because we already have all the client information collected in
previous tasks. In this case, the name of the task can contain an expression that will
be dynamically resolved at runtime. The expression can look like the following:
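A sketch of such a task definition; the property paths processInstance.id and client.name, as well as the node and actor names, are assumptions about the example's model:

```xml
<task-node name="review bill">
  <task name="#{processInstance.id} - Review #{client.name} Bill">
    <assignment pooled-actors="accountManager" />
  </task>
  <transition to="bill ok?" />
</task-node>
```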







A good use of these expressions can help you be more expressive and declarative in
the way data is used in your process. Then, in the users' task lists, each task will
carry the name of the client, helping to identify all the tasks related to the same
client and to process them in a very intuitive way.

Testing our PhoneLineProcess example

In this section we will run a test that interacts with our defined process, letting you
see how the variables are handled by the process and by the action handlers.
The test is called PhoneLineProcessTest and you can find it in the code provided
with this chapter at http://www.packtpub.com/files/code/5685_Code.zip
(look for Chapter 9).


To run this test, you first need to configure your database connection in the
hibernate.cfg.xml file. In this case we use a direct JDBC connection to persist
our process information. You can learn a lot more about this configuration in
Chapter 6, Persistence.
You will probably need to change the following data in the hibernate.cfg.xml file:


<property name="hibernate.dialect">
    org.hibernate.dialect.MySQL5InnoDBDialect
</property>
<property name="hibernate.connection.driver_class">
    com.mysql.jdbc.Driver
</property>
<property name="hibernate.connection.url">
    jdbc:mysql://localhost:3306/jbpmtestvariables
</property>
<property name="hibernate.connection.username">root</property>
<property name="hibernate.connection.password">salaboy</property>
<property name="hibernate.query.substitutions">true 1, false 0</property>



As you can see, the common properties needed to establish a connection are
required. If you take a look at the hibernate.connection.url property, you will
see that in this case we are using a MySQL schema called jbpmtestvariables. You
need to create this schema first so that Hibernate can establish the connection
successfully.
If you have time and want to experiment with another database, feel free to change
the Hibernate dialect and the connection properties to your specific vendor. For
these kinds of changes you will also need the corresponding JDBC driver.
Remember that you can get it with Maven by editing the pom.xml file and adding
your driver dependency near the MySQL dependency:

<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.6</version>
</dependency>

Now you can go and run the test. Let's analyze what is happening in it; I have left
many comments there to guide you.


Basically, the test just shows you how the telephone company process works
with real APIs. You will find how the variables are managed and handled by the
framework APIs and you can see how these variables are stored in the database.
If you take a look at the table called JBPM_VARIABLEINSTANCE, you will find all
the variables that your process stored in the context instance.

As you can see, the account, client, and bill process variables are persisted as a
ByteArray inside the JBPM_BYTEARRAY table, with the IDs corresponding to the
BYTEARRAYVALUE_ column in this table. We can deduce how jBPM persists
each variable, based on the CLASS_ column in this table. In the case of the variables
account, client, and bill, the value of this column is B (for binary or byte array).
If you want to see all the different variables' mappings and which letter represents
each of them, you can open all the HBM files that map each variable type.
The important thing here is that variables of type object that match the
Serializable matcher will be serialized and stored as binary objects. This can
be a good approach in some situations, but for the most common use cases we
want to persist these variables in a relational way.
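The serialization just described can be sketched with plain JDK serialization: the object is turned into the byte array that ends up in the JBPM_BYTEARRAY table. The Client class below is a stand-in for the one in the example project, not the real class:

```java
import java.io.*;

// Sketch: how a Serializable process variable becomes a byte array,
// which is essentially what jBPM does for 'B'-type variables.
// The Client class here is a stand-in, not the one from the example project.
public class SerializedVariableSketch {

    static class Client implements Serializable {
        String name;
        Client(String name) { this.name = name; }
    }

    // Serialize an object to the byte[] form that would be stored in the database.
    static byte[] toBytes(Serializable value) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(value);
            }
            return bos.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // Deserialize it back, as the engine does when the variable is read.
    static Object fromBytes(byte[] bytes) {
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Client client = new Client("Salaboy");
        byte[] stored = toBytes(client);
        Client restored = (Client) fromBytes(stored);
        System.out.println(restored.name); // Salaboy
    }
}
```

Note that this round trip is opaque to the database: the object's fields cannot be queried relationally, which is exactly the limitation addressed in the next section.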
In the next section we will see how we can store the client information in a
relational manner.

Storing Hibernate entities variables

In the PhoneLineProcessTest example, we just store the Client object as a
Serializable object. That means our object is serialized each time our process
reaches a wait state. Sometimes, this is not the optimal approach. Here we
will see how to store these variables as Hibernate entities, which means our
object will be mapped to a relational table containing all its data.

We can achieve this by just mapping the Client class using an HBM file. The
client-mapping file will look similar to this (the exact class name and
property list depend on the fields defined in the Client class of the
example project):

    <?xml version="1.0"?>
    <!DOCTYPE hibernate-mapping PUBLIC
        "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
        "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
    <hibernate-mapping>
        <class name="Client" table="CLIENT">
            <id name="id" column="ID">
                <generator class="native"/>
            </id>
            <property name="name" column="NAME"/>
        </class>
    </hibernate-mapping>

You will only need to add this mapping file to the hibernate.cfg.xml
configuration file in the mapping section. In the example project, you can
find a commented-out line at the end of the mapping section that looks like
this (the exact resource name may differ in the project):

    <!-- <mapping resource="Client.hbm.xml"/> -->

And uncomment it, so it reads:

    <mapping resource="Client.hbm.xml"/>
Now, when you run the test again, you will see that in your database the Client table
is now created and used to store your client objects.
In some situations this approach is much more useful than having your object
serialized in your database.
It's important for you to know that you will need to choose between
serializing or mapping your objects as Hibernate entities. This decision
needs to be analyzed for each type of information that you will handle. If
you use external/third-party entities, you will probably want to reuse the
tables that contain your entities information. Remember that if you choose
to serialize your objects and they are already persisted as Hibernate
entities, you will be duplicating unnecessary data.


Homework

For this homework, you should try to create the mapping file for the Account
class, which represents a new client account in the company. If you are
familiar with the syntax of HBM files, this will be an easy task for you. You
can then run the test and see in the database how your Account objects are
stored in the Account table. If you are not familiar with the HBM file syntax,
you can take a look at the Hibernate documentation to learn how to write your
mapping files.

Summary

In this chapter we learned about how the information flows throughout our
processes and the importance of analyzing each piece of information in order
to see how it will be persisted.
As you saw in this chapter, persistence plays a big role in the behavior and
performance of our processes. To reduce risks and achieve successful
implementations with jBPM, we must know about the framework's internal
configuration and how the persistence needs to be configured for each
situation. We'll see more on that in the next chapter.
The points that you need to remember about this chapter are the following:

•	The information in our process is stored in process variables
•	The process variables are persisted at the same time as the process
	status information, when the execution reaches a wait state
•	There are extensible and pluggable strategies that let you customize how
	each of the process variables is persisted
•	There are some rules about the process variables that you need to know
	when your process has nested paths
•	You can read, evaluate, and print the process variables' information
	using EL
•	You can add your own custom mappings to store your process variables as
	Hibernate entities

In the next chapter we will discuss more advanced topics of the jPDL language that
will help us model complex processes to fulfill a broader range of situations.

Going Deeply into the
Advanced Features of jPDL
In this chapter, the reader will learn about the advanced capabilities of the
jPDL process definition language. We need to go deep into these capabilities
to be able to represent complex situations. We also have to understand the
flexibility that the language provides.
We will begin this chapter by looking at the rest of the nodes that were not
covered in Chapter 4, jPDL Language, in order to give you a complete overview
of the built-in capabilities offered by the language. Then we will continue to
more advanced settings and configuration of nodes, and actions inside the
process definition.
In this chapter, we will discuss the following topics:

•	Fork and join nodes
•	Super state node
•	Process state node
•	E-mail node
•	Advanced configurations in jPDL
	°	Start state task definitions
	°	Parameterizing actions

Why do we need more nodes?

jPDL includes extra functionalities to give you the ultimate flexibility to represent
situations where you need advanced features like subprocesses, concurrent paths,
hierarchical organization, and so on.


As you can see, all the mentioned behavior is generic and can be applied in
every situation that meets some of these requirements. Once you have
understood all these behaviors and functionalities, you will be able to decide
whether you need to implement or extend a custom type of behavior that fits
your particular situation.
All the nodes discussed here are subclasses of the Node class, so all the
rules that we have already seen about behavior and functionality apply to
these nodes as well.

Fork/join nodes

These two nodes work together, which is why they are explained in the same
section. Despite this relationship, each implements different logic.
We will start by talking about the fork node, because it implements new
functionality that has not been covered yet, and it is very important that
you understand it correctly.

The fork node

This node is used to split the current path of execution (also known as a
token) into multiple concurrent paths. We need this functionality when we
want two or more activities to be executed and completed in parallel. The
only requirement is that these activities have no ordering dependencies on
each other.
Let's discuss this with an example. Imagine that we need to pass different
medical exams to get a job. Basically, we can see a process like this one:

[Figure: the Initial Interview node followed by a fork node that splits into
Heart Exam, Lungs Exam, and Blood Pressure Exam]


In a situation like this, where the activities don't have any dependencies on
each other, the exams can be done in parallel. No matter which is started first or
which is ended first, the only important point here is that all the exams need to be
finished successfully.
The fork node gives us the possibility to take more than one path at the same time.
How does this work? Which is the current path? If there are multiple paths, how do I
interact with them?
The mechanism implemented in jBPM works as follows: when the execution
arrives at the fork node, the execution is split into N paths of execution,
where N is the number of leaving transitions defined in the fork node. The
fork node creates N new subpaths of execution and signals each of them to
start executing. Each of these new subexecution paths has a hierarchical
relationship with the main path of execution that arrived at the fork node.
In other words, when the process execution arrives at the fork node, new
child tokens are created and a parent-child relationship is maintained with
all of them.
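The token-splitting steps just described can be sketched with a toy model. This is an illustration of the idea only, not the jBPM implementation:

```java
import java.util.*;

// Toy sketch of the fork mechanism described above: for each leaving
// transition, a child token is created under the parent and signaled.
public class ForkSketch {

    static class Token {
        final String name;
        final Map<String, Token> children = new LinkedHashMap<>();
        Token(String name) { this.name = name; }
    }

    // Create and "signal" one child token per leaving transition.
    static Token fork(Token root, List<String> leavingTransitions) {
        for (String transition : leavingTransitions) {
            Token child = new Token(transition);
            root.children.put(transition, child);
            // here the real engine would signal the child so it advances
            // until it reaches a wait state
        }
        return root;
    }

    public static void main(String[] args) {
        Token root = new Token("root");
        fork(root, Arrays.asList("path1", "path2", "path3"));
        System.out.println(root.children.size()); // 3 child tokens
    }
}
```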
The following image describes how this works inside the framework. Let's
analyze it!

[Figure: the root token reaches the fork node, which 1) counts the leaving
transitions and 2) for each transition 2.1) creates a new child token and
2.2) signals it, producing child token 1, child token 2, and child token 3]


As you can see in the image, when the execution reaches the fork node, three new
child tokens are created (child token 1, child token 2, and child token 3) and the
main token (the root token) remains in the fork node. The newly created tokens are
automatically signaled to start their executions through each leaving transition.
Now our whole process execution will be represented by the root token and the three
just-created subtokens. We will have a parent path and the three child paths created
for the fork node. We will be able to query each of these tokens to find out in which
node they are stopped.
[Figure: the root token with its three child tokens]

If we take a look at the framework's APIs, we will find that we can query the token
and get all of its children, and also the token of its parent.
    public Map getChildren()
    public Token getChild(String name)
    public boolean hasChild(String name)
    public Token getParent()

If we take a look at the database, we will find that new tokens are persisted and all of
them have a reference to the root token ID.
In jPDL, we can define a fork node with the following syntax (the target node
names here follow the medical exams example):

    <fork name="fork1">
        <transition name="path1" to="Set up Heart Interview"/>
        <transition name="path2" to="Set up Blood Pressure Interview"/>
        <transition name="path3" to="Set up Lungs Interview"/>
    </fork>
This will automatically create N child tokens (because we have N leaving transitions)
and signal them for you—in this case, N is 3.
At this point, if we ask the process where it has stopped, it will reply "in the fork
node". This is because the main path of execution is waiting for all of its children to
finish their executions in order to continue.


When the execution of our concurrent activities ends, we need a way to tell
the process execution that all the parallel paths have finished and we need to
synchronize them again in order to have just one main sequential path of execution.
We can achieve this synchronization by using the join node, which will let us wait
until all the sub-paths created by the fork node end.

The join node

This node is in charge of waiting for all the child tokens to arrive at it in order
to continue the main path of execution. The join node can only have one leaving
transition that will be taken when all the children arrive at this node.
[Figure: three nodes, each holding a child token, leading into the join node,
which continues to the next node]

When all the child tokens arrive at the join node, each child token (subpath
of execution) is marked as ended, and a signal is automatically triggered on
the root token to continue.
The root token will not pass through all the nodes between the fork and join nodes.
It will only jump from the fork node to the join node when all the activities between
them are completed (by its child tokens).
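The synchronization rule can be sketched as follows: the parent path continues only when every child token has ended. This is a toy model of the idea, not jBPM code:

```java
import java.util.*;

// Toy sketch of the join behavior described above: the parent path
// continues only when every child path has ended.
public class JoinSketch {

    static class Token {
        boolean ended;
    }

    // The join "fires" (lets the parent continue) once all children ended.
    static boolean joinReady(Collection<Token> children) {
        for (Token t : children) {
            if (!t.ended) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        List<Token> children = Arrays.asList(new Token(), new Token());
        System.out.println(joinReady(children)); // false: still waiting
        for (Token t : children) t.ended = true;
        System.out.println(joinReady(children)); // true: parent continues
    }
}
```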

Modeling behavior

Remember that we are modeling real business processes here and not discussing
technical issues. This is related to the behavior that we are trying to model. The
parallelism between the activities represented with the fork and join nodes is not
related to technological concerns. In other words, try not to relate fork/join nodes to
multi-threaded programming. This is a common mistake that needs to be avoided.
Try not to use fork and join nodes to represent technical issues. The aim of these
nodes is to represent business scenarios without any impact on the technical aspects
of the process execution.

Let's see how these nodes work in the following code example—you can find the
sources of this example in the /forkAndJoinNodesExample/ directory.
If we take a look at the project provided in this chapter, we will see the following
implemented process:
[Figure: the implemented process: Start, Initial Interview, a fork into three
paths covering the Set up Heart Interview/Heart Exam, Set up Blood Pressure
Interview/Call Nurse/Blood Pressure Exam, and Set up Lungs Interview/Lungs
Exam activities, a join, Sign Final Approval, and End]


The jPDL modeled process tries to represent a situation similar to the one
discussed before. It is important to know that this example uses a lot of
state nodes to represent the activities in our process. This is just to
isolate the example from external complications that could confuse you. If
you need to model the same scenario in real life, and not just as an example,
you will probably use the task node to represent human tasks. Here, I have
decided not to use task nodes, because they require additional configurations
that are outside the scope of this example.
Using state nodes, we get a kind of state machine where we need to signal
each state node to continue the process execution.
Let's analyze the test included in the src/test/org/jbpm/example/ProcessTest.
java file of the project.

ProcessDefinition pD = ProcessDefinition.parseXmlResource(
        "processes/processdefinition.xml");
assertNotNull(pD);
ProcessInstance pI = pD.createProcessInstance();
assertNotNull(pI);
pI.signal();
assertEquals("Initial Interview",
        pI.getRootToken().getNode().getName());
// When we signal the first state node called "Initial Interview", the
// process goes directly to the fork node, generates three child tokens,
// and signals them. These three tokens (new subpaths of execution)
// will continue until each of them reaches a wait state node.
// Here is a good point to step into the jBPM code and see what is
// happening behind the scenes.
pI.signal();
// Once the fork node is executed and each of the three new subpaths
// of execution reaches a wait state or the join node, the signal()
// method returns.
// Here we check that the main path of execution is stopped in the
// fork node.
assertEquals("fork1", pI.getRootToken().getNode().getName());
Map<String, Token> childTokens = pI.getRootToken().getChildren();
// We have three child tokens now.
assertEquals(3, childTokens.size());
Set<String> keys = childTokens.keySet();
// We need to signal each of them to end the activities in each
// subpath.
for (String key : keys) {
    childTokens.get(key).signal();
}
assertEquals("Sign Final Approval",
        pI.getRootToken().getNode().getName());
pI.signal();
assertEquals("End", pI.getRootToken().getNode().getName());

This test reads the process definition and then creates an instance of that
definition, which is started by calling the signal() method.
As the first node, called Initial Interview, is a wait state, we need to call
the signal() method again for the process to continue its execution to the
fork node.
When the process reaches the fork node, it will automatically create three child
tokens based on the three leaving transitions defined in this example.
As the fork node doesn't behave as a wait state, it will automatically call the
signal() method for the three newly-created tokens. It is important to note that the
implementation of the fork node has a foreach loop that creates a token and signals
it for each leaving transition defined. This will cause each path of the execution to
continue until it reaches a wait state.
Once again, there is no multi-threaded programming involved; the execution is
sequential from the framework perspective, because the created paths are
executed one at a time. From the process perspective, the execution is in
parallel, because all the wait states are external to the framework and can
be worked on and completed independently, without a specified order.
In this example, we will see that the execution continues through the automatic
nodes until each path reaches a wait state node.
As you can see, the three paths can have different amounts of activities and
there are no restrictions about that. In other words, one of your paths could
have one activity and another path could have a thousand. In this example, we
don't have any path that includes only automatic nodes, but there is no
restriction about that either.
In that case, the path will be executed throughout all the automatic nodes until it
reaches the join node. When you call the signal() method for the child token, you
will see how that path will arrive at the join node and be marked as ended.

Super state node

Another common situation when you need extra functionality is when you have a
large process with a lot of activities. The super state node allows us to order all of our
process nodes in phases. The idea here is to enclose a bunch of activities with a super
state to demarcate that all these activities are in a certain phase of the process. If we
do this grouping and enclose all the activities in different super state nodes (phases),
we will be able to query these phases and know, very quickly, the block or high-level
group of activities in which our process has been stopped.
In large processes, this technique will bring order to your activities and also
flexibility to add custom code when the process moves from one phase to another.

Let's discuss how this works by looking at the following image:
[Figure: ten activities grouped into phases: Phase 1 (containing a nested
Inner Phase 1), Phase 2, and Phase 3]

With this subdivision, we get a clear overview of where we are in the
execution of the whole process. If we need to design a user interface, using
these phases will allow us to show a meaningful percentage of progress, even
when a few of the many activities represent a large amount of the work.
In the following image, we see different phases that represent similar
amounts of work while grouping different numbers of activities. These phases
will probably be designed and discovered by business analysts.
In the example provided in this chapter, you will find the following real scenario:
[Figure: a simplified unified process: Start, Collect Use Cases, Refine Use
Cases, Define Class Diagrams, Implement Classes, Define Tests, Define DB
Structure, Create DB Structure, Run Tests, Validate Use Cases, and a Last
Iteration decision that either loops to the next iteration or ends the
process]


In this example, we can see a simplistic and incomplete view of the unified process
for software development.
In this process, we can see a lot of activities grouped into the different
phases of the software development cycle. As one of the main characteristics
of the software development process is its iterative approach, we start the
first phase again when the previous cycle is completed.
In this incomplete view of the unified process, we have four defined phases:
1. Requirements
2. Analysis and design
3. Implementation
4. Testing
Each one of these phases is represented by a big gray block enclosing the state nodes,
and contains different amounts of activities. When we jump from one phase to the
other, we get to know that the cycle is progressing.
In this case, just for the example, we can say that each phase in the cycle
represents a quarter of the effort needed to complete a full cycle. Note that
the number of activities in each phase is not the same; different numbers of
activities can logically represent the same amount of work.
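Under the assumption that each phase represents an equal share of the cycle, a user interface could report coarse progress from the current phase alone. This is a sketch using the four phase names from the text, not jBPM code:

```java
import java.util.*;

// Sketch: because each phase groups nodes, a UI can report coarse
// progress by the phase the execution is in. Each phase is assumed to
// represent an equal share of the full cycle.
public class PhaseProgressSketch {

    static final List<String> PHASES = Arrays.asList(
        "Requirements", "Analysis and design", "Implementation", "Testing");

    // Percentage completed before entering the given phase.
    static int progressPercent(String currentPhase) {
        int index = PHASES.indexOf(currentPhase);
        return (index * 100) / PHASES.size();
    }

    public static void main(String[] args) {
        System.out.println(progressPercent("Implementation")); // 50
    }
}
```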
In jPDL XML syntax, we can define a super state node like this (the enclosed
nodes and the action classes shown are illustrative):

    <super-state name="Implementation">
        <state name="Implement Classes">
            <transition to="..."/>
        </state>
        <!-- more enclosed nodes -->
        <event type="superstate-enter">
            <action name="Entering Implementation Phase" class="..."/>
        </event>
        <event type="superstate-leave">
            <action name="Leaving Implementation Phase" class="..."/>
        </event>
        <transition to="..."/>
    </super-state>

As you can see in the code snippet, you will have a set of nodes included
between the <super-state> tags. It's also important to note that this super
state node includes some useful events, such as superstate-enter and
superstate-leave.
The example discussed here is extremely simple; in real scenarios, we will
certainly have more complicated situations. Let's see some other combinations
that we can have in real scenarios, which aren't covered in the previous
example.

Phase-to-node interaction

This is the other normal scenario that we can have when we are working with
node hierarchies.
[Figure: Node 1 and Node 2 inside a super state, whose leaving transition
points to the external nodes Node 3 and Node 4]

This execution works as a normal node-to-node execution. When the second node
ends its execution, it triggers the end of the super state node, which will
take its default leaving transition. Note that we cannot have a transition
between Node 2 and the super state node.
In this case and in all the cases where we use state nodes, we will have the following
events to hook custom actions:

[Figure: the events fired when leaving the super state: 1) node-leave,
2) superstate-leave, 3) transition, and 4) node-enter]


Node in a phase-to-phase interaction

Another common situation is when we want two stages of our processes to interact.
[Figure: two super states, one enclosing Node 1 and Node 2 and another
enclosing Node 4, Node 5, and Node 6, connected by a transition, with
Node 3 outside both]

These kinds of cases look complicated but, in fact, all of them work in the
same way. When node 2 ends its execution, it will trigger the node-leave,
transition, superstate-leave, superstate-enter, and node-enter events.
If you try to model this situation, you will see that a special mechanism is needed
to make a node, which is inside a phase, communicate with an external node that is
outside it.
This happens because we need to specify that the transition is leaving a phase
without finishing all the necessary activities inside the phase. In other words, we can
also have situations where node 1 (in the previous figure) goes directly to node 4,
without completing node 2.
We will talk about this mechanism and its rules in the Navigation section.

Node-to-node interaction between phases
[Figure: two super states whose inner nodes are connected directly by a
transition from a node in one phase to a node in the other]

Something similar to what we discussed before happens here. In situations
like this one, we are leaving a phase without pointing a transition at any
super state node. Here, the transition is defined inside the node, as opposed
to the phase-to-phase (in the development process example) and phase-to-node
situations, where the transition is defined in the super state node.


Complex situations with super state nodes
Of course, we can also have more complex situations, such as:

[Figure: a process with several nested and overlapping super states between
the start and end nodes]

Here, everything works in the same way, don't worry! It's important to know that no
new tokens are created by the super state nodes.
At the API level, it is very useful to know that a super state node implements the
NodeContainer interface, similar to the ProcessDefinition class. Knowing this,
we will be able to query all the nodes enclosed by a super state, and with this we
can create some kind of description of each phase inside our processes.
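The container idea can be sketched as follows. This is a toy model of a node container, not the jBPM NodeContainer API: querying a container recursively yields a description of everything inside a phase.

```java
import java.util.*;

// Toy sketch: because a super state is a node container (like the process
// definition itself), we can recursively describe the nodes enclosed by
// each phase.
public class NodeContainerSketch {

    static class Node {
        final String name;
        final List<Node> children = new ArrayList<>(); // empty for plain nodes
        Node(String name) { this.name = name; }
    }

    // Collect the names of every node enclosed by a container, at any depth.
    static List<String> describe(Node container) {
        List<String> names = new ArrayList<>();
        for (Node child : container.children) {
            names.add(child.name);
            names.addAll(describe(child)); // recurse into nested super states
        }
        return names;
    }

    public static void main(String[] args) {
        Node process = new Node("process");
        Node phase = new Node("Implementation");
        phase.children.add(new Node("Implement Classes"));
        process.children.add(phase);
        System.out.println(describe(process)); // [Implementation, Implement Classes]
    }
}
```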

Navigation

As we have seen in the previous sections, we need a mechanism to specify when
we are leaving from, or arriving directly at, a node inside a super state
(without pointing the transition to the super state itself). This mechanism
is very simple and intuitive: it is based on the directory paths of a
filesystem, which naturally denote hierarchy and inclusion between elements.
If we have the following situation:
[Figure: Node A and Node B inside a super state, with Node C and Node D
outside it]


The transition defined in node B will look like this (assuming the target is
node C outside the super state):

    <transition to="../NODE C"/>
Here, with ../, we are going down one level in the hierarchy. We can say that
for each nested super state, we have one more level in the hierarchical
relationship. Something that has not been mentioned yet is that we can have
multiple nested super states without any restriction.

[Figure: a process with nested super states: super state E contains node F
and super state G, which in turn contains node H; nodes A, B, C, and D sit
at other levels]

If we want to have a clear vision of complex situations that involve many nested
super states, we can represent the same graph in something like a tree structure:

[Figure: the same process as a tree: the process definition is the root,
containing nodes A, B, C, and D, and super state E, which contains node F
and super state G, which contains node H]

As you can see, the root level is the process definition itself; it
represents level 0 of the hierarchy. When we create a transition between two
nodes at the same level, we don't need to specify any path information in the
target node name. A transition between two nodes at the root level will
appear as we already know:

    <transition to="NODE B"/>

However, if we are linking the nodes between different levels of hierarchy—in other
words, between different super states (that may or may not be nested), we need to
specify if we are going down or up the hierarchy.
As an example, consider the transition between node D and node F, which is
inside the super state called E; this transition jumps between different
levels. The transition in this case will look like:

    <transition to="NODE E/NODE F"/>
The transition specifies that the node F is inside the node E in the path.
We can also have the opposite situation. If you take a look at the relationship
between the node H and the node C, you will notice that we need to go
down multiple levels of the hierarchy. The transition in this case will look
like:

    <transition to="../../NODE C"/>
To complete this section, it is important for you to know that your node names
should avoid the use of the "/" (slash) character. You can try this if you want,
but at your own risk.
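The path rules above behave exactly like filesystem paths, so they can be sketched with a minimal resolver. This is an illustration of the rule, not jBPM code; the node names match the preceding figures:

```java
import java.util.*;

// Sketch: super state navigation paths behave like filesystem paths.
// "../" climbs one level of the hierarchy and "X/Y" descends into a
// nested super state.
public class PathSketch {

    // Resolve a transition target relative to the super state path that
    // contains the source node (e.g. "NODE E/NODE G" for a node inside
    // super state G nested in E; "" for a node at the root level).
    static String resolve(String currentScope, String target) {
        Deque<String> stack = new ArrayDeque<>();
        if (!currentScope.isEmpty()) {
            for (String part : currentScope.split("/")) stack.addLast(part);
        }
        for (String part : target.split("/")) {
            if (part.equals("..")) stack.pollLast(); // climb one level
            else stack.addLast(part);                // descend / name the node
        }
        return String.join("/", stack);
    }

    public static void main(String[] args) {
        // from the root level, reach node F nested inside super state E
        System.out.println(resolve("", "NODE E/NODE F"));             // NODE E/NODE F
        // from node H (inside G, inside E), climb back to node C at the root
        System.out.println(resolve("NODE E/NODE G", "../../NODE C")); // NODE C
    }
}
```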

Process state node

This node lets us bind two different jPDL-defined processes. In other words,
you will be able to include the execution of a whole process inside a node.
This lets us break our large processes into small ones with highly focused
goals, and then coordinate them together.
It is very important to understand the difference between the process state node and
the super state node. This process state node will instantiate a whole new process
execution that will run as we have already seen.
The parent process will be in a wait state until the child process ends its execution.
When the child process is instantiated by a process state node, it is automatically
signaled to begin.


In order to create a relationship between the parent and the child process, we need to
specify the following information in the process state node definition:
•	Process definition name: The name of the process definition, which needs
	to be already deployed. This parameter is analyzed at runtime, so it will
	not be validated when you deploy your parent process. If the child
	process has not been deployed by the time the process execution reaches
	the process state node, an exception will be thrown.
•	Version: The version of the process definition that you want to use. If
	you don't specify any version number, the latest process definition will
	be used.
•	Variable mapping: You will be able to send information between your
	parent and child processes. In most cases, this is required by the
	business logic associated with your processes. However, the variable
	mapping is optional.

Let's see an example of how this process state node works.
Imagine that you have to get some medical checkups done to be able to work in a
company. If you see the entire process to recruit people, the medical exams look like
just one activity. But indeed, it is a very complicated process that varies depending
on your age, sex, and the type of work that you will do in the company.
If you remember, in Chapter 4, jPDL Language, we said that we can define just one
process definition inside a jPDL XML file, so we need to define two processes in two
different files and then bind them together. It is important to mention that the child
process doesn't need to make any reference to the parent process. With this feature,
you can reuse your defined processes without making a special modification for
each parent.


In the "medical exams" situation, this will look like:
[Figure: the parent process: Start, Initial Interview, Medical Exams (a
process state node), Final Approvement, and End]

And the child process that will present a detailed description about the medical
exams is defined in another file as a normal process definition.
[Figure: the child process: Start, a fork into Heart Exam and Blood Pressure
Exam, a join, and End]


If you take a look at the code provided for this chapter, you will find an example
called /ProcessStateExample/ where you will find two defined processes and a
test case that will execute both processes.
It is a very common requirement to pass contextual information from the
parent process to the child process. We can achieve this by using the
variable mapping feature provided by the process state node. This lets you
map variables that already exist in the parent process to variables that
will be created in the child process at runtime.
These mappings also include a strategy to decide if the variables can be modified
inside the child process and copied back to the parent.
The variable mapping for the process state node works in the same way as the
task instances variable mapping. In both situations, the same variable mapping
implementation is used.
The variable mapping between the parent process and the
sub-process is done inside the process state node properties,
in the same place where you parameterize the name of the
sub-process and the version to be used.

With the built-in implementation, you can create a one-to-one mapping using the
variable names. For example:
Variable      Mapped name    Read    Write    Required
Variable1     Var1           X       X

If we don't specify the mapped name attribute, the same name
will be used in the child process.

This mapping will copy, at runtime, the value of the variable called Variable1 in the
parent process to a variable called Var1 in the child process.
As we know, the value is copied into a new variable in the child process. If we
change that value inside the child process, the parent process will not see the
change reflected in its own variable.
So, if we need to modify the parent variables from inside a child process, we need
to use the strategies mentioned before. These strategies let us define whether the
variables can be changed in the child process, and whether those changes are copied
back to the original variables when the child process ends.
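In jPDL, these mappings are declared with <variable> elements inside the process state node. A minimal sketch (the node, sub-process, and transition names here are illustrative, not taken from the book's example code):

```xml
<process-state name="Medical Exams">
  <!-- bind the already-deployed child process definition -->
  <sub-process name="MedicalExamsProcess" />
  <!-- copy Variable1 into the child as Var1, and copy it back on end -->
  <variable name="Variable1" mapped-name="Var1" access="read,write" />
  <transition to="Final Approvement" />
</process-state>
```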

Mapping strategies

The mapping strategies mentioned here are used to inform the framework how
to handle the process variables between a parent process and a child process
specified inside a process state node. These strategies are part of the base mapping
implementation that you can easily extend. We have three built-in strategies to use:
•	read: Each variable marked with the read strategy is automatically copied into the child process when the child process is instantiated at runtime.
•	write: If we mark a variable as write, the variable's value is copied back to the parent process at the end of the child process.
•	required: If a variable marked as required doesn't exist when, at runtime, the framework tries to copy it from the parent to the child, or vice versa, an exception is thrown.

It's important to note that if we use the read strategy, it will allow us to make
modifications (write and update) in the variable value inside the child process. The
only difference between read and write is that read doesn't copy the value back to
the parent variable when the child process ends.
The following example shows how these mappings work and how we can choose
different strategies for different variables. The parent process holds these variables:

Parent Process Variables:
    Variable1 = "10"
    Variable2 = "5"
    Variable3 = "15"

with the following mappings:

Name         Mapped Name    Read    Write    Required
Variable1    var1           X
Variable2                   X       X

When the child process is instantiated at runtime, it will have the following variables:

Child Process Variables:
    var1 = "10"
    Variable2 = "5"


Now, in the child process logic, we can modify both variables. Let's suppose that
an activity modifies the variable called var1 with the value 0 and the variable called
variable2 with the value 8.
Child Process Variables:
    var1 = "0"
    Variable2 = "8"

When the child process ends, the strategies come into play once again and only
Variable2 is copied back, leaving us with the following variables in the
parent process:

Parent Process Variables:
    Variable1 = "10"
    Variable2 = "8"
    Variable3 = "15"

In this short example, we can see how the mapping strategies let us copy
information between nested processes.
Note that it is recommended to copy only the information needed by the child
process and not to map unnecessary data: each variable that we store represents
one or more queries to the database.

The e-mail node

This node is a classic example of how you can plug in a specific activity by
extending the Node class. Basically, this node sends an e-mail when the process
execution reaches it. One interesting feature is that you can customize your mail
with templates, which are filled in with the values of process variable instances.

As you can imagine, this node needs extra configuration and a valid, running
e-mail server in order to work properly.
To configure the e-mail server, take a look at the file called jbpm.cfg.xml and
change the parameters used to reach the server. If you open the
default.jbpm.cfg.xml file provided inside jbpm-jpdl.jar, you will find
the properties used to configure your mail server.
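In jBPM 3, these entries typically look like the following (the property names come from the default configuration; the values are placeholders to adapt to your environment):

```xml
<jbpm-configuration>
  <!-- SMTP server the mail node uses to deliver messages -->
  <string name="jbpm.mail.smtp.host" value="localhost" />
  <!-- default "from" address for generated mails -->
  <string name="jbpm.mail.from.address" value="jbpm@noreply" />
</jbpm-configuration>
```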




It's important to note that the main difference between using this node and
sending a mail from a code snippet inside an action is that the e-mail node
describes the process in a more declarative way, so anyone who looks at the
process graph can understand it.
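A declarative mail node in jPDL looks roughly like this (the node name, addresses, and the #{...} expressions are illustrative):

```xml
<mail-node name="Notify Candidate" to="#{candidate.email}"
           subject="Medical exams scheduled"
           text="Dear #{candidate.name}, your medical exams have been scheduled.">
  <transition to="Final Approvement" />
</mail-node>
```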

Advanced configurations in jPDL

This section covers the advanced features provided by the jPDL language. Many
different topics are discussed, so you can use this section as a reference for the
most commonly used advanced configurations.
We begin with configurations inside our process definitions that have not been
covered yet because they describe ways of working that are not intuitive.
The first topic is how to start a process instance with a human action, and how
to start a process with input data that is needed at process initialization time.

Starting a process instance with a human task
This feature was introduced based on the fact that a business role must be able to
start a process, and this action needs to be considered as a human task.

With this feature, you gain the ability to see a task in your task list that will represent
the starting activity in your process.


A common situation for this type of usage is when, for example, an administrator
creates the process instance, but the process needs additional information,
unknown to the administrator at creation time, in order to start. In these cases, a
human task can be created and assigned to the business role that knows, or can
find, the information needed to start the process execution. When this business
role looks at the created task in his/her task list, he/she fills in the required data
and ends the task, and the process begins, leaving the start state.
The human task feature in the start state node is also used to know which business
role starts the process. In many cases, you need the business role that starts the
process to also be able to handle other related tasks in this specific
process instance.
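In jPDL, this is expressed by placing a task inside the start state. A minimal sketch (the task, swimlane, and transition names are illustrative):

```xml
<start-state name="start">
  <!-- the process leaves the start state when this task is completed -->
  <task name="Fill Initial Data" swimlane="initiator" />
  <transition to="Initial Interview" />
</start-state>
```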
In cases where the user creating the process instance already knows all this
information and wants to begin the process immediately, a map with the process
variables can be set through the jBPM APIs before the first call to the
signal() method:

ProcessInstance pI = pD.createProcessInstance();
pI.getContextInstance().setVariables(variables); // 'variables' is a Map
pI.signal(); // starts the process execution

Both approaches work and achieve exactly the same goal, but you need to analyze
how the process behaves in the real world in order to choose the right one for
your situation.

Reusing actions, decisions, and assignment
handlers

As we have already seen, if you want to specify custom action code inside a node or
inside an event, you need to implement the ActionHandler interface and then bind
the fully qualified name (FQN) of the class to the node or the event action. The
same applies to decision and assignment handlers.
If you have similar, but not identical, functionality spread across a set of
actions, you would otherwise need to create, compile, and maintain a class for each
of them. For these cases, you can take advantage of jPDL features that let you
reuse your classes and parameterize them to behave differently in each situation.


This parameterization can be achieved in four ways:
•	Properties
•	Bean
•	Constructor
•	Compatibility

Properties

This is the most common way to do it. It lets you parameterize your action
handler by setting property values specified in the jPDL XML syntax. Let's see how
this works in the following example.
jPDL syntax to parameterize your action handlers:

<action class="InitializePropertiesActionHandler">
  <firstName>John</firstName>
  <lastName>Smith</lastName>
  <age>30</age>
</action>
In this case, the action handler code will look like:

public class InitializePropertiesActionHandler implements ActionHandler {

    private String firstName;
    private String lastName;
    private Long age;

    @Override
    public void execute(ExecutionContext executionContext) throws Exception {
        System.out.println("First Name: " + firstName);
        System.out.println("Last Name: " + lastName);
        System.out.println("Age: " + age);
    }
}

Now, if you need to use this action in multiple nodes or events, you can change
the action configuration each time the process calls it. In other words, you can
change the behavior without changing the compiled class.
This configuration method accesses and sets the fields directly, without going
through the accessor (setter/getter) methods, so it bypasses the encapsulation
principle of object-oriented programming.

Bean

This works in the same way as the properties method, but uses the standard
accessor methods to set the properties of the ActionHandler class. In this case,
we need to add getter and setter methods to the ActionHandler class for the
configuration to work.
This approach lets us validate, inside the setters, the configuration provided
in the jPDL file.
Take a look at the following code:

public class InitializePropertiesActionHandler implements ActionHandler {

    private String firstName;
    private String lastName;
    private Long age;

    @Override
    public void execute(ExecutionContext executionContext) throws Exception {
        System.out.println("First Name: " + getFirstName());
        System.out.println("Last Name: " + getLastName());
        System.out.println("Age: " + getAge());
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setAge(Long age) {
        this.age = age;
    }

    public Long getAge() {
        return age;
    }
}

In jPDL, the only modification is to declare the configuration type on the
action element:

<action class="InitializePropertiesActionHandler" config-type="bean">
  <firstName>John</firstName>
  <lastName>Smith</lastName>
  <age>30</age>
</action>

Constructor

This configuration uses a specific constructor to initialize the action's fields.
The constructor must have the following signature:

public class InitializePropertiesActionHandler implements ActionHandler {

    private String firstName;
    private String lastName;
    private Long age;

    public InitializePropertiesActionHandler(String args) {
        // "|" is a regex metacharacter, so it must be escaped in order
        // to split on the literal pipe character
        String[] argsArray = args.split("\\|");
        this.firstName = argsArray[0];
        this.lastName = argsArray[1];
        this.age = Long.parseLong(argsArray[2]);
    }

    @Override
    public void execute(ExecutionContext executionContext) throws Exception {
        System.out.println("First Name: " + firstName);
        System.out.println("Last Name: " + lastName);
        System.out.println("Age: " + age);
    }
}
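The escaping matters: in plain Java, String.split() takes a regular expression, so split("|") splits between every character instead of on the pipe. A quick standalone check, unrelated to any jBPM API:

```java
public class SplitDemo {
    public static void main(String[] args) {
        // Unescaped "|" is regex alternation: the pattern matches the empty
        // string at every position, so the input shatters into single chars.
        String[] wrong = "John|Smith|30".split("|");
        // Escaping the pipe splits on the literal '|' character.
        String[] right = "John|Smith|30".split("\\|");
        System.out.println(wrong.length); // far more than 3 tokens
        System.out.println(right.length); // 3
        System.out.println(right[0] + " " + right[1] + " " + right[2]);
    }
}
```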

In jPDL, the configuration will look like:

<action class="InitializePropertiesActionHandler" config-type="constructor">
  John|Smith|30
</action>


Compatibility

This configuration type will call a method with the following signature:

public class InitializePropertiesActionHandler implements ActionHandler {

    private String firstName;
    private String lastName;
    private Long age;

    public void configure(String args) {
        // escape the pipe: String.split() expects a regular expression
        String[] argsArray = args.split("\\|");
        this.firstName = argsArray[0];
        this.lastName = argsArray[1];
        this.age = Long.parseLong(argsArray[2]);
    }

    @Override
    public void execute(ExecutionContext executionContext) throws Exception {
        System.out.println("First Name: " + firstName);
        System.out.println("Last Name: " + lastName);
        System.out.println("Age: " + age);
    }
}

This method receives a String that you need to parse in order to initialize your
own fields; the class itself is constructed using the default constructor. In jPDL,
this will look like the following (using the configuration-property config type):

<action class="InitializePropertiesActionHandler"
        config-type="configuration-property">
  John|Smith|30
</action>


Summary

In this chapter, we have covered the most advanced nodes with generic functionality
provided by the jPDL language. With these nodes, we can model more complex,
real-world situations. The nodes covered in this chapter were:
•	Fork and join nodes
•	The super state node
•	The process state node
Advanced configuration features were also covered here. We need to know these
features in order to add technical details in the way that best fits the situation we
are trying to model. Features such as human tasks inside the start node, and how
to reuse and configure our action, decision, and assignment handler classes, were
covered as well.
In the next chapter, you will learn to apply the advanced features of the jPDL
language that we learned in this chapter.


Advanced Topics in Practice
In this chapter, the reader will apply the advanced concepts learned in the previous
chapter. We will also cover an extra topic that becomes important in real-world
implementations.
The first part of the chapter covers how to include super state nodes and process
state nodes in our Recruiting Process example.
The second part of this chapter is about asynchronous executions. This feature
enables us to delegate the execution of special nodes to an external service that
guarantees the node execution. We will also see how this service is configured
for standalone applications.
This chapter will cover the following:
•	How to introduce super state nodes into our Recruiting Process example
•	How to introduce a process state node into our Recruiting Process example
•	Asynchronous executions
•	The configuration needed by the execution services
•	A sample project that shows us how asynchronous nodes work

Breaking our recruiting process into phases

The main idea of using super state nodes is to demarcate distinct phases in
our processes. In other words, we group nodes inside super state nodes in
order to have a clear view of the process phases. These phases let us logically
group the activities in our process into highly focused subsets.


In our particular situation, we will use super state nodes to split our Recruiting
Process into four phases. This gives us a higher-level perspective on how
our processes are going.
One of the main advantages of this approach is that we can be notified each time
our process enters or leaves one of these phases. To be more precise, we can attach
any kind of behavior to the "enter" and "leave" events of the super state nodes.
Here, we will group our defined nodes into four super state nodes, and then
notify or log the user each time we enter or leave one of these four phases.
Our resulting process at the higher level will look like:

[Process diagram: four phases in sequence: Initial Interview → Technical Interview → Medical Check Ups → Project Leader Interview / Final Acceptance]
Now you can gain a higher-level view of how our processes are executed by using
this notification or logging mechanism.
Managers and project leaders are generally more interested in seeing how the
process flows from one phase to another than in the low-level details of the
activities that occur inside a phase.
Of course, you can also use super state nodes to measure how much time and
which resources a set of activities consumes. With simple actions hooked to the
superstate-enter and superstate-leave events, you can collect all this information
and use it for statistics, or to measure your processes in an orderly way.
Chapter 11

There are no more tricks to super state nodes. Just take a look at the
/RecruitingProcessWithSuperStates/ project to see how the super states
are introduced in our example process.

[Process diagram: the Candidate Interviews process split into four super states. Initial Interview: Interview Possible Candidate → Initial Interview → Initial Interview Passed? decision. Technical Interview: Technical Interview → Technical Interview Passed? decision. Medical Check Ups: fork → Physical Check Up / Psychological Check Up / Heart Check Up → join → Medical Exams Passed? decision. Last Interview: Project Leader Interview → Final Acceptance? decision → Create WorkStation. Each "No" decision outcome loops back to find a new candidate or leads to the Candidate Discarded end state; the accepted path ends in Candidate Accepted]

As you can see in the process image generated with the Eclipse GPD, our Candidate
Interviews process now has four well-delimited phases. (A limitation of the plugin
is that it cannot print the name of each phase in the diagram.)
If you open this process definition, you will see how each phase is enclosed inside a
super state node. The following block of XML code shows us the first phase, called
the Initial Interview phase.



<super-state name="Initial Interview Phase">
  <event type="superstate-enter">
    <action class="LogSuperStateEnterActionHandler">
      <phaseNumber>One</phaseNumber>
      <phaseName>Initial Interview</phaseName>
    </action>
  </event>
  ...
  <event type="superstate-leave">
    <action class="LogSuperStateLeaveActionHandler">
      <phaseNumber>One</phaseNumber>
      <phaseName>Initial Interview</phaseName>
    </action>
  </event>
</super-state>




In this block of code, you can see how easy it is to hook actions onto the super
state frontiers. In this case, we are just logging to the console using two
configurable actions. The pattern is the same for the rest of the phases: all the
nodes in each phase are surrounded by the <super-state> tags.
To reuse the action code, we simply create two classes: one to log when the
execution enters a super state (LogSuperStateEnterActionHandler), and another
to log when the execution leaves one of our phases
(LogSuperStateLeaveActionHandler).


If you open one of these classes, you will find a very simple, normal action
handler where you can add time-measurement logic. The
LogSuperStateEnterActionHandler in our example just contains the
following code:

public class LogSuperStateEnterActionHandler implements ActionHandler {

    private String phaseNumber;
    private String phaseName;

    public void execute(ExecutionContext executionContext) throws Exception {
        System.out.println("LOG: Entering " + phaseNumber + ": "
                + phaseName + " Phase");
    }
}

Here we are just printing a log to the standard console, but you could also record
information about how much time an entire phase execution takes to complete.
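The timing idea can be sketched in plain Java. Here a HashMap stands in for jBPM's process context (in a real handler you would store the timestamp with executionContext.getContextInstance().setVariable(...)); the class and variable names are illustrative, not from the book's sources:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: record when a phase is entered so that the
// matching "leave" handler can compute the phase duration.
public class PhaseTimer {

    private final Map<String, Long> context = new HashMap<>();

    // would be called from a superstate-enter action
    public void enterPhase(String phaseName, long nowMillis) {
        context.put("phaseEnter:" + phaseName, nowMillis);
    }

    // would be called from a superstate-leave action; returns duration in ms
    public long leavePhase(String phaseName, long nowMillis) {
        long entered = context.get("phaseEnter:" + phaseName);
        return nowMillis - entered;
    }

    public static void main(String[] args) {
        PhaseTimer timer = new PhaseTimer();
        timer.enterPhase("Initial Interview", 1000L);
        System.out.println(timer.leavePhase("Initial Interview", 7500L)); // prints 6500
    }
}
```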

Keeping our process goal focused with process state nodes

The idea of this section is to show how ProcessState works in practice. An equally
important goal is to remember that each of our processes has one clear and
well-defined business goal, which must be accomplished by all the activities
defined in that process. In other words, if you start introducing activities that
don't contribute directly to the process goal, you probably have other goals
mixed into your process definition.
For this reason, and to maintain well-defined goals in our processes, ProcessState
nodes let us include a full, complete process inside another process's activity.


Let's see how it works in our Recruiting Process example:

[Process diagram: the same Candidate Interviews process as before, with the same interviews, decisions, and medical check-up fork/join, but with the Create WorkStation activity modeled as a process state node]

As you can see in this image, the Create WorkStation node (of the Node type) was
replaced by a process state node.


What exactly does this change mean?

First of all, the Create WorkStation activity will now include several well-defined
activities that are not directly related to fulfilling the business goal of finding a
candidate. The business goal of these newly introduced activities will be, as the
node name informs us, to create a workstation for the newly accepted candidate.
We decided that these activities need to be placed in another process and then
linked to the Create WorkStation activity.
As you may remember, the original node was an automated activity. Therefore,
unless we want to change that behavior, all the activities in our subprocess need
to be automated as well. You can include wait states (and human tasks) if you
want, but the process state in the Candidate Interviews process will be blocked
until the subprocess ends.
As we saw in the previous chapter when we discussed process state nodes, there
is no need to modify the child process. In this case, the Create WorkStation process
will not need any changes in order to be embedded inside a process state node.
In this case, our subprocess will look as shown in the following image:

[Process diagram: Create WorkStation (start) → Create System User → Create Email Account → Create Security Credentials → WorkStation Created (end)]

This very simple process is called CreateWorkStationActivities and can be
found inside the resources/jpdl/CreateWorkStationActivities directory of the
/RecruitingProcessWithProcessState/ project.

Sharing information between processes

As you might remember, process information lives in each process instance's
context. These different contexts (one per execution) represent and differentiate
one instance from another.
When jBPM creates a subprocess (using a process state), the subprocess context
starts out empty. In other words, if you want some pieces of information to be
shared between your parent and child processes, you need to declare it explicitly.
The idea is to copy or share only the information that the child process needs.
If you copy the entire context content into the subprocess context, all of this
information will be duplicated inside your database.
We share information between processes using the already familiar variable
mappings. The same rules as those for task instances apply here: if you mark a
variable as "write", it will be copied into the subprocess context, you can modify
it there, and when the subprocess ends it will be copied back to the
original variable.

Create WorkStation binding

Here we will discuss how this process state is configured in our Recruiting
Process example.
It is important for you to know that if you have multiple definitions of the same
process, that is, different versions of it, you can choose which version to use in a
specific binding. If you don't choose one, jBPM will use the latest version of the
process definition.

<process-state name="Create WorkStation">
  <sub-process name="CreateWorkStationActivities" binding="late" />
  <variable name="CANDIDATE_INFO" access="read" />
  <transition to="Candidate Accepted" />
</process-state>
As you can see, you only need to define the name of the already deployed process
and the variable mappings that share information between the two processes.


The <sub-process> tag allows us to create the binding between the process state
node and the subprocess, which will be instantiated when the process execution
reaches the process state node. It is important to note the binding="late" attribute
of this tag: it makes the subprocess definition be resolved at runtime, when the
execution reaches the node, rather than at deployment time. The version attribute
is also accepted by this tag; use it when you need a specific version of the
subprocess to be instantiated. If you don't use the version attribute, the last
(that is, the newest) version will be used.
It is very important to note that the framework is in charge of creating a new
instance of the selected process definition, as well as signaling it to start
the execution.
The <variable> tag inside the process state then allows us to share information
between the parent and the child process. In this case, we are sharing only a
variable called CANDIDATE_INFO, and the subprocess will use it only to read the
information it contains. If the subprocess modifies this variable, the change will
not be reflected in the parent process variable when the subprocess ends.
The following steps are executed when the parent process reaches the process state
called Create WorkStation:
1. Look for the subprocess definition.
2. Instantiate the subprocess.
3. Copy the variables defined in the variable mappings.
4. Start (signal) the subprocess.
5. The subprocess runs until it ends.
6. Copy back the variables marked as "write" in the variable mappings.


Take a look at the code inside the /RecruitingProcessWithProcessState/
project to see how this works. For this example, we keep the same behavior for the
automated tasks inside the process. This means that in the newly defined Create
WorkStation process, you will find just three automated nodes, which only
log what they are doing.

Asynchronous executions

In this second part of the chapter, we will focus on asynchronous executions: when
we need them and how to use them inside the jBPM framework. It is really
important that you get the concept and know when to apply it. Asynchronous
executions and communications can be found in a lot of places, but many
programmers and developers haven't internalized the idea yet.
Let's see how asynchronous executions compare with the synchronous style of
doing things that we have used until now.

Synchronous way of executing things

The following image represents the process execution as we know it:

[Sequence diagram: a Java application thread prints System.currentTimeMillis(), calls pI.signal(), and the engine then runs five automatic nodes in that same thread until a wait state node is reached; only then does signal() return and the thread print a second timestamp]

The following steps are involved in creating the process:
1. We create a new process instance.
2. We start it.
3. Our Java thread waits until the process reaches the next wait state. In other
words, the Java thread that runs our process is blocked until the signal()
method returns, which happens when a wait state is reached.
So far, we haven't seen anything wrong or unusual. If our server crashes in the
middle of a process execution and we are using the persistence service, we know
that any changes made to the process status were committed to the database at the
last wait state that was reached.
In those situations, we only need to start or continue the process from the last
saved status (wait state) in the database.
That's okay, but what if (I hate my boss's "what ifs..."!) we have the
following situation?

[Process diagram: three chained automatic nodes: Calculate all the country taxes → Do a huge tape backup → Send a lot of emails]

Each of these automatic nodes can take more than two hours, making the overall
process execution time over six hours. In situations such as this, if our server
hangs in the last minute of work (after 5 hours and 59 minutes), you will need to
do all the work again. The reason for this really bad news is that there is no wait
state in your process, so the intermediate status is never persisted. In such cases,
we don't have fine-grained transaction demarcation; in other words, we don't have
any way to persist the process status between these automated tasks.


We can propose a really ugly solution to the problem: insert a wait state between
the automatic nodes. But if we do that, we need to signal each of these wait states
manually whenever we see that the automatic work has ended. We can see this
solution as a manual form of validation (human validation) that tells us everything
went fine with our automatic nodes.

[Process diagram: Calculate all the country taxes → Control Taxes calculation (wait state) → Do a huge tape backup → Control Huge Backup Tape (wait state) → Send a lot of emails → Control that all emails are sent (wait state), with an Admin role in charge of signaling each control step]

This can be a valid solution for some situations, but it doesn't solve the
whole problem. This approach will persist the process status after each long
automated activity ends.
Let's suppose that the described process is executed every night (the old and
well-known nightly batch procedures or nightly builds). In such situations, you
will need to stay awake to continue or restart the process if something goes wrong.
There is another ugly thing that may happen here. Our main thread, which starts the
process, will be blocked until these large activities are executed. For this reason,
the signal() method will only return when the procedures that run inside the
Calculate all the country taxes, Do a huge tape backup, and Send a lot of emails
nodes end.
Imagine that you have an end-user application that has a screen, which contains a
button to start this process. If you call the signal() method inside that button code
to start the process, the screen with the button will be blocked until the first activity
finishes. We can say that the user screen can be blocked for about two hours.
The same thing happens in web applications. If your request takes longer than the
maximum time allowed by the web server, the user will get a timeout response
in the application.
Clearly, we need a way to avoid blocking our application threads when we start
a process that has large activities or a lot of automatic activities chained together.
We also need a way to manage our transactions with fine-grained control. And, last
but not least, we need a way to be sure that the activities end successfully
without human control.
Those are the main reasons to start thinking about asynchronous executions.

Chapter 11

The asynchronous approach

In this section, we will see how to solve the following problems, which have been
mentioned before:
•	Blocking calls that take too long
•	The need to relaunch activities, without human interaction, if the server crashes
•	The need for fine-grained transaction demarcation to persist the process status without introducing extra activities that only solve technical problems

How does this asynchronous approach work?

The main idea is to delegate the execution of our nodes to another thread of
execution. In jBPM, this other thread is called JobExecutor, because it offers some
extra features that a simple thread does not.
This approach is considered a service because it is totally decoupled from the
framework and our applications. It needs to be started separately, just like any
other service that we want to run, such as a database, a backup service, a mail
server, a web server, and so on.
Take a look at the following image that describes how these two threads interact:
[Figure: Our Application Thread (1. Create JbpmContext, 2. Get Process Definition,
3. New Process Instance, 4. Process Instance signal(), which returns at the first
async node) and the Executor Service Thread (1. Get Messages to Execute,
2. Execute message), which runs the async and automatic nodes]

As you can see in the previous image, our blocking problems are solved. Now
when we call the signal() method, the process will run until it reaches the
first wait state or the first node marked with the async flag. If the first
node is marked with the async flag, the signal() method will not block and
will return immediately.






Our nodes will be executed asynchronously only if they are marked with the async
flag and if our JobExecutor service is configured and running.
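In jPDL, this flag is just an attribute on the node element. The node and handler names below are illustrative placeholders, not taken from the book's example project:

```xml
<node name="calculate-taxes" async="true">
    <!-- the handler runs in the JobExecutor's thread, not the caller's -->
    <action class="com.example.CalculateTaxesActionHandler" />
    <transition to="next-node" />
</node>
```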

What happens if our server crashes?

In this section, we will analyze the default behavior of our process when our server
(or just the JVM) crashes while we are using the JobExecutor service.
[Figure: a start or wait state node followed by an async node; stage 1 is the
execution before the async node is reached, stage 2 is after the async node has
created its message]

If the server crashes at stage 1 (and the previous node is not marked as an async
node), we must manually call the signal() method again on the previous node to
continue or restart the execution. This means that we lose the activities
executed after the last wait state.
If the server crashes at stage 2 (when the message has been delivered to the
JobExecutor service), the executor service will be in charge of executing the
content of the message delivered to it until it confirms that the execution has
ended successfully. Basically, the JobExecutor service includes a mechanism that
looks at all the messages delivered to it, finds the ones that haven't been
executed yet, and executes them.
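The recovery idea can be sketched with a toy model. This is not jBPM's actual code, just an illustration of the "scan for unfinished jobs and execute them" rule, using hypothetical job names:

```java
import java.util.ArrayList;
import java.util.List;

public class RecoverySketch {
    // A row in a JBPM_JOB-like table: a job name plus a finished flag.
    static class Job {
        String name;
        boolean finished;
        Job(String name) { this.name = name; }
    }

    // Scan the table, execute every job not yet marked as finished,
    // and mark it finished afterwards (as the executor's loop does).
    static List<String> scanAndExecute(List<Job> table) {
        List<String> executed = new ArrayList<>();
        for (Job job : table) {
            if (!job.finished) {
                executed.add(job.name); // "execute" the node's work
                job.finished = true;    // then mark the row as done
            }
        }
        return executed;
    }

    public static void main(String[] args) {
        List<Job> table = new ArrayList<>();
        Job taxes = new Job("calculateTaxes");
        taxes.finished = true;            // completed before the crash
        table.add(taxes);
        table.add(new Job("tapeBackup")); // crashed before completion
        // After a restart, only the unfinished job is executed again.
        System.out.println(scanAndExecute(table)); // prints [tapeBackup]
    }
}
```

A job that crashed mid-execution is still unfinished in the table, so the next scan simply picks it up again.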


This behavior can be represented as follows:
[Figure: 1. the async node creates a message in the database; the Executor
Service then 2. takes an unfinished message, 3. executes it, and 4. marks the
message as finished]

When the message is retrieved from the database, it is executed to continue the
execution flow. It is important to note that if a node is marked as async and the
nodes following it are not marked with the async flag and are not wait states, the
execution started by the JobExecutor service will continue until the process reaches
a wait state or another async node. In other words, we need to know that when
the JobExecutor service takes a message and starts its execution, the execution will
continue in the JobExecutor service thread until it reaches a wait state or until a new
async node is reached.
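This continuation rule can be modeled with a small sketch. Again, this is a toy model of the rule just described, not jBPM's implementation:

```java
import java.util.List;

public class ContinuationSketch {
    enum Kind { ASYNC, AUTO, WAIT }

    // Given a job for the async node at 'start', return the index of the
    // node where this executor run stops: execution flows through plain
    // automatic nodes and halts at the first wait state or async node
    // (for an async node, a new message would be created there).
    static int runFrom(List<Kind> nodes, int start) {
        int i = start + 1; // the async node itself is executed by this job
        while (i < nodes.size() && nodes.get(i) == Kind.AUTO) {
            i++;
        }
        return i;
    }

    public static void main(String[] args) {
        List<Kind> process =
            List.of(Kind.ASYNC, Kind.AUTO, Kind.AUTO, Kind.WAIT);
        // The executor runs the async node and both automatic nodes,
        // stopping at the wait state (index 3).
        System.out.println(runFrom(process, 0)); // prints 3
    }
}
```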
It is normal to see the following behavior if we use asynchronous executions in
our processes:
[Figure: the executor takes an unfinished message for an async node from the
database and executes the following automatic nodes until a wait state or the
next async node is reached, marking the message as finished; you then need to
signal the wait state to continue the process until the end]


In such scenarios, when an asynchronous node is found, the JobExecutor service
continues the execution until a wait state node is reached. This classic, but not
intuitive, behavior needs to be clearly understood by newcomers.
If you think about it, you will find that your Java application thread is free to do
other things after the message is sent to the JobExecutor service. It's important
to note that the JobExecutor service is not responsible for continuing the process
execution when it completes processing the message. Your application is
responsible for querying the process status and deciding when to continue, in this
case from the last wait state.
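This polling responsibility can be sketched as follows. The status values are simulated here; in real code they would come from the getRootToken().getNode().getName() query shown later in this chapter:

```java
import java.util.Iterator;
import java.util.List;

public class StatusPolling {
    // Consume successive status reads until the process reports that it
    // is sitting in a wait state; only then is it safe to signal it.
    static String pollUntilWaitState(Iterator<String> statusReads) {
        String node = null;
        while (statusReads.hasNext()) {
            node = statusReads.next();
            if (node.endsWith("wait state")) {
                return node; // the executor finished; signal() can be called
            }
            // otherwise the application is free to do other work and
            // query the status again later
        }
        return node;
    }

    public static void main(String[] args) {
        // Two polls still see the async node in progress; the third one
        // finds the persisted wait state.
        List<String> reads = List.of("1-Async", "1-Async", "2-wait state");
        System.out.println(pollUntilWaitState(reads.iterator()));
        // prints 2-wait state
    }
}
```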

Configuring and starting the asynchronous
JobExecutor service
So far we have seen how it works, but we also need to see how it is
implemented and configured. In this section, we will configure and start the
JobExecutor service for a standalone application.

This JobExecutor service will store all the messages that it receives in a database
table to track the execution of each of them. This table is periodically queried
for messages that are not completed yet. When such a message is found, the
service takes it from the table and executes it until it is marked as completed.
The parameters used by the framework to configure the messaging DB service
are located inside the jbpm.cfg.xml file. Here we can see what this block of
configuration looks like:

<bean name="jbpm.job.executor" class="org.jbpm.job.executor.JobExecutor">
  <field name="jbpmConfiguration"><ref bean="jbpmConfiguration" /></field>
  <field name="name"><string value="JbpmJobExecutor" /></field>
  <field name="nbrOfThreads"><int value="1" /></field>
  <field name="idleInterval"><int value="5000" /></field>
  <field name="maxIdleInterval"><int value="3600000" /></field>
  <field name="historyMaxSize"><int value="20" /></field>
  <field name="maxLockTime"><int value="600000" /></field>
  <field name="lockMonitorInterval"><int value="60000" /></field>
  <field name="lockBufferTime"><int value="5000" /></field>
</bean>


With this configuration and the following line in the services configuration block:

<service name="message" factory="org.jbpm.msg.db.DbMessageServiceFactory" />

we are configuring the framework to use the built-in DbMessageService, which
will use just one thread to query the JBPM_JOB table looking for new
messages to execute.
The number of threads used by the JobExecutor service can be modified through
the nbrOfThreads property, and the time between two calls to the database
through the idleInterval property, which is set to five seconds by default.
Remember that if you decrease this value, for example to one second or less, you
will generate a lot of SQL SELECT queries just to check whether a new message
has arrived for the JobExecutor service.
This service works through the interaction of the following components:
•	The database table called JBPM_JOB
•	A cron-like service that queries the database table looking for messages at regular time intervals
•	The asynchronous flag (async="true") that marks a node that needs to be executed asynchronously

We have seen the database table used by this service and also the configuration of
the cron-like procedure that will query the mentioned table. Now we need to see
what happens inside a node marked with the async flag and how the framework
reacts during the execution stage. We will also see a full interaction and an example
of how this works.
If you open the Node.java file that contains the Node class, you'll find the
following lines of code inside the enter() method:
// execute the node
if (isAsync)
{
  ExecuteNodeJob job = createAsyncContinuationJob(token);
  MessageService messageService = (MessageService) Services
      .getCurrentService(Services.SERVICENAME_MESSAGE);
  messageService.send(job);
  token.lock(job.toString());
}


You'll also find a very short method called createAsyncContinuationJob(Token token):
protected ExecuteNodeJob createAsyncContinuationJob(Token token)
{
  ExecuteNodeJob job = new ExecuteNodeJob(token);
  job.setNode(this);
  job.setDueDate(new Date());
  job.setExclusive(isAsyncExclusive);
  return job;
}

These lines indicate that a message needs to be created when the node that the
framework is executing is marked with the async flag. This newly-created
message is then delivered to the already-configured messaging service.
As you can see, the ExecuteNodeJob class represents the job that the
JobExecutor service will take and execute. As this class needs to be persisted,
it is mapped as a Hibernate entity using the ExecuteNodeJob.hbm.xml file.
It's also important to note that ExecuteNodeJob inherits from the abstract class
called Job, which is also the superclass of another class called ExecuteActionJob.
ExecuteActionJob is in charge of executing only those actions that are marked
with the async flag. This enables us to have a node that contains multiple actions
when we need only some of them to be executed in an asynchronous way. In other
words, with this feature, we can also mark our actions (represented by classes that
implement the ActionHandler interface) to be executed in an asynchronous way.
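In jPDL, this looks like marking the action rather than the node; the node, action, and class names here are illustrative, not taken from the book's project:

```xml
<node name="notify">
    <!-- only this action runs asynchronously, via an ExecuteActionJob -->
    <action name="sendMail" async="true"
            class="com.example.SendMailActionHandler" />
    <transition to="next-node" />
</node>
```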
You may ask, why can't I see in the code where the message is persisted in the
database using Hibernate? This is because we can plug in different types of
services, not just ones based on a database table approach. We will see how a JMS
approach is configured in Chapter 12, Going Enterprise. In the Node class, the
framework asks for the already-configured messaging service with the following line:
MessageService messageService = (MessageService) Services
    .getCurrentService(Services.SERVICENAME_MESSAGE);

This line will return the configured messaging service. In this case, it will return
DbMessageService.
The last important thing about configurations and the JobExecutor service is how to
start the JobExecutor service itself. It's not a minor thing, because we need to have
this service running before our processes are executed.


Basically, in our example, we have another class to start the service before running
our tests. This class will just contain the following lines:
public class StartJobExecutor {
  public static void main(String[] args) {
    JbpmConfiguration config =
        JbpmConfiguration.parseResource("config/jbpm.cfg.xml");
    config.startJobExecutor();
  }
}

As you can see, the idea is just to get the configuration information and then start the
JobExecutor service.

Different situations where asynchronous
nodes can be placed

In this section, we will analyze the project delivered with this chapter called
/SimpleAsyncExecutionExample/ in order to discuss how it works.
The process definition for this example will appear as follows:
[Figure: an eight-node process: 1-Async, 2-Wait State, 3-Async, 4-Async,
5-Wait State, 6-Async, 7-Auto, 8-Auto]
The process has a lot of chained nodes to show the different situations, which you
can find when you mix asynchronous node executions with automatic and wait
state behaviors.


Basically, the following behavior is expected during the execution of the described
process. The interaction between the application thread and the JobExecutor
service thread is described step by step. The configuration used here is
DbMessageService.

Application thread: Start the process instance by calling the signal() method.
As the first node is marked with the async flag, the node execution just creates
a message that will be taken by the executor service. The process status is
persisted and the signal() method returns control to the application thread.
(Free time: no waiting, no blocking of the main thread.)

JobExecutor service thread: This service periodically reviews the messages stored
in the table called JBPM_JOB. When it finds the message created by the process
execution, the service takes it and executes it. When the node content has been
executed, the execution takes the transition to node 2. As node 2 is a wait state,
the process status is persisted in the database and the message is marked
as completed.

Application thread: In our application, we can query the process status to know
whether the process has reached node 2. When this query returns that the
process is waiting in node 2, we can signal the node to continue the execution.
Now the execution takes the transition to node 3. Here, a new message is created
and delivered to the messaging service. (Free time: no waiting, no blocking of
the main thread.)

JobExecutor service thread: Once again, the message is taken by the executor
service and executed by its thread. In this case, node 4 is also marked with the
async flag, so another message is created; the status after creating the new
message is stored in the database, marking the node 3 execution as finished
as well.

Application thread: (Free time: no waiting, no blocking of the main thread.)

JobExecutor service thread: The node 4 message is retrieved and executed until
node 5 is reached; the status is persisted in the database, marking the node 4
message as finished as well.


Application thread: When we decide to signal the wait state (node 5), the process
will reach the last asynchronous node, creating the last message for this
execution. Here the signal() method returns control to the application thread
after persisting the process status to the database. (Free time: no waiting,
no blocking of the main thread.)

JobExecutor service thread: The executor service takes the message and executes
nodes 6, 7, and 8. When it reaches the last node, it persists the process status
as ended, marking the message execution as successfully finished.

As you can see in this example, if your asynchronous node includes large and
time-consuming automatic tasks, the JobExecutor service thread will be in charge
of taking and executing that job.
If you open the process definition called SimpleAsyncExecution located inside the
resources/jpdl/SimpleAsyncExecution directory, you will find the following
modeled process definition.
[Figure: the modeled process definition: START → 1-Async → 2-wait state →
3-Async → 4-Async → 5-wait state → 6-Async → 7-Auto → 8-Auto → END]

The idea behind this process example is to show you how the asynchronous
nodes are executed by the JobExecutor service. I encourage you to debug this
process execution and also to see how the messages are inserted as rows in the
JBPM_JOB table.
Remember that before running the test called SimpleAsyncExecutionTestCase,
you need to run the StartJobExecutor.
In this test, you will see the behavior explained in the previous sections where
you will need to query the current status of the process in order to know if the
JobExecutor service has completed its asynchronous jobs.


Remember that the default configuration is set for querying the JBPM_JOB
table every five seconds. This is important for us because our asynchronous
nodes have activities that last for ten seconds and our automatic (auto)
nodes have activities that last for one second. You can see what these
activities do inside the LargeAutomaticActivityActionHandler and
ShortAutomaticActivityActionHandler classes.
Let's analyze the code inside the TestCase class:
ProcessInstance simpleAsyncExecutionPI =
    context.newProcessInstance("SimpleAsyncExecution");
simpleAsyncExecutionPI.signal();
Assert.assertEquals("1- Async",
    simpleAsyncExecutionPI.getRootToken().getNode().getName());
processInstanceID = simpleAsyncExecutionPI.getId();
context.close();
// Wait for five seconds
try {
  Thread.sleep(5000);
} catch (InterruptedException ex) {
  // Log
}
// End waiting
context = config.createJbpmContext();
simpleAsyncExecutionPI = context.getProcessInstance(processInstanceID);
// Ask again if the process is stopped in the first activity
Assert.assertEquals("1- Async",
    simpleAsyncExecutionPI.getRootToken().getNode().getName());
context.close();
// Wait for ten seconds
try {
  Thread.sleep(10000);
} catch (InterruptedException ex) {
  // Log
}
// End waiting
context = config.createJbpmContext();
// Now the process must be in the 2- wait state node
simpleAsyncExecutionPI = context.getProcessInstance(processInstanceID);
// Check if the process is now in the second activity
Assert.assertEquals("2- wait state",
    simpleAsyncExecutionPI.getRootToken().getNode().getName());
// If it is, signal it to continue
simpleAsyncExecutionPI.signal();
context.close();

In this code, we can see that the following steps have been executed:
1. Creating the process and starting it (by calling the signal() method).
2. The signal() method returns instantaneously, which means that it creates a
message and delegates the execution to the JobExecutor service. You can see
that a new row is inserted in the database (JBPM_JOB table).
3. Then we decide to wait for five seconds to see if the JobExecutor service
finishes its job. It is important to note that we can do something else in that
time or we can query the process status all the time.
4. After waiting for five seconds, we query the process status to see if we
are still stopped at the 1- Async node, which will be true because our
asynchronous node will last for ten seconds to finish, plus the time between
the last query to the JBPM_JOB table and the next. In the worst case scenario,
the execution of our first asynchronous node will take 15 seconds.
5. That is why we make it sleep for another ten seconds to be sure that our
activity has ended.
6. Then, when we ask if the process has stopped in the 2- wait state node—if
the assertion is true, we can signal that wait state to continue the execution.
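The worst-case arithmetic in step 4 can be written down explicitly: the job can sit in the JBPM_JOB table for up to one full idleInterval before being picked up, and only then does the ten-second activity run. The values below mirror the example's numbers:

```java
public class WorstCaseLatency {
    // Worst case: the job waits a full polling interval before pickup,
    // then the activity itself runs to completion.
    static long worstCaseMillis(long idleIntervalMillis, long activityMillis) {
        return idleIntervalMillis + activityMillis;
    }

    public static void main(String[] args) {
        // 5 s polling interval + 10 s activity = 15 s worst case
        System.out.println(worstCaseMillis(5000, 10000)); // prints 15000
    }
}
```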
Take a look at the rest of the test case, because it reflects the other situations
explained in the previously described interaction between the application thread
and the JobExecutor service.

Summary

In this chapter, we have discussed several topics. The main idea behind this chapter
is to understand some advanced features, which are provided in the jPDL language
and the capabilities provided by the framework as a whole.
We saw some examples of how to use super states and process states in our recruiting
example process and jumped directly to the topic of asynchronous executions.
With this chapter, we have covered the most meaningful features of the framework,
taking care to cover the conceptual and practical aspects of all these topics.
If you are planning a jBPM implementation or if you are already a member of a
team responsible for an implementation, you must understand all the topics
discussed here in depth.


One of the most common ways of learning a new technology is the
trial-and-error approach. I don't recommend that way of doing things. The
way of learning that I propose for things like the jBPM framework, and all other
open source projects, is to first learn about the background behind the
new technology.
Sometimes, you don't have time to sit and learn about theoretical topics in order to
understand how to use a new tool and that is the main reason why I wrote this book.
It is focused on the technical background of the framework and how its technical
implementation reflects that background in the project's source code.
If you grasp that idea, a minimal theoretical background and a little research into
how the framework/tool is implemented will let you adopt any kind of
new technology.
In the next chapter, the last one, we will see some features that the framework
includes for enterprise environments. These features for larger scenarios will help
us to understand how and when we need to start thinking about running jBPM in a
Java EE-compliant application server.


Going Enterprise
In this chapter the reader will learn about some topics introduced in the
jbpm-enterprise module. This enterprise module is aimed at solving
some of the most common problems that appear in large, clustered, and
distributed environments.
The topics that this chapter will cover are:
•	Configurations needed to run in an EE environment
•	CommandServiceBean
•	JobExecutor service in EE environments using JMS
•	Timers and reminders for standalone and enterprise environments

This chapter is aimed at people who need to know about how jBPM behaves in
Java EE environments. If you aren't planning a Java EE implementation, this
chapter will show you some of the advantages of designing and implementing
jBPM in such environments.

jBPM configurations for Java EE
environments

Here, when we talk about Java EE environments, we are talking about
Java EE-compliant application servers. Because JBoss has its own application
server, which is Java EE 5-compliant, we will use it as an example. However,
you can choose any application server to run jBPM; jBPM is not tied to
JBoss Application Server in any way.
It's important for you to know that when we create enterprise applications, our
application will run with a lot of services that the application server provides us
out of the box.


One of the most commonly used services that the application server provides is
transaction management. This service is provided through the Java Transaction
API (JTA) specification. We can use it to delegate transaction demarcation from
the user to the application server's Enterprise JavaBeans (EJB) container. In other
words, we are changing from the User Managed Transaction (UMT) approach
to the Container Managed Transaction (CMT) one.
This will change the whole way that we interact with the framework. If you review
Chapter 6, Persistence, where we talk about the UMT approach, you will see that we
need to demarcate our application code when a transaction begins with:
JbpmContext context = config.createJbpmContext();

And when we finish the interaction, we need to close this context to commit the
current transaction changes. We use the following line to do that:
context.close();

We also need to remember that if an exception arises between the
createJbpmContext() and close() methods, the transaction will be
automatically rolled back. This is the correct behavior, because an error has occurred.
In such cases, all the modifications made to the process, as well as the information
that the process maintains inside it, are discarded. This way we know that the
database will store only data that represents a correct status.
Now, in the Java EE counterpart, all the methods inside our EJBs will run inside a
transaction by default. We don't need to say anything explicit about transactions;
all our methods run in a transaction-aware context by default.
[Figure: inside the EJB container, myMethod() runs enclosed in a JTA transaction
(create tx before entry, commit tx after completion); the business logic reaches
the database through a DataSource and a JDBC connector]

In the previous figure, we can see how our myMethod() method runs inside our
defined EJB, enclosed in a JTA transaction that begins before the method is
entered and ends right after it completes.
In other words, in this environment, all the code inside myMethod() (the
Business Logic cloud in the figure) that modifies transactional resources, such as
relational databases, a transactional mail service, file copies, tape backups,
and so on, won't be committed automatically after each line of execution. All the
changes must wait until the method ends successfully, without any error, before
they can be committed. If all the operations complete successfully, the
transaction in each resource commits its modifications; if not, all the
modifications are rolled back together.
To achieve this, our resources must support two-phase commit using an
XA-compatible driver. This is a strong requirement that enables the application
server to handle distributed transactions across multiple resources.
In our case, we have been using just one relational database (MySQL), so we need to
configure it as a data source inside the application server. By doing this, we delegate
the administration of database connections to the application server, which will use
a connection pool to decide dynamically how many connections should be kept
open for our applications. Until now, we just used a direct JDBC connection to our
database, configured in the hibernate.cfg.xml file with the connection
parameters. Now we need to configure a new data source inside JBoss and then
configure jBPM to look up and use that data source. We need to create a new data
source, but we can base it on one of the data source descriptors suggested in the
/config/ directory of the jBPM binary distribution.

JBoss Application Server data source
configurations

Configuring a data source in JBoss Application Server is an easy task. Basically, we
will define a data source descriptor (an XML file) to describe the characteristics of
our particular data source. If we are talking about a relational database, these are
the common properties, which need to be configured in jbpm-mysql-ds.xml:

<datasources>
  <xa-datasource>
    <jndi-name>JbpmDS</jndi-name>
    <xa-datasource-class>
      com.mysql.jdbc.jdbc2.optional.MysqlXADataSource
    </xa-datasource-class>
    <xa-datasource-property name="ServerName">
      ${jdbc.mysql.server}
    </xa-datasource-property>
    <xa-datasource-property name="PortNumber">
      ${jdbc.mysql.port}
    </xa-datasource-property>
    <xa-datasource-property name="DatabaseName">
      ${jdbc.mysql.database}
    </xa-datasource-property>
    <user-name>${jdbc.mysql.username}</user-name>
    <password>${jdbc.mysql.password}</password>
    <transaction-isolation>TRANSACTION_READ_COMMITTED</transaction-isolation>
    <exception-sorter-class-name>
      com.mysql.jdbc.integration.jboss.ExtendedMysqlExceptionSorter
    </exception-sorter-class-name>
    <valid-connection-checker-class-name>
      com.mysql.jdbc.integration.jboss.MysqlValidConnectionChecker
    </valid-connection-checker-class-name>
    <metadata>
      <type-mapping>mySQL</type-mapping>
    </metadata>
  </xa-datasource>
</datasources>
You will need to replace all the ${...} values with your corresponding environment
information. With this file ready, we can deploy the data source inside the
application server. Pay attention to the name of the file: it's important to use the
format *-ds.xml, because JBoss detects that pattern and automatically deploys any
data source descriptor located inside the /deploy/ directory. The application server
will then publish it in the JNDI tree. This allows us to reference the data source
using just a name, decoupling the entire database configuration from our application.


Then we need to change our hibernate.cfg.xml file. Where we usually specify
a direct JDBC connection to our database, we will now use the newly-deployed
data source:

<hibernate-configuration>
  <session-factory>
    <property name="hibernate.dialect">
      org.hibernate.dialect.MySQL5InnoDBDialect
    </property>
    <property name="hibernate.connection.datasource">
      java:JbpmDS
    </property>
    ...
  </session-factory>
</hibernate-configuration>