From EPN to BPMN

The book "Event Processing in Action" contains the "Fast Flowers Delivery" use case. Below I have tried to reproduce this use case in BPMN to see the internal behaviour of each participant.

I think I will have to switch to BPMN 2.0 to handle exceptions better.



Contribution to: ACM: Feature or Paradigm

This is a contribution to a very interesting discussion "ACM: Feature or Paradigm" at http://social-biz.org/2011/01/22/acm-feature-or-paradigm/ and http://mainthing.ru/item/401/

Some of Keith’s arguments do not correspond to my experience with collaborative and process-based applications. Note, please, that those applications were designed for clients (including international ones) based in Switzerland; similar applications for US-based clients might need to be different.

First of all, as usual, it is necessary to emphasize that BPM is a process-oriented management methodology, while BPMS and ACM are technologies. So, it is not correct to compare BPM vs. ACM. I expressed my point of view about their relationship in http://improving-bpm-systems.blogspot.com/2010/12/illustrations-for-bpm-acm-case.html

<quote>BPM needs process architecture, ACM has no such need </quote>

The work of a social worker is based on existing rules, procedures and laws. Some of them are expressed as processes. So, the process architecture is necessary; it must exist even if it is not visible (similar to the 90% of an iceberg below the water), and preferably it should be explicit.

For example, an application for automating the “Office de faillite” (a governmental structure that implements bankruptcies) is a mixture of ACM features and a classic BPMS, because the bankruptcy process template is defined in the law, with many slight variations. Although each bankruptcy case (process instance) is different, all cases use the same process architecture, which is proof that each case follows the law.

<quote>In BPM the person who designs the process needs to be a data architect, but in ACM these are different roles. The person who designs the “process” does not need to be a data architect. </quote>

Although many BPMS vendors provide data modelling capabilities, a BPMS-based implementation of a process-managed application does not always force the process architect to be a data architect. Some process-oriented applications just move existing data from one place to another or collect process metrics.

<quote>BPM needs strong capabilities for integration, but in ACM there is little or no need for field-level integration. ACM can work well with documents, reports, and links to other application user interface.</quote>

At the beginning, the users of collaborative applications are very happy with just access to documents, reports and links. Then those users ask to be provided with more case-related information, which is usually “mastered” in central resources. For example, a Word document should contain several attributes extracted from SAP.

The above-mentioned “Office de faillite” application is integrated with a corporate finance system, a corporate electronic publishing system, a corporate document management system, a country-wide postal-address system, etc.

In conclusion: considering that “knowledge workers” and “workers who are doing repeatable work” work TOGETHER, the capabilities of both ACM and BPMS should be combined. As the first step towards this synergy, it is necessary to provide commonly agreed reference models and reference architectures (independent of the tools).



Explicit event processing agents in BPMN?

Sometimes we need to process, within one process instance, a group of events collected from different instances. For example, incoming orders are collected and then treated all together each hour. I call this pattern CPP:

Anatoly Belychook uses the “interprocess communication via data” pattern (see http://mainthing.ru/item/332/) – something like this:

One of the building blocks of an Event Processing Network (EPN) presented in “Event Processing in Action” (see http://epthinking.blogspot.com/) is the event processing agent. It can, in particular, aggregate many events from a stream. Use of such an agent (between pools, of course) looks like this:

I find it rather explicit. Maybe the next version of BPMN should consider some building blocks of EPN?
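The aggregation behaviour of such an event processing agent can also be sketched in ordinary code. Below is a minimal sketch (the names `Order` and `AggregatingAgent` are my illustrative assumptions, not taken from the book): events arrive from many process instances, are buffered, and are emitted as one batch to a downstream consumer when a timer closes the window (e.g. once per hour).

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Order:
    order_id: str
    amount: float

@dataclass
class AggregatingAgent:
    """A minimal event processing agent: buffers incoming events and
    emits them all together when the time window closes."""
    on_batch: Callable[[List[Order]], None]   # downstream consumer (e.g. another pool)
    buffer: List[Order] = field(default_factory=list)

    def receive(self, event: Order) -> None:
        # Events may arrive from many different process instances.
        self.buffer.append(event)

    def window_closed(self) -> None:
        # Called by a timer, e.g. once per hour.
        if self.buffer:
            batch, self.buffer = self.buffer, []
            self.on_batch(batch)

# Usage: collect individual orders, then treat them all together.
batches: List[List[Order]] = []
agent = AggregatingAgent(on_batch=batches.append)
agent.receive(Order("A-1", 10.0))
agent.receive(Order("A-2", 25.0))
agent.window_closed()   # one batch of two orders is handed downstream
```

In BPMN terms, `receive` corresponds to catching message events from different instances and `window_closed` to a timer event on the aggregating pool.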



Relationship between EA, PMO, an SDLC methodology and ITIL

A continuation of the post “Relationships between EA and PMO”.

For the moment, I will not discuss the “local” SDLC methodology; assume that it translates (as a project) a request for a business solution into a set of interdependent services. Some of those services are new; some are new versions of existing services. The main steps of such a translation are:
  • Architect a solution as a set of services (BPM, SOA, etc. are used for quick prototyping to understand WHY and WHAT for each service as well as the effect on the whole enterprise environment)
  • Design each service (supply HOW for that service – buy, build, rent, outsource)
  • Deploy each service (and, of course, provide ruthless monitoring for each service before deployment)
So, it is necessary to guarantee that newly created services or versions of services will be good ITIL citizens. For this reason, many ITIL processes have to be “invoked” during projects, as shown in the figure below.



Automation and Intelligent Systems

(copied 2001-08-02 from the Jeff Ellis page on Geocities.com and reproduced here because the original URL is no longer available)

Pitfalls of Automation

Development efforts for automated systems are usually justified in part by their presumed impacts on human performance: a reduced workload, enhanced productivity, and fewer errors. Automation has generally failed to live up to these expectations, due not to the automation itself but to its inappropriate application and design [1]. Studies of failures in automation reveal an "epidemic of clumsy use of technology" [2] which creates new cognitive demands on the operator rather than freeing his/her time, diverts the user's attention to the interface rather than focusing the user's attention on the job, creates the potential for new kinds of errors and "automation surprises" rather than reducing errors, and creates new demands and more difficult knowledge and skill requirements rather than reducing the operator's knowledge requirements.

A key finding [3] of these studies is that "strong, silent" systems, i.e. those implemented as black boxes, are difficult to direct and result in a system with only two modes: fully automatic and fully manual. In such cases the human operator is apt to interrupt the automated agent and take over the problem entirely if the agent is not solving the problem adequately. This situation results from the "substitution myth" [3] whereby developers often assume that adding automation is a simple substitution of a machine activity for a human activity. Instead, partly because activities are highly interdependent or coupled, adding or expanding the machine's role changes the cooperation needed and the role of the human operator. Table 1 summarizes the apparent benefits of automation in contrast to empirical observations of operational personnel [3].


Table 1.  Expected benefits of automation versus actual experience [3]

  • Expected: Better results are obtained from "substitution" of machine activity for human activity.
    Actual: Practices are transformed; the roles of people change.
  • Expected: Work is offloaded from the human to the machine.
    Actual: Creates new kinds of cognitive work for the human, often at the wrong times.
  • Expected: Operator's attention will be focused on the correct answer.
    Actual: Creates more threads to track; makes it harder for operators to remain aware of and integrate all of the activities and changes around them.
  • Expected: Less operator knowledge is required.
    Actual: New knowledge and skill demands are imposed on the operator.
  • Expected: Errors are reduced.
    Actual: New problems and potentials for error are created.

Billings [4] enumerates several fundamental attributes which are common to occurrences of failure in automation and human/automation interaction:

  • Automation Complexity:  The details of machine functions may appear quite simple because only a partial or metaphorical explanation is provided, and the true complexity of the operation remains hidden from the user.
  • Coupling Among Machine Elements:  Internal relationships and
    interdependencies between or among machine functions are not made obvious to
    the user, resulting in unexpected and apparently erroneous behavior.
  • Machine Autonomy:  Real, self-initiated machine activity
    requires the human operator to determine if the perceived behavior is
    appropriate or represents a failure.
  • Opacity (Inadequate Feedback):  The machine does not
    communicate what it is doing or why it is doing it, or communicates
    poorly or not at all.
  • Peripheralization:  Complex machines tend to distance
    operators from the details of an operation, and if the machines are reliable,
    operators will over time become less concerned with and aware of the details
    of the operation.
  • Brittleness:  The system performs well while it is within the
    envelope of its design but behaves unpredictably otherwise.
  • Clumsiness:  The operator has little to do when things are
    going well, but the computer demands more activity from the operator at times
    when the workload is already high.
  • Surprises:  The machine behaves in an unexpected, apparently
    erroneous manner.

Human-Centered Design Principles for Automation

Lessons learned from these past failures have led to an understanding of the importance of human-centered design principles in the development of intelligent and automated systems.  Essentially, the human-centered design approach to automation seeks to keep the operator in command, requiring that the operator be informed and involved with monitoring the automation.  An intelligent system must be inspectable (i.e. provide indications of what it is doing and why), predictable (i.e. support the operator’s need to anticipate the behavior of the system), repairable (i.e. allows the operator to assume control and fix the system), intelligent (i.e. learn from operator overrides), maintainable and extensible [5].  The interface design must provide obvious opportunities for action on the part of the operator and must be tailored around the operator’s activities at all levels of interaction.  Perhaps the most significant finding is that the human operator must likewise be inspectable and predictable to the intelligent system.  This requires that a formal model of the human operator’s tasks and actions be created.  This model is used to design the interaction environment and can also be exploited by the intelligent system to perform operator intent inferencing [6].

Billings [4] offers the following “first principles” as essential elements of an over-arching philosophy for human-centered systems:

  • Premise:  Humans are responsible for outcomes in human-machine
    systems.
  • Axiom:  Humans must be in command of human-machine
    systems.  This is axiomatic if one accepts the premise.  The axiom
    implies certain corollaries which are consistent with past experiences with
    human-machine systems.
  • Corollary: Humans must be actively involved in the processes
    undertaken by these systems.
  • Corollary: Humans must be adequately informed of human-machine
    system processes.
  • Corollary: Humans must be able to monitor the machine components of
    the system.
  • Corollary: The activities of the machines must therefore be
    predictable.
  • Corollary: The machines must also be able to monitor the
    performance of the humans.
  • Corollary: Each intelligent agent in a human-machine system must
    have knowledge of the intent of the other agents.
The more recent literature on human-centered design provides a set of design principles and guidelines as well as proposed methods for modeling modes of human interaction and using these models to design user-interfaces.  Much of the groundbreaking work in this area has been performed at Georgia Tech’s Center for Human-Machine Systems Research.  NASA has also established a Space Human Factors Program which sponsors both solicited and unsolicited research proposals on a yearly basis.  


[1] Thurman, D. A., Brann, D. M., and Mitchell, C. M., “An Architecture to Support Incremental Automation of Complex Systems”, Proceedings of the 1997 IEEE International Conference on Systems, Man, and Cybernetics, Orlando, FL (to appear).

[2] Woods, D. D., Patterson, E. S., Corban, J. M., and Watts, J. C., “Bridging the Gap Between User-Centered Intentions and Actual Design Practice”, on web site http://csel.eng.ohio-state.edu:8080/~csel/BridgeGapUserCtrInt.html.

[3] Woods, D. D., “Human-Centered Software Agents: Lessons from Clumsy Automation”, position paper for National Science Foundation Workshop on Human-Centered Systems: Information, Interactivity, and Intelligence, Arlington VA, February 1997.

[4] Billings, C. E., “Issues Concerning Human-Centered Intelligent Systems: What’s ‘human-centered’ and what’s the problem?”, plenary talk at National Science Foundation Workshop on Human-Centered Systems: Information, Interactivity, and Intelligence, Arlington VA, February 1997.

[5] Brann, D. M., Thurman, D. A., and Mitchell, C. M., “Human Interaction with Lights-out Automation: A Field Study”, Proceedings of the 1996 Symposium on Human Interaction with Complex Systems, Dayton OH, August 1996, pp. 276-283.

[6] Callantine, T., “Intent Inferencing”, on web site http://www.isye.gatech.edu/chmsr/Todd_Callantine/CHII.html.

[7] Mitchell, C. M., “Models for the Design of Human Interaction with Complex Dynamic Systems”, Proceedings of the Cognitive Engineering Systems in Process Control, November 1996.

[8] Thurman, D. A. and Mitchell, C. M., “A Design Methodology for Operator Displays of Highly Automated Supervisory Control Systems”, Proceedings of the 6th Annual IFAC/IFORS/IFIP/SEA Symposium on Man-Machine Systems, Boston MA, July 1995.

[9] Thurman, D. A. and Mitchell, C. M., “A Methodology for the Design of Interactive Monitoring Interfaces”, Proceedings of the 1994 IEEE International Conference on Systems, Man, and Cybernetics, San Antonio TX, October 1994, pp. 1739-1744.

[10]  “Field Guide for Designing Human Interaction with Intelligent Systems”, Draft, on web site http://tommy.jsc.nasa.gov/~clare/methods/methods.html, December 12, 1995.


Relationships between EA and PMO

In my current position as a chief enterprise architect, I have to provide clear guidance on how EA, PMO, PMBOK, BPM, SOA, ECM, SDLC, CMMI, ITIL, etc. should work together. This post is about EA and PMO.

EA is a management tool that helps the enterprise realise its vision by providing guidance and practical help for the design and evolution of the Bank via enterprise models, as well as a coherent and proven set of principles, recommendations, and practices for working with those models.

EA has three explicit parts: model, management and governance. The model is a set of enterprise artefacts and the relationships between them. The governance part is used for strategic improvements and self-tuning of the enterprise environment: it defines the model of a target environment and the means (a road map, presented to the business as the strategy) to implement the necessary changes from the baseline model to the target model. The management part supervises those changes, which are carried out in various internal projects. The latter are controlled by the PMO.