Architecting application architecture #apparch (inspired by #microservices)

Base references

A few interesting recent discussions about microservices have inspired me to write this blogpost, in addition to the following recent blogposts of mine:
  1. “#BPM for software architects - from monolith applications to explicit and executable #coordination of #microservices architecture” http://improving-bpm-systems.blogspot.ch/2014/08/bpm-for-software-architects-from.html
  2. “#BPM for the #digital age – Shifting architecture focus from the thing to how the things change together” http://improving-bpm-systems.blogspot.ch/2014/08/bpm-for-digital-age-shifting.html
  3. “e-government reference model #GeGF2014 #egov #entarch #bpm #soa” http://improving-bpm-systems.blogspot.ch/2014/10/e-government-reference-model-gegf2014.html 
  4. “#BPM for #SOA+#ESB+#API and #cloud (#PaaS and #SaaS)” http://improving-bpm-systems.blogspot.ch/2014/12/bpm-for-soaesbapi-and-cloud-paas-and.html 
  5. "Ideas for #BPMshift – Delenda est “vendor-centric #BPM” – How to modernise a legacy ERP" http://improving-bpm-systems.blogspot.ch/2014/04/ideas-for-bpmshift-delenda-est-vendor_27.html

And other references:


In this blogpost I try to stay at the conception level and avoid any implementation details (such as WS, REST, XML, API, HTTP, JSON, Web, common libraries, Java, etc.) – let us validate the concept first and then talk about implementation techniques.

At the same time, it should be noted that there is no commonly-agreed definition of “microservice”. One reference definition of “service” is in http://www.infoq.com/articles/updated-soa-principles .

My definition of “service” is “explicitly-defined and operationally-independent unit of functionality”.

Application architecture current challenge

If, for a moment, we are not fussy about the terminology, then we can accept the current challenge in application development as it is defined by Jean-Jacques Dubray (in a comment on the first reference): the microservice architectural style “...is an approach to developing a single application as a suite of small services”.

In other words, the question is how to implement an application as an organised collection of many autonomous components. Each Autonomous Component (AC) is (potentially) distributed (in-house or in-cloud) and is thus a unit of deployment (i.e. it can be deployed on a separate host).

Note: At present, it is not possible to claim that ACs are microservices or services.

I think that this is a natural step in the evolution of the application architecture.

We started from the monolith: one unit of deployment, everything “in-process” (in the techie’s meaning – in the same JVM), simple inter-component communication (again, “in-process”) and simple error-handling (again, thanks to being “in-process”).

Then we moved to client-server, with two units of deployment (but not yet autonomous components): a fat client (presentation and some business logic) and the rest.

Then to three-tier, with three almost-ACs: a thin client (presentation, better UX), business logic (calculations) and a data access layer.

Now, with the recent popularity of mobile devices, in addition to the IT-centric decomposition (presentation, logic, data) into ACs, there is an opportunity for functional decomposition into ACs. So, each AC may have its own UI/presentation, logic, data and additional access channels, e.g. a classic API. An example of functional decomposition is a portal – a data-centric portal for navigation over some data, a function-centric portal, or a combination of the two.

Meaning of “autonomous”

The word “autonomous”, being a keyword, requires an extra explanation. Some people associate it with “...Independent Scalability, Independent Lifecycle and Independent Data”, but let us look at the full life-cycle of an AC.

In general, such a detailed life-cycle has the following phases:
  • Contextualise (or define the future AC’s usage/expenses – WHY)
  • Plan (or schedule the AC’s design, build and run – WHEN)
  • Design (or define the AC’s characteristics – WHAT)
  • Build (or implement the AC – HOW)
  • Bind (or link)
  • Deploy (WHERE)
  • Run
  • Monitor (or meter the usage of the AC – WHO)
  • Measure (or evaluate the performance of the AC – WITH WHAT RESULTS)
  • Un-deploy

Considering that each AC will participate in one or many assemblies for provisioning a “bigger” solution (or richer functionality), ACs are interdependent at their “creation” phases (contextualise, plan, design, build). Imagine that a highly-efficient team is created by the careful selection and training of people.

Ideally, ACs are independent at their “operation” phases (deploy, run, monitor and measure). For example, a high consumption of CPU by one AC will have no negative effect on other ACs. Also, an AC may be stopped, un-deployed, re-deployed and resumed without degrading the performance of the whole assembly.

But the binding or linking of ACs into an assembly makes ACs dependent in accordance with their contracts. An up-stream dependency is related to the AC’s performance. A down-stream dependency may appear as well, by imposing the use of some ACs. (Note: there are several techniques to reduce dependency.)

Thus, the autonomy of each component varies from more to less (independence, dependence and interdependence) at the different phases of the full life-cycle.

Structurally interdependent, behaviourally independent and contractually dependent (note: there are behavioural rules, like a red card in football) – what a mixture.

Working together in assemblies

There must be a set of common rules to help ACs work synergistically together (as an assembly) for a common goal. I propose the following rules:
  • each AC acts similarly to a service;
  • an AC may have a particular specialisation: user-facing (interactive service), coordinator (orchestration, etc.), lawyer (decisional business rules), utility (basic functionality), resource (data, etc.), communicator (lightweight moderator), dispatcher (event handler), referee (behaviour-rules enforcer), porter (security service), registrant (naming service), etc.;
  • to work together, ACs follow agreed admission rules: naming, formally defined interfaces, performance, behaviour, etc. Admission rules for different assemblies may be different;
  • in particular, ACs may have to agree on using some common ACs with assembly-wide functionality.

Must an AC be small or not? Ideally, ACs should follow the Single Responsibility Principle (SRP).

In the extreme, ACs may work together as a group of fully universal agents (no specialisation) or as a team of specialised ACs (as in some team sports). A particular type of specialisation is the coordination of ACs with “small or simple” functionality into an AC with “big or complex” functionality (for example, combining “generate PDF”, “protect PDF by a digital signature” and “disseminate PDF” together).
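As a sketch of this kind of coordination, here is a minimal, entirely hypothetical example in which the three “small” ACs mentioned above are combined into one “bigger” AC. In a real assembly each function would be an operationally-independent, remotely-invocable service; the names and payloads below are illustrative stand-ins.

```python
# Hypothetical small ACs; in reality each would be a separately deployed service.

def generate_pdf(data):
    """Small AC: render data as a (fake) PDF payload."""
    return ("PDF:" + data["title"]).encode()

def sign_pdf(pdf, key):
    """Small AC: protect the PDF with a (fake) digital signature."""
    return pdf + b"|signed-by:" + key.encode()

def disseminate_pdf(pdf, recipients):
    """Small AC: send the PDF and return a delivery report."""
    return {"delivered_to": list(recipients), "size": len(pdf)}

def publish_report(data, key, recipients):
    """The 'big' AC: an explicit coordination of the three small ACs."""
    return disseminate_pdf(sign_pdf(generate_pdf(data), key), recipients)

report = publish_report({"title": "Q1"}, "corp-key", ["alice", "bob"])
```

The point is that `publish_report` adds no business logic of its own – it only coordinates, which is exactly what makes the assembly easy to re-wire.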

AC nomenclature

Let us try to use ACs for some modern ideas about a better application architecture (again, don’t be fussy about the terminology), one of which is cited below:

“A modern application is a functional ecosystem comprising a loose association of apps and services. Apps implement the application front end (user-facing) to support a specific functionality on a particular type of device and interaction medium, and services implement the application back end. Together these apps and services support a particular business domain.” (Note: RIA – Rich Internet Applications – proponents were talking about this as well.)

I think that the following types of AC are necessary to cover the majority of “modern applications”:

  • various resources (primarily, data) – a typical data-access AC;
  • utilities – a typical data/document transformation AC;
  • data/document role-based mini-portals (select data/documents and initiate an operation on them) – a user-facing AC;
  • functional portals (select a function and execute it on some data/documents) – a user-facing AC;
  • short-running (or synchronously executed, i.e. near-immediate completion) operations – a user-facing AC with some logic and invocations of some other ACs;
  • long-running (or asynchronously executed) operations – an orchestration-like AC for long-running coordination of people and ACs;
  • stateless and idempotent compositions of some ACs into a composite operation – a low-code aggregation of invocations of other ACs.

Archetypes of application which can be constructed from ACs

Data-centric application archetype

Typical usage: browse a data repository, select and execute an operation.

Document-centric application archetype

Typical usage: browse a document repository, select and execute an operation.

Example: document management systems

Operation-centric application (with short-running operations) archetype

Typical usage: browse a static list of operations, select an operation, associate with it some data/documents, execute this operation.

Example: employee portal

Note: all operations are short-running – they are completed in less than a few minutes (so a person can wait for their completion in front of the screen).

The static list of operations is role-dependent.

Operation-centric application (with long-running operations) archetype

Typical usage: browse the static and dynamic lists of operations, select an operation, associate with it some data/documents, execute this operation.

Example: employee portal.

Note: all operations are long-running – they are completed in a few days / weeks (so a person cannot wait for their completion in front of the screen).

The static and dynamic lists of operations are role-dependent.

Real applications

A real application may be a mixture of the archetypes mentioned above.


Do you see other types of application? Do you know other specialisations of AC? Please share.



Yet another definition of enterprise architecture #entarch and metrics for enterprise architects

My short definition of #entarch from the viewpoint of an enterprise architect

Enterprise Architecture (EA) is a system-thinking applied management discipline about essential decisions for coordinating people, processes, projects and products in 11 dimensions:
  1. focus space (biz unit, enterprise, country, etc.)
  2. architectural space (business, information, etc.)
  3. time span (project life-cycle, solution life-cycle, enterprise life-cycle, etc.)
  4. sector span (various industries)
  5. problem space (re-structuring, rationalisation, M&A, standardisation, etc.)
  6. solution space (from a concept to operations - like ZF rows)
  7. cultural space
  8. practice space (are you theoretician vs methodologist vs practitioner?)
  9. media space (physical vs analog vs digital)
  10. financial space (number of zeros in the budget)
  11. people space (top, management, middle-management, super-users, workers)

Potential additional dimensions to consider:
  • environment
  • legal 
  • socio-technical vs technical system
  • CX span (touch point, journey, storytelling, lifecycle)
  • social space (gender equality, consensus building, conflict resolution)
  • person span (individual, house, personal cars, public places, etc.)
  • technology ?

Using the definitions below:
  • System-thinking applied management discipline is an applied management discipline which uses the system-thinking approach.
  • Applied management discipline is a management discipline which applies scientific knowledge for solving practical problems.
  • Management discipline is a discipline for the better management of the enterprise functioning in support of the enterprise goals.
  • Discipline is a coherent set of governing rules.
And the self-contained definition will be:

Enterprise Architecture (EA) is a coherent set of governing rules for the better management of the enterprise functioning in support of the enterprise goals by applying system-thinking scientific knowledge for solving practical problems in coordinating people, processes, projects and products in 11 dimensions.


This definition can be used as a metric to qualify enterprise architects. In my books, an enterprise architect must comply with the following:
  1. focus space: work for 1000+ people; 
  2. architectural space: able to contribute to all architectural domains; expert knowledge in 2-3 domains
  3. time span: have experience with complete enterprise life-cycle
  4. sector span: min 5 different sectors experience; able to find similarities between sectors
  5. problem space: leading min 3 critical enterprise-wide changes
  6. solution space: comfortable in min 4 rows
  7. cultural space: know culture specifics for the majority of staff members
  8. practice space: be good in at least one of these roles
  9. media space: make fully digital a whole company (100+ people)
  10. financial space: 1 M budget minimum
  11. people space: able to talk to everyone to explain how EA will address their concerns and change their working habits for the better

#BPM for #SOA+#ESB+#API and #cloud (#PaaS and #SaaS)

Some recent reflections on how some of these TLAs should work together.

Warning: slide #7 has animation.



iCMG conference in Bangalore

What a company!



"Improving software program performance" publication as an illustration of the "capability" concept

This blogpost is a partial copy of my work on improving software program performance. I am going to use it to illustrate the concept of “capability”, which is currently being discussed in some LinkedIn groups.

We measured the behaviour of a typical data-processing program in high-energy physics – geometrical reconstruction.

First, we measured, by sample testing (statistical sampling), how much CPU is consumed by each module (subroutine).

Fig 1

Second, we measured, by intercepting call/return instructions, how much CPU is consumed by each module (subroutine).

Fig 2

Then we used the logical structure of the program.

Fig 3

And we calculated the amount of resource (CPU in this particular case) which is “consumed” through each possible connection. The width of a connection is proportional to this amount. It is important to know that the sum of the incoming “flows” does not equal the sum of the outgoing “flows”, because each module can spend some amount of resource inside itself.

Fig 4

It is interesting that each connection demonstrates the amount of resource consumed by each module as a self-contained sub-system, although some sub-systems are shared by several modules.
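The calculation behind Fig 4 can be sketched as follows; the module names and CPU numbers are hypothetical stand-ins for the measured data. The CPU “flowing” into a module through a connection is the CPU of that module as a self-contained sub-system, and the difference between incoming and outgoing flows is the module's own consumption:

```python
# Hypothetical measurements: CPU spent inside each module (its "self" time)
# and the logical call structure of the program (as in Fig 3).
self_cpu = {"main": 5, "reco": 20, "geom": 40, "io": 15}
calls = {"main": ["reco", "io"], "reco": ["geom"], "geom": [], "io": []}

def subtree_cpu(module):
    """CPU consumed by a module as a self-contained sub-system."""
    return self_cpu[module] + sum(subtree_cpu(c) for c in calls[module])

# The width of the connection caller->callee is proportional to subtree_cpu(callee).
edge_width = {(caller, callee): subtree_cpu(callee)
              for caller, callees in calls.items() for callee in callees}

# The incoming flow into "main" exceeds its outgoing flows by its own consumption.
incoming = subtree_cpu("main")
outgoing = sum(subtree_cpu(c) for c in calls["main"])
```

With these numbers, `incoming` is 80 and `outgoing` is 75 – the 5 units of difference are exactly `main`'s own CPU, which is why the flow sums do not balance.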

And, just a simple question – what is the publication year of this document? A small hint – its price was 33 kopecks.



e-government reference model #GeGF2014 #egov #entarch #bpm #soa

My presentation "e-government reference model" at the Global e-Government Forum 2014 in Astana.

Notes contain references to some blogposts.



#BPM - ladder of business process practices

A presentation inspired by potential clients who are thinking about implementing "business managed by processes".



Concept "capability" for #BPM, #entarch and #bizarch, version 1

1 Proposed definition of the concept "capability" 

Adapted from: http://www.businessdictionary.com/definition/capability.html

capability, noun
[short] measure of the ability of a component to achieve a particular result
[long] measure of the proven possession of the characteristics and/or means and/or power and/or skills to achieve a particular result

Note 1: Is it necessary to include "... and/or power ..."?

Note 2: Capability is an attribute of a component - not a component in its own right.

результатоспособность or делоспособность(?) (Russian), noun
a measure of the possession of the characteristics and/or means and/or skills to obtain a particular result

capacité (French), noun
a measure of the possession of the characteristics and/or means and/or skills to obtain a particular result

So capability may be:
1. Capability as performance – “Demonstrated Result (DR)” / “Required Result (RR)”
2. Capability as potential – “Potential Result (PR)” / “Required Result (RR)”
3. Capability as capacity – “Architected Result (AR)” – maybe non-linear
4. Capability heat map – if RR > DR then (RR − DR)/RR, else 1
5. Capability maturity model – ??

Demonstrated Result – DR (i.e. observed over some time in the past in case of black-box components, e.g. services)

Required Result – RR (e.g. required by the corporate strategy)

Architected Result – AR (i.e. by design in case black-box & white-box components, e.g. processes)

Potential Result – PR (a priori estimation?)
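The variations above can be sketched as simple formulas; the function names are mine and the numbers in the usage note are hypothetical:

```python
# Sketch of the capability variations above; DR, RR and PR are plain numbers.

def capability_as_performance(dr, rr):
    """Capability as performance: Demonstrated Result / Required Result."""
    return dr / rr

def capability_as_potential(pr, rr):
    """Capability as potential: Potential Result / Required Result."""
    return pr / rr

def capability_heat(dr, rr):
    """Heat-map value: if RR > DR then (RR - DR)/RR, else 1."""
    return (rr - dr) / rr if rr > dr else 1
```

For example, with DR = 80 and RR = 100, capability as performance is 0.8 and the heat-map value (the relative gap) is 0.2; once DR reaches RR, the heat-map value becomes 1.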

2 Relationships between concepts 

A component may be implemented in three ways:

  1. implicit coordination of some assets (e.g. by a service) – capability of a component is based on “demonstrated result” or “potential result” – active approach
  2. explicit coordination of some assets (i.e. by a process) - capability of a component is based on “architected result” – proactive approach
  3. outsourcing – capability of a component is based on “demonstrated result” or “potential result” – reactive approach (because of the contract life-cycle) 

3 Related concepts 

ability, noun
proven possession of the characteristics and/or means and/or skills to achieve a particular result

system, noun
functional entity formed by a group of interacting, interrelated or interdependent components

Etymology: from the Latin systēma, which in turn is derived from the Greek σύστημα meaning “together”.
Note: enterprise or organisation is a system

structure, noun
internal arrangement of, and relationship between, components

Note: One can say organisational or corporate structure

component, noun
constituent part of a system, considered as a functional whole

Note: component may be another system; in this case one can say subsystem

architecture, noun
fundamental orderliness (embodied in its components, their relationships to each other and the environment), and the principles governing the design, implementation and evolution, of a system

capacity, noun
specific feature of a component or an asset, measured in quantity and level of quality, over an extended period

Adapted from: http://www.businessdictionary.com/definition/capacity.html

characteristic of a component, noun
a distinguishing trait, quality, feature or property

asset, noun
something valuable that a component owns, benefits from, or has use of, in achieving a particular result

Adapted from: http://www.businessdictionary.com/definition/asset.html

business process, noun
explicitly-defined coordination for guiding the purposeful enactment of business activity flows

Note: A simple business process is an agreed plan to follow; the plan is a directed graph of (both parallel and sequential) business activities; the plan may include some variants and allow some changes.

business activity, noun
a unit of work

performance, noun
measurement that expresses how well something or somebody is achieving a particular result

key performance indicator, noun
quantifiable performance

throughput, noun
the rate at which a system achieves its goal

4 Groups 

The meaning of the concept “capability” is discussed in the following LinkedIn groups:

1. https://www.linkedin.com/groupItem?view=&gid=84758&type=member&item=5894509808182112260&commentID=-1&trk=groups_item_detail-b-jump_last#lastComment

2. https://www.linkedin.com/groupItem?view=&gid=1175137&type=member&item=5796714327490707456&commentID=-1&trk=groups_item_detail-b-jump_last#lastComment

3. https://www.linkedin.com/groupItem?view=&gid=2639211&type=member&item=5908276236370616321&commentID=-1&trk=groups_item_detail-b-jump_last#lastComment

5 Several variations in opinions about capability 

Q1: Capability – a characteristic of a system (Ravi) or an element of a system (Stephen, Louise) or possibility to reconfigure a system (Lalen)? My current understanding – characteristic of a component

Q2: Capability – an internal characteristic of system or an external characteristic of a system (Christian)? My current understanding – internal characteristic (because I would like to have an opportunity to architect it)

Q3: Capability – a potential only, i.e. to-be, (Ravi, Ben) or can be also demonstrated during operations, i.e. as-is? My current understanding – both

Q4: Capability – only ability or ability + assets (Stephen, Louise)? My current understanding – both are possible depending on situation

Q5: Capability – dimensionless quantity or what is its dimension? My current understanding – mainly dimensionless but sometimes a capacity of the system (i.e. its limiting/design parameter) is used as its capability

Q6: Capability – a binary value (only "yes" or "no", "capable" or "not capable") or a continuous value, e.g. in the range between 0 and 1? My current understanding – continuous value.

Q7: Capability – it can be applied to components; can it be applied to assets? My current understanding – capability can be applied to both components and assets.



#BPM for the #digital age – Shifting architecture focus from the thing to how the things change together


In the digital age, the focus of enterprise/business/application/etc. architects is not the thing (strategy, policy, service, rule, application, process, etc.) – the focus is how the thing changes and how things change together.

In addition to being cheaper, faster and better, it is mandatory to become more agile, more synergetic (e.g. IoT) and more comprehensive.
  • Digital eats physical: Everything becomes digital – products, information, content, documents, records, processes, money, rights, communications.
  • Fast eats slow: Because digital is intangible, new tools and new execution speeds take effect immediately.
  • Group eats single: It is mandatory to collaborate to address modern complex problems.
  • Big eats small: Digital things operate at a new scale.


This blogpost outlines how BPM can enable changes which accelerate improvements and innovations in the digital age.

This blogpost is based on the blogpost “#BPM for software architects – from monolith applications to explicit and executable #coordination of #microservices architecture” http://improving-bpm-systems.blogspot.ch/2014/08/bpm-for-software-architects-from.html (referred to as the “base” blogpost below).

The goal of IT in the digital age is to be able to provide software-intensive solutions which are easy to evolve instead of classic monolithic applications which are difficult to evolve. This blogpost shows how to design and build process-centric solutions which are easy to evolve. Such solutions are explicit and executable aggregates of components. Aggregates are organised around business processes and components are microservices which wrap various process-related artefacts.

Note: Considering that microservices are autonomous units of functionality, one monolithic application as a big unit of deployment may become a few hundred of microservices as small units of deployment (although the size does not matter in this case).

Both aggregates and components (some aggregates are components as well) will be analysed from the change (evolution) point of view. In other words, how to carry out changes of each particular artefact, and of all artefacts together, without breaking the system – the enterprise as a system of processes (see http://improving-bpm-systems.blogspot.co.uk/2014/03/enterprise-as-system-of-processes.html ) – while achieving the enterprise goals.

Note: Evolution is related to the impact analysis, dependency management and optimisation.

It is considered that all artefacts are versionable and several versions of the same artefact may co-exist in the company’s computing environment. Traceability considerations are at maximum – everything (including changes and work done) is logged as records.

All artefacts are wrapped as services (actually, microservices). A process coordinates (with the use of various coordination techniques – see http://improving-bpm-systems.blogspot.co.uk/2014/03/coordination-techniques-in-bpm.html ) various services and it is a service itself. Thus process is an explicit and executable way to aggregate smaller services into bigger ones. In other words, a process is an aggregated service.

Versioning of artefacts (10.3 from my book http://www.samarin.biz/book )

To achieve the versioning of artefacts it is necessary to understand how to treat relationships between artefacts (see 2.4.4 of the book).

We recommend that a system be evolved via some kind of transformation cycle as shown in Figure 1. Start with a stable configuration of approved artefacts. Then introduce a new version of the artefact B3 which is available only for one consumer (i.e. artefact A2) which has to be also versioned. After achieving higher confidence with these new versions, switch all other consumers (i.e. artefact A1) to the new version of the artefact B3. When it is considered that all new artefacts are functioning correctly, their old versions can be removed. The transformation is over and a stable configuration of approved artefacts is once again reached.

Figure 1 “Transformation cycle” (Figure 10.3 from the book)

In a properly architected system, you may carry out several transformation cycles at the same time.
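A minimal sketch of the transformation cycle, with consumer-to-provider bindings held outside the artefacts so that consumers can be switched one at a time (the artefact names follow Figure 1; the registry is an assumed mechanism):

```python
# Sketch: consumer->provider bindings are held in an external registry,
# so each consumer of artefact B3 can be switched to a new version separately.

bindings = {"A1": ("B3", "v1"), "A2": ("B3", "v1")}   # stable configuration of approved artefacts

bindings["A2"] = ("B3", "v2")    # introduce B3 v2 for one consumer only (A2)
# ... run and gain confidence with the new version ...
bindings["A1"] = ("B3", "v2")    # then switch the remaining consumer (A1)

versions_in_use = {version for (_, version) in bindings.values()}
# "v1" is no longer in use, so it can be removed: the transformation is over
# and a stable configuration is once again reached.
```

Because each switch touches one binding only, several such cycles can run concurrently on disjoint sets of artefacts.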

Process-template and process-instance

A process-centric solution has several processes (actually process-templates – formal descriptions of processes) and some stand-alone services (e.g. a stand-alone service may generate an event which launches one of the processes – actually a process-instance of a process-template, i.e. an enactment of the process-template).

The distinction between process-template and process-instance is very important. The life-cycle of a process-template is controlled at design-time. The life-cycle of a process-instance is controlled at run-time. A process-instance is created, maybe suspended & resumed, and finally terminated. Many process-instances may co-exist at the same time, as shown in Figure 2.

Figure 2 Templates and instances

Process-centric artefacts

Process-centric artefacts and relationships between them are the following:
  • The business is driven by events 
  • For each event there is a process to be executed 
  • Process coordinates execution of activities (automated and human and sub-processes) 
  • The execution is carried out in accordance with business rules 
  • Each activity operates with some business objects (data structures and documents) 
  • A group of staff members (a business role) is responsible for the execution of each human activity 
  • The execution of business processes produces audit trails 
  • Audit trails (which are very detailed) are also used for the calculation of Key Performance Indicators (KPIs) 
Also, one can read more about artefacts in chapters 7 and 11 of the book.


Events

Evolution of an event is very straightforward – just a new version for any change. Usually, there is a mapping (or decision) table (implemented as a “dispatch” service – see 2.6 of the base blogpost) to provide the correspondence between events and processes. In the simplest policy, a particular event is linked to a particular process-template (or to a particular version of a particular process-template). More sophisticated policies are possible, e.g. usage of the most recent version, time-based selection, etc.

Note: Events may be generated by processes.

Note: Events may be processed via EPN and decision management techniques.

Potential side-effects (evolving together): none at the moment (just explicitly ignore an event if it does not launch any process).
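A minimal sketch of such a “dispatch” service, with hypothetical event and template names, showing two policies – a pinned version and “use the most recent version” – plus the explicit ignoring of unmapped events:

```python
# Sketch of a "dispatch" service: a mapping table from event types to
# process-templates with a version-selection policy. All names are hypothetical.

templates = {"handle-order": ["v1", "v2", "v3"], "handle-claim": ["v1"]}

dispatch_table = {
    "order-received": ("handle-order", "v2"),   # pinned to a particular version
    "claim-received": ("handle-claim", None),   # None = use the most recent version
}

def dispatch(event):
    """Return the (template, version) to launch, or None to explicitly ignore."""
    entry = dispatch_table.get(event)
    if entry is None:
        return None                              # event launches no process
    template, version = entry
    return (template, version or templates[template][-1])
```

Evolving the mapping (re-pinning a version, switching a policy) then changes nothing in the events or the templates themselves.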


Process templates

Evolution of a process-template is the evolution of a composite object. The simplest policy is very strict binding (also called “early binding”) – a particular version of the process-template refers to a particular version of each component (actually microservices).

Figure 3 Early binding 

More sophisticated policies are possible, e.g. the process-template uses the most recent version of each component available at the process-instance launching moment (also called “late binding”). Because the process-template is actually a description, its versioning is not a big problem.

Potential side-effects (evolving together): as the life-cycles of a particular process-template and its process-instances do not match, it is necessary to understand what should be done with the running process-instances in case of changing the process-template, although the process-template and its process-instances are different objects (similar to a mother and her born children).


Process instances

A process-instance is a composite object, and it is better to avoid its evolution (like changing a running car). Evolution of a process-instance may be necessary for some legal purposes, if a long-running process-instance should be modified in accordance with the evolution of the related process-template. The related technique is described in http://improving-bpm-systems.blogspot.ch/2010/03/practical-process-patterns-mint.html . Of course, it is better to avoid the evolution of process-instances at all, but small changes should be possible.

In practice, the main reason to evolve a process-instance is to correct various errors and exceptions, e.g. in data, in rules or in automation. If some of the components are expected to be quickly evolving or “shaky”, then the relationships between the composite and these components should be indirect, and thus manageable externally.

Figure 4 Indirect binding

Sometimes, it is necessary to create a version of an external component which must be used only by a particular process-instance. In general, all external components are re-usable from various aggregates.


Roles

Roles should be defined in a suitable DSL, externally from the process-template, and changes for a particular process-instance should be possible. The usual technique is to have a set of dedicated functional roles (Responsible, Accountable, Consulted, Informed) for each human activity within a process, and to be able to provision these roles by various organisational and other roles externally from the process-template.
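This provisioning can be sketched as follows (the activity and role names are hypothetical): the template supplies default organisational roles per functional role, and a particular process-instance may override them without touching the template.

```python
# Sketch: functional roles (RACI) per human activity are provisioned from
# organisational roles outside the process-template. All names are hypothetical.

template_roles = {
    ("approve-invoice", "Responsible"): "finance-officer",
    ("approve-invoice", "Accountable"): "finance-manager",
}

def resolve_role(activity, functional_role, instance_overrides=None):
    """Instance-level overrides win over the template-level provisioning."""
    overrides = instance_overrides or {}
    return overrides.get((activity, functional_role),
                         template_roles[(activity, functional_role)])

# A particular process-instance re-routes "Responsible" to a deputy:
override = {("approve-invoice", "Responsible"): "deputy-officer"}
```

The process-template only ever refers to the functional role names, so re-organisations change the provisioning table, not the process.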


Rules

Rules are a typical service implemented in a DSL (e.g. a decision management notation). This service is stateless and easy to evolve.

Audit trails

Audit trails are easy to evolve. It is important to define them explicitly in processes, for example as measurement points. Audit trails must be kept outside the process engine – for example, in an enterprise data warehouse – thus being independent from the evolution of the BPM suite itself. Typical process-execution data (start/finish time for each activity, etc.) must be merged with some business data to associate separate process-instances which treated the same business objects.


KPIs

If audit trails are done correctly, then KPIs are easy to evolve.

Human activity

Human activity is implemented as an interactive service. Sometimes such a service is a generic tool (which is external to the process-template), and such a tool should receive from the process-instance a reference to the human activity to be treated. This is an example of the indirect relationship mentioned above.


Sub-processes

Typically, early or late binding is applied for selecting the version of a sub-process to be used (although this depends on the capabilities of the business process engine). In the majority of situations, late binding works fine – just remember to record the version of the sub-process template used in each invocation.

Data structures

As a good practice, business data structures are kept in a generic format (e.g. SDO) and transferred along the process as a black box. To implement some routing logic, an additional technical or process-template-specific data structure is created. Bridging between business and technical data structures is done by automated activities.


Documents

Documents are kept in external repositories, e.g. a document management system or an ECM tool. They are referred to via URLs and some metadata.

Automated activities

An automated activity is the most “shaky” component of the process (as an aggregate). The indirect binding used for automated activities is done through a “robot” (see 2.3 in the base blogpost). The robot is a very stable service; the process-instance passes to it the name of the automation script to be executed, as well as input and output parameters. The name of the automation script is a process parameter (thus changeable by the process-template administrator and the process-instance administrator) and the input/output parameters are SDOs.

The typical error recovery practice is discussed below. Figure 5 shows a “container” in which an automated activity “A” operates within the process. The normal execution sequence is “E1-A-E2”. Because the automated activity may fail, the container contains the intermediate exception event “E3” and an activity for the Error Recovery Procedure (ERP).

Figure 5 Error recovery loop and Error Recovery Procedure (ERP) – exception handling 

In case of failure, the recovery execution sequence will be “E1-A-E3-ERP-E1-A-E2”. The ERP may be very trivial (just try again) or more intelligent (try three times and then ask a person to have a look at it).

In addition to exceptions, it is necessary to define a time-out to prevent endless automated activities, as shown in Figure 6.
Figure 6 Error recovery loop and Error Recovery Procedure (ERP) – exception and time-out handling 
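The error recovery loop with a retry limit and a time-out can be sketched in Python as follows (a hypothetical `activity` callable stands in for the automated activity “A”; escalation stands in for “ask a person to have a look at it”):

```python
import time

def run_with_recovery(activity, retries=3, timeout_s=5.0):
    """E1 -> A -> E2 normally; on exception, E3 -> ERP -> back to E1."""
    deadline = time.monotonic() + timeout_s
    last_error = None
    for attempt in range(retries):
        if time.monotonic() > deadline:
            break                          # time-out: stop the endless loop
        try:
            return activity()              # normal exit (E2)
        except Exception as exc:           # intermediate exception event (E3)
            last_error = exc               # trivial ERP: just try again
    # "More intelligent" ERP: after the retries, escalate to a human
    raise RuntimeError("escalate to human: %s" % last_error)

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("service unavailable")
    return "done"

assert run_with_recovery(flaky) == "done"
assert calls["n"] == 3
```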

An automated activity is an automation script which is executed by a robot. A typical automation script is an aggregate (usually in an interpreted language) of several micro-services, and this aggregate should be executed as one transaction (see Figure 7).
Figure 7 Execution of an automation script by the robot 

Again, the normal execution sequence is “E1-A1-A2-A3-E2”. In case of failure of “A2”, the sequence will be “E1-A1-A2-E3-ERP1-E1-A1-A2-A3-E2”. The double execution of “A1” is possible because all micro-services are idempotent (see 2.10 in the base blogpost). If “ERP1” is a human activity then the correction of the automation script may be carried out within this human activity.

Note: Processes with only automated activities must be idempotent.

Of course, there is not a robot for each automated activity, because a robot must be able to handle several automation scripts concurrently (as several process-instances of the same process-template may be executed at the same time). Instead, there is a queue of jobs for a group of similar robots. An automation activity of a process-instance puts an automation script into a queue and waits for a robot to execute this script and inform the process-instance that this automation activity is completed (see Figure 8).

Figure 8 Queuing of jobs for robots 

The queue is shared between various process-instances, and it is possible to have several specialised queues. The queue size and the robots are monitored.

In some sense, robots work like humans – they wait for jobs from process-instances, execute jobs when they can, and inform a particular process-instance that a particular job is completed.
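This queuing arrangement can be sketched in Python with a shared job queue and a pool of robot workers (the completion report back to the process-instance is simplified to a `results` dict; in a real engine it would be a callback or message):

```python
import queue
import threading

jobs = queue.Queue()   # shared queue of (process-instance id, script name)
results = {}           # stands in for completion messages to instances

def robot_worker():
    while True:
        instance_id, script = jobs.get()
        if instance_id is None:
            break                                  # shutdown signal
        results[instance_id] = script + ": completed"
        jobs.task_done()                           # inform the queue

# A group of similar robots consuming the same queue
workers = [threading.Thread(target=robot_worker) for _ in range(3)]
for w in workers:
    w.start()

# Several process-instances enqueue their automation scripts
for i in range(5):
    jobs.put(("pi-%d" % i, "convert_to_pdf"))
jobs.join()                                        # wait for all jobs

for w in workers:                                  # stop the robots
    jobs.put((None, None))
for w in workers:
    w.join()

assert len(results) == 5
```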


The described approach has been used (since the year 2000) for a production system comprising about 3 000 complex products per year, 50 persons, about 50 different activities, 3 production chains, 6 repositories and 40 IT services (actually, a couple of hundred micro-services). The system was in place for several years. The maintenance and evolution of this production system required several times fewer resources. Also, several successful (and easy to do) migrations of its big components were undertaken.



#BPM for software architects - from monolith applications to explicit and executable #coordination of #microservices architecture

This document is an attempt to outline how BPM (trio: discipline, tools and practices/architecture) can address some concerns of microservices architecture, primarily:
  • avoiding “big ball of mud” syndrome (chapter 3)
  • recovery from error (chapter 4)
  • agility of solutions - will be a separate blogpost, because the primary focus of architecture is not the thing (process, services, etc.) but how the thing changes. (Thanks to Jason Bloomberg ) see http://improving-bpm-systems.blogspot.ch/2014/08/bpm-for-digital-age-shifting.html 
  • defining microservices (chapter 5)
  • making microservices friendly to clouds (chapter 6)
  • enhancing information security (chapter 7)
Thus, this document shows a way from monolithic applications to solutions which are based on explicit and executable Coordination Of MicroServices Architecture (COMSA).

The sources about microservices are, primarily, http://martinfowler.com/articles/microservices.html and http://www.tigerteam.dk/blog

1     About microservices (as the latest incarnation of #SOA)

The blogpost http://www.brunton-spall.co.uk/post/2014/05/21/what-is-a-microservice-and-why-does-it-matter/ defines a microservice as follows:
  • A small problem domain (AS: one function with a couple of screens of code, often created by just a few people)
  • Built and deployed by itself (AS: operationally independent; runs at the OS level; and even ownership independence – it seems that a microservice follows my definition http://www.samarin.biz/terminology/artefacts-important-for-the-bpm-discipline/service )
  • Runs in its own process (AS: again, operationally independent, e.g. in its own JVM)
  • Integrates via well-known interfaces (AS: an interface which is implemented in a language-independent way)
  • Owns its own data storage (AS: a microservice may have its own data storage)

2     Implementation techniques for process-centric solutions

Note, you may want to glance at the chapters 9, 10 and 11 (which provide some information about BPM) before reading this chapter.

2.1 Guiding principles

  • Speed of developing automation is the primary factor in the agility of a process-centric solution.
  • Automation and the process template have different speeds of change – keep automation outside the process template.
  • Automation may be long-running and resource-consuming.
  • Automation may and will fail.
  • Failures may be due to technical (no access to a web service) or business (missing important data) reasons.
  • Recovery after failure should be easy.
  • Automation problems (failures, resource consumption) must not undermine the performance of the process engine.

2.2 Interpretive languages

Business routines are usually built on existing APIs to access different enterprise systems and repositories. They look like scripting fragments which manipulate some services and libraries. Thus, a combination of interpreted and compiled programming languages brings extra flexibility – an interpreted language for “fluid” services (business routines) and a compiled language for “stable” services (libraries, business objects, data). Examples of such combinations are: Jython and Java, Groovy and Java, etc. In combining them, it is important to use strong typing to secure interfaces, enjoy introspection, and avoid exotic features.

Example in Jython:
# Pending WF
# 2001-09-27 AS: Date written
# 2003-04-04 AS: Rewrite
# Note: thisW, thisSW, thisWP, thisSession and the BO_* / _Pyt* helpers
# are provided by the workflow engine's scripting context.
import string
# Pre-processing
def task_Pre_processing( ) :
   print thisW.getTitle(), "Execute task Pre-processing()"
   l = thisSession.getResource("/iso/baa/_BO.py")
   BO_initBusinessSession (['pmdb', 'twdb'])
   ID = thisW.getWorkAttributeAsString(thisWP, "ID")
   Language = thisW.getWorkAttributeAsString(thisWP, "Language")
   Languages = thisW.getWorkAttributeAsString(thisWP, "Languages")
# Find related BO
   dProjectPmdb = aProjectPmdbHome.findBo ( ID )
   if ( not dProjectPmdb.isValid() ) :
      _PytErrorAdmin("No standard in the PMDB ID=%s %s" % (ID, dProjectPmdb.getReturnMessage()))
# Rename workflow title
   if (thisW.getID() != thisSW.getID() ) : # ? subworkflow ?
      nameW = thisW.getTitle()
      nameSW = thisSW.getTitle()
      thisSW.setTitle(nameSW +" for "+nameW)
# Display Project's class
   cc = BO_getProjectClass (dProjectPmdb)
   thisW.setMasterManager (string.upper(cc[0:1])+string.lower(cc[1:]))
# Post-processing
def task_Post_processing( ) :
   print thisW.getTitle(), "Execute task Post-processing()"

2.3 Robot as a generic microservice

Keeping microservices for “business routines” outside the process description allows some quick modifications even within a running process instance. The execution of such microservices can be carried out by a universal service which receives a reference to a text fragment to be interpreted, fetches this text fragment and interprets it. We call this service a “robot”; universal robots and specialised robots may co-exist. Robots must be clonable (for scalability, load-balancing and fault-tolerance).

A crash of a robot will not disturb the process engine except that the activity, which caused the crash, will be marked in the process instance as “late” or “overdue”.

2.4 Monitoring

  • Ruthless monitoring of all services (including robots, other systems and repositories).
  • Not just checking that a port is bound, but asking the service to do real work; for example, an echo-test.
  • Each service should be developed in a way that facilitates such monitoring.
  • The system should be developed in a way that facilitates such monitoring.
  • Also, robots must proactively check (via monitoring), before executing automation scripts, the availability of the services to be used in a particular automation script.
  • It is better to wait a little than to recover from an error.
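The echo-test idea can be sketched in Python (the service interface here is a plain callable; `healthy` and `broken` are hypothetical services used only for illustration):

```python
def echo_test(service, probe="ping-42"):
    """Ask the service to do real work and verify the round-trip,
    rather than just checking that a port is bound."""
    try:
        return service(probe) == probe
    except Exception:
        return False

def healthy(msg):
    return msg                      # echoes its input back

def broken(msg):
    raise IOError("service down")   # simulates an unavailable service

assert echo_test(healthy) is True
assert echo_test(broken) is False

# A robot can pre-check the services a particular automation script needs:
services = {"pdf": healthy, "mail": broken}
available = {name for name, s in services.items() if echo_test(s)}
assert available == {"pdf"}
```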

2.5 Explicit versioning of everything

The intrinsic separation between process template and individual process instance in process-centric solutions allows the use of the full power of microservice versioning. Many variants are possible:
  • Process instance may use the “current” version of a particular microservice.
  • Process instance may use the particular version of a particular microservice.
  • In case of some compliance requirement:
    1. Since 1st of April all new process instances will use process template v2
    2. Already running process instances must remain at process template v1
  • Some already running process instances will remain at process template v1 (if those instances are close to the completion)
  • Some already running process instances may be migrated to process template v2 (if those instances are far from the completion)
Thus everything (process templates, XSD, WSDL, services, namespaces, documents, etc.) must be explicitly versioned, and many versions of the “same” artefact should easily co-exist. We also recommend using the simplest version schema – just sequential numbering: 1, 2, 3, etc.
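A minimal Python sketch of such a version registry with sequential numbers (the template name and the registry are illustrative): “late binding” resolves `"current"` at invocation time and the version actually used is recorded; “early binding” pins a particular version, e.g. for compliance.

```python
# Every version of the "same" artefact co-exists in the registry
registry = {
    ("approve-invoice", 1): "template v1 logic",
    ("approve-invoice", 2): "template v2 logic",
}

def resolve(name, version="current"):
    """Return (version actually used, artefact) for the given binding."""
    versions = [v for (n, v) in registry if n == name]
    used = max(versions) if version == "current" else version
    return used, registry[(name, used)]

used, _ = resolve("approve-invoice")       # late binding
assert used == 2                           # record this in the audit trail
used, _ = resolve("approve-invoice", 1)    # early binding (compliance)
assert used == 1
```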

2.6 Use of other types of coordination in addition to classic process templates

Business rules are another DSL which is very popular in BPM. We recommend following the TDM approach (see http://www.kpiusa.com/ ).

We recommend centralising the treatment of important business events (all external ones and some internal ones) in one service called “dispatch”. The “dispatch” service analyses business events and decides which business process should be initiated. Each process should send an internal business event to this service when its work has been completed (see Figure 1).

Figure 1 “Dispatch” service carries out coordination of processes

See also EPN and BPMN "Explicit event processing agents in BPMN?" at http://improving-bpm-systems.blogspot.com/2011/01/explicit-event-processing-agents-in.html .
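A minimal Python sketch of such a “dispatch” service (the event types, routing table and process names are hypothetical): external events are mapped to processes to initiate, while internal completion events are absorbed by the dispatcher.

```python
# Centralised routing table: which business event initiates which process
ROUTING = {
    "invoice-received":   "process-invoice",
    "complaint-received": "handle-complaint",
}

initiated = []   # stands in for actual process initiation

def dispatch(event):
    kind = event["type"]
    if kind == "work-completed":
        return                      # internal event: e.g. update correlation state
    process = ROUTING.get(kind)
    if process:
        initiated.append((process, event["id"]))

dispatch({"type": "invoice-received", "id": "inv-1"})
dispatch({"type": "work-completed",   "id": "inv-1"})  # sent back by the process
assert initiated == [("process-invoice", "inv-1")]
```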

2.7 Pattern PDP (Pre-processing, Doing, Post-processing)

Frequently work is divided into three parts (see Figure 2):
  • pre-processing or preparation, e.g. receipt of information from various sources in different formats, or from different repositories, and conversion into a standard presentation;
  • doing or processing, i.e. data or information processing in accordance with a standard presentation;
  • post-processing or finalisation, e.g. conversion from a standard presentation into a particular presentation.
Figure 2 The PDP pattern

Note, the PDP pattern may be used at the scale of a whole process.

2.8 Pattern AHA (Automated, Human and Automated)

The AHA pattern is a variant of the PDP pattern aimed at facilitating human work, e.g. collection of data and maybe documents for a human activity (in the same way as a good assistant prepares documents for his/her boss) followed by automation of the follow-up activities. We recommend using this pattern to model all intellectual and verification human activities (see Figure 3).

Figure 3 The AHA pattern

Although in some cases the analysis may determine that the pre- or post-processing activity is empty, we recommend that these activities are always inserted – in this way the later addition of some automation will be easy because no changes to the process will be required.

2.9 Pattern ERL (Error Recovery Loop)

Any service invoked within a process may fail. The error must be acted upon in some way, e.g. by re-invoking the service, or by suspending or terminating the process. Figure 4 shows a possible approach to treating a service failure – here we ask a human to do something to correct the service and then re-invoke it. In this diagram we consider that the activity Service returns an error flag which is analysed in the gateway G01.

Figure 4 The ERL pattern (with error return)

If the activity Service raises an exception then the diagram should be as shown in Figure 5.

Figure 5 The ERL pattern (with exception). Note, after “Error recovery” activity the execution continues from the end of respective sub-process, i.e. just before the gateway “G01”. 

The “Error recovery” activity may be a human activity for a person who is responsible for carrying out the necessary corrective actions. Depending on the kind of error, this activity may be assigned to different people.

2.10 Pattern IRIS (Integrity Reached via Idempotency of Services)

To achieve integrity within a process, shall we use the ERL pattern “around” each invocation of a service or not? In general yes, but idempotent services can be grouped (as shown in Figure 6). Idempotency of a service means that it can be invoked many times with the same effect. Any stateless service is idempotent. Some stateful services can have this quality as well, e.g. a service to add a new version to a document may ignore the request if the most recent version of this document is exactly the same as the requested one.
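The stateful-but-idempotent document-version service mentioned above can be sketched in Python (names are illustrative): re-running the activity after an error recovery loop has exactly the same effect as the first run.

```python
documents = {}   # stands in for a document repository: id -> list of versions

def add_version(doc_id, content):
    """Add a new version, unless the most recent version is identical.
    Returns the current version number."""
    versions = documents.setdefault(doc_id, [])
    if not versions or versions[-1] != content:
        versions.append(content)
    return len(versions)

assert add_version("DOC-7", "draft A") == 1
assert add_version("DOC-7", "draft A") == 1   # replay after recovery: no effect
assert add_version("DOC-7", "draft B") == 2
```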

The process in Figure 6 may have the following audit trail:
  • Activity01 – finished
  • Activity02 – failed and raised an exception
  • Error Recovery – did something
  • Activity01 – finished again thanks to idempotency
  • Activity02 – finished
  • Activity03 – finished
Figure 6 The IRIS pattern. Note, after “Error recovery” activity the execution continues from the end of respective sub-process, i.e. just before the gateway “G01”.

Note, idempotence (pron.: /ˌaɪdɨmˈpoʊtəns/) is the property of certain operations that they can be applied multiple times without changing the result beyond the initial application.

3 Avoiding “distributed big balls of mud”

This problem was mentioned in the blogpost http://www.codingthearchitecture.com/2014/07/06/distributed_big_balls_of_mud.html with Figure 7 and next quote:

If you can't build a monolith, what makes you think microservices are the answer? If teams find it hard to create a well structured monolith, I don't rate their chances of creating a well structured microservices architecture. As Michael Feathers recently said (in https://michaelfeathers.silvrback.com/microservices-until-macro-complexity) , "There's a bit of overhead involved in implementing each microservice. If they ever become as easy to create as classes, people will have a freer hand to create trouble - hulking monoliths at a different scale.". I agree. A world of distributed big balls of mud worries me.

Certainly, I can see a lot of similarities between microservices architecture and process-centric solutions in Figure 8 which is from my book about BPM ( www.samarin.biz/book ), published in the year 2009.

Figure 8 Disassembling a monolith into services and assembling them via coordination

The question is how to coordinate separate microservices. The obvious choice is ESB (as shown in Figure 9).
Figure 9 Flow of data

This means that all microservices should be on this picture, with potential everyone-to-everyone connectivity, which has N*(N-1)/2 complexity (where N is the number of microservices), resulting in an “explosion” of the application. We estimate this number at about 100 per application (or 300, from http://www.infoq.com/interviews/goldberg-microservices).
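The N*(N-1)/2 figure is quickly checked:

```python
def links(n):
    """Number of potential point-to-point links between n microservices."""
    return n * (n - 1) // 2

assert links(100) == 4950    # ~100 microservices per application
assert links(300) == 44850   # the figure cited from InfoQ
```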

Also, a couple of issues from ZapThink http://www.zapthink.com/2013/05/21/cloud-friendly-bpm-the-power-of-hypermedia-oriented-architecture/
  1. Where to keep the state of this composite service (i.e. the ex-application)? If in the ESB, then this makes the ESB too complicated.
  2. Is an ESB cloud-friendly? Just imagine a restart of the VM with the ESB.
It seems that an ESB is necessary but not sufficient. What is missing? We believe that the flow of control is more important than the flow of data (as shown in Figure 10).

Figure 10 Flow of control

In the former, the primary importance is the exchange of data. In the latter, the primary importance is the result of working together, not the individual exchanges of data (as in football). Of course, both are necessary, but an ESB alone is not enough. Considering that more than one coordination technique may be used by a solution, Figure 11 is more realistic.

Figure 11 Several coordination techniques

The issues (complexity, state and cloud) are answered as follows:
  • Complexity is much lower because only “business routine” services (which interact with the process) are depicted.
  • State is discussed in chapter 4.
  • Cloud-friendliness is discussed in chapter 6.
Also, some classification of microservices may be added:
  • explicit coordination (orchestration, cooperation, biz rules, event processing)
  • functional components (like elementary filters in UNIX pipes)
  • functional aggregations (e.g. combination of functional components)
  • data storages
  • data aggregations (i.e. combination of data from several data storages)
  • human (interactive) 
  • and some combinations of the previous
This classification helps to understand which microservices may be provisioned from clouds.

4 Easy recovering from errors (by design)

We all know that the main difference between a monolithic application and a distributed solution is in the error recovery practices. We need distributed solutions because of scalability, fault-tolerance and cloud-based provisioning. At the same time, we have to architect the recovery from losing connectivity between nodes and from service failure (VM reloading or node failure).

If a subordinated service (relatively to the coordination service) has failed then the coordination service will recover via error recovery loop (see 2.8 and 2.9).

If the coordination service has failed then some of its running subordinated services cannot complete their associated activities; after the restart of the coordination service, those activities will fail by timeout (because each activity has its SLA).

If a resource may change its state without the control of the process then the process must interrogate the state of such a resource before its usage.
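This check-before-use rule can be sketched in Python (the `Resource` class and its states are hypothetical): the process interrogates the resource's state just before usage instead of trusting a value cached earlier.

```python
class Resource:
    """Stands in for a resource that may change state outside the process."""
    def __init__(self):
        self.state = "ready"

def use(resource):
    # Interrogate the state immediately before usage
    if resource.state != "ready":
        raise RuntimeError("resource not ready; route to error recovery")
    return "used"

r = Resource()
assert use(r) == "used"

r.state = "locked"     # changed outside the control of the process
try:
    use(r)
    assert False, "expected an error"
except RuntimeError:
    pass               # the ERL pattern (see 2.9) takes over from here
```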

Because processes provide a clear and detailed context, the identification of problems is very quick.

5 Defining microservices

BPM helps to provide context for, define, and coordinate microservices. It helps to eliminate endless discussions about the necessary “granularity” of services:

“If we select a top-down style then we will create coarse-grained business-related services, but we are not sure whether such services are implementable or reusable. If we follow a bottom-up style then we will implement too many fine-grained services for which the business value is not obvious.”

Actually, the native flexibility of business processes and explicit versioning allow the rapid and painless adaptation of services to increase or decrease their granularity. Any wrong decisions are easily corrected; services are quickly adapted to the required granularity.

6 Explicit allocation of microservices to clouds

See http://improving-bpm-systems.blogspot.ch/2011/12/enterprise-pattern-cloud-ready.html

7 Enhancing information security

See http://improving-bpm-systems.blogspot.ch/2014/04/ideas-for-bpmshift-delenda-est-vendor.html and related PPT http://improving-bpm-systems.blogspot.ch/2013/04/addressing-security-concerns-through-bpm.html

8 Characteristics of a Microservice Architecture annotated

These characteristics are from http://martinfowler.com/articles/microservices.html

8.1 Componentization via Services

Definition: component is a unit of software that is independently replaceable and upgradeable.
BPM helps to define services.

8.2 Organized around Business Capabilities

Microservices, which in the majority implement various business artefacts, naturally grow around business capabilities.

8.3 Products not Projects

Three different types of projects appear instead of classic projects:
  1. Mini-project for developing process-centric solutions (including new microservices)
  2. Architectural evolution of common components TOGETHER
  3. Implementation of common components (e.g. BPM suite tool)
Architecture-based agile project management (archibagile) may be useful for mini-projects (see http://improving-bpm-systems.blogspot.ch/2014/06/different-coordination-techniques-in.html ).

8.4 Smart endpoints and dumb pipes

Sure, the ESB is just a reliable communication mechanism without any business intelligence. Everything happens in services, even the process-centric coordination.

8.5 Decentralized Governance


8.6 Decentralized Data Management

Sure again.

8.7 Infrastructure Automation

Yes; also the process provides the context for services and thus for test cases. A process itself is an integration test for its services.

8.8 Design for failure


8.9 Evolutionary Design

There are several tempos of design: process, process-specific microservices, common microservices, common operating environment (testing, deployment, monitoring, etc.) and the overall architecture.

Processes make it easier to use the power of total versioning. Thus process-specific microservices should mature very quickly.

9 Briefly about Business Process Management (BPM)

BPM (see Figure 12) is a trio: 1) a discipline for better managing an enterprise, 2) COTS and FOSS tools known as a BPM suite, and 3) an enterprise portfolio of business processes as well as the practices and tools for governing the design, execution and evolution of this portfolio.

Figure 12 BPM as a trio

The key concept of BPM is business process which is explicitly-defined coordination for guiding the purposeful enactment of business activity flows. In other words, a business process is an agreed plan which is followed each time a defined sequence of activities is carried out; the plan may include some variants and will possibly allow for some unplanned (i.e. unanticipated) changes. (see other BPM-related definitions http://improving-bpm-systems.blogspot.ch/2014/01/definition-of-bpm-and-related-terms.html ).

The operative word in the above definition is coordination. Although business processes are often associated with only one coordination technique known as template (workflow-like and BPEL-like fixed logic for sequencing activities), there are many coordination techniques (see http://improving-bpm-systems.blogspot.ch/2014/03/coordination-techniques-in-bpm.html ). The most popular among them are various data-based (also rule-based, decision-based, intelligence-based) and event-based (see EPN - http://improving-bpm-systems.blogspot.fr/2011/01/explicit-event-processing-agents-in.html ) coordination techniques.

From the behavioural (or dynamic) point of view, various coordination techniques are necessary to provide enough flexibility to realise various variants of BPM usage – see http://improving-bpm-systems.blogspot.ch/2010/12/illustrations-for-bpm-acm-case.html .

From the structural (or static) point of view, an enterprise can be presented as a system of processes which comprises various coordination constructs of different granularity (process patterns, processes per se, clusters of processes and value-streams) formed via various coordination techniques (see http://improving-bpm-systems.blogspot.ch/2014/03/enterprise-as-system-of-processes.html ).

For software architects, it is important to know that BPM considers business processes to be explicit (i.e. formally defined to be understandable by different participants) and executable (conceptually, the process instance executes itself, following the BPM practitioner’s model, but unfolding independently of the BPM practitioner; process instances are performed or enacted, which may include automated aspects).

P.S.: Various ways in which a company can benefit from BPM are listed in http://improving-bpm-systems.blogspot.ch/2014/05/ideas-for-bpmshift-delenda-est-vendor_9.html

10 Structuring executable processes and services

An executable process coordinates the execution of some services. Such a process is expressed in a particular language (e.g. BPMN) and it invokes some services. In Figure 13, the process is in the pool “COOR”, interactive services are in the two pools above it, and automated services are in the two pools below it. Note, BPMN is a typical DSL.

Figure 13 Process coordinates some services

This is a classic picture, but how can microservices be brought into it?

Each enterprise is a complex, dynamic, unique (for each enterprise) and recursive (i.e. like a “Russian doll”) relationship (see Figure 14) between services and processes:
  • All processes are services
  • Some operations of a service can be implemented as a process
  • A process includes services in its implementation
Figure 14 Recursive nature of relationship between processes and services

Thus, some “big” services are implemented as explicit and executable processes until only microservices are used.

The relationship does not force a “pure” structure, but brings the flexibility of converting processes to services and vice versa as necessary, e.g. to use services provisioned from a cloud (as shown in Figure 15).

Figure 15 Structure of process and services

Note that the business process modelling procedure should take care of decomposing a big service into smaller services which are coordinated by the process. Different people in similar situations should find similar services (especially microservices), although such decomposition is creative work. An example of such a modelling procedure is in http://improving-bpm-systems.blogspot.ch/2013/07/bpm-for-business-analysist-modelling.html .

11 Multi-layered structuring of process-centric solutions

Because a process coordinates various business artefacts, e.g. “Who (roles) is doing What (business objects), When (coordination of activities), Why (business rules), How (business activities) and with Which Results (performance indicators)”, these artefacts can be structured around processes.

This structure arranges different artefacts on separate layers as shown in Figure 16. Each layer is a level of abstraction of the business and addresses some particular concerns.

Figure 16 Multi-level implementation model

More details are available from http://improving-bpm-systems.blogspot.ch/2011/07/enterprise-patterns-caps.html

Each layer has two roles: it exploits the functionalities of the lower layer, and it serves the higher layer. Each layer has a well-defined interface and its implementation is independent of that of the others. Each layer comprises many services that can be used independently – it is not necessary that all layers be fully implemented at the same time or even be provided in a single project.

Another practical observation is that different layers have lifecycles of different time scales: typical repositories have a 5- to 10-year life-span while the business requires continuous improvement. Because of the implementation independence of the different layers, each layer may evolve at its own pace without being hampered by the others.

Business objects, routines, processes, KPIs, events, rules, audit trails, roles, etc. are the first candidates for microservices which implement particular artefacts.



Technology-enabled #healthcare transformation (via synergy between #entarch, #BPM, #SOA)

Concept paper abstract – please contact me if you want to have a look at the full paper and provide your feedback

We believe that the healthcare sector needs a disruptive transformation:
  • healthcare should be more affordable; 
  • healthcare should offer the best possible services for each patient; 
  • healthcare should become the centre of the health value-stream; 
  • healthcare should seamlessly incorporate innovations; 
  • healthcare should be secured by design; 
  • healthcare should prevent unjustified proliferation of tools. 
Our experience shows that systems of the complexity of healthcare must be carefully architected to
  • avoid duplications, 
  • mitigate execution risks, 
  • enable coordination, collaboration and cooperation, 
  • build mutual understanding among all participants and 
  • explicitly demonstrate how the system will address stakeholders’ concerns. 
To address two essential challenges: 1) the “functional silo” nature of modern healthcare ICT and 2) the high level of diversity between IT-enabled healthcare implementation initiatives, this concept paper proposes a platform-based approach for the realisation of a healthcare platform.

The concept paper explains this approach via a coherent set of the following views:
  • Big picture of healthcare 
  • Reference functional architecture 
  • Enterprise as a system of processes 
  • Security enhanced by the use of processes 
  • Some participant’s view 
  • Platform-based approach 
  • Implementation practices 
  • Project management practices 
  • Multi-layer implementation model 
  • Agile solution delivery practices 
  • Various technologies around 
  • Modernisation of applications to become process-centric 
The majority of those views are briefs on more detailed and proven methodologies and technologies.

The concept paper also provides a mapping between views and the following stakeholders: Citizens, Patients, Professionals, Healthcare self-regulators, Governmental regulators, Service providers, Medical research, Vendors, Insurance, Information systems architects, Project managers.



#smartcity project proposal - call for collaboration (to use the joint power of #entarch and #BPM)

Full concept paper (added later)

Official invitation

Dear colleagues,

Initiated by a consortium headed by the Technology Institute at the Varna Free University and several ICT companies (with the support of the local Public Administration) a project idea is being developed and prepared for submission under the H2020 Programme.

Topic: e-Infrastructure Policy Development and International Cooperation.

Attached please find the project fiche with the most relevant parameters outlined.

As the deadline is on Sept. 2 (and moreover with August being a vacation period for many) a confirmation of interest in taking part in the consortium is kindly suggested by the first days of August.

Various types of partners are sought - Universities, local administrations, ICT companies etc. - so in case you might not find the topic of interest, please refer this e-mail to other working partners of yours who might be suitable or interested.

Thank you very much in advance!
Project Administrator

Some explanations from the Project Administrator

Financial obligations
As for financial obligations – it would be 100% financed by the European Commission – whatever we promise in the budget, the organisations spend; they keep documents of the expenditures, and it gets reimbursed.

Municipalities share some of their data, network, etc. – so that e-government is improved; universities aid with research, provide other ideas, share students’ work and whatever else we want from them... while the ICT companies write and maintain the framework.

Project website - http://foneca.com/SMARTCITY/

Realisation smart-city as a sociotechnical system: concept paper abstract 

The study “Mapping smart cities in the EU” defines a smart city as “a city seeking to address public issues via ICT-based solutions on the basis of a multi-stakeholder, municipally based partnership”. Thus a smart city should be considered as a system and, actually, as a sociotechnical system, to emphasise that the relationships between social and technical elements should lead to the emergence of productivity and wellbeing.

Our experience shows that systems of such complexity must be systemically architected to

  • avoid duplications, 
  • mitigate execution risks, 
  • enable coordination, collaboration and cooperation, 
  • build mutual understanding among all participants and 
  • explicitly demonstrate how the system will address stakeholders’ concerns. 
To address two essential challenges: 1) the “functional silo” nature of modern cities’ ICT systems and 2) the high level of diversity between smart-city implementation initiatives, this concept paper proposes a platform-based common urban business execution approach for the realisation of a smart city.

The concept paper explains this approach via a coherent set of the following views:
  • Big picture of smart city 
  • Reference functional architecture 
  • Platform-based approach for working and advancing together 
  • Platform implementation practices 
  • Project management practices 
  • Implementation smart-city governance practices 
  • Enterprise as a system of processes 
  • Enhancing information security by the use of processes 
  • Some participant’s view 
  • Multi-layered implementation model 
  • Agile solution delivery practices 
  • Various technologies around the implementation model 
  • Modernisation of applications to become process-centric 
The majority of those views are briefs on more detailed and proven methodologies and technologies.

The concept paper also provides a mapping between views and the following stakeholders: Citizens, Government authorities, Funding bodies, Local government stakeholders, National regulatory agencies, Political parties, Public service providers, IT vendors, Local businesses, Information systems architects and Project managers. The next step will be the selection of several smart-city projects and their architecting in accordance with the proposed platform.