Post-platform enterprise pattern: Enterprise-Enterprise Yet Open Robotic Environment (EEYORE)

With thanks to John Morris (@JohnHMorris) and Michael Poulin (@m3poulin) for their valuable comments.

An updated version of the article is available at https://bpm.com/bpm-today/blogs/1292-post-platform-enterprise-pattern-faster-and-cheaper-inter-enterprise-ecosystem-business


In the modern business world, the vast majority of work is done through the joint efforts of two or more enterprises. Inter-enterprise “working together” is now the norm. It may have different longevity: one-off, sporadic or permanent (as a B2B partnership). Can such “working together” be used systemically? For example, a modern enterprise tries to gain competitive advantage by
  1. perfecting, digitizing and innovating its core-business capabilities, and 
  2. using the best “other” enterprises to provide the “other” capabilities needed to achieve a particular goal (or even its mission); note that those “other” capabilities from this enterprise’s point of view are core-business capabilities from the “other” enterprises’ point of view. 

In other words, an enterprise may combine its own “internal” capabilities with “external” capabilities obtained from other enterprises to achieve a particular goal.

Thus every enterprise involved carries out only its core-business capabilities! This is a big difference from the current situation, in which some of an enterprise’s capabilities (core-business) must compete in the worldwide market while others (supporting) have no competitor at all.

The economist Ronald Coase observed that “Firms exist when the transaction cost of doing something within the firm, even with all its overhead, is lower than cost of doing things through a marketplace of free agents”. At present, an enterprise possesses core-business capabilities and supporting capabilities. Historically, the cost of intra-enterprise transactions has been lower than the cost of inter-enterprise transactions. In the digital age, the costs of inter-enterprise transactions are constantly dropping as digital technology develops (the so-called "API economy" is a good example of this). Of course, the cost of contracting out versus direct management is only one of many factors, such as risks, time lag and security, to be taken into account.

Thus, the famous classic representation of an enterprise may be redrawn as shown in the illustration below. Note: logistics, marketing and sales may also be considered non-core-business capabilities and be provided by “other” enterprises.

The ability of several enterprises to work together with maximum synergy and minimum overhead (thus much better than a classic enterprise) is critically important. The list of common activities needed to achieve that is not long but daunting:
  • formally define the work to be done; 
  • find the best other enterprises for this work; 
  • contract the selected enterprises; 
  • activate and configure a trusted working environment for all participating enterprises; 
  • carry out the work, with secure sharing of some data and information among the participating enterprises; and 
  • complete all contracts related to this work. 
Certainly, if modern digital technologies are properly architected together, they can enable this new way of working, making it more efficient and more effective than the current way of working.
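
The activity list above can be sketched as a minimal state machine for one engagement. All state names below are illustrative assumptions, not taken from any standard or product:

```python
# Minimal sketch of the engagement lifecycle listed above.
# State names are illustrative assumptions, not from any standard.

ALLOWED = {
    "work_defined":      ["partners_found"],      # formally define the work
    "partners_found":    ["contracted"],          # find the best other enterprises
    "contracted":        ["environment_ready"],   # contract the selected enterprises
    "environment_ready": ["in_progress"],         # activate a trusted working environment
    "in_progress":       ["completed"],           # carry out the work
    "completed":         [],                      # complete all related contracts
}

class Engagement:
    def __init__(self) -> None:
        self.state = "work_defined"

    def advance(self, new_state: str) -> None:
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

e = Engagement()
for s in ["partners_found", "contracted", "environment_ready", "in_progress", "completed"]:
    e.advance(s)
print(e.state)  # completed
```

Making the allowed transitions explicit is what lets an independent coordinator enforce the same lifecycle for all participants.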

Explicit, formal, machine-readable and machine-executable processes can act as a neutral and natural referee for coordinating inter-enterprise work. A potential implementation is a BPM-suite tool. The target is to position a BPM-suite tool as an independent and trusted third party (a referee or coordinator) to execute legally binding processes for two or more mutually non-trusting parties. See http://improving-bpm-systems.blogspot.com/2016/07/digital-contract-as-process-enables.html

Of course, when many enterprises contribute to some common work, records management must be centralized (i.e. accepted by all the participants) and suitable for a digital way of working. A digital archive with good availability and integrity can act as an escrow for some transactions and documents. A potential implementation is blockchain technology.
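
A minimal sketch of such an immutable archive, assuming only that each record is chained to the previous one by a hash (as blockchains do). This is an illustration of the idea, not a production design:

```python
import hashlib
import json

# Toy append-only archive: each entry's hash covers the record plus the
# previous hash, so any later tampering breaks verification.
class Archive:
    def __init__(self):
        self.entries = []  # list of (record, digest) pairs

    def append(self, record: dict) -> str:
        prev = self.entries[-1][1] if self.entries else "0" * 64
        payload = prev + json.dumps(record, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append((record, digest))
        return digest

    def verify(self) -> bool:
        prev = "0" * 64
        for record, digest in self.entries:
            payload = prev + json.dumps(record, sort_keys=True)
            if digest != hashlib.sha256(payload.encode()).hexdigest():
                return False
            prev = digest
        return True

archive = Archive()
archive.append({"doc": "contract", "parties": ["A", "B"]})
archive.append({"doc": "delivery-note", "order": 7})
print(archive.verify())  # True
```

Because verification is cheap and deterministic, any participant can independently confirm that no record has been altered after the fact.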

An example of such a “disaggregation” of a classic enterprise – https://news.cgtn.com/news/35636a4e33637a6333566d54/share_p.html


Working together for the same goal requires sharing some things (business and technical artefacts) and, probably, some agreed means. Such sharing is smaller for cooperation (different enterprises performing different activities) and greater for collaboration (different enterprises performing the same activities).

Imagine that 10 enterprises have decided to work together. Shall each of them implement the business processes required for this engagement? Add a complex EDI infrastructure for communicating between processes from different enterprises? Keep everything within each enterprise? That looks rather difficult. If something that must be shared is done once for everyone, then everyone gains. Thus some processes related to this engagement may be implemented once at an independent provider and linked to existing processes in each enterprise.

WHAT to share, and the supporting means:
  • Data (see the DIKW pattern): hashes, cryptographic keys; supported by a common immutable storage. 
  • Information (see the DIKW pattern): business objects, for interoperability (e.g. in interfaces); supported by a common immutable storage. 
  • Audit trails, signed documents, agreed goals, demonstrated KPIs, inputs & outputs of inter-enterprise transactions (facts important for business); supported by a common immutable storage. 
  • Agreed business logic, for synchronization of work; supported by a common event queue. 
  • Agreed procedures, agreed SLAs and agreed protocols, for regulation of work; supported by a common decision mechanism and common scripts in domain-specific languages. 
  • Agreed testing methods, for coordination of work; supported by a common coordination mechanism. 
  • KPI calculations (agreed formulas and results of calculations), for a common view on performance; supported by a common dashboard. 
  • Agreed algorithms and results, for a common view on planning; supported by a common prediction mechanism. 
  • Proposals for better working, for better joint performance; supported by a common set of digital models and agreed scenarios. 
  • Estimations of uncertainty evolving over time (participants’ opinions on some factors), for non-biased corporate security of all the enterprises involved; supported by a common immutable storage. 
  • Estimations of risk evolving along processes (participants’ opinions on adverse impacts), for non-biased corporate security of all the enterprises involved; supported by a common immutable storage. 
  • A payment mechanism, for minimizing transactions with banks; supported by an own local currency. 
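
Most of the supporting means above are generic mechanisms. As one illustration (all names are hypothetical), a "common event queue" for synchronization of work can be as simple as a publish-subscribe channel that every participating enterprise observes:

```python
from collections import defaultdict

# Toy "common event queue": each participating enterprise subscribes to the
# event types it cares about; publishing one event synchronizes everyone.
class EventQueue:
    def __init__(self):
        self.subscribers = defaultdict(list)  # event type -> list of callbacks

    def subscribe(self, event_type: str, callback) -> None:
        self.subscribers[event_type].append(callback)

    def publish(self, event_type: str, payload: dict) -> None:
        for callback in self.subscribers[event_type]:
            callback(payload)

q = EventQueue()
seen_by_logistics, seen_by_insurer = [], []
q.subscribe("goods_shipped", seen_by_logistics.append)  # enterprise A
q.subscribe("goods_shipped", seen_by_insurer.append)    # enterprise B
q.publish("goods_shipped", {"order": 42})
print(seen_by_logistics)  # [{'order': 42}]
```

The point is that the event is published once, by one party, and every subscribed enterprise receives the same fact at the same time.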

The things shared in a particular case must be (a) architected together and (b) digital (explicit, formal, machine-readable and machine-executable) to achieve the desired values of important emergent characteristics such as:
  • the cost of inter-enterprise transactions, 
  • risks, 
  • time lag, 
  • security, 
  • manageability, 
  • etc. 


A new application architecture for inter-enterprise agile engagements can be built on the following pillars:
  • An immutable common digital archive (FACTS) to store all versions of all the artefacts. 
  • Microservices (ACTIONS), potentially in a serverless computing environment (because the digital archive is used as a common immutable storage). Please note that microservices are normal services (according to OASIS) except that one unit-of-functionality is one unit-of-deployment is one unit-of-execution. 
  • Machine-executable processes (COORDINATION). 
These pillars work well together (in a robotic way, without a high level of creativity): process templates define the granularity and interfaces for microservices, and the digital archive keeps all the artefacts (of course, they must be versioned). Process instances define the validity of various facts.
See also http://improving-bpm-systems.blogspot.com/2018/06/architecting-modern-digital-systems.html
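
A toy sketch of how the three pillars fit together, with plain functions standing in for microservices and a list standing in for the digital archive (all names are illustrative assumptions):

```python
# FACTS: a list stands in for the immutable digital archive.
# ACTIONS: plain functions stand in for microservices.
# COORDINATION: a process template lists the steps to execute.
# All names are illustrative assumptions.

archive = []  # FACTS: every step leaves a versioned fact here

def record(fact) -> None:
    archive.append(fact)

def receive_order(ctx: dict) -> None:  # ACTIONS: one unit of functionality
    ctx["order_id"] = 42
    record(("order_received", ctx["order_id"]))

def ship_order(ctx: dict) -> None:
    record(("order_shipped", ctx["order_id"]))

PROCESS_TEMPLATE = [receive_order, ship_order]  # COORDINATION

ctx = {}
for step in PROCESS_TEMPLATE:  # a process instance executes the template
    step(ctx)
print(archive)  # [('order_received', 42), ('order_shipped', 42)]
```

Note how the process template, not the services, decides the order of execution, and how every action leaves a fact behind.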

Some hurdles – process modelling

Modelling inter-enterprise processes is not easy because such processes are distributed (each of them may be executed in its own computing environment) and they communicate by exchanging messages; thus it is necessary to handle exceptions in communicating processes. Good modelling styles and process patterns will help.
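
The exception-handling need can be illustrated with two communicating processes, where the caller must plan for a reply that never arrives (queue names and the timeout value are illustrative):

```python
import queue
import threading

# Two "processes" exchanging messages; the caller handles the exception
# case where the partner's reply does not arrive in time.
def partner_process(inbox: queue.Queue, outbox: queue.Queue) -> None:
    msg = inbox.get()
    outbox.put(f"ack:{msg}")

to_partner: queue.Queue = queue.Queue()
from_partner: queue.Queue = queue.Queue()
threading.Thread(target=partner_process,
                 args=(to_partner, from_partner), daemon=True).start()

to_partner.put("invoice-001")
try:
    reply = from_partner.get(timeout=2)  # normal flow
except queue.Empty:
    reply = None                         # exception flow: retry, escalate or compensate
print(reply)  # ack:invoice-001
```

In a real inter-enterprise setting the exception branch is the interesting part of the model: the process must say what happens when a partner goes silent.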

Some hurdles – process execution

Some BPM-suite tools are already connected to a blockchain used as a common immutable storage.

1) Ultimus can use blockchain as a data storage - "Ultimus uses Tierion to create an audit trail for business processes and prove the integrity and timestamp of documents." See https://medium.com/tierion/ultimus-integrates-tierion-for-blockchain-digital-process-automation-f8331d76216a

2) Bonita can use blockchain – https://www.youtube.com/watch?v=lkwvko2Uy24

3) ConsenSys uses the Camunda BPM-suite with a blockchain for data storage, creating new users and executing smart contracts: https://www.youtube.com/watch?v=oww8zMzxvZA&feature=youtu.be

Some hurdles – process execution as a smart contract

One option for bringing processes to existing blockchain implementations is to implement a BPMN-like interpreter as a smart contract within the blockchain technology.
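
A toy token-moving interpreter shows the idea; a real smart-contract version would keep the model and the token position on-chain. The model, node names and data below are invented for illustration:

```python
# Toy BPMN-like interpreter: the process model is a graph of tasks and one
# exclusive gateway; execution moves a single token through the graph.

MODEL = {
    "start":   {"type": "task", "next": "check"},
    "check":   {"type": "gateway", "branches": {True: "approve", False: "reject"}},
    "approve": {"type": "task", "next": "end"},
    "reject":  {"type": "task", "next": "end"},
    "end":     {"type": "end"},
}

def run(model: dict, data: dict) -> list:
    node, trace = "start", []
    while model[node]["type"] != "end":
        trace.append(node)
        if model[node]["type"] == "gateway":
            node = model[node]["branches"][data["approved"]]  # data-based routing
        else:
            node = model[node]["next"]
    return trace

print(run(MODEL, {"approved": True}))  # ['start', 'check', 'approve']
```

Because the interpreter is generic, the same contract code can execute any agreed process model, which is exactly what makes this approach attractive for mutually non-trusting parties.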

Some hurdles – blockchain mentality

People from the blockchain domain often follow the “if you have a hammer then everything is a nail” anti-pattern. https://blog.apla.io/report-of-the-egaas-team-556e5e4717bd

See http://ipe-lab.com/publication/223/?sphrase_id=41 as a case of using blockchain for control of third-party services. This is an attempt to change a group of processes (from a few B2B partners) into a blockchain application. In the bottom picture (TO-BE), the actual processes from the top picture (AS-IS) were lost. And “Erroneous behaviour is judged by voting among all participants”, not by thorough analysis!


This pattern opens a new perspective for traditional Business Process Management, which also becomes Inter-Enterprise Ecosystem Business. In other words, processes will expand outside enterprises to enable faster and cheaper “working together” for enterprises. 

And a quote from Alberto Manuel (@AlbertoManuel): "Architecting expanded Value chain in the era of digital ecosystems and shared capabilities."



Architecting modern digital systems #entarch #bizarch #apparch #bpm #security #microservice

Mini-course at VFU

1 Title

Architecting modern digital systems

2 The problem to be addressed

At present, there are many IT-related methodologies, technologies, tools and schools of thought which overlap and contradict each other. The best practices are actually the best only in particular situations. Often, decisions about software-intensive solutions are taken on an incomplete and subjective basis. All of this tremendously complicates modern digital systems, thus reducing their potential effectiveness and efficiency. 

3 Objectives

The purpose of this course is to provide the basic knowledge and experience necessary to better understand how to deal with the increasing complexity of information technologies and to obtain synergy between business needs and IT potential. 

4 The approach

The course is based on the practical use of Enterprise Architecture (EA), which is a methodology and practice for architecting solutions. EA provides an overarching guideline for understanding a “problem space” and taking the necessary decisions about the “solution space” in order to deliver a solution which addresses the problem. 

5 Learning outcomes

The trainees will
  • learn a systematic approach for architecting digital solutions; 
  • learn about some modern information technologies; 
  • learn how those technologies work together for the systematic architecting, design, implementation, operation and evolution of digital systems; 
  • carry out a practical architecting exercise. 

6 Target audience

Bachelor and master level students specialising in IT. 

7 Requested knowledge

General knowledge of IS/IT. General programming experience. 

8 Layout of the teaching

Teaching will be given as six 1.5-hour lectures. The first 4 lectures will present some methodologies and technologies. Then the students will be asked to architect a solution for a practical situation. The last session will be devoted to presenting the students’ solutions and wrapping up this mini-course. 


Better architecting with – digital models

1 The expression “all models are wrong” is wrong

"All models are wrong" is a well-known aphorism from statistics (attributed to the statistician George Box, 1976), which has been used in other disciplines. A modern book on systems engineering claims that "The map is not the territory, the menu can’t be eaten, the drawings do not fly, the source code does not store the values of its variables during execution". Let's analyse these statements.

The territory existed before its map. A map is an informational (or digital right now) "twin" of the territory. Since the territory is a natural (made by nature) object, its digital "twin" (made by man) is secondary and approximative.

The menu is the chef's plan and, at the same time, the informational (or digital right now) "twin" of kitchen services. Kitchen services are planned ahead for several good reasons: to discuss them with all the people involved, to organize work, to optimize costs and to reduce risks. Thus the menu (as a planning tool) helps in achieving the result, but it is not mandatory for providing services. In this case, the informational (or digital right now) "twin" may appear before the physical "twin".

It is clear that the drawings do not fly, but there is no flight without them. Those drawings are a necessary "part" of the aircraft to be manufactured according to them. It is clear that the drawings, in themselves, are not a sufficient "part" of the aircraft, because there is a long way from the drawing to a working copy. However, we can consider the drawings to be informational (or digital right now) "ancestors" of aircraft. In this case, the informational (or digital right now) "twin" is necessarily created before its physical "twin".

Well, finally, the computer program and its source code. What is the relationship between them? There is no program without its source code. The source code can be expressed in several views: in a high-level language and in the language of machine instructions (i.e. assembler). This is common but not necessary, because the source code can be interpreted directly without being translated into machine instructions. Wait, this looks rather familiar.

Wow, this is the genetic code of a bionic program! The genetic code does not have all the details, but it determines (albeit partially) the future bionic system. Of course, any bionic system is an adaptive system with a complex "bootstrap" procedure, while modern software systems must be highly dependable.

So, in the digital world we copy mother nature: we create a piece of digital genetic code (in some programming language) and from it we create a program with the help of the digital environment. Thus, the source code is the main part of the program. Both are digital artefacts. In this case, there is only an informational (or digital right now) "twin". Well then, it is not a "twin" but an "original"! And this is a digital model.

Physical form and its digital form:
  1. Territory (physical) and Map (digital). 
  2. Meal (physical) and Menu (digital, probably created first). 
  3. Plane (physical) and Drawings (digital, mandatorily created first). 
  4. Program (digital only, inevitably). 

2 Obliterating differences between architecture and its description in the digital world

For the digital world, we must slightly adjust some of the provisions of ISO/IEC/IEEE 42010 Systems and software engineering – Architecture description. This standard clearly separates the architecture of a system from the description of that architecture. In accordance with this standard, the architecture description consists of models. But in the digital world, models can also be elements of the system-of-interest.

The usage of digital models:
  1. simplifies the choice of elements and system-of-interest options, 
  2. allows making predictions about the behaviour of the system-of-interest and 
  3. replaces the system-of-interest itself, for example, for training purposes. 

Such digital models are machine-readable and machine-executable. For example, a business process is not only an illustration, but also a piece of the source code of the system-of-interest. This increases the importance of Domain-Specific Languages (DSLs), through which some elements of the system-of-interest can be defined in business terms. For example, BPMN is a DSL. (Many years ago, with the advent of SGML and HTML, people began to say: the program becomes a document, and the document becomes a program.) Also, the appearance of machine-executable elements of the system-of-interest in the early stages of its life cycle allows us to speak about the emergence of the BizDevOps culture as a natural up-stream extension of the DevOps culture.

The logic of the architecture viewpoints changes. Now they are designed to systematically create model-types, some of which will be digital, i.e. machine-executable and/or machine-readable elements (or nomenclatures, for example, a list of all roles). Architecture viewpoints become something like aqueduct columns that support the logic of creating digital systems.

Fragment of the longest (132 km) Roman aqueduct, Tunisia.
(by the way, some parts of this structure are still working and used by local people)

Relationships between models also change. Previously, it was considered that models and views were created solely for stakeholders and, often, different models were created by different people; thus models had to be permanently aligned, e.g. by a chief architect. With digital models, there is a lot of interest in semi-automatic and automatic creation of some models from already existing models. For example, if there is a functional map of the organization, then the initial version of the organizational structure can be offered automatically.

It can be observed that the difference between the system-of-interest and its architecture description is disappearing in two directions:
  • Some elements of the system-of-interest can be used instead of some architecture description models. 
  • Some architecture description models become system elements. 

Ideally, the whole system description should be automatically generated from the existing system elements. This reminds us of the “literate programming” of Prof. Knuth – see https://www-cs-faculty.stanford.edu/~knuth/lp.html

It is clear that, for each type of system, some of its digital models are system-forming elements. Imagine a directed acyclic graph of dependencies, with models as nodes, and let us assign to its edges a measure of the complexity of the “transition” between nodes. Then the models from which one can easily create the majority of other models are the system-forming models.
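
This idea can be sketched as a shortest-path computation over a hypothetical model-dependency graph; the models and edge costs below are invented for illustration:

```python
import heapq

# Models as nodes; edge weight = complexity of deriving one model from another.
# A "system-forming" model is one from which the other models are cheapest
# to derive (smallest total shortest-path cost). Graph is a hypothetical example.
EDGES = {
    "functional_map": {"org_structure": 2, "service_list": 3, "process_model": 2},
    "process_model":  {"org_structure": 1, "service_list": 1},
    "org_structure":  {},
    "service_list":   {},
}

def derivation_cost(src: str) -> float:
    """Sum of shortest-path costs from src to every model (inf if unreachable)."""
    dist, pq = {src: 0}, [(0, src)]
    while pq:
        d, n = heapq.heappop(pq)
        for m, w in EDGES[n].items():
            if d + w < dist.get(m, float("inf")):
                dist[m] = d + w
                heapq.heappush(pq, (d + w, m))
    return sum(dist.get(m, float("inf")) for m in EDGES)

best = min(EDGES, key=derivation_cost)
print(best)  # functional_map
```

In this invented graph the functional map is the system-forming model, matching the earlier remark that an organizational structure can be derived from a functional map.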

All this is partially described in the series https://improving-bpm-systems.blogspot.com/search/label/%23BAW



Many viewpoints on the concept capability

This blogpost is based on several recent LinkedIn discussions about the concept “capability” (see their URLs at the end of this blogpost).

Those endless discussions only confirm a well-known systemic observation: a complex concept is better understood via its relationships to other concepts. Thus, to define the concept “capability”, it is necessary to define several related concepts together, such as “function”, “service” and “process”. (Other concepts could be added on demand.)

Another complexity is, again, a well-known systemic observation: different people see the same thing differently. This is called an “architecture viewpoint” (just as a 3D object may have 3 projections). The main problem with architecture viewpoints is that they must be aligned.

The aim of this article is to outline a main (or master) viewpoint which allows aligning all other viewpoints. (With special thanks to Michael Poulin for his valuable comments on this article.)

1 Different viewpoints on capability

So far, several viewpoints on the concept “capability” have been identified.

Demand viewpoint – to achieve our mission and vision, we need a system with a particular performance of doing something. Demand-capability is a relative measure of the ability of a system (or its element) to do something at a particular level of performance.

This viewpoint is about WHAT and HOW-WELL without any information about WHO, HOW, WHERE, WITH-WHAT-RESOURCES, etc.

Supply viewpoint – we have a system with a particular performance because we made it and deployed some resources. Supply-capability is the proven performance of a system (or its element) doing something.

This viewpoint is about WHAT, HOW-WELL, WHO, HOW, WHERE, etc.

Reference viewpoint – all systems with a similar purpose (or mission) should be able to do this. Reference-capability is the ability of a system (or its element) to do something.

This viewpoint is about WHAT only. Typically, the reference viewpoint relates to a particular type of business, e.g. banking, rent-a-car, telecom, etc.

2 Let us classify some of the existing approaches

The list below is copied from https://www.dragon1.com/terms/capability-definition to be annotated.

ArchiMate 3.1: A capability represents an ability that an active structure element, such as an organization, person, or system, possesses. AS: It seems that it is the supply viewpoint.

TOGAF 9.1: A capability is an ability that an organization, person, or system possesses. Capabilities are typically expressed in general and high-level terms and typically require a combination of organization, people, processes, and technology to achieve. For example, marketing, customer contact, or outbound telemarketing. AS: It seems that it is the supply viewpoint.

BIZBOK 4.1: A capability is a particular ability or capacity that a business may possess or exchange to achieve a specific purpose or outcome. AS: It seems that it is the reference viewpoint.

Bas van Gils (Strategy Alliance): CAPABILITY = CAPacity x ABILITY. - ABILITY refers to skills and proficiency in a certain area. It should be noted that ability is a relative term: one actor (human, machine, computer) may have higher levels of proficiency than others. The level of ability can be increased due to (formal) training, and practice. - CAPacity refers to the degree to which actors (human, machine, computer) are available to use their skills to achieve a goal. Capacity can be influenced by freeing up / adding resources to the available pool. More information on the Strategy Alliance Website. AS: It seems that it is the supply viewpoint.

Tom Graves (http://weblog.tetradian.com/2013/12/14/definitions-on-capability/ ) RE “Performance is an attribute of a service – not of a capability as such”. AS: It seems that it is the supply viewpoint.

Mark Paauwe (https://www.dragon1.com/terms/capability-definition ) A capability is a set of tasks that a system is potentially able to perform at a certain performance level, but only with the use of required resources. AS: It seems that it is the supply viewpoint.

Michael Poulin (https://organicbusinessdesign.com/agile-business-capability-part-1/ ) - A business capability is an ability of an entity - person or organisation - to create or deliver certain Real world Effect (outcome) in particular business execution context. If the context changes, yesterdays capability can vanish. A fact that you did something yesterday does not mean (itself) that you can do this tomorrow. A capability exists only if there are all needed resources available for the capability realization. No resources - no capabilities; competencies/knowledge/skills are not enough for having the capability. You lose capability if you outsource it. AS: It seems that it is the supply viewpoint.

Richard Hillier - A business capability is the ability to perform a business activity which is recognized as being required for success and which needs to be specifically managed. AS: It seems that it is the supply viewpoint.

So far, there is no demand viewpoint. Why?

3 Where is the demand viewpoint? 

Any demand viewpoint is dynamic and organisation-specific. In any business, “bigger” capabilities (with emergent characteristics) are assembled from “smaller” capabilities (available or not yet). Because such emergent characteristics are exhibited as the result of interactions of the “smaller” capabilities between themselves and with other capabilities, some coordination of such interactions is mandatory.

Note: It is not a bottom-up approach, but a recursive combination of analysis (finding what "smaller" capabilities are necessary) and synthesis (proving that "smaller" capabilities and some coordination between them achieve "bigger" capability). 

Imagine that an enterprise or solutions architect has to implement a particular demand-capability within an organisation (which is, obviously, a system). There are several choices:
  1. Implement this demand-capability within the organisation as a coordination of some other capabilities.
  2. Outsource this demand-capability via a Business-to-Business (B2B) partnership and access it in accordance with a contract between the two organisations.
  3. Acquire this demand-capability as a commodity, maybe via a tender.
  4. Ignore this demand-capability, providing some good reasons.

With option 1, the enterprise architect must choose a set of “smaller” capabilities and a way to coordinate them. The reference viewpoint, if any, may help to find those “smaller” capabilities. (Of course, some “smaller” capabilities may not be available yet and will have to be implemented recursively.)

Also, saying that “to implement this capability we will use those two capabilities” is not enough, because the way those capabilities are coordinated will affect the performance of this capability. Of course, various estimations of the performance of this future supply-capability may be provided.

Any demand-capability or reference-capability which is implemented by (or within) the organisation is called a function. Creating a function implies that several organisational, technical, contractual, resourcing, staffing and other changes must be carried out within the organisation. A function immediately has some performance approximation as a supply-capability, i.e. its expected performance is stated. Ideally, the performance of such a supply-capability exceeds the requested performance of the related demand-capability. (Sometimes the gap between them can be huge – remember that we never drive our cars at their maximum speed.)

An illustration of the relationships between various concepts is shown below. The left half of this illustration is the reference map of an organisation and the right half is the functional map of this organisation. The functional map is smaller than the reference map, because some capabilities were implemented as commodities or via B2B partnerships. A formal procedure for moving from “left” to “right” can be produced on demand.

Because functions can’t provide a good approximation of their expected performance, organisations use services: a service is an arrangement to access one or more functions on a contractual basis. (Note: such access may be within the same organisation as well as between different organisations.) Because any service must take into consideration its contract (including SLA) and its expected usage, its performance may be anticipated better than that of functions. Creating services also implies some organisational, technical, contractual, resourcing, staffing and other changes.

Nevertheless, neither functions nor services specify explicitly the coordination between “smaller” capabilities, thus their estimations of the expected performance are still a guess. So far, only Business Process Management (BPM) allows the organisation to build, run and improve “bigger” capabilities in a predictive, transparent and provable manner, because a process is an explicit, formal, machine-readable and machine-executable coordination. Obviously, one can evaluate (with a high level of confidence) the performance of a “big” supply-capability by knowing the process, its usage and the performance of the “small” supply-capabilities.
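
As a toy illustration of this evaluation (the capabilities, durations and process structure below are all invented), a process that first runs two checks in parallel and then two steps in sequence has a computable expected duration:

```python
# Expected performance of a "bigger" capability, computed from the explicit
# coordination of "smaller" capabilities. Durations in days; a parallel
# block takes the max of its branches, a sequence block takes the sum.

DURATIONS = {"credit_check": 2, "inventory_check": 1, "approval": 1, "shipping": 3}

PROCESS = [                                      # hypothetical coordination
    ("parallel", ["credit_check", "inventory_check"]),
    ("sequence", ["approval", "shipping"]),
]

def expected_duration(process) -> int:
    total = 0
    for kind, steps in process:
        costs = [DURATIONS[s] for s in steps]
        total += max(costs) if kind == "parallel" else sum(costs)
    return total

print(expected_duration(PROCESS))  # 6
```

Without the explicit process structure, the same four durations would tell us nothing about the overall performance; with it, the estimate follows mechanically.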

A few notes: Considering that there are many coordination techniques, there are no principal differences between BPM and Adaptive Case Management – see http://improving-bpm-systems.blogspot.bg/2014/03/coordination-techniques-in-bpm.html . BPM is actually a trio: a discipline to manage a business via processes, software to manage the processes themselves (BPM-suite tools), and practice & architecture. Also, orchestration and choreography are variants of coordination.

Some assets and skills are required to operate services and processes. Obviously, assets and skills may be outsourced (or insourced).

Organisational structure depends on the structure of functions (or functional map). (Think about the separation of responsibilities). http://improving-bpm-systems.blogspot.bg/2011/10/enterprise-pattern-structuring-it.html http://improving-bpm-systems.blogspot.bg/2012/01/enterprise-pattern-sito-extended.html

4 Big picture

The overall logic is the following:

  1. Capability. The organisation has to be able to do something (because of the mission) with a particular level of performance (because of the vision).
  2. Function. Some of the needed (demand-)capabilities must be implemented within the organisation, for example, because they are core-business capabilities. By definition, a function is already a supply-capability (as a system element of an organisation as a system) and some assets, skills and coordination have to be provided. 
  3. Service. Although a function is already a supply-capability, the evaluation of its performance is rather approximative. A service allows improving the evaluation of its expected performance by specifying its contractual conditions.
  4. Process. For a better estimation of the expected performance, processes (actually, BPM) offer an explicit coordination of “smaller” capabilities.

5 Conclusion

To avoid confusion when talking about capabilities, please, be explicit about what viewpoint(s) you are using. Also, please, define related terminology up-front.


Related LI discussion

Other discussions:







Better architecting with – explicit #Digital #Systems Life Cycle (DiSyLiCy)

This blogpost continues the "Better Architecting With" series http://improving-bpm-systems.blogspot.bg/search/label/%23BAW

1 About Digital Systems

A digital system is a system which builds the life cycles of its primary artefacts on the primacy of an explicit, formal, computer-readable and computer-executable presentation of those artefacts (in other words, a digital presentation of those artefacts). For example:
  • a house is designed digitally as an “ideal digital house”;
  • this digital form drives 3D printers and robots to build the real house;
  • the real house is equipped with IoT sensors which generate the “real digital house”; and
  • the differences between the “ideal digital house” and the “real digital house” are used for maintenance and various improvements.
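
The last step, comparing the two digital "twins", can be sketched as a simple deviation report (the attributes and tolerances are invented for illustration):

```python
# Compare the "ideal digital house" (design) with the "real digital house"
# (IoT readings) and report deviations beyond tolerance. Illustrative only.

ideal = {"roof_tilt_deg": 30.0, "wall_humidity_pct": 5.0, "window_count": 12}
real  = {"roof_tilt_deg": 29.6, "wall_humidity_pct": 11.2, "window_count": 12}

TOLERANCE = {"roof_tilt_deg": 1.0, "wall_humidity_pct": 3.0, "window_count": 0}

deviations = {
    attribute: real[attribute] - ideal[attribute]
    for attribute in ideal
    if abs(real[attribute] - ideal[attribute]) > TOLERANCE[attribute]
}
print(sorted(deviations))  # ['wall_humidity_pct']
```

A maintenance process would then be triggered only for the attributes that appear in the deviation report.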
Digital systems employ the concept of “digital twins” – computerized companions of physical assets that can be used for various purposes. The relationship between digital twins and physical assets is the following:
  • for a man-made object, the digital twin comes first;
  • for a nature-made object, the digital twin comes second.
For more details about digital, please read http://improving-bpm-systems.blogspot.bg/2015/03/entarch-view-on-ditigal.html

Digital systems are uber-complex real-time systems of cyber-physical, socio-technical and classic IT systems with the following characteristics:
  • digital data and information in huge volumes;
  • software-intensive;
  • distributed and decentralized;
  • great influence on our society;
  • ability to interact with the physical world;
  • many essential characteristics which are required by design and by default (e.g. security, safety, privacy and resilience);
  • low cost of operation;
  • short time to market;
  • self-referential (some), and
  • long and complex life cycle.
This document outlines an approach for building digital systems which is based on synergy between:
  1. the project (or work) management practices and 
  2. the digital systems life cycle management practices.
This approach facilitates optimisation of the work management practices for the digital systems life cycle. For example, if a digital system has two major components (bespoke and COTS), then each of them may have its own work management practice.

Let us consider the following hierarchy:
  1. The type of the system-of-interest defines the DiSyLiCy of the system-of-interest (as a variant of the generic DiSyLiCy template).
  2. The DiSyLiCy defines the DiSyLiCy management (because each phase of the DiSyLiCy may have its own management practice).
  3. The DiSyLiCy management defines the work planning (overall and per phases) methods.
  4. Work planning defines the work execution management (i.e. project management).
Note: In the context of this document, the concepts “system-of-interest” and “solution” are used interchangeably because a system-of-interest is a solution of a problem.

2 WHY the Digital Systems Life Cycle (DiSyLiCy) is important

We are dealing more and more with digital systems. They are intrinsically complex systems in which software primarily defines the system as a whole. The recent trends in digital systems show that such systems have the following common characteristics.
  • Such systems are assembled from many distributed elements which are deployed in various computing environments: in-house, in-cloud (SaaS, PaaS), at partners.
  • Elements of such systems have different granularity, e.g. platforms, applications, services and microservices.
  • Elements of such systems have different life cycles, e.g. some elements, especially business-facing, may require changes more often.
  • Elements of such systems have different ownership: FOSS, bespoke, commodity, community, service providers.
  • Elements of such systems may be shared with other versions of this system and/or with other software-intensive systems.
  • There are many internal and external drivers for changes of those elements, e.g. security threats, natural evolution of their elements, morphing business requirements, continuous improvements.
  • The speed of changes in their elements must fit the required urgency, e.g. time-to-market, levels of the security risks, etc.
  • The trustworthiness (security, safety, resilience, privacy) of their elements becomes very critical in the digital era because even one “weak link” in an assembly may ruin common efforts.
  • The TCO of such systems follows the classic 20/80 ratio – 20 % to build (development and transition phases) a system and 80 % to operate and evolve it. 
Obviously, concentrating only on the development phase of such systems is not enough because such systems, after being in production, must evolve very fast and in many unpredictable ways. Thus all the phases of the whole life cycle are equally important.

Also, a new “non-functional” (or quality) system characteristic, called “variability”, becomes very critical. “Most modern software needs to support increasing amounts of variability, i.e. locations in the software where behaviour can be configured. This trend leads to a situation where the complexity of managing the amount of variability becomes a primary concern that needs to be addressed.” ( http://program-transformation.org/Variability/SoftwareVariabilityManagement ).
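The notion of variability quoted above — named locations where behaviour can be configured — can be sketched as explicit "variation points" bound by configuration rather than hard-coded. This is a minimal illustration with hypothetical names (a "tax" variation point with regional variants), not a reference to any particular variability-management tool:

```python
# A minimal sketch (hypothetical names): a variation point is an explicit,
# named location where behaviour is selected by configuration rather than
# hard-coded into the software.

VARIANTS = {
    "tax": {
        "eu": lambda amount: amount * 1.20,   # 20 % VAT variant
        "us": lambda amount: amount * 1.08,   # 8 % sales-tax variant
    },
}

def bind(point: str, variant: str):
    """Resolve a variation point to a concrete behaviour."""
    return VARIANTS[point][variant]

config = {"tax": "eu"}                         # deployment-time configuration
price_with_tax = bind("tax", config["tax"])(100.0)
```

The complexity the quotation warns about appears as soon as many such points exist and their variants constrain each other; managing that combinatorial space is exactly the "primary concern" of variability management.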

3 HOW the Digital Systems Life Cycle (DiSyLiCy) is composed

The assembled nature of software-intensive systems, certainly, complicates their life cycle which must address:
  • the life cycle of each element and 
  • the life cycle of the system as a whole.
It is clear that such systems share some common characteristics with systems-of-systems (making a system from elements without having direct ownership of them). Thus, coordination is critical for the seamless transition from one phase to another and for the seamless integration of various elements.

The necessary coordination is achieved by a combination of the following:
  • Architecture which is critical for good, right and successful software-intensive systems.
  • The systems approach providing a transversal systemic description which comprises several views and models. They evolve together during the system life cycle. http://improving-bpm-systems.blogspot.ch/2017/07/better-architecting-with-systems.html
  • Explicit and tailorable generic DiSyLiCy template which is adjustable to the unique needs of the system-of-interest. This life cycle recommends providing various views and models at different phases.
  • Various architectural styles and techniques to optimise DiSyLiCy within phases and beyond phases for the system-of-interest. 
  • Various work management practices for each phase and beyond phases.

4 WHAT is the Digital Systems Life Cycle (DiSyLiCy)

4.1 Overview of the generic DiSyLiCy template

The DiSyLiCy template comprises the following phases:
  1. Business case phase
  2. Architecting (or elaboration) phase
  3. Construction (or build or implementation) phase which may comprise the following sub-phases:
    • Architecting sub-phase – if necessary
    • Construction sub-phase
    • Transition sub-phase – if necessary
  4. Transition (or deployment) phase
  5. Pilot (or lab) phase – optional
  6. Production (or operating) phase
    • Operations sub-phase
    • Maintenance (or evolution) sub-phase – repetitive
      • Architecting sub-phase – if necessary
      • Construction sub-phase
      • Transition sub-phase
  7. Retiring phase
  8. Decommissioning phase
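The template above is essentially a nested, annotated structure, and "tailoring" it to a concrete system-of-interest amounts to pruning its optional parts. A minimal sketch (the phase names are from the template; the `tailor` helper and its pruning rule are illustrative, producing only the simplest variant):

```python
# A minimal sketch: the generic DiSyLiCy template as a nested data
# structure; tailoring the simplest variant prunes the optional phases.

DISYLICY_TEMPLATE = [
    {"phase": "Business case"},
    {"phase": "Architecting"},
    {"phase": "Construction", "sub": [
        {"phase": "Architecting", "optional": True},
        {"phase": "Construction"},
        {"phase": "Transition", "optional": True},
    ]},
    {"phase": "Transition"},
    {"phase": "Pilot", "optional": True},
    {"phase": "Production", "sub": [
        {"phase": "Operations"},
        {"phase": "Maintenance", "repetitive": True, "sub": [
            {"phase": "Architecting", "optional": True},
            {"phase": "Construction"},
            {"phase": "Transition"},
        ]},
    ]},
    {"phase": "Retiring"},
    {"phase": "Decommissioning"},
]

def tailor(template):
    """Drop optional phases (recursively) to obtain the simplest variant."""
    return [
        {**p, "sub": tailor(p["sub"])} if "sub" in p else p
        for p in template
        if not p.get("optional")
    ]
```

A real tailoring would, of course, keep the optional phases that the system-of-interest actually needs rather than dropping them all.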

Without sub-phases, the DiSyLiCy template is depicted in the figure below which shows how a software-intensive system becomes more concrete during its life cycle.

The complexity of the construction phase must correspond to the complexity of its software-intensive system. The construction phase may simultaneously be:
  • recursive – complex system elements must be architected to produce elements which are simple enough to construct;
  • concurrent – some sub-phases may be executed in parallel (depending on the availability of resources and the dependencies between constructed elements).
This variant of the generic DiSyLiCy template is depicted in figure below. 
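The recursive aspect of the construction phase can be sketched as a recursive function: a complex element is architected into parts, and recursion continues until the parts are simple enough to construct directly. A minimal sketch with an invented example system (the names and the "no further parts = simple enough" rule are illustrative):

```python
# A minimal sketch: recursive construction keeps architecting complex
# elements until they are simple enough to construct directly.

def construct(element):
    """Return the flat list of simple elements actually constructed."""
    parts = element.get("parts")
    if not parts:                 # simple enough: construct directly
        return [element["name"]]
    built = []
    for part in parts:            # architect further, then recurse;
        built.extend(construct(part))  # independent parts could run concurrently
    return built

system = {"name": "shop", "parts": [
    {"name": "web-ui"},
    {"name": "backend", "parts": [{"name": "orders"}, {"name": "billing"}]},
]}
```

The concurrent aspect would replace the sequential loop with parallel execution of parts that have no dependencies between them.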

Another variant is to decompose a complex system-of-interest during a single architecting phase.

In the same way, the production phase may have several maintenance phases as shown in the figure below.

Practically all the phases may be repetitive if some conditions of their completion have not been met.

Some phases may be carried out iteratively (or incrementally) in a few steps to achieve the target situation. Such an iterative way of execution is depicted in the figures below.

An initial situation

The situation after the first iteration.

The situation after the second iteration.

And the final situation.

Please note that such an iterative way of execution is very similar to agile management practices.

4.2 The DiSyLiCy phases vs the systemic description views

At each DiSyLiCy phase, the systemic description of the system-of-interest is updated. In other words, some views (and pertinent models) are prepared and some views (and pertinent models) are updated. The simplified (without sub-phases) dependencies between the DiSyLiCy phases (rows) and the systemic description views (columns) are shown in the table below.

Legend for the table:
  • AGG – aggregated
  • DET – detailed
  • UPD – updated
  • N/A – not applicable
Naturally, during the life cycle, the systemic views (and pertinent models) gradually become more and more detailed (or concrete). See the blogpost http://improving-bpm-systems.blogspot.ch/2017/07/better-architecting-with-systems.html for the mapping between views and models.

5   Management of the DiSyLiCy

5.1 General

There are two types of logic in any management practice:
  • specific logic which depends on the life cycle (thus called life cycle management), e.g. which phases to finish or which phases to start, and
  • generic logic which does not depend on the life cycle, e.g. which units of work to finish and which units of work to start, depending on various conditions (typical in programme and project management) such as the availability of some resources, e.g. free staff. This is also called work management.

These two logics are strongly intertwined in the life cycle management. For example:
  • the decision to implement a new system depends on this system’s potential business value and some capacity of some resources (generic logic);
  • the decision to complete the architecting phase depends on the quality of the systemic description (specific logic), and
  • the decision to start in parallel one or more construction phases depends on capacity of some resources (generic logic).
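The second example above — completing the architecting phase — combines both logics in a single decision. A minimal sketch (the quality threshold, state keys and function name are hypothetical):

```python
# A minimal sketch (hypothetical conditions): a phase-completion decision
# combines specific (life-cycle) logic with generic (work) logic.

def may_finish_architecting(state: dict) -> bool:
    # specific logic: tied to this life cycle phase
    description_ok = state["systemic_description_quality"] >= 0.8
    # generic logic: independent of the life cycle (resource availability)
    reviewers_free = state["free_reviewers"] > 0
    return description_ok and reviewers_free

ok = may_finish_architecting(
    {"systemic_description_quality": 0.9, "free_reviewers": 2}
)
```

Keeping the two kinds of conditions separate in this way makes it possible to reuse the generic logic across all phases while varying the specific logic per phase.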

Ideally, the life cycle can be presented as a set of interrelated units-of-work which are managed by these two logics. As said before, each unit-of-work has (minimum) two associated events (the start and the finish) at which these management logics are applied. However, there are a lot of other ad-hoc events at which these two logics must be applied as well. For example, various incidents, capacity fluctuation, etc.

Thus the life cycle management is based on a set of events and the following considerations:
  • there is some natural hierarchy and some coordination between events;
  • some of those events are considered as management points at which some management decisions have to be taken;
  • some management decisions may require different levels of authority;
  • some management decisions may be delegated;
  • any management point is associated with a set of rules based on specific and general logic;
  • some events can be planned (they are also called milestones);
  • some work planning methods are available;
  • missing a milestone is also a management event;
  • the more events are planned and the fewer of them are missed, the more seamless the execution of the life cycle;
  • etc.
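The considerations above amount to an event-driven view of life cycle management: events raise management points, and each management point has a set of rules attached. A minimal sketch (the event types, rule bodies and dispatcher are illustrative, not a prescribed design):

```python
# A minimal sketch (illustrative events and rules): life cycle management
# as rules evaluated at management points raised by events.

def on_event(event: dict, rules: dict) -> list:
    """Apply the rules registered for this management point; return decisions."""
    return [rule(event) for rule in rules.get(event["type"], [])]

rules = {
    # missing a milestone is itself a management event
    "milestone_missed": [
        lambda e: f"escalate {e['milestone']} to the governance body",
    ],
    "phase_finished": [
        lambda e: f"start successor phases of {e['phase']}",
    ],
}

decisions = on_event(
    {"type": "milestone_missed", "milestone": "architecting-review"},
    rules,
)
```

Delegation and levels of authority would appear here as different rule sets attached to the same management point for different decision makers.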
Any classic project management is based on the management of work with the use of generic logic only.

5.2 Review of some pertinent management practices

Let us illustrate the life cycle management and work management practices.

PMI is a work management practice which is based exclusively on the generic logic and a project life cycle. Obviously, it is mandatory to map the life cycle management of the system to be built onto the project management life cycle. PMI advocates developing a Work Breakdown Structure (WBS), which is a hierarchical decomposition of the total scope of work to be carried out by the project team to accomplish the project objectives and create the required deliverables. Obviously, the WBS is a waterfall-like bridge to the life cycle management.

PRINCE2 is a work management practice which is based exclusively on the generic logic and a project life cycle (which is more elaborate than PMI’s).

Waterfall is a life cycle management practice which executes all its phases sequentially and tries to plan all works in advance. But the usage of the same planning method for all the phases is very inefficient.

Iterative is a life cycle management practice which allows incremental and iterative execution of some its phases.

HERMES is an IT-oriented project management practice which uses a very simplified IT systems life cycle as the project life cycle.

TOGAF is a life cycle management practice which covers, primarily, the implementation of IT solutions. Its Architecture Development Method (ADM) was originally “waterfall-like”, but recently some iterations have been admitted.

ITSM is a life cycle management practice for IT services. It provides some planning for related works by outlining all necessary processes.

IT4IT is a life cycle management practice for IT solutions. It is an up-streamed version of the ITSM; however, IT4IT says nothing about how to implement IT solutions.

DevOps is a life cycle management practice for IT changes, covering from coding to monitoring.

Agile (SCRUM) is a work management practice with an emphasis on software development. In other words, it is a mixture of a life cycle management practice and a work management practice, leaning to the latter. It is very light on the solution architecture, which is presented as a set of small stories. Thus, the creation of the work to be done is rather ad hoc. SCRUM is very strong with the work management by time-bound sprints; it promotes incremental and iterative execution of works. A short-time planning is possible. The SCRUM work management is presented in the figure below.

Case management is a work management practice. A case is a circumstance or undertaking that requires a set of works to obtain an acceptable result or achieve a goal. Case management focuses on the subject over which the works are performed (for example, a person, a legal case, an insurance claim), and is led by the gradually emerging circumstances of the case.

Classic process management is a work management practice which formally defines a plan (as a flow-chart) of work. A flow-chart may be mimicking a life cycle. Thus, the planning of work is very explicit.

PDCA is a work management practice for small changes which is carried out in four steps: Plan, Do, Check, Act.

Kanban is a method for work planning (scheduling).

Critical path is a method for work planning (scheduling) for projects and processes.

The table below shows how these management practices compare to the DiSyLiCy phases. Because some of those practices are enterprise-wide, only the pertinent parts of them are considered. For example, only 3 of the 4 IT4IT value streams are considered (R2D, R2F, D2C).

(Table: the management practices vs. the DiSyLiCy phases, starting from the Business case phase.)
This table shows that there is no existing management practice which fully covers the DiSyLiCy.

5.3 Resume

The management of the DiSyLiCy is based on tailoring of the generic DiSyLiCy template and on recommendations about which work management practices can be used for each phase.

6 Detailed description of the DiSyLiCy phases

Because of this document’s size, only one phase is described below.

6.1 Business case phase


An appropriate authority (e.g. a corporate-wide standing Business & IT governance body) mandates an ad-hoc team for this phase to prepare an estimation for a solution of a given problem.

The goal of this phase is to estimate a solution so that this standing governance body can take an informed “Go / no-Go” decision.

Typical deliverables of this phase are the following:
  • Solution estimation: initial situation, objectives, scope, assumptions, constraints, schedule, required resources, risks, cost estimation, ROI, etc.

Acceptance practices

The phase team can validate the deliverables with other standing governance bodies, e.g. ARB.

Work management practices

The phase team is composed as follows:
  • business focal point (or Product Owner);
  • business architect or domain business architect;
  • business analyst(s) or domain business analyst(s);
  • solution architect.

Viewpoints and model kinds to be considered


Value view (may be aggregated) with one or many of the following models:
  • Problem space description
  • Problem space influencing factors study
  • The problem space terminology
  • The problem space constraints
  • The mission statement and the vision statement
  • The context (using systems, enabling systems, partner systems) for a future solution (i.e. the system-of-interest)
  • The future solutions’ stakeholder nomenclature
  • Stakeholders’ concerns nomenclature
  • Dependencies between architecture viewpoints, systems roles, stakeholders, stakeholders’ concerns and categories of concerns
  • Some classifications which are specific for the problem space and pertinent for the solution space
  • The high-level requirements (WHO, WHAT, WHY)
  • The high-level stories (WHO, WHAT, WHY, WHERE, WHEN)
  • The high-level use cases (WHO, WHAT, WHY, WHERE, WHEN, HOW)
  • The common high-level requirements
  • The problem space coverage by the high-level use cases
Big picture view (may be aggregated) with one or many of the following models:
  • The solution space terminology
  • The solution space constraints
  • Some classifications which are specific for the solution space
  • Illustrative model(s) of the future solutions including relationships between top-level structure and some context elements
  • The solution space essential characteristics
  • Dependency matrix: problem space common high-level requirements vs. solution space essential characteristics
  • The architecture principles of the solution space
  • The dependency matrix: essential characteristics vs. architecture principles
  • The high-level design for the future solutions
Capability view (may be aggregated) with one or many of the following models:
  • Level 1 capability map
  • Level 2 capability map
  • Level 3 capability map
  • Heat maps
Risk view (may be aggregated)