2016-09-26

Some #entarch concepts derived from the #systemsapproach

In this blogpost I use the systems approach to derive some definitions for the Enterprise Architecture (EA) subject field. The basics are in slides 3-12 of http://improving-bpm-systems.blogspot.ch/2016/07/enterprise-architecture-entarch-as.html .

A system is a set of interacting discrete parts organised as a whole which exhibits (as the result of the interaction between the parts) some emergent characteristics indispensable for achieving one or more stated purposes.

Any system-of-interest has an architecture, which is the totality of fundamental concepts or properties of the system in its environment, embodied in its discrete parts and relationships, and in the principles of its design and evolution. The architecture of a system-of-interest may be accidental or intended, depending on how the system was constructed. In any case, any serious change in an enterprise-of-interest implies a change in its EA.

An enterprise is an emotive or motivational structure, bounded by a shared vision, shared values and mutual commitments for joint efforts to achieve one or more stated purposes. An enterprise is realised by an organisation, which is a legal structure, bounded by rules, roles and responsibilities. Obviously, any modern enterprise together with its organisation is a socio-technical system (in which the interaction between people and technology is a dominant consideration). Also, an enterprise is a self-evolving system.

Thus Enterprise Architecture (EA) is the architecture of an enterprise as a socio-technical system. (Although correct, this definition is useless for many people.) The main and unique power of EA is the ability to objectively estimate the effect (cost, benefits and risks) of potential internal changes. For example, what could be the effect of changes in a business unit which necessitate some modifications in some enterprise and departmental applications?

A good EA is the primary enabler for internal transformations of any extent: project, programme and strategy. For any transformation, the current EA is used to define and validate the future version of the EA (called the target architecture or blueprint). For example, a good EA makes it possible to evaluate the implementability of a proposed strategy.

Usually, an EA is described via a set of architecture viewpoints. Those architecture viewpoints define a set of model kinds which establish relationships between various artefacts: vision, mission, objectives, rules, servers, etc. Architecture viewpoints applied to a system-of-interest generate views which comprise some models.

Ideally those viewpoints are aligned, but in reality this is often not the case because different viewpoints are created by different people.

Because of the socio-technical nature of enterprises and their high level of complexity, EA has historically been considered as two domain architectures:
  1. Business architecture is the architecture of an enterprise considered as a social system for delivering Value (as products and/or services). The main artefacts in business-centric viewpoints are: mission, vision, products, services, directives, objectives, processes, roles, etc.
  2. IT-architecture is the architecture of an enterprise considered as an IT-system. The main artefacts in IT-centric viewpoints are: IT tools, processes and methodologies, and the associated equipment employed to collect, transform, transport and present information.

The dependency between those architectures is, in theory, very straightforward: the business architecture defines the IT-architecture. But, in practice, the IT-architecture very often evolves much slower than the business architecture, so there is always a gap or misalignment between them.

To avoid this gap, it is necessary to:
  1. version all the artefacts during their lifecycle;
  2. evolve artefacts to become digital, externalised, virtual and components of clouds;
  3. model explicitly all relationships between artefacts (a minimal sketch of this and of versioning follows the list);
  4. make all models machine-executable; and
  5. be able to convert models from one view to models in another view.
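
As an illustration of items 1 and 3, here is a minimal sketch assuming a simple in-memory repository; the artefact kinds and relation names are illustrative, not from any particular EA tool:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Artefact:
    """An EA artefact (objective, process, server, etc.) with an explicit version."""
    kind: str      # e.g. "objective", "process", "server"
    name: str
    version: int

@dataclass
class Repository:
    """Stores versioned artefacts and explicit, typed relationships between them."""
    artefacts: set = field(default_factory=set)
    relationships: list = field(default_factory=list)  # (source, relation, target)

    def add(self, artefact):
        self.artefacts.add(artefact)

    def relate(self, source, relation, target):
        # Relationships may only link artefact versions that actually exist,
        # so the model stays consistent as artefacts evolve.
        assert source in self.artefacts and target in self.artefacts
        self.relationships.append((source, relation, target))

repo = Repository()
objective = Artefact("objective", "reduce-cycle-time", 2)
process = Artefact("process", "order-to-cash", 5)
repo.add(objective); repo.add(process)
repo.relate(process, "supports", objective)
```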

Thanks,
AS

2016-09-09

Beauty of #microservices: part 9 explicit coordination as a microservice

1 Introduction


This blogpost is inspired by several blogposts about microservices and it is based on the blogpost [REF1] “Architecting #cloud-friendly application architecture #apparch (inspired by #microservices)” http://improving-bpm-systems.blogspot.ch/2015/04/architecting-cloud-friendly-application.html

See also the previous blogposts of the “Beauty of #microservices” series.

2 Things work better when they work together, on purpose (from www.tetradian.com )


To be efficient, things (here, microservices) must have their work explicitly coordinated. Certainly, this strongly applies to all the microservices that comprise an application. Of course, an application is considered to be several very loosely-coupled clusters of microservices to be coordinated (for example, each such cluster is responsible for the lifecycle of a particular business entity).

Although there is an opinion that “Service is not comprised of other services due to the independence requirement” (see https://www2.opengroup.org/ogsys/catalog/W169), it is considered here that some microservices (with bigger responsibility) can be assembled from other microservices (with smaller responsibility).

There are several techniques to implement coordination.

Orchestration

  • nature: centralised at design-time and centralised at run-time thus may be explicit
  • specific: there is a misconception that it uses only synchronous communication (à la RPC), although it may also use asynchronous communication (à la message-passing)

Choreography

  • nature: decentralised at design-time and decentralised at run-time thus implicit
  • specific: uses only asynchronous communication (à la message-passing)

Reactive streams and runnable graphs

  • nature: decentralised at design-time and centralised at run-time thus implicit
  • specific: optimised for high volume event processing

Business-process-based

  • nature: centralised at design-time and decentralised at run-time thus explicit
  • specific: each case is a completely separate instance with its own lifecycle; and the process may be another microservice

3 Implementation of business-process-based coordination


Of course, a DSL should be used to define explicit coordination (e.g. BPEL, BPMN, etc.). Using the terminology from section 7 of [REF1], a DSL-processor may act as a specialised container for DSL-scripts. Also, some microservices which are coordinated by a DSL-script may use some specialised containers: for example, a specialised container for human-operations, a specialised container for business rules, and a few specialised containers for automated-operations.
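
As a rough illustration, here is a minimal sketch of such a DSL-processor, assuming a toy coordination DSL (an ordered list of step names) rather than BPEL or BPMN; the microservice names are hypothetical:

```python
# A toy DSL-processor: it acts as a container that runs coordination
# "scripts" (here just ordered lists of step names), invoking registered
# microservices. Each case is a completely separate instance.
class DslProcessor:
    def __init__(self):
        self.services = {}   # step name -> callable microservice stub

    def register(self, name, service):
        self.services[name] = service

    def run_case(self, script, case_data):
        # One invocation of run_case == one process instance with its own state.
        state = dict(case_data)
        for step in script:
            state = self.services[step](state)
        return state

# Hypothetical microservices coordinated by the script.
def check_order(state):   state["checked"] = True;  return state
def reserve_stock(state): state["reserved"] = True; return state

processor = DslProcessor()
processor.register("check-order", check_order)
processor.register("reserve-stock", reserve_stock)

script = ["check-order", "reserve-stock"]       # the DSL-script
print(processor.run_case(script, {"order": 42}))
```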

4 Conclusion


Some advantages of the business-process-based technique:
  • Assembled microservices have no routing logic (thus they follow the SRP).
  • All the necessary microservices (assembled and dependent) can be instance-bound (which helps predictive analytics).
  • All the necessary microservices (assembled and dependent) can be instantiated on demand (this minimises devops effort).
  • A particular instance may be stopped for error-recovery without influencing other instances (operational isolation).
  • A few versions of the same coordination (i.e. business process) may co-exist (versioning is easy).
  • Different instances of the same process (and their activities) may be executed on different nodes (linear scaling out).
  • Easy to visualise for business people.



Thanks,
AS

2016-09-07

Beauty of #microservices: part 8 dumb-pipes & smart-containers & minimalistic-microservices

1 Introduction


This blogpost is inspired by several blogposts about microservices and it is based on the blogpost [REF1] “Architecting #cloud-friendly application architecture #apparch (inspired by #microservices)” http://improving-bpm-systems.blogspot.ch/2015/04/architecting-cloud-friendly-application.html

See also the previous blogposts of the “Beauty of #microservices” series.

2 Importance of containers for microservices


This blogpost is inspired by two comments to my blogpost about microservices:
  • Igor Topalov “microservices shall be considered in conjunction with containers” 
  • Bogdan Năforniţă “considering transactions moves us away from the concept of ‘smart endpoints, dumb pipes’” 
Considering that the SRP is one of the commonly-agreed characteristics of microservices, the “smart endpoints and dumb pipes” characteristic is in direct contradiction to the SRP. Making endpoints (i.e. microservices) “smart” requires them to have much additional functionality besides their “core” functionality. Thus the question is how to simplify microservices and thereby simplify the life of software developers.

I used several types of nested primitive containers:

  • generic – JVM on top of any popular OS (experience in programming portable software helped);
  • language-specific – Jython on top of JVM to run small Python programs; and
  • specialised – a particular environment on top of Jython on top of JVM; this environment considerably simplified the development of automation and integration functionality.


With each nested container, my microservices became more functional and easier to evolve. Finally, each of them was a small text fragment stored in a source version control tool; the fragments were loaded into containers dynamically (at run-time) and could dynamically load some modules. Devops effort was minimal.
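
Here is a minimal sketch of such a specialised container, assuming the script fragments arrive as plain text; an in-memory dict stands in for the source version control tool:

```python
# A "smart container" that loads microservice code dynamically at run-time.
# In the real setup the fragments came from a source version control tool;
# here an in-memory dict stands in for it.
SCRIPT_STORE = {
    "greeter": "def handle(request):\n    return 'hello ' + request\n",
}

class SmartContainer:
    def __init__(self, store):
        self.store = store
        self.loaded = {}

    def load(self, name):
        # Compile the text fragment and keep its entry point; the container
        # does the housekeeping so the microservice stays minimalistic.
        namespace = {}
        exec(self.store[name], namespace)
        self.loaded[name] = namespace["handle"]

    def invoke(self, name, request):
        if name not in self.loaded:
            self.load(name)          # lazy, dynamic loading
        return self.loaded[name](request)

container = SmartContainer(SCRIPT_STORE)
print(container.invoke("greeter", "world"))   # -> hello world
```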

3 Conclusion

  • Keep pipes dumb (no logic!).
  • Create your own smart containers (maximum housekeeping and specialisation) from some standard ones.
  • Help your microservices be functionally minimalistic (thus simplify the life of your software developers).

Thanks,
AS

2016-09-06

Beauty of #microservices: part 7 breaking the monolith

1 Introduction


This blogpost is inspired by several blogposts about microservices and it is based on the blogpost [REF1] “Architecting #cloud-friendly application architecture #apparch (inspired by #microservices)” http://improving-bpm-systems.blogspot.ch/2015/04/architecting-cloud-friendly-application.html

See also the previous blogposts of the “Beauty of #microservices” series.

2 Breaking the monolith


This blogpost is a continuation of two previous ones.

Below is a series of steps showing how to remove from a monolith (actually a home-made ERP) some functionality around a particular Business Entity (BE) or a group of BEs.

At the AS-IS step, the monolith is the master of everything related to this BE (thisBE) – data, rules, processes and events (which are generated during the lifecycle of this BE and may affect other BEs).

The first step is to externalise the process to manage this BE and make it explicit in a BPM-suite tool. The storage of thisBE (i.e. its data) is externalised as well. Also, the associated rules must be externalised (as a copy) to reproduce the business logic spread throughout the monolith.

The monolith keeps its slave copy of the data, which are maintained via some stor-API. The associated business logic and event logic are still managed by the monolith. The data (as a slave copy) must always stay in the monolith because they may be used elsewhere within it.

The second step is to externalise the rules (once they are good enough to cover all the existing rules in the monolith).

At the TO-BE step, everything related to thisBE is externalised, but the associated events must be “injected” into the monolith via some func-API.
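
A minimal sketch of the intermediate arrangement described above, with hypothetical stor-API and func-API calls (the real APIs depend on the monolith):

```python
# Sketch of the intermediate state: the externalised process is the master
# of thisBE; the monolith keeps a slave copy via a stor-API and receives
# events via a func-API. All names are hypothetical.
class Monolith:
    def __init__(self):
        self.slave_copy = {}

    def stor_api_update(self, be_id, data):   # keeps the slave copy in sync
        self.slave_copy[be_id] = data

    def func_api_inject_event(self, event):   # events "injected" at the TO-BE step
        print("monolith reacts to:", event)

class ExternalisedProcess:
    """Master of thisBE: owns the data, propagates changes to the monolith."""
    def __init__(self, monolith):
        self.master_store = {}
        self.monolith = monolith

    def update_be(self, be_id, data):
        self.master_store[be_id] = data                 # master update
        self.monolith.stor_api_update(be_id, data)      # sync the slave copy
        self.monolith.func_api_inject_event(("BE-updated", be_id))

process = ExternalisedProcess(Monolith())
process.update_be("thisBE-001", {"status": "approved"})
```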


3 Anti-pattern DOUM


Avoid the "DOUble Master" (DOUM) anti-pattern (section 5.6 of my book "Improving enterprise BPM systems" http://www.improving-bpm-systems.com/book ).

As coordination can be carried out by an application or by a process engine, we have to be very careful to avoid the “double master” anti-pattern. At any moment in time there must be only one master responsible for the coordination of a particular process instance. (Of course, the coordination role may be delegated if appropriate.) This is analogous to a well-organised meeting where the chairperson decides who talks next.


The non-recognition of this anti-pattern can be very costly. We have observed a BPM solution which allowed the modification of data by a process engine, by an interactive application (i.e. by a human) and by a batch at the same time. The coordination of activities was based on data and, if necessary, the application or the batch could “correct” the process. The process engine was used mainly for the handling of three human activities, and the implementation of this solution (for a relatively simple business process) took several man-years.

4 Conclusion


How will you eat an elephant? Piece-by-piece, of course.

Thanks,
AS

Beauty of #microservices: part 6 managing state is teamwork

1 Introduction


This blogpost is inspired by several blogposts about microservices and it is based on the blogpost [REF1] “Architecting #cloud-friendly application architecture #apparch (inspired by #microservices)” http://improving-bpm-systems.blogspot.ch/2015/04/architecting-cloud-friendly-application.html

See also the previous blogposts of the “Beauty of #microservices” series.

2 Managing state is teamwork


Obviously, a solution or an application implemented with microservices is a set or suite of stateful and stateless microservices. Chapter 7 of [REF1] provides a classification of microservices. The stateful microservices are those which:
  1. manage some resources,
  2. provide legacy functionality, or
  3. assemble (implicitly or explicitly) other microservices.
Ideally, each stateful microservice must be idempotent to contribute to managing state.

Microservices which manage some resources may have a few idempotency pitfalls to be avoided. For example, the read operation may not be idempotent if concurrent updates are possible. The update (or write) operation may not be idempotent if it changes some metadata, e.g. the modification date. Also, idempotency may depend on the particular operation. The safest way is to create a small “shell” to guarantee idempotency, together with a unique ID for each invocation.
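
A minimal sketch of such a “shell”, assuming each invocation carries a caller-supplied unique ID; the in-memory cache stands in for a persistent store:

```python
import functools

def idempotent(operation):
    """A small "shell" that makes an operation idempotent: each invocation
    carries a unique ID, and repeated invocations with the same ID return
    the recorded result instead of re-doing the work."""
    results = {}   # invocation_id -> recorded result (in-memory for the sketch)

    @functools.wraps(operation)
    def shell(invocation_id, *args, **kwargs):
        if invocation_id not in results:
            results[invocation_id] = operation(*args, **kwargs)
        return results[invocation_id]
    return shell

@idempotent
def update_record(record):
    # Without the shell this is not idempotent: each call changes metadata.
    record["version"] = record.get("version", 0) + 1
    return record["version"]

rec = {}
print(update_record("inv-1", rec))  # 1 - real work done
print(update_record("inv-1", rec))  # 1 - replayed, no second update
```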

Also, a small “shell” is the only option for microservices which provide legacy functionality (they are considered as black boxes).

Microservices which implicitly assemble other microservices are a real pain because they have to be carefully reviewed for idempotency. A possible approach is simply to re-execute such a microservice: if all the microservices invoked from it are idempotent and there is no human involvement, then the re-execution will be idempotent as well.

Microservices which explicitly assemble other microservices may create some “checkpoints” (similar to mainframe batch systems) to restart their execution from the last “passed” checkpoint. Of course, the data associated with checkpoints must be stored somewhere else as records. (A similar approach can be found in http://www.theidentitycookbook.com/2016/06/blockchain-for-identity-access-request.html )

3 Error recovery (and distributed transactions)


As microservices form a distributed system, error recovery is very difficult.

An explicit assembly of microservices, e.g. a business process in BPMN, can implement error recovery in the following way.

Imagine a process fragment with three automated activities (A, B and C) to be executed as a transaction. Each of those activities is an invocation of a microservice, and the normal execution sequence is E2-A-B-C-E4. Because any of those microservices may fail, this fragment contains an intermediate event E3 to intercept a failure and an activity for the Error Recovery Procedure (ERP); the latter may be a human activity.

The first pass (with a failure of activity B) has the following trace:

E2-A(done)-B(failed)-E3-ERP


The second pass (with a failure of activity C) has the following trace:

E2-A(already done)-B(done)-C(failed)-E3-ERP


The third pass (with no failures) has the following trace:

E2-A(already done)-B(already done)-C(done)-E4

Activity A was executed three times, but it did the real work only the first time – the two other invocations were ignored because it is idempotent.
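
A minimal sketch of the three passes, simulating the idempotent activities and the planned failures (the names follow the traces above; the code is illustrative, not a process engine):

```python
# Simulated process fragment E2-A-B-C-E4 with idempotent activities.
done = set()                       # which activities already did real work
fail_plan = {1: "B", 2: "C"}       # pass number -> activity that fails

def activity(name, pass_no):
    if name in done:
        print(f"{name}(already done)")     # idempotent: no real work repeated
        return
    if fail_plan.get(pass_no) == name:
        raise RuntimeError(f"{name}(failed)")
    done.add(name)
    print(f"{name}(done)")

for pass_no in (1, 2, 3):
    print(f"pass {pass_no}: E2", end=" ")
    try:
        for name in ("A", "B", "C"):
            activity(name, pass_no)
        print("E4")                        # normal completion
    except RuntimeError as err:
        print(err, "-> E3 -> ERP")         # intercepted failure, error recovery
```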

An extension with a timeout can be found in http://improving-bpm-systems.blogspot.ch/2014/08/bpm-for-digital-age-shifting.html

This approach can also be used for implementing distributed transactions (please note, some compensation activities may be necessary).

A similar approach was described at http://www.grahamlea.com/2016/08/distributed-transactions-microservices-icebergs/

4 Conclusion


The microservice architecture requires common efforts from microservices to achieve:
  • state management, 
  • error recovery and 
  • distributed transactions.
Thanks,
AS

2016-09-02

Beauty of #microservices: part 5 defragmentation of enterprise data model

1 Introduction


This blogpost is inspired by several blogposts about microservices and it is based on the blogpost [REF1] “Architecting #cloud-friendly application architecture #apparch (inspired by #microservices)” http://improving-bpm-systems.blogspot.ch/2015/04/architecting-cloud-friendly-application.html which uses other blogposts about microservices http://improving-bpm-systems.blogspot.ch/search/label/%23microservices

See also the previous blogposts of the “Beauty of #microservices” series.

2 Unfortunate historical fragmentation of enterprise data


In modern enterprise computing environments, an enterprise data model (as a set of business-entities defined at the enterprise level) is usually a utopia. Enterprise data are spread among many existing applications, each of which masters some attributes of some business-entities. Also, some business-entities are duplicated (usually partially, by some fragments of them) among existing applications.

Because each of those applications (in-house developed, SaaS-based, PaaS-based, intact COTS, modified COTS) has its own lifecycle, the enterprise data model is far from reality, and complex data integration is mandatory to keep the integrity of the enterprise data.

Each new application has its own data and uses data from other existing applications. Thus, an application data model comprises some views on some business-entities from the enterprise data model. However, in such an application data model only some attributes are under the control of the application.

If an application has to use data from several other applications, then the evolution of one application may have a destructive effect on some other applications – a typical “château de cartes” (house of cards) anti-pattern.

3 Enterprise data model must be flexibility-driven, not legacy-held


Ideally, an enterprise data model must
  • be easy to evolve (by definition)
  • be service-oriented so data can be accessed only via an API (thus its fragments can be joined up together even without sitting in the same database)
  • implement total versioning for some business-entities (to cover the enterprise lifecycle)
  • cover all the types of business-entities:
    • transactional data (your business-specific and business-critical data)
    • reference data (taken from other sources and business-parties)
    • operational data (about how the work has been done)
    • reports
    • analytics data
    • records
    • documents
    • media
    • social data

There are several techniques to move to this ideal.

Make some functionality enterprise-wide as a Corporate Unified Business Execution (CUBE) platform (see http://improving-bpm-systems.blogspot.ch/2015/10/enterprise-patterns-peas-example-cube.html ). Thus an individual application comprises only business-specific functionality. For example:

  • security management (identity, authentication and authorization)
  • content and knowledge management
  • software factory
  • reporting and analytics
  • business process management
  • records management
  • etc.

Make data update processes explicit – see the pattern “Practical process patterns: Synchronisation Of Sources (SOS)” http://www.slideshare.net/samarin/practical-process-pattern

Use some specific characteristics of your enterprise.

4 Adding flexibility


The classic “point-to-point” scenario is that a solution reads some data from some applications via a storage-centric API (also called asset API or stor-API). Actually, those data are application-specific views on some enterprise business entities, e.g. “ClientERP BE” is the business entity CLIENT as it is defined in an ERP. Of course, in this situation, the solution strongly depends on the evolution of those other applications.


Another classic “data-access-layer” scenario is to use a microservice to aggregate all the application-specific views into a read-only business entity.
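
A minimal sketch of such an aggregating microservice, with hypothetical ERP and CRM views standing in for the per-application stor-APIs:

```python
# A read-only "data-access-layer" microservice that aggregates
# application-specific views (hypothetical ERP and CRM stor-APIs)
# into one business entity CLIENT.
def erp_view(client_id):    # "ClientERP BE": attributes mastered by the ERP
    return {"client_id": client_id, "billing_address": "1 Rue Exemple"}

def crm_view(client_id):    # "ClientCRM BE": attributes mastered by the CRM
    return {"client_id": client_id, "account_manager": "J. Doe"}

def client_be(client_id):
    """Read-only CLIENT business entity assembled from per-application views."""
    merged = {}
    for view in (erp_view, crm_view):
        merged.update(view(client_id))
    return merged

print(client_be("C-42"))
```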


The ability of microservices to have their own persistence store (which may be just a table in a common database) is very useful for implementing a small extension of a business-entity. The same technique may help to avoid customisation of COTS products. This is a safe way to move to an ideal enterprise data model.
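
A minimal sketch of such an extension, assuming the COTS data are reachable read-only; the extension attributes live in the microservice's own small store:

```python
# Extending a business entity without customising the COTS product:
# the microservice owns only the extension attributes (its "own table").
cots_client = {"C-42": {"name": "ACME", "country": "CH"}}   # untouched COTS data
extension = {}                                              # microservice-owned

def set_extension(client_id, **attrs):
    extension.setdefault(client_id, {}).update(attrs)

def extended_client(client_id):
    # The extended view merges the intact COTS record with the extension.
    return {**cots_client[client_id], **extension.get(client_id, {})}

set_extension("C-42", loyalty_tier="gold")
print(extended_client("C-42"))   # COTS attributes + extension attributes
```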


With some effort (mainly for externalising some business logic), read-only APIs may be transformed into read-write APIs. If a business-entity uses more than one underlying application, then a “data update process” is necessary to implement a multi-phase commit.
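
A minimal sketch of such a “data update process”, assuming each underlying application offers prepare/commit/cancel operations (these names are hypothetical):

```python
# A "data update process" coordinating a write that spans two applications.
# Phase 1 asks every application to prepare; phase 2 commits everywhere,
# or cancels the prepared updates if any application refused.
class App:
    def __init__(self, name):
        self.name, self.prepared, self.data = name, None, {}

    def prepare(self, update):
        self.prepared = update
        return True                 # a real app could refuse here

    def commit(self):
        self.data.update(self.prepared)

    def cancel(self):
        self.prepared = None        # compensation: drop the prepared update

def data_update_process(apps, update):
    if all(app.prepare(update) for app in apps):   # phase 1: prepare
        for app in apps:
            app.commit()                           # phase 2: commit
        return "committed"
    for app in apps:
        app.cancel()                               # phase 2: compensate
    return "cancelled"

erp, crm = App("ERP"), App("CRM")
print(data_update_process([erp, crm], {"billing_address": "2 Rue Exemple"}))
```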


If it is possible to externalise the business logic, then some functionality-centric API (also called business API or func-API) may be implemented.



5 Conclusion


With the use of microservices and other technologies, an enterprise data model may:

  • become always up-to-date and
  • evolve with the speed of agile development.

Thanks,
AS