Models and architecture
The main category of my posts, covering my experience and research.
Supervised IT implementations have a much greater chance of success
Supervised IT implementations, i.e. projects characterised by strong governance, expert-driven guidance, and active stakeholder involvement, have a significantly higher probability of success than, for instance, unsupervised or purely “lift-and-shift” approaches.
Evidence indicates that guided implementations lead to better ROI, faster deployment, and higher adoption rates.
Key Reasons Supervised Implementations Succeed
- Reduced Risk through Active Governance: Structured supervision allows teams to identify, manage, and mitigate risks, such as scope creep, before they become significant roadblocks.
- Clear Goal Definition: Supervised projects benefit from clearly defined objectives, ensuring that technical teams and business units work toward the same, measurable outcomes.
- Improved Adoption via User Engagement: Involving end-users from the beginning helps ensure the system is adopted rather than ignored, a common cause of failure in unguided implementations.
- Expert Oversight: Proper guidance ensures that best practices are followed, which is crucial for complex, long-term, or large-scale digital transformations.
Best Practices for Supervised Implementation
- Phased Deployment: Rather than a “Big Bang” approach, a phased, guided implementation allows for better control, faster validation, and lower risk.
- Data Quality and Validation: A key component of supervision is ensuring the data used for implementation is clean and accurate.
- Continuous Feedback Loop: Successful projects use regular, structured feedback mechanisms to make evidence-based decisions throughout the process.
While supervised approaches require more upfront investment in planning and resources, they prevent the high costs associated with failed or ineffective IT projects.

ERP – Hybrid or monolith?
Introduction
This is a question that everyone planning this investment asks themselves. Is it an easy choice? No. First, an important fact: the majority of monolith implementations fail.
What is the reason? Architecture is key:
- The relational databases of these systems have several thousand related tables.
- SQL queries to these tables consist of hundreds of lines of code for each query, and there are hundreds of such queries.
- The code of these applications is even more complex (we are talking about millions of lines of code).
Together, it looks like this:
The result? Customising such complex code is practically a guaranteed failure, and unfortunately, this is what virtually all implementation companies offer.
WARNING! Customising licensed ERP code will void the manufacturer’s warranty.
Given that module integration involves sharing this single database, every change and every error always affects the entire system.
What is the alternative? Hybrid:
What do we have here? Generally, domain-specific applications are connected by an integration bus. It is no coincidence that specialised domain-specific applications have been available on the market for many years. It is no coincidence that ERP systems and these applications have rich APIs and that new data exchange standards are being developed. It does not matter whether we integrate with a partner’s system, a courier company or another application within the company.
Integration
Scaring people with the costs of integration may have made sense 30 years ago, but not today, when integrating your own online shop with a complex e-payment system or bank takes just a few minutes.
Let’s look at the diagram below:
A typical medium-sized or larger company may run several applications. Their cooperation looks like this:
Calling the required operation on a company-wide scale involves executing a sequence of commands using domain-specific applications. We are not interested in their internal structure because currently, each one has an API.
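Such a company-wide operation can be sketched in code as a sequence of calls to the public APIs of domain applications. The Python sketch below is purely illustrative: all service names, payloads, and the order-handling scenario are my own assumptions, not a real integration.

```python
# Sketch of a company-wide operation executed as a sequence of calls to
# domain-specific applications via their APIs. Each step talks only to a
# public API, never to another application's internal database.

def crm_register_order(order):
    """CRM: record the customer order (hypothetical API)."""
    return {**order, "status": "registered"}

def wms_reserve_stock(order):
    """WMS: reserve stock for the order (hypothetical API)."""
    return {**order, "stock": "reserved"}

def fa_issue_invoice(order):
    """Finance & accounting: issue an invoice (hypothetical API)."""
    return {**order, "invoice": f"INV-{order['id']}"}

def place_order(order):
    """Orchestrate the operation across the domain applications."""
    for step in (crm_register_order, wms_reserve_stock, fa_issue_invoice):
        order = step(order)
    return order

result = place_order({"id": 1001, "item": "widget", "qty": 5})
```

The point of the sketch is that the orchestration layer needs only each application's API contract; the internal structure of CRM, WMS, or the accounting system stays hidden.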
What causes problems with monoliths? Below is a well-known model called Porter’s internal value chain:
Processes marked as Support Activities are standard operations referred to as “back-office”. They are essentially 100% regulated by law: finance, human resources, fixed assets, warehouses, and supplies. There is nothing new here. When an accountant or human resources manager changes jobs, they may not even notice that they have changed industries :).
The problem arises in the operational process that builds the added value of products: Primary Activities. Why? Because this is where the law interferes minimally and where differences between companies, even within the same industry, arise. This is where market advantage is created.
It is no coincidence that the above diagram shows a cascade of activities rather than a stack of parallel layers, as in the case of Support Activities. Each of these activities is potentially specific to a given company, and the ideal solution here is to select software for each of these activities individually, rather than loading everything into a single universal database together with human resources and accounting.
Note: for some time now, accounting services have been increasingly outsourced to external companies (accounting offices), while ERP monoliths are built on a central financial accounting database. As a result, with an ERP monolith, meaningful financial accounting outsourcing is essentially impossible.
Implementing a single universal ERP monolith while wanting to preserve the company’s unique characteristics almost always involves a huge compromise and interference with the code. The result? Over 75% of implementations turn out to be very costly problems.
And a hybrid? We give up on replacing the finance and accounting system (FK), or outsource it, and select domain modules. There is a wide range of them available on the market, so their implementation and integration take place without any customisation. Secondly, this process can be spread out over time by selecting and purchasing software only when we actually start implementing it.
Where does the widespread opinion about the difficulty or impossibility of full integration come from? In my opinion, it comes from ignorance. Such systems have been around for many years, but the dominant narrative of monolithic ERP providers drowns out common sense.
If there is anything difficult about integration, it is the standardisation of document structures and procedures in the company that precedes it. Unfortunately, without this, it will not be possible to implement any larger system, and certainly not a monolithic one. Standardisation for a monolith (a single database) is a “roller that will flatten everything”, including the company’s advantage. Implementing a hybrid is only standardisation of communication and not “everything”, so we do not kill the company’s market advantage.
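As a minimal illustration of “standardising communication, not everything”: applications keep their internal models, but exchange documents in one agreed envelope. The Python sketch below is an assumption of mine; the envelope fields and the example document are invented, not a real exchange standard.

```python
# One agreed message envelope for the integration bus. The payload is
# domain-specific content that the bus treats as opaque; only the
# envelope (doc_type, sender, sent_at) is standardised.
from dataclasses import dataclass, field
import datetime
import json

@dataclass
class BusMessage:
    doc_type: str   # e.g. "Invoice", "GoodsReceipt" (illustrative)
    sender: str     # source application
    payload: dict   # internal document content, opaque to the bus
    sent_at: str = field(
        default_factory=lambda: datetime.date.today().isoformat())

    def to_json(self) -> str:
        """Serialise the envelope for transport over the bus."""
        return json.dumps(self.__dict__)

msg = BusMessage("Invoice", "FA", {"number": "FV/2024/01", "amount": 100.0})
restored = json.loads(msg.to_json())
```

Only the envelope is shared; no application has to adopt another's internal data model, which is exactly why this standardisation does not “flatten” the company.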
Summary
The choice between implementing a hybrid ERP and a monolith is a strategic decision. Contemporary trends, especially the development of artificial intelligence (AI), mean that traditional monoliths are giving way to flexible systems built from components.
Monoliths (Traditional ERP systems)
Monoliths are centralised systems in which all business functions (finance, production, HR, and warehouse) are integrated into a single database and application.
- Advantages: Data consistency, easier control, less integration complexity, and stability.
- Disadvantages: Huge technological debt (IDC indicates that companies often remain stuck in it for years), difficult scalability, high upgrade costs, and low flexibility in adapting to new technologies.
- Application: Stable environments with defined processes that do not require frequent changes.
Hybrid ERP
Component-based approach:
- Advantages: flexibility (quick connection of specialised applications such as WMS, CRM, or PLM via open APIs), scalability (easy to increase computing power for selected modules), risk reduction (lower implementation costs and reduced risk of downtime), and innovation (better support for modern trends such as AI/ML).
- Disadvantages: Greater management complexity (distributed IT environment), need to manage communication between services.
Key differences in the context of implementations
- Architecture: Monoliths are a single solid block; hybrids are usually a collection of cooperating services.
- Development: In hybrids, each service can be developed independently by a smaller team.
- Updates: In monolithic systems, updates always affect the entire system. In hybrid ERPs, selected areas can be modernised, avoiding a complete IT revolution.
For manufacturing companies, hybrid ERP systems are becoming the standard, offering flexibility and support for modern technologies. Traditional monoliths remain an option for smaller companies with simpler, stable processes, as long as they are not afraid of technological debt.
Final advice
MANAGING IT ARCHITECTURE IN-HOUSE IS CRITICAL: System integrators can be a valuable part of digital transformation, but they should never have complete, uncontrolled power over the entire project (source: the Hertz vs. Accenture disaster).
Therefore, the path to success is first to analyse the company, then optimise it, and finally select and implement the software. The key here is managing the entire process on the buyer’s side. This increases the probability of success from 25% to over 80%. Unsupervised implementations are characterised by the lowest effectiveness.
So? Engage an analyst-architect, cut down the monolith to MRPII, and select satellite systems for yourself on the market without causing a revolution in the company.
Why IT projects mostly fail…
Because IT projects are treated as technology projects and are most often outsourced to technology suppliers.
This is the biggest mistake you can make. Here, I would like to point out that requirements are divided into functional and non-functional. Non-functional requirements have never been an issue, but functional requirements are always a problem. So why is project management entrusted to technology providers?
Enterprise architecture – the old SOA model
The diagram below (Street, K. (2006). Building a Service Oriented Architecture with BPM and MDA. 2(1), 8.) illustrates the key layers (levels) of organisational description:
- business processes
- business services (required by the business)
- components (applications providing these services)
- resources (the environment in which these applications run)

This model perfectly describes “what exists”. The problem with IT projects is what is not visible here: the content processing mechanism. The second problem is understanding that data is not content. Data is characters processed by a computer. Content is what a person understands when they have access to this data. A book written in a language we do not know is data, but it has no content for us.
Let us examine the individual layers of this model
Business processes

We usually model business processes using BPMN notation (OMG.org). This model describes workflow and document flow. It is an important model because it describes how an organisation works. It is a mechanism for human activity.
NOTE! This model does not describe the mechanism of document content processing.
Business services

Application services are the support that computers provide to those performing these tasks. If working with documents consisted solely of humans creating and processing content, computers would be nothing more than memory. This layer is a specification of needs (semantics and requirements): the key to an IT project, yet it is not included in this diagram and is absent from most projects.
What this model lacks: a mechanism
If we expect a computer to provide any support in processing content, we must express this in the form of a mechanism for processing specific data. The point is that a computer does not understand stored data, but it is excellent at performing procedures for processing it.
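A toy illustration of such a mechanism: the computer does not “understand” an invoice, but it can execute a precisely specified procedure on its data. The discount rule below is entirely made up for illustration; it is the specification of the rule, not the code, that constitutes the analytical work.

```python
# Mechanism as executable procedure: input data -> rule -> output data.
# The rule itself (5% discount above a net amount of 1000) is a
# hypothetical example, not taken from any real project.

def apply_discount(net_amount: float) -> float:
    """Apply the (illustrative) discount mechanism to a net amount."""
    if net_amount > 1000:
        return round(net_amount * 0.95, 2)
    return net_amount
```

Until such a rule is written down unambiguously, there is nothing for the computer to execute; this is exactly the description of the mechanism that most projects lack.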
Therefore, a description of this mechanism must be created. Many authors describe it, but all these publications have one thing in common: they contain models. For example, such as the one below (Rosenberg, D., Stephens, M., & Collins-Cope, M. (2005). Agile development with ICONIX process: People, process, and pragmatism. Apress.):

The diagram above shows the process of designing software mechanisms:
- screen content, i.e. a document in a business process (GUI),
- a description of the data processing mechanism (Dynamic),
- the architecture of the application code implementing this mechanism (Static),
- code as a result of technology selection,
- tests verifying that the application functions correctly.
Alistair Cockburn described it as Validation-V, also known as the V-model:

The resulting project is a design. With this in hand, you can consider which technology to use for implementation. Once the application has been created, the above diagrams serve as documentation describing how the existing application works. Such documentation also allows the application to be recreated in any other technology. That is why these models are called Platform Independent Models (OMG.org, MDA).
Unfortunately, this is the kind of work that AI still cannot do because AI “has data but does not understand”.
Components
This is the architecture of application integration and its deployment in the environment:

Application environment

With an application design and knowledge of the chosen technology, you can finally select the technological environment and carry out the implementation in a specific programming language. Paradoxically, this is the least risky part of the job, provided that you know what and how to code.
The two lowest layers can be expressed using the C4 model described by Simon Brown.
Summary
The IT industry is the area of engineering with the lowest effectiveness. It is estimated that the number of successful projects does not exceed 20-30%. One of the reasons given for this is that these projects are initiated immediately at the technical level, i.e. without any knowledge of what is to be created and how. This missing knowledge is the mechanism of the system.
(article also published on LinkedIn)
Legacy migration
Migrating old applications to new environments and technologies is easy if we have documentation describing how they work. We then simply commission the implementation of the new technology and migrate the data.
The problem arises when such documentation does not exist. In this case, the only option is to analyse the existing software, which we treat as a prototype, and develop a description of the logic explaining how the documents generated by this application are created.

I have already completed several such projects. I was able to perform analysis and create application models, allowing for their quick re-implementation in newer technology.
Annotated UML models were created to enable their implementation; this was not reverse engineering of the source code (which the client often does not have).
My Software Architecture Academy
Group and individual workshops: architectural design patterns and application design using UML notation.
Design vs implementation
Introduction
In software development, software design refers to the process of defining the structure, components, and behaviour of a software system. Implementation, in contrast, focuses on translating those designs into actual, executable code. Essentially, design is about planning and architecture, while implementation is about building and coding.
Here’s a more detailed breakdown:
Software Design:
- Purpose: To create a blueprint for the software, specifying its functionality, architecture, and user interface.
- Activities: Requirements gathering, system modelling (using diagrams like UML), database design, user interface design, and creating technical specifications.
- Focus: Problem-solving, high-level planning, and creating a clear vision of the software before coding begins.
- Output: Design documents, specifications, prototypes, and architectural diagrams.
Software Implementation:
- Purpose: To turn the design into a working, executable software product.
- Activities: Writing code in a specific programming language, unit testing, integration testing, and debugging.
- Focus: Coding, testing, and deploying the software based on the design specifications.
- Output: Source code, compiled binaries, and the deployed software application.
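The design/implementation split can also be shown in code terms: the design fixes a technology-neutral contract, and the implementation supplies the executable logic. The Python sketch below uses invented names and an invented pricing rule purely as illustration.

```python
# Design vs implementation in miniature: the abstract class is the
# "design" (a contract specifying WHAT must be provided), and the
# concrete class is the "implementation" (HOW it is provided).
from abc import ABC, abstractmethod

class PriceCalculator(ABC):
    """Design: a technology-neutral contract."""
    @abstractmethod
    def total(self, net: float, vat_rate: float) -> float: ...

class SimplePriceCalculator(PriceCalculator):
    """Implementation: one concrete realisation of the contract."""
    def total(self, net: float, vat_rate: float) -> float:
        return round(net * (1 + vat_rate), 2)

calc = SimplePriceCalculator()
```

The contract could be re-implemented in any language or framework without changing the design, which is the sense in which design documents outlive any particular technology.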

Disaster Response and Recovery Process in a Manufacturing Enterprise
Scenario Title: Disaster Response and Recovery Process in a Manufacturing Enterprise
Description: This process is triggered when a manufacturing plant faces a natural disaster such as a flood, fire, or earthquake. The event may be detected via various sources: factory IoT alarms, calls from the plant manager, emergency authorities, or even social media. Once detected, the Crisis Management Team is activated, triggering multiple parallel response tracks:
- Human Safety Track: evacuation, medical assistance, employee family coordination
- Business Continuity Track: alternative sourcing, production rerouting, insurance claims
- Infrastructure & IT Track: damage assessment, disaster recovery, data risk mitigation
- External Communication Track: legal disclosures, PR statements, government reporting
The process involves numerous asynchronous events, external parties, exception handling, and delayed subprocess closures such as audits and legal resolutions.
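The parallel fan-out of response tracks described in the scenario can be sketched as concurrently running subprocesses launched by the Crisis Management Team. In the Python sketch below, the track names follow the scenario, but the work inside each track is a stub of my own invention.

```python
# Fan out the four response tracks in parallel, then wait for all of
# them to finish, mirroring the parallel gateways of the BPMN scenario.
from concurrent.futures import ThreadPoolExecutor

TRACKS = [
    "Human Safety",
    "Business Continuity",
    "Infrastructure & IT",
    "External Communication",
]

def run_track(name: str) -> str:
    # Placeholder for the real subprocess (evacuation, sourcing, ...).
    return f"{name}: completed"

def activate_crisis_team(tracks):
    """Launch all tracks concurrently and collect their outcomes."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_track, tracks))

results = activate_crisis_team(TRACKS)
```

In BPMN terms, `activate_crisis_team` plays the role of a parallel split gateway, and collecting the results corresponds to the joining gateway that closes the subprocess.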

Process for managing customer complaints at a bank
Another task posed on LinkedIn (LINK):
“Let’s imagine a process for managing customer complaints at a bank, where they can be received at the branch, on the customer line or directly by e-mail from the complaints management team. How can I illustrate this scenario with BPMN, taking into account the different entry points? Note that regardless of the entry point, the complaint must be forwarded to the complaints management team.”

Handling of the credit application. How to use BPMN and SBVR – example.
A common problem in companies, both very small and very large, is the lack of full knowledge of how the company really works. The consequence is that the effects of decisions become unpredictable.
Another negative effect of this lack of knowledge is the difficulty of concluding contracts and onboarding new employees.
The solution to both of these problems is a correctly executed business model, the backbone of which is business process maps, associated procedures, business rules and document templates. A correctly developed model allows you to familiarise yourself with everything that affects the business quickly. It also allows everyone to understand how the business works.
Below is an example of such a model: the business process model.

Mechanism of Operation vs System Model vs Diagram
Author: Jaroslaw Zelinski
Date: 2024-03-12
Categories: Business analysis and software design
Introduction
During analysis, we often use the term “model” and less frequently the term “mechanism”. We reach for the term “mechanism” when we want to explain something, such as “the mechanism for generating a discount on an invoice”.
But beware: the model (block diagrams, formulae, etc.) is documentation, a description protected by copyright. The mechanism is what we understand by reading this documentation (model), and the mechanism is protected as know-how. The content of an application to the Patent Office is the model (the description), but what is patented is the invented/developed mechanism.
Keywords: model, mechanism, diagram, UML
Mechanism vs. model
Mechanism and model in science are close concepts. For example, they are described as:
Modeling involves abstracting from the details and creating an idealization to focus on the essence of a thing. A good model describes only the mechanism. Glennan argued that mechanisms are the secret connection that Hume sought between cause and effect: “A mechanism of behavior is a complex system that produces that behavior through the interaction of many parts, where the interaction between the parts can be characterized by direct, invariant generalizations relating to change.” (Craver & Tabery, 2019)
Let’s examine how these terms are defined by the official Dictionary of the Polish Language:
Mechanism: “the way something is formed, runs, or operates.”
Model: “a construction, scheme, or description showing the operation, structure, features, relationships of some phenomenon or object.”
Dictionary of the Polish Language (https://sjp.pwn.pl/)
As you can see, very similar but not identical. The term “diagram” is defined in English literature as:
Diagram: a simple plan showing a machine, system, or idea, etc., often drawn to explain how it works.
(https://dictionary.cambridge.org/)
In the scientific (English-language) literature, the concept of modeling is defined as follows:
to model something is to create a copy or description of an action, situation, etc., so that it can be analyzed before proceeding with the real action.
(https://www.oxfordlearnersdictionaries.com/definition/english/model_2?q=modeling)
Graphically, this conceptual model can be illustrated as a diagram:
Concepts: diagram, model, and mechanism, and the areas of law governing them. (Author’s own)
There remains the question of concepts: the phenomenon (which we want to describe and explain) and its explanation. A given phenomenon is a certain observed fact. Most often, we describe it literally or create statistics about it. Craver illustrates the relationship between the phenomenon and the mechanism of its formation in this way:
The upper ellipse represents our observations of stimuli and effects; it is a record of the facts and their statistics. Statistics, however, are not a model: they are only a collection of data about the facts and provide no explanation of how those facts arise.
The lower ellipse represents the mechanism that explains how the observed effect (the facts collected in the statistics) arises from the stimuli. It is an explanation of how the effect comes about: the mechanism behind what we observe. This mechanism (explanation) can be expressed as a model, for example as a flowchart.
Examples
An example that everyone is probably familiar with is Copernicus’ theory. The diagram shows: top left, a record of observations of the heavens, wrongly called a (statistical) model. These are the so-called epicycles (bottom left): a depiction of the paths of planets and stars in the sky as observed from Earth. On the right, the heliocentric diagram of the solar system is a model that explains the mechanism behind the epicycles, the loops that the observed planets appear to make in the sky.
Watt Regulator
Another example is the Watt regulator. Below is a model of this regulator:
The original schematic of Watt’s regulator, describing its design.
The description of the mechanism in the patent application was text similar to this:
If the machine is at rest, the weights (balls) hang at the very bottom and the throttle is fully open. When the steam engine starts working, its rotating wheel, connected to the speed regulator, makes the balls rotate. Two forces act on the balls of the regulator: gravity, pulling them vertically downward, and centrifugal force, pulling them outward, which, with this design of the regulator, makes the balls rise. The rising balls cause the throttle to close, so less steam is supplied to the steam engine. The machine slows down, the centrifugal force decreases, the balls fall, and the throttle opens again, supplying more steam to the machine.
https://pl.wikipedia.org/wiki/Regulator_od%C5%9Brodkowy_obrot%C3%B3w
Diagram describing this regulator:
Diagram describing the mechanism of the Watt Regulator.
The mechanism that explains the operation of this regulator is a negative feedback system (Bertalanffy, 2003):
Negative feedback as a mechanism to explain the operation of the Watt regulator.
Watt’s regulator is precisely such negative feedback. In the diagram above, PROCESS is the steam engine. The input quantity is steam at a certain pressure; the output quantity is the speed of the engine’s drive shaft. An increase in shaft speed reduces the pressure of the steam supplied to the engine, which in turn reduces the shaft speed; the steam valve then opens again and the speed rises. The system settles at a fixed (stabilised) shaft speed with small fluctuations.
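The same negative-feedback mechanism can be sketched numerically: the valve opening falls as the shaft speed rises, so the speed settles toward a set point. The constants below are arbitrary illustration values, not Watt's actual parameters.

```python
# Minimal discrete-time sketch of negative feedback: the governor
# senses the deviation from the set point, the valve opening is
# proportional to that deviation, and more steam accelerates the shaft.

def simulate(set_point=100.0, gain=0.5, steps=200):
    speed = 0.0
    for _ in range(steps):
        error = set_point - speed      # rising speed -> smaller error
        valve = gain * error           # valve opening tracks the error
        speed += 0.1 * valve           # more steam -> shaft accelerates
    return speed

final_speed = simulate()
```

Because each step shrinks the error by a constant factor, the speed converges to the set point, which is the stabilising behaviour the diagram describes.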
Clock
A typical analog clock (its face) hanging on many walls in a house (or mounted on many towers) looks like the one below:
Clock face
Possible construction of such a clock on the tower:
Example of a design reproducing a clock mechanism
The time measurement mechanism, which we use to explain the indications on the clock face and which is the basis for the construction of clocks, can be expressed as a model in UML notation.
A model expressing the timing mechanism
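The same timing mechanism can be sketched as fixed gear ratios: one source of ticks drives all three hands. The ratios below follow the ordinary 12-hour clock face; the escapement that produces the ticks is abstracted away.

```python
# Mechanism of the clock face as modular arithmetic: 60 seconds per
# minute, 60 minutes per hour, 12 hours per revolution of the dial.

def hands(total_seconds: int):
    """Map elapsed seconds to the positions of the three hands."""
    seconds = total_seconds % 60
    minutes = (total_seconds // 60) % 60
    hours = (total_seconds // 3600) % 12
    return hours, minutes, seconds

position = hands(3 * 3600 + 25 * 60 + 40)   # 3 h 25 min 40 s elapsed
```

Any physical clock (tower mechanism or wall clock) is a construction reproducing this one mechanism, which is why the mechanism, not a particular construction, is what the model expresses.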
Conceptual model.
We model domain knowledge as a concept dictionary, which can be expressed graphically in the form of taxonomies and syntactic relationships. The application code architecture expressed graphically is its model; further diagrams describing, for example, use case scenarios are also part of this model. Together, they describe the mechanism for implementing the functional requirements.
Below, on the left, is a conceptual model; it is not, however, the mechanism for implementing functional requirements. On the right, a model (fragment) of the application architecture, supplemented with a sequence diagram, is a model describing the mechanism implementing a specific functionality.
Summary
As you can see, it is sometimes easy to confuse the terms model and mechanism, but we can say that a model is a diagram depicting something, while a mechanism is an explanation of a phenomenon (how something is created, how it works). A mechanism can be illustrated in the form of a model. If we aim for the model to be an idealization, then that is exactly it:
Modeling involves abstracting from the details and creating an idealization so as to focus on the essence of a thing. A good model only describes the mechanism. (Craver & Tabery, 2019)
Bertalanffy, L. (2003). General system theory: foundations, development, applications (Rev. ed., 14th paperback print). Braziller.
Craver, C. F. (2007). Explaining the brain: mechanisms and the mosaic unity of neuroscience. Clarendon Press.
Craver, C., & Tabery, J. (2019). Mechanisms in Science. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2019). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/entries/science-mechanisms/
Frigg, R., & Hartmann, S. (2020). Models in Science. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2020). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/spr2020/entries/models-science/
Weilkiens, T. (2007). Systems engineering with SysML/UML: Modeling, analysis, design (1. Aufl). Morgan Kaufmann OMG Press/Elsevier.
