A Comparison of 5 Software Styles (Transactional ⚡ Excel ⚡ BI ⚡ EPM ⚡ Costing) — and a Discussion of ERP's Future

Andy Imperfect
62 min read · Mar 13, 2024


Hello! I'm sharing this article with the professional community to discuss the functional architectures of business technology, which dictate many of the long-term opportunities and limitations but often remain intuitive and implicit.

Explicit comparisons of this kind provide a Big Picture perspective and can help us in several ways:

  • To more quickly understand a new type of software that we haven’t dealt with before.
  • To formally present and discuss the software types. In turn, this can help in building individual IT landscapes (by distributing business tasks between software systems), justifying architectures when developing software, communicating between specialists from different business/technical domains, onboarding developers, aligning IT and business, B2B product management practices, reviewing new technologies, etc.
  • The criteria themselves may be useful for analyzing specific B2B products.

This article is a first attempt, originally prepared for internal needs. By publishing it, I want to draw public attention and gauge the interest (or lack thereof) of the audience. I hope it may be useful to someone in their practice, and that for those who prepare similar analyses (including for other areas) it may serve as an incentive to make their findings public as well. Additionally, I encourage the community to exchange opinions and explore collaborations where such analysis can help increase market awareness and improve enterprise architectures.

Separately, I encourage EPM/Costing vendors and consultants to collaborate in enhancing market awareness of these types of technologies.

A separate appeal to ERP/PLM specialists

You may only be interested in Paragraph 4.4 (and, less probably, 1.1 and 4.2). Please take a look at them. It would be interesting to exchange experience and ideas regarding system integration in the future.

INTRODUCTION

Using the example of automating the Cost Accounting domain, we will analyze 5 types of technologies that are in fact just alternative functional styles for designing enterprise software (either separate apps or modules of integrated systems like ERP or, say, PLM):

  • ‘USUAL’ (‘TRANSACTIONAL-LIKE’ or ‘ACCOUNTING-LIKE’) style, which is still the most common style for modern ERP modules.
  • SPREADSHEETS, with Microsoft Excel being the main representative.
  • BI (BUSINESS INTELLIGENCE).
  • EPM (ENTERPRISE PERFORMANCE MANAGEMENT), which is the core architectural style for budgeting software products and modules.
  • COST CHAIN MODELING ENGINES.

Although we focus on a specific business domain, the article may be relevant if you’re interested in almost any area of business automation.

Four Big Disclaimers

#1. We don’t consider “ERP” a separate style because it is not one

An “ERP system” is a product or a set of products that combines several functional components (grouped into modules), each of which can be designed in one of the styles. For example, an ERP system can include a reporting engine that is BI-like, a budgeting module that is EPM-like, and the rest of the modules built in a Transactional-like style.

When dealing with large integrated products like ERP or PLM, it is more appropriate to analyze the style for individual components rather than the entire product.

#2. The software styles are just an abstraction, so many exceptions are possible and such comparisons are not directly applicable to making decisions about PARTICULAR vendors’ products

For example, if you create BI software, you typically don’t face physical/legal restrictions that dictate how the software should look and what it should do.

Moreover, each particular software system is in fact unique, while distinguishing the abstract types (styles) is just a convention.

Therefore, any mention of vendors is deliberately omitted from the comparisons (Table 1 and Chapter 3). And even though the article sometimes explicitly mentions exceptions to the generalizations made, it’s important to note that exceptions can exist even when they are not explicitly mentioned. Therefore, when making important decisions, always study each specific product.

#3. This is the first and relatively early material

Currently, it is more important to stimulate discussion than to provide ready conclusions. Please consider this article such a stimulus rather than an absolute truth.

And if you can make any contribution or clarification, I’ll be glad to collaborate in the course of further research.

#4. We call the styles functional even when speaking about “transactionality”, “OLAP” and other ambiguous concepts

The key capabilities and constraints associated with each of the styles are largely determined by the vendor’s functional vision, while low-level (physical) details are always of secondary importance for analysis like this.

There are many common crude assumptions, for example, that EPM and BI systems must necessarily store data in physical cubes; however, in modern tech architectures, things are not quite like that.

Although accounting-like software typically relies heavily on SQL databases, this is where the common patterns end. For example, I’ve seen spreadsheet, BI, and EPM engines built over relational databases, and some leading BI vendors use a non-cube (and not even column-oriented) data model while delivering tremendous performance.

Five Small Disclaimers

  • #1. The article is mostly written as if there were no distinction between the questions “What does an architectural style suggest?” and “What do vendors of software of this style typically do?”. However, in certain specific cases, significant differences do exist. This does not occur frequently, and in such cases I have made efforts to provide specific comments; however, some degree of confusion may remain.
  • #2. We focus on general principles, not vertical functionality details. Just as we don’t pay crucial attention to deep technical details, we also don’t consider highly industry-specific functionality. Thus, we are interested in what can be called the general fundamental principles of functional architecture. In a sense, they can be considered horizontal. The exception is certain features specific to cost analysis (since our study focuses on it), as well as reservations that were necessary where it was impossible to separate vertical from horizontal.
  • #3. The comparison is focused on classic functionality. Although a number of recent trends are noted, when analyzing them, the question “How is software of a particular style developing (changing) now?” was less of a priority than the question “What are the typical limitations of the style that have led to the current changes?”.
  • #4. In Criteria [8–13], we compare the capabilities of programmer-made formulas in the first style and user-guided formulas in the other styles. This may look strange; however, in the context of comparing final solutions, it seems necessary.
  • #5. Of course, not all software styles and not all possible criteria are overviewed. For example, BPMS, MDM, DQ/DM (Data Quality, Data Management) styles, mobile interface styles, and many others were not touched. In addition, IBP/S&OP software was not considered, although it partially combines EPM and Cost Chain Modeling advantages for some specific business domains.

Main Article Structure:

  • Chapter 1. Quick spoiler
  • Chapter 2. A few words about each style
  • Chapter 3. Detailed criteria-based comparison
  • Chapter 4. Integration problems and the future

Chapter 1. QUICK SPOILER

1.1. The main comparison table

Table 1. Averaged capabilities of the considered software styles

A more detailed criteria-based analysis is presented in Chapter 3.

1.2. What does it mean for business? An additional rough assessment

Although business tasks are not the subject of the article, some rough conclusions about the suitability of the software styles for various tasks in the Cost Accounting domain can be drawn from the previous table.

Table 2. Software applicability for particular business tasks (an additional rough assessment)

Some comments about the business tasks:

  • Managing primary cost data means recording and editing primary data such as purchase prices, accrued costs of purchased services, materials, accrued wages, etc.
  • Simple cost allocations mean cases where you have some pre-defined logic and number of allocation steps, and costs can be aggregated before allocation, for example when you calculate the entire costs of some department for a month and then allocate them between other departments or some activity directions (a sketch follows this list).
  • Detailed cost allocations represent cases where you can’t aggregate costs before allocation and must allocate each position individually. For example, you spend 30 different types of resources (materials, utilities, works, etc.) in a manufacturing cost center, and each resource has its own rule of allocation (or non-allocation) to each manufactured product. Any application of BoM (Bill of Materials) requires detailed allocations.
  • Cost allocation via “long” dynamic chains means any case when you are not ready to pre-define the number of cost allocation steps (in other words, the number of steps of business activities and/or resource consumption) in the cost calculation algorithm, but want it to be dynamic. This is a very common case where you can get different units of the same product using different supply chain options, including flexible Make-or-Buy decisions for each component, different production technologies, different transportation routes, etc. where each choice is made dynamically, changing the composition, types and number of nodes in the supply chain.
  • Cost allocation for co-dependent production means any case when two or more products/resources/activities are consumed by each other within the same period, so that you can’t decompose their consumption processes into a sequential flow of one-directional steps and must instead solve a system of linear equations.
  • Cost consolidation from many sources and reclassification, as well as any consolidation & transformation in the finance domain, means changing attributes of records, with possible aggregations (folding several records into one) but typically without disaggregation (dividing one record into several, as is the case in cost allocation).
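To make the distinction more tangible, below is a minimal Python sketch of the “simple cost allocation” case: costs are first aggregated for a department, then spread by a single driver. All department names, drivers, and amounts are invented for illustration.

    # A single-step, pre-aggregated cost allocation (hypothetical data).
    dept_costs = {"IT": 90_000.0}                             # already aggregated for the month
    headcount = {"Sales": 40, "Production": 50, "Admin": 10}  # the allocation driver

    total_driver = sum(headcount.values())
    allocated = {dept: dept_costs["IT"] * hc / total_driver
                 for dept, hc in headcount.items()}
    print(allocated)  # {'Sales': 36000.0, 'Production': 45000.0, 'Admin': 9000.0}

A “detailed” allocation differs in that each resource would carry its own rule, and a “long dynamic chain” in that the number of such steps would not be known in advance.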

Chapter 2. A FEW WORDS ABOUT EACH SOFTWARE STYLE

2.1. 'Usual' (Transactional-like, or Accounting-like)

This is perhaps the most common approach to the design of enterprise software products. From the user’s standpoint, it typically consists of the following elements:

  • a set of “cards” and “lists” for the main classes of domain logic (including master data and business transactions);
  • registries (registers), which primarily serve as systematic central repositories of transactional data;
  • computational procedures that can be launched by the user from the UI, but are executed in the background according to internal algorithms;
  • ledgers, into which both computational procedures and cards can post entries;
  • reports, which dynamically visualize data from ledgers and/or directly from documents.
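To make the element list concrete, here is a minimal sketch of how these pieces might relate; all class and field names are my own assumptions, not taken from any particular product.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class LedgerEntry:               # one posting in a ledger (register)
        posting_date: date
        account: str
        amount: float

    @dataclass
    class PurchaseInvoice:           # a "card" for a single business transaction
        number: str
        supplier: str
        amount: float

        def post(self, ledger: list) -> None:
            # a computational procedure: posting conducts entries into the ledger
            ledger.append(LedgerEntry(date.today(), "Expenses", self.amount))
            ledger.append(LedgerEntry(date.today(), "Payables", -self.amount))

    ledger: list = []                # the central repository of transactional data
    PurchaseInvoice("PI-001", "Acme Ltd", 500.0).post(ledger)
    # reports would then aggregate and visualize the ledger's contents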

The following software products are LARGELY designed in an accounting-like style:

  • Most accounting software products such as Zoho, Quickbooks, and Sage 50, as well as a lot of operational software systems for non-accounting fields such as CRM and SRM
  • Most ERP systems for medium and in many cases even for large enterprises. For example, ERPNext, Acumatica, Oracle NetSuite (at least before releasing NetSuite EPM at the end of 2023), MS Business Central, MS Dynamics 365 Finance, as well as at least the previous generation of financial modules of large-business-focused ERP systems such as SAP ECC FI/CO.
  • Most low-code platforms for creating ERP-like systems, for example, Frappe and Oracle APEX.

Most importantly, when creating custom software systems for business automation, it is highly likely that the Transactional-like style will be adopted.

Note #1. Transactionality here is just a conditional concept. We’re talking more about the fact that usability is built around working with single business transactions than about technical database transactionality features (although those are usually also provided).

Note #2. This style could be divided into two, considered separately: based on hard code versus based on no-code/low-code engines. However, this was not done within this article, since it might be redundant.

2.2. Spreadsheets (Excel etc.)

The core of spreadsheet functionality is a sheet of cells on which you, as an end user, can build almost arbitrary models.

The most typical software products of this style are MS Excel, Google Sheets, LibreOffice Calc, and OpenOffice Calc.

Note. Tables in software like Airtable, Notion, and Monday, are NOT of this style and have a different functional architecture.

2.3. BI (Business Intelligence)

BI is a software style based on UI-supportive tools for dimensionally structured data visualization and manipulations such as aggregation/calculation.
The user creates tabular-oriented datasets* (which can hold transactions, master data, aggregated quantitative data, etc.), makes formulas for certain columns (‘calculated fields’), and then pulls data from these tables into different forms of reporting visualizations that he can also construct himself.
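As an illustration, a “calculated field” over a tabular dataset behaves roughly as in this pandas sketch; the data and column names are made up, and no specific BI vendor's API is implied.

    import pandas as pd

    # a tabular dataset of the kind a BI user would register as a source
    costs = pd.DataFrame({
        "cost_center": ["CC1", "CC1", "CC2"],
        "plan":   [100.0, 200.0, 150.0],
        "actual": [110.0, 190.0, 180.0],
    })
    # a user-defined calculated field (a formula for a column)
    costs["variance_pct"] = (costs["actual"] - costs["plan"]) / costs["plan"] * 100
    # the result can then be pulled into whatever visualization the user constructs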

Examples of software products built around BI functionality are Power BI, Tableau, Qlik Sense (previously Qlik View), Google Looker (formerly Data Studio), Amazon QuickSight, SAP BusinessObjects, IBM Cognos Analytics (please do not confuse it with IBM Planning Analytics, which will be mentioned in the EPM section), MicroStrategy, Teradata, and many others.

Note. In some cases, BI functionality can be easily identified, for example in Tableau; in other cases, it can be tightly integrated with adjacent functionalities such as data engineering, data warehouses/lakes, and data management, making it more challenging to clearly identify its specific boundaries within the vendor’s ecosystem, for example in the case of Snowflake, Teradata, and to some extent Qlik. In the article, I try to extract the classical BI feature scope; however, it is not always possible.

* Footnote: such datasets can be modeled and stored in various forms; however, I consider their functionality similar to row-oriented tables even when the storage differs.

2.4. EPM (Enterprise Performance Management)

EPM may also be called CPM (corporate performance management) and less commonly BPM (business performance management).

Although the EPM term may cover a variety of vertical solutions for budgeting and adjacent fields (having different functional architectures), in the context of this article we are referring to the main architectural style in which, from the user’s point of view, working in EPM feels like manipulating “cubes”, with the ability to flexibly use some of the cells for automatic calculations and others for manual data entry.

There are a lot of common EPM software products on the market, such as Anaplan, Oracle EPM Cloud, Workday Adaptive Planning, Prophix, OneStream, Planful, Board, Cube Software, IBM Planning Analytics, CCH Tagetik, and many others.

Note. Such a famous product as SAP Analytics Cloud appears to be in the middle between BI and EPM software styles.

2.5. Cost Chain Modeling

This style is focused on providing the user with specific tools for modeling and processing data associated with long nested cost chains*.

Although the article uses the “Cost Chain Modeling” wording, it is not really a common name. Instead, engines of this type are often popularized using keywords such as “Cost Engineering”, “Product Cost Management”, “Product Costing”, “Cost Management”, or just “Costing”.
Also, some functionality marketed under the term “Profitability” can be implemented in this style.

Examples include software products focused on cost management like CostPerform and FACTON, as well as cost management functionality within PLM (Product Lifecycle Management) environments such as Siemens TeamCenter, SAP LifeCycle Costing, aPriori, OpenBOM, and other products.

* Footnote for mathematicians: actually we are talking about Directed Acyclic Graphs and in some cases cyclic graphs.
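For a taste of what processing such chains means computationally, here is a minimal sketch that rolls costs up a DAG in topological order; the chain and all figures are invented.

    import graphlib  # Python standard library (3.9+)

    # consumer -> the resources it consumes (a tiny hypothetical cost chain)
    consumes = {"Product": ["Assembly", "Steel"],
                "Assembly": ["Labor", "Steel"],
                "Labor": [], "Steel": []}
    own_cost = {"Product": 0.0, "Assembly": 10.0, "Labor": 50.0, "Steel": 30.0}

    total = {}
    for node in graphlib.TopologicalSorter(consumes).static_order():
        # when we reach a node, everything it consumes is already costed
        total[node] = own_cost[node] + sum(total[r] for r in consumes[node])
    print(total["Product"])  # 120.0: Assembly (10+50+30) plus directly consumed Steel (30)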

2.6. Some info related to the software market

The following considerations about the popularity of the software styles within the IT market may help explain some differences in functionality.

  • Software products based on Transactional-like and Cost Chain Modeling styles are more often “vertical”, having more industry-specific and process-specific nuances in their functionality. Products of the other styles (Spreadsheets, BI and EPM) are more often “horizontal”, that is, universal.
  • BI and EPM are commonly associated with the term “OLAP” (Online Analytical Processing), which can have functional or technical meanings depending on the specific context. This is in contrast to the “OLTP” (Online Transaction Processing) concept associated with Accounting-like architectures. Spreadsheets and Cost Chain Modeling software are not typically categorized using these terms.
  • Spreadsheets are extremely popular and obviously have the largest user base among the software styles reviewed.
  • Quantifying the exact market size of Cost Chain Modeling and Spreadsheets in $ monetary terms proves elusive, as they are often integrated within larger corporate entities or products.
  • The BI market is larger than that of EPM and seems also larger than the Cost Chain Modeling market.
  • It looks like EPM technologies to some extent “follow” BI, which may be due to the previous point. This is reflected in the dynamics of the development of such universal capabilities as import, consolidation, formula language, and AI. Also, there is a hypothesis that interest in implementing EPM often appears after interest in BI. These statements are subjective in nature, and perhaps you can provide criticisms or alternative perspectives.
  • The Cost Chain Modeling software also seems to be following BI in terms of universal functionality (formula language, connectors, consolidation, etc.), but in many ways slower than EPM. This is explained by the specifics of the market: vendors of engines of this type are not so focused on R&D of such universal capabilities and are more focused on solving engineering problems. However, these statements are also subjective.

Chapter 3. DETAILED CRITERIA-BASED COMPARISON

Now, let’s dive into each of the 20 criteria for comparing software styles, to detail the generalizations that were made in Paragraph 1.1.

[ 1 ] Creation of business data structures: users vs developers

Here we consider creating not new meta structures (such as “a table” or “a list”, which are still always determined by a programmer) but new business domain data structures (such as “production cost ledger”, “list of suppliers”, and so on).

  • In an Accounting-like software style, business data structures are mostly created by the programmer (-). Although in no-code platforms built in an accounting-like style end users can create some data structures, these are still simple, and implementing relatively complex logic requires programmer intervention.
  • In Spreadsheets, BI, EPM, a user can create business data structures (+). While there may be some exceptions (such as EPM vendors occasionally hard-coding basic dimensions like “time” and “version”), in general, it is the user’s responsibility to create the necessary data structures and build the entire domain data model.
  • In Cost Chain Modeling software, the user is typically given moderate opportunities to create data structures (+/-). This is markedly different from the Spreadsheets/BI/EPM concepts. A hybrid approach is used here: some data structures can be customized by the user, while others remain strictly limited by the developer. This is usually due to domain-specific features.

[ 2 ] Data structure types supported

Here, let’s take a look at three basic ways to present data.
As always, we are primarily focused on how the (virtual) business data structures can be presented to the user for manipulation, rather than on how the data is physically stored.

Row Table is perhaps the most popular form of data structuring, in which each record (a business transaction, event, rule, master data element, etc.) is represented as a separate row.

Picture 1. A conceptual example of a Row Table

Row Tables are great for collecting data, analyzing record batches, sorting data, and ensuring clear “drill down” functionality.

Card is a way to present a single holistic “object” of business logic (such as a single transaction, a single master data element, etc.), especially in cases where it has a complex structure.

Picture 2. Example of a Card interface for a single business transaction

Cards are good for CRUD (create/read/edit/delete operations), good for controlling user actions when working with the object (including automatic triggers and dynamic behavior of the visual form while the user works in it), and convenient for managing versions of a specific record. As can be seen from the picture, advanced card interfaces may contain row tables for subentities of the object, although the functionality of such row tables in cards is usually very narrow.

Intersection Table, which can also be conditionally called a cube-like or multi-dimensional form, is best for representing aggregated quantitative data such as costs. This table represents each quantitative value at the intersection of “classificatory” values (both master data dimensions and elements), which are structured in a convenient manner in the “cube headers.”

Picture 3. A conceptual example of an Intersection Table

Intersection Tables are great for data visualization and are also necessary for planning, in both cases because they allow large amounts of data to be grouped much more concisely than in a Row Table, which is critical for these use cases.
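The relationship between the two tabular forms can be shown in a few lines: an Intersection Table is essentially a Row Table regrouped around its classificatory values. A pandas sketch with invented data:

    import pandas as pd

    # a Row Table: one record per cost fact
    rows = pd.DataFrame({
        "cost_center": ["CC1", "CC1", "CC2", "CC2"],
        "month": ["Jan", "Feb", "Jan", "Feb"],
        "cost": [100.0, 120.0, 80.0, 90.0],
    })
    # the same data as an Intersection Table: dimensions in the headers,
    # quantitative values at the intersections
    cube = rows.pivot_table(index="cost_center", columns="month",
                            values="cost", aggfunc="sum")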

Let’s consider which data forms are implemented in different software styles.

  • In Transactional-like software, Cards are best represented (+), Row Tables are widely represented (+), and Intersection Tables are typically presented in view-only mode (+/-). In many cases, vendors disable direct data editing in Row Table interfaces. Instead, users are redirected to a separate card or form to create or edit records. Regarding Intersection Tables, you can program reports that present the data as a result of grouping in a “cube-like” form. However, vendors often don’t use the flexibility of cube-like forms to the fullest and, in some cases, prefer to provide only one dimension in the top header, while the number of columns may be hard-coded, i.e. only row elements are dynamic. There is usually no ability to enter or change data in such report forms.
  • In Spreadsheets, Cards are not typically represented (-), while, as users, we can imitate Row Tables (+) as well as Intersection Tables (+) well. We can create arbitrary matrices, create multiple levels of headers for rows and columns, and freely edit any data both in headers and at the intersections. Also, there is Pivot, a universal tool for creating view-only intersection tables. Among other things, it is very important that Spreadsheets are the only software style of those considered where the user can work with individual cells without creating any data structures for them.
  • In BI, document-like Cards are usually absent or poorly developed (-), a native implementation of Row Tables is provided (+), and Intersection Tables are well developed but usually view-only (+/-). While there are at least basic opportunities to enter data (usually through import) into a Row Table, it is often impossible to enter data into an Intersection Table by default, which significantly limits the applicability of BI for planning tasks and is what I consider the main reason for the development of EPM as a separate software style.
  • In EPM, Cards are weakly developed (-), Row Tables are presented rarely and mostly for data import (-), while Intersection Tables are advanced (+). The main specificity of EPM systems is the creation of relatively functional Intersection Tables with the ability to view, calculate using formulas and manually enter data in cells. However, in some EPM products “cube-like” forms have limitations, especially in terms of upper headers. Also, dynamic cube-like forms for viewing data (Pivot) are usually less developed than in BI and even Spreadsheets.
  • In Cost Chain Modeling software engines, Cards (+) and Row Tables (+) are typically presented, while Intersection Tables are moderate (+/-).

Please note that we didn’t consider some other data presentation forms here, such as hierarchy lists. That’s an omission; however, some relevant information is disclosed in Criterion [18].

[ 3 ] Data model strictness

From a user’s perspective, there is a need for integrity, consistency, uniqueness, and clear identifiability of business data throughout the software system. Let’s call it “Data Model Strictness”. However, technically this can be achieved in different ways.
Let’s distinguish two types of data models, hard-coded and constructor-like, while both of them are strict.
In the hard-coded case, a programmer creates the data model at several layers of the software architecture. In the figure, only two of them are shown (one physical and one virtual), but in fact there may be more. Both the Physical Data Model and the Virtual Data Model are usually shaped by the particular real-world domain data model, reflecting its classes (customers, suppliers, cost centers, etc.).

Picture 4. A possible hard-coded data architecture

In the case of a Constructor, the programmer mainly creates a constructor (meta) data model that provides meta concepts allowing a user to further create the particular domain data model. Such a (meta) model is typically virtual, while the underlying physical layer can also exist and has a strictly technical meaning.

Picture 5. A possible constructor-like data architecture

As we mentioned above, both approaches make it possible to provide a strict data model for an end user.
However, in the second case, the software system developer does this once. By creating a constructor, he creates universal rules for connections between the architecture layers, universal rules for generating user interfaces, a universal formula language for the end user, and so on. The user is then able to create/delete both business logic classes and business formulas, and all the developed mechanisms are automatically applied to them by default: a UI for each class is automatically generated, records are automatically created/deleted in the physical database, and so on.
In the first case, system developers have to ensure consistency between architecture layers individually for each case when the domain data model and/or business formulas are updated. For example, developers may need to add a “supplier” table to the database, create a “supplier” class at the core back-end virtual data model layer, develop UI for the supplier list and card, and ensure consistency between these architectural layers individually for the supplier logic case.
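A toy sketch of the difference: in the constructor case, the developer ships only meta concepts, and every class the user creates gets storage and generic behavior automatically. This is entirely schematic and does not model any real platform.

    # The developer programs only the meta model...
    class MetaClass:
        def __init__(self, name, fields):
            self.name, self.fields, self.records = name, list(fields), []

        def create_record(self, **values):
            # one universal rule validates ANY user-defined class the same way
            unknown = set(values) - set(self.fields)
            if unknown:
                raise ValueError(f"{self.name} has no fields {unknown}")
            self.records.append(values)

    # ...while the user assembles the domain data model at run time:
    supplier = MetaClass("Supplier", ["name", "country"])
    supplier.create_record(name="Acme Ltd", country="DE")
    # a generic UI generator would render a list and a card for "Supplier"
    # with no supplier-specific code written by the developer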

Now let’s see how the data model is built in different types of software:

In the “usual” software style, a data model is strict (+).

  • The default is the hard-coding way, although constructors can also be used in some cases.

In Spreadsheets, the data model is NOT strict by default (-), HOWEVER, vendors are gradually adding the necessary tools (+/-).

  • As it is, the non-strict default data model is the downside of maximum flexibility. Users can create data first and then structure it, which differs from other types of software where structures are typically created before data entry. By default, spreadsheets identify the data by cell coordinates, which are completely anonymous and do not have any domain meaning (“A1”, “A2”, “B2”, etc.). This distinguishes spreadsheets from other software, where data exists in tables/fields/classes/collections whose names have at least some meaning. Here identification occurs dynamically, depending on the specific use case. For example, in the “filter” use case, the user selects an array of cells, and the algorithm can consider all the cells on the top row as headings. Such headers temporarily become a kind of analog of column names in a SQL database.
  • This causes almost all the classic disadvantages of spreadsheets: increased risk of manual errors, lack of automatic control of data types in cells, difficulty in ensuring data consistency and integrity, lack of convenient navigation, inability to manage “models” and their large parts as a whole, security restrictions. Also, this affects difficulties in accessing cells and overloading query language, which will be discussed in other criteria.
  • HOWEVER, there are some data-model-related tools presented today. They allow the end user to organize data into structures like the relational model (co-depended row tables) and then use simplified constructs in formulas to refer to it. It is especially important to note that not all of these tools have names explicitly related to “data models” keywords. Such tools are vendor-specific and may have some limitations but are still developing over time.
  • It is interesting that third-party analysts often continue to criticize the classic shortcomings of spreadsheets, which is largely justified, but a detailed analysis of how modern data modeling tools help resolve these issues is rarely presented.

In BI and EPM styles, a data model is strict (+).

Regarding the data model type, it is usually constructor-like. By the way, theoretically such an architecture should ensure that these BI/EPM software products always have a single formula language implemented at only one architectural level; however, in practice, this is not always the case.

In Cost Chain Modeling software, a data model is strict (+).

In such a style, a hybrid of hard-coding and constructor-like ways can be found.

[ 4 ] Smooth continuous sequential real-time auto-recalculation

  • In an Accounting-like software architecture, smooth recalculation is typically not supported in a general way (-).
  • In Spreadsheets, BI, EPM, and Cost Chain Modeling software, smooth recalculation can be found (+). In Spreadsheets this is a general feature: each time a user changes a cell, the engine automatically recalculates the data in all other cells that depend on it (and only them). Regarding BI, EPM, and Cost Chain Modeling engines, this functionality varies greatly by vendor: some vendors support native recalculation in a way similar to spreadsheets; some offer mechanisms for user-controlled, triggered recalculation chains; some offer to run recalculation of entire models.
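A minimal sketch of the spreadsheet-style mechanism described above: when one cell changes, only the cells that depend on it are recomputed. Cell names and formulas are invented.

    # plain cells plus formula cells with their input dependencies
    values = {"qty": 10.0, "price": 5.0}
    formulas = {"cost":     (lambda v: v["qty"] * v["price"], ["qty", "price"]),
                "cost_vat": (lambda v: v["cost"] * 1.2,       ["cost"])}

    def set_cell(name, value):
        values[name] = value
        dirty = {name}
        # formulas are listed in dependency order; recompute only affected cells
        for cell, (fn, deps) in formulas.items():
            if dirty & set(deps):
                values[cell] = fn(values)
                dirty.add(cell)

    set_cell("price", 6.0)  # recomputes cost (60.0), then cost_vat (72.0), nothing else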

[ 5 ] Computing performance limit (for interactive working with cost data)

Please note that we are looking at cost model recalculation here, which is a specific type of computational workload.

  • In Accounting-like software, the performance limit seems low for end users when working with cost data (-). In fact, in this type of software architecture, the vendor has tremendous technological capabilities for optimizing specific domain-specific computations.
  • However, for all user scenarios where rapid data processing is required for cost-related purposes (such as what-if analysis, or recalculating driver-based models as the user edits them), the performance limit appears very low from the user's perspective, for two main reasons. Firstly, such tasks are simply not in the focus of this software: when optimizing back-end performance, the vendor's main concern is smooth regular multi-user operation of the system (which is closer to OLTP), rather than real-time user manipulation of large amounts of data. Secondly, interfaces and use-case logic are not adapted for this either; auxiliary measures, such as having the user aggregate data to the required levels before recalculation, are not imposed, and so on.
  • Spreadsheets have a relatively low (compared to advanced OLAP technologies) performance limit (-), which, HOWEVER, is rarely reached by most high-level finance models. Reaching the performance limit depends heavily on the granularity of the records used in the models. If you prepare budgets in top-level sections (cost centers, cost items, projects, product lines), and at the same time perform intermediate data aggregation to these sections, then you are unlikely to hit tangible performance limits even if your business is very large. But if you want to provide a full drill down and therefore use lists with a large number of records in cost models (for example, batches, parts, and other low-level data elements), the limit is reached faster.
  • In BI, EPM, and Cost Chain Modeling engines the performance limit is relatively high (+) but varies greatly by vendor. In BI and EPM systems, users typically work with relatively aggregated data, which may create an initial impression of high performance. When working with large volumes of highly detailed data, the performance of many BI/EPM solutions is quite limited. Nevertheless, the upper performance thresholds encountered in the BI industry are very high. The upper thresholds in the EPM industry are quite high but (according to my information, which should not be taken as absolute truth, but which deserves verification) somewhat inferior to BI. The upper thresholds of Cost Chain Modeling engines are very high and are approaching BI (again according to my info).

Detailed performance analysis is quite difficult to conduct. First, different solutions provide slightly different data models, and when modeling business scenarios, we in fact receive models of different sizes. Moreover, the proportions between these sizes are also not fixed but depend on the specific business scenario. Second, vendors offer different performance optimization means, and you should implement models taking into account as many of them as possible (building models according to vendor-specific best practices, identifying bottlenecks through load monitoring, parallelizing user-run computations, deployment infrastructure options, etc.).

[ 6 ] Rules (formulas) creation: user vs developer

  • In Accounting-like software, formulas are typically pre-determined by the programmer (–). In some cases, developers may allow end users to make certain adjustments within a limited range of settings. However, these settings are usually narrow, which means that such mechanisms can only be developed with narrow adaptation to strictly defined, highly specific variants of business rules.
  • In Spreadsheets, BI, EPM, and Cost Chain Modeling architecture, users can create calculation formulas (+). However, approaches to formula architecture vary significantly among vendors, especially in the case of EPM. The range of approaches includes, in particular, low-code formulas within a cube’s user interface, Excel add-ins, internal interfaces for scripting languages, and environments for defining formulas at the level of “physical cubes”; in many cases, a vendor offers several approaches simultaneously. Of course, this means the actual approaches differ greatly in convenience and in the level of technical skill required.

[ 7 ] Formula for a cell vs formula for a dimension

  • In Accounting-like, BI, EPM, and Cost Chain Modeling architecture styles, a formula by default works for an entire dimension. This applies to end-user formula editor tools as well as to the approach of writing algorithms in programming code.
  • In Spreadsheets, a formula works for an individual cell by default. This means that, basically, you maintain a separate formula for each cell value. If you want to use similar formulas for many cells, you must spread the formula to each of the cells by “cloning” it. Note: over time, some tools have become available that allow you to create formulas for dimensions. However, such tools are not classical, and their use may still have some limitations.

Both ways have their pros and cons, so it cannot be said that one is better overall.
The dimension-based formula is obviously better when you have low variation in calculation rules within a dimension (that is, the same measure is calculated the same way in all cases). Vice versa, if the rules for calculating the same indicator are highly variable, in this approach you have to put them all into one formula, overloading it with a large number of IF/THEN constructions, which is inconvenient.
A cell-based formula, on the other hand, is better if you have a lot of unique formulas. If the formulas are of the same type, then in such an architecture you get additional labor costs for cloning formulas.
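The contrast is easy to show in code (invented figures): one vectorized rule covers a whole dimension, while the cell-based way clones a formula per cell but lets any cell deviate.

    import numpy as np

    plan   = np.array([100.0, 200.0, 150.0])
    actual = np.array([110.0, 190.0, 180.0])

    # dimension-based: ONE formula for the entire measure...
    variance = actual - plan
    # ...but rule variability must be packed into that one formula as conditions
    variance_capped = np.where(plan > 120.0, actual - plan, 0.0)

    # cell-based (spreadsheet-like): one formula per cell, cloned across the range;
    # each cell can deviate freely, at the price of maintaining many copies
    cell_formulas = [lambda: actual[0] - plan[0],
                     lambda: actual[1] - plan[1],
                     lambda: (actual[2] - plan[2]) * 0.5]  # a one-off exception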

[ 8 ] Convenience to create and maintain calculation rules (formulas, algorithms)

Since the Accounting-like architecture doesn’t assume user-made formulas, in this criterion (and in some of the following criteria up to Criterion [13]) we’ll compare its programmer-made formulas with user-guided formulas in the other styles.

This criterion is more complex than it may seem at first glance since we must consider the ease of maintaining formulas throughout their entire “life cycle”.

  • In the Accounting-like software architecture, the formula language is the regular one for developers (+/-). For simple formulas, this way can be code-heavy and require the programmer to perform relatively many actions. At the same time, the more complex the set of computations becomes, the more justified this approach is. Typically, most of the calculation logic is concentrated on the back-end side, where modern software development technologies provide programmers with the most advanced tools for code reuse, code navigation, and the integration of auxiliary tools (libraries, etc.) for specific non-standard tasks. In the case of a hard-coded data model, programmers can arbitrarily distribute business calculations between the technical architecture layers, while each of the layers and languages can have different pros and cons (for example, SQL may have simpler constructs for some common row-based calculations than back-end languages such as Java, but requires JOIN-overloaded code to access data instead of simplified constructions like “CostCenter.Name”).
  • In Spreadsheets, the cell formula language is the easiest to use when working with data of low variability (+), but becomes overwhelming when building large, complex models (-). Spreadsheets provide the easiest way to reference a specific cell. Largely due to this, with small computational models it seems that the formula language in spreadsheets is the simplest and most business-oriented. However, with large models, the formulas become difficult to handle, especially where you don’t make maximum use of the tools that simulate a strict data model. In this case, the language can get overloaded with dozens of redundant constructs, such as nested INDEX/VLOOKUP calls. In addition, the tools for managing the codebase in Spreadsheets are underdeveloped (such as dividing formula code into logical blocks, code commenting, code reuse, and navigation), which makes usual constructs like IF/THEN also uncomfortable to read and maintain. On the other hand, spreadsheets provide a unique capability to highlight influencing cells: while reading a formula, you can select one of its elements, and the system will visually highlight the cells that this element refers to. For some reason, this option is usually not found in other software styles.
  • In BI, the formula language can be advanced (+/-). Compared to spreadsheets, referencing specific cells can be more challenging in BI systems, and the visual highlighting of dependent cells may not be available. However, overall, I find formulas in BI systems to be more convenient, not only because of the natively strict data model but also largely due to the more advanced capabilities for structuring, navigating, and debugging formulas. However, please note that there is a significant variation among vendors.
  • In EPM, formula language convenience significantly depends on the vendor and on average is approaching BI (+/-). When considering the multidimensional data modeling concept underlying EPM engines, it is expected to provide extensive capabilities and convenience in terms of the formula language. However, in practice, EPM vendors adopt diverse approaches to formula design, and while the best examples can be comparable to top BI systems, they generally do not surpass them.
  • In Cost Chain Modeling software, formula maintenance capabilities vary. On the one hand, with regard to the universal formula language, they seem to follow BI/EPM systems with some lag. On the other hand, some very advanced tools, including no-code ones, can be found in this type of software for setting up certain domain-specific calculation types.

[ 9 ] Complex & truly dynamic calculations

Since an accounting-like architecture expects the software developer to program domain calculations, these capabilities will be compared to how users can perform these calculations in other software styles.

Here we consider general capabilities to make “complex” calculations manifested in two correlated things:

  • Ability to perform relatively complex calculations that go beyond basic arithmetic, groupings, if/then conditions, lookups, and data transformations. Examples are loops, solving systems of linear equations (and linear algebra generally), cross joins, numerical analysis, and others.
  • Dynamism, which can also be referred to as formula contextualization or secondary (tertiary…) automation. This is the ability to make your calculation rules so flexible that they can use as influencing parameters not only data lying in pre-defined structures, but also a more dynamic context defined on the fly, as well as metadata values. The less dynamism there is, the more often you will have to create intermediate data structures for the results of previous calculation steps and explicitly reference them in the rules of subsequent calculation steps.

As an example of a truly complex dynamic algorithm, imagine that we have many stages of a production process. At different stages, joint production or cross-consumption may situationally occur. If we had full dynamism, we could theoretically create an algorithm that would analyze the production process until co-production or cross-consumption was discovered, then automatically create a system of equations for it, solve it, and use the results for further calculations, looping until the complete chain has been solved. If there is no dynamism, we will most likely have to first identify each specific cross-consumption and co-production case, then manually set its parameters as parameters of the system of linear equations, then, after solving the system, put its results into a data structure, and then repeat the same steps for the other specific cases.

Let’s look at how the complex and dynamic calculations can be implemented in different types of software.

“Usual” software architecture provides the most capabilities to make complicated computations (+) for software developers.

  • Among the types of software under consideration, this architectural style provides the greatest opportunities for programming an applied computing complex for specific business needs. It allows us to turn into variables what vendors of other types of software define as non-obvious constants, including operations with metadata. The calculation rules we program can generate intermediate “virtual” structures of various types and of non-pre-defined size (in all planes, not just along rows), then analyze their size and use it as a condition for further calculations. The rules can automatically assign meaningful identifiers to the dynamically created virtual structures, and cache and reuse the created collections. They can apply loops to conditions, conditions to loops, loops to the creation of virtual structures, and so on and so forth.

In Spreadsheets, formulas are advanced but still limited especially in part of secondary automation (+/-).

  • In spreadsheets, you can implement many advanced types of calculations, including the calculation of integrals, loop calculations, basic linear algebra algorithms, etc. Also, in addition to functions that calculate a single value, a number of non-standard functions are available whose result is a collection of data of dynamic size (for example, see the “=FILTER” function in MS Excel). However, there are some limitations and/or inconveniences to implementing very complex dynamic calculations.
  • An ultimate case is to use script languages (such as Visual Basic or Apps Script), which are offered within the modern spreadsheet ecosystem. On the one hand, this approach removes most of the limitations, since such languages are full-fledged general-purpose programming languages whose functionality is close to languages used on the back end such as Java, Python, C#, etc. On the other hand, by following it you may soon find that you get almost all the disadvantages of traditional back-end programming (such as a heavy codebase) without getting its advantages (in particular, in code reusability, control over the environment and dependencies, computation distribution, code development and navigation tools, wider virtual data modeling capabilities, etc.). Thus, the main features of the spreadsheet architecture are leveled out when we follow this path.

In BI, the depth of calculation capabilities is relatively high (+).

  • BI vendors support a varying number of computing functions, some with hundreds of functions, including quite complex calculations.
  • In addition, it can be noted that they offer separate tools for carrying out stochastic computations (including AI/ML), which can also be considered complex algorithms.
  • As for the dynamism of calculations, it looks higher than in Spreadsheets, but still more limited than in regular back-end programming. For example, a fairly typical limitation is that the columns of dynamically created data collections must be pre-defined, so the dynamism in this case extends only to the creation of rows.
  • Regarding script languages in BI: a separate important point is the tools provided by BI vendors to connect external scripts, for example in Python, to their data. In general, considerations similar to those I applied above to the use of scripting languages in spreadsheets can be applied here.

In EPM, the calculation capabilities are not as deep (-).

  • In my estimation, the depth of the calculation tools provided by EPM is on average lower than in Spreadsheets and BI. As for the restrictions on dynamic calculations, they can be quite serious here: for example, in comparison with BI, the limitation of not only columns but also rows in the generated data collections looks more common. As a result, in EPM you often must first define all the data structures and only then carry out calculations on them, which imposes restrictions on some ways of processing cost data.
  • On the other hand, it should be noted that EPM systems (like BI) today often offer separate tools for performing AI/ML computations, which is not found so often in Spreadsheets and Accounting-like software.
  • With regard to the use of external scripts (such as Python), there are typically fewer capabilities to be found in EPM engines than in BI.

In Cost Chain modeling software, the general-purpose computation capabilities are less developed than in Spreadsheets and BI, however, at the same time, some complicated domain-specific algorithms can be added that are not found in other types of software (for example, automatic cyclical cost calculation for modeled chains, which will be mentioned below).

[ 10 ] Vlookup (Left Join) and ability to refer to nested Attributes

Since the Accounting-like architecture doesn’t assume user-made formulas, in this criterion we’ll compare its programmer-made formulas with user-guided formulas in the other styles.

Functions that search one array for records that can be associated with records in another array are fairly common in financial data processing. Such functions can be called Left-Join-like, or Lookup-like.

In particular, an important Vlookup-like calculation is the reference to a record’s “Attribute of an Attribute…” (with a different number of nesting levels). For example, a relatively complicated case may look as follows:

Picture 6. An abstract example of nested attribute data structure

When analyzing software, it is important to pay attention, firstly, to the very ability to refer to nested attributes and, secondly, to the possibility of using simplified constructs for this, something like:

= CostRecord.Project.CostCenter.FunctionalDirection.Name

or

  = Name [ Functional direction [Cost center [ Project [ Cost record ] ] ] ]
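Under the hood, such a construct collapses a chain of left joins. A pandas sketch of the equivalent explicit joins; all table and column names are hypothetical:

    import pandas as pd

    cost_records = pd.DataFrame({"id": [1], "project_id": [10], "amount": [500.0]})
    projects     = pd.DataFrame({"project_id": [10], "cost_center_id": [20]})
    cost_centers = pd.DataFrame({"cost_center_id": [20], "direction_id": [30]})
    directions   = pd.DataFrame({"direction_id": [30], "name": ["Logistics"]})

    # CostRecord.Project.CostCenter.FunctionalDirection.Name, spelled out:
    resolved = (cost_records
                .merge(projects,     on="project_id",     how="left")
                .merge(cost_centers, on="cost_center_id", how="left")
                .merge(directions,   on="direction_id",   how="left"))
    print(resolved.loc[0, "name"])  # Logistics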

Let’s compare the software styles from this standpoint.

  • In the Transactional-Like software architecture, you are free to do a Left Join as a programmer (+). However, this option is usually not available to the end user, even in advanced no-code engines.
  • In spreadsheets, you can freely do operations like VLOOKUP and even HLOOKUP (+), but with some inconveniences related to the fact that formulas get overloaded in large models, especially if you haven’t made maximum use of the tools that simulate a strict data model, as I mentioned before.
  • In BI, you can join datasets (+) and even do it without code. Depending on the vendor, different user tools can be provided, for example, visual linking of tables, adding chains of calculated columns to tables, and/or writing nested joins. Besides, it may even be possible for the user to apply link conditions in adjacent cases, such as filtering, without writing formulas. In terms of ease of use, expect on average that dealing with many levels of attribute nesting will be more cumbersome than accessing an attribute of one directly related table.
  • In EPM, such operation is mostly supported, but the number of vendor-specific constraints is higher on average (+/-). Although there is no native row-table architecture in EPM, some lookup-like functions are usually available to transfer values from a column of one data structure to a column of another. At the same time, depending on the vendor, there may be some workarounds and limitations, the number of which I subjectively assess as exceeding their number in BI.
  • In Cost Chain Modeling software, decent capabilities for referring to attributes can be found. Typically, there is no universal Left Join function, however, vendors provide some other features to make a reference to attributes (in some cases even nested) in formulas.

Please pay close attention to this criterion and consider how this feature is implemented in each specific software product when you design or select it.

[ 11 ] Loop calculations

Since the Accounting-like architecture doesn’t assume user-made formulas, in this criterion we’ll compare its programmer-made formulas with user-guided formulas in the other styles.

Loops can be considered “relatively complex” algorithms (which were generally discussed in Criterion [9]). Although they are not as complex as higher math, they are still less represented in business apps than basic arithmetic and row operations. Since loops are very significant in Cost Accounting automation (and not only there), they are worth considering individually.

Generally, we need loops in all cases where we don’t know in advance the exact number of calculation iterations needed. The most important case is computation over long cost chains, but there can also be cases of value allocation between different levels of a hierarchy, incrementing sums by a cumulative total that must stop after reaching some threshold value, and other iterative calculations needed in business.
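As a minimal sketch of the threshold case just mentioned (figures invented): costs are accumulated period by period, and the number of iterations is unknown until the loop runs.

    # hypothetical period costs and a stop threshold
    monthly_costs = [120.0, 80.0, 150.0, 90.0, 200.0]
    threshold = 300.0

    accumulated, periods_used = 0.0, 0
    for cost in monthly_costs:       # we cannot know in advance how many steps we need
        accumulated += cost
        periods_used += 1
        if accumulated >= threshold:
            break
    print(periods_used, accumulated)  # 3 350.0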

  • In a “usual” software architecture, the software developer is free to make loop calculations (+). The main way is to write loops in the main back-end programming language. This is the approach you should take if your computational algorithms are truly complex and dynamic. Domain loop calculations can also be implemented at the SQL code level, but this way has more limitations.
  • In Spreadsheets, loop calculations are possible although there are more limitations and/or workarounds (+/-). Vendors provide different approaches but anyway, typically you CAN perform loops, and in some cases even implement limited secondary automation.
  • In BI, loops are mostly possible but with varying levels of functionality and convenience (+/-). Here the variability by vendor can be even higher. At the level of internal formula language, some vendors don’t support loops at all, some of them support it with limitations and/or workarounds, and some of them support loops natively. As for external (Python etc.) scripts, loops are of course possible there, and thus, with all vendors that support their connection, this task is at least achievable.
  • In EPM, loops are rarely supported, and the number of limitations can be higher (-). Few EPM vendors provide the ability to use loops in formulas, especially in a universal way. Even where it is possible to implement loops, workarounds and restrictions are more likely to be found than in BI. As for external scripts (Python, etc.), in some cases they are available; however, this is too much of a workaround for building loops.
  • In Cost Chain Modeling software, “general-purpose” loop tools for end users are not very common; however, specific built-in mechanisms for cost loop processing can be found (+). Such specific engines, in particular, are inextricably linked with nested cost chain modeling mechanisms (see Criterion [16]). Since this is one of the most complex and important tasks in the Cost Accounting domain, a plus is assigned for this criterion.

[ 12 ] Solving a system of linear equations

Since the Accounting-like architecture doesn’t assume user-made formulas, in this criterion we’ll compare its programmer-made formulas with user-guided calculations in the other styles.

This is a calculation of advanced complexity that is important in some cost management cases.

We need it when we have cross-dependent consumption, i.e. in cases where two or more resources/products/activities consume each other within the same time period and we can’t decompose them into a chain of discrete, subsequent, one-directional steps. In such a case, we need to solve a system of linear equations for two-directional cost allocation.
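The textbook example is two service departments consuming each other's services. If IT's full cost is X = 100 + 0.2·Y and Maintenance's is Y = 60 + 0.1·X (all figures invented), the 2×2 system can be solved directly, for example with numpy:

    import numpy as np

    #  X - 0.2*Y = 100   (IT: own costs 100, consumes 20% of Maintenance)
    # -0.1*X + Y = 60    (Maintenance: own costs 60, consumes 10% of IT)
    A = np.array([[ 1.0, -0.2],
                  [-0.1,  1.0]])
    b = np.array([100.0, 60.0])
    print(np.linalg.solve(A, b))  # ≈ [114.29, 71.43]: the reciprocal full costs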

  • The “usual” architecture gives the software developer the freedom to handle this task (+). Many programming languages now have ready-made libraries that allow us to easily embed the solving of linear equations into user computational flows. By the way, regarding ready-made software, it is worth noting that many off-the-shelf accounting applications and ERP systems designed for manufacturing industries do support some ready-made domain-specific algorithms for co-dependent costing.
  • In Spreadsheets, such a function is available (+); however, scalability is limited. Typically, you need to manually run the math optimization tool on a ready-made dataset and/or write supporting scripts (e.g. VBA/Apps Script). Since at least the first of these is a native Spreadsheet tool, I put a plus for this criterion. On the one hand, calling such a solution on ready-made data sets looks simpler and more convenient than, for example, in BI. On the other hand, if you need dynamic and scalable functionality for solving systems of linear equations, spreadsheets do not appear to be the best fit.
  • BI software usually does not support solving systems of linear equations natively; however, it is possible via external scripts (+/-). Natively, BI engines typically do NOT support such a computation, so if you need a bi-directional distribution of costs within a period, you may need to write an external (Python, etc.) script and connect it to BI data. On the one hand, in this architecture, secondary automation may be somewhat more convenient than in Spreadsheets; on the other hand, the implementation looks somewhat more indirect.
  • EPM typically does not support solving systems of linear equations (-). EPM vendors usually do not provide internal mechanisms for such calculations. In some cases, connecting external (Python, etc.) scripts to solve a system of equations is available, but I subjectively assess their applicability in this case to be somewhat lower than in BI.
  • Cost Chain Modeling software typically doesn’t support universal tools for solving linear systems of equations, HOWEVER, vendor-depended domain-specific mechanisms can be found (+). I managed to find a native functionality for solving cost cross-allocation problems among software products of this style, so I give a plus in this criterion.

[ 13 ] Cross Join (Cartesian product of rows)

Since the Accounting-like architecture doesn’t provide for user-made formulas, in this criterion we’ll compare its programmer-made formulas with the user-driven calculations in the other styles.

This is another computation of relatively “advanced” complexity that is specifically important in cost accounting.

Mathematically, this means the Cartesian product of the rows of tables (matrices): as a result, we obtain all possible combinations of the rows.

Let’s look at an abstract example of Cross Join in cost accounting, where we get the intersections of cost items rows and product rows during cost allocation:

Picture 7. An abstract example of Cross Join

Cross Join in this case performs the task of populating the resulting table with all the possible Cost item * Product intersections. (At the same time, the calculation of the Costs allocated column is not the result of the Cross Join per se: it can be done using more widely supported operations such as summation, division, and Left Join/VLOOKUP.)

This function is especially important for business cases where the catalogs of elements “from which” costs are allocated (for example, Resources) and elements “to which” costs are allocated (for example, Products) are dynamic, that is, the number of elements in them can change, and the elements interact as many-to-many (for example, the same resource can be used to produce different products). Cross Join allows the computational algorithm to scale automatically to newly emerged master data elements without requiring users/developers to manually maintain and modify the result table structure.
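For reference, here is what a Cross Join looks like in code: a minimal Python/pandas sketch with hypothetical catalogs (in SQL, the same operation is the CROSS JOIN clause):

```python
import pandas as pd

# Hypothetical dynamic catalogs: rows may be added or removed over time.
cost_items = pd.DataFrame({"cost_item": ["Rent", "Salary"]})
products = pd.DataFrame({"product": ["Chairs", "Tables", "Doors"]})

# Cartesian product of rows: every cost item paired with every product.
allocation_base = cost_items.merge(products, how="cross")
print(allocation_base)  # 2 x 3 = 6 rows, rebuilt automatically on each run
```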

  • In the “usual” software architecture, the programmer can easily multiply rows (+). Typically, for accounting tasks, the ready-made Cross Join function at the SQL level is used, but this can also be programmed in the main backend logic. When it comes to low-code/no-code accounting platforms, personally, I was not able to find options there for the user to perform a Cross Join. Regarding off-the-shelf software, cost allocation functionality is entirely up to the vendor.
  • Spreadsheets generally allow the user to multiply rows (+/-). Some vendors support it natively within their formula language, and some may provide workarounds. However, dynamism and secondary automation are typically limited.
  • BI software often allows the user to perform a Cross Join (+). Convenient internal mechanisms for multiplying tables can be found, which I rate highly. In addition, solving this problem by connecting external scripts is of course also possible.
  • EPM software typically doesn’t support Cross Join in a general way (-). This is largely due to the architecture itself, since row tables are underdeveloped there. Individual vendors may provide partial, domain-specific mechanisms to achieve similar results, but you are more likely to encounter workarounds than in Spreadsheets or BI software.
  • Regarding Cost Chain Modeling software, universal tools for performing a Cross Join are typically not present; HOWEVER, specific internal cost allocation tools with mechanisms that provide similar benefits can be found (+).

[ 14 ] Capabilities for manual data entry and editing

Here it is necessary to take into account, firstly, the possibility and convenience of entering/editing data in cells in the UI, and secondly, the possibility of imposing behavior on the system during manual data entry, for example, extended validations, restrictions, security rules, triggers, etc.

  • Accounting-like, EPM, and Cost Chain Modeling architectures generally provide good capabilities for manual data entry (+). Among the nuances, we can highlight the following. Firstly, accounting software is usually very inconvenient for entering planned values for a number of reasons (lack of flexible work with aggregates, poor support for cuboid input forms, and generally rigid data structures). Secondly, theoretically, transactional-like software is best suited for data entry when it comes to actual data. Nevertheless, EPM/Cost Chain Modeling mechanisms can provide enough capability for most practical cases. However, some vendors (especially in the Cost Chain Modeling field) may impose workarounds such as using Excel as the UI for some data entry tasks; such approaches have recently become less common, but they can still be found.
  • Spreadsheets provide the most flexible data entry capabilities (+), but poor support for customizing the system’s behavior (-). As in all other cases, the greatest freedom is both a plus and a minus of spreadsheets. The user can enter data almost anywhere (with some exceptions, such as Pivot cells). Spreadsheets support basic validation (by data type), and there are some capabilities for creating triggers; however, in general, the implementation of behavior mechanisms is much less developed than in the other types of software.
  • BI typically provides comparatively worse data entry capabilities (-). Some vendors are gradually adding basic data entry capabilities to the user interface, and BI supports the creation of validation rules. However, in general, there is still a severe lack of manual data editing capabilities in financial models.

[ 15 ] Visualization (Reporting) capabilities for a user

Reporting functionality in information systems has many aspects, and a separate study might be required to explore this topic in more depth. Here we mainly focus on the user’s ability to create dashboards and reports using various advanced forms of data visualization.

  • In the transactional-like architecture, reports are mostly pre-programmed and rigid (-). Despite the fact that each report can be highly customizable, the number of reports that need to be created by the programmer typically scales proportionally with the number of data objects processed in the system. As the software incorporates a wider range of business transactions and utilizes more dimensions, the workload for the programmer increases when developing reporting functionalities.
  • Spreadsheets offer advanced visualization capabilities (+). When it comes to unstructured reports as well as ad-hoc reports, Spreadsheets seem to be the best tool because they allow you to manipulate data and cells quickly and arbitrarily. When it comes to structured reports, there is the basic Pivot tool, as well as various charts and graphs for visualizing data. As for things like cross-report navigation usability, these have traditionally been quite poorly developed here, but modern versions of Spreadsheets are working to improve this.
  • BI is traditionally considered the best general-purpose data visualization software style (+). Compared to Spreadsheets, BI software typically offers users enhanced navigation, a broader range of advanced visualization types, increased interactivity, and dynamic reporting capabilities. However, BI architecture typically adheres to more structured forms of data presentation. As a result, when it comes to unstructured reports that do not conform to tabular forms, as well as many ad hoc visualization tasks, Spreadsheets still outperform BI.
  • EPM offers decent visualization capabilities (+/-). Although they are certainly inferior to BI in general, the overall architectural approach is similar, which provides some corresponding advantages over Spreadsheets (especially in terms of navigation). On the other hand, in some universal tools, such as Pivot, EPM is typically inferior to both BI and Spreadsheets.
  • Cost Chain Modeling engines have various vendor-specific and domain-specific visualization capabilities. It seems difficult to give a general assessment here, so let’s just note some considerations. In terms of universal visualization capabilities, such engines definitely can’t compete with BI and Spreadsheets. Comparison with EPM is very complex and vendor-specific. From a theoretical point of view, the EPM architecture should be much better suited to giving the user convenient, universal data manipulation when building reports. From a practical point of view, the capabilities of both EPM and Cost Chain Modeling engines are extremely vendor-specific. For example, in some individual cases, Cost Chain Modeling engines provide more flexibility in basic tools such as a universal pivot than some EPM engines do. At the same time, Cost Chain Modeling vendors still often use MS Excel as an external tool for data visualization, and in my opinion, in this regard, they are less progressive than the vendors of all the other types of software, who are already moving away from this practice. Among other features, special domain-specific visualizations for cost trees may be present that are not available in other solutions. Moreover, you can find the ability to visualize cost data integrated with 2D/3D models of your products and/or parts.

[ 16 ] Nested cost chain modeling experience

Here we refer to functionality where end users can create a chain of the resources consumed and/or processes performed in a business, which serves as the foundation for further cost allocation. The idea is to transfer cost values through the links of the chain in order to calculate the cost of a desired target link.
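Before comparing the styles, here is a minimal sketch of the underlying idea with a hypothetical three-link chain; the node names, coefficients, and costs are all illustrative assumptions, and real engines add BoM management, versioning, and UI on top of this:

```python
# A nested cost chain as a directed graph: costs roll up from leaf
# inputs through intermediate links to the target link.
# edges[(source, target)] = quantity of `source` consumed per unit of `target`
edges = {
    ("Steel", "Frame"): 2.0,
    ("Frame", "Chair"): 1.0,
    ("Labor", "Chair"): 3.0,
}
unit_cost = {"Steel": 5.0, "Labor": 10.0}  # costs of the leaf inputs

def rolled_up_cost(node: str) -> float:
    """A node's cost is the cost of its direct inputs, each multiplied
    by the consumption coefficient on the corresponding edge."""
    if node in unit_cost:
        return unit_cost[node]
    return sum(qty * rolled_up_cost(src)
               for (src, tgt), qty in edges.items() if tgt == node)

print(rolled_up_cost("Chair"))  # (2 * 5) * 1 + 3 * 10 = 40.0
```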

  • The “usual” software architecture style is generally not suitable for modeling chains of indefinite length in the UI (-); HOWEVER, some vendors offer manufacturing-specific BoM tree building engines (+/-). Typically, accounting-like software prompts the user to create documents that correspond to several predefined types of manufacturing, transportation, and other similar process stages. However, there is a common exception in manufacturing-oriented accounting/ERP-like software, where an end user can create nested BoMs (Bills of Materials). Their interfaces are convenient and serve as a modeled cost chain for further automated calculation. However, compared with the tools presented in specialized software, their capabilities seem more moderate. Note: it might be more correct to consider such functionality not as part of Accounting-like engines as such, but as Cost Chain Modeling functionality that vendors of Accounting-like software build into their products; in this article, however, we don’t do that yet.
  • In Spreadsheets, the user can model arbitrary cost models using cells and tables: complete freedom, but no auxiliary amenities to support large nested chain models (+/-). The main user tools remain cells and tables (whether row tables or cube-like tables). This means that in order to model “costly business processes”, it is necessary to create a chain of such cells/tables connected by sequential computational formulas. Spreadsheets are relatively convenient for this when processes have few steps or when each step fits into a row of a unified table. However, if a step requires a significant amount of descriptive information, such as physical coefficients, transfer pricing values, or links, we have to either create a separate table for each step or use additional descriptive tables while keeping the step chain within one unified table. Until recently, spreadsheets lacked (and still lack to some extent) tools for managing a chain of tables as a single “model”, even in simple aspects like navigation. Consequently, when cost chains are long, maintaining them becomes difficult and labor-intensive. Usually, every time we need to add or remove one of the links in the process, we need both to add/remove one of the tables and to change the formula chain.
  • In BI, the nested cost chain modeling experience is generally similar to Spreadsheets, but the maintainability of large models appears to be slightly higher (+/-). This is because some vendors provide UI tools for managing large models, including more convenient navigation across tables and calculations. In addition, visual interfaces for representing hierarchies are more developed. Otherwise, the experience is similar to that in Spreadsheets, with the remark that here you will mainly model a chain of row tables and the formulas between them (while in Spreadsheets you usually make extensive use of individual cells, row tables, and cube-like tables alike).
  • In EPM, the nested cost chain modeling experience generally follows that of BI (+/-). There can be some differences, particularly given that in EPM you build a chain of “cube-like” tables instead of the row tables used in BI; however, the overall experience appears to be similar.
  • Cost Chain Modeling software provides the MOST advanced capabilities for these types of tasks (+). This includes advanced hierarchy management specialized for cost chains, tools for handling long models (including advanced multi-versionality), the most advanced BoM management interfaces, additional interface tools such as dependent tables and window elements for navigating dependencies, and so on. However, the scope of functionality largely depends on the vendor and is often industry-specific.

[ 17 ] ‘What-If’ analysis

Although particular What-If analysis cases differ, they all revolve around the same main idea: comparing alternative models of business activities and observing how key cost indicators, such as total costs or the cost of a particular product, will differ.

Ultimately, the convenience of software for a specific What-If business case depends on a number of factors. The key ones are: the need to support several alternative versions of models simultaneously (or to iterate through values on the fly); the number of supported versions; the need to vary the number of nodes in models, which means changing the number of tables and/or computation stages; the frequency of changes to the model; and the need to maintain many alternative models (model versions) and make the same changes to them synchronously. However, we will not go deep into each of these factors here and will only give a very general overview.
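In its simplest form, What-If analysis boils down to running the same model under alternative parameter sets. A minimal sketch with a toy cost model and two hypothetical scenarios (real engines version entire models, data, and structures rather than just parameters):

```python
# A toy cost model; all names and figures are illustrative assumptions.
def total_cost(material_price: float, units: int, overhead: float) -> float:
    return material_price * units + overhead

scenarios = {
    "base":        {"material_price": 5.0, "units": 1000, "overhead": 2000.0},
    "cheaper_raw": {"material_price": 4.5, "units": 1000, "overhead": 2000.0},
}

for name, params in scenarios.items():   # compare the alternatives
    print(name, total_cost(**params))    # base 7000.0, cheaper_raw 6500.0
```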

  • In Accounting-like software, the What-If analysis experience is not developed (-). In many cases, the only way to see the differences is to make changes in primary documents and BoMs and rerun calculation procedures to observe the variations, which is not a convenient approach.
  • In Spreadsheets, What-If analysis offers great freedom (+), BUT it becomes inconvenient when dealing with multiple alternative scenarios simultaneously (-). While you can freely create any number of calculation steps and alternative scenarios, supporting each additional alternative scenario significantly increases the user’s effort.
  • In BI/EPM/Cost Chain Modeling engines, What-If analysis is supported (+). When comparing these engines, I would like to highlight the following arguments. Firstly, BI engines offer the most flexible and responsive forms. Secondly, EPM engines provide the most flexible capabilities for data entry and general-purpose manual data adjustments, which is particularly valuable in the context of cost management. Thirdly, the Cost Chain Modeling style places significant emphasis on the often overlooked task of ensuring the flexible interchangeability of data items within models, such as the interchangeability of product details in manufacturing processes. Regarding convenient tools for managing multiple versions, they can be found in all three types of engines, although the effectiveness of these tools may vary depending on the specific vendor. From a subjective standpoint, I would like to commend Cost Chain Modeling vendors for their positive approach to multi-versionality management.

[ 18 ] Advanced manipulation with hierarchy and aggregates

Here we focus very generally on whether the software allows us to manipulate aggregates beyond simply auto-summing the values of subordinate elements into them. In the future, this should be studied in more detail, since working with aggregates includes quite different cases: entering aggregated data when detailed data is not yet known, followed by clarification and itemization; dynamic switching of aggregate types (“sum”, “average”, “max”, “min”, etc.); and other cases.
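As a minimal illustration of dynamic aggregate-type switching, here is a pandas sketch with hypothetical data; specialized engines expose the same idea through the UI rather than code:

```python
import pandas as pd

df = pd.DataFrame({
    "cost_center": ["A", "A", "B", "B"],
    "amount": [100.0, 300.0, 50.0, 150.0],
})

# Switch the aggregate type dynamically instead of being locked to "sum".
for agg in ("sum", "mean", "max"):
    print(agg, df.groupby("cost_center")["amount"].agg(agg).to_dict())
```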

  • Accounting-like software is typically inflexible in this regard (-).
  • Spreadsheets, BI, EPM, and Cost Chain Modeling architectures typically provide advanced capabilities for manipulating aggregates (+).

[ 19 ] Data Mapping & Transformation

This is a very complex point, so I apologize for taking quite a long time over it.

The transformation functionality can be divided into at least two mechanisms:

  • Cataloging of “data mapping” rules. Such a mapping shows the matches between source and target data structures. Although we could work without such a catalog by writing all the rules directly in the transformation logic code, that approach is inconvenient for maintaining the rules in the future (and in business, changes to them are required quite often);
  • Transformation action logic.

Let’s start with considering the mapping rules catalog. A basic example of a convenient mechanism might look like this:

Picture 8. An abstract example of a Data Mapping table

It’s good if, when filling out these rules, the user doesn’t have to enter values (for example, “Salary”) as free text, but can select them as items from drop-down lists. Regarding physical storage, the rules should be stored using technical unique identifiers instead of names. Then, in the future, when catalog element names change, nothing breaks in the mapping and it still works smoothly.

In addition, further complications of the data mapping task are possible. For example, a user may configure mapping rules that depend on various matching conditions (=, <, >, LIKE, etc.) and/or on the values of nested attributes of attributes.

As for the data transformation logic, it should “be able to” read the configured mapping rules.
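A minimal sketch of why ID-based storage matters, with hypothetical catalogs and rules; real implementations add drop-down UIs, validations, and rule versioning on top:

```python
# Catalogs: technical ID -> current business name.
# Names may change over time; IDs never do.
source_items = {101: "Salary", 102: "Rent"}
target_items = {201: "Staff costs", 202: "Facility costs"}

# Mapping rules are stored by IDs only, never by names.
mapping_rules = [(101, 201), (102, 202)]

source_items[101] = "Wages & Salaries"      # a business rename happens

for src_id, tgt_id in mapping_rules:        # the rules still work smoothly
    print(source_items[src_id], "->", target_items[tgt_id])
```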

Although it is challenging to provide a comprehensive comparison from this angle, let’s discuss some detailed considerations for the various software styles.

An Accounting-like style of software architecture provides the best capabilities for Data Mapping cataloging (+), which is nevertheless rarely utilized by vendors (-), and relatively rigid data transformation processes for the end user (+/-).

  • If you are a vendor, I would recommend choosing a card-like UI for data mapping rules: with drop-down lists of available dimensions and values (at least for target data structures); with rules stored by technical element IDs but displayed in user interfaces under the current element names; with advanced validations; with the ability to version rules; and with the other mechanisms offered by the accounting-like design style.
  • However, if we consider the existing software products on the market made in this style, their mapping functionality is quite poor. This can largely be attributed to the way IT landscapes have developed: data mapping functions are usually used either between systems or at the level of analytical applications, so transactional software vendors, in principle, do not focus on this functionality. As an aside, it can be noted that some software, such as Data Management tools, applies a similar style to data mapping management functionality.
  • As for data transformation processes, accounting-like software typically supplies the user with a number of pre-programmed procedures with strict logic, while the end user’s ability to configure them is very limited. However, there are exceptions where individual vendors provide advanced Workflow configuration capabilities for the end user and allow data mapping rules to be connected to them. Such mechanisms are gradually developing, but I do not consider them classical, and they have still not become common for software of this type.

Spreadsheets provide the GREATEST freedom and flexibility in data transformation for a user (+); however, making large-volume data transformations reliable and manageable can be challenging (+/-).

  • Spreadsheets provide the greatest flexibility and functionality for data mapping. They seem to be the only software style where the user can implement practically ANY transformation rule.
  • Here it’s not necessary to program or run transformation processes, as the calculations are performed automatically once the formulas are set.
  • At the same time, there can be a lack of usability and reliability in the mechanisms for cataloging and managing transformation formulas/rules if the data-model-related capabilities haven’t been used to the full. When transformation rules are really complex and contain many nested conditions, we can run into formula overload and inconvenience in maintaining the rules. If we use element names in data mapping (for example, “Customer Success Department”), then renaming those elements starts to cause difficulties. The alternative is to use identifiers or system names that can serve as static, end-to-end, human-readable identifiers across the company’s various software systems (something like “001_Sales”). This provides greater reliability, but then, after transformation, we need to join the current names onto the transformed data from separate tables. However, as already mentioned, data-model-related tools help solve this problem.
  • Data transformation is implemented by default in an ELT way, which means we have to first load all the source data into a transformation spreadsheet and then transform the data inside the system. On the other hand, there may be additional connectors (for example, MS Power Query, which is not formally part of MS Excel spreadsheets) that allow us to implement ETL-like approaches.

BI provides relatively capable data transformation mechanisms (+)

  • In general, the mechanisms are relatively functional. The native style is ELT. Having the source and target data structures inside BI, we can typically create mapping tables in the UI, then create transformation algorithms (“formulas”), and then either instantly receive the result of the transformation or (where the vendor supports Workflow customization) set rules for triggering transformations.
  • If the transformation must be multi-criteria, i.e. the value of a target item must depend on several factors, then more code is usually required in the data transformation algorithm than with simple 1-to-1 mapping. Still, the complexity of transformation code in BI can be assessed as lower than in Spreadsheets used without tools for simulating a strict Data Model.
  • In addition to native ELT, vendors often provide specific tools for retrieving data in ETL style from external sources as part of their import capabilities. Incidentally, Microsoft offers the same Power Query for Power BI that is used with MS Excel.
  • I rate navigating data mapping rules and transformation formulas in BI as simpler and more convenient than in Spreadsheets.
  • BI tools may have the same problems with element IDs/names as Spreadsheets, but there can be special tools to solve them, so the user needs to pay attention to this.

EPM offers data transformation capabilities that are comparatively more constrained than those available in BI (+/-).

  • It looks like EPM follows BI in developing data mapping and transformation tools. This is largely because the BI market is one of the largest and sets the trends.
  • Regarding architectural differences, it is important that the main EPM interfaces, being “cube-like”, are poorly suited for mapping. This is why vendors often have to develop some basic implementation of row tables for these cases. Frequently, this is tied to data import capabilities, while the internal transformation of data between models within EPM can require more hardcoding than in BI.
  • Nevertheless, in the financial consolidation market, EPM technologies are sometimes considered core, which is justified by the advanced data entry capabilities that are very important in this area and that BI lacks (see Criterion [14]).
  • The ID-vs-names problem in data transformation can be found in EPM just as in Spreadsheets and BI, and the ease of solving it depends on the vendor.

Cost Chain Modeling software utilizes a range of data transformation tools (mostly ETL) that are added situationally based on specific needs.

  • In most cases, software of this type is not focused on flexible, universal cost data transformations. Vendors situationally develop tools that are generally similar to those in BI/EPM but can be somewhat more specific. Based on my information, most of them are related to data import functionality and follow the ETL approach.

[ 20 ] External reuse of data

Here we consider the ability to extract data processed in the system for use in external systems.

  • Accounting-like software, Spreadsheets, EPM, and Cost Chain Modeling engines offer conventional mechanisms for data reuse (+/-).
  • In the case of BI, more limitations can be found in the external reuse of data (+/-). As a vendor, you theoretically should not be limited when creating an API for your BI product (although you will face the same challenges as in EPM, associated with a flexible data model and the temptation to expose in-memory data). However, from a customer perspective, it is important that if you need to reuse data from BI, you are more likely to encounter limitations and workarounds. Perhaps this is due to the basic vision of “on-the-fly” data processing and the fact that BI is often considered an “on-top” element of the corporate architecture, meaning that external data reuse is not a typical scenario. However, today the situation is changing somewhat (which also makes it harder to analyze) because, as mentioned above, many vendors are merging their classic BI functionality with cloud data engineering and pipeline orchestration ecosystems, where reusability problems are usually less pronounced.

Ultimately, pay close attention to this criterion and, if you need to reuse data from a software system in other systems, carefully consider the capabilities of a particular solution when choosing it.

[ … ] Some additional considerations for the comparison

A few words about other differentiating characteristics that were not considered above.

  • Alternative names for all data types are important. We already mentioned this above. The user can work with data structures and their elements (including selecting values from drop-down lists) while seeing their current “business” names. At the same time, business names may change over time, yet nothing changes in either the data cards or the system settings, since in the database all structures and the connections between them are stored using technical identifiers. How well this is solved depends on the vendor. In the accounting-like architectural style, programmers usually (though not always) provide a convenient solution right away. In the other cases, it depends more on the vendor. This criterion should be kept in mind when analyzing the architectural style, when analyzing a specific software product, and when configuring it.
  • Spreadsheets classically have the poorest security-related capabilities (-). In particular, you can’t implement row-level security in a general way because of the specifics of the fundamental data model.
  • Spreadsheets poorly support collaborative work and workflow capabilities (-). However, some workflow mechanisms are developing, for example, within the integration with Power Automate in the Microsoft environment.
  • BI products often contain built-in tools for working with many external data stores (+), so BI can be used as an ad-hoc interface for viewing data from raw databases, including DWH. In general, BI usually differs from the other types of software in that vendors focus on greater capabilities for collecting data from external sources.
  • At the same time, the principle “What goes into BI, stays in BI” still often applies, that is, the poor data reuse discussed in Criterion [20]. Therefore, in some cases, corporate data moves from a level where standardized, cross-vendor data storage and communication formats and open-source technologies are used (which generally ensures both good data reusability and migratability) into the relatively more “closed” and vendor-specific level of BI.
  • However, regarding cross-vendor migration, one can reasonably say that the problems apply to almost all technologies to a large extent. Indeed, when you want to change vendors, you will more likely need a complete re-implementation than even a partially simplified migration, whether for enterprise-grade interface software types such as Accounting-like, BI, EPM, and Cost Chain Modeling, or for more fundamental technologies, for example, “cube-like data storage” (say, switching from Oracle Essbase to IBM TM1 or back). As for migrating code between open-source backend programming languages, say between Python and Java, that is also difficult, although some supporting mechanisms now exist. Finally, SQL-supporting databases and Spreadsheets (where there is some ability to copy both data and formulas between different vendors’ products) stand out to some extent in this regard. I would say that cross-vendor standardization for these two classes of technologies is among the most developed today.
  • The total cost of processing one unit of data appears to be: Spreadsheets < BI < EPM < Cost Chain Modeling software. Spreadsheets are actively used as an intermediate “transit” tool for processing data received from various systems and transferred to others. BI systems, although to a lesser extent, are also used in everyday work as universal, convenient tools for quickly connecting to certain data sets and quickly processing them. In contrast, EPM engines are typically implemented for specific tasks, and the handling of the relevant data is introduced into them more strictly and purposefully through an implementation project. Finally, Cost Chain Modeling mechanisms appear to be the most specific of the four types of software.
  • Cost Chain Modeling engines have a pronounced overlap with other software functionality. On the one hand, What-If analysis of cost data is closely related to the simulation of the underlying physical processes, which is covered by Manufacturing/Logistics Simulation and Process Management software (not a subject of this article). On the other hand, capabilities such as strategic cost calculations that include non-manufacturing overheads, and the preparation of profitability and ROI reports, are closely related to EPM functionality. These considerations may explain the difficulty of implementing Cost Chain Modeling tools and appear important when analyzing past and future trends in their market development.

Chapter 4. INTEGRATION PROBLEMS AND THE FUTURE

4.1. Software products to styles: from ‘many-to-one’ to ‘many-to-many’

Now I will risk sharing some thoughts that I usually keep to myself.
I already mentioned in the disclaimers that the boundaries of a product are not equal to the boundaries of technological styles. Now I want to develop this idea.
A typical example is Pivot in Excel. What if I suggested considering it a BI component instead of a spreadsheet component? Look at it: it replicates the reporting engine that is the basis of BI systems (it generates the report dynamically, the report is dimension-structured, updating is not smooth by default, it is read-only by default, and individual cell filling/editing is restricted so that you can only add dimension-based rules). All of this differs from the basic features of Spreadsheets.
But this is just food for thought.

4.2. Problem of integration

As is clear from the above, no type of software is perfect or all-encompassing. Therefore, when designing your IT landscape, you need to integrate systems/components of different types.
In theory, it seems very tempting to allocate each type of task to the most appropriate type of software.
However, here we are faced with the problem of integration. We already partially touched on it when we spoke about the problem of migration. Even simple catalogs (lists) may require different implementations in different software products, whether these are products of different styles or from different vendors. But when automating a business domain as complicated as Cost Accounting, we deal not only with static structures. We also deal with sequences of processes that occur in the real world, and such processes can be unique each time they occur. Thus, our financial model describes a certain “story” from a real business, and with each integration, we copy a certain piece of this “story” between the software products, while the same piece of the story may require a different implementation in each product. Almost everything can differ: the number of dimensions we will ultimately need, the types of metadata we will have to use in different parts of the story, the composition of the master data, the set of formulas, etc.
Then, when the story of our business changes (and it changes constantly), we must support those changes by editing the implemented model in each software solution. Moreover, there are many more such edits than might seem at first glance when choosing the software. We have to edit data structures, calculation rules, and the data references in our formulas, all multiplied by the number of software products.
At the same time, implementing centralized software systems, for example Data Management, fundamentally cannot solve the problem, since the elements of the “language” in which the business story must be told include significantly more data structures than just master data.

4.3. Forecasts, part I: The future of the reviewed software styles

In conclusion, let’s discuss the possible future reshaping of the software types.

Here we look at the first phase, in which the vendors of each software style, having identified their gaps in certain technological capabilities, begin trying to fill those gaps. In effect, they introduce additional mechanisms that are not a common characteristic of their own style but in many cases have already been implemented in others.
Many of the trends of this phase are already being implemented.

The Data Model will be further developed in software products of all styles (especially in Spreadsheets), and the formula language will therefore be simplified.

  • The need for LOOKUP/JOIN-like constructions will be minimized in most cases.
  • We will be able to use shortcuts like “= Record.Attribute.Attribute” in formulas to access nested attributes without restrictions (see the sketch after this list).
  • We will be able to support many alternative names for both data structures and records. Moreover, each user will be able to choose which of the names will be displayed to them in formulas. Thus, one user will be able to type/see formulas with one set of names, while another will type/see them with a different set. Consequently, the use of technical system names in the UI will become a thing of the past, and both the data model and formulas will be expressed entirely in business terms.
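A minimal sketch of the first two points, with hypothetical record types: once the data model knows the relationships between records, nested attribute access replaces LOOKUP/JOIN-like constructions:

```python
from dataclasses import dataclass

# Hypothetical records; in a real product these would be data model entities.
@dataclass
class Department:
    name: str
    manager: str

@dataclass
class Employee:
    name: str
    department: Department

it_dept = Department(name="IT", manager="Alice")
bob = Employee(name="Bob", department=it_dept)

# Instead of VLOOKUP(employee, departments, ...), a formula simply reads:
print(bob.department.manager)  # "Alice"
```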

The number of supported data structure types will grow in all the software styles

  • In BI, EPM, and probably even in Spreadsheets, card-like interfaces for presenting complex “objects” can evolve.
  • In Spreadsheets, advanced support for visualizing hierarchies will be added.
  • In EPM, row-table management functionality will be added in a universal way (not just for import, as is usually the case now), and unique identifiers for all data types will be natively supported.
  • In Cost Chain Modeling software, support for multi-dimensional (cube-like) data structures may be further developed.
  • Support for rare data types such as attached images and links will be added to potentially all types of software.

In all systems, tools for simplified management of the calculational models (such as a cost model) will continue to evolve.

We will be able to package a chain of data structures (tables, cells), along with the formulas built into them, into directories that provide simplified navigation, batch changes, batch management of access rights, etc.

Cartesian product support will be added to the technologies that don’t yet have it.

Primarily, this forecast relates to EPM systems, where it can be added after the native implementation of row tables.

“Indicators” will be transformed into explicit objects, allowing programmers and users to create them once and enabling automatic, smooth, real-time recalculation when the inputs change.

  • First, in all types of architectures, business “indicators” and calculated fields will be turned into objects, as master data is now, for example. They will be catalogable, navigable, referenceable, linkable, suitable for alternative names, etc.
  • Second, in all types of architectures (especially Transactional-like), smooth automatic real-time recalculation will develop (as exists now in Spreadsheets).
  • Then, as a programmer and/or user, we will create an indicator in the system (for example, “Balance of available payment limits in the current month”) and enter a calculation formula for it once. After that, the system itself will constantly recalculate the indicator’s value, automatically tracking every change in every influencing transaction/indicator. That is, we won’t have to program a trigger for each of these transactions and their compensations (cancellations). Deep manual configuration will remain available but will be necessary only in cases requiring high performance. A toy sketch of the idea follows this list.
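A minimal sketch of an “indicator as an object” with automatic recalculation, assuming a toy in-memory model; all class and indicator names are hypothetical, and production engines would add persistence, dependency graphs, and performance tuning:

```python
class Cell:
    """A primitive input value that notifies its subscribers on change."""
    def __init__(self, value):
        self.value, self.subscribers = value, []

    def set(self, value):
        self.value = value
        for sub in self.subscribers:   # every change triggers recalculation
            sub.recalculate()

class Indicator:
    """Defined once with a formula; recalculated automatically afterwards."""
    def __init__(self, name, formula, inputs):
        self.name, self.formula, self.inputs = name, formula, inputs
        self.subscribers = []
        for src in inputs:             # subscribe to every influencing value
            src.subscribers.append(self)
        self.recalculate()

    def recalculate(self):
        self.value = self.formula(*[src.value for src in self.inputs])
        for sub in self.subscribers:   # propagate to dependent indicators
            sub.recalculate()

limit = Cell(1000.0)
spent = Cell(250.0)
balance = Indicator("Available limit", lambda l, s: l - s, [limit, spent])
print(balance.value)  # 750.0
spent.set(400.0)      # no hand-written trigger needed
print(balance.value)  # 600.0, recalculated automatically
```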

In other words, systems that were not considered OLAP will adopt some OLAP features. We could say that some of the capabilities of “functional databases”, the theory behind classic cube-like OLAP databases (e.g. IBM TM1), may be added to all architectural styles.

4.4. Forecasts, part II: Cross-systems trends and the future of ‘composable ERP’ (a possible projection)

Here we will consider only one of the possible paths for architectures to develop after the completion of the previous trends.

  1. From software focused on UI (such as Spreadsheets, BI, EPM, Costing, HRMS, CRM, SRM systems, etc., and maybe even integrated products like ERP or PLM), long-term data storage functions will gradually be phased out. Instead, long-term-significant data storage will be consolidated into external Shared Data Layers that can be used across multiple applications from different vendors. This will differ significantly from current Data Management practices, where a centralized data layer is built on top of operational systems. Initially, simple master data (for example, product lists, cost center lists, etc.) will move to Shared Data Layers; in the future, even formulas may move.
  2. UI-oriented apps will retain hot storage functionality (like a cache). In this way, the UI product will retrieve data from the Shared Data Layer and write to it. Long-term data storage within the UI-focused apps will only be important in individual cases, but not as a general rule.
  3. The Shared Data Layer can consist of at least two technologies, Database and Data Model Engine. The second one will manage the shared Virtual Domain Data Model and ensure its automatic consistency with the physical data model. It is possible that a combination of a Database from one vendor and a Data Model Engine from another vendor may be used.
  4. Since the Shared Data Layer should provide a reusable Virtual Domain Data Model for multiple applications, it will normally support not only relational data structures but also others (object-like, multi-dimensional, columnar-based, and so on). This trend is already partially realized in the sense that some cloud data platform vendors provide multi-structure databases.
  5. Of course, in order for all of the above to become a reality, universal integration protocols between Data Model Engines, Databases, and UI-focused apps (BI, EPM, ERP, PLM, etc.) will need to be developed and advanced.
  6. It can’t be ruled out that the engines for executing calculations will be separated as well. In such a scenario, these engines could be non-UI tools that are connectable to the Shared Data Layer. In this case, significant long-term calculations will not be performed within any of the UI-supported apps. Instead, these applications will create data structures and formulas, place them in the Shared Data Layer, and a specialized calculation engine will connect to it and perform a series of calculations in a customized order.
  7. It can’t be ruled out that relatively large classes of UI-focused systems (like BI or EPM, not to mention ERP or PLM) will be decomposed into micro-functional products, both horizontally and vertically. As a result, as users, instead of having consolidated BI and EPM software products, we may have multiple separate micro-functional products for narrow tasks such as pivoting, data mapping, creating indicators and formulas, and so on. Moreover, each of these products may come from a different vendor. This decomposition allows for greater flexibility and customization, as users can choose and integrate the specific micro-functional products that best fit their needs from various vendors.
  8. Then, the final enterprise solutions will be composable, and assembled from Databases, Data Engines, and UI-focused micro-products (both horizontal and vertical). Moreover, it is possible that the relationship between these components may follow a “many-to-many” principle. This situation in the market will increase competition among vendors and, at the same time, enhance the flexibility of building Enterprise Architectures. Organizations will have the freedom to choose and combine components from different vendors, enabling them to tailor their architecture to meet specific business requirements and leverage the strengths of various solutions.

Thus, for example, all the components shown in the picture could be separate software products:

Picture 9. One of the possible ways of software product division in the future

However, it is crucial to note that we are NOT talking about enterprise-wide singularity of each component, including Databases and Data Model Engines, which would lead us to a monolithic architecture. No, the intention is not to have each component deployed exclusively on an enterprise scale. Architects will have greater freedom of action. For instance, they can create a completely separate solution (Shared Data Layer + multiple UI-focused apps) for the HR domain, another separate solution for the Logistics domain, another for the Finance domain, and so on. They can introduce a single Shared Data Layer for several domains and then add different apps. They can implement a shared Budgeting solution that will be used for budgeting across all domains (Logistics, HR, Finance, and others), while each domain can also utilize other apps and/or Shared Data Layers for other operational tasks. By the way, this path would be a bit like Data Mesh, only with the overall data model placed “under” the UI-oriented applications rather than “above” them.

As noted above, these are all just suppositions intended to get us thinking. What lies ahead, we shall see.

If you have any wishes, ideas, contributions, or collaboration proposals, regardless of your specific experience, I’d be glad to hear from you:

linkedin.com/in/andi-paneof
