Requirements and the Beast of Complexity


Part 3 – The Decision Centric Analysis Approach

Decisioning

Defn: The discrete and systematic discovery, definition, assembly and execution of decisions.

We have asserted that decisions are first-order citizens of the requirements domain. To provide a conceptual basis for this, let us go back to the foundations of any requirements analysis.

A business derives value from changing the state of some object or 'thing' that the business values (usually literally, on its balance sheet). This focuses us on the very core of the business – what is traded, managed, leveraged, used, built, or sold in order to achieve the objectives of the business. Until something happens to the object, the purpose cannot be achieved and no value can be derived; therefore, in order to generate value, we must have a change in the state of the object. Changing state implies that there is activity against an object, and this observation gives rise to the traditional process (activity) and data (object) approaches. But if we look closely at any process causing state change, we will see that the change is always the result of a decision within the process rather than the process itself. Many processes can execute against the object, but value is only derived when a state change occurs. Whenever state change occurs, the process can be shown to be a container for decisions – the state change is a result of decisions made rather than an inherent characteristic of a process.

This confusion between decision and process is a systemic failure in today’s methodologies and hides the importance of decisioning. A process is the glue that links decisions to the events that initiate them; and in doing so it provides a mechanism for supplying data to the decisions, and for reacting to the decisions made. If these pre- and post-decision activities are wrapped together with the decision logic itself, then we have a complete end-to-end process that takes the business from one valid state to another, and which generates value in doing so. But it is clear that the entire process is built around the decisioning kernel.
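As an illustration of this structure, the following is a minimal sketch (in Python) of a process acting as the glue around a decisioning kernel. The function names, attributes and thresholds are hypothetical and do not describe any particular product or method.

    # Hypothetical sketch: a process supplies data to the decision kernel and
    # then reacts to the decision made. Names and thresholds are illustrative only.

    def fetch_policy_facts(request):
        # Pre-decision activity: assemble the facts the decision needs.
        return {"sum_insured": request["sum_insured"],
                "prior_claims": request.get("prior_claims", 0)}

    def decide_policy_approval(facts):
        # The decisioning kernel: resolve the facts into a single outcome.
        if facts["sum_insured"] <= 500_000 and facts["prior_claims"] <= 2:
            return "Approved"
        return "Referred"

    def approve_policy_process(request):
        facts = fetch_policy_facts(request)        # supply data to the decision
        outcome = decide_policy_approval(facts)    # make the decision
        if outcome == "Approved":                  # react to the decision made
            print("Issue policy", request["id"])
        else:
            print("Refer policy", request["id"], "for manual review")
        return outcome

    approve_policy_process({"id": "POL-001", "sum_insured": 250_000, "prior_claims": 1})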

Decisions

Defn: A proprietary datum derived by interpreting relevant facts according to structured business knowledge.

Decisions are the critical mechanisms by which we choose one action from the set of all possible actions. The purpose of a decision is to select the action that is most beneficial to the decision maker in a given situation.

A decision is ‘made’ when we resolve the available facts into a single definitive outcome; a decision cannot be multi-valued, ambiguous or tentative. A decision results from applying discipline, knowledge and experience to the facts that describe the decision context. This application of discipline, knowledge and experience within decision-making is the characteristic that most closely defines the unique character of any business. Because of this fundamental truth, decision-making behavior is the only truly proprietary artifact in any system specification – most other artifacts can be inferred from industry practice, and do not specifically differentiate one business from another.

Businesses do not make decisions merely because they have available data; nor do they act without making a decision. Decisions are clearly identifiable at the heart of any automated system’s response, giving purpose to the data and direction to the process. If decision centric analysis is used to rigorously identify and describe decision-making behavior prior to systems development, then it can also be used to drive the discovery of data and its relevance to the business – it is the need for the decision that is the primary driver of the need for the data. Decisioning is therefore a primary data analysis tool, and a precursor to formal data modeling. When data analysis is driven by Decision Modeling, it gives rise to concise and provably relevant data models. And because decisions also predicate process responses, decisioning implicitly drives process definition as well.

The starting point

The business derives value or achieves its purpose by changing the state of primary business objects (or things), whether they be insurance policies, loan accounts, patients, trains, or any other object of value (perhaps even Decision Models!). These primary business objects are usually self-evident, and a cursory review of business strategy documentation will highlight and clarify any ambiguities. But if the primary objects cannot be defined quickly (minutes, not days) and with certainty, the business should not be building a system – it has much bigger issues to worry about!

A primary business object is by definition both tangible and discrete; therefore, it can be uniquely identified. Also by definition, it will be a source of value for the business and so the business will usually assign its own unique identifier to it – its first data attribute and the beginning of our Fact Model. In fact, a useful way to find these objects is to look for proprietary and unique identifiers assigned by the business (even in manual systems). Because it exists and generates value, external parties may also be involved and cause us to add specific additional non-discretionary attributes for interface or compliance purposes (e.g. a registration number or unique address), and we might also add other identifying attributes to ensure that the primary object can be uniquely found using ‘real world’ data. So there is a small set of non-discretionary data that can be found and modeled simply because the primary object exists; this set of data is generic and will be more or less common across all like businesses. We can think of this as ‘plumbing’ – it cannot be a source of business differentiation.
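By way of a hedged example, the sketch below (in Python) models this small set of non-discretionary data for an insurance policy; the attribute names are illustrative assumptions, not a prescription.

    # Hypothetical sketch of the non-discretionary 'plumbing' data that exists
    # simply because the primary business object exists.

    from dataclasses import dataclass

    @dataclass
    class InsurancePolicy:
        policy_number: str        # proprietary identifier assigned by the business
        registration_number: str  # non-discretionary attribute for compliance/interface
        insured_name: str         # identifying attributes so the object can be
        insured_address: str      # uniquely found using 'real world' data

    policy = InsurancePolicy("POL-001", "REG-77-1234", "A. Customer", "1 Example Street")
    print(policy.policy_number)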

So what other data is there that will allow us to differentiate our business? The amount of data that we could collect on any given topic is boundless – how do we determine what additional data is relevant? Decisioning! How we make decisions is the key differentiator of any business – how we decide to accept customers, approve sales, price sales, agree terms and conditions, etc. The important additional data will now be found because it is needed for decision making. We are now ready to do decision analysis; that is, after the initial strategic scoping, and prior to either detailed data or process analysis.

Decision Discovery

Decision analysis is both simple and intuitive because this is the primary activity of the business – this is what the business knows best. A decision is a single atomic value – we cannot have half a decision any more than we can have half an attribute. So we are looking for explicit single valued outcomes, each of which will be captured as a new data attribute. Let’s start with our business object (say, ‘insurance policy’), and with a state change that will cause a change in the value of that object (say, ‘approve insurance policy’). If we have a documented set of governance policies that describe the business intent, we will interrogate them looking for noun phrases, and related assertions and conditions. Noun phrases can be interpreted directly into the Fact Model. The assertions and conditions will be reduced to a set of operations and declared as decision logic.
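To illustrate this reduction, consider a hypothetical governance statement such as ‘a policy may be approved only if the sum insured does not exceed the authorized limit for the sales channel’. In the sketch below (a hedged example with invented attribute names and limits), the noun phrases become Fact Model attributes and the condition becomes decision logic whose single-valued outcome is captured as a new attribute.

    # Hypothetical governance statement reduced to decision logic.
    # Noun phrases (sum insured, sales channel, authorized limit) -> Fact Model
    # attributes; the condition -> a decision with one definitive outcome.

    AUTHORIZED_LIMITS = {"broker": 1_000_000, "direct": 500_000}  # illustrative limits

    def decide_approval_status(sum_insured, sales_channel):
        limit = AUTHORIZED_LIMITS.get(sales_channel, 0)
        return "Approved" if sum_insured <= limit else "Declined"

    print(decide_approval_status(250_000, "direct"))  # Approved
    print(decide_approval_status(750_000, "direct"))  # Declined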

We should also consult the domain expert within the business – this is the person charged with responsibility for strategy-directed decision-making, and who can readily answer questions like the following:

“What decisions do you make in order to … approve this insurance policy?”

There is a pattern to most commercial decision making that helps structure the decision discovery process (see Figure 3).

Figure 3

The green squares are the most important in the cycle. We can start analysis with the first of the primary (green) decision components (‘Authorization or Acceptance’ in Figure 3) – these are the ‘will I or won’t I’ decisions. For our insurance policy, ‘will I or won’t I’ accept this risk? (for other problem domains simply replace the business object and state as required; e.g. for a hospital, ‘will I or won’t I admit this patient?’ etc.). Determining this decision may identify many precursor decisions that need to be considered first. For instance, for our underwriting question we might need to ask:

  • What is the inherent risk of the object?

  • What is the customer risk?

  • What is the geographic risk?

These decisions in turn may give rise to further decisions so that we develop a tree of decisions – this is the Decision Model for authorization. Now we can move on to the next in the primary class of decisions: at what price? (or cost if a cost centre) – see ‘Pricing or Costing’ (Figure 3). In this case, the question is ‘what decisions do you make to determine the price... of this risk?’ Again, this may result in a tree of decisions (for instance, pricing based on various optional covers, pricing for the different elements of risk, channel pricing, campaigns and packages, etc.). Following ‘at what price?’ we can repeat the process for the ‘Terms and Conditions’ and then the other pre- and post-decisions (a sketch of the authorization decision tree follows the lists below):

  • Pre-Processing Check: Do I have sufficient information to start decision making?

  • Context Based Validation: Is the supplied data valid?

  • Enrichment: What further data can I derive to assist with the primary decision making?

  • Product or Process Selection: Do I need to determine one decision path from other possible paths for the primary decision making?

And after the primary decision making...

  • Workflow: Now that I have made my primary decisions, what do I need to do next?

  • Post-Processing: Am I finished and complete without errors? Are my decisions within acceptable boundaries?
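The sketch below (in Python) illustrates the resulting tree of decisions for the authorization question; the precursor decisions follow the risk questions listed above, and all thresholds, categories and names are invented for illustration.

    # Hypothetical sketch of the authorization decision tree: the primary
    # 'will I or won't I' decision rests on precursor risk decisions.

    def decide_object_risk(facts):
        return "high" if facts["property_age"] > 80 else "low"

    def decide_customer_risk(facts):
        return "high" if facts["prior_claims"] > 2 else "low"

    def decide_geographic_risk(facts):
        return "high" if facts["postcode"] in {"9998", "9999"} else "low"

    def decide_accept_risk(facts):
        # The primary 'Authorization or Acceptance' decision: one single-valued
        # outcome derived from the precursor decisions.
        risks = [decide_object_risk(facts),
                 decide_customer_risk(facts),
                 decide_geographic_risk(facts)]
        return "Accept" if "high" not in risks else "Refer"

    print(decide_accept_risk({"property_age": 35, "prior_claims": 1, "postcode": "1010"}))  # Accept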

Normalization

Data normalization is a semi-rigorous method for modeling the relationships between atomic data. It is ‘semi-rigorous’ because normalization is a rigorous process that is dependent on several very non-rigorous inputs, including:

  • Scope: Determines what is relevant – we don’t normalize what is not relevant;

  • Context: Determines how we view (and therefore how we define) each datum;

  • Definition of each datum: This is particularly subjective yet drives the normalization process.

Data normalization is based on making a number of subjective assertions about the meaning and relevance of data, and then using the normal forms to organize those assertions into a single coherent model. Normalization with regard to decisions is similar. Each decision derives exactly one datum, and is ‘atomic’ in the same way that the datum is. Similarly, each decision has relationships to the other decisions in the model. The decisions are related by both sequence and context. In this regard, context plays a similar role to the ‘primary key’ in data normalization. Some of the inter-decision relationships within a model include the following (a simple sketch of these relationships appears after the list):

  • Each decision has a context that is derived from the normalized placement of its output datum.

  • Each decision definition may precede and/or follow exactly one other decision definition.

  • Each decision may belong to a group of decisions that share the same context.

  • In a direct parallel of 4th normal form, decisions that share a context but are otherwise unlike should be grouped separately.

  • A group of decisions itself has sequence and may also be grouped with other decisions and groups according to shared context.
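A simple sketch of how these relationships might be recorded is shown below; the structures and names are assumptions made for illustration, not a defined notation.

    # Hypothetical sketch: each decision has a context (the normalized placement
    # of its output datum) and at most one preceding decision; decisions that
    # share a context may be grouped, and a group itself has sequence.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Decision:
        name: str
        context: str                    # normalized placement of the output datum
        follows: Optional[str] = None   # at most one preceding decision definition

    @dataclass
    class DecisionGroup:
        context: str                    # shared context of the grouped decisions
        sequence: int                   # a group itself has sequence
        decisions: List[Decision] = field(default_factory=list)

    risk_group = DecisionGroup(context="policy.risk", sequence=1)
    risk_group.decisions.append(Decision("object_risk", "policy.risk"))
    risk_group.decisions.append(Decision("customer_risk", "policy.risk", follows="object_risk"))
    print(len(risk_group.decisions))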

Decision Driven Data Design

We have suggested that Decision and Fact Models can both evolve from strategy-directed decision analysis. Following the initial discovery and elaboration of the primary business objects, we worked through the decision discovery process by analyzing the strategy-defined policies and any identifiable business intent, which in turn identified new data attributes (the decision outputs). If we locate these decisions around the data constructs in the Fact Model, we can build an integrated decision/data model as shown in Figure 4.

Figure 4

Following the discovery of the decisions, we can then elaborate them with formulas. Formulas provide additional detail regarding the consumption of data by decisions, thereby driving further demand for data. If the system cannot provide this data, then by definition the business cannot make the decision and the business objective of the model cannot be achieved. In this way, decisioning can be shown to drive the demand for data. The interaction between data and decisions helps the normalization and modeling processes for both. This combined model is self-validating because, when complete, all data should be produced and/or consumed by one or more decisions, and all decisions must have context within, and consume data from, the Fact Model. This helps to ensure the overall rigor of the analysis.
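The sketch below gives a hedged illustration of this self-validation: it cross-checks a small, invented Fact Model against a small, invented set of decisions to confirm that every fact is produced and/or consumed and that every decision consumes only facts that exist.

    # Hypothetical self-validation check over an illustrative Fact Model and
    # decision set.

    fact_model = {"sum_insured", "sales_channel", "authorized_limit", "approval_status"}

    decisions = {
        "decide_authorized_limit": {"consumes": {"sales_channel"},
                                    "produces": "authorized_limit"},
        "decide_approval_status": {"consumes": {"sum_insured", "authorized_limit"},
                                   "produces": "approval_status"},
    }

    used = set()
    for name, d in decisions.items():
        missing = d["consumes"] - fact_model
        assert not missing, f"{name} consumes facts missing from the Fact Model: {missing}"
        used |= d["consumes"] | {d["produces"]}

    unused = fact_model - used
    print("Facts neither produced nor consumed by any decision:", unused or "none")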

Note that the Fact/Data Models used in Decision Modeling are subsets of the domain data model – they need to contain only the data required by the decisioning that is currently in focus. They are, in effect, normalized models of the value-creating transactions rather than a model of the business domain. As noted in Part 2, the XML Schema standard is useful for describing normalized transactional data. The set of Fact Models converges to define the primordial data model that will underpin all further project related data analysis. It can be easily overlaid onto the business object model to synthesize a more complete domain model that can then be further extended to include all of the ‘plumbing’ requirements (e.g. security, users, history, logs, etc.). The resultant model will only contain the data actually required by the business – there is no second guessing of the data requirements as often occurs in a traditional data analysis approach, with significant and positive implications for development cost, risk and time.

Decision Driven Process Design

Decisioning also drives process. For a decision to execute, the Decision Model must be supplied with its Fact Model by a process. This data, and therefore the process, cannot be defined until the decision requirements are known. Then, for the Decision Model to have any effect, a process must enact some response to the decisions made. While it is possible to define and build processes in anticipation of the decisioning that will drive them, it is sensible to analyze the decisioning in order to determine the range of inputs and outcomes, and then to normalize the process responses to them. Again this has a positive effect on development cost and complexity.
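As a hedged sketch of what ‘normalizing the process responses’ might look like in practice, the example below maps each possible decision outcome to exactly one process response; the outcome names and response functions are invented for illustration.

    # Hypothetical sketch: the decision outcomes are determined first, and the
    # process is then a normalized mapping from each outcome to one response.

    def issue_policy(policy_id):
        print("Issue documents for", policy_id)

    def refer_to_underwriter(policy_id):
        print("Queue", policy_id, "for manual underwriting")

    def decline_policy(policy_id):
        print("Send decline notice for", policy_id)

    RESPONSES = {
        "Accept": issue_policy,
        "Refer": refer_to_underwriter,
        "Decline": decline_policy,
    }

    def respond_to_decision(outcome, policy_id):
        RESPONSES[outcome](policy_id)   # one normalized response per outcome

    respond_to_decision("Refer", "POL-001")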

Bespoke discretionary processes do not occur in a vacuum. Such processes should only exist to support value creation for the business – as described by the decision analysis. Regulation and industry practice may require additional processes, but these are non-discretionary by definition and may not add value. Bespoke processes that exist without this fundamental decision-driven requirement are not good subjects for primary analysis – certainly their mere existence does not make the process requirement a necessity, and they should always be considered candidates for removal or re-engineering. There may be many process options for supplying data and responding to decisions made. In particular we should look for opportunities for direct integration with external systems to create integrated industry solutions. This process re-engineering opportunity offers significant value, but may be missed if analysis of existing processes is conducted as the primary analysis. For this reason we do not consider processes to be attractive first order requirements artifacts – they are in fact at the tail-end of the analysis chain, the glue that binds events, data, and decision making to a platform and its devices.

Handover and the SDLC

We can achieve a verified and tested Decision Model, and its integrated and co-dependent Fact Models, with relatively modest effort – often only a fraction of the cost of traditional approaches. All technology architecture and design options remain open. We have, in fact, defined and constructed a ‘requirements model’ of the core functionality of the system from the business perspective without constraining the technology options.

Further, this ‘requirements model’ is testable and can be proven to be complete, consistent and correct according to a candidate set of business test cases. Even better, we can retain the separate identity of this critical requirement over the long term, even across multiple system implementations – there need never be another legacy system. Using a biological analogy, this is the “DNA” that defines how the organization is operated irrespective of how it is implemented. Implementation is now of little interest to those business users who are responsible for decisioning – we have a clear and well defined handover point to begin the traditional development cycle. It is feasible, even desirable, to simply hand over the Decision Models and Fact Models to systems designers as their SDLC starting point.
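A minimal sketch of what such testing might look like is shown below: candidate business test cases are replayed directly against the decision logic, independently of any implementation platform. The decision function and the test cases are invented for illustration.

    # Hypothetical sketch: business test cases replayed against the decision
    # logic of the 'requirements model'.

    def decide_approval_status(sum_insured, authorized_limit):
        return "Approved" if sum_insured <= authorized_limit else "Declined"

    business_test_cases = [
        ({"sum_insured": 250_000, "authorized_limit": 500_000}, "Approved"),
        ({"sum_insured": 750_000, "authorized_limit": 500_000}, "Declined"),
    ]

    for facts, expected in business_test_cases:
        assert decide_approval_status(**facts) == expected
    print("All business test cases passed")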

It is worth re-emphasizing a critical point: Decision Models live outside of any project or process – decision analysis precedes the SDLC entirely. By definition the subsequent system design must be able to supply and receive data that complies with the decisioning schemas. And the system must provide appropriate process responses to the decisions made. While these processes remain undefined at this stage, their analysis is secondary and is tightly bounded by the decision design that precedes it, which is in turn secondary to the specification of business intent as described by business strategy. Therefore, process analysis has been de-risked, and at the same time, it offers rich opportunities for business process re-engineering. The traditional SDLC approaches can then be used to design, build and/or reuse various software components (the plumbing) as appropriate to support the decisioning requirements. It is the developer’s task to provide an infrastructure within which the decisioning can occur, as shown in Figure 5.

Figure 5.

Conclusion

The decision-centric development approach represents a significant advance on traditional development methodologies. It focuses on a ‘missing link’ between business strategy and operational systems – the Decision Model. The Decision Model is a new and important requirements artifact that is accessible to and understood by both business and development practitioners, giving rise to a previously unobtainable level of shared understanding that can help bridge the gap between business strategies and the systems that support them. The Decision Model gives the business 'hands-on' control over the definition and implementation of its most critical IP – its decision-making know-how.

Author: Mark Norton has more than 30 years development experience, mostly on enterprise scale, mission critical systems in finance, insurance, government and health administration. In 2001 Mark established Idiom Ltd. to develop and market tools and techniques in support of the decisioning concept. Through joint project participation with integration partners, Mark has personally developed and tested these tools and techniques on dozens of projects, delivering the pragmatic and conceptually sound decision centric development approach described in this article.

Contact: +64 21 434669
[email protected]
www.idiomsoftware.com


 
