Organisations are quick to see tools as the cure for business transformation, business insight, business analytics and operational challenges. The belief that tools are a miracle cure has grown substantially with the ease of developing and deploying applications on smart mobile devices. In the rush to tooling, organisations also tend to overlook the root cause of the problem they are trying to solve.
Organisations rush into investing in band-aid solutions with the justification that doing something is better than doing nothing. In doing so, business insight and guidance rely mainly on simplistic analysis: reporting averages, tallies [i.e. bar charts] and changes in the percentage of “items” over time. Such “bad” evidence leads people, whether at C-level or at the lowest levels of the operation, to make the wrong decisions because operational dynamics and measurable business objectives are out of sight.
This blog focuses on selected essential actions that any organisation should take to avoid making the wrong decisions while mistakenly believing that it is using “evidence” [instead of opinions] backed by proper measurement and analysis methods. The reality is that the “evidence” at hand is misleading.
Action 1: Hire the Right People with the Right Skills
When organisations become interested in exploring business insight, the first common mistake is to launch a raft of surveys or, if the organisation is large enough, to appoint change managers, program and project managers and a suite of business analysts who run a series of workshops to brainstorm “things”. Before you know it, you will probably have burnt hundreds of thousands of dollars of your budget. The payback and ROI are much greater if you assemble a team with the proper skill set. The team does not have to be large; a few people are typically adequate.
Instead, such a team’s role is to select the vital data you need, choose the best mechanisms for collecting it, facilitate the gathering of that data and analyse it properly. Equally, if not more importantly, the team should be able to explain clearly, and help you decide, how the analysis outcome will be used, by whom, when and for what purpose, to address specific segments of one or more measurable business objectives. This approach is the surest route to quickly gaining the sought business insight.
Organisations that have demonstrated continued success in using analytics, with significant impact on operational excellence and bottom lines, always strike the right balance in the team’s skills, combining subject matter experts, operational staff and system modellers. Analytics and business insight involve far more than statistical analysis. Data scientists or statisticians who do NOT fully comprehend the operational dynamics of the business, and the challenges faced by the operational [i.e. ground or frontline] staff, can undertake the most sophisticated analysis on the planet, yet the outcome may mean nothing to the operational arm of the business. The same applies to subject matter experts who can drive one or more analysis tools (such as Excel or any other tool) and think that simply inserting data and clicking a few buttons will produce answers that lead to the best decisions. The reality is that modelling the business dynamics of a situation and applying advanced analytics require a thorough understanding of the concepts behind the algorithms used for the analyses. One of the main reasons, beyond classical statistical methods, is that it is vital to incorporate the behaviour of the “factors” or “elements” [be they people or things] as time elapses, leading to predictive outcomes. In this way, you will know what to do, how to do it and what outcome to expect BEFORE it is too late.
Action 2: Do not Jump into Big Data, Machine Learning and Over-Engineered Analyses
Most organisations have a large amount of data and either do not know what to do with it or do the wrong thing with it. Classic examples are averaging data, calculating the percentage of “things” over time, labelling such outcomes as the occurrence of “things”, and treating such numbers as indicators of the probability that the business is heading in the right or wrong direction. In most cases, the timing of the analysis is also wrong: you are constantly analysing the past without predicting the future through “proper” analysis, a trap the short sketch after the examples below makes concrete.
Examples of such “things” include customer satisfaction, projection of sales and profit margins, mergers and acquisitions, managing a healthy diet, and reducing or retaining staff. Other examples to consider include the time it takes to discharge patients, time to clear road incidents, the journey time of travellers, the cost of insurance policies, the quality of products, effective management and scheduling of patrols and other emergency services, customer churn, and so on.
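To make the backward-looking trap concrete for any of these “things”, here is a minimal Python sketch using made-up monthly figures (every number below is an illustrative assumption, not real data). The year-to-date average looks moderate, while even a simple trend fit exposes where the numbers are actually heading:

```python
import numpy as np

# Hypothetical monthly complaint counts with a worsening trend (illustrative only)
months = np.arange(1, 13)
complaints = np.array([40, 42, 41, 45, 47, 50, 53, 57, 60, 66, 71, 78])

# Backward-looking view: the year-to-date average hides the trajectory
print(f"Year-to-date average: {complaints.mean():.1f}")

# Forward-looking view: even a simple linear fit exposes where things are heading
slope, intercept = np.polyfit(months, complaints, deg=1)
print(f"Fitted trend: +{slope:.1f} complaints per month")
print(f"Projected month 13: {slope * 13 + intercept:.0f}")
```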
It has become a trend that, after a few classical workshops conducted under the banner of business insight, the conclusion is reached that the main reason the organisation is not getting its decisions right is its inability to handle a large amount of data, and hence the cure is to start spending on big data and some machine learning “stuff”. There is nothing wrong with big data and advanced machine learning. However, the organisation MUST first prove that simpler yet adequately sophisticated tools have reached their limits before getting into big data. Most desktop off-the-shelf classical statistical and system dynamics tools include very sophisticated machine learning algorithms that can handle millions of rows of data, and their vendors offer very reasonable and affordable price points. The issue is not the tools or the big data. Before you know it, you will have spent millions of dollars on CRM, Case Management and ERP systems, workflow engines, business intelligence tools and the like, with little or no increase in business yield. And when you figure it out, it is typically too late.
Action 3: Do Not Rely on the Arithmetic Average of Data
Sometimes, technological advances tend to backfire. Various industry verticals have shown that while a new technology helps achieve something new, the same technology inherently resurfaces already-solved challenges. In other cases, the latest technology makes it easier for people to misuse it, or they become lazy and rush into decisions before properly tackling the topic. There are too many examples, but we will illustrate the concept with a few worth thinking about.
When stealth technology used in fighter jets was in its trial period, experts found that while the technology makes fighter jets almost undetectable by radar, the jets were more susceptible to electronic interference than those manufactured using older technology; the older metal skins provided inherent protection or immunity for the electronic devices installed inside the aircraft. Another example is the classical word processor. In the old days, people used to think hard when drafting material by hand before typing it on a typewriter, because once several pages were typed there was no room to insert new lines or move words around. Advances in word processing help authors with editing and spell-checking, amongst many other benefits, but people now tend to think less about what they want to write, why and how to write it. Everyone receives numerous unclear emails that still take time to read before you can decide whether or not to reply. Think of how much time you typically waste each week simply because of the misuse of the “tool”.
Another classic example, more pertinent to the topic of this blog, is Excel. The tool is one of the best technological achievements, providing extraordinary power to undertake simple or sophisticated analyses. The problem is that anyone can insert data, calculate the average of something over time and plot colourful tally and other charts; the average then shows an increase or decrease in the measurement of the item under consideration, which, in turn, leads to a wrong decision. Decision-makers are misled by the averages and the charts that represent them and, more importantly, by the belief that they are now using evidence. The average of a data set is genuinely “evidence”, but, in most cases, it is the wrong evidence.
There is a natural tendency to “centralise” our focus when thinking about or discussing a topic. Managers typically ask for a magic number to represent too many uncertainties or factors. Just think of how often your manager has asked you for one number: the time to complete a task, the delivery date of goods, the number of defects in a product, and so on. What makes it worse is that senior managers tend to average the already-averaged numbers provided by their subordinates. By the time the decision-makers get the report (or the numbers), the data is so highly aggregated that the average no longer makes sense, even if we assume it made some sense at the lowest level at some point.
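Here is a minimal sketch, with made-up figures, of how averaging already-averaged numbers distorts the picture; both teams’ task times below are illustrative assumptions:

```python
import numpy as np

# Hypothetical task-completion times (days) from two teams -- illustrative only
team_a = np.array([2, 3, 2, 3, 2, 3, 2, 3, 2, 3])   # ten small tasks
team_b = np.array([20, 30])                          # two large tasks

# Each manager reports a team average; a senior report then averages the averages
avg_of_avgs = (team_a.mean() + team_b.mean()) / 2
print(f"Average of averages: {avg_of_avgs:.1f} days")   # about 13.8 days

# The pooled average over all tasks tells a very different story
pooled = np.concatenate([team_a, team_b]).mean()
print(f"Pooled average:      {pooled:.1f} days")        # about 6 days

# Neither number describes any actual task: the data is bimodal, so any single
# "magic number" misleads at every level of the reporting chain
```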
Some organisations think they have got it right by using what is commonly known as the three-point estimate, exploring worst-case, base-case and best-case numbers (or scenarios). While this approach is slightly better than relying on one magic number, the situation is not much healthier. Organisations tend to forget that once the lowest and highest “numbers” or “limits” are stated, the inherent assumption is that nothing will or can change beyond those magic thresholds. There is also little or no regard for the distribution, or the uncertainty of the “change” from the worst to the best scenario passing through the “normal” as time elapses [i.e., where one SHALL incorporate the operational dynamics or the moving parts of the business]. In most cases, there is little or no common understanding or agreed definition of a “scenario” in the first place. To make things worse, once you state the magic numbers, they are carved in stone, leading to major [sometimes catastrophic] decisions.
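By contrast, here is a minimal Monte Carlo sketch of the same idea (the base case, the choice of a lognormal model and the spread parameter are all illustrative assumptions). Treating the estimate as a distribution rather than three carved-in-stone points immediately reveals how likely the “worst case” is to be breached:

```python
import numpy as np

rng = np.random.default_rng(42)

# Three-point estimate for a task duration (days) -- illustrative only
best, base, worst = 10.0, 14.0, 20.0

# Model the same belief as a distribution instead of hard limits:
# a lognormal whose median equals the base case; sigma is a modelling judgement
sigma = 0.35
samples = rng.lognormal(mean=np.log(base), sigma=sigma, size=100_000)

print(f"Median:                    {np.median(samples):.1f} days")
print(f"90th percentile:           {np.percentile(samples, 90):.1f} days")
print(f"P(exceeding 'worst case'): {(samples > worst).mean():.1%}")
```

With these assumed figures, a double-digit percentage of runs exceeds the stated worst case, a risk the three-point format silently defines away.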
Action 4: Do Not Rely on Percentages and Tallies of Data
When you are at a workshop or a meeting, pay attention to the number of times the participants refer to the “percentage” of something. Like “average”, “percentage” is one of the most common words aired among participants, and the audience’s focus locks onto the highest or lowest bars of charts with attractive animations, unconsciously forming mental decisions. The “average” and the “percentage” are easy to calculate in Excel, and the production of various types of charts, colours and labels quickly becomes the focus, missing the fact that the “percentage” is another commonly accepted yet often misleading outcome relied upon by decision-makers. The rush into reporting the “percentage” also demonstrates how the ease of use of technology backfires, leading people into severe and fundamental mistakes.
A classic example is averaging categorical data collected via surveys or questionnaires (e.g. where answers to questions are “Good”, “Poor”, “High” and “Low”). Another is calculating the probability of something taking place (or otherwise) using the wrong methods, with little or no regard for data type, uncertainty or distribution. The situation is worse when people use the “average” and “percentage” to make decisions about the reliability of an “item”. It is instructive to note that every figure representing the reliability or probability of something occurring is expressed as a “percentage”, but NOT every “percentage” [i.e., a fraction of something] is a probability or the reliability of an item.
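A minimal sketch of the categorical-data trap follows; the answers and the numeric coding are illustrative assumptions. The scale is ordinal, so the distances implied by any numeric coding are arbitrary, and reporting the distribution is far safer than reporting an “average”:

```python
from collections import Counter

# Hypothetical survey answers on an ordinal scale -- illustrative only
answers = ["Good", "Good", "Poor", "Excellent", "Poor", "Good",
           "Poor", "Excellent", "Good", "Poor"]

# Tempting but wrong: code the categories as numbers and average them
coding = {"Poor": 1, "Good": 2, "Excellent": 3}
mean_score = sum(coding[a] for a in answers) / len(answers)
print(f"'Average' rating: {mean_score:.1f}")   # 1.8 -- but 1.8 of what, exactly?

# The scale has no defined distances, so report the distribution instead
counts = Counter(answers)
for category in ["Poor", "Good", "Excellent"]:
    share = counts[category] / len(answers)
    print(f"{category:9s}: {counts[category]:2d} ({share:.0%})")
```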
Reliability, in particular, must be analysed with the proper methods. For example, people mistakenly apply the Weibull distribution [often associated with the bathtub curve] to non-hardware domains, such as factors that represent people’s behaviour, factors that contribute to the reliability of journey time in transport, the time to discharge patients, or factors that relate to the reliability of a software application. The common mistake is assuming that the Weibull distribution can represent software or human behaviour. One of the main concepts behind the distribution is that the reliability of “items” such as hardware is characterised by things typically going wrong during the initial lifecycle period, then stabilising beyond the teething period until the wear-and-tear period kicks in. Organisations tend to forget that software and people’s behaviour (a paramount factor in transport, the delivery of customer services and manufacturing-related challenges, to name a few) have nothing to do with “wear-and-tear”. In some cultures, drivers slow down if they observe an accident on a breakdown lane, leading to traffic congestion in the normal “flow” lanes, or slow down for no apparent reason, leading to a queue build-up that blocks intersections. Similarly, once a piece of software works, if the code and operating environment have not changed, the software does not exhibit wear-and-tear, unlike a piece of hardware (e.g., a mechanical switch, a spring or a door hinge).
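As a minimal illustration of the point (the journey-time data below is synthetic, generated purely for the example, and the comparison is a sketch rather than a full reliability study), fitting both a Weibull and a skewed alternative to behaviour-driven data and comparing the fits is a quick sanity check:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic journey times (minutes) with the heavy right tail congestion produces
journey_times = rng.lognormal(mean=np.log(25), sigma=0.4, size=2_000)

# Fit both candidate models (location fixed at zero so the fits are comparable)
weibull_params = stats.weibull_min.fit(journey_times, floc=0)
lognorm_params = stats.lognorm.fit(journey_times, floc=0)

# Compare goodness of fit: the higher log-likelihood wins
ll_weibull = stats.weibull_min.logpdf(journey_times, *weibull_params).sum()
ll_lognorm = stats.lognorm.logpdf(journey_times, *lognorm_params).sum()
print(f"Weibull log-likelihood:   {ll_weibull:.0f}")
print(f"Lognormal log-likelihood: {ll_lognorm:.0f}")

# On behavioural data like this, the wear-and-tear assumptions baked into the
# Weibull model typically lose to a skewed alternative -- the check takes minutes
```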
Action 5: Do Not Ignore the Moving Parts of the Business
The topic of Action 5 goes far beyond classical statistical analysis; it is at the heart of the relatively large domains of systems engineering, Design of Experiments [DOE] and business operational dynamics. The one-line summary: even if you undertake the perceived “proper” analytics, the organisation may still make wrong decisions carrying unquantified high risks if the analysis does NOT incorporate the operational dynamics [i.e., the moving parts of the business]. When the study includes the operational dynamics, tightly coupled with the business’s operational [NOT paper] strategy and an ACTIONABLE [NOT paper] operating model, one can quickly identify the factors that most influence the desired measurable business objectives.
While one can argue that Excel facilitates complex analyses, it does not make it easy to visualise “changes” in operational dynamics as time elapses. Stella Architect, FlexSim and ExtendSim are among the tools Go2Cab promotes and uses as alternatives to Excel when appropriate. When you change a number in one Excel cell, all related numbers change instantly, with NO DELAY, whereas real operations respond with lags. The other typical deficiency of Excel is the lack of traceability of “units”: unless you are extremely careful when hooking up formulas, Excel will produce a number regardless of whether the units are correct. Such phenomena lead to higher risks when implementing complex cell structures, formulas and linkages between cells, sheets and workbooks. So long as you are NOT dividing by zero, Excel does not care and, in almost all cases, will produce a number. Nor is it easy to incorporate the dynamic interactions, as time elapses, between the factors that most influence the desired outcome.
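To make the point about delays concrete (not as a substitute for the tools above; every figure here is an illustrative assumption), a minimal stock-and-flow sketch in Python shows capacity lagging a demand step, behaviour that an instant spreadsheet recalculation hides:

```python
import numpy as np

# Minimal stock-and-flow sketch: staffing capacity chases a demand step with a
# first-order delay -- all numbers are illustrative assumptions
dt = 0.25                      # time step (weeks)
weeks = np.arange(0, 20, dt)
adjust_time = 4.0              # assumed hiring/training delay (weeks)

demand = np.where(weeks < 5, 100.0, 140.0)   # demand jumps 40% at week 5
capacity = np.empty_like(weeks)
capacity[0] = 100.0

# Euler integration: capacity closes the gap gradually, never instantly
for i in range(1, len(weeks)):
    gap = demand[i - 1] - capacity[i - 1]
    capacity[i] = capacity[i - 1] + (gap / adjust_time) * dt

print(f"Capacity at week 6:  {capacity[weeks == 6.0][0]:.0f}")   # still far below 140
print(f"Capacity at week 19: {capacity[weeks == 19.0][0]:.0f}")  # only now catching up
```

Here the units live in the comments and the delay constant is a visible, challengeable assumption, precisely the traceability that buried spreadsheet formulas lack.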
Go2Cab can share numerous real-world examples of businesses and operations that suddenly collapsed simply because Excel could not cut it, even though the analysis undertaken in it was correct.