I think, therefore I am… wrong

29 June 2018

In the risk management fraternity, we’ve often said, or at least heard it said, that a risk manager has a multitude of hats to wear in terms of the role they play in the organisation. At some point in their careers, a risk manager will have performed functions typically associated with positions such as company secretary, legal consultant, HR practitioner, health, safety or security manager, credit and market analyst, financial manager, logistics manager, strategy guru, IT specialist, project manager, office manager and so many others in between. Versatility has become a vital characteristic in ensuring the long-term success of your risk management framework and methodology, and ultimately the embedding of a risk culture that is beneficial to achieving objectives. Successful risk managers have often been the ones who could “speak the corporate lingo” and utilise this understanding to foster relationships with potential risk champions. But once these relationships are fostered, how do we maximise the leverage they offer us in order to achieve our ultimate goal of nurturing effective decision-making in a sea of risks and opportunities?

I believe it is time we added two more hats to our growing repertoire of headgear: those of psychologist and philosopher. After all, no risk manager worth their salt should have to operate in isolation. Above all, we rely on people, and you don’t need me to tell you that people are incredibly complex creatures. Almost every risk manager I’ve spoken to has experienced the taxing task of pulling information out of colleagues who just don’t share their passion for the management of risk, and don’t quite grasp the importance of sharing risk information and metrics for the benefit of the organisation as a whole. The communication we receive is often disjointed, sometimes incomplete and, every now and then, just not true. These potential risk champions frequently operate in silos, oblivious to the value of their square of narrative in the risk management quilt you are trying to fashion.

The Cambridge Dictionary defines a psychologist as “someone who studies the human mind and human emotions and behaviour, and how different situations have an effect on people”. Qualified psychologists have years of study behind them, devoted to understanding and applying the vast array of theories and research that forms the foundation of the profession, yet I’m sure many will tell you that their true learning started with their first patient interaction. As a risk manager, submitting yourself to several years of studying psychology is obviously not a practical solution. Yet if we look at that definition again, an understanding of how different situations have an effect on people would quite clearly prove advantageous when managing the impact of uncertainty on the achievement of our objectives. And this is exactly where channelling your inner psychologist could be the means to accelerating risk maturity in multiple facets of the business.

In his book “Behavioral Risk Management”, which I’d highly recommend to anyone wanting to understand the psychological dimension of managing risk, Hersh Shefrin contends that “every single risk management disaster in the last 15 years, including financial disasters, has had psychological issues at the root”. And when one considers that disasters are frequently compounded by people’s reactions to them, this contention makes sense. Assuming that everybody uses the same thought process when making decisions, even when faced with the same crisis or disaster, is naïve at best and, at worst, simply dangerous. It is extremely important to understand what makes individuals tick. This might sound a little over the top, but you might even want to consider maintaining a “patient file” with psychological profiles of your key decision makers, so as to better understand what drives their thinking in times of crisis or moments of overwhelming opportunity. This might enable you to better predict what decisions will be made, but more importantly what type of information needs to be shared with each individual to provoke the desired response. With some people, providing reams of data, with multiple variables and varied outcomes, will prompt their inquisitive minds to research the information further, incorporate it into their assessment process and base their conclusions on solid intelligence, all but guaranteeing that the correct choice is made. With others, something as simple as an exchange-rate ticker on their computer is enough to encourage them to pursue the right option.

In 1814, Pierre-Simon de Laplace published the first real expression of the concept we have come to call determinism, in a thought experiment amusingly known as Laplace’s demon. Wikipedia defines determinism as “the philosophical theory that all events, including moral choices, are completely determined by previously existing causes”. In his essay “A Philosophical Essay on Probabilities”, Laplace theorised the following: “We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.” A good practical example is the opening break in a game of pool. If one had access to all the data on every single variable, including the direction of the fibres on the table, the amount of polish on the balls, the force of the break, the springiness of the cushions, the friction between the pool cue and the leading hand of the player, the air pressure, temperature and humidity in the room, and countless other factors (i.e. all of the factors), then one would be able to predict with absolute certainty exactly where every ball would end up. Of course, in the real world we don’t have access to all the data, nor do we have the computational ability to analyse all of these data points. Nor should we have to. Risk management is not, and probably never will be, an exact science. We’re not in the game of Nostradamus-like predictions (“our warehouse will burn down at 11h47 on 12 July 2019”), as much as we might love to be.
But eliminating as much of the uncertainty around an event as we can, with the limited information we have, is what separates good risk managers from great ones. In certain environments (e.g. banks), the analysis of massive amounts of data is necessary and highly recommended. If you run an ice cream stand on the Umhlanga beachfront, you might just need a reliable weather forecast to know when to mitigate risk and when to maximise opportunity. And this is fundamental to understanding how people make decisions, and how we contribute to them making brilliant ones.
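Laplace’s all-knowing intellect aside, there is a practical obstacle worth seeing in action: tiny measurement errors compound. Here is a minimal sketch in Python, using the logistic map, a standard textbook illustration of sensitive dependence on initial conditions, as a stand-in for the pool break (the map and its parameters are my illustrative choice, not anything from Laplace). Two “measurements” of the starting state that differ by one part in a billion soon produce completely different trajectories:

```python
def orbit(x0, r=3.9, steps=100):
    """Iterate the logistic map x -> r*x*(1-x), a deterministic toy
    system: its entire future is fixed by its present state."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two "measurements" of the same break, differing by one part in a billion
a = orbit(0.400000000)
b = orbit(0.400000001)

# Early on the trajectories agree; within a few dozen steps the gap
# between them grows to the full size of the system
gap = [abs(u - v) for u, v in zip(a, b)]
```

The system is perfectly deterministic, exactly as Laplace imagined, yet an imperceptible error in the “present state” ruins the long-range forecast. Which is why the great risk manager settles for shrinking uncertainty, not abolishing it.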

No examination of the impact of current and past experiences, observations and patterns on the day-to-day decisions we make is complete without bringing in the tricky subject of cognitive bias, and how ingrained “beliefs”, whether accurate or not, sculpt and determine how we interact in every single situation. Those wonderful, learned psychologists mentioned earlier have identified so many different types of cognitive bias that an examination of each one would require a research paper worthy of a degree in and of itself. The range runs from automation bias, where an excessive dependence on systems and machines leads to poor decisions based on flawed information, to negativity bias, that wonderful human phenomenon of recalling negative experiences far more vividly than positive ones, both clearly affecting impact and likelihood ratings when it comes to assessing risk. Outcome bias leads to decisions being judged on their eventual outcome, without a thought given to the quality of the decision at the time it was taken. Should that decision automatically be considered the right one the next time the situation arises? Most people are probably aware of the Dunning-Kruger effect, and if you haven’t heard of it before, I’ll bet my house on you knowing a few people who fit the mould. Psychologists David Dunning and Justin Kruger essentially confirmed what most of us already know: some people are just too stupid to realise they are stupid, while some really smart people realise they know very little and underestimate how capable they are.

Occasionally I’ve fallen victim to my own belief bias, where the logical rigour of an argument is judged by the sheer plausibility of its conclusion. One of my favourite examples of this involves the highly respected Sir Isaac Newton’s law of universal gravitation, which states that every particle attracts every other particle in the universe with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between their centres. This reference to a mysterious force led to much derision for poor old Izzy, but he stuck to his guns, and the publication of his “Philosophiae Naturalis Principia Mathematica” abruptly stopped the laughter and laid the foundation for the study of classical mechanics. It was final. Newton’s argument was sound. It tied into and explained his conclusion, and fitted tightly into the principles he proposed. But it was wrong. In the early 1900s an eccentric chap by the name of Albert Einstein, in the process of laying down his general theory of relativity, provided his own alternative model of gravity as a distortion in the shape of the space-time continuum. He advanced the idea that although objects in motion travel along the straightest possible line, the effect of their mass on space-time essentially causes a dent, meaning that the straightest possible line is actually a curved path. And it may only be a matter of time before Einstein’s theory is proved to be false, or at the very least incomplete. Scientists observe patterns in the past and have faith that those patterns will continue into the future. Well, sometimes they just don’t.
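Part of what made Newton’s law so plausible is how neatly it reproduces everyday experience. Stated symbolically it is F = G·m₁·m₂/d², and a few lines of Python are enough to check it against the world (the constants below are standard textbook values for the Earth, my addition for illustration, not anything from Newton’s text):

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / d**2
G = 6.674e-11            # gravitational constant, N·m²/kg²

def gravitational_force(m1, m2, d):
    """Attractive force (in newtons) between two point masses
    m1 and m2 (kg) whose centres are d metres apart."""
    return G * m1 * m2 / d ** 2

# Sanity check: a 1 kg mass at the Earth's surface
earth_mass = 5.972e24    # kg
earth_radius = 6.371e6   # m
f = gravitational_force(earth_mass, 1.0, earth_radius)
# f comes out near 9.8 N, the familiar weight of 1 kg under gravity
```

The formula is tidy, the prediction matches observation, and the conclusion feels inevitable, which is precisely why belief bias is so seductive: a beautifully confirmed model can still be an approximation waiting to be overturned.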

A thorough assessment, analysis and understanding of all the role-players in your risk management bubble is essential if you are to create an environment where the facts and figures at your disposal are accurate, reliable, and as free from bias as is humanly possible. For some of us, this might require a fundamental change in the way we interact with people. But don’t be intimidated by the process. The rewards are well worth the effort if they translate into a cutting-edge level of decision-making, risk resilience, and ultimately the achievement of our objectives.

Author – Paul van der Struys

July 2018

© 2018 IDI Technology (Pty) Ltd