When Theresa May became Prime Minister, it was the second time that the glass ceiling at the top of British politics had been shattered. However, politicians and business leaders alike should beware the ‘glass cliff’ during times of crisis. So what is the glass cliff, and what are the implications for leadership?

The Glass Cliff
Whilst the glass ceiling means that women are still less likely than men to progress into senior leadership positions, researchers have found that, in times of crisis, women are more likely to be appointed to leadership positions. This is known as the ‘glass cliff’ as it carries an increased risk of failure and criticism.
For example, researchers examined the share price performance of FTSE 100 companies immediately before and after the appointment of a male or female board member. They found that when companies appointed men to their boards of directors, share price performance was relatively stable before the appointment. However, companies that appointed a woman had experienced consistently poor performance in the months preceding the appointment. In essence, men and women were being appointed to directorships under very different circumstances, with different likelihoods of success.

“Cleaning up the mess”
Why does this happen? In part, glass cliff appointments reflect gender stereotypes - that women are peculiarly suited to crisis management. This is clear from recent commentary regarding women politicians. Bloomberg recently ran an article titled ‘Women Are Cleaning Up Britain’s Brexit Mess’, whilst Baroness Jenkin of Kennington was quoted by The Guardian discussing the Conservative Party leadership contest: “I think they [the country] feel that at a time of turmoil, a woman will be more practical and a bit less testosterone [driven] in their approach. More collaborative, more willing to listen to voices around the table, less likely to have an instantly aggressive approach to things.”
Consistent with these views, researchers have found that in times of success, stereotypically male attributes are seen as being most important for the selection of a future leader; yet in times of crisis, stereotypically female attributes matter most for leader selection.

When opportunity knocks…
A second driver of the glass cliff effect is that crisis situations are seen as providing women (but not men) with good leadership opportunities. They are more likely to be construed by decision makers as ‘golden opportunities’ than as ‘poisoned chalices’. This is exacerbated by the relative lack of leadership opportunities for women - while men who are invited to take up a leadership role in a crisis may feel able to decline the invitation and ‘wait for something better to come along’, women may have no such luxury and be encouraged to ‘take whatever they can get’.
The result is that when women do take up senior leadership roles, they are more likely than men to have to deal with crisis situations, with a greater chance of failure. In addition, a psychological effect called the ‘fundamental attribution error’ means that in seeking to explain the reasons for failure, people tend to focus on individual characteristics of the leader, rather than the situational and contextual challenges that affect the organisation. As such, compared to men, women who assume leadership positions can be more exposed to criticism.

Data driven talent
A key take-away is that the glass cliff effect is most likely to occur when stereotypes influence appointment decisions. Talent moves, especially for senior leadership roles, need to be driven by objective, rich and relevant data. This provides a platform for talent and resourcing specialists to make a real impact – by ensuring long-term succession plans are in place, by systematically collating performance data, by putting in place strong due diligence processes to inform appointments, and by ensuring that HR has the insights and the influence to shape decisions at the top table.
To read more of James Meachin's blogs on leadership and other related business psychology topics, click here.
Judges are impartial, objective arbiters of the law, making important decisions that affect the lives of people every day. At least, that’s the theory; in practice, they are as fallible as anyone else, particularly when it comes to unconscious biases.
Two recent judgements in the UK shine a light on this issue. Andrew Mitchell recently lost his libel case against the Sun newspaper over the infamous ‘plebgate’ accusations. Without commenting on the merits of the case – I have no special knowledge in this area – it was interesting to hear the judge’s passing comments, as reported widely in the media. Consider the following quote from the BBC website: “the judge said PC Rowland was ‘not the sort of man who would have had the wit, imagination or inclination to invent on the spur of the moment an account of what a senior politician had said to him in temper’”.
This is worth restating - the judge did not believe that PC Rowland had the wit or imagination to make up his claim. How did the judge form this view? We are not dealing with a claim that PC Rowland is an expert in quantum mechanics. Surely, there is a huge irony in the judge concluding that Andrew Mitchell must have called PC Rowland a pleb because, in the judge’s view, PC Rowland is a pleb? On the assumption that PC Rowland was not subject to a battery of intelligence tests as part of the trial, it would seem that the judge’s less than complimentary view of his intellectual prowess was based on his assumptions about police officers, or perhaps the sort of police officer that he believes PC Rowland is.
Of course, ‘assumptions’ and ‘believes’ are the key words in the last sentence. They are an intrinsic part of unconscious bias – they are the gateway that links our implicit beliefs to our conscious decision making. In another recent example reported by The Times, an immigration judge resigned after making comments about Deepa Patel, a 22-year-old victim of alleged harassment. When the prosecutor, Rachel Parker, said she was unsure whether Ms Patel could attend the court at such short notice, the judge replied: “It won’t be a problem. She won’t be working anywhere important where she can’t get the time off. She’ll only be working in a shop or an off-licence.” When Ms Parker asked him to clarify his comments, he allegedly replied: “With a name like Patel, and her ethnic background, she won’t be working anywhere important.”
In this rather breathtaking example there appears to be conscious bias at work, as well as unconscious bias and assumptions about Ms Patel based on nothing more than her name. In both cases we can see how the judges’ – and potentially our own – implicit assumptions combine with superficial information (he’s a policeman, her name is Patel) to form a ‘sensemaking narrative’ which, in turn, influences the actions and decisions that are made. As is often the case, unconscious biases are easier to see in others than in ourselves.
On the front cover of the Financial Times' appointments section there was an article about the benefits of 'big data' for recruitment – 'Forget the CV, data decide careers'. As happens from time to time, this is a classic case of old-fashioned ideas being re-branded and re-sold; here, with the added allure of technology.
To summarise the premise of the article, organisations can use large volumes of data about their applicants and staff to identify characteristics that predict success – such as job performance and length of tenure.
Is this a new idea? No. Since the 1920s this approach has been known as 'biodata'. An example from the Financial Times article is as follows: "Employees [at Xerox call centres] who are members of one or two social networks were found to stay in their job for longer than those who belonged to four or more social networks".
Is this a good idea? No. Unfortunately, this approach has myriad problems. First, it 'capitalises on chance' – if you look for statistically significant relationships amongst a large number of random variables, you will find some. Have a look at the excellent Spurious Correlations website for many examples, such as the almost perfect relationship between the divorce rate in Maine and the per capita consumption of margarine in the US. This problem can be overcome by using a 'hold-out sample' – a group that was not in the original analysis that can be used to test the relationship – although this is not often practical.
A second issue is that biodata is notoriously volatile over time and context. Several years ago, a psychologist at a consulting firm published a research paper showing that students who were quicker to apply for graduate recruitment programmes performed better in the subsequent recruitment process. At the time this research gained a fair amount of exposure in the media. The next year, a colleague and I tested this effect in a large sample of graduate recruits – numbered in the thousands – across three separate sectors. There was no effect, either collectively or in any of the three individual sectors. In essence, it did not matter when candidates applied – good candidates were evenly distributed across the application process.
Taking the Xerox example of the number of social networks that candidates use, how will recruiters know if this has a meaningful relationship next year or the year after? Times change – changing technology and social attitudes mean that patterns of social network use are liable to change. Likewise demographics change; those future candidates who are still in their teens may well have different online habits compared to people who are only four or five years older.
Finally, and perhaps most damning of all, is the law of unintended consequences. Take candidate diversity as an example – what if use of social networks (or any other opaque predictor) varied between different ethnic or socioeconomic groups, by culture or by gender? As researchers found in 1977*, having a city-centre address as opposed to a suburban address in Detroit distinguished thieves from non-thieves, but it also tended to distinguish between white and BME groups.
Beware the Emperor's new clothes.
* Pace, L. A. and Schoenfeldt, L. F. (1977), Legal Concerns in the Use of Weighted Applications. Personnel Psychology, 30: 159–166.
The UK's economy has been in the doldrums for several years now. But don't worry; here are some views from experts that will put your mind at ease:
"The recovery will gain a bit more momentum in 12-18 months when exports are expected to increase further and business investment to grow more robustly".
"We remain hopeful of a gradual acceleration in GDP growth over the next 12 months".
"We expect quarterly growth to increase gradually over the next two years, but we have to accept that it will remain modest and below-trend for some time".
So, a clear consensus that in 12-24 months' time the good times will roll again as we climb back into proper growth. Except that it isn't a consensus.
The first quote, referring to the UK, was published by the Organisation for Economic Co-operation and Development back in 2010. The second quote came from James Knightley of ING Financial Markets in 2011, and the third quote was from the British Chamber of Commerce in - can you guess? - 2012.
I've chosen these examples not because these forecasters are any worse than any others - I chose them precisely because they are just like other forecasters! When faced with the essentially impossible task of predicting the future, experts and non-experts alike feel compelled to provide an answer; doing so brings two key errors into play.
Error number 1 - substitution. When faced with a difficult or impossible question, people unwittingly substitute the original question ("When will the economy recover?") with an easier question ("How long do I think it will be until something changes?"). Although these questions 'feel' similar they are, of course, very different. The second question is easier to answer ("It's been unseasonably cold so people haven't been out shopping as much. When it gets warmer in spring, retail sales will pick up") but is also much narrower and misses the point of the first question. This is compounded by a second error.
Error number 2 - optimism bias. Whether it's estimating how long it will take us to do something, how much it will cost us, or how good we are at it, we all share a strong optimism bias that colours our judgments. In particular, when looking ahead, we fail to account for unforeseen problems (that's why we fail to account for them!) that almost always arise within complex systems. Essentially, then, our plans end up being 'best case scenarios' that most often crumble under the harsh realities of the unexpected. After all, if every 'five-year plan' came to fruition there would be 92 football clubs in the Premier League!
These two errors, in combination, explain why many economists believe that the economy will improve in 12 months or so. And why they've been saying this for more than three years! They can see the short-term problems such as inflation spikes, poor weather, and reduced consumer confidence, and they can guess when those problems will ease. What they don't see (because they can't) are the problems that will occur later in the year. And that's why you shouldn't be too surprised to read an economist in December 2013 predicting that whilst growth will be sluggish in the short term, things will really start picking up in 2015.
Real Madrid recently agreed to pay £80 million for one man - Cristiano Ronaldo. Not only is this a huge amount of money in real terms, it's a huge amount when compared to other football transfers. It eclipsed the previous world record, which was set only days earlier, by more than 40%.
So why did Real Madrid spend so much? Firstly, they had been performing relatively poorly and were seeking to improve their performance. In doing so, they bought into the 'hero fallacy'. This is a tendency we all share to place a disproportionate emphasis on individuals when explaining success. Essentially, we explain the outcomes of amazing events by paying attention to the most obvious causes - the leader or outstanding performer. The fallacy is that the real causes are often subtler and less visible to us, so we overemphasise the part played by the most visible - the heroes.
The second reason Real Madrid spent so much is they failed to take into account 'regression to the mean'. This is the principle that the more extreme a performance (either very good, or very bad) the more likely it is to move back toward the average. Why? Ronaldo's outstanding performances were based on three factors - his ability, his form, and luck. Whilst his ability is a constant, his form and luck will vary over time. The £80 million price tag reflects Ronaldo's performances when he was in good form and having the rub of the green. As his form and luck inevitably change, so his performances will decline, even though he still retains tremendous ability.
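The ability-plus-form-and-luck model above can be turned into a toy simulation (Python; all figures invented for illustration). Repeatedly 'signing' the league's top performer after one season shows, on average, how far that record season regresses the next:

```python
import random
import statistics

# A toy model of regression to the mean: observed performance is a
# constant ability plus varying form-and-luck noise. Numbers invented.
random.seed(7)

n_players = 100
abilities = [random.gauss(50, 5) for _ in range(n_players)]

def season():
    # One season's output per player: ability plus form-and-luck noise.
    return [a + random.gauss(0, 10) for a in abilities]

# Repeatedly 'buy' the top performer after one season and watch
# what happens in the following season.
drops = []
for _ in range(200):
    year1, year2 = season(), season()
    star = max(range(n_players), key=lambda i: year1[i])
    drops.append(year1[star] - year2[star])

print(f"average decline after a record season: {statistics.mean(drops):.1f}")
```

The star's ability hasn't changed between the two seasons; the decline is entirely the extreme form-and-luck of the record year washing out, which is the point of the £80 million example.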
So what does this tell us about managing a business in a downturn? Like Real Madrid, many businesses will be desperately keen to improve their performance. With this comes the risk of over-reliance on following or recruiting 'heroes' - the highly visible individuals who will single-handedly turn things around. Instead, organisations should remember that success is rarely predicated on one person alone and that everyone, even the best, will waver at times. In many ways the answers can be found in the world of football - the best teams tend to have two features: good players throughout the team and good teamwork. The conclusion, therefore, is to focus on getting and retaining good people across the business, and to ensure good communication, with everyone working towards the same objective.
And what lies in store for Ronaldo? No doubt he'll score some spectacular goals, but without 10 colleagues backing him up, few of them will be winning goals.