On a flight last night, I took the opportunity to grill the Chief Steward of one of the world's top service airlines (Singapore Airlines) about how they achieve their high standards. Is it down to training? Selection? His response was that it's mainly down to:
1. Pride in the brand
2. Selecting people likely to show this pride
3. Only rewarding behaviour that exceeds expectations - not behaviour that simply meets expectations.
The pride logic is similar to that of a certain BBC 24 newsreader who turned down a 500% salary increase offer from CNN "because CNN is just not the BBC".
Sunday, 27 January 2013
The impact of social media et al. on team bonding
My Singapore Airlines Chief Steward - Mohd - also pointed out to me that Facebook and Skype are making it much harder to build team spirit. Back in the days before social media (five or ten years ago), he says, the crew would land in London or wherever and then party hard together - all good bonding stuff. But today they all hit the hotel and then hardly see each other, because everyone goes straight onto Facebook. One cabin crew member even ordered room service at their London hotel and had a romantic Skype dinner with her boyfriend back in Singapore - instead of joining the rest of the crew for dinner.
I guess that for Gen Y, social media is removing a number of old-style bonding activities; I wonder whether they'll be replaced.
Friday, 26 August 2011
Why senior management don't often rely on empirical research
In a post below, I asked how much value is added by people-related activity in organizations using this model:
http://img.photobucket.com/albums/v480/maxblumberg/PeopleValue.png
The model can be used to assess the extent to which people-related activity adds to Total Shareholder Return (TSR). TSR reflects total equity growth and is the ultimate outcome measure pursued by senior management in most commercial organizations*.
I suggest that we have little evidence about the value added by People in many organisations. By People, I mean employees (payroll, workforces, human capital, talent) and people-related investments (cradle-to-grave: attraction, selection, onboarding, development, succession planning, engagement, retention, exit). The size of People investments varies by industry, but as long ago as 2002, CFO Research Services reported that, as a percentage of revenue, it was 37% in TMT, 25% in Heavy Manufacturing, 45% in Pharmaceuticals, and 43% in Financial Services. I suspect that these investments have only increased since then. Thus a Financial Services company with $5bn turnover would be spending roughly $2.15bn on People.
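As a quick sanity check on that arithmetic, here is a minimal Python sketch (the industry percentages are the 2002 CFO Research Services figures quoted above; the $5bn turnover is purely illustrative):

# People spend as a share of revenue, per CFO Research Services (2002)
PEOPLE_SHARE = {
    "TMT": 0.37,
    "Heavy Manufacturing": 0.25,
    "Pharmaceuticals": 0.45,
    "Financial Services": 0.43,
}

def people_spend(revenue: float, industry: str) -> float:
    """Estimated annual spend on people-related activity."""
    return revenue * PEOPLE_SHARE[industry]

# A Financial Services firm with $5bn turnover:
print(people_spend(5_000_000_000, "Financial Services"))  # ~$2.15bn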
This is a lot of money, and if the statistic is correct, senior management presumably believe that this investment yields a better TSR than the alternatives. Again, as suggested in a previous post here, this diagram presents alternative investments that senior management could have made to maximise TSR:
http://img.photobucket.com/albums/v480/maxblumberg/Humancapital.jpg
Senior management must therefore decide which mix of investments in tangible assets, intellectual capital, and financial capital is likely to maximise TSR - in other words, it is a portfolio management problem (a toy sketch follows the list below). On what do they base this decision? As scientists, we would suggest that the effective method is empirical research. But senior management are nervous of research, for the following reasons:
o Robust, valid research is often difficult – if not impossible – to achieve in the 'real world': there are simply too many variables, most of which cannot be controlled well enough to yield robust, valid results
o Research is expensive
o Research takes time, and by the time the outcomes are known, the investment opportunity window may have closed, or a competitor may have taken the market sooner by making an (admittedly riskier) non-research-based decision
o The research may reveal that a favoured asset class (e.g. tangible assets, human capital) does not in fact yield the greatest returns.
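Returning to the portfolio framing above, here is a toy sketch of the decision, with every number invented purely for illustration: it grid-searches allocations of a fixed budget across the three asset classes and scores each mix by expected return minus a simple risk penalty. The list above is precisely the reason why, in reality, nobody hands senior management clean return and risk figures like these.

from itertools import product

# Hypothetical expected returns and variances per asset class (invented)
EXPECTED_RETURN = {"tangible": 0.06, "intellectual": 0.09, "financial": 0.05}
VARIANCE = {"tangible": 0.02, "intellectual": 0.08, "financial": 0.01}
RISK_AVERSION = 2.0  # higher = heavier penalty on risky allocations

def score(mix):
    """Mean-variance style score: expected return minus a risk penalty."""
    ret = sum(w * EXPECTED_RETURN[k] for k, w in mix.items())
    risk = sum(w ** 2 * VARIANCE[k] for k, w in mix.items())
    return ret - RISK_AVERSION * risk

def best_mix(step=0.05):
    """Grid-search allocations of the budget across the three asset classes."""
    grid = [i * step for i in range(round(1 / step) + 1)]
    candidates = (
        {"tangible": t, "intellectual": i, "financial": round(1 - t - i, 10)}
        for t, i in product(grid, grid) if round(t + i, 10) <= 1
    )
    return max(candidates, key=score)

mix = best_mix()
print({k: round(w, 2) for k, w in mix.items()})  # a diversified mix, not all-in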
For these reasons, instead of relying solely on empirical research, senior management tend to rely on a blend of past experience, intuition - and perhaps some limited research. This outcome may be disappointing to those of us who favour empirically-based decision-making, but that's the real world for you.
I would therefore suggest that, with good cause, real-world management is probably less dependent on research for their weighty decisions than one might like to think.
*(It can be argued that the Balanced Scorecard and Corporate Social Responsibility should also be included as ultimate measures, but these are probably better viewed as mediators of the effect of people-related activity on TSR - perhaps a discussion for another day.)
Thursday, 25 August 2011
What counts as evidence?
Ron Kennedy, as ever, asks great questions. Today he asked: what counts as evidence in the world of organisational psychology? My response:
Does this have the potential to be a religious war? Qualitative researchers are not always going to accept the positivist's evidence and vice versa. So prior assumptions about reality seem to play a role. For example: Can anything be objectively measured? Does the presence of the researcher affect the outcome of the investigation? Is the outcome a linear consequence of the cause or does reciprocity play a role? All of these affect the weight of evidence presented and should be acknowledged by the researcher.
From my perspective, what constitutes evidence depends on what is being investigated. Let's say a client wants to know which organisational and psychological attributes drive employee retention in their organisation. And say I am able to build a model whose predictors distinguish between stayers and leavers with 75% accuracy. On the surface this is remarkable, since the probability of achieving 75% accuracy by chance is just 2.81 x 10^-7 (by the binomial distribution; see the sketch after the list below), i.e. close to zero. But before celebrating my evidence, I must ask:
o Was this based on genuine groups of stayers and leavers, or did I base it on the tenure of existing employees? If the latter, my evidence is weakened.
o Does this reflect the existing employee population? Can I generalise it to future populations, accounting for the attitudinal shifts of Gen Y, Gen Z, etc.? If not, my evidence is weakened.
o Might my findings be 90% accurate for one business unit but only 60% accurate for another? If I haven't controlled for this, my evidence is weakened.
o Might external events (e.g. the economy, M&A activity) have affected retention over the period? If I can't control for these, my evidence is weakened.
o And so on.
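As a minimal sketch of the 'by chance' arithmetic referenced above - where the sample size of 100 and the 50/50 base rate are my assumptions for illustration, not figures from an actual study - the exact binomial tail can be computed directly:

from math import comb

def p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of classifying
    at least k of n employees correctly by guessing alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 75% accuracy on 100 stayers/leavers against a 50/50 base rate
print(p_at_least(75, 100))  # ~2.8e-07: vanishingly unlikely by guessing

Note how sensitive this is to the base rate: if, say, 75% of employees are stayers anyway, a model that always predicts 'stayer' achieves 75% accuracy with no skill at all - which is exactly why the questions above matter.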
So evidence is only evidence after all factors which might affect it have been weighed, and what on the surface appears to be evidence for a large effect size may dwindle to evidence for not much at all.
On the other hand, if I need evidence for how employees feel about a newly proposed pension scheme, and I am not trying to predict anything, I could probably interview a representative sample of sufficient size and come up with reasonable evidence one way or another.
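To put a number on 'a representative sample of sufficient size', here is a minimal sketch using the standard worst-case formula for estimating a proportion (the 95% confidence level and the ±5% margin of error are my illustrative choices):

from math import ceil

def sample_size(margin: float, z: float = 1.96, p: float = 0.5) -> int:
    """Smallest n giving the requested margin of error for a proportion
    at the confidence level implied by z (p = 0.5 is the worst case)."""
    return ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(sample_size(0.05))  # 385 employees for +/-5% at 95% confidence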
Bottom line: evidence resulting from predictive models should be treated with extreme caution, with all limitations and assumptions noted. Evidence resulting from descriptive investigations is probably quite robust, provided samples are representative and of sufficient size.
Original post:
http://www.linkedin.com/groupAnswers?viewQuestionAndAnswers=&discussionID=67058084&gid=78865&commentID=49666523&trk=view_disc
Tuesday, 16 August 2011
Superficial analysis and the wrong interventions
Labour opposition leader Ed Miliband says the UK needs a proper, deep analysis of the real causes of the recent riots. But Prime Minister David Cameron says the UK already knows the causes, and all that is needed now is rapid action and intervention to prevent recurrence. Miliband responds that this means Cameron's intervention will simply be a knee-jerk reaction based on a superficial analysis of the situation, which will result in the wrong intervention and fail to cure the rioting.
As diagnostic analysts, we see this situation almost daily in complex organisations. We hear about their problems finding the right employees, and how they spend a fortune on development, engagement and retention, but without the results they were hoping for. But when we offer to perform a formal diagnostic to identify the root causes of the problem, there is inevitably a Cameron in the company who tells us the answer is obvious and then proceeds with an intervention. Nine times out of ten, we're back there within two years.
It is true that in non-complex organisations, the causes of problems are usually easy to identify. But in large, global, multi-cultural organisations, they are not.
Why do people fear diagnostic analysis? Some possible reasons:
1. Diagnostics cost money, it is true. But they are usually cheaper than applying the wrong intervention.
2. Some people get systematic problem analysis; others don't. I notice this in the looks of awe I get when explaining what seems to me a simple, systematic approach to, say, identifying the causes of employee turnover. I often wonder why the client didn't think of it themselves, but the fact is that many people do not think this way and find it difficult. I suppose that's good, because it creates opportunities for me.
Of course sometimes even formal diagnostics can get it wrong, but one thing is for sure: they get it wrong less often and cost much less than superficial guesses about the underlying causes.
Friday, 5 August 2011
Does human capital inevitably yield higher returns than other "assets"?
Commercial organisations may wish to consider this model:
http://img.photobucket.com/albums/v480/maxblumberg/PeopleValue.png
It argues that people decisions ultimately underpin total shareholder return (TSR). Specifically, the effect of people on TSR is mediated by Quality, Innovation, Productivity and Customers. (Few consultants actually go so far as to try to measure this - but they really should, to justify their interventions; a sketch of one way to do it follows.)
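For anyone who does want to measure it, here is a minimal product-of-coefficients mediation sketch on simulated data. Everything here is invented for illustration - a single mediator (productivity) stands in for all four, and the effect sizes are made up; a real analysis would use something like structural equation modelling across all four mediators:

import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Simulated data: people investment -> productivity -> TSR (all invented)
people = rng.normal(size=n)
productivity = 0.6 * people + rng.normal(size=n)               # path a
tsr = 0.5 * productivity + 0.1 * people + rng.normal(size=n)   # paths b, c'

def coefs(X, y):
    """OLS coefficients of y on X (with intercept), intercept dropped."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

(a,) = coefs(people, productivity)    # people -> mediator
(total,) = coefs(people, tsr)         # total effect c
c_direct, b = coefs(np.column_stack([people, productivity]), tsr)

print(f"indirect (a*b) = {a * b:.2f}, direct = {c_direct:.2f}, total = {total:.2f}")
# For OLS these decompose exactly: indirect + direct = total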
But are people really the biggest TSR driver in most organisations? Consider instead this asset-based model of the enterprise:
http://img.photobucket.com/albums/v480/maxblumberg/Humancapital.jpg
This model argues that the sum of tangible assets, intellectual capital (including human capital) and financial capital drives TSR. Can you swear that the effect of human capital accounts for most of the variance in TSR? In a highly automated environment, it may be that machines account for more of TSR than people. Or in a financial services environment, perhaps financial assets generate larger returns than people do. I remember the CEO of Barclays telling me (long ago, in the 80s, I grant you) that their financial asset base was so large that it made money in spite of employee interventions. I asked what he meant by this, and he explained that it was mainly when employees meddled with their huge financial assets that they lost money!
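One way to interrogate that variance question empirically, sketched here on simulated data (all the effect sizes are invented; the point is the method, not the numbers): fit TSR on all three asset classes, then drop human capital and see how much explained variance disappears.

import numpy as np

rng = np.random.default_rng(1)
n = 500

# Simulated asset-class measures driving TSR - every figure is invented
tangible = rng.normal(size=n)
human = rng.normal(size=n)
financial = rng.normal(size=n)
tsr = 0.2 * tangible + 0.3 * human + 0.6 * financial + rng.normal(size=n)

def r_squared(X, y):
    """R^2 of an OLS fit of y on X (with intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

full = r_squared(np.column_stack([tangible, human, financial]), tsr)
no_human = r_squared(np.column_stack([tangible, financial]), tsr)
print(f"R^2 with human capital = {full:.2f}, without = {no_human:.2f}")
# In this invented setup, financial assets, not people, do the heavy lifting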
The most common argument for the power of human capital - the domain of the I-O psychologist - is that ultimately, all decisions about enterprise assets - acquisition, maintenance, and disposal - are made by people. But can we at least acknowledge that sometimes, just sometimes, non-people assets can generate higher returns than the people who set them up?
Tuesday, 2 August 2011
Is it possible to be evidence-based and rigorously scientific in the real world?
"Is it possible to be evidence-based and rigorously scientific in the real world?"
If evidence-based and rigorously scientific means applying the scientific method and reporting any limitations to the client, then yes, it is definitely possible. Even in the academic world one cannot always use that gold standard, the randomised controlled trial; in such cases, one simply ensures that readers understand this by clearly noting it as a limitation of the research. Similarly, when one encounters limitations in real-world research, all we have to do is make our clients aware that there may be some risks in adopting our recommendations (selection, development, re-org, etc.).
And I believe that limitations reporting is an area where real-world I-O practitioners could perhaps do better. We don't always tell our clients that a psychometric was normed on a population different from theirs, that the original results came from a prospective rather than a retrospective investigation, that we can't be 100% sure of a conclusion because correlation is not causation, and so on. And maybe with good reason: clients might stop buying if they knew!
Bottom line: evidence-based practice is an ideal seldom achieved in the academic world, and even more difficult to achieve in the real world (because conditions are even less controllable). But the role of scientist-practitioners is to implement it to the best of their ability while making sponsors aware of any limitations.