Archive | Research First

Are You Paying Attention?

12 Apr

Written by Carl Davidson, Head of Insight at Research First

When you look at a zebra, do you see a white animal with black stripes or a black animal with white stripes? Similarly, when we look at a map of Europe, why do we all see the ‘boot’ of Italy but tend to miss the elephant’s head of the Western Mediterranean?

While you’re thinking about those two questions, count how often the letter f appears in this sentence:

finished files are the result of years of scientific study combined with the experience of years.

All three of these questions are about how we focus our attention. In particular, they show us what we notice and what we miss. This is a question that has fascinated social scientists for a long time.

In one regard, attention can be thought of as a spotlight. There is so much going on in the world around us (and inside our own heads) that we cannot possibly process all of it. Attention is what focuses our awareness on the subset of things that we identify as being important. But precisely because there are always other things to attend to, focusing on any one thing takes real effort. This is why we talk about ‘paying’ attention.

And paying attention has real costs. If you are focused on one thing, it’s much harder to notice others. A good illustration of this is Christopher Chabris and Daniel Simons’ “Invisible Gorilla” experiment, which asked observers to count how often a basketball was passed between players on a team. Except, in the middle of the video, a person dressed in a gorilla suit walks onto the court and stands in front of the camera. The gorilla is in shot for nine seconds, but half of the people who took part in the experiment never saw it.

It may be that those who did notice the gorilla just couldn’t focus as long as the others. Psychologists estimate that your mind wanders at least 30% of the time, and that wandering is probably its natural resting state.

Even more interesting is that most people have an attention span of between seven and ten minutes. It’s not that we can’t focus on tasks longer than this but that we need to renegotiate for that attention after this time.

Now just reflect on how long most work meetings last, most classes run, and most conference papers drag on. If you have ever found yourself losing the will to live in any of those, be reassured that you are not alone. This is why the Pecha Kucha approach to presentations is so refreshing: you are allowed only 20 slides, and 20 seconds per slide, which adds up to a little under seven minutes of talking.

It’s easy to see ‘mindfulness’ as the antithesis of ‘paying’ attention. Mindfulness is about living in the moment and it involves what is known as ‘open’ attention. This means observing your thoughts and feelings without judging them (or letting them judge you). Think of it as sitting on the bank of a slowly flowing river and watching your thoughts float downstream.

There is a lot to recommend mindfulness but it still involves the deliberate application of attention. Because so much of what we do at Research First deals with consumer behaviour these days, we find the ‘sociology of attention’ approach is often more useful. This looks at how what we pay attention to is constructed by social settings and structures that we are often not aware of.

In the sentence about ‘finished files’ above, the letter f occurs six times. People often count fewer Fs because they miss those in the word ‘of’ (which occurs three times). They do that because small function words like ‘of’ are often irrelevant to a sentence’s meaning. Given we tend to read for meaning first, our attention ‘skips’ over them.
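
(If you don’t trust your own count, a few lines of Python will settle it. This is just an illustrative check; the only thing borrowed from the column is the sentence itself.)

```python
# Count the Fs directly rather than by eye -- a trivial check of the
# claim above.
sentence = ("finished files are the result of years of scientific "
            "study combined with the experience of years.")

total_fs = sentence.count("f")
fs_in_of = sentence.split().count("of")

print(f"Total Fs: {total_fs}")           # 6
print(f"Fs hiding in 'of': {fs_in_of}")  # 3 -- the ones most readers miss
```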

We see the same kind of pattern in all kinds of different ‘attentional communities’. These can be cultures or professions. Think about how something as simple as a game of rugby looks if you’re a spectator, a coach, a sponsor, or the team doctor. In other words, our culture and our education shape what we pay attention to and what we dismiss as irrelevant.

But seeing attention in this way is not just an interesting sociological observation. There is a strong argument that the root of disagreement is really a dispute about what we should pay attention to. This goes for disagreements in your relationships, your workplace, and in the world of policy. From this perspective, creating a vision and getting people signed up to it first means creating a shared sense of relevance. Which is why the first question we should all ask when faced with a problem is not ‘what should we do about it?’ but ‘how should we think about it?’.

Research First is PRINZ’s research partner, and specialises in impact measurement, behaviour change, and evidence-based insights.

Image credit: iStock

Social Science Saves Your Life

8 Mar

Written by Carl Davidson, Head of Insight at Research First

Do you remember back in 2013 when Nigella Lawson was assaulted by her (then) husband, Charles Saatchi? One of the reasons the story shocked people around the world is that no-one stepped in to assist. Given the assault happened in the middle of the day, outside a busy restaurant in Mayfair, any number of people could have intervened. So why didn’t they?

If you’re wondering that, then you probably also believe that you would have behaved differently. When we read of events like the assault on Nigella, it’s always tempting to think that we would have been the ones to intervene. Unfortunately, the evidence seems to indicate otherwise.

The tendency not to act is known as ‘bystander apathy’ and cases like Nigella’s are all too common.

Bystander apathy happens because, when people get together in groups, it is common to think that someone else will be the first to act (what psychologists call ‘diffusion of responsibility’). We are also reluctant to act because situations are often ambiguous, and most of us do not want to appear foolish (by acting inappropriately) in front of others.

Both of these effects are magnified with the size of the crowd, which means we are less likely to act when there are more people around. This is because groups of people behave differently from the individuals within them. A major reason for this is what social scientists call ‘deindividuation’. This describes the reduction in a sense of individual identity within groups and crowds. It isn’t always negative (as anyone dancing with abandon at a concert knows) but it is most frequently used to explain why people behave worse in crowds than they would on their own.

Deindividuation leads to ‘bystander apathy’ because people in crowds tend to think that someone else will act (that is, the responsibility to act diffuses through the crowd). And the larger the crowd, the less likely we all are to act. There also seems to be no difference in this kind of apathy by gender, age, or ethnicity. We’re all as likely as each other to stand by and do nothing.

The good news is that, while deindividuation (in all its guises) is common, it is remarkably easy to overcome. To do that, we simply need to re-engage with people in the crowd as individuals. If you’re ever unfortunate enough to find yourself in a situation like Nigella’s, the way to get help is to focus on someone particular in the crowd and ask them, specifically, for help. Try something like ‘you in the green jacket, please help me’. Be specific and direct. This will cut through the diffusion of responsibility and any lingering sense among bystanders about the ambiguity of the situation.

Make sure to share this tip with your family and friends. One day it could make all the difference in the world.


Research First is PRINZ’s research partner, and specialises in impact measurement, behaviour change, and evidence-based insights.

Image credit: iStock

How Did the Polls Get the US Election So Badly Wrong?

14 Dec

Written by Carl Davidson, Head of Insight at Research First


The day after Donald Trump won the US Presidential Election, The Dominion Post ran a headline saying ‘WTF’. It left off the question mark so as not to cause offence (and asked us to believe that they really meant ‘Why Trump Flourished’). But the question lingers regardless.

For those of us in the research business, WTF? was quickly followed by ‘how did the polls get it so wrong?’.

It’s a good question. And coming hot on the heels of the polls’ failure to predict Brexit, an important one.

People have attempted to answer this question in a number of ways, and each of them tells us something a little different about the nature of polling, the research industry, and voters in general.

The first response might be called the ‘divide and conquer’ argument. This is the one that says not all the polls got the election result wrong. The USC/LA Times poll, for instance, tracked a building wave of support for Trump and predicted his victory a week out. Similarly, the team at Columbia University and Microsoft Research predicted Trump’s victory. But this seems to me a disingenuous argument because most polls clearly got the result wrong. And with enough polls running, some of them have to give the contrary view. Another way to think about this is that even a broken watch is right twice a day.

There is a variation on this argument that we might call ‘divide and conquer 2.0’. This is the argument that says people outside the industry misunderstood what the polls actually meant. The best example here might be Nate Silver’s FiveThirtyEight.com. Before the election, 538 gave Trump about a thirty percent chance of winning. To most people, that sounds like statistical shorthand for ‘no chance’. But to statisticians, it means that if we ran the election ten times, Trump would win three of them. In other words, Silver was saying all along that Trump could win; it was just more likely that Hillary would. As Nassim Nicholas Taleb might put it, the problem here is that non-specialists were ‘fooled by randomness’. There is merit in this argument but it seems too much of ‘a bob each way’ position (and note how it shifts the fault from the pollsters to the pundits).
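
One way to get a feel for what a ‘thirty percent chance’ really means is to simulate it. The sketch below is emphatically not Silver’s model, just a coin-flip illustration of his headline number:

```python
import random

# A minimal sketch: treat the forecast as a coin that comes up 'Trump wins'
# 30% of the time, and run many election nights to see how often a
# '30% chance' event actually happens.
random.seed(42)

TRUMP_WIN_PROBABILITY = 0.30  # roughly 538's pre-election figure
trials = 10_000

trump_wins = sum(random.random() < TRUMP_WIN_PROBABILITY for _ in range(trials))
print(f"Trump won {trump_wins / trials:.0%} of {trials:,} simulated elections")
# Prints ~30% -- an outcome you should expect about three times in every
# ten runs, which is a long way from 'no chance'.
```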

The next argument might be called ‘duck and run’. This is the argument that says the fault lies with the voters themselves because they probably misrepresented their intentions. Pollsters typically first ask people if they intend to vote, and only then who they’re going to vote for. But, of course, there’s no guarantee the answer to either is accurate. This seems to be the explanation that David Farrar (who is one of New Zealand’s most thoughtful and conscientious pollsters) reached for when approached by Stuff. Given how many Americans didn’t vote in the election, expect to hear this argument often. But surely all this really means is that the pollsters asked the wrong questions, or asked them of the wrong people?

A variation on this ‘duck and run’ argument is that polls are at their least effective when a tight race is being run. On election night nearly 120 million votes were cast, but the difference between the two candidates was only about 200,000 (less than one fifth of one percent). It could be that no polling method is sufficiently precise to work under these conditions. If you want to try this line of argument in the office, award yourself a bonus point for referring to the ‘bias-variance dilemma’.
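
A back-of-the-envelope calculation shows just how demanding that is. The sample size and confidence level below are illustrative assumptions about a typical national poll, not figures from any particular 2016 survey:

```python
import math

# Compare a typical poll's sampling error with the election-night margin
# quoted above.
sample_size = 1_000  # a common national poll size (assumption)
p = 0.5              # worst-case proportion for the error formula
z = 1.96             # 95% confidence

margin_of_error = z * math.sqrt(p * (1 - p) / sample_size)
print(f"Poll margin of error: +/-{margin_of_error:.1%}")  # ~ +/-3.1%

election_margin = 200_000 / 120_000_000
print(f"Election-night margin: {election_margin:.2%}")    # ~ 0.17%
# The sampling error alone is roughly 18 times larger than the gap the
# polls were being asked to resolve.
```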

But I think all of these arguments are a kind of special pleading. Worse than that, much of what the industry is now saying looks like classic hindsight bias to me. This is also known as the ‘I-Knew-It-All-Along Effect’, which describes the tendency, after something has happened, to see the event as having been inevitable (despite not actually predicting it). While it’s easy to be wise after the fact, the point of polling is to provide foresight, not hindsight.

And no matter how well intentioned any of these arguments might be, it’s hard not to think we’ve seen them all before. Philip Tetlock’s masterful Expert Political Judgment: How Good Is It? reports a 20-year research project tracking predictions made by a collection of experts. These predictions were spectacularly wrong, but even more dazzling was the experts’ ability to explain away their failures. They did this by some combination of arguing that their predictions, while wrong, were such a ‘near miss’ they shouldn’t count as failures; that they made ‘the right mistake’; or that something ‘exceptional’ happened to spoil their lovely models (think ‘black swans’ or ‘unknown unknowns’). In other words, these are the same arguments that we’re now seeing the polling industry roll out to explain what happened with this election.

For me, all of these arguments miss the point and distract us from the real answer. The pollsters (mostly) got the election wrong because the future – despite all our clever models and data analytics – is fundamentally uncertain. Our society loves polls because we crave certainty. It’s the same reason we fall for the Cardinal Bias, the tendency to place more weight on what can be counted than on what can’t be. But certainty will always remain out of reach. What Trump’s victory really teaches us is that all of us should spend less time reading polls and more time reading Pliny the Elder. It was Pliny, after all, who told us ‘the only certainty is that nothing is certain’.

Research First is PRINZ’s research partner, and specialises in impact measurement, behaviour change, and evidence-based insights.

Image credit: iStock

You’ve Got to Know When to Fold ’em

9 Nov

Written by Carl Davidson, Head of Insight at Research First


If you have read The Luminaries then you will know that it’s a substantial book. Indeed, one of the reviewers on National Radio joked that it’s a book that ‘gets better after page 400’. Regardless of what this says about the merits of The Luminaries, it raises an interesting general question about when it is okay to abandon a book you have started reading. After all, if you give up too soon then you might miss an amazing plot twist that transforms your experience. But if you plough on regardless, you’ll lose those precious hours you could have used doing (or reading) something better.

Clearly this is not a recent problem. Mark Twain once famously said that a ‘classic’ book is one that ‘everybody wants to have read but nobody wants to read’. A quick search of the internet demonstrates that little has changed since Twain’s day, with any number of sites listing books that people pretend they have read. Amazon will even sell you a book to help with the pretence (Anne Taute’s Bluff Your Way in Literature).

You probably won’t be surprised to hear that social scientists have something to say about our reading behaviour, but you may be surprised about what that is. In short, the view from the social sciences is that we should all learn to ditch unsatisfying books sooner.

The first part of this argument arises from what is known as ‘the sunk cost trap’. This describes the tendency to stay with an activity simply because of the time (or money) we have already spent on it. It’s also known as ‘throwing good money after bad’. But we all fall for it to a greater or lesser extent because overcoming sunk costs first means accepting that we have made a bad choice. Our reluctance to make this admission explains why people finish movies or meals they aren’t enjoying; hold on to investments that are underperforming; and keep clothes in their closet that they’ve rarely worn.

The second part of the argument focuses on what is known as ‘loss aversion’. This shows that we feel the pain of losing much more acutely than we do the pleasure from winning. The fear of losing may be what motivates the All Blacks to their great heights of performance but it often inhibits the rest of us. This is because when it comes to making a decision, we are always confronted with the possibility that we’ll make the wrong one. Given this possibility, sticking with the status quo can often seem safer. And I don’t mean a little bit safer – the evidence from the psychology lab suggests that losses are felt about twice as powerfully as similar gains.

Finally, social scientists point to what is known as ‘the Zeigarnik Effect’. This describes how we remember incomplete tasks much more readily (and vividly) than we do complete ones. The effect has been shown in a number of studies, but it began when Zeigarnik’s professor noted how a waiter in a local restaurant could recall unpaid orders but not those that had been paid. The subsequent research demonstrated that the things we start and don’t finish weigh much more heavily on our minds than tasks we finish.

Taken together, ‘the sunk cost trap’, ‘loss aversion’, and ‘the Zeigarnik Effect’ mean we are predisposed to staying with tasks long after we should have given up on them; are intrinsically biased towards the status quo; and much more likely to remember our failures than successes.

But while the psychologists have much to say about why it’s so hard to give up on a book you have started to read, they provide little guidance about when we should stop. For this, we need the no-nonsense wisdom of aviation. Mark Vanhoenacker is a 747 pilot and the author of Skyfaring. In his book, he notes that on the final approach to an airport there is a point where the pilot in command has to make a ‘decide call’. To make sure this happens, when the plane reaches the decision-altitude, the flight computer says ‘DECIDE’ out loud and unmistakably. Vanhoenacker talks about how this has become a tool that he uses in his own life when he finds himself procrastinating.

I like the idea of readers creating their own ‘decide’ calls for books. This decision point might occur after you have read the first 60 pages, the first three chapters, or after spending one whole morning reading. But an idea I like even better is to deduct your age from 100 and read that many pages before giving up. After all, the older we get the less time we have to spend on bad books. And we really do need to know when to walk away and when to run.
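
For what it’s worth, the rule is simple enough to write down as a function (purely for illustration; the function name is mine, not part of the rule):

```python
# A sketch of the '100 minus your age' rule described above.
def pages_before_deciding(age: int) -> int:
    """How many pages a book earns before you may abandon it."""
    return max(100 - age, 1)  # past 99, any page is decision time

for age in (25, 50, 75):
    print(f"Age {age}: read {pages_before_deciding(age)} pages, then DECIDE")
```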

Research First is PRINZ’s research partner, and specialises in impact measurement, behaviour change, and evidence-based insights.

Image credit: iStock

What’s Wrong with The Millennials?

12 Oct

Written by Carl Davidson, Head of Insight at Research First Ltd

The Millennials (aka Generation Y) – you must have seen them. They’re that cohort of your colleagues born between the mid-1980s and the year 2000. They’re the ones who are self-obsessed, disengaged, and the reason the world is going to hell in a handcart.

References to ‘Millennials’ are everywhere.

A quick search on Google found over 200 million hits, and Amazon has at least 7,000 books on the subject. Time magazine attempted to summarise all this writing by noting that this generation are “lazy, entitled, narcissists, who still live with their parents” but who, apparently, “will save us all”.

Which would be nice, except none of it is true.

Not only are your Millennial colleagues not like this, but the notion that we can cluster people into cohorts based on their age is simply nonsense.

The idea of generations has a long history but it really started gaining momentum with what is known as ‘Strauss-Howe’ generational theory. This is based on a model that William Strauss and Neil Howe set out in their book Generations, and it is where the notion of Baby Boomers, Generation X, and Millennials really took hold.

It’s a beautifully elegant scheme.

Al Gore called Generations ‘the most stimulating book on American history’ he’d ever read. He even sent a copy to each member of Congress. But, as any decent social scientist will tell you, extraordinary claims require extraordinary evidence. And here things start to fall apart quickly.

When scrutinised, the ‘evidence’ for generational differences reveals itself to be a bundle of non-falsifiable truisms which explain everything and predict nothing. Sure, the stories they tell about Millennials are often upbeat, fun to read, and eminently quotable, but that doesn’t mean they’re right. What they are is all pastry and no pie. After all, the plural of ‘anecdote’ is not data.

However, for once there is no real need to debate the evidence. This is because it is pretty simple to demonstrate that the notion of ‘generations’ is ridiculous on the face of it. The idea that tens of millions of people across the world will share values or ways of communicating (or even an aptitude for technology) just because they were born in the same 20-year period is laughably absurd. If you simply stop and think about what is being claimed about Millennials (or any of the other Generations), then it becomes obvious that those claims are as implausible as they are contrived.

Try it another way: Why do we accept that we can divide our colleagues at work (to take just one example) into three or four distinct groups based on the year they are born in but reject as ridiculous the notion that we can divide them into twelve groups based on the month they are born in? In other words, why is there a serious discussion about Millennial employees but not about Sagittarian interns?

Social scientists are clear that – when groups get big enough – the differences within the groups will be greater than the differences between the groups. This is precisely what the serious research about attitudes and attributes by birthdate shows us. The story is one of continuity, showing that members of subsequent generations are much more alike than they are different.
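
You can see that logic in a toy simulation. The numbers below are invented for illustration, not drawn from any real survey:

```python
import random
import statistics

# Simulate an 'attitude' score for two birth cohorts whose averages differ
# only slightly, and compare the spread inside each cohort with the gap
# between them.
random.seed(1)

gen_x       = [random.gauss(50, 15) for _ in range(10_000)]  # mean 50, sd 15
millennials = [random.gauss(52, 15) for _ in range(10_000)]  # mean 52, sd 15

between_gap = abs(statistics.mean(millennials) - statistics.mean(gen_x))
within_spread = statistics.stdev(gen_x + millennials)

print(f"Gap between cohort averages: {between_gap:.1f} points")
print(f"Typical spread within cohorts: {within_spread:.1f} points")
# The spread inside each 'generation' dwarfs the difference between them,
# so knowing someone's cohort tells you almost nothing about the individual.
```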

But if the case against ‘Millennials’ is so strong, why is the idea so popular? (Recall those 200 million hits and 7,000 books mentioned earlier.) The answer is that there are whole industries that benefit from that belief. As a result, the notion of generations is often uncritically promoted in the media and slickly marketed. Think about all the times you have seen someone offering, for a fee, to help improve how you communicate with, engage with, or sell to, the Millennial generation.

That’s what the notion of generations really is: an idea to persuade you to buy something. It’s a marketing success story but it remains terrible social science. Instead of focusing on when we were born, social scientists talk about differences by referencing our gender, our ethnicity, how affluent we are, where we were born, who we socialised with, and the whole rich tapestry of human experience. Social scientists wish the world was as simple as the notion of generations promises but it stubbornly isn’t.

In one regard, my argument here is that the notion of generations is an elaborate con and that social science provides a powerful riposte to being conned. But the argument also hides a criticism of many of the ideas that we uncritically use to explain the social world. How many of those ideas fall into the same trap as the notion of ‘generations’? More to the point, how often do you stop to think about the ideas you use to make sense of the social world?

Research First is PRINZ’s research partner, and specialises in impact measurement, behaviour change, and evidence-based insights.

Image credit: iStock

The Multitasking Myth

14 Sep

Written by Carl Davidson, Director of Strategy and Insight, Research First


Can women multitask better than men? Before you read any further, stop for a moment and consider that question: what do you really think?

It probably won’t surprise you that this is a question social scientists have given plenty of attention. Nor may it surprise you that their answers point both ways.

The argument against is perhaaps the most interesting. This tells us that women are not better at multitasking than men because no-one really multitasks. The research evidence here is very clear: we can only ever give our full attention to one task at a time.

When we think we’re multitasking, what we are really doing is rapidly shifting our attention from one task to another. This shuffling of tasks often makes us think that we are simultaneously attending to them but that is just an illusion. When we shuffle between tasks our performance on all of them decreases, and the likelihood of making mistakes goes up. This is why driving and talking on your phone (with or without a hands-free kit) is a bad idea.

What this shows is that we have our metaphors about attention all wrong. It’s common to hear attention referred to as a kind of internet ‘bandwidth’ but in reality attention is much more like a phone line. If you want to take an incoming call, you first have to put the current caller on hold. Attention is both finite and sequential.

The reason why some people think women are able to multitask successfully is that there is some evidence that women can switch between tasks faster than men. This comes from a series of experiments that showed mixing up a number of tasks slowed down men’s performance more than women’s. Some people believe this demonstrates that women are better at what is known as ‘thin slicing’ than men. That is, the ability to make very quick decisions drawn from small amounts of information.

So the view from the social scientists seems to be that while women can’t really multitask better than men, they are better at the tricks our brains play to provide the illusion that we can.

Yet if we shift our attention from psychology to sociology, the social science here gets even more interesting. Sociologists are less interested in what the experiments tell us about men, women, and multitasking, and more interested in what those things say about the world we live in. This perspective raises important questions like ‘why have we made a fetish of multitasking?’ and ‘why do we care if women or men are better at it?’

The first of those is about the general appeal of multitasking and the answer seems obvious. In a world where there are increasingly blurred lines between work and home, and where technology provides the ability to combine tasks in new ways, multitasking seems a virtuous way to be more productive. In this view, multitasking is seen as a way to respond to an increasingly time-poor world.

At the same time, we now know that our brains crave novelty. They have evolved to seek it out, and they reward us when we find it. Novelty is correlated with the activation of the dopamine system in the brain. This provides a powerful reward mechanism for doing the things evolution has wired us all to do. So while the multitasking myth explains why you shouldn’t mix driving and talking on your phone, your brain’s craving for novelty explains why you want to.

The marriage of technology and the reconfiguring of work explains why we have made multitasking a virtue. But it also explains why it’s convenient to believe that women can do it better than men. Over the last 50 years or so we have seen a radical change in the working lives of women. Their participation in paid work has increased significantly, and with it a ‘double burden’ of juggling work and home-life. Women who were raised to believe they could do anything found themselves in a world where they were asked to do everything. In this world, is it any surprise that we came to believe that women are natural multitaskers and much better at it than men?

As any social scientist will tell you, social norms are cultural products. What we see as ‘common sense’ reveals a great deal about the world we live in. In this regard, our belief in multitasking tells us much more about who we are than we might like to admit. But it is this ability to get behind those taken-for-granted assumptions that makes the social sciences so valuable to all of us. Because, as George Orwell noted, to see what’s in front of our noses needs a constant struggle.

Research First is PRINZ’s research partner, and specialises in impact measurement, behaviour change, and evidence-based insights.

Why Do People Speed Up in Passing Lanes?

10 Aug

Written by Carl Davidson, Director of Strategy and Insight at Research First


It seems like such an annoying problem: You find yourself stuck behind a car that is crawling along as the road twists and turns its way through the countryside, only to have them speed up once you reach the passing lanes. Why does this happen?

One way to explain this phenomenon is to assume that the driver in the slower car is acting deliberately; that he or she is somehow trying to stop you overtaking them by accelerating ahead. And, in the process, that the other driver is consciously attempting to prevent you from reaching your destination in a timely manner. This view of other drivers sees the road as a place of contest and malice. A Darwinian struggle, red in tooth and claw, just to get to your destination. Explained like this, is it any wonder that people experience road rage?

Fortunately, there are better explanations we can draw on. As any good social scientist will point out, Hanlon’s Law tells us that we should never attribute to malice that which is adequately explained by human frailty. The ‘frailty’ in this case is one of perception, and in particular how our brains perceive speed. Simply put, narrower roads increase the perception of speed, and wider roads decrease that perception.

Which may seem obvious, but how does it explain why people actually speed up when the road widens? To answer that, we need to refer to what is known as ‘risk homeostasis’. This is the idea that all of us have a certain amount of perceived risk that we think is acceptable. When the perceived risk falls below that particular level (or rises above it), we change our behaviour to adjust how much risk we feel. When a narrow road becomes wider (such as with the addition of a passing lane), the sensation of risk decreases and our behaviour changes to reflect that.

Homeostasis works just like the thermostat in your heat pump at home, turning up the heat or cooling down the room to keep the desired temperature. You can see it in action in passing lanes as people speed up when the road widens and slow down as the passing lane ends and the road narrows. It may look like they are playing cat-and-mouse with you, but they’re not (at least not most of the time).
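
If the thermostat analogy helps, here it is as a toy simulation. Every number in it is invented; the point is only that the same risk ‘set point’ settles at different speeds on different roads:

```python
# A toy sketch of 'risk homeostasis' as a thermostat: the driver nudges
# speed up or down until perceived risk settles at a personal set point.
# Perceived risk here is just speed scaled by how narrow the road feels.
RISK_SET_POINT = 1.0  # the level of perceived risk the driver accepts

def perceived_risk(speed_kmh: float, road_narrowness: float) -> float:
    """Narrower roads make the same speed feel riskier."""
    return (speed_kmh / 100) * road_narrowness

def settle_speed(road_narrowness: float, speed: float = 80.0) -> float:
    """Adjust speed until perceived risk matches the set point."""
    for _ in range(100):
        gap = RISK_SET_POINT - perceived_risk(speed, road_narrowness)
        speed += gap * 20  # speed up if it feels too safe, slow if risky
    return speed

print(f"Narrow road:  {settle_speed(road_narrowness=1.25):.0f} km/h")  # 80
print(f"Passing lane: {settle_speed(road_narrowness=1.00):.0f} km/h")  # 100
# The same driver 'thermostat' settles at a higher speed the moment the
# road widens -- no malice required.
```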

Research from Europe demonstrates just how much impact road width can have on driving behaviour. Increasing the width of a road lane from 6m to 8m sees average speeds increase from 80kmh to between 90 and 100kmh. Moreover, adding to the number of lanes on a road (such as with passing lanes) produces faster speeds even where the width of individual lanes remains constant.

What is interesting about the link between road width and the perception of speed is that road designers clearly know this. They often use what are called ‘gateway treatments’ to make roads appear narrower as they enter populated areas. These ‘gateways’ can be physical or they can simply be visual (such as different road markings).

Yet this understanding of how width affects the perception of speed seems strangely out of synch with the posters and signs that often get erected to remind drivers to be considerate, to pull over, and let others pass. That is, the built environment sends drivers one set of signals while the signs and posters attempt to send the opposite signal. In many ways that is like sitting down to the all-you-can-eat buffet at your favourite restaurant while surrounded by posters warning about the dangers of obesity.

Researchers also know that perceptions of speed are strongly influenced by peripheral vision and noise. The evidence is clear that peripheral vision deteriorates with age (with the size of our visual field decreasing by about three degrees per decade). Researchers from the University of Chicago have argued that this leads to older drivers having lower risk thresholds (and hence driving slower) to compensate for this loss of vision.

Similarly, we all use noise to help estimate our speed. This means that better sealed roads (such as in passing lanes) will also lead to lower perceived speeds. Equally, it means that people in older cars may well think they are travelling faster than they are.

So why do people speed up in passing lanes? Because we have created the perfect environment to encourage them to do so. With the best will in the world, we have created a passing infrastructure that makes it difficult to pass.

This may seem like a cosmic joke but it is an example of what social scientists call ‘the law of unintended consequences’. This warns us that interventions in complex systems tend to have unanticipated and often perverse outcomes. Which might point to the real insight contained in Hanlon’s Law: that in the absence of proper understanding, human frailty often appears indistinguishable from malice.

Research First is PRINZ’s research partner, and specialises in impact measurement, behaviour change, and evidence-based insights.
