Being open-minded in a bubble.

I am still not comfortable handling conflict. I’ve practised it many times now, and have most definitely improved over my adult years, but it still comes with a sense of foreboding, embarrassment, awkwardness and the shock-horror-potential of realising I might be wrong… yeurghh!

One thing I have loved though is being able to find a coherent and strong argument for my point of view and putting that forward, perhaps with some tempered smugness if I think I have truly nailed it. It feels good, doesn’t it? Feeling like you’ve got the stats to back your stance. I also like to think I am able to change my mind if someone comes back at me with an even more fabulously logical and robust argument that contradicts or challenges my view. I like seeing myself as someone who likes to learn and grow… even if it comes with some inevitable awkwardness from realising I was wrong. So it was with some minor discomfort that I learned about the Backfire Effect and Motivated Skepticism, and had to pause to think about whether I am truly open-minded in these instances. And whether my arguments are truly changing others’ minds too.

I only recently discovered podcasts (yes, yes, yes, I am VERY late to the podcast party, but hey – I got there, didn’t I? Let’s move on…) and this year I discovered one of my favourites so far – You Are Not So Smart (or YANSS to its fans). Anyone who knows me well wouldn’t be surprised to learn how much I love YANSS, particularly if I told you the theme of these podcasts is to shine a light on our bizarre and fascinating brains, and look at the ways we delude ourselves. Three recent episodes in this podcast series looked in depth at the Backfire Effect – which is essentially the effect you see when you try to change someone’s mind about something by firing facts and arguments at them, only to have them refuse to stand down and instead get even stronger in their stance against you. As explained in the podcast – they ‘double down’ on their beliefs. This is extremely topical at present, with the rise of ‘fake news’, ‘alternative facts’ and extreme partisan beliefs and division in politics. There’s a lot of ranting and fist-waving going on, but directing it at those we disagree with is not always helping our cause.

In short, there are a number of things at play that are causing this effect. And liberal left-wing types are just as prone to this happening as the conservative right. It also isn’t necessarily to do with how much education you got – in fact, one of the findings is that having more education can make this effect worse.

When we hold strong political beliefs, we tend to integrate them into our sense of self. We see them as part of ourselves. When others argue against our beliefs and try to convince us we are wrong or misguided, we respond as if we are under threat. Our cognitive reactions to these verbal threats are similar to those we would have if we were being chased by a ravenous and terrifying large animal: Under threat. Must protect self. Defences UP! This is the first thing that happens, and it makes us less open to all sorts of stuff, but crucially – less open to really hearing and considering the argument that made us feel threatened in the first place. It isn’t a life-saving strategy to reason with a bear when you think it’s trying to tear you apart…

The second thing that can happen is something called Motivated Reasoning and its (I like to think shady and sinister) counterpart, Motivated Skepticism. This is closely linked to Confirmation Bias – which is when we look for evidence that supports our already established views (and mentally slap ourselves on the back for finding the evidence). Motivated Skepticism is when we view opposing facts and evidence with higher… well… skepticism. It isn’t in our interest to change our mind, so we tend to put arguments against our own through far more rigorous paces. We put barriers up – mentally insisting on more information, questioning the source, overly scrutinising the messenger. We don’t often realise we’re doing this, but it makes us less likely to seek out new information and even to challenge ourselves and test our own beliefs. Some open mind, hey?

Add in the layer of social media, where whole websites are using algorithms to put content we LOVE and AGREE WITH right up in our faces, and this whole process just gets more intense. We build up and reinforce our own echo chambers – both consciously and unconsciously. We hide posts from news sources we don’t trust. We stop following those in our social media circles who post outrageous views we disagree with. And so we come across counterarguments less often, and those we do encounter we view skeptically and dismiss with whatever excuse is handy.

Needless to say, I found this all a bit sobering. As someone learning to argue and debate really assertively, and who’s starting to enjoy the thrill of an argument more and more, it has made me question how truly open-minded I am. It has made me question whether I am arguing with a view to really learn more, or instead just to get better at digging my heels in – but more eloquently.

How do we reduce the Backfire Effect? Well, there are some instances where arguing boldly is going to do more harm than good. Some arguments should just be left aside… but if you are keen to try to help someone see your point of view and the evidence you are presenting – then connecting with those you’re debating, and showing the similarities between you, can assist, as does providing a safe environment within your conversation. A lot of this is in the nuances of how you talk and ensuring you come across as someone willing to change your mind too. Essentially, you make yourself look more vulnerable and flexible first, so the other person doesn’t feel so easily threatened. It’s a start. And to achieve change with someone who’s really dug in, you’ll need to do this often and in small pieces at a time… optimistically, there is some lab evidence to show that people will eventually let go of their beliefs when presented with enough counter-evidence (the quality and quantity of evidence are important, and it must keep coming consistently), but before that happens it is likely they’ll get more dug in…

But then, aren’t you dug in too? Why else would you be trying to change their opinion? (insert awkward smug smile here).

Behaving right for all the wrong reasons.

There seems to be an ongoing dilemma faced by those of us working in behaviour change across environmental and social justice fields. It is a dilemma that I grapple with regularly. My allegiance switches often, perhaps because of a paper I’ve read, an example I’ve been given, or a counter-argument I’ve heard to my current (still early) understanding. I like to keep an open mind, but granted – it can probably appear a little noncommittal at times.

So what is this dilemma?

For those of us trying to change or initiate behaviours that are pro-social or pro-environmental, we quickly learn there are many ways this can be done. Some of these methods involve techniques that may be viewed as almost ‘tricking’ an individual into undertaking a behaviour they might not normally perform, and for reasons that may have little to do with why I ultimately want them to be undertaking that behaviour.

Examples would help at this point. I could incentivise my target activity or behaviour with a plain old simple reward, a reward that is entirely unrelated to the behaviour itself (heck, I do this far more often than I ever intended whilst parenting a toddler). I could position the behaviour or activity as something that will get the individual lots of attention and social status. I could just tell them everyone else is doing it and they should probably get on board, or be left behind. And nobody wants to be left behind.

All these methods are valid. They have worked for many researchers and practitioners. So where is the dilemma? You got the behaviour you wanted. What’s the big deal, I hear you ask.

As someone who got into this behaviour change world because of a desire to address pressing environmental and social issues (like many around me), I want others to take on pro-environmental and pro-social behaviours, but my ideal scenario is for them to take on those behaviours because they want to assist with addressing these global issues as well. The logic behind this is that acting from those motivations, and changing their attitudes and beliefs (the behavioural Holy Grail), should in turn help reduce the likelihood of them undertaking other destructive behaviours in the future. Ultimately I want them to be changing their behaviour for the right* reasons, in order to lessen the likelihood of them contributing to other environmental threats, or social injustice, in the years to come. Everyone can dream, right?

The reality is that this is a long-term game I’m playing in order to achieve this dream. It is possible, and certainly there are examples where people have taken on behaviours, and have been directly motivated to act, because of the issue itself. The dilemma kicks in because for certain issues, time is really limited and action is needed now – so naturally you turn to those other tricks of the trade in order to get a result, and you exploit whatever motivation works.

Is it really a problem to exploit these other motivations? Well, it depends a little on who you ask – and looking at the research being produced by the likes of Tim Kasser, Tom Crompton and other academics involved in the global Common Cause movement – there is evidence to suggest that these tactics can backfire on you in a big way. There’s enough evidence there to certainly give me pause. If you are going to use these tactics, then it’s probable that you’ll be reinforcing attitudes and values that could ultimately result in further self-interested behaviours, and people continuing to put themselves ahead of the environment. Hmmm…

There is a glimmer of hope I hold on tightly to in the instances where these tactics are used. Some people will take on pro-environmental behaviours for the ‘wrong’* reasons, but a strange thing can happen if you reframe their behaviour in terms of the ‘right’ reasons. That is, someone might do something for the social status or the incentive, but when you thank them for their action and let them know how that behaviour is helping your (dare I say it – pro-environmental) cause – many of them start to reframe the reasons WHY they did the behaviour at all… and when asked about it later, they often give that reason instead. Granted, you’re probably not always going to tell the world you undertook a behaviour because it gave you a boost in social status – but it is not unreasonable to expect someone to brag about their power bill cost-savings after installing solar panels. Reframing some of these behaviours in terms of a more pro-environmental rationale can also lead people to start adjusting their perception of themselves, and to see themselves as more pro-environmental. And this outcome is potentially very helpful for the causes we’re trying to further.

Problem solved?

Hmmm… possibly, but if it were up to you, what would you do?

 

*’Right’ and ‘wrong’ as dictated by me, really. I am totally owning up to my bias here, and my complete lack of objectivity in assessing what is ‘right’. I’d like to leave the planet knowing we didn’t totally stuff it.

An agenda with gender.

In 2010, I was living 100km north of Tokyo with my partner, and my mother-in-law sent us a care package from home. Included within this box of parochial delights and Australian paraphernalia was a book by Cordelia Fine, called ‘Delusions of Gender’. It knocked me for six.

As a zoologist who’d witnessed so many differences between males and females across hundreds of species, and who had assumed a huge amount of them were genetically determined, or at least significantly genetically influenced, I found this book challenged me. In a good way. It became one of the big defining steps in my wanting to further the feminist cause, and tackle the misconceptions that were continually being perpetuated by all sorts of people (myself included only days earlier).

It wasn’t that Fine argued that there were no differences at all between males and females. It was instead that she questioned the long-accepted and assumed reasons for why those differences have occurred, and interrogated the scientific literature behind them. It was brilliant, funny and yet confronting to see how much weak or even blatantly biased science had continued to occur in this area of psychology. She again touched on some of these major points in a more recent article in The Conversation, where she explained how scientific research itself can be ‘neurosexist’ – that is, subtly reinforcing and strengthening gender stereotypes in the experimental design and measurement – which then leads to researchers essentially getting the results they always expected to see, and often omitting key variables or influencers in their sample population. Not all of it is deliberate or malicious, but irrespective of intent, the fallout is significant.

For a woman who works in science, and who regularly encounters salient norms around the ‘female ability (or perhaps rather inability) to be a good scientist’, strengthening these types of unhelpful stereotypes is both unethical and harmful, especially to the many who later fall victim to the prejudice and bias that can arise.

To illustrate this more explicitly – there is such a thing as ‘priming‘ gender in people. Whether done overtly or subtly, priming makes people aware of their gender, and then aware of their gender’s stereotypical abilities or inabilities. It’s been demonstrated that people adjust their perception of themselves to better fit that stereotype, and subsequently adjust their performance in tests to match it.

In the case of assessing emotions, the perpetuated myth is that women are better. In fact, there are no differences between genders in how good we are at this. It gets more interesting though when you prime the subjects you’re testing (i.e. remind them of their biological sex and its associated stereotype) – and suddenly that’s when the differences start to show.

You can even go further than just making them aware of their gender:

  • Tell participants that ‘women generally perform better on these tasks’ – and they do.
  • Tell participants that there’s no difference between the sexes, and then there isn’t.
  • Pay participants for every correct answer, and men are curiously suddenly VERY good at being emotionally sensitive – and equal to women in their abilities.
  • Tell the male participants that emotional sensitivity makes them very attractive to women, and they do even better than in the other conditions.

Similarly – for tasks that have been stereotypically associated with male skill (e.g. visual-spatial rotation tasks), the same effect happens in the other direction. Tell men that this skill is linked more typically to ‘female/feminine’ jobs and their abilities decline. Ultimately I came away with a very healthy skepticism of terms like ‘hardwired differences’, especially when used in relation to differences between men and women (and even more so when it is used in mass media reporting on such studies).

As a species we like to fit in. We behave in the way expected of us – to be normal or typical in our behaviour. Even if we think we are really independent in our thought, the results actually show otherwise – we’re hilariously fickle and ultimately like being part of the pack, however the pack is being defined in that period of history, or even in that minute, as those studies showed. It amuses me that one of the most fixed things about us is our ability to be fickle and change. Whilst that might be frustrating at times – especially when you’re a female in a profession dominated by men, or suffering as a result of these stereotypes in any situation – it is also a liberating thought. We have a huge capacity to change, to adjust our thoughts and performance, and challenge those norms.