Photo by Arthur Mazi on Unsplash
A key casualty of Elon Musk’s takeover of Twitter looked like it might be trust. As Musk cleared out the boardroom and then entire departments, Twitter users started to worry about security, safety and, as a hasty change to the blue tick verification scheme showed, the integrity of the site. Imposter accounts saturated the platform within hours when users were allowed to pay their way to authenticity. And yet we stayed, and Musk has promised a ‘painful but necessary’ manual authentication process in the next week.
There’s no doubt that blue ticks, and signifiers such as emblems, crests and official logos, help us weigh up the integrity and credibility of a person, product or organisation, but they are not the only signs we use when assessing tweets. We draw on a number of cues to determine whether a tweet is congruent with a poster’s personality or beliefs, and over the pandemic psychologists have paid a lot of attention to working out what types of messages spread, why we believe what we read, and how we might harness this knowledge to spread essential health or crisis news for health and wellbeing benefits.
We’ve become surprisingly adept at ‘reading between posts’ to navigate our way around social media using surface cues such as language and style. We might trust someone who uses the same phrases as us because, as a survival mechanism, we are geared up to trust people who are similar to ourselves. My own research into online communities found that women used comparison as a sort of GPS signal to find groups in saturated spaces like Instagram and Facebook; this comparison process was thought out and intentional, a way of filtering through the digital crowds. There are unconscious ways we process the semiotics we see online too: recent research by Linda Kaye found a congruence effect whereby positive emoji placed higher in vertical space were rated more positively than those placed centrally or lower.
Trust judgements sit alongside every new social connection we make, yet our quick, automatic thinking often cannot tell the difference between who is trustworthy and who is not, a weakness that people exploit through phishing attempts online. We are poor judges of trustworthiness because automatic biases get in the way. Being not-so-good at it means we are continually learning to make better trust judgements, and with some online spaces nudging this behaviour too, should we be delegating our trust process at all?
I read with interest an opinion piece in the New York Times by Yoel Roth, the ex-head of trust and safety at Twitter. In it he reminds us that there is more to Twitter’s integrity than Musk: two corporate gatekeepers have the power to remove the blue bird from their digital storefronts, and he suggests it is the stores, not the apps, that set the tone, the rules and the dynamics of what we consume on our phones. Whatever your opinion of Musk, Apple and Google, trusting big corporations whose products we use every day to have our best interests at heart is deeply problematic.
It’s hard to say whether online spaces are changing the way we trust. Many social psychology experiments have focused on measures of how two individuals trust each other, whilst others have focused on the benefits of our inclination to trust. So it was great to attend this year’s BPS Cyberpsychology conference to understand how trust in online spaces is being studied, and what we are learning about trust in such a dynamic space. Here are a few of the presentations I enjoyed listening to.
Matt Dixon’s research into our relationships with our mobile phones is helping us to understand that when we trust digital stores and phone makers more than we should, we see digital security as their problem rather than ours; so much so that we are more likely to put stronger security measures on a personal computer or laptop than on a phone.
The emerging literature around misinformation (unintentional) and disinformation (intentional) in our online spaces is fascinating. We are cognitive misers: we want to use as little cognitive resource as possible, and that leaves us vulnerable to information that is factually incorrect. Laura Joyner presented work in progress illustrating that we spread incorrect information when the message is consistent with our existing beliefs.
Delegating our trust judgements to the makers of online spaces without question can lead to the suppression or deletion of voices too. Carolina Are’s (also known as bloggeronpole) presentation on the weaponised flagging of content in online spaces was thought-provoking. She stresses that excluding some forms of content is a form of banishment, and that we are giving private companies the power to moderate our public spaces in automated ways. The presentation left me wondering about the ethics of algorithms and the ability of private companies to hide content from its intended audience even while it sits in plain sight.
But despite this complicated online world, we still trust. It’s in our nature to trust, and to make mistakes in judgement whilst doing so, and it turns out there is at least one benefit of being so trusting: it is one part of the anti-ageing formula. As we get older, better social connectedness helps us to live longer; the deep sense of belonging we find from ‘people like us’ balances out our emotional wellbeing, and at the heart of this connectedness is trust.
Perhaps the future isn’t about ticks, verification and everything automatic. Maybe longevity depends on the ‘painful but necessary’ manual process of humans taking a conscious interest in what they need to trust, and sometimes getting it wrong.