Trust and the web: A sociologist’s guide


Trust. The word gets bandied about a lot when talking about the web today. We want people to trust our systems. Companies are supposedly building “trusted computing” systems and “designing for trust”.

But, as sociologist Coye Cheshire, professor at the School of Information at UC Berkeley, will tell you, trust is something that happens between people, not things. When we talk about trust in systems, we’re often actually talking about the related concepts of reliability or credibility.

Designing for trustworthiness

Take trustworthiness, for example. Trustworthiness is a characteristic that we infer from other characteristics. It’s an assessment of a person’s future behaviour, and it is theoretically linked to concepts like perceived competence and motivation. When we think about whom to ask to watch our bags at the airport, for example, we look around and base our decision to trust someone on perceived competence (do they look like they could apprehend someone who tried to steal something?) and/or motivation (do they look like they need my bag or the things inside it?).

Although we can’t really design for trust, we can design symbols to signal competence or motivation by using things like trust badges or seals that signal what Cheshire calls “trust-warranting” characteristics. We can also expose, through design, the “symptoms” of trust – by-products of actions that are associated with trust, such as high customer satisfaction. But again, by designing trust seals or exposing customer reviews, we’re not actually designing trust into a system. We’re just helping people make decisions about who might behave in their interest in the future.

Reputation: implicit and explicit

Knowing whom to trust can be helped along by reputation cues – something that has become an increasingly popular way to gauge competence on the web today. There are two ways to build for reputation: implicit mechanisms, where we expose variables relating to a person’s contributions (for example, “number of edits”), and explicit mechanisms, where we ask others to rate people based on their experience of working with them.

Implicit reputation design is challenging, says Cheshire.

It means that we’re guessing “likely associates” of particular behaviors or outcomes. For example, in online Q/A forums, we know that showing one’s tenure on the site (“member for 5 years”) and/or number of contributions (“4353 posts”) can imply lots of things. But out of context this could be (either) a 5-year spammer or a 5-year expert who is fairly active.

Explicit reputation systems are often seen as a solution to this challenge, since they mean that real people are filling in the missing context by giving an up/down rating on a person or a piece of content. But this in turn creates a collective action challenge, since you need people to take the time and effort to do the ratings – which is why we often want to find a way to use the implicit information in the first place.
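
For system designers, the distinction can be made concrete with a rough sketch like the one below. The class names, fields and scoring rule are illustrative assumptions rather than anything drawn from Cheshire’s work: an implicit profile simply exposes by-products of activity, while an explicit profile aggregates ratings that other people have deliberately given.

```python
from dataclasses import dataclass


@dataclass
class ImplicitProfile:
    """Implicit reputation: by-products of activity, exposed as-is."""
    years_of_membership: int  # e.g. "member for 5 years"
    post_count: int           # e.g. "4353 posts"

    def summary(self) -> str:
        # Out of context these numbers could describe a long-time expert
        # or a long-time spammer; the system only reports them.
        return f"member for {self.years_of_membership} years, {self.post_count} posts"


@dataclass
class ExplicitProfile:
    """Explicit reputation: other people fill in the missing context."""
    upvotes: int = 0
    downvotes: int = 0

    def rate(self, positive: bool) -> None:
        # Each rating costs a real person time and effort:
        # the collective action challenge described above.
        if positive:
            self.upvotes += 1
        else:
            self.downvotes += 1

    def score(self) -> float:
        total = self.upvotes + self.downvotes
        return self.upvotes / total if total else 0.0
```

The contrast is the point: the implicit numbers only mean something in context, while the explicit score is only as good as the number of people willing to spend the effort to rate.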

Cheshire believes that this problem of finding consistent, reliable correlates of trustworthiness from implicit information really depends on the context of a particular online environment. And this is at the heart of Cheshire’s work: discovering how people assess another person’s future behaviour in different online environments.

Do they rate competence higher than motivation, for example? In an experiment, Cheshire and his colleagues asked participants to choose which goods and services they would buy when faced with a series of differently worded advertisements. To improve the accuracy of the results, participants were allowed to invest $5 of the $10 they were paid for taking part in the seller they judged most trustworthy.

They found that competence matters more when buying a used good (such as a camera), and that motivation matters more when buying services (such as website design), where a longer-term relationship is required.

Designing for interpersonal trust

When it comes to designing for interpersonal trust, three features are essential, says Cheshire:

  • Repeated interactions between parties over time
  • Acts of risk-taking
  • The presence of uncertainty

In a study designed to tease apart different levels of trust between individuals under varying levels of uncertainty, Cheshire and his colleagues found that as uncertainty goes up, so does the potential for trust to develop. The paradox of building assurance structures – such as those that guarantee risk-free interactions on eBay – is that they decrease uncertainty and thus the potential for interpersonal trust. In other words, designing for “trust” can actually decrease the potential for trust.

Betrayal (when someone says they will do something and then doesn’t follow through) is something often attributed to systems. But again, these are actually issues of credibility, reliability and security, because systems do not betray us – the people who build, maintain and support them might, says Cheshire. When designing crowdsourced platforms like Ushahidi or Wikipedia, this becomes a really important distinction to make. We need to design the system to be secure and to enable participants to make good decisions about whom to trust, but we can’t magically ensure that people will trust one another through that system.

Cross-cultural differences in trust and trusting

Cheshire also found that cross-cultural differences matter when it comes to trust. In a trust game run in both the US and Japan, players could choose how much to entrust to their partner, as well as whether to return anything entrusted to them. Individuals were paired with either the same fixed partner or a new, random partner on every trial. Cheshire and his colleagues found that Americans took more risks and trusted their partners more than the Japanese did – even in the random-partner exchanges. They also found that the opportunity to choose the level of risk involved in trusting another person improved the level of mutual cooperation for both American and Japanese participants.
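
To make the structure of such a trust game concrete, here is a minimal sketch of a single trial in Python. The endowment size, the specific fractions and the absence of any multiplier on the entrusted amount are illustrative assumptions, not details taken from Cheshire’s study.

```python
import random

def trust_game_round(endowment: float, entrust_fraction: float,
                     return_fraction: float) -> tuple[float, float]:
    """One trial: the truster entrusts part of an endowment to a partner,
    who then decides how much (if anything) to return."""
    entrusted = endowment * entrust_fraction  # the truster's act of risk-taking
    returned = entrusted * return_fraction    # the partner is free to return nothing
    truster_payoff = endowment - entrusted + returned
    partner_payoff = entrusted - returned
    return truster_payoff, partner_payoff

# In the fixed-partner condition the same pair plays every trial;
# in the random-partner condition a new partner is drawn each time.
players = ["A", "B", "C", "D"]
partner_of_A = random.choice(players[1:])  # random-partner condition (illustrative)
print(trust_game_round(endowment=10.0, entrust_fraction=0.6, return_fraction=0.5))
```

The truster’s payoff depends entirely on what the partner chooses to return, which is what makes entrusting anything at all an act of risk-taking under uncertainty.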

In new research just completed in Romania and the US, Cheshire found that regional and societal differences do exist and can be rather large, but that the experience of building trust can essentially erase the effect of region or disposition to trust. Developing systems that enable people to build trust with one another is essential to his work. In the end, Cheshire is driven by the need to understand how trust can be repaired. As he puts it:

“My interest in trust began over ten years ago when it became very clear to me that assessing trustworthiness and building trust with other human beings are fundamental aspects of human social interaction, community-building, and collective action in all offline and online settings. Going forward, my work is now focused on detailing what happens to interpersonal trust when individuals move from more secure, reliable, and certain interactions to environments that lack such assurances. Ultimately, I want to gather empirical evidence from many different sources to detail how individuals build trust through experience in uncertain environments and, perhaps most of all, repair trust when and if it fails.”

This article is licensed under a Creative Commons license and first appeared in Ethnography Matters.
