Hidden decisions: why we need content to rebuild trust in technology


This guest blog post is written by freelance content designer Joanne Schofield.

More and more, we can get information wherever and whenever we need it.

Interfaces surround us. They’re our assistants, collaborators and companions. They give us recommendations, information and jokes; breaking news, restaurant tips and history lessons; updates from friends, Netflix suggestions and Spotify playlists.

And, with the rise of artificial intelligence (AI) and deep learning technologies, the possibilities seem increasingly limitless. From coordinating military manoeuvres to making high-value business decisions and accompanying us on space missions, the transformation of entire industries is well underway.

But what are these decisions, recommendations and solutions based on?

Who is controlling what we see and hear?

Can we trust the information if we do not know where it’s coming from?

Hidden decisions are becoming more common as technology advances. We can get information quickly and easily, but at what cost? Public trust and confidence in technology could quickly deteriorate if we’re unable to understand how and why these decisions are made.

Tech for good

The benefits of voice interfaces and AI are far reaching.

Voice interfaces can give people who struggle to use traditional devices alternative ways of accessing information and digital services. They have the potential to improve the quality of life for people who:

  • have limited or no vision,
  • find reading difficult,
  • find it hard to use a mouse, keyboard, screen or touch pad.

And AI is also revolutionising industries. In 2015, a research group at Mount Sinai, New York, used data from approximately 700,000 individuals to predict disease. The programme, which they named ‘Deep Patient’, found patterns in hospital data that indicate when people are likely to develop certain conditions, including liver cancer and the onset of psychiatric disorders like schizophrenia (which physicians typically find hard to predict).

The benefits of this are evident, but how these predictions are being made is not. The workings of these machine-learning technologies are, by their nature, often opaque:

“The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. ‘We can build these models,’ Joel Dudley [who led the Sinai team] says ruefully, ‘but we don’t know how they work.’” Will Knight, MIT Technology Review, ‘The dark secrets at the heart of AI’

And these hidden decisions are becoming more commonplace. Liesl Yearsley, founder of AI start-up Akin (which has recently entered into a partnership with NASA’s Jet Propulsion Laboratory (JPL)), predicts:

“I think AI will make more than half of our life decisions for us in the next 10 years and make up more of our relationships.” Liesl Yearsley, Financial Review, ‘Local start-up Akin helps NASA put human-like AI into space’

But how much do we know about the reasoning behind these decisions?

Can we trust these decisions if we cannot scrutinise them?

Transparency to rebuild confidence

For us to have confidence in these decisions, recommendations and solutions, we need to know more about how they are made.

The EU’s General Data Protection Regulation (GDPR) states that people must be able ‘to obtain an explanation of the decision reached’ and be able to challenge the decision. It acknowledges that ‘automated processing’ needs to be open, explainable and ethical.

But most machine-learning systems are unable to explain, in a clear way, why one decision was chosen over another.

Companies are realising this needs to change.

Tom Gruber, who leads the Siri team at Apple, says explainability is one of the main considerations for his team as it tries to make Siri a smarter virtual assistant. And Ruslan Salakhutdinov, director of AI research at Apple, says explainability will be at the core of the evolving relationship between humans and intelligent machines. “It’s going to introduce trust,” he says. (Source: Will Knight, MIT Technology Review, ‘The dark secrets at the heart of AI’)

The US Defense Advanced Research Projects Agency (DARPA) has announced a plan to invest in explainable AI, to make machine-learning systems more correctable, predictable and trustworthy, after acknowledging that:

“…the effectiveness of these [AI] systems is limited by the machine’s current inability to explain their decisions and actions to human users.” Dr Matt Turek, Defense Advanced Research Projects Agency, ‘Explainable Artificial Intelligence (XAI)’

And, in June 2019, the Information Commissioner’s Office published its ‘Project ExplAIn’ interim report. It intends to develop this into ‘practical guidance to assist organisations with explaining artificial intelligence (AI) decisions to the individuals affected.’

The report explores the:

  • need for education and awareness around AI,
  • importance of context in explaining AI decisions,
  • challenges of providing explanations.

Using content to rebuild trust

As designers, we need to know why something has succeeded, failed or errored so that we can improve or change the service in future. It will also help us to predict when failures might occur (which they inevitably will).

As users of these technologies we need to know that the information we’re being given is taken from a source we trust. And that decisions are made in a way which aligns to our values and beliefs.

Not only must we understand the process, we must know that it’s justified, defensible and fair.

The need for practical solutions that increase trust will challenge what we mean by transparency.

Debates are ongoing about how these hidden decisions could, or should, be opened up. How can we make these explanations genuinely useful and not give just the illusion of control (avoiding, for example, another ‘cookie consent’-style explanation which is often quickly dismissed and rarely understood)?

As with most things, context will be crucial. What may be useful in one situation, may not be in another:

“There is growing interest in explanation of technical decision making systems in the field of human-computer interaction design. Practitioners in this field criticise efforts to open the black box in terms of mathematically interpretable models as removed from cognitive science and the actual needs of people. Alternative approaches would be to allow users to explore the system’s behaviour freely through interactive explanations.”

Javier Ruiz, Open Rights Group, ‘Machine learning and the right to an explanation in GDPR’

What needs to be explained, and to what extent, will depend on what the user needs to understand to be able to act on and trust what they’re being told.

We must first understand the context in which users are acting. And then we can start to build accountable and open services in which we:

  • explain complex decisions in clear, simple terms,
  • are transparent about sources and data sets,
  • give users a choice about how they interact with them,
  • acknowledge and fix mistakes,
  • allow users to tell us when something’s wrong,
  • are clear about how we use any data we gather.

By doing this we can start to redress the imbalance of power in favour of the user and rebuild trust in technology.

