In a letter to Dorothy Connable, Ernest Hemingway wrote: “The way to make people trustworthy is to trust them”.
Hemingway wrote that in the mid-20th century, and I’m not sure he would extend it to computers imitating human intelligence, given the perception of risk that attaches to them.
As we look at the increasingly sophisticated capabilities of emergent technologies like RiTA, we are no longer looking merely at the automation of low-stakes tasks that simply create efficiencies.
When machines imitate human decision-making, such as through a conversational interface, we need to ensure they are trustworthy if they are to help create futures that we want to live in, and workplaces that we want to work in.
Australia’s Chief Scientist, Professor Alan Finkel, said in 2018 that “the capacity to trust in unknown humans, not because of a belief in our innate goodness but because of the systems that we humans have made, is the true genius of our species”.
Can our ability to trust each other and our human systems really extend to an expert imitation, which is what we understand artificial intelligence to be?
Trust is one of those words that Marvin Minsky might describe as a ‘suitcase word’: a word that “means nothing by itself, but holds a bunch of things inside that you have to unpack”.
Minsky goes on to say that by unpacking suitcase words, you can reduce an extremely difficult problem to multiple almost-extremely-difficult problems.
Trust is usually unpacked to mean things like information, transparency, reliability, fairness, vulnerability, openness and predictability.
On the basis of that meaning, there is a predicament for the great, non-coding mass of the population who interact with and consume artificial intelligence applications: the information, openness and transparency about how AI works are often hidden.
They are hidden partly for obvious commercial reasons, but also because technical knowledge is required to understand the computing processes that create these applications.
The lack of transparency then boils down to the question Paul Humphreys poses: how can we, as humans, “understand and evaluate computationally based scientific methods that transcend our own abilities”?
Is it possible to have trust without knowledge? History says no.
Those within a society’s core of power are able to use their access to scientific knowledge to keep ‘others’ in a position peripheral to that power.
Polarisation is the observed (and logical) outcome of this structure, and even in a political environment as new as the European Union, the unequal nature of power and trade in the peripheries “follows a pattern of ‘unequal technological exchange'”.
This unequal technological exchange as a means of peripheralisation is a phenomenon that can be observed in many cycles of human history.
Ancient Egyptian priests used their technical knowledge of ritual to claim a place in the core of the upper classes of the monarchical system.
Similarly, the explosion of fear that led to the trials and punishment of ‘witches’ across the world between the 15th and 18th centuries gave ‘technical knowledge’ of how to find and judge witches to the “magistrates, judges and clergy”, men.
It was disproportionately exercised for the persecution of women who stepped outside of gender roles, and much of the Malleus Maleficarum (the witch-hunting textbook written by Kramer in 1486) focussed on the “feminine identity of subjects…”. More than 80 per cent of those accused of being a witch in the early modern period were women.
“These were often willful females… the fear of female power targeted older women who were marginal in society because they had often lost the oversight of male kin and their reproductive role after menopause”.
Groups on the “other” side of technical knowledge have always faced dire consequences as a result of their othering. It is no wonder, then, that the opacity around how this type of computing operates is so often met with apprehension from those on the downside of the technological exchange.
Splice in the current Australian social and political context and the barometer of (mis)trust rises further.
Bernard Salt is an Australian demographer and journalist who writes that the decade beginning in 2010 was “the era in which we lost faith in the very institutions that underpin society. The exposure of appalling behaviour by some members of the clergy and big business undermines the foundations of public trust. This loss of trust breeds cynicism and creates social division; it rationalises self-interest; it is the antithesis of a united, loving and generous society” (Salt, 2019).
It is not surprising, then, that research from Edelman (2020) showed that Australia has the highest level of ‘trust inequality’ in the world: the informed public, the segment capable of trusting things like systems, institutions and technologies, is the smallest proportion of the population of any country.
Those with access to power and knowledge, on the leading edge of the curve, trust the institutions and systems that they influence; those in the middle mistrust the institutions they can conceive of but cannot access or influence.
Our mistrust is hurting us and our businesses.
If we cannot create strong, high-trust environments, AI technologies will cost more and be slower to market, and Australians will be unable to keep pace with the acceleration of the property industry, falling behind both their global counterparts and the domestic competitors who are prepared to adopt the technology and use it wisely.
If we cannot trust something because we do not have knowledge of it, the next question is: can we achieve understanding without knowledge? Here the answer is more likely to be yes.
If we are brave enough to confront the potential for exploitation in the relationship between specialised knowledge and power, we must also be brave enough to cross the bridges and create understanding within complexity, removing the opacity and democratising the choices around the adoption of emergent technologies.
If ‘understanding’ is the key that unlocks trust and enables credibility, participation and engagement, then we must look at the notion of trust as interpretability.
Where language, status, education or geography has been a barrier to understanding, there have been interpreters, tasked with holding each side in “communicative balance”.
The role of the translator is a sacred one. It is not the task of a carrier pigeon, merely relaying a message; translators interpret, delivering understanding through devices such as story, allegory and metaphor.
In crossing the bridges between property and technology to co-create knowledge from the space between and beyond those two knowledge cultures, we at AiRE have earnestly attempted to create and deliver technology with understanding at its core.
For a list of references and further reading, or a chat on AI and trust, please contact Sarah Bell at [email protected].