QUT’s research into our decision-making processes is helping to build greater trust in machines. Novella Moncrieff spends five minutes explaining our thinking.
Anticipation is an art form. After high school I tried dental nursing and, even though it wasn’t for me, it was a great lesson in learning to anticipate others’ needs. I needed to know what my dentist needed and hold it at his fingertips, ready to use before a word of request. I wasn’t a middle-aged male dentist, but I could think like one.
That may not sound brag-worthy, but humans are deep and learning to think like someone else can take practice. Take online shopping, for example. It’s spooky how close those algorithms get to reading our minds, but machines still can’t quite match our logic. Why? Because we continue to defy it.
Companies like Amazon spend oodles of time and money achieving incremental improvements to product suggestions because a little more accuracy translates to a lot more profit. We’ve learned to trust Amazon to be on the money with its reading and product suggestions, but what if our decision were more complex? What if lives depended on it?
Professor Peter Bruza from QUT wants to help increase our trust in machine decision-making by getting computers to “think” like us: irrationally. Machine learning processes currently use statistical models based on probability theory to predict human decisions. Yet we often defy these laws, which makes us, by definition, illogical or irrational. Even with all the best evidence supporting our probable decision, we sometimes choose a less probable alternative, another path, especially under conditions of uncertainty.
Our thinking aligns more with quantum theory, in which all possibilities exist simultaneously until one of them becomes reality. Mind-bending. Professor Bruza is testing quantum-like models, known as “quantum cognition”, against human decision-making on crowdsourcing platforms like Amazon Mechanical Turk, with a goal of aligning human and machine decision-making in situations requiring ultimate trust, such as defence and disaster relief. Trust between human and machine will be enhanced when machines are equipped with an understanding of how humans make decisions.
Unlike in classical probability theory, for humans the order in which information is received provides context, and that context can be vital for building understanding and trust. If my dentist started pulling all the teeth from a patient’s mouth without explanation, he’d probably wind up losing a few of his own. If, on the other hand, he explained that the teeth were rotten, causing a life-threatening infection, and couldn’t be saved, he’d likely get a gappy yet grateful smile.
This research was born of necessity. Defence analysts, for example, can receive information from different sources about an emerging situation. As humans, we combine all we know to make a global assessment of trust. When the reliability of information varies and we’re unable to make this assessment, the order in which we receive information, the inferences we draw, and the context in which we make a decision can all sway our thinking.
Explaining human decision-making rationale is the missing link to developing greater human trust for machines. The potential for misunderstanding between human and machine can quickly erode our trust in them.
Quantum cognition explains context: the interference a first judgement can have on subsequent judgements. Quantum cognition models provide a better account of human thinking than traditional probabilistic models because they account for ‘contextuality’. So humans may not be irrational after all; we just need a better model to explain ourselves… one that computers can be equipped with.
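To make the order effect concrete, here is a minimal toy sketch (not Professor Bruza’s actual models; the questions and angles are invented for illustration). A belief state is a unit vector, and each yes/no question is a projection onto a “yes” direction. When the two questions’ projectors don’t commute, the probability of answering “yes” to both depends on which question is asked first:

```python
import numpy as np

def yes_projector(theta):
    """Projector onto a 'yes' direction at angle theta (illustrative)."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

psi = np.array([1.0, 0.0])   # initial belief state (unit vector)
P_a = yes_projector(0.3)     # hypothetical question A's 'yes' subspace
P_b = yes_projector(1.1)     # hypothetical question B's 'yes' subspace

# Probability of 'yes' to A then 'yes' to B is the squared length of
# the state after applying the projectors in that order; swapping the
# order gives a different value because P_a and P_b don't commute.
p_ab = np.linalg.norm(P_b @ P_a @ psi) ** 2
p_ba = np.linalg.norm(P_a @ P_b @ psi) ** 2

print(f"A then B: {p_ab:.3f}")
print(f"B then A: {p_ba:.3f}")
```

A classical joint probability would be the same either way; here the first judgement reshapes the state that the second judgement sees, which is the “interference” described above.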
QUT is researching these models to make computer decision-making more human-like, and with it more trustworthy. The two-year project will be finished in 2019.
Thanks to Novella Moncrieff for her blog on Prof Bruza’s research to help us promote the great science happening in Queensland.