15 July 2017

AI is More Instinct than Intelligence

Lately I've been spending a lot of time with machine learning, neural nets, and the question of extracting and communicating their thinking so that humans can review their conclusions and/or learn from them. It is hardly a surprising observation that what these models do is primarily pattern matching: a system that assesses whether a bank transaction is fraudulent will flag a transaction because, in some very complicated way, it is similar to other transactions that turned out to be fraudulent. Even unsupervised learning, which autonomously finds patterns in data, is doing just that: finding patterns.
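To make the fraud example concrete, here is a minimal sketch of similarity-based flagging: a transaction is flagged purely because it resembles past fraudulent transactions. The data, features, and thresholds are all made up for illustration; real systems use far richer features and models, but the underlying idea is the same.

```python
import math

# Hypothetical labelled history of transactions: (amount, hour_of_day, is_fraud).
history = [
    (12.50, 14, False),
    (8.00, 10, False),
    (950.00, 3, True),
    (1200.00, 2, True),
    (15.75, 16, False),
]

def distance(a, b):
    """Euclidean distance between two (amount, hour) feature vectors."""
    return math.sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)

def looks_fraudulent(txn, k=3):
    """Flag a transaction if a majority of its k most similar past
    transactions were fraudulent -- pure pattern matching, no reasoning."""
    neighbours = sorted(history, key=lambda h: distance(txn, h[:2]))[:k]
    frauds = sum(1 for h in neighbours if h[2])
    return frauds > k // 2

print(looks_fraudulent((1100.00, 3)))  # resembles the large night-time frauds: True
print(looks_fraudulent((10.00, 12)))   # resembles ordinary daytime purchases: False
```

Note that the system cannot say *why* a transaction was flagged beyond "it is close to these other ones" — which is exactly the explainability gap discussed below.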

If so, then everything these systems do is more like what we do by instinct than what we do via the high-level reasoning we traditionally call intelligence. In fact, I think this is partly why these systems appear so magical: from reading handwriting to driving cars, they do things we don't know how we do ourselves. They are getting pretty good at the things we learn to do by instinct.

(Classic AI did in fact concern itself with symbolic computation and reasoning, but the statistical models that are becoming so powerful today represent a shift from reasoning to instinctual decisions.)

This, in turn, is why it is so difficult to understand how an AI model arrives at a conclusion: it does so based on patterns and similarity, like our amygdala, and does not complement this with any kind of abstract reasoning. Even if such a layer merely rationalised a decision already made by instinct (which is probably how most humans arrive at "rational" decisions), adding it would be truly amazing, as it would let us communicate with an AI system and peek into its thought processes.

1 comment:

  1. Interesting ideas, but I think they oversimplify the situation. You are totally right that modern AI is pattern matching - but so is formal reasoning. The difference is that they operate on different levels. "Explaining" can be seen as a way to transfer the high-level pattern that matched the data - and thus led to a certain conclusion. So there is nothing wrong with pattern matching - the problem is how to transfer the pattern I am using. That is a totally different problem, which requires (at least partly) understanding how humans transfer these patterns: how they "explain" decisions and ideas. Such methods are probably restricted to certain kinds of patterns. The next step is to use AI methods that fall into that realm - so that the machine can "explain" its decisions.