Artificial Intelligence (AI) and society

As computer algorithms based on AI start penetrating our everyday lives, we as a society need to have an in-depth discussion of how we want to deal with this new technology.

AI systems are different in that they ‘learn’ from a training data set and then apply the rules they derive in a general context. But there’s a problem: an AI system has only statistical predictability, meaning it will be wrong at times, and it cannot easily explain the rules it has derived. It can only be interrogated for an answer (inference).
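To make that concrete, here is a minimal sketch using scikit-learn and synthetic data, purely as my own illustration and not any particular deployed system: the model ‘learns’ from a training set and is then queried on inputs it has never seen, where it is right most of the time but not always.

    # 'Learning' phase: derive rules from a training data set.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic data standing in for real-world observations.
    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression().fit(X_train, y_train)   # the system 'learns' its rules here
    accuracy = model.score(X_test, y_test)               # inference on data it has never seen
    print(f"Accuracy on unseen data: {accuracy:.2f}")    # typically below 1.0: right statistically, wrong at times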

As a society we need a little more than that, though. Hence I believe an accountability standard for all AI-based systems is needed, so that we humans can peacefully coexist with this new technology.

Such accountability should probably include:

  • Explainability – Make explicit the factors that led to an AI system’s decision. Why did the system decide the way it did? We can’t hide behind the ‘black box’ approach. (A rough sketch of what this could look like in code follows this list.)
  • Confidence – Any AI-derived answer should be accompanied by a confidence measure, so humans know when a decision should be re-evaluated or possibly overridden. If the input data doesn’t correlate well with the training set, the system’s confidence should be low.
  • Continuity – The system should apply similar judgement over time and shouldn’t change its “mind”. If interrogated with the same input set, the answer should always be the same. This avoids unpredictability and unfairness, and it avoids the risk of a system starting to ‘lie’ by changing its mind over time.
  • Disputability – Humans should have an easy way to dispute a system’s judgement. The burden of proof should lie with the AI system’s operator.
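Below is a rough sketch of what one such ‘accountable’ answer could look like in code, assuming the model and data from the earlier sketch; the helper accountable_predict and the record fields are hypothetical names for illustration, not an existing standard or API.

    import numpy as np

    def accountable_predict(model, x, feature_names):
        """Return a decision together with the information a human needs to audit it."""
        proba = model.predict_proba([x])[0]        # Confidence: class probabilities for this input
        decision = int(np.argmax(proba))
        # Explainability: per-feature contribution of a linear model (coefficient * feature value)
        factors = dict(zip(feature_names, model.coef_[0] * x))
        return {
            "decision": decision,
            "confidence": float(proba[decision]),  # low values flag the case for human review
            "factors": factors,                    # the factors behind the decision, made explicit
            "model_version": "v1",                 # Continuity: same version + same input -> same answer
        }

    # Disputability: keep the full record so a decision can be retrieved and challenged later.
    record = accountable_predict(model, X_test[0], [f"f{i}" for i in range(X_test.shape[1])])

Pinning the model version and keeping the input alongside the decision is what makes continuity and disputability checkable after the fact.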

I’m in support of deploying AI-based systems widely. There are many exciting applications where AI systems can help us see the signal in all the data noise. But humans must come first, and humans need accountability.

