Is AI Safe?

In a previous post, I promised to elaborate on why a “safety mechanism” is essential when Machine Learning (e.g. Deep Learning) is used to automate decision making.

In a classic software setting, the computer executes code written by a programmer in a chosen programming language, so the software behaves in a very predictable manner. When Deep Learning is used, however, the decision is not made by explicit programmer-written code. Instead, it is made by a pre-trained model.

One might ask: why not "debug" the model when things go wrong? It is computer code at the end of the day!

Unfortunately, while Deep Learning might appear simple on paper (see image on the left), that is just a simplified abstraction. In reality, it looks entirely different (see image on the right). It should be clear by now why AI is considered a black box and why no one can really tell where an error may be.

For this reason, it makes sense to test the output against well-known "good" limits and take action when the prediction is out of range. For example, the system can override the prediction or notify the user.
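The idea can be sketched in a few lines of Python. This is a minimal illustration, not a production design: the `predict` method, the bounds, and the fallback value are all hypothetical placeholders standing in for whatever model and domain limits a real system would use.

```python
def safe_predict(model, features, lower, upper, fallback=None):
    """Accept the model's prediction only if it lies within [lower, upper].

    Otherwise return a fallback value and a flag so the caller can
    override the prediction or notify the user.
    """
    prediction = model.predict(features)
    if lower <= prediction <= upper:
        return prediction, False   # prediction is in range: accept it
    return fallback, True          # out of range: override and flag it


# A stand-in "model" that returns an implausible value,
# e.g. a negative price from a regression model.
class StubModel:
    def predict(self, features):
        return -50_000


value, flagged = safe_predict(StubModel(), {}, lower=0, upper=1_000_000)
if flagged:
    print("Prediction out of range; falling back to", value)
```

The key design choice is that the safety check lives outside the model: it does not try to explain why the black box produced a bad value, it only guarantees that a bad value never reaches the user unchecked.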

By Jaafar

Founder of AccuVal