On the interpretability of models

A common criticism of deep learning models is that they are 'black boxes'. You put data in one end as your inputs, the argument goes, and you get predictions or results out the other end, but you have no idea why the model gave you those predictions. This has something to do with how neural networks work: you often have many layers that are...
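
To make the 'black box' point concrete, here is a minimal sketch (my illustration, not code from the original post) of a tiny feed-forward network in plain numpy. Data goes in one end and a prediction comes out the other, but the intermediate activations are just arrays of numbers with no obvious human-readable meaning:

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomly initialised weights for three stacked layers
# (shapes chosen arbitrarily for illustration).
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 8))
W3 = rng.normal(size=(8, 1))

def relu(x):
    return np.maximum(0, x)

def predict(x):
    h1 = relu(x @ W1)   # first hidden layer: 8 opaque features
    h2 = relu(h1 @ W2)  # second hidden layer: transformations of those
    return h2 @ W3      # a single number out the other end

x = rng.normal(size=(1, 4))  # some input data
print(predict(x))            # a prediction, but *why* this value?
```

Inspecting `h1` or `h2` gives you raw numbers, not reasons, which is exactly the opacity the criticism points at.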