Making The Black Box Transparent: Lessons in Opacity
YOW! Data 2019
Deep Learning is all the rage now. It is powerful, and it is cheap. To proponents of "explainable" machine learning, however, this is not really good news - deep learning is essentially a black box that one can't look into.
To be sure, there are efforts to peek inside the black box to see what it's learning - saliency maps and various visualization tools are useful for understanding what is going on in deep neural networks. The question, of course, is whether it's worth it.
In this talk I shall cover the basics of looking into a deep neural network and share a different approach to doing so.
Chief Data Scientist
Xuanyi is Just Another Human Bean. He is the primary author of Gorgonia, a suite of libraries for doing deep learning in the Go programming language (which, he's proud to note, predated the public release of TensorFlow).
By day he works as the chief data scientist of a local startup. By night he chases the weirder ideas that he believes will help him create SkyNet. That leaves him something of a Jack of All Trades and Master of None.