Update: A recording of the meetup is now up on YouTube.
Here you can find my slides from the TWiML & AI EMEA Meetup about Trust in ML models, where I presented the Anchors paper by Ribeiro, Singh and Guestrin.
I have also just written two articles for the German IT magazine iX on the same topic of Explaining Black-Box Machine Learning Models:
A short article in iX 12/2018
During my stay in London for the m3 conference, I also gave a talk at the R-Ladies London Meetup on Tuesday, October 16th, about one of my favorite topics: Interpretable Deep Learning with R, Keras and LIME.
Keras is a high-level open-source deep learning framework that by default works on top of TensorFlow. Keras is minimalistic, efficient and highly flexible because it works with a modular layer system to define, compile and fit neural networks.
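To give an idea of what this modular define/compile/fit workflow looks like, here is a minimal sketch with the keras R package; the layer sizes and the dummy data are made up purely for illustration and are not taken from the talk:

```r
# Minimal sketch of the define / compile / fit workflow with the keras R package
# (assumes keras and a TensorFlow backend are installed; data are dummy values).
library(keras)

# define: stack layers with the modular layer_* functions
model <- keras_model_sequential() %>%
  layer_dense(units = 32, activation = "relu", input_shape = c(10)) %>%
  layer_dense(units = 1, activation = "sigmoid")

# compile: choose loss, optimizer and metrics
model %>% compile(
  loss = "binary_crossentropy",
  optimizer = "adam",
  metrics = "accuracy"
)

# fit: train on (dummy) data
x <- matrix(runif(1000), ncol = 10)
y <- sample(0:1, 100, replace = TRUE)
model %>% fit(x, y, epochs = 5, batch_size = 16)
```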
These are my sketchnotes for Sam Charrington’s podcast This Week in Machine Learning and AI about Evaluating Model Explainability Methods with Sara Hooker:
Sketchnotes from TWiMLAI talk: Evaluating Model Explainability Methods with Sara Hooker
You can listen to the podcast here.
In this, the first episode of the Deep Learning Indaba series, we’re joined by Sara Hooker, AI Resident at Google Brain. I had the pleasure of speaking with Sara in the run-up to the Indaba about her work on interpretability in deep neural networks.
On Wednesday, September 26th, I gave a talk about ‘Decoding The Black Box’ at the Frankfurt Data Science Meetup.
My slides were created with beautiful.ai and can be found here.
DECODING THE BLACK BOX
And finally we will have with us Dr. Shirin Glander, whom we have been wanting to invite for a long time. Shirin lives in Münster and works as a Data Scientist at codecentric, and she has lots of practical experience.
I have yet another Meetup talk to announce:
On Wednesday, September 26th, I’ll be talking about ‘Decoding The Black Box’ at the Frankfurt Data Science Meetup.
What's particularly cool about this meetup is that the event will be livestreamed at www.youtube.com/c/FrankfurtDataScience!
TALK #2: DECODING THE BLACK BOX
And finally we will have with us Dr. Shirin Glander, whom we have been wanting to invite for a long time. Shirin lives in Münster and works as a Data Scientist at codecentric, and she has lots of practical experience.
Today I am very happy to announce that during my stay in London for the m3 conference, I’ll also be giving a talk at the R-Ladies London Meetup on Tuesday, October 16th, about one of my favorite topics: Interpretable Deep Learning with R, Keras and LIME.
You can register via Eventbrite: https://www.eventbrite.co.uk/e/interpretable-deep-learning-with-r-lime-and-keras-tickets-50118369392
ABOUT THE TALK
Keras is a high-level open-source deep learning framework that by default works on top of TensorFlow.
Here I am sharing the slides for a webinar I gave for SAP about Explaining Keras Image Classification Models with LIME.
Slides can be found here: https://www.slideshare.net/ShirinGlander/sap-webinar-explaining-keras-image-classification-models-with-lime
Keras is a high-level open-source deep learning framework that by default works on top of TensorFlow. Keras is minimalistic, efficient and highly flexible because it works with a modular layer system to define, compile and fit neural networks. It has been written in Python but can also be used from within R.
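As a rough illustration of the kind of workflow the webinar covers, here is a hedged sketch of how one might explain a Keras image classifier with the lime R package; the pretrained VGG16 model and the image path "elephant.jpg" are assumptions made for this example, not content from the slides:

```r
# Sketch only: explaining a pretrained Keras image classifier with LIME in R.
# Assumes keras, lime and abind are installed; "elephant.jpg" is a placeholder image.
library(keras)
library(lime)

# a pretrained ImageNet classifier stands in for the model from the webinar
model <- application_vgg16(weights = "imagenet")

# preprocessing function that turns image paths into the tensor the model expects
img_preprocess <- function(paths) {
  arrays <- lapply(paths, function(path) {
    img <- image_load(path, target_size = c(224, 224))
    arr <- image_to_array(img)
    arr <- array_reshape(arr, c(1, dim(arr)))
    imagenet_preprocess_input(arr)
  })
  do.call(abind::abind, c(arrays, list(along = 1)))
}

img_path <- "elephant.jpg"  # placeholder path

# build the explainer, then explain the top predicted labels via superpixels
explainer   <- lime(img_path, model, preprocess = img_preprocess)
explanation <- explain(img_path, explainer, n_labels = 2, n_features = 20)
plot_image_explanation(explanation)
```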