At the upcoming This Week in Machine Learning and AI (TWiML&AI) European online meetup, I’ll be presenting and leading a discussion about the Anchors paper, which presents a next-generation machine learning interpretability technique. Come and join the fun! :-)
Date: Tuesday, 4th December 2018
Time: 19:00 CET
Join: https://twimlai.com/meetups/trust-in-predictions-of-ml-models/
In my last blog post, about Random Forests, I introduced the codecentric.ai Bootcamp. The next part I published was about Neural Networks and Deep Learning. Every video of our bootcamp will come with example code and tasks to promote hands-on learning. While the practical parts of the bootcamp will be using Python, below you will find the English R version of this Neural Nets Practical Example, where I explain how neural nets learn and how the concepts and techniques translate to training neural nets in R with the H2O Deep Learning function.
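To give a flavour of what that looks like, here is a minimal sketch of training a feed-forward neural net with H2O's deep learning function in R; the iris data and the layer sizes are stand-ins for the example, not the exact code from the video:

```r
library(h2o)

h2o.init()

# convert an R data frame into an H2OFrame
train <- as.h2o(iris)

# train a small feed-forward neural net:
# two hidden layers with 10 neurons each, 50 training epochs
dl_model <- h2o.deeplearning(
  x = 1:4,                 # predictor columns
  y = "Species",           # response column
  training_frame = train,
  hidden = c(10, 10),
  epochs = 50
)

# inspect performance on the training data
h2o.performance(dl_model)
```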
A few colleagues from codecentric.ai and I are currently developing a free online course about machine learning and deep learning. As part of this course, I am producing a series of videos about machine learning basics - the first video in this series was about Random Forests.
You can find the video on YouTube but as of now, it is only available in German. Same goes for the slides, which are also currently German only.
During my stay in London for the M-cubed conference, I also gave a talk at the R-Ladies London Meetup on Tuesday, October 16th, about one of my favorite topics: Interpretable Deep Learning with R, Keras and LIME.
Keras is a high-level, open-source deep learning framework that by default runs on top of TensorFlow. It is minimalistic, efficient and highly flexible, because it uses a modular layer system to define, compile and fit neural networks.
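As a minimal illustration of that define / compile / fit workflow with the keras R package (the data and layer sizes below are made up purely for the example):

```r
library(keras)

# made-up training data: 100 samples with 10 features and a binary outcome
x_train <- matrix(runif(100 * 10), ncol = 10)
y_train <- sample(0:1, 100, replace = TRUE)

# define: stack layers in a sequential model
model <- keras_model_sequential() %>%
  layer_dense(units = 16, activation = "relu", input_shape = c(10)) %>%
  layer_dense(units = 1, activation = "sigmoid")

# compile: choose loss, optimizer and metrics
model %>% compile(
  loss = "binary_crossentropy",
  optimizer = "adam",
  metrics = "accuracy"
)

# fit: train the network
model %>% fit(x_train, y_train, epochs = 10, batch_size = 32)
```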
I spent the last two days in London at the M-cubed conference.
Here are the slides from my talk about Explaining complex machine learning models with LIME:
Traditional machine learning workflows focus heavily on model training and optimization: the best model is usually chosen via performance measures like accuracy or error, and we tend to assume that a model is good enough for deployment once it passes certain thresholds on these performance criteria.
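To make the LIME part concrete, here is a minimal sketch of explaining individual predictions with the lime package in R; the caret random forest and the iris data are stand-ins for a real model and dataset:

```r
library(caret)
library(lime)

# a stand-in model: random forest predicting Species from the iris data
model <- train(Species ~ ., data = iris, method = "rf")

# build an explainer from the training data (without the response) and the model
explainer <- lime(iris[, -5], model)

# explain a few individual predictions:
# which features pushed the model towards its predicted label?
explanation <- explain(iris[c(1, 51, 101), -5], explainer,
                       n_labels = 1, n_features = 4)

plot_features(explanation)
```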
These are my sketchnotes for Sam Charrington’s podcast This Week in Machine Learning and AI about Evaluating Model Explainability Methods with Sara Hooker:
Sketchnotes from TWiMLAI talk: Evaluating Model Explainability Methods with Sara Hooker
You can listen to the podcast here.
In this, the first episode of the Deep Learning Indaba series, we’re joined by Sara Hooker, AI Resident at Google Brain. I had the pleasure of speaking with Sara in the run-up to the Indaba about her work on interpretability in deep neural networks.
At our next MünsteR R-user group meetup on Tuesday, November 20th, 2018, titled Using R to help plan the future of transport, Mark Padgham will provide an overview of several inter-related R packages for analysing urban dynamics.
You can RSVP here: http://meetu.ps/e/F7zDN/w54bW/f
The primary motivation for developing these packages has been their use in Active Transport Futures - a group of researchers and coders striving to help cities better plan for futures in which active travel, particularly walking and cycling, plays an increasingly prominent role (lots of open source code on GitHub).