As with the other videos from our codecentric.ai Bootcamp (Random Forests, Neural Nets & Gradient Boosting), I am again sharing an English version of the script (plus R code) for this most recent addition on How Convolutional Neural Nets work.
In this lesson, I am going to explain how computers learn to see; meaning, how do they learn to recognize images or objects in images? One of the most commonly used approaches to teach computers “vision” is Convolutional Neural Nets.
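As a quick taste of what such a network looks like in code (a generic sketch, not the example from the video script; it assumes the keras R package and illustrative 28 x 28 grayscale inputs with 10 classes):

```r
library(keras)

# a minimal convolutional net: two conv + pooling blocks, then dense layers
model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu",
                input_shape = c(28, 28, 1)) %>%   # illustrative input shape
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = "relu") %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_flatten() %>%
  layer_dense(units = 64, activation = "relu") %>%
  layer_dense(units = 10, activation = "softmax")  # 10 example classes

model %>% compile(
  optimizer = "adam",
  loss = "categorical_crossentropy",
  metrics = "accuracy"
)
```

The convolutional layers learn local image filters, while the pooling layers shrink the feature maps; the dense layers at the end turn the extracted features into class probabilities.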
In our next MünsteR R-user group meetup on Tuesday, February 5th, 2019, titled Don’t reinvent the wheel: making use of shiny extension packages, Suthira Owlarn will introduce the shiny package and show how she used it to build an interactive web app for her sequencing datasets.
You can RSVP here: http://meetu.ps/e/Gg5th/w54bW/f
Shiny is a popular R package for building interactive web apps and dashboards – no web development knowledge required!
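To give an idea of how little code a basic app needs (a generic sketch, not Suthira’s app), a complete Shiny app can fit in a handful of lines:

```r
library(shiny)

# UI: a slider that controls the number of random points shown in a plot
ui <- fluidPage(
  sliderInput("n", "Number of points:", min = 10, max = 500, value = 100),
  plotOutput("scatter")
)

# server: re-draws the plot whenever the slider value changes
server <- function(input, output) {
  output$scatter <- renderPlot({
    plot(rnorm(input$n), rnorm(input$n))
  })
}

shinyApp(ui = ui, server = server)
```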
Registration is now open for my 1.5-day workshop on how to develop end-to-end, from a Keras/TensorFlow model to production.
It will take place on February 21st & 22nd in Berlin, Germany. The workshop costs 950.00 Euro plus VAT (MwSt.). We will start at 9 am on Thursday and finish around 3 pm on Friday.
Please register by sending an email to shirin.glander@gmail.com with the following information:
- name
- company/institute/affiliation
- address for invoice
- phone number
- reference to this blog

The course material will be in English, and we will speak a mix of German and English, depending on the participants’ preferences.
This is code that accompanies a book chapter on customer churn that I have written for the German dpunkt Verlag. The book is in German and will probably appear in February: https://www.dpunkt.de/buecher/13208/9783864906107-data-science.html.
The code you find below can be used to recreate all figures and analyses from this book chapter. Because the content is exclusive to the book, my descriptions around the code had to be kept minimal. But I’m sure you can get the gist, even without the book.
Update: There is now a recording of the meetup up on YouTube.
Here you find my slides from the TWiML & AI EMEA Meetup about Trust in ML models, where I presented the Anchors paper by Carlos Guestrin et al.
I have also just written two articles for the German IT magazine iX about the same topic of Explaining Black-Box Machine Learning Models:
A short article in the iX 12/2018
In a recent video, I covered Random Forests and Neural Nets as part of the codecentric.ai Bootcamp.
In the most recent video, I covered Gradient Boosting and XGBoost.
You can find the video on YouTube and the slides on slides.com. Both are again in German with code examples in Python.
But below, you find the English version of the content, plus code examples in R for caret, xgboost and h2o.
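As a small teaser for the kind of code shown there (a generic sketch on xgboost’s built-in agaricus mushroom data, not the exact code from the post):

```r
library(xgboost)

# built-in binary classification data shipped with the xgboost package
data(agaricus.train, package = "xgboost")
data(agaricus.test, package = "xgboost")

dtrain <- xgb.DMatrix(data = agaricus.train$data, label = agaricus.train$label)
dtest  <- xgb.DMatrix(data = agaricus.test$data,  label = agaricus.test$label)

params <- list(
  objective = "binary:logistic",  # binary classification
  eta = 0.1,                      # learning rate (shrinkage)
  max_depth = 3                   # depth of each boosted tree
)

# fit 50 boosting rounds and predict class probabilities on the test set
model <- xgb.train(params = params, data = dtrain, nrounds = 50)
preds <- predict(model, dtest)
head(preds)
```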
On November 7th, Uwe Friedrichsen and I again gave our talk from the JAX conference 2018, Deep Learning - a Primer, at the W-JAX in Munich.
A few weeks earlier, I gave a similar talk, Demystifying Big Data and Deep Learning (and how to get started), at two events.
Here are the two very similar presentations from these talks: