Knowing the mathematics behind machine learning algorithms is a superpower. If you have ever built a model for a real-life problem, you have probably found that familiarity with the details goes a long way when you want to move beyond baseline performance. This is especially true when you want to push the boundaries of state-of-the-art deep learning tools.
However, most of this knowledge is hidden behind layers of advanced mathematics. Understanding methods like stochastic gradient descent might seem challenging since it is built on top of multivariable calculus and probability theory.
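To make the point concrete, here is a minimal sketch of stochastic gradient descent on a toy problem: fitting a line y = w·x to noisy data by repeatedly stepping against the derivative of the squared error. All names, values, and the learning rate here are illustrative choices, not taken from any particular library.

```python
import random

random.seed(0)

# Toy dataset: y = 3x plus a little Gaussian noise (values are illustrative).
true_w = 3.0
data = [(x, true_w * x + random.gauss(0, 0.1))
        for x in [i / 10 for i in range(1, 51)]]

w = 0.0    # initial parameter guess
lr = 0.01  # learning rate (step size)

for epoch in range(100):
    random.shuffle(data)  # "stochastic": visit one random sample at a time
    for x, y in data:
        grad = 2 * (w * x - y) * x  # derivative of (w*x - y)**2 w.r.t. w
        w -= lr * grad              # step against the gradient

print(w)  # should land near the true slope of 3.0
```

The update rule is one line of code, but understanding *why* it works, and why the gradient points toward steeper error, is exactly the multivariable calculus this post argues is worth learning.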
If you are a beginner and don't necessarily have formal education in higher mathematics, creating a curriculum for yourself is hard. With proper foundations, though, most ideas can be seen as quite natural. This post aims to present a roadmap of all the mathematics for machine learning, taking you from absolute zero to a deep understanding of how neural networks work.
To keep things simple, the aim is not to cover everything. Instead, we will focus on getting our bearings. This way, you will be able to study other topics on your own without difficulty, if need be.