This page is designed to answer the most common question we receive: "What order should I take your courses in?" Feel free to skip any course whose subject matter you already understand, but do not skip courses that contain prerequisites for later courses you want to take.

Many people do not understand that if you know nothing about machine learning and the first thing you try to tackle is word2vec, you WILL NOT succeed. Take your time to learn and strengthen the fundamentals.

See the course descriptions for a more in-depth review of what is contained within each course. This chart mostly explains the dependencies (i.e., why one course teaches you things that are needed in the next).

Deep learning-specific courses are in green, non-deep learning machine learning courses are in blue. All contain techniques that tie into deep learning.

It's very important to note that learning about machine learning is a very *nonlinear* process. In other words, it's not a matter of learning one subject, then the next, and the next, and so on. Sometimes you might need two separate, unrelated courses to provide background for one; sometimes one course provides background for two different courses. Thus, this linear chart only provides a very rough guideline. For a "graphical" representation, scroll down to the bottom of this page, where you can drag each course around and see arrows pointing from prerequisite to sequel.

- No dependencies, can be taken at any time
- Learn SQL (a universal language), which can then be applied to mobile and desktop (SQLite), web services (MySQL, PostgreSQL), and big data technologies (Hadoop Hive, Spark)
- Mostly useful for scalable data preparation (after which I assume you would feed the data into one of the algorithms below)

- Take this first if you're unfamiliar with the Numpy stack
- Covers common operations in Numpy, Scipy, Matplotlib, Pandas

- Possibly the simplest machine learning model
- Fitting a line
- Regression (predicting a number)
- The form "wx + b" will appear in every subsequent deep learning course
- Has a simple calculus-based "closed-form" solution
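The "closed-form" solution mentioned above can be sketched in a few lines of NumPy (the toy data here is made up purely for illustration):

```python
import numpy as np

# Toy data: y = 2x + 1 with no noise, so the fit is exact.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2 * x + 1

# Design matrix with a bias column, so the model is y = w*x + b.
X = np.column_stack([x, np.ones_like(x)])

# Closed-form (normal equation) solution: minimize ||Xw - y||^2.
w, b = np.linalg.solve(X.T @ X, X.T @ y)
# For this data, w ≈ 2.0 and b ≈ 1.0.
```

Solving the normal equations directly like this is exactly the calculus-based solution: set the gradient of the squared error to zero and solve the resulting linear system.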

- Binary classification (predicting a category)
- Build on the form "wx + b" to do classification
- Learn how this models a "neuron", and thus forms the building block for neural networks
- No closed-form solution, use gradient descent
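Since there is no closed-form solution, logistic regression is trained iteratively. A minimal gradient descent sketch (the toy data and learning rate are made up):

```python
import numpy as np

# Toy 1-D binary classification: negatives cluster below 0, positives above.
x = np.array([-2.0, -1.5, -1.0, 1.0, 1.5, 2.0])
t = np.array([0, 0, 0, 1, 1, 1], dtype=float)

w, b = 0.0, 0.0
lr = 0.1
for _ in range(1000):
    y = 1 / (1 + np.exp(-(w * x + b)))  # sigmoid(wx + b), the "neuron"
    # Gradient of the cross-entropy loss w.r.t. w and b.
    w -= lr * np.dot(y - t, x)
    b -= lr * np.sum(y - t)

# Threshold the sigmoid output at 0.5 to get class predictions.
preds = (1 / (1 + np.exp(-(w * x + b))) > 0.5).astype(float)
```

The update rule is the same "move against the gradient" idea that every later deep learning course builds on.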

- Not much math, focus on algorithms and geometrical visualization
- Practical ML concepts (generalization, cross-validation)
- Bigger picture: How you would fit ML into a web-service
- Naive Bayes, Decision Trees, K-Nearest Neighbor, Perceptron (ancestor of deep learning)
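Cross-validation, one of the practical concepts listed above, can be sketched as follows (the synthetic data and the trivial slope-only model are made up for illustration):

```python
import numpy as np

# K-fold cross-validation sketch: hold out each fold in turn,
# train on the rest, and average the test error.
rng = np.random.default_rng(0)
X = rng.normal(size=20)
y = 2 * X + rng.normal(scale=0.1, size=20)

k = 5
idx = np.arange(20)
scores = []
for fold in range(k):
    test = idx[fold::k]               # every k-th point is held out
    train = np.setdiff1d(idx, test)
    # "Train": least-squares slope (no intercept) on the training folds only.
    w = np.dot(X[train], y[train]) / np.dot(X[train], X[train])
    mse = np.mean((w * X[test] - y[test]) ** 2)
    scores.append(mse)
avg_mse = float(np.mean(scores))
```

The averaged held-out error estimates generalization, which is the whole point: error on data the model trained on tells you very little.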

- Confidence intervals, frequentist A/B testing
- Improvements on the vanilla A/B test: epsilon-greedy and UCB1
- The Bayesian paradigm
- Adaptive learning using Bayesian techniques
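As a taste of the explore-exploit dilemma, here is a minimal epsilon-greedy bandit sketch (the arm win rates and hyperparameters are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

true_means = [0.2, 0.5, 0.8]   # hidden win rates of three "bandit arms"
estimates = np.zeros(3)        # running estimate of each arm's win rate
counts = np.zeros(3)           # number of times each arm was pulled
eps = 0.1

for _ in range(5000):
    if rng.random() < eps:
        a = int(rng.integers(3))       # explore: pick a random arm
    else:
        a = int(np.argmax(estimates))  # exploit: pick the best arm so far
    reward = float(rng.random() < true_means[a])  # Bernoulli reward
    counts[a] += 1
    estimates[a] += (reward - estimates[a]) / counts[a]  # running mean
```

Unlike a fixed-horizon A/B test, this adapts as it goes: the bad arms get pulled less and less as evidence accumulates.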

- Go from binary classification to multi-class classification (softmax)
- Combine neurons to form a neural network
- Derive the backpropagation algorithm
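The softmax function that takes you from binary to multi-class classification is only a few lines (the input scores here are arbitrary):

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability; this doesn't change the result.
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Three class scores -> three probabilities that sum to 1.
p = softmax(np.array([2.0, 1.0, 0.1]))
```

It is the multi-class generalization of the sigmoid: with two classes, softmax reduces to exactly the logistic regression output.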

- Improve backpropagation using momentum and adaptive learning rates (RMSProp, Adam)
- Modern regularization techniques like dropout, batch normalization
- Speedup with GPU on AWS
- Do backpropagation automatically with Theano and TensorFlow
- Learn Theano and TensorFlow from the ground up
- Other modern libraries like Keras, PyTorch, MXNet, CNTK
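Momentum, the first of the optimizer improvements above, can be demonstrated on a toy one-dimensional problem (the function and hyperparameters are chosen only for illustration):

```python
# Minimize f(x) = x^2 with gradient descent plus momentum.
def grad(x):
    return 2 * x  # derivative of x^2

x, v = 5.0, 0.0       # position and velocity
lr, mu = 0.1, 0.9     # learning rate and momentum coefficient
for _ in range(300):
    v = mu * v - lr * grad(x)  # velocity accumulates past gradients
    x = x + v                  # step by the velocity, not the raw gradient
```

RMSProp and Adam refine the same loop by also adapting the step size per parameter; the structure of the update stays this simple.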

- Derive and demonstrate bias-variance tradeoff
- Bootstrap and bagging, combining models by resampling
- Random forest
- Derive theory behind AdaBoost and implement it
- Compare random forest and AdaBoost to deep learning techniques
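Bootstrap resampling, the idea underlying bagging, can be sketched like this (the data is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.0, size=200)

# Bootstrap: resample with replacement, compute the statistic each time.
boot_means = np.array([
    rng.choice(data, size=len(data), replace=True).mean()
    for _ in range(1000)
])
# "Bagged" estimate = average over the bootstrap replicates.
bagged = boot_means.mean()
```

Bagging applies the same trick to models instead of means: train one model per bootstrap sample, then average their predictions to reduce variance.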

- Add another element (convolution) to neural network
- Deep learning for images
- Learn special pre-processing techniques for images
- Learn the principles behind how famous deep neural networks are built, such as LeNet, VGG, AlexNet, and Inception

- Go from supervised to unsupervised paradigm
- K-Means Clustering, Hierarchical Clustering, Gaussian-Mixture Models (GMM)
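A bare-bones K-Means loop fits in a dozen lines (two synthetic, well-separated clusters; the initialization is simplified for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated 2-D clusters, 50 points each.
X = np.vstack([rng.normal(0, 0.5, (50, 2)),
               rng.normal(5, 0.5, (50, 2))])

# Minimal K-Means (k = 2), initialized on two known-distinct points.
centers = X[[0, 50]].copy()
for _ in range(10):
    # Assignment step: each point goes to its nearest center.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    # Update step: each center moves to the mean of its assigned points.
    centers = np.array([X[labels == k].mean(axis=0) for k in range(2)])
```

Note there are no targets anywhere in the loop: the structure is discovered from the inputs alone, which is what makes it unsupervised.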

- Practice the unsupervised paradigm with deep learning
- Explore the vanishing gradient problem and techniques to solve it
- Deep learning for visualizing data
- PCA, t-SNE, Autoencoders, Restricted Boltzmann Machines (RBMs)
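PCA, the first technique on the list, can be computed directly with an SVD (the stretched synthetic data is made up):

```python
import numpy as np

rng = np.random.default_rng(0)
# 2-D data stretched along the direction (3, 1): PCA should find that axis.
z = rng.normal(size=(200, 1))
X = z @ np.array([[3.0, 1.0]]) + rng.normal(scale=0.1, size=(200, 2))
Xc = X - X.mean(axis=0)  # PCA requires centered data

# PCA via SVD of the centered data matrix.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Vt[0]  # first principal direction (unit vector)

# Fraction of total variance explained by the first component.
explained = S[0]**2 / (S**2).sum()
```

An autoencoder with a linear hidden layer learns essentially this same subspace, which is one of the connections this course makes explicit.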

- Apply machine learning (both supervised and unsupervised) to language

- Extend unsupervised paradigm from vectors to sequences
- Add GMM to HMM to model continuous data
- Apply Theano in a non-deep learning setting, and learn basic tools needed to code recurrent neural networks

- Apply Markov models to the Markov Decision Process (MDP) - the framework for RL problems
- Continue learning about the explore-exploit dilemma, originally seen in Bayesian Machine Learning: A/B testing
- See how deep learning can be applied to reinforcement learning

- Deep learning for sequences
- Recurrent architectures (LSTM, GRU)
- Some language modeling (sentences are sequences of words)

- Learn word vectors / word embeddings (word2vec, GloVe)
- State-of-the-art sentiment analysis with Recursive Neural Networks and Recursive Neural Tensor Networks (RNTNs) - these are extensions of RNNs

- Learn about MDPs, Monte Carlo, and Temporal Difference learning more in-depth
- Apply deep neural networks to reinforcement learning
- Play with the OpenAI Gym (CartPole, MountainCar, Atari)
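The core Q-learning (temporal difference) update can be sketched on a tiny, made-up chain MDP, without needing the Gym at all:

```python
import numpy as np

# Tiny deterministic chain MDP: states 0..4 in a row, actions 0=left, 1=right.
# Reaching state 4 yields reward 1 and ends the episode.
n_states, gamma, lr, eps = 5, 0.9, 0.5, 0.3
Q = np.zeros((n_states, 2))
rng = np.random.default_rng(0)

for _ in range(500):
    s = 0
    while s != 4:
        # Epsilon-greedy action selection (the explore-exploit dilemma again).
        a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == 4 else 0.0
        # Q-learning (temporal difference) update toward the bootstrapped target.
        target = r if s2 == 4 else r + gamma * Q[s2].max()
        Q[s, a] += lr * (target - Q[s, a])
        s = s2
```

Deep Q-learning replaces the table `Q` with a neural network, but the target and the update are conceptually the same.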

- Learn state-of-the-art techniques for generating realistic, high-quality images using convolutional neural networks
- Apply game theory and Bayesian machine learning to deep learning
- Understand the connection between GANs and reinforcement learning

- Learn state-of-the-art architectures such as VGG, ResNet, and Inception
- Train state-of-the-art models fast with transfer learning
- Object detection principles with SSD
- Neural style transfer
- Super resolution (a.k.a. image enhancement)

- Learn state-of-the-art NLP and RNN techniques such as seq2seq, attention, and memory networks
- Apply CNNs to NLP
- Apply RNNs to image classification
- Speech recognition

Check out this draggable graph of courses! This best illustrates the fact that the learning path is nonlinear. Sometimes the same course provides background for more than one other course; sometimes it's the reverse, and you'll need two or more different topics to prepare you for one. Not surprisingly, this still does not tell the whole story.

Sometimes, not all of a course is relevant to the next. So sometimes a prerequisite covers some useful element of the sequel but the rest of the content is unrelated. Sometimes there is a very strong connection, where the entire prerequisite is critical to success in the sequel. These nuances are described in a lecture that can be found in the appendix of any course: "What order should I take your courses in?"

Note that you can zoom in/out and drag the nodes around.