What order should I take your courses in?



This page is designed to answer the most common question we receive: "What order should I take your courses in?" Feel free to skip any course whose subject matter you already understand, but do not skip courses that contain prerequisites for later courses you want to take.

Many people do not realize that if you know nothing about machine learning and the first thing you try to tackle is word2vec, you WILL NOT succeed. Take your time to learn and strengthen the fundamentals.

See the course descriptions for a more in-depth review of what each course contains. This chart mostly explains the dependencies (i.e., why one course teaches you things that are needed in the next).

Deep learning-specific courses are in green; machine learning courses that are not specific to deep learning are in blue. All contain techniques that tie into deep learning.

  • No dependencies, can be taken at any time
  • Learn SQL (a universal language), which can then be applied to mobile and desktop (SQLite), web services (MySQL, PostgreSQL), and big data technologies (Hadoop Hive, Spark)
  • Mostly useful for scalable data preparation (after which I assume you would feed the data into one of the algorithms below)
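To give a flavor of what "a universal language" means here: the same SQL runs against SQLite with zero setup. A minimal sketch using Python's built-in `sqlite3` module (the table and values are made up for illustration):

```python
import sqlite3

# SQLite needs no server, so it's a convenient place to practice SQL
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")
cur.executemany(
    "INSERT INTO users (name, age) VALUES (?, ?)",
    [("alice", 30), ("bob", 25), ("carol", 35)],
)
# The same SELECT syntax carries over to MySQL, PostgreSQL, Hive, and Spark SQL
cur.execute("SELECT name FROM users WHERE age > ? ORDER BY age", (28,))
names = [row[0] for row in cur.fetchall()]
print(names)  # ['alice', 'carol']
conn.close()
```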
  • Take this first if you're unfamiliar with the Numpy stack
  • Covers common operations in Numpy, Scipy, Matplotlib, Pandas
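A taste of the "common operations" in question — vectorized math, matrix products, and boolean indexing, which every later course assumes you can do without thinking (the array values are made up for illustration):

```python
import numpy as np

X = np.arange(6).reshape(3, 2)   # 3 samples, 2 features
col_means = X.mean(axis=0)       # per-column mean, no Python loop: [2., 3.]
dots = X @ X.T                   # matrix multiply: all pairwise dot products
mask = X[:, 0] > 0               # boolean indexing selects rows 1 and 2
print(col_means, X[mask].shape)
```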
  • Possibly the simplest machine learning model
  • Fitting a line
  • Regression (predicting a number)
  • The form "wx + b" will appear in every subsequent deep learning course
  • Has a simple calculus-based "closed-form" solution
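A minimal sketch of that closed-form solution on synthetic data (the true slope 2 and intercept 1 are assumptions for illustration). Note how the bias b is absorbed into "wx + b" by appending a column of ones:

```python
import numpy as np

# Synthetic data: y = 2x + 1 plus a little noise
np.random.seed(0)
x = np.random.randn(100)
y = 2 * x + 1 + 0.1 * np.random.randn(100)

# Append a column of ones so the bias b is learned as just another weight
X = np.column_stack([x, np.ones_like(x)])

# Closed-form (normal equation) solution: w = (X^T X)^{-1} X^T y
w, b = np.linalg.solve(X.T @ X, X.T @ y)
print(w, b)  # close to 2.0 and 1.0
```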
  • Binary classification (predicting a category)
  • Add to the form "wx + b" to do classification
  • Learn how this models a "neuron", and thus forms the building block for neural networks
  • No closed-form solution, use gradient descent
  • Not much math, focus on algorithms and geometrical visualization
  • Practical ML concepts (generalization, cross-validation)
  • Bigger picture: How you would fit ML into a web-service
  • Naive Bayes, Decision Trees, K-Nearest Neighbor, Perceptron (ancestor of deep learning)
  • Confidence intervals, frequentist A/B testing
  • Improvements on the vanilla A/B test: epsilon-greedy and UCB1
  • The Bayesian paradigm
  • Adaptive learning using Bayesian techniques
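The epsilon-greedy idea fits in a few lines. A minimal sketch with three hypothetical ad variants (the conversion rates, epsilon, and trial count are made up for illustration):

```python
import numpy as np

np.random.seed(2)
true_rates = [0.2, 0.5, 0.7]   # hypothetical conversion rate per variant
counts = np.zeros(3)
values = np.zeros(3)           # running mean reward per arm
eps = 0.1

for _ in range(5000):
    # explore a random arm with probability eps, else exploit the current best
    if np.random.rand() < eps:
        arm = np.random.randint(3)
    else:
        arm = int(np.argmax(values))
    reward = float(np.random.rand() < true_rates[arm])
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

best = int(np.argmax(values))
print(best)  # the adaptive scheme homes in on the best variant
```

Unlike a fixed-split A/B test, most traffic ends up on the winning arm while the experiment is still running — that is the explore-exploit tradeoff the course develops further (UCB1, Thompson sampling).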
  • Go from binary classification to multi-class classification (softmax)
  • Combine neurons to form a neural network
  • Derive the backpropagation algorithm
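The softmax generalizes the sigmoid to more than two classes. A minimal sketch with the standard max-subtraction trick for numerical stability (the class scores are made up for illustration):

```python
import numpy as np

def softmax(a):
    # subtracting the max doesn't change the result but avoids overflow
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

scores = np.array([2.0, 1.0, 0.1])  # made-up class scores ("wx + b" per class)
p = softmax(scores)
print(p)  # a valid probability distribution over the 3 classes
```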
  • Improve backpropagation using momentum and adaptive learning rates
  • Modern regularization techniques like dropout
  • Speedup with GPU on AWS
  • Do backpropagation automatically with Theano and TensorFlow
  • Learn Theano and TensorFlow from the ground up
  • Derive and demonstrate bias-variance tradeoff
  • Bootstrap and bagging, combining models by resampling
  • Random forest
  • Derive theory behind AdaBoost and implement it
  • Compare random forest and AdaBoost to deep learning techniques
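The mechanism underneath bagging and random forests is the bootstrap: resample the data with replacement, refit, and aggregate. A minimal sketch of that resampling step alone, using a made-up sample (bagging applies the same idea to full models rather than a mean):

```python
import numpy as np

np.random.seed(3)
data = np.random.randn(500) + 5.0   # synthetic sample, true mean 5

# Resample with replacement many times and aggregate the estimates
boot_means = [
    np.random.choice(data, size=len(data), replace=True).mean()
    for _ in range(1000)
]
print(np.mean(boot_means), np.std(boot_means))  # estimate and its variability
```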
  • Add another element (convolution) to neural network
  • Deep learning for images
  • Learn special pre-processing techniques for images
  • Go from supervised to unsupervised paradigm
  • K-Means Clustering, Hierarchical Clustering, Gaussian-Mixture Models (GMM)
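K-Means is simple enough to sketch in full: alternate between assigning each point to its nearest center and moving each center to the mean of its points. A minimal sketch on two made-up, well-separated blobs:

```python
import numpy as np

np.random.seed(4)
# Two well-separated 2-D blobs, made up for illustration
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 8])

K = 2
centers = X[np.random.choice(len(X), K, replace=False)]  # init from data points
for _ in range(10):
    # assignment step: each point goes to its nearest center
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = d.argmin(axis=1)
    # update step: each center moves to the mean of its assigned points
    centers = np.array([X[labels == k].mean(axis=0) for k in range(K)])
print(centers)  # one center per blob
```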
  • Practice the unsupervised paradigm with deep learning
  • Explore the vanishing gradient problem and techniques to solve it
  • Deep learning for visualizing data
  • PCA, t-SNE, Autoencoders, Restricted Boltzmann Machines (RBMs)
  • Apply machine learning (both supervised and unsupervised) to language
  • Extend unsupervised paradigm from vectors to sequences
  • Add GMM to HMM to model continuous data
  • Apply Theano in a non-deep learning setting, and learn basic tools needed to code recurrent neural networks
  • Apply Markov models to the Markov Decision Process (MDP) - the framework for RL problems
  • Continue learning about the explore-exploit dilemma, originally seen in Bayesian Machine Learning: A/B testing
  • See how deep learning can be applied to reinforcement learning
  • Deep learning for sequences
  • Recurrent architectures (LSTM, GRU)
  • Some language modeling (sentences are sequences of words)
  • Learning word vectors / word embeddings (word2vec, GloVe)
  • State-of-the-art sentiment analysis with Recursive Neural Networks and Recursive Neural Tensor Networks (RNTNs) - these are extensions of RNNs
  • Learn about MDPs, Monte Carlo, and Temporal Difference learning more in-depth
  • Apply deep neural networks to reinforcement learning
  • Play with the OpenAI Gym (CartPole, MountainCar, Atari)
  • Learn state-of-the-art techniques for generating realistic, high-quality images using convolutional neural networks
  • Apply game theory and Bayesian machine learning to deep learning
  • Understand the connection between GANs and reinforcement learning