Modern Deep Learning in Python

Build with modern libraries like TensorFlow, Theano, Keras, PyTorch, CNTK, and MXNet. Train faster with a GPU on AWS.


Course Data

Lectures: 64
Length: 06h 52m
Skill Level: All Levels
Languages: English
Includes: Lifetime access, 30-day money back guarantee

Course Description

This course continues where my first course, Deep Learning in Python, left off. You already know how to build an artificial neural network in Python, and you have a plug-and-play script that you can use for TensorFlow. Neural networks are one of the staples of machine learning, and they are always a top contender in Kaggle contests. If you want to improve your skills with neural networks and deep learning, this is the course for you.

You already learned about backpropagation, but there were a lot of unanswered questions. How can you modify it to improve training speed? In this course you will learn about batch and stochastic gradient descent, two commonly used techniques that allow you to train on just a small sample of the data at each iteration, greatly speeding up training time.
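To preview the idea: the only difference between full, batch, and stochastic gradient descent is how many rows you feed into each weight update. Here is a minimal numpy sketch for logistic regression (the function name, defaults, and data layout are illustrative, not the course's exact code):

```python
import numpy as np

def sgd_logistic(X, y, lr=0.1, batch_size=32, epochs=10):
    """Mini-batch SGD for logistic regression (illustrative sketch)."""
    N, D = X.shape
    w = np.zeros(D)
    for _ in range(epochs):
        idx = np.random.permutation(N)          # reshuffle every epoch
        for start in range(0, N, batch_size):
            batch = idx[start:start + batch_size]
            Xb, yb = X[batch], y[batch]
            p = 1 / (1 + np.exp(-Xb @ w))       # sigmoid predictions
            grad = Xb.T @ (p - yb) / len(batch) # cross-entropy gradient
            w -= lr * grad                      # update on this batch only
    return w

# batch_size=N is full gradient descent; batch_size=1 is pure stochastic
```

Setting `batch_size` to the full dataset size recovers ordinary (full) gradient descent; everything in between trades gradient accuracy for many more updates per pass over the data.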

You will also learn about momentum, which can help carry you through local minima and keep you from having to be too conservative with your learning rate. In addition, you will learn about adaptive learning rate techniques like AdaGrad, RMSprop, and Adam, which can further speed up your training.
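The update rules behind these techniques are short; here is a hedged numpy sketch of three of them (function names and default hyperparameters are illustrative, not the course's exact code):

```python
import numpy as np

def momentum_step(w, grad, v, lr=0.01, mu=0.9):
    """Classical momentum: the velocity v accumulates past gradients,
    helping the update roll through shallow local minima."""
    v = mu * v - lr * grad
    return w + v, v

def rmsprop_step(w, grad, cache, lr=0.001, decay=0.9, eps=1e-8):
    """RMSprop: divide by a running average of squared gradients,
    giving each parameter its own effective learning rate."""
    cache = decay * cache + (1 - decay) * grad**2
    return w - lr * grad / (np.sqrt(cache) + eps), cache

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """Adam: momentum on the gradient plus RMSprop-style scaling,
    with bias correction for the zero-initialized running averages."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)   # bias correction (t starts at 1)
    v_hat = v / (1 - b2**t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```

Each function is a drop-in replacement for the plain `w -= lr * grad` step: you just carry the extra state (velocity, cache, or the two Adam averages) along between iterations.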

Because you already know about the fundamentals of neural networks, we are going to talk about more modern techniques, like dropout regularization and batch normalization, which we will implement in both TensorFlow and Theano. The course is constantly being updated and more advanced regularization techniques are coming in the near future.
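Both techniques are easier to demystify than their reputations suggest. A minimal numpy sketch of the training-time forward passes (inverted dropout and batch normalization; names and defaults are illustrative, and the batch norm version omits the test-time running averages covered later):

```python
import numpy as np

def dropout_forward(a, p_keep=0.8, training=True):
    """Inverted dropout: randomly zero activations at train time and
    rescale by 1/p_keep so the expected activation is unchanged."""
    if not training:
        return a  # no-op at test time
    mask = (np.random.rand(*a.shape) < p_keep) / p_keep
    return a * mask

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Batch normalization (training-time forward pass only):
    standardize each feature over the batch, then scale and shift."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```

In the course we implement these inside full networks in both TensorFlow and Theano; the sketch above is just the core arithmetic.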

In my last course, I just wanted to give you a little sneak peek at TensorFlow. In this course we are going to start from the basics so you understand exactly what's going on - what are TensorFlow variables and expressions, and how can you use these building blocks to create a neural network? We are also going to look at a library that's been around much longer and is very popular for deep learning - Theano. With this library we will also examine the basic building blocks - variables, expressions, and functions - so that you can build neural networks in Theano with confidence.

Theano was the predecessor of all the modern deep learning libraries we use today. Now we have almost TOO MANY options: Keras, PyTorch, CNTK (Microsoft), MXNet (Amazon / Apache), and more. In this course, we cover all of these! Pick and choose the one you love best.

Because one of the main advantages of TensorFlow and Theano is the ability to use the GPU to speed up training, I will show you how to set up a GPU instance on AWS and compare the speed of CPU vs. GPU for training a deep neural network.

With all this extra speed, we are going to look at a real dataset - the famous MNIST dataset (images of handwritten digits) and compare against various known benchmarks. This is THE dataset researchers look at first when they want to ask the question, "does this thing work?"

This course focuses on "how to build and understand", not just "how to use". Anyone can learn to use an API in 15 minutes after reading some documentation. It's not about "remembering facts", it's about "seeing for yourself" via experimentation. It will teach you how to visualize what's happening in the model internally. If you want more than just a superficial look at machine learning models, this course is for you.



NOTES:

All the code for this course can be downloaded from my github:

https://github.com/lazyprogrammer/machine_learning_examples

In the directory: ann_class2

Make sure you always "git pull" so you have the latest version!



HARD PREREQUISITES / KNOWLEDGE YOU ARE ASSUMED TO HAVE:

  • calculus
  • linear algebra
  • probability
  • Python coding: if/else, loops, lists, dicts, sets
  • Numpy coding: matrix and vector operations, loading a CSV file
  • linear regression, logistic regression
  • neural networks and backpropagation


TIPS (for getting through the course):

  • Watch it at 2x.
  • Take handwritten notes. This will drastically increase your ability to retain the information.
  • Write down the equations. If you don't, I guarantee it will just look like gibberish.
  • Ask lots of questions on the discussion board. The more the better!
  • Realize that most exercises will take you days or weeks to complete.
  • Write code yourself, don't just sit there and look at my code.

Lectures

Introduction and Outline

Review

  1. Review of Basic Concepts (14:13)
  2. Where to get the MNIST dataset and Establishing a Linear Benchmark (04:31)

Gradient Descent: Full vs Batch vs Stochastic

  1. What are full, batch, and stochastic gradient descent? (02:45)
  2. Full vs Batch vs Stochastic Gradient Descent in code (05:38)

Momentum and adaptive learning rates

  1. Momentum (01:56)
  2. Code for training a neural network using momentum (06:41)
  3. Variable and adaptive learning rates (11:45)
  4. Constant learning rate vs. RMSProp in Code (04:05)
  5. Adam Optimization (11:18)
  6. Adam in Code (05:43)

Choosing Hyperparameters

  1. Hyperparameter Optimization: Cross-validation, Grid Search, and Random Search (03:19)
  2. Sampling Logarithmically (03:10)
  3. Grid Search in Code (07:11)
  4. Modifying Grid Search (01:22)
  5. Random Search in Code (03:45)
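The "Sampling Logarithmically" lecture covers an idea small enough to sketch here: hyperparameters like the learning rate span several orders of magnitude, so random search should sample them uniformly in log-space rather than linearly (function name and bounds below are illustrative):

```python
import numpy as np

def sample_log_uniform(low=1e-5, high=1e-1, size=10):
    """Sample values uniformly in log10-space, so each decade
    (1e-5..1e-4, 1e-4..1e-3, ...) is equally likely."""
    return 10 ** np.random.uniform(np.log10(low), np.log10(high), size)
```

Sampling linearly over the same range would almost never propose values below 1e-3; log-space sampling gives every decade equal coverage.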

Weight Initialization

  1. Weight Initialization Section Introduction (00:58)
  2. Vanishing and Exploding Gradients (06:06)
  3. Weight Initialization (08:20)
  4. Local vs. Global Minima (02:51)
  5. Weight Initialization Section Summary (01:38)

Theano

  1. Theano Basics: Variables, Functions, Expressions, Optimization (07:47)
  2. Building a neural network in Theano (09:17)

TensorFlow

  1. TensorFlow Basics: Variables, Functions, Expressions, Optimization (07:27)
  2. Building a neural network in TensorFlow (09:43)
  3. What is a Session? (And more) (14:08)

GPU Speedup, Homework, and Other Misc. Topics

  1. Setting up a GPU Instance on Amazon Web Services (07:06)
  2. Can Big Data be used to Speed Up Backpropagation? (03:21)
  3. Exercises and Concepts Still to be Covered (02:13)
  4. How to Improve your Theano and TensorFlow Skills (04:39)
  5. Theano vs. TensorFlow (05:52)

Transition to the 2nd Half of the Course

  1. Transition to the 2nd Half of the Course (05:24)

Project: Facial Expression Recognition

  1. Facial Expression Recognition Project Introduction (04:51)
  2. Facial Expression Recognition Problem Description (12:21)
  3. The class imbalance problem (06:01)
  4. Utilities walkthrough (05:45)
  5. Class-Based ANN in Theano (19:09)
  6. Class-Based ANN in TensorFlow (15:28)
  7. Facial Expression Recognition Project Summary (01:20)

Modern Regularization Techniques

  1. Modern Regularization Techniques Section Introduction (02:25)
  2. Dropout Regularization (11:38)
  3. Dropout Intuition (04:01)
  4. Noise Injection (05:22)
  5. Modern Regularization Techniques Section Summary (02:15)

Batch Normalization

  1. Batch Normalization Section Introduction (02:03)
  2. Exponentially-Smoothed Averages (10:54)
  3. Batch Normalization Theory (10:54)
  4. Batch Normalization TensorFlow (part 1) (05:20)
  5. Batch Normalization TensorFlow (part 2) (05:34)
  6. Batch Normalization Theano (part 1) (04:20)
  7. Batch Normalization Theano (part 2) (06:33)
  8. Noise Perspective (01:58)
  9. Batch Normalization Section Summary (01:38)

Keras

  1. Keras Discussion (06:48)
  2. Keras in Code (06:37)

PyTorch, CNTK, and MXNet

  1. PyTorch, CNTK, and MXNet (00:48)

Appendix

  1. Manually Choosing Learning Rate and Regularization Penalty (04:08)
  2. How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow (17:22)
  3. How to Succeed in this Course (Long Version) (10:24)
  4. How to Code Yourself (part 1) (15:54)
  5. How to Code Yourself (part 2) (09:23)
  6. How to Uncompress a .tar.gz file (03:18)
  7. Where to get discount coupons and FREE deep learning material (02:20)

Extras

  • Estimator API Tutorial