This course continues where my first course, Deep Learning in Python, left off. You already know how to build an artificial neural network in Python, and you have a plug-and-play script that you can use for TensorFlow. Neural networks are one of the staples of machine learning, and they are always a top contender in Kaggle contests. If you want to improve your skills with neural networks and deep learning, this is the course for you.

You already learned about backpropagation, but there were a lot of unanswered questions. How can you modify it to improve training speed? In this course you will learn about **stochastic and mini-batch gradient descent**, two commonly used techniques that allow you to train on just a small sample of the data at each iteration, greatly speeding up training time.
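
To make the idea concrete, here is a minimal numpy sketch (function and parameter names are my own illustrative choices, not code from the course): instead of computing the gradient over the full dataset, each update uses only a small random batch.

```python
import numpy as np

def minibatch_sgd(X, Y, w, grad_fn, lr=0.01, batch_size=32, epochs=10):
    # Each update uses only a small random batch instead of the full
    # dataset, so one pass over the data yields many cheap updates.
    n = len(X)
    for _ in range(epochs):
        idx = np.random.permutation(n)          # reshuffle every epoch
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            w = w - lr * grad_fn(X[batch], Y[batch], w)
    return w
```

With `batch_size=1` this is pure stochastic gradient descent; with `batch_size=n` it reduces to full-batch gradient descent.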

You will also learn about **momentum**, which can carry you through local minima and keep you from having to be too conservative with your learning rate, and about **adaptive learning rate** techniques like **AdaGrad**, **RMSprop**, and **Adam**, which can also speed up training.
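
As a preview, here are the update rules written out in plain numpy (again an illustrative sketch with my own variable names, not the course code):

```python
import numpy as np

def momentum_step(w, g, v, lr=0.01, mu=0.9):
    # Velocity accumulates an exponentially-decaying sum of past
    # gradients, which can carry you through shallow local minima.
    v = mu * v - lr * g
    return w + v, v

def adam_step(w, g, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # Adam adapts the step size per parameter using estimates of the
    # gradient's first moment (m) and second moment (v); t starts at 1.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)        # bias correction for zero init
    v_hat = v / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```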

Because you already know about the fundamentals of neural networks, we are going to talk about more modern techniques, like **dropout regularization** and **batch normalization**, which we will implement in both **TensorFlow** and **Theano**. The course is constantly being updated and more advanced regularization techniques are coming in the near future.
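
To give you a taste, here is what the forward passes look like in numpy (a simplified sketch under my own naming; the course builds the full TensorFlow and Theano versions):

```python
import numpy as np

def dropout_forward(h, p_keep=0.8, training=True):
    # "Inverted" dropout: randomly zero units at train time, then divide
    # by p_keep so the expected activation matches test time (a no-op).
    if not training:
        return h
    mask = (np.random.rand(*h.shape) < p_keep) / p_keep
    return h * mask

def batchnorm_forward(h, gamma, beta, eps=1e-8):
    # Standardize each unit over the batch, then let the network learn
    # its own scale (gamma) and shift (beta). Train-time version only;
    # at test time you'd use exponentially-smoothed running statistics.
    mu = h.mean(axis=0)
    var = h.var(axis=0)
    return gamma * (h - mu) / np.sqrt(var + eps) + beta
```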

In my last course, I just wanted to give you a little sneak peek at TensorFlow. In this course we are going to start from the basics so you understand exactly what's going on - what are TensorFlow variables and expressions, and how can you use these building blocks to create a neural network? We are also going to look at a library that's been around much longer and is very popular for deep learning - Theano. With this library we will also examine the basic building blocks - variables, expressions, and functions - so that you can build neural networks in Theano with confidence.
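
For example, here is roughly what those building blocks look like in the TensorFlow 1.x API that the course's Session material uses (an illustrative sketch, not the course code):

```python
import tensorflow as tf  # 1.x-style API, matching the Session-based lectures

# A variable holds trainable state; an expression describes computation
# symbolically without actually running it.
w = tf.Variable(20.0)
cost = w * w + w + 1.0                 # quadratic, minimized at w = -0.5
train_op = tf.train.GradientDescentOptimizer(0.3).minimize(cost)

with tf.Session() as sess:             # nothing executes until a Session runs it
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_op)
    print(sess.run(w))                 # approximately -0.5
```

Theano's analogous pieces are symbolic variables from `theano.tensor` and compiled callables from `theano.function`, which the Theano sections cover in the same way.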

Theano was the predecessor to all of today's modern deep learning libraries, and now we have almost TOO MANY options: **Keras**, **PyTorch**, **CNTK** (Microsoft), **MXNet** (Amazon / Apache), etc. In this course, we cover all of these! Pick and choose the one you love best.

Because one of the main advantages of TensorFlow and Theano is the ability to use the GPU to speed up training, I will show you how to set up a GPU instance on AWS and compare the speed of **CPU vs GPU** for training a deep neural network.

With all this extra speed, we are going to look at a real dataset - the famous **MNIST** dataset (images of handwritten digits) - and compare against various known benchmarks. This is THE dataset researchers look at first when they want to ask the question, "does this thing work?"

This course focuses on **"how to build and understand"**, not just "how to use". Anyone can learn to use an API in 15 minutes after reading some documentation. It's not about "remembering facts", it's about **"seeing for yourself" via experimentation**. It will teach you how to visualize what's happening in the model internally. If you want more than just a superficial look at machine learning models, this course is for you.

Suggested Prerequisites:

- calculus
- matrix arithmetic
- probability
- Python coding: if/else, loops, lists, dicts, sets
- Numpy coding: matrix and vector operations, loading a CSV file
- linear regression, logistic regression
- neural networks and backpropagation

Tips for success:

- Use the video speed changer! Personally, I like to watch at 2x.
- Take handwritten notes. This will drastically increase your ability to retain the information.
- Write down the equations. If you don't, I guarantee it will just look like gibberish.
- Ask lots of questions on the discussion board. The more the better!
- Don't get discouraged if you can't solve every exercise right away. Sometimes it'll take hours, days, or maybe weeks!
- Write code yourself; this is an applied course! Don't be a "couch potato".

Course Outline:

- Introduction and Outline (09:20) (FREE preview available)
- Where to get the Code (09:21)
- How to Succeed in this Course (05:52)

- Review (pt 1): Neuron Predictions (14:02)
- Review (pt 2): Neuron Learning (09:17)
- Review (pt 3): Artificial Neural Networks (12:10)
- Review Exercise Prompt (05:39)
- Review Code (pt 1) (05:51)
- Review Code (pt 2) (12:40)
- Review Summary (01:13)

- Stochastic Gradient Descent and Mini-Batch Gradient Descent (Theory) (16:15)
- SGD Exercise Prompt (03:34)
- Stochastic Gradient Descent and Mini-Batch Gradient Descent (Code pt 1) (11:08)
- Stochastic Gradient Descent and Mini-Batch Gradient Descent (Code pt 2) (12:10)

- Using Momentum to Speed Up Training (06:11)
- Nesterov Momentum (06:37)
- Code for training a neural network using momentum (06:35)
- Variable and adaptive learning rates (11:46)
- Constant learning rate vs. RMSProp in Code (04:05)
- Adam Optimization (pt 1) (13:15)
- Adam Optimization (pt 2) (11:14)
- Adam in Code (05:44)
- Suggestion Box (03:03)

- Hyperparameter Optimization: Cross-validation, Grid Search, and Random Search (03:20)
- Sampling Logarithmically (03:11)
- Grid Search in Code (07:12)
- Modifying Grid Search (01:23)
- Random Search in Code (03:46)

- Weight Initialization Section Introduction (59:00)
- Vanishing and Exploding Gradients (06:07)
- Weight Initialization (08:21)
- Local vs. Global Minima (02:52)
- Weight Initialization Section Summary (01:39)

- Theano Basics: Variables, Functions, Expressions, Optimization (07:47)
- Building a neural network in Theano (09:17)
- Is Theano Dead? (10:04)

- TensorFlow Basics: Variables, Functions, Expressions, Optimization (07:27)
- Building a neural network in TensorFlow (09:43)
- What is a Session? (And more) (14:09)

- Setting up a GPU Instance on Amazon Web Services (07:07)
- Installing NVIDIA GPU-Accelerated Deep Learning Libraries on your Home Computer (22:15)
- Can Big Data be used to Speed Up Backpropagation? (03:22)
- How to Improve your Theano and Tensorflow Skills (04:40)
- Theano vs. TensorFlow (05:53)

- Transition to the 2nd Half of the Course (05:25)

- Facial Expression Recognition Project Introduction (04:52)
- Facial Expression Recognition Problem Description (12:22)
- The class imbalance problem (06:02)
- Utilities walkthrough (05:46)
- Class-Based ANN in Theano (19:10)
- Class-Based ANN in TensorFlow (15:29)
- Facial Expression Recognition Project Summary (01:21)

- Modern Regularization Techniques Section Introduction (02:26)
- Dropout Regularization (11:39)
- Dropout Intuition (04:02)
- Noise Injection (05:23)
- Modern Regularization Techniques Section Summary (02:16)

- Batch Normalization Section Introduction (02:04)
- Exponentially-Smoothed Averages (04:25)
- Batch Normalization Theory (10:55)
- Batch Normalization Tensorflow (part 1) (05:21)
- Batch Normalization Tensorflow (part 2) (05:35)
- Batch Normalization Theano (part 1) (04:21)
- Batch Normalization Theano (part 2) (06:34)
- Noise Perspective (01:59)
- Batch Normalization Section Summary (01:39)

- Keras Discussion (06:49)
- Keras in Code (06:38)
- Keras Functional API (04:27)
- How to easily convert Keras into Tensorflow 2.0 code (01:49)

- PyTorch Basics (11:36)
- PyTorch Dropout (02:56)
- PyTorch Batch Norm (02:58)

- PyTorch, CNTK, and MXNet (49:00)

- What's the difference between "neural networks" and "deep learning"? (07:59)
- Manually Choosing Learning Rate and Regularization Penalty (04:09)
- Where does this course fit into your deep learning studies? (03:49)

- How to Uncompress a .tar.gz file (03:18)

- Windows-Focused Environment Setup 2018 (20:21)
- How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow (17:33)

- How to Code Yourself (part 1) (15:55)
- How to Code Yourself (part 2) (09:24)
- Proof that using Jupyter Notebook is the same as not using it (12:29)
- Python 2 vs Python 3 (04:38)

- How to Succeed in this Course (Long Version) (10:25)
- Is this for Beginners or Experts? Academic or Practical? Fast or slow-paced? (22:05)
- What order should I take your courses in? (part 1) (11:19)
- What order should I take your courses in? (part 2) (16:07)

- What is the Appendix? (02:48)
- Where to get discount coupons and FREE deep learning material (05:31)

- Estimator API Tutorial