This course is the next logical step in my **deep learning**, **data science**, and **machine learning** series. I’ve done a lot of courses about deep learning, and I just released a course about **unsupervised learning**, where I talked about **clustering** and **density estimation**. So what do you get when you put these two together? Unsupervised deep learning!

In this course we’ll start with some very basic stuff: **principal components analysis (PCA)** and a popular nonlinear dimensionality reduction technique known as **t-SNE** (t-distributed stochastic neighbor embedding).
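
To make the idea concrete before the lectures, here's a minimal sketch of reducing some data to 2 dimensions with both techniques. It assumes scikit-learn purely for illustration; in the course itself the focus is on building and understanding the techniques, not just calling a library:

```python
# A minimal sketch, assuming scikit-learn just for illustration (the
# course itself focuses on building these techniques, not using an API).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X = np.random.randn(500, 50)   # stand-in for real data: N=500, D=50

# PCA: linear projection onto the top-2 directions of maximum variance
X_pca = PCA(n_components=2).fit_transform(X)

# t-SNE: nonlinear embedding that tries to preserve local neighborhoods
X_tsne = TSNE(n_components=2, perplexity=30).fit_transform(X)

print(X_pca.shape, X_tsne.shape)   # (500, 2) (500, 2)
```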

Next, we’ll look at a special type of unsupervised neural network called the **autoencoder**. After describing how an autoencoder works, I’ll show you how you can link a bunch of them together to form a deep stack of autoencoders, which leads to better performance of a supervised **deep neural network**. Autoencoders are like a non-linear form of PCA.
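
As a rough preview of the core idea, here's a minimal Numpy sketch of a single autoencoder, assuming tied weights and a squared-error reconstruction loss (the course versions are written in Theano and TensorFlow and differ in the details):

```python
# A rough sketch, assuming tied weights and a squared-error loss
# (illustrative only; the course code is written in Theano/TensorFlow).
import numpy as np

def sigmoid(a):
    return 1 / (1 + np.exp(-a))

np.random.seed(0)
N, D, M = 200, 20, 5               # samples, input dim, hidden dim
X = np.random.randn(N, D)          # stand-in for real data

W = np.random.randn(D, M) / np.sqrt(D)
b = np.zeros(M)                    # hidden bias
c = np.zeros(D)                    # output bias
lr = 0.01

for epoch in range(500):
    Z = sigmoid(X.dot(W) + b)      # encode: compress D -> M
    X_hat = Z.dot(W.T) + c         # decode: reconstruct with tied weights
    E = X_hat - X                  # reconstruction error
    # gradients of J = (1/2N) * sum(E^2); W gets encode + decode terms
    dZ = E.dot(W) * Z * (1 - Z)
    W -= lr * (X.T.dot(dZ) + E.T.dot(Z)) / N
    b -= lr * dZ.sum(axis=0) / N
    c -= lr * E.sum(axis=0) / N

print("final reconstruction loss:", (E * E).mean())
```

Stacking then amounts to training one such autoencoder, training another on its hidden activations Z, and so on, layer by layer.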

Last, we’ll look at **Restricted Boltzmann Machines (RBMs)**. These are yet another popular unsupervised neural network that you can use in the same way as autoencoders to **pretrain** your supervised deep neural network. I’ll show you an interesting way of training restricted Boltzmann machines, known as **Gibbs sampling**, a special case of **Markov Chain Monte Carlo**, and I’ll demonstrate how, even though this method is only a rough approximation, it still ends up reducing other cost functions, such as the one used for autoencoders. This method is also known as **Contrastive Divergence** or **CD-k**. As in physical systems, we define a concept called **free energy** and attempt to minimize this quantity.
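
For a rough preview, here's what one CD-1 training loop can look like, assuming a Bernoulli-Bernoulli RBM and plain Numpy (again, the course implementations use Theano and TensorFlow):

```python
# A rough sketch, assuming a Bernoulli-Bernoulli RBM trained with CD-1
# (illustrative only; the course implements RBMs in Theano/TensorFlow).
import numpy as np

def sigmoid(a):
    return 1 / (1 + np.exp(-a))

np.random.seed(0)
N, D, M = 100, 30, 10                            # samples, visible, hidden
X = (np.random.rand(N, D) > 0.5).astype(float)   # binary stand-in data

W = np.random.randn(D, M) * 0.1
b = np.zeros(D)                                  # visible bias
c = np.zeros(M)                                  # hidden bias
lr = 0.05

for epoch in range(100):
    # positive phase: hidden probabilities given the data
    ph0 = sigmoid(X.dot(W) + c)
    h0 = (np.random.rand(N, M) < ph0).astype(float)

    # one Gibbs step: sample v1 ~ p(v|h0), then hidden probs given v1
    pv1 = sigmoid(h0.dot(W.T) + b)
    v1 = (np.random.rand(N, D) < pv1).astype(float)
    ph1 = sigmoid(v1.dot(W) + c)

    # CD-1 update: data statistics minus (approximate) model statistics
    W += lr * (X.T.dot(ph0) - v1.T.dot(ph1)) / N
    b += lr * (X - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)

# free energy F(v) = -v.b - sum_j log(1 + exp(v.W + c)_j); training
# lowers it on the data relative to the model's own samples
F = -X.dot(b) - np.logaddexp(0, X.dot(W) + c).sum(axis=1)
print("mean free energy on training data:", F.mean())
```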

Finally, we’ll bring all these concepts together and I’ll show you visually what happens when you use PCA and t-SNE on the features that the autoencoders and RBMs have learned, and we’ll see that even without labels the results suggest that a pattern has been found.

All the materials used in this course are FREE. Since this course is the 4th in the deep learning series, I will assume you already know calculus, linear algebra, and **Python** coding. You'll want to install **Numpy**, **Theano**, and **TensorFlow** for this course. These are essential items in your **data analytics** toolbox.

If you are interested in deep learning and you want to learn about modern deep learning developments beyond just plain backpropagation, including using unsupervised neural networks to interpret what features can be automatically and hierarchically learned in a deep learning system, this course is for you.

This course focuses on **"how to build and understand"**, not just "how to use". Anyone can learn to use an API in 15 minutes after reading some documentation. It's not about "remembering facts"; it's about **"seeing for yourself" via experimentation**. It will teach you how to visualize what's happening in the model internally. If you want more than just a superficial look at machine learning models, this course is for you.

NOTES:

All the code for this course can be downloaded from my GitHub:

https://github.com/lazyprogrammer/machine_learning_examples

In the directory: unsupervised_class2

Make sure you always "git pull" so you have the latest version!

HARD PREREQUISITES / KNOWLEDGE YOU ARE ASSUMED TO HAVE:

- calculus
- linear algebra
- probability
- Python coding: if/else, loops, lists, dicts, sets
- Numpy coding: matrix and vector operations, loading a CSV file
- linear regression, logistic regression
- neural networks and backpropagation
- the ability to write a feedforward neural network in Theano and TensorFlow

TIPS (for getting through the course):

- Watch it at 2x.
- Take handwritten notes. This will drastically increase your ability to retain the information.
- Write down the equations. If you don't, I guarantee it will just look like gibberish.
- Ask lots of questions on the discussion board. The more the better!
- Realize that most exercises will take you days or weeks to complete.
- Write code yourself, don't just sit there and look at my code.

- Introduction and Outline (01:55) (FREE preview available)
- Where does this course fit into your deep learning studies? (02:57)
- How to Succeed in this Course (03:13)
- What are the practical applications of unsupervised deep learning? (05:34)

- What does PCA do? (06:14)
- PCA derivation (04:22)
- MNIST visualization, finding the optimal number of principal components (03:39)
- PCA objective function (02:05)

- t-SNE Theory (04:28)
- t-SNE Visualization (04:33)
- t-SNE on the Donut (05:51)
- t-SNE on XOR (04:36)
- t-SNE on MNIST (02:13)

- Autoencoders (03:20)
- Denoising Autoencoders (01:55)
- Stacked Autoencoders (03:32)
- Writing the autoencoder class in code (Theano) (11:55)
- Testing our Autoencoder (Theano) (03:05)
- Writing the deep neural network class in code (Theano) (12:42)
- Autoencoder in Code (TensorFlow) (08:29)
- Testing greedy layer-wise autoencoder training vs. pure backpropagation (03:33)
- Cross Entropy vs. KL Divergence (04:40)
- Deep Autoencoder Visualization Description (01:32)
- Deep Autoencoder Visualization in Code (11:14)

- Restricted Boltzmann Machine Theory (09:31)
- Deriving Conditional Probabilities from Joint Probability (06:18)
- Contrastive Divergence for RBM Training (02:45)
- RBM in Code (Theano) and Greedy Layer-wise Pre-training on MNIST (14:24)
- RBM in Code (TensorFlow) (05:03)

- The Vanishing Gradient Problem Description (03:07)
- The Vanishing Gradient Problem Demo in Code (12:17)

- Exercises on feature visualization and interpretation (02:07)
- How to derive the free energy formula (06:32)

- Application of PCA and SVD to NLP (Natural Language Processing) (02:30)
- Latent Semantic Analysis in Code (10:08)
- Application of t-SNE + K-Means: Finding Clusters of Related Words (08:38)

- What is the Appendix? (02:48)
- Windows-Focused Environment Setup 2018 (20:20)
- How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow (17:22)
- Is this for Beginners or Experts? Academic or Practical? Fast or slow-paced? (22:04)
- How to Code Yourself (part 1) (15:54)
- How to Code Yourself (part 2) (09:23)
- What order should I take your courses in? (part 1) (11:18)
- What order should I take your courses in? (part 2) (16:07)
- Python 2 vs Python 3 (04:38)
- How to Succeed in this Course (Long Version) (10:24)
- Where to get discount coupons and FREE deep learning material (02:20)