Yann LeCun, a deep learning pioneer, has said that the most important development in recent years has been adversarial training, the idea behind GANs.

GAN stands for Generative Adversarial Network.

What is unsupervised learning?

Unsupervised learning means we’re not trying to map input data to targets, we’re just trying to learn the structure of that input data.

Once we’ve learned that structure, we can do some pretty cool things.
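To make the idea concrete, here's a toy sketch (not code from the course): we "learn the structure" of unlabeled 2-D data by estimating its mean and covariance, then generate brand-new points from the learned distribution. The data itself is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Unlabeled data: no targets anywhere, just inputs.
data = rng.normal(loc=[2.0, -1.0], scale=[0.5, 1.5], size=(1000, 2))

# "Learning the structure": estimate mean and covariance.
mu = data.mean(axis=0)
cov = np.cov(data, rowvar=False)

# "Doing something cool with it": sample new points that resemble the data.
new_points = rng.multivariate_normal(mu, cov, size=5)
print(new_points.shape)  # (5, 2)
```

Everything in the course builds on this same pattern, just with far more expressive models than a single Gaussian.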

One example is generating poetry - we’ve done examples of this in the past.

But poetry is a very specific form. What about writing in general?

If we can learn the structure of language, we can generate any kind of text. In fact, big companies are investing heavily in research on machine-written news.
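As a tiny illustration of "learning the structure of language" (a toy, not the course's model), a first-order Markov chain records which word follows which, then generates new text by walking those transitions. The corpus here is invented for the example.

```python
import random

# Toy corpus; real models train on far more text.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Learning the structure": count word-to-word transitions.
transitions = {}
for prev, nxt in zip(corpus, corpus[1:]):
    transitions.setdefault(prev, []).append(nxt)

# "Generating": sample a walk through the learned transitions.
random.seed(42)
word = "the"
generated = [word]
for _ in range(8):
    options = transitions.get(word)
    if not options:  # reached a word with no observed successor
        break
    word = random.choice(options)
    generated.append(word)

print(" ".join(generated))
```

Deep generative models replace the transition table with a neural network, but the generate-by-sampling idea is the same.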

But what if we go back to poetry and take away the words?

Well then we get art, in general.

By learning the structure of art, we can create more art.

How about art as sound?

If we learn the structure of music, we can create new music.

Imagine the top 40 hits you hear on the radio are songs written by robots rather than humans.

The possibilities are endless!

You might be wondering, "how is this course different from the first unsupervised deep learning course?"

In that first course, we also tried to learn the structure of data, but for different reasons.

We wanted to learn the structure of data in order to improve supervised training, which we demonstrated was possible.

In this new course, we want to learn the structure of data in order to produce more stuff that resembles the original data.

This by itself is really cool, but we'll also be incorporating ideas from

Thanks for reading and I’ll see you in class. =)

Suggested Prerequisites:

- Calculus
- Probability
- Object-oriented programming
- Python coding: if/else, loops, lists, dicts, sets
- Numpy coding: matrix and vector operations
- Linear regression
- Gradient descent
- Know how to build a feedforward and convolutional neural network in Theano and TensorFlow

Tips for success:

- Use the video speed changer! Personally, I like to watch at 2x.
- Take handwritten notes. This will drastically increase your ability to retain the information.
- Write down the equations. If you don't, I guarantee it will just look like gibberish.
- Ask lots of questions on the discussion board. The more the better!
- Don't get discouraged if you can't solve every exercise right away. Sometimes it'll take hours, days, or maybe weeks!
- Write code yourself; this is an applied course! Don't be a "couch potato".

You probably already know this, but some of us really and truly appreciate you. BTW, I spent a reasonable amount of time making a learning roadmap based on your courses and have started the journey.

Looking forward to your new stuff.

I am signing up so that I can easily refresh when needed and see what you consider important, as well as to support your great work. Thank you.


I wish you a happy and safe holiday season. I am glad you chose to share your knowledge with the rest of us.

And, I couldn't agree more with some of your "rants", and found myself nodding vigorously!

You are an excellent teacher, and a rare breed.

And, your courses are frankly, more digestible and teach a student far more than some of the top-tier courses from ivy leagues I have taken in the past.

(I plan to go through many more courses, one by one!)

I know you must be deluged with complaints in spite of having the best content around. That's just human nature.

Also, satisfied people rarely take the time to write, so I thought I would write in for a change. :)

In the process of completing my Master’s at Hunan University, China, I am writing this feedback to you in order to express my deep gratitude for all the knowledge and skills I have obtained studying your courses and following your recommendations.

The first course of yours I took was on Convolutional Neural Networks ("Deep Learning p.5", as far as I remember). Answering one of my questions on the Q&A board, you suggested I should start from the beginning – the Linear and Logistic Regression courses. Although I assumed I already knew many of the basics at that time, I overcame my "pride" and decided to start my journey in Deep Learning from scratch. ...


- Welcome (04:37) (FREE preview available)
- Where does this course fit into your deep learning studies? (05:01)
- Where to get the code and data (03:52)
- How to Succeed in this Course (03:04)
- Tensorflow or Theano - Your Choice! (04:10)

- What does it mean to Sample? (04:58)
- Sampling Demo: Bayes Classifier (03:58)
- Gaussian Mixture Model Review (10:32)
- Sampling Demo: Bayes Classifier with GMM (03:55)
- Why do we care about generating samples? (11:21)
- Neural Network and Autoencoder Review (07:27)
- Tensorflow Warmup (04:08)
- Theano Warmup (04:55)
- Suggestion Box (03:10)

- Variational Autoencoders Section Introduction (05:40)
- Variational Autoencoder Architecture (05:58)
- Parameterizing a Gaussian with a Neural Network (08:01)
- The Latent Space, Predictive Distributions and Samples (05:14)
- Cost Function (07:29)
- Tensorflow Implementation (pt 1) (07:19)
- Tensorflow Implementation (pt 2) (02:30)
- Tensorflow Implementation (pt 3) (09:56)
- The Reparameterization Trick (05:06)
- Theano Implementation (10:53)
- Visualizing the Latent Space (03:10)
- Bayesian Perspective (10:12)
- Variational Autoencoder Section Summary (04:03)
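The reparameterization trick covered in the VAE section above is the standard move of writing a sample as z = μ + σ ⊙ ε with ε ~ N(0, I), so that z is a differentiable function of the encoder's outputs. A minimal numpy sketch (the shapes and values here are made up for illustration, not taken from the course's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
batch, latent_dim = 4, 2

# Stand-ins for the encoder's outputs: mean and log-variance of q(z|x).
mu = rng.normal(size=(batch, latent_dim))
log_var = rng.normal(size=(batch, latent_dim))

# Reparameterization: the randomness lives entirely in eps, so gradients
# can flow through mu and log_var during backpropagation.
eps = rng.standard_normal((batch, latent_dim))
z = mu + np.exp(0.5 * log_var) * eps

print(z.shape)  # (4, 2)
```

Sampling z directly from N(μ, σ²) would block gradients; moving the noise into ε is what makes the cost function trainable end to end.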

- GAN - Basic Principles (05:14)
- GAN Cost Function (pt 1) (07:24)
- GAN Cost Function (pt 2) (06:29)
- DCGAN (07:39)
- Batch Normalization Review (08:02)
- Fractionally-Strided Convolution (08:36)
- Tensorflow Implementation Notes (13:24)
- Tensorflow Implementation (18:14)
- Theano Implementation Notes (07:27)
- Theano Implementation (19:48)
- GAN Summary (09:44)
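The GAN cost-function lectures above cover the standard minimax objective; as a quick reference, the discriminator loss and the common non-saturating generator loss can be sketched in numpy as follows. The logits are random placeholders, and the `sigmoid` helper is defined here for the example (illustrative only, not the course's code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
d_real_logits = rng.normal(size=5)  # discriminator logits on real data (placeholder)
d_fake_logits = rng.normal(size=5)  # discriminator logits on generated data (placeholder)

eps = 1e-12  # avoid log(0)
# Discriminator: maximize log D(x) + log(1 - D(G(z))), i.e. minimize the negative.
d_loss = -np.mean(np.log(sigmoid(d_real_logits) + eps)
                  + np.log(1.0 - sigmoid(d_fake_logits) + eps))
# Generator (non-saturating form): minimize -log D(G(z)).
g_loss = -np.mean(np.log(sigmoid(d_fake_logits) + eps))

print(float(d_loss) > 0, float(g_loss) > 0)
```

In practice, frameworks compute these via their built-in sigmoid cross-entropy ops for numerical stability, which is how the implementations in the course handle it.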

- Theano Basics: Variables, Functions, Expressions, Optimization (07:47)
- Building a neural network in Theano (09:17)
- TensorFlow Basics: Variables, Functions, Expressions, Optimization (07:27)
- Building a neural network in TensorFlow (09:43)

- Pre-Installation Check (04:13)
- Anaconda Environment Setup (20:21)
- How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow (17:33)

- How to Code Yourself (part 1) (15:55)
- How to Code Yourself (part 2) (09:24)
- Proof that using Jupyter Notebook is the same as not using it (12:29)
- Python 2 vs Python 3 (04:38)
- Is Theano Dead? (10:04)

- How to Succeed in this Course (Long Version) (10:25)
- Is this for Beginners or Experts? Academic or Practical? Fast or slow-paced? (22:05)
- What order should I take your courses in? (part 1) (11:19)
- What order should I take your courses in? (part 2) (16:07)

- What is the Appendix? (02:48)
- Where to get discount coupons and FREE deep learning material (05:49)

- GAN Tutorial PDF
- Variational Autoencoder Tutorial PDF
- Pre-trained Style Transfer Network Ready for Use
- GAN in Tensorflow 2