Note: Instead of SSD, I'll show you how to use RetinaNet, which is both more modern and more accurate. I'll show you both how to use a pretrained model and how to train one yourself on a custom dataset in Google Colab.
This is one of the most exciting courses I’ve done and it really shows how fast and how far deep learning has come over the years.
When I first started my deep learning series, I never considered that I'd make two courses on convolutional neural networks.
I think what you'll find is that this course is so entirely different from the previous one, you will be impressed at just how much material we have to cover.
Let me give you a quick rundown of what this course is all about:
We're going to bridge the gap from the basic CNN architecture you already know and love to modern, novel architectures such as VGG and Inception (named after the movie, which, by the way, is also great!).
We’re going to apply these to images of blood cells, and create a system that is a better medical expert than either you or I. This brings up a fascinating idea: that the doctors of the future are not humans, but robots.
In this course, you'll see how we can turn a CNN into an object detection system that not only classifies images but can also locate each object in an image and predict its label.
You can imagine that such a task is a basic prerequisite for self-driving vehicles. (They must be able to detect cars, pedestrians, bicycles, traffic lights, etc., in real time.)
We'll be looking at a state-of-the-art algorithm called SSD, which is both faster and more accurate than its predecessors.
Another very popular computer vision task that makes use of CNNs is called neural style transfer.
This is where you take one image, called the content image, and another image, called the style image, and combine them to make an entirely new image, as if you had hired a painter to paint the content of the first image in the style of the second. Unlike a human painter, this can be done in a matter of seconds.
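To give you a small taste of what's under the hood: style-transfer methods typically summarize an image's "style" with a Gram matrix of CNN feature maps, which captures which channels activate together regardless of where in the image they fire. Here's a minimal NumPy sketch; the feature map is random just for illustration (in the real algorithm it would come from a layer of a pretrained network such as VGG):

```python
import numpy as np

# Pretend feature map from one CNN layer: height x width x channels.
# (A real system would extract this from a pretrained network.)
H, W, C = 4, 4, 8
features = np.random.randn(H, W, C)

# Flatten the spatial dimensions: each row is one spatial location's
# vector of C channel activations.
F = features.reshape(-1, C)  # shape (H*W, C)

# Gram matrix: channel-by-channel correlations, which encode "style"
# independently of where things appear in the image.
G = F.T @ F / (H * W)  # shape (C, C)
print(G.shape)  # (8, 8)
```

Matching the Gram matrices of a generated image to those of the style image (while matching raw feature maps to the content image) is the core idea of the technique.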
I will also introduce you to the now-famous GAN architecture (Generative Adversarial Network), where you will learn some of the technology behind how neural networks are used to generate state-of-the-art, photo-realistic images.
We also implement object localization, which is an essential first step toward building a full object detection system.
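Localization means predicting a bounding box along with the class label, and the standard way to score a predicted box against the ground truth is intersection-over-union (IoU). A minimal sketch (plain Python, boxes as corner coordinates, purely illustrative):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    # Intersection area is zero if the boxes don't overlap.
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes overlapping in a 5x5 patch: 25 / (100 + 100 - 25)
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ≈ 0.143
```

IoU near 1 means a tight prediction; detection benchmarks typically count a prediction as correct only when IoU exceeds some threshold such as 0.5.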
Finally, I teach you about the controversial technology behind facial recognition: how to identify a person based on a photo of their face.
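The common recipe is to have a CNN map each face photo to an embedding vector, then declare two photos the same person when their embeddings are close. A minimal NumPy sketch; the embeddings and the 0.6 threshold here are placeholders, not values from a real trained model:

```python
import numpy as np

def same_person(emb_a, emb_b, threshold=0.6):
    """Decide whether two face embeddings belong to the same person
    by thresholding their Euclidean distance.
    The threshold is illustrative, not tuned."""
    return np.linalg.norm(emb_a - emb_b) < threshold

# Placeholder embeddings (a real system would get these from a CNN).
rng = np.random.default_rng(0)
anchor = rng.normal(size=128)
anchor = anchor / np.linalg.norm(anchor)       # unit-normalize

close = anchor + 0.01 * rng.normal(size=128)   # nearly identical face
close = close / np.linalg.norm(close)

far = rng.normal(size=128)                     # unrelated face
far = far / np.linalg.norm(far)

print(same_person(anchor, close))  # True
print(same_person(anchor, far))    # False
```

The hard part, of course, is training the CNN so that photos of the same person land close together and photos of different people land far apart; the comparison step itself is this simple.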
I hope you're excited to learn about these advanced applications of CNNs. I'll see you in class!
- One of the major themes of this course is that we’re moving away from the CNN itself, to systems involving CNNs.
- Instead of focusing on the detailed inner workings of CNNs (which we've already done), we'll focus on high-level building blocks. The result? Almost zero math. (If that's what you're looking for, earlier courses in the series are math-heavy, which was required to understand the inner workings of these building blocks.)
- Another result? No complicated low-level code such as that written in TensorFlow, Theano, or PyTorch (although some optional exercises may use them, for very advanced students). Most of the course will be in Keras, which means a lot of the tedious, repetitive stuff is written for you.
Suggested prerequisites:
- Know how to build, train, and use a CNN using some library (preferably in Python)
- Understand basic theoretical concepts behind convolution and neural networks
- Decent Python coding skills, preferably in data science and the Numpy Stack
Tips for success:
- Watch it at 2x.
- Take handwritten notes. This will drastically increase your ability to retain the information.
- Write down the equations. If you don't, I guarantee it will just look like gibberish.
- Ask lots of questions on the discussion board. The more the better!
- Realize that most exercises will take you days or weeks to complete.
- Write code yourself, don't just sit there and look at my code.