Artificial Intelligence: Reinforcement Learning in Python

Complete guide to artificial intelligence and machine learning, prep for deep reinforcement learning

Register for this Course

$54.99 $199.99 USD 73% OFF!



Course Data

Lectures: 178
Length: 19h 45m
Skill Level: All Levels
Languages: English
Includes: Lifetime access, certificate of completion (shareable on LinkedIn, Facebook, and Twitter), Q&A forum

Course Description

When people talk about artificial intelligence, they usually don’t mean supervised and unsupervised machine learning.

These tasks are pretty trivial compared to what we think of AIs doing: playing chess and Go, driving cars, and beating video games at a superhuman level.

Reinforcement learning has recently become popular for doing all of that and more.

Much like deep learning, a lot of the theory was developed in the 70s and 80s, but only recently have we been able to observe firsthand the amazing results that are possible.

In 2016 we saw Google’s AlphaGo beat the world champion in Go.

We saw AIs playing video games like Doom and Super Mario.

Self-driving cars have started driving on real roads with other drivers and even carrying passengers (Uber), all without human assistance.

If that sounds amazing, brace yourself for the future because the law of accelerating returns dictates that this progress is only going to continue to increase exponentially.

Learning about supervised and unsupervised machine learning is no small feat. To date I have over TWENTY FIVE (25!) courses just on those topics alone.

And yet reinforcement learning opens up a whole new world. As you’ll learn in this course, the reinforcement learning paradigm is vastly different from both supervised and unsupervised learning.

It’s led to new and amazing insights in both behavioral psychology and neuroscience. As you’ll learn in this course, there are many analogous processes between teaching an agent and teaching an animal or even a human. It’s the closest thing we have so far to a true general artificial intelligence.

What’s covered in this course?

  • The multi-armed bandit problem and the explore-exploit dilemma
  • Ways to calculate means and moving averages and their relationship to stochastic gradient descent (see the sketch after this list)
  • Markov Decision Processes (MDPs)
  • Dynamic Programming
  • Monte Carlo
  • Temporal Difference (TD) Learning
  • Approximation Methods (i.e. how to plug a deep neural network or other differentiable model into your RL algorithm)
  • How to use OpenAI Gym, with zero code changes
  • Project: Apply Q-Learning to build a stock trading bot
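
To make the first two bullets concrete, here is a minimal sketch of an epsilon-greedy bandit that keeps a running sample mean for each arm. The win rates and the epsilon value are made-up numbers for illustration, not code from the course; the incremental mean update at the end is the form whose connection to stochastic gradient descent the course explores.

```python
import numpy as np

# A minimal epsilon-greedy bandit sketch. The win rates below are made-up
# values for illustration; the course develops this example step by step.
np.random.seed(0)
TRUE_MEANS = [0.2, 0.5, 0.75]   # hypothetical win rates for 3 slot machines
EPS = 0.1                        # probability of exploring a random arm
N_TRIALS = 10_000

estimates = np.zeros(len(TRUE_MEANS))  # running sample mean per arm
counts = np.zeros(len(TRUE_MEANS))     # number of pulls per arm

for _ in range(N_TRIALS):
    # explore with probability EPS, otherwise exploit the best current estimate
    if np.random.random() < EPS:
        arm = np.random.randint(len(TRUE_MEANS))
    else:
        arm = int(np.argmax(estimates))

    reward = float(np.random.random() < TRUE_MEANS[arm])  # Bernoulli reward

    # incremental sample mean: mean_new = mean_old + (1/N) * (x - mean_old),
    # i.e. the same shape as a stochastic gradient descent step with rate 1/N
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print("estimated win rates:", np.round(estimates, 3))
print("pulls per arm:", counts)
```

With more trials, the estimates converge toward the true win rates while most pulls concentrate on the best arm, which is the explore-exploit trade-off in action.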


If you’re ready to take on a brand new challenge and learn about AI techniques that you’ve never seen before in traditional supervised machine learning, unsupervised machine learning, or even deep learning, then this course is for you.

See you in class!



Suggested Prerequisites:

  • calculus
  • object-oriented programming
  • probability
  • Python coding: if/else, loops, lists, dicts, sets
  • Numpy coding: matrix and vector operations, loading a CSV file
  • linear regression
  • gradient descent

Testimonials and Success Stories


I am one of your students. Yesterday, I presented my paper at ICCV 2019. You have a significant part in this, so I want to sincerely thank you for your in-depth guidance to the puzzle of deep learning. Please keep making awesome courses that teach us!

I just watched your short video on “Predicting Stock Prices with LSTMs: One Mistake Everyone Makes.” Giggled with delight.

You probably already know this, but some of us really and truly appreciate you. BTW, I spent a reasonable amount of time making a learning roadmap based on your courses and have started the journey.

Looking forward to your new stuff.

Thank you for doing this! I wish everyone who calls themselves a Data Scientist would take the time to do this, either as a refresher or to learn the material. I have had to work with so many people in prior roles that wanted to jump right into machine learning on my teams and didn’t even understand the first thing about the basics you have in here!!

I am signing up so that I have an easy refresher when needed and can see what you consider important, as well as to support your great work. Thank you.

Thank you, I think you have opened my eyes. I was using APIs to implement deep learning algorithms and each time I felt I was missing out on some things. So thank you very much.

I have been intending to send you an email expressing my gratitude for the work that you have done to create all of these data science courses in Machine Learning and Artificial Intelligence. I have been looking long and hard for courses that have mathematical rigor relative to the application of the ML & AI algorithms as opposed to just exhibiting some 'canned routine' and then voilà, here is your neural network or logistic regression. ...


I have now taken a few classes from some well-known AI profs at Stanford (Andrew Ng, Christopher Manning, …) with an overall average mark in the mid-90s. Just so you know, you are as good as any of them. But I hope that you already know that.

I wish you a happy and safe holiday season. I am glad you chose to share your knowledge with the rest of us.

Hi Sir, I am a student from India. I've been wanting to write a note to thank you for the courses that you've made because they have changed my career. I wanted to work in the field of data science but I did not have proper guidance. Then I stumbled upon your "Logistic Regression" course in March and since then, there's been no looking back. I learned ANNs, CNNs, RNNs, Tensorflow, NLP and whatnot by going through your lectures. The knowledge that I gained enabled me to get a job as a Business Technology Analyst at one of my dream firms even in the midst of this pandemic. For that, I shall always be grateful to you. Please keep making more courses with the level of detail that you do in low-level libraries like Theano.

I just wanted to reach out and thank you for your most excellent course that I am nearing finishing.

And, I couldn't agree more with some of your "rants", and found myself nodding vigorously!

You are an excellent teacher, and a rare breed.

And your courses are, frankly, more digestible and teach a student far more than some of the top-tier courses from Ivy League schools I have taken in the past.

(I plan to go through many more courses, one by one!)

I know you must be deluged with complaints in spite of having the best content around. That's just human nature.

Also, satisfied people rarely take the time to write, so I thought I will write in for a change. :)

Hello, Lazy Programmer!

In the process of completing my Master’s at Hunan University, China, I am writing this feedback to you in order to express my deep gratitude for all the knowledge and skills I have obtained studying your courses and following your recommendations.

The first course of yours I took was on Convolutional Neural Networks (“Deep Learning p.5”, as far as I remember). Answering one of my questions on the Q&A board, you suggested I should start from the beginning – the Linear and Logistic Regression courses. Although I assumed I already knew many basic things at that time, I overcame my “pride” and decided to start my journey in Deep Learning from scratch. ...


By the way, if you are interested to hear: I used the HMM classification, as it was in your course (95% of the script; I made little adjustments there), for the customer-care department of a big, well-known fintech company, to predict who will call them, so they can call the customer before the rush hours and improve the service. Instead of a poem, I had a sequence of the last 24 hours' events that the customer had, like: "Loaded money", "Usage in the food service", "Entering the app", "Trying to change the password", etc. The label was whether the customer called or didn't call. The outcome was great. They use it for their VIP customers. Our data science department and I got a lot of praise.

Lectures

Welcome

5 Lectures · 38min
  1. Introduction (03:14) (FREE preview available)
  2. Course Outline and Big Picture (08:53)
  3. Where to get the Code (04:36)
  4. How to succeed in this course (05:52)
  5. Warmup (15:36)

Return of the Multi-Armed Bandit

26 Lectures · 02hr 56min
  1. Section Introduction: The Explore-Exploit Dilemma (10:17)
  2. Applications of the Explore-Exploit Dilemma (08:00)
  3. Epsilon-Greedy Theory (07:04)
  4. Calculating a Sample Mean (pt 1) (05:56)
  5. Epsilon-Greedy Beginner's Exercise Prompt (05:05)
  6. Designing Your Bandit Program (04:09)
  7. Epsilon-Greedy in Code (07:12)
  8. Comparing Different Epsilons (06:02)
  9. Optimistic Initial Values Theory (05:40)
  10. Optimistic Initial Values Beginner's Exercise Prompt (02:26)
  11. Optimistic Initial Values Code (04:18)
  12. UCB1 Theory (14:32)
  13. UCB1 Beginner's Exercise Prompt (02:14)
  14. UCB1 Code (03:28)
  15. Bayesian Bandits / Thompson Sampling Theory (pt 1) (12:43)
  16. Bayesian Bandits / Thompson Sampling Theory (pt 2) (17:35)
  17. Thompson Sampling Beginner's Exercise Prompt (02:50)
  18. Thompson Sampling Code (05:03)
  19. Thompson Sampling With Gaussian Reward Theory (11:24)
  20. Thompson Sampling With Gaussian Reward Code (06:18)
  21. Exercise on Gaussian Rewards (01:21)
  22. Why don't we just use a library? (05:40)
  23. Nonstationary Bandits (07:11)
  24. Bandit Summary, Real Data, and Online Learning (06:30)
  25. (Optional) Alternative Bandit Designs (10:05)
  26. Suggestion Box (03:10)

High Level Overview of Reinforcement Learning and Course Outline

3 Lectures · 23min
  1. What is Reinforcement Learning? (08:09)
  2. On Unusual or Unexpected Strategies of RL (06:10)
  3. From Bandits to Full Reinforcement Learning (08:42)

Build an Intelligent Tic-Tac-Toe Agent (VIP-only)

12 Lectures · 01hr 10min
  1. Naive Solution to Tic-Tac-Toe (03:51)
  2. Components of a Reinforcement Learning System (08:01)
  3. Notes on Assigning Rewards (02:42)
  4. The Value Function and Your First Reinforcement Learning Algorithm (16:34)
  5. Tic Tac Toe Code: Outline (03:17)
  6. Tic Tac Toe Code: Representing States (02:57)
  7. Tic Tac Toe Code: Enumerating States Recursively (06:15)
  8. Tic Tac Toe Code: The Environment (06:37)
  9. Tic Tac Toe Code: The Agent (05:49)
  10. Tic Tac Toe Code: Main Loop and Demo (06:03)
  11. Tic Tac Toe Summary (05:26)
  12. Tic Tac Toe: Exercise (03:21)

Markov Decision Processes (MDPs)

14 Lectures · 01hr 59min
  1. MDP Section Introduction (06:19)
  2. Gridworld (12:35)
  3. Choosing Rewards (03:58)
  4. The Markov Property (06:12)
  5. Markov Decision Processes (MDPs) (14:42)
  6. Future Rewards (09:34)
  7. Value Functions (05:07)
  8. The Bellman Equation (pt 1) (08:46)
  9. The Bellman Equation (pt 2) (06:42)
  10. The Bellman Equation (pt 3) (06:09)
  11. Bellman Examples (22:25)
  12. Optimal Policy and Optimal Value Function (pt 1) (09:17)
  13. Optimal Policy and Optimal Value Function (pt 2) (04:36)
  14. MDP Summary (02:58)

Dynamic Programming

14 Lectures · 02hr 04min
  1. Dynamic Programming Section Introduction (08:59)
  2. Iterative Policy Evaluation (15:36)
  3. Designing Your RL Program (05:00)
  4. Gridworld in Code (11:37)
  5. Iterative Policy Evaluation in Code (12:17)
  6. Windy Gridworld in Code (07:47)
  7. Iterative Policy Evaluation for Windy Gridworld in Code (07:14)
  8. Policy Improvement (11:23)
  9. Policy Iteration (07:57)
  10. Policy Iteration in Code (08:27)
  11. Policy Iteration in Windy Gridworld (08:50)
  12. Value Iteration (07:39)
  13. Value Iteration in Code (06:36)
  14. Dynamic Programming Summary (04:57)

Monte Carlo

8 Lectures · 58min
  1. Monte Carlo Intro (09:21)
  2. Monte Carlo Policy Evaluation (10:52)
  3. Monte Carlo Policy Evaluation in Code (07:52)
  4. Monte Carlo Control (09:00)
  5. Monte Carlo Control in Code (08:51)
  6. Monte Carlo Control without Exploring Starts (04:41)
  7. Monte Carlo Control without Exploring Starts in Code (05:40)
  8. Monte Carlo Summary (01:53)

Temporal Difference Learning

8 Lectures · 37min
  1. Temporal Difference Introduction (03:55)
  2. TD(0) Prediction (05:24)
  3. TD(0) Prediction in Code (04:54)
  4. SARSA (04:36)
  5. SARSA in Code (06:20)
  6. Q Learning (04:55)
  7. Q Learning in Code (05:02)
  8. TD Learning Section Summary (02:27)
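
For readers curious what the Q Learning lectures build toward, here is a rough sketch of the standard tabular Q-Learning update on a made-up toy chain environment (this is not one of the course's gridworlds; the state space, rewards, and hyperparameters are assumptions for illustration). The course develops the update carefully and later swaps the table for a function approximator.

```python
import numpy as np

# A rough sketch of tabular Q-Learning on a tiny 5-state chain (a made-up toy
# environment, not one from the course): walk right to reach the goal for +1.
N_STATES = 5          # states 0..4; state 4 is terminal
N_ACTIONS = 2         # 0 = left, 1 = right
GAMMA = 0.9           # discount factor
ALPHA = 0.1           # learning rate
EPS = 0.1             # epsilon-greedy exploration

Q = np.zeros((N_STATES, N_ACTIONS))

def step(s, a):
    """Deterministic toy transitions; reward +1 only on reaching the goal."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == N_STATES - 1), s2 == N_STATES - 1  # (state, reward, done)

def greedy(q_row):
    """Argmax with random tie-breaking so unexplored states aren't stuck on 'left'."""
    best = np.flatnonzero(q_row == q_row.max())
    return int(np.random.choice(best))

for episode in range(500):
    s, done = 0, False
    while not done:
        a = np.random.randint(N_ACTIONS) if np.random.random() < EPS else greedy(Q[s])
        s2, r, done = step(s, a)
        # Q-Learning (off-policy TD) target uses the max over next-state actions
        target = r + (0.0 if done else GAMMA * np.max(Q[s2]))
        Q[s, a] += ALPHA * (target - Q[s, a])
        s = s2

print(np.round(Q, 2))  # the 'right' column should dominate in every state
```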

Approximation Methods

11 Lectures · 01hr 13min
  1. Approximation Methods Section Introduction (04:19)
  2. Linear Models for Reinforcement Learning (08:32)
  3. Feature Engineering (10:16)
  4. Approximation Methods for Prediction (09:55)
  5. Approximation Methods for Prediction Code (08:26)
  6. Approximation Methods for Control (04:41)
  7. Approximation Methods for Control Code (08:54)
  8. CartPole (05:34)
  9. CartPole Code (05:49)
  10. Approximation Methods Exercise (04:07)
  11. Approximation Methods Section Summary (03:05)
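
If you have never used OpenAI Gym (mentioned in the CartPole lectures above and in the course summary), the agent-environment loop it exposes is only a few lines. The sketch below is not course code: it assumes the classic gym API in which env.step returns four values, whereas newer gymnasium releases return five and split done into terminated and truncated.

```python
import gym

# A minimal CartPole episode with a random policy (no learning), just to show
# the environment interface that a learning agent would plug into.
env = gym.make("CartPole-v1")
state = env.reset()

done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()             # random action for illustration
    state, reward, done, info = env.step(action)   # classic 4-tuple gym API
    total_reward += reward

print("episode reward:", total_reward)
env.close()
```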

Interlude: Common Beginner Questions

1 Lecture · 07min
  1. This Course vs. RL Book: What's the Difference? (07:11)

Stock Trading Project with Reinforcement Learning

10 Lectures · 01hr 21min
  1. Beginners, halt! Stop here if you skipped ahead (14:10)
  2. Stock Trading Project Section Introduction (05:15)
  3. Data and Environment (12:23)
  4. How to Model Q for Q-Learning (09:38)
  5. Design of the Program (06:46)
  6. Code pt 1 (08:00)
  7. Code pt 2 (09:41)
  8. Code pt 3 (04:29)
  9. Code pt 4 (07:17)
  10. Stock Trading Project Discussion (03:39)

Return of the Multi-Armed Bandit (Legacy)

9 Lectures · 39min
  1. Problem Setup and The Explore-Exploit Dilemma (03:56)
  2. Epsilon-Greedy (01:49)
  3. Updating a Sample Mean (01:23)
  4. Comparing Different Epsilons (04:07)
  5. Optimistic Initial Values (02:57)
  6. UCB1 (04:57)
  7. Bayesian / Thompson Sampling (09:53)
  8. Thompson Sampling vs. Epsilon-Greedy vs. Optimistic Initial Values vs. UCB1 (05:12)
  9. Nonstationary Bandits (04:52)

Markov Decision Processes (Legacy)

9 Lectures · 48min
  1. Defining Some Terms (07:02)
  2. Gridworld (02:14)
  3. The Markov Property (04:37)
  4. Defining and Formalizing the MDP (04:11)
  5. Future Rewards (03:17)
  6. Value Function Introduction (12:04)
  7. Value Functions (09:16)
  8. Optimal Policy and Optimal Value Function (04:10)
  9. MDP Summary (01:36)

Dynamic Programming (Legacy)

10 Lectures · 40min
  1. Intro to Dynamic Programming and Iterative Policy Evaluation (03:07)
  2. Gridworld in Code (05:48)
  3. Iterative Policy Evaluation in Code (06:25)
  4. Policy Improvement (02:52)
  5. Policy Iteration (02:01)
  6. Policy Iteration in Code (03:47)
  7. Policy Iteration in Windy Gridworld (04:58)
  8. Value Iteration (03:59)
  9. Value Iteration in Code (02:15)
  10. Dynamic Programming Summary (05:15)

Monte Carlo (Legacy)

9 Lectures · 35min
  1. Monte Carlo Intro (03:11)
  2. Monte Carlo Policy Evaluation (05:46)
  3. Monte Carlo Policy Evaluation in Code (03:36)
  4. Policy Evaluation in Windy Gridworld (03:39)
  5. Monte Carlo Control (06:00)
  6. Monte Carlo Control in Code (04:05)
  7. Monte Carlo Control without Exploring Starts (02:59)
  8. Monte Carlo Control without Exploring Starts in Code (02:52)
  9. Monte Carlo Summary (03:43)

Temporal Difference Learning (Legacy)

8 Lectures · 24min
  1. Temporal Difference Intro (01:43)
  2. TD(0) Prediction (03:47)
  3. TD(0) Prediction in Code (02:28)
  4. SARSA (05:16)
  5. SARSA in Code (03:39)
  6. Q Learning (03:06)
  7. Q Learning in Code (02:14)
  8. TD Summary (02:35)

Approximation Methods (Legacy)

9 Lectures · 37min
  1. Approximation Intro (04:12)
  2. Linear Models for Reinforcement Learning (04:17)
  3. Features (04:03)
  4. Monte Carlo Prediction with Approximation (01:55)
  5. Monte Carlo Prediction with Approximation in Code (02:59)
  6. TD(0) Semi-Gradient Prediction (04:23)
  7. Semi-Gradient SARSA (03:09)
  8. Semi-Gradient SARSA in Code (04:09)
  9. Course Summary and Next Steps (08:39)

Setting Up Your Environment (Appendix/FAQ by Student Request)

2 Lectures · 37min
  1. Anaconda Environment Setup (20:21)
  2. How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow (17:33)

Extra Help With Python Coding for Beginners (Appendix/FAQ by Student Request)

4 Lectures · 42min
  1. How to Code Yourself (part 1) (15:55)
  2. How to Code Yourself (part 2) (09:24)
  3. Proof that using Jupyter Notebook is the same as not using it (12:29)
  4. Python 2 vs Python 3 (04:38)

Effective Learning Strategies for Machine Learning (Appendix/FAQ by Student Request)

4 Lectures · 59min
  1. How to Succeed in this Course (Long Version) (10:25)
  2. Is this for Beginners or Experts? Academic or Practical? Fast or slow-paced? (22:05)
  3. What order should I take your courses in? (part 1) (11:19)
  4. What order should I take your courses in? (part 2) (16:07)

Appendix / FAQ Finale

2 Lectures · 08min
  1. What is the Appendix? (02:48)
  2. Where to get discount coupons and FREE deep learning material (05:31)

Extras

  • Monte Carlo with Importance Sampling for Reinforcement Learning
  • Reinforcement Learning Algorithms: Expected SARSA