Artificial Intelligence: Reinforcement Learning in Python

Complete guide to artificial intelligence and machine learning, prep for deep reinforcement learning

Course Data

Lectures: 71
Length: 05h 42m
Skill Level: All Levels
Languages: English
Includes: Lifetime access, 30-day money back guarantee

Course Description

When people talk about artificial intelligence, they usually don’t mean supervised and unsupervised machine learning.

These tasks are pretty trivial compared to what we picture AI doing: playing chess and Go, driving cars, and beating video games at a superhuman level.

Reinforcement learning has recently become popular for doing all of that and more.

Much like deep learning, much of the theory was discovered in the 1970s and 80s, but it’s only recently that we’ve been able to observe firsthand the amazing results that are possible.

In 2016, we saw Google’s AlphaGo beat the world champion at Go.

We saw AIs playing video games like Doom and Super Mario.

Self-driving cars have started driving on real roads alongside other drivers, and even carrying passengers (Uber), all without human assistance.

If that sounds amazing, brace yourself for the future, because the law of accelerating returns dictates that this progress will only continue to accelerate.

Learning about supervised and unsupervised machine learning is no small feat. To date I have over SIXTEEN (16!) courses on those topics alone.

And yet reinforcement learning opens up a whole new world. As you’ll learn in this course, the reinforcement learning paradigm is more different from supervised and unsupervised learning than they are from each other.

It’s led to new and amazing insights in both behavioral psychology and neuroscience. As you’ll learn in this course, there are many parallels between teaching an agent and teaching an animal, or even a human. It’s the closest thing we have so far to a true general artificial intelligence.

What’s covered in this course?

  • The multi-armed bandit problem and the explore-exploit dilemma
  • Ways to calculate means and moving averages, and their relationship to stochastic gradient descent (see the sketch after this list)
  • Markov Decision Processes (MDPs)
  • Dynamic Programming
  • Monte Carlo
  • Temporal Difference (TD) Learning
  • Approximation Methods (i.e., how to plug a deep neural network or other differentiable model into your RL algorithm)
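
To make the second bullet concrete, here is a minimal sketch (illustrative data, not code from the course) of the incremental sample-mean update. It has the same form as a stochastic gradient descent step with step size 1/N:

    import numpy as np

    # x_bar_N = x_bar_{N-1} + (1/N) * (x_N - x_bar_{N-1})
    # This is an SGD-style update with step size 1/N; replacing 1/N with a
    # constant alpha gives an exponentially weighted moving average, which
    # is what you want for nonstationary problems.
    rng = np.random.default_rng(0)
    samples = rng.normal(loc=3.0, scale=1.0, size=1000)  # hypothetical data

    mean_estimate = 0.0
    for n, x in enumerate(samples, start=1):
        mean_estimate += (x - mean_estimate) / n  # running mean

    print(mean_estimate)   # close to 3.0
    print(samples.mean())  # same value up to floating-point error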


If you’re ready to take on a brand-new challenge and learn AI techniques you’ve never seen in traditional supervised machine learning, unsupervised machine learning, or even deep learning, then this course is for you.

See you in class!

NOTES:

All the code for this course can be downloaded from my GitHub:

https://github.com/lazyprogrammer/machine_learning_examples

In the directory: rl

Make sure you always "git pull" so you have the latest version!



HARD PREREQUISITES / KNOWLEDGE YOU ARE ASSUMED TO HAVE:

  • calculus
  • object-oriented programming
  • probability
  • Python coding: if/else, loops, lists, dicts, sets
  • Numpy coding: matrix and vector operations, loading a CSV file
  • linear regression
  • gradient descent
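
A quick self-check for the Numpy prerequisite (illustrative only; "data.csv" is a placeholder file name, not a file from the course repo). If this snippet reads naturally, you have the assumed background:

    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    v = np.array([1.0, -1.0])

    print(A @ v)      # matrix-vector product -> [-1. -1.]
    print(A.T)        # transpose
    print(A * 2 + 1)  # elementwise operations

    # Loading a CSV file (placeholder file name):
    # X = np.loadtxt("data.csv", delimiter=",")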


TIPS (for getting through the course):

  • Watch it at 2x.
  • Take handwritten notes. This will drastically increase your ability to retain the information.
  • Write down the equations. If you don't, I guarantee it will just look like gibberish.
  • Ask lots of questions on the discussion board. The more the better!
  • Realize that most exercises will take you days or weeks to complete.
  • Write code yourself, don't just sit there and look at my code.

Lectures

Introduction and Outline

  1. Introduction and Outline (06:22) (FREE preview available)
  2. What is Reinforcement Learning? (13:46)
  3. Where to get the Code (02:41)
  4. Strategy for Passing the Course (05:56)

Return of the Multi-Armed Bandit

  1. Problem Setup and The Explore-Exploit Dilemma (03:55)
  2. Epsilon-Greedy (01:48)
  3. Updating a Sample Mean (01:22)
  4. Comparing Different Epsilons (04:06)
  5. Optimistic Initial Values (02:56)
  6. UCB1 (04:56)
  7. Bayesian / Thompson Sampling (09:52)
  8. Thompson Sampling vs. Epsilon-Greedy vs. Optimistic Initial Values vs. UCB1 (05:11)
  9. Nonstationary Bandits (04:51)
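
To give a flavor of this section, here is a minimal epsilon-greedy sketch (illustrative, with made-up arm payouts; not the course's code):

    import numpy as np

    rng = np.random.default_rng(42)
    true_means = [0.2, 0.5, 0.75]   # hypothetical arms
    n_arms = len(true_means)
    Q = np.zeros(n_arms)            # estimated value of each arm
    N = np.zeros(n_arms)            # pull counts
    eps = 0.1

    for t in range(10_000):
        if rng.random() < eps:              # explore
            a = int(rng.integers(n_arms))
        else:                               # exploit
            a = int(np.argmax(Q))
        r = rng.normal(true_means[a], 1.0)  # noisy reward
        N[a] += 1
        Q[a] += (r - Q[a]) / N[a]           # incremental sample mean

    print(Q)  # approaches [0.2, 0.5, 0.75]; noisier for rarely pulled arms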

Build an Intelligent Tic-Tac-Toe Agent

  1. Naive Solution to Tic-Tac-Toe (03:50)
  2. Components of a Reinforcement Learning System (08:00)
  3. Notes on Assigning Rewards (02:41)
  4. The Value Function and Your First Reinforcement Learning Algorithm (16:33)
  5. Tic Tac Toe Code: Outline (03:16)
  6. Tic Tac Toe Code: Representing States (02:56)
  7. Tic Tac Toe Code: Enumerating States Recursively (06:14)
  8. Tic Tac Toe Code: The Environment (06:36)
  9. Tic Tac Toe Code: The Agent (05:48)
  10. Tic Tac Toe Code: Main Loop and Demo (06:02)
  11. Tic Tac Toe Summary (05:25)
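
The heart of the agent in this section is a value-function update of the form V(s) <- V(s) + alpha * (V(s') - V(s)), applied backwards over the states visited in an episode. A stripped-down sketch (hypothetical state names, not the full agent class):

    alpha = 0.1
    V = {}                               # state -> estimated value
    state_history = ["s0", "s1", "s2"]   # states visited in one episode
    final_reward = 1.0                   # e.g. the agent won

    target = final_reward
    for s in reversed(state_history):
        value = V.get(s, 0.5)            # default initial value
        value += alpha * (target - value)
        V[s] = value
        target = value                   # bootstrap toward the successor

    print(V)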

Markov Decision Processes

  1. Gridworld (02:13)
  2. The Markov Property (04:36)
  3. Defining and Formalizing the MDP (04:10)
  4. Future Rewards (03:16)
  5. Value Functions (04:38)
  6. Optimal Policy and Optimal Value Function (04:09)
  7. MDP Summary (01:35)
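
A worked example of the discounted return from the "Future Rewards" lecture, G_t = r_{t+1} + gamma*r_{t+2} + gamma^2*r_{t+3} + ..., with made-up rewards:

    gamma = 0.9
    rewards = [1.0, 0.0, 0.0, 5.0]  # hypothetical episode

    G = 0.0
    for r in reversed(rewards):     # work backwards: G = r + gamma * G
        G = r + gamma * G

    print(G)  # 1 + 0.9*0 + 0.81*0 + 0.729*5 = 4.645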

Dynamic Programming

  1. Intro to Dynamic Programming and Iterative Policy Evaluation (03:06)
  2. Gridworld in Code (05:47)
  3. Iterative Policy Evaluation in Code (06:24)
  4. Policy Improvement (02:51)
  5. Policy Iteration (02:00)
  6. Policy Iteration in Code (03:46)
  7. Policy Iteration in Windy Gridworld (04:57)
  8. Value Iteration (03:58)
  9. Value Iteration in Code (02:14)
  10. Dynamic Programming Summary (05:14)
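
A minimal value iteration sketch on a toy one-dimensional gridworld (an illustrative toy, not the course's gridworld code): states 0..3, state 3 is terminal, and entering it earns reward +1:

    import numpy as np

    gamma = 0.9
    n_states = 4  # state 3 is terminal

    def step(s, a):
        """Deterministic transitions: a=0 moves left, a=1 moves right."""
        s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s2 == n_states - 1 and s != n_states - 1 else 0.0
        return s2, r

    V = np.zeros(n_states)
    for _ in range(100):              # iterate the Bellman optimality backup
        for s in range(n_states - 1): # terminal state stays at V = 0
            V[s] = max(r + gamma * V[s2]
                       for s2, r in (step(s, a) for a in (0, 1)))

    print(V)  # approximately [0.81, 0.9, 1.0, 0.0]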

Monte Carlo

  1. Monte Carlo Intro (03:10)
  2. Monte Carlo Policy Evaluation (05:45)
  3. Monte Carlo Policy Evaluation in Code (03:35)
  4. Policy Evaluation in Windy Gridworld (03:38)
  5. Monte Carlo Control (05:59)
  6. Monte Carlo Control in Code (04:04)
  7. Monte Carlo Control without Exploring Starts (02:58)
  8. Monte Carlo Control without Exploring Starts in Code (02:51)
  9. Monte Carlo Summary (03:42)
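
A sketch of first-visit Monte Carlo prediction (hypothetical episodes of (state, reward) pairs): estimate V(s) as the average of the returns that followed the first visit to s in each episode:

    from collections import defaultdict

    gamma = 0.9
    episodes = [                               # made-up sample data
        [("A", 0.0), ("B", 0.0), ("C", 1.0)],
        [("A", 0.0), ("C", 1.0)],
    ]

    returns = defaultdict(list)
    for episode in episodes:
        G = 0.0
        first_visit = {}
        for s, r in reversed(episode):   # compute returns backwards
            G = r + gamma * G
            first_visit[s] = G           # earliest visit overwrites later ones
        for s, G_s in first_visit.items():
            returns[s].append(G_s)

    V = {s: sum(g) / len(g) for s, g in returns.items()}
    print(V)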

Temporal Difference Learning

  1. Temporal Difference Intro (01:42)
  2. TD(0) Prediction (03:46)
  3. TD(0) Prediction in Code (02:27)
  4. SARSA (05:15)
  5. SARSA in Code (03:38)
  6. Q Learning (03:05)
  7. Q Learning in Code (02:13)
  8. TD Summary (02:34)
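
The Q-learning update shown in isolation (a sketch; the environment loop is omitted). It is off-policy because the target uses the greedy action in the next state, regardless of what the agent actually does next:

    import numpy as np

    alpha, gamma = 0.1, 0.9
    n_states, n_actions = 5, 2
    Q = np.zeros((n_states, n_actions))

    def q_learning_update(s, a, r, s2, done):
        # target = r + gamma * max_a' Q(s', a'), or just r at episode end
        target = r if done else r + gamma * np.max(Q[s2])
        Q[s, a] += alpha * (target - Q[s, a])

    # hypothetical transition: (s=0, a=1) -> reward 1.0, next state 2
    q_learning_update(0, 1, 1.0, 2, done=False)
    print(Q[0, 1])  # 0.1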

Approximation Methods

  1. Approximation Intro (04:11)
  2. Linear Models for Reinforcement Learning (04:16)
  3. Features (04:02)
  4. Monte Carlo Prediction with Approximation (01:54)
  5. Monte Carlo Prediction with Approximation in Code (02:58)
  6. TD(0) Semi-Gradient Prediction (04:22)
  7. Semi-Gradient SARSA (03:08)
  8. Semi-Gradient SARSA in Code (04:08)
  9. Course Summary and Next Steps (08:38)
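
A sketch of the semi-gradient SARSA update with a linear model, the centerpiece of this section. Here Q(s, a) = w . x(s, a); the feature map below is a made-up stand-in. "Semi-gradient" means we differentiate the prediction but not the bootstrapped target:

    import numpy as np

    alpha, gamma = 0.01, 0.9
    n_features = 8
    w = np.zeros(n_features)

    def x(s, a):
        """Hypothetical fixed feature map for the pair (s, a)."""
        rng = np.random.default_rng((s * 31 + a) % (2**32))
        return rng.normal(size=n_features)

    def sarsa_update(s, a, r, s2, a2, done):
        global w
        q = w @ x(s, a)
        target = r if done else r + gamma * (w @ x(s2, a2))
        w += alpha * (target - q) * x(s, a)  # grad of q w.r.t. w is x(s, a)

    sarsa_update(s=0, a=1, r=1.0, s2=2, a2=0, done=False)
    print(np.round(w[:3], 4))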

Appendix

  1. How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow (17:22)
  2. How to Code Yourself (part 1) (15:54)
  3. How to Code Yourself (part 2) (09:23)
  4. Where to get discount coupons and FREE deep learning material (02:20)