This course is all about **A/B testing**.

A/B testing is used everywhere. Marketing, retail, newsfeeds, online advertising, and more.

A/B testing is all about comparing things.

If you’re a data scientist and you want to tell the rest of the company, “logo A is better than logo B”, you can’t just assert it; you have to prove it with numbers and statistics.

Traditional A/B testing has been around for a long time, and it’s full of approximations and confusing definitions.
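To make the traditional approach concrete, here is a minimal sketch of a chi-square test comparing the click-through rates of two variants. The click counts are invented for illustration, and the code is mine rather than the course’s; for a 2x2 table there is one degree of freedom, so the p-value can be computed from the chi-square tail via `erfc`.

```python
from math import erfc, sqrt

# Hypothetical click data for two logos (numbers invented for illustration)
clicks_a, views_a = 36, 1000
clicks_b, views_b = 54, 1000

# 2x2 contingency table: [clicks, no-clicks] per variant
observed = [
    [clicks_a, views_a - clicks_a],
    [clicks_b, views_b - clicks_b],
]

# Expected counts under the null hypothesis (both variants share one CTR)
total = views_a + views_b
row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / total) ** 2
    / (row_totals[i] * col_totals[j] / total)
    for i in range(2) for j in range(2)
)

# A 2x2 table has 1 degree of freedom, so the p-value is the
# chi-square(1) tail probability, available via erfc
p_value = erfc(sqrt(chi2 / 2))
print(f"chi2 = {chi2:.3f}, p-value = {p_value:.4f}")
```

Notice that the whole procedure boils down to a single number compared against a significance threshold; the rest of the course examines what that number does and doesn’t tell you.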

In this course, while we will do traditional A/B testing in order to appreciate its complexity, what we will eventually get to is the **Bayesian machine learning** way of doing things.

First, we’ll see if we can improve on traditional A/B testing with adaptive methods. These all help you solve the **explore-exploit** dilemma.

You’ll learn about the **epsilon-greedy** algorithm, which you may have heard about in the context of **reinforcement learning**.
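The idea behind epsilon-greedy can be sketched in a few lines: with probability epsilon, pick a variant at random (explore); otherwise, pick the variant with the best estimated click-through rate so far (exploit). The ground-truth CTRs and all names below are invented for illustration, not taken from the course’s code.

```python
import random

TRUE_CTRS = [0.03, 0.05]   # hypothetical ground truth, unknown to the agent
EPSILON = 0.1              # fraction of the time we explore at random

counts = [0, 0]            # impressions served per variant
estimates = [0.0, 0.0]     # running estimate of each variant's CTR

random.seed(0)
for _ in range(10_000):
    if random.random() < EPSILON:
        choice = random.randrange(len(TRUE_CTRS))   # explore
    else:
        choice = estimates.index(max(estimates))    # exploit
    reward = 1 if random.random() < TRUE_CTRS[choice] else 0
    counts[choice] += 1
    # incremental mean update: new_mean = old_mean + (x - old_mean) / n
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print(counts, [round(e, 3) for e in estimates])
```

Unlike a fixed 50/50 split test, this adaptive loop shifts traffic toward the better variant while the experiment is still running.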

We’ll improve upon the epsilon-greedy algorithm with a similar algorithm called UCB1.
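UCB1 replaces the random exploration of epsilon-greedy with an optimism bonus: each arm’s score is its sample mean plus a confidence term that shrinks the more that arm is played. A sketch under the same invented setup as above (again, my code, not the course’s):

```python
import math
import random

TRUE_CTRS = [0.03, 0.05]   # hypothetical ground truth
counts = [0, 0]
estimates = [0.0, 0.0]

def ucb1_choice():
    # play every arm once first (the formula needs n > 0)
    for j, n in enumerate(counts):
        if n == 0:
            return j
    total = sum(counts)
    # UCB1 score: sample mean plus an upper-confidence bonus
    # that shrinks as an arm is played more often
    scores = [m + math.sqrt(2 * math.log(total) / n)
              for m, n in zip(estimates, counts)]
    return scores.index(max(scores))

random.seed(1)
for _ in range(10_000):
    j = ucb1_choice()
    reward = 1 if random.random() < TRUE_CTRS[j] else 0
    counts[j] += 1
    estimates[j] += (reward - estimates[j]) / counts[j]

print(counts)
```

Because the bonus decays deterministically, UCB1 has no epsilon to tune: exploration happens exactly where the uncertainty is largest.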

Finally, we’ll improve on both of those by using a fully Bayesian approach.
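The Bayesian approach (Thompson sampling) keeps a full posterior over each variant’s CTR instead of a point estimate. With Bernoulli rewards and a Beta prior, the posterior stays a Beta distribution, so the update is just two counters. A sketch, again with invented numbers:

```python
import random

TRUE_CTRS = [0.03, 0.05]   # hypothetical ground truth
alphas = [1, 1]            # Beta(1, 1) prior, i.e. uniform over [0, 1]
betas = [1, 1]

random.seed(2)
for _ in range(10_000):
    # sample one plausible CTR from each posterior, play the winner
    samples = [random.betavariate(a, b) for a, b in zip(alphas, betas)]
    j = samples.index(max(samples))
    if random.random() < TRUE_CTRS[j]:
        alphas[j] += 1   # observed a click
    else:
        betas[j] += 1    # observed no click

posterior_means = [a / (a + b) for a, b in zip(alphas, betas)]
print(posterior_means)
```

Exploration here is automatic: an arm with a wide posterior occasionally produces a high sample and gets played, and the posterior narrows as evidence accumulates.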

Why is the Bayesian method interesting to us in machine learning?

It’s an entirely different way of thinking about probability.

It’s a paradigm shift.

You’ll probably need to come back to this course several times before it fully sinks in.

It’s also powerful, and many machine learning experts say they “subscribe to the Bayesian school of thought”.

In sum, it’s going to give us a lot of powerful new tools that we can use in machine learning.

The things you’ll learn in this course aren’t only applicable to A/B testing; rather, we’re using A/B testing as a concrete example of how Bayesian techniques can be applied.

You’ll learn these fundamental tools of the Bayesian method through the example of A/B testing, and then you’ll be able to carry those Bayesian techniques to more advanced machine learning models in the future.

See you in class!


Suggested Prerequisites:

- calculus
- probability (continuous and discrete distributions, joint, marginal, conditional, PDF, PMF, CDF, Bayes rule)
- Python coding: if/else, loops, lists, dicts, sets
- Numpy, Scipy, Matplotlib

Tips for success:

- Watch it at 2x.
- Take handwritten notes. This will drastically increase your ability to retain the information.
- Write down the equations. If you don't, I guarantee it will just look like gibberish.
- Ask lots of questions on the discussion board. The more the better!
- Realize that most exercises will take you days or weeks to complete.
- Write code yourself, don't just sit there and look at my code.

- What's this course all about? (03:55) (FREE preview available)
- Where to get the code for this course (05:01)
- How to succeed in this course (05:18)

- Real-World Examples of A/B Testing (06:46)
- What is Bayesian Machine Learning? (11:33)

- Review Section Introduction (01:22)
- Probability and Bayes' Rule Review (05:27)
- Calculating Probabilities - Practice (10:25)
- The Gambler (05:42)
- The Monty Hall Problem (07:01)
- Maximum Likelihood Estimation - Bernoulli (11:42)
- Click-Through Rates (CTR) (02:08)
- Maximum Likelihood Estimation - Gaussian (pt 1) (10:07)
- Maximum Likelihood Estimation - Gaussian (pt 2) (08:40)
- CDFs and Percentiles (09:38)
- Probability Review in Code (10:24)
- Probability Review Section Summary (05:12)
- Beginners: Fix Your Understanding of Statistics vs Machine Learning (06:47)
- Suggestion Box (03:03)

- A/B Testing Problem Setup (04:26)
- Simple A/B Testing Recipe (05:07)
- P-Values (03:53)
- Test Characteristics, Assumptions, and Modifications (06:45)
- t-test in Code (03:23)
- t-test Exercise (05:18)
- 0.01 vs 0.011 - Why should we care? (01:46)
- A/B Test for Click-Through Rates (Chi-Square Test) (06:04)
- CTR A/B Test in Code (08:49)
- Chi-Square Exercise (02:33)
- A/B/C/D/... Testing - The Bonferroni Correction (02:20)
- Statistical Power (03:08)
- A/B Testing Pitfalls (04:01)
- Traditional A/B Testing Summary (03:42)

- Section Introduction: The Explore-Exploit Dilemma (10:17)
- Applications of the Explore-Exploit Dilemma (08:00)
- Epsilon-Greedy Theory (07:04)
- Calculating a Sample Mean (pt 1) (05:56)
- Epsilon-Greedy Beginner's Exercise Prompt (05:05)
- Designing Your Bandit Program (04:09)
- Epsilon-Greedy in Code (07:12)
- Comparing Different Epsilons (06:02)
- Optimistic Initial Values Theory (05:40)
- Optimistic Initial Values Beginner's Exercise Prompt (02:26)
- Optimistic Initial Values Code (04:18)
- UCB1 Theory (14:32)
- UCB1 Beginner's Exercise Prompt (02:14)
- UCB1 Code (03:28)
- Bayesian Bandits / Thompson Sampling Theory (pt 1) (12:43)
- Bayesian Bandits / Thompson Sampling Theory (pt 2) (17:35)
- Thompson Sampling Beginner's Exercise Prompt (02:50)
- Thompson Sampling Code (05:03)
- Thompson Sampling With Gaussian Reward Theory (11:24)
- Thompson Sampling With Gaussian Reward Code (06:18)
- Why don't we just use a library? (05:40)
- Nonstationary Bandits (07:11)
- Bandit Summary, Real Data, and Online Learning (06:29)
- (Optional) Alternative Bandit Designs (10:05)

- Exercise: Compare different strategies (02:06)
- Intro to Exercises on Conjugate Priors (06:04)
- Exercise: Die Roll (02:38)
- Exercise: Gaussians (05:41)
- Exercise: Gaussian Implementation (02:03)
- The most important quiz of all - Obtaining an infinite amount of practice (09:26)

- What's this course all about? (02:18)
- Where to get the code for this course (01:17)
- How to succeed in this course (03:26)

- Bayes Rule Review (09:28)
- Simple Probability Problem (02:03)
- The Monty Hall Problem (03:57)
- Imbalanced Classes (04:40)
- Maximum Likelihood - Mean of a Gaussian (04:52)
- Maximum Likelihood - Click-Through Rate (04:23)
- Confidence Intervals (10:17)
- What is the Bayesian Paradigm? (05:46)

- Explore vs. Exploit (04:00)
- More about the Explore-Exploit Dilemma (07:39)
- The Epsilon-Greedy Solution (02:58)
- UCB1 (04:35)
- Conjugate Priors (07:04)
- Bayesian A/B Testing (04:10)
- Bayesian A/B Testing in Code (08:50)
- The Online Nature of Bayesian A/B Testing (02:31)
- Finding a Threshold Without P-Values (04:52)
- Thompson Sampling Convergence Demo (04:01)
- Confidence Interval Approximation vs. Beta Posterior (05:41)
- Adaptive Ad Server Exercise (05:38)

- Windows-Focused Environment Setup 2018 (20:20)
- How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow (17:22)

- How to Code Yourself (part 1) (15:54)
- How to Code Yourself (part 2) (09:23)
- Proof that using Jupyter Notebook is the same as not using it (12:29)
- Python 2 vs Python 3 (04:38)

- How to Succeed in this Course (Long Version) (10:24)
- Is this for Beginners or Experts? Academic or Practical? Fast or slow-paced? (22:04)
- What order should I take your courses in? (part 1) (11:18)
- What order should I take your courses in? (part 2) (16:07)

- What is the Appendix? (02:48)
- Where to get discount coupons and FREE deep learning material (05:31)