This course is all about **A/B testing**.

A/B testing is used everywhere. Marketing, retail, newsfeeds, online advertising, and more.

A/B testing is all about comparing things.

If you’re a data scientist and you want to tell the rest of the company that “logo A is better than logo B,” you can’t just assert it; you have to back it up with numbers and statistics.

Traditional A/B testing has been around for a long time, and it’s full of approximations and confusing definitions.
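To make “approximations” concrete: the workhorse of traditional A/B testing is the two-proportion z-test, which leans on a normal approximation to the binomial. Here is a minimal sketch; the click counts below are invented for illustration, not data from the course.

```python
import numpy as np
from scipy import stats

def two_proportion_ztest(clicks_a, n_a, clicks_b, n_b):
    """Pooled two-proportion z-test for H0: p_A == p_B.
    Relies on the normal approximation to the binomial."""
    p_a = clicks_a / n_a
    p_b = clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)          # pooled rate under H0
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * stats.norm.sf(abs(z))                   # two-sided p-value
    return z, p_value

# hypothetical data: 200/1000 clicks for logo A vs. 150/1000 for logo B
z, p = two_proportion_ztest(200, 1000, 150, 1000)
```

If p falls below a chosen significance level (conventionally 0.05), the frequentist recipe says to reject the null hypothesis that the two logos perform equally.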

In this course, while we will do traditional A/B testing in order to appreciate its complexity, what we will eventually get to is the **Bayesian machine learning** way of doing things.

First, we’ll see if we can improve on traditional A/B testing with adaptive methods. These all help you solve the **explore-exploit** dilemma.

You’ll learn about the **epsilon-greedy** algorithm, which you may have heard about in the context of **reinforcement learning**.
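As a rough preview (not the course’s exact implementation), epsilon-greedy explores a random arm with probability epsilon and otherwise exploits the arm with the best sample mean so far. The click-through rates and epsilon below are arbitrary illustration choices.

```python
import numpy as np

def epsilon_greedy(true_rates, n_trials=10000, eps=0.1, seed=0):
    """Minimal epsilon-greedy bandit with Bernoulli rewards."""
    rng = np.random.default_rng(seed)
    n_arms = len(true_rates)
    counts = np.zeros(n_arms)            # pulls per arm
    means = np.zeros(n_arms)             # running sample mean per arm
    for _ in range(n_trials):
        if rng.random() < eps:
            arm = int(rng.integers(n_arms))      # explore a random arm
        else:
            arm = int(np.argmax(means))          # exploit the current best
        reward = rng.random() < true_rates[arm]  # Bernoulli reward
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # incremental mean
    return means, counts

means, counts = epsilon_greedy([0.2, 0.5, 0.75])
```

Notice the trade-off epsilon controls: a larger epsilon discovers the best arm faster but wastes more traffic on the losers.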

We’ll improve upon the epsilon-greedy algorithm with a similar algorithm called UCB1.
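UCB1’s improvement is that it removes the epsilon hyperparameter entirely: each arm gets an optimism bonus that shrinks as that arm is pulled, so under-explored arms get tried automatically. A minimal sketch under the same invented Bernoulli-reward setup:

```python
import numpy as np

def ucb1(true_rates, n_trials=10000, seed=0):
    """Minimal UCB1 bandit: always pull the arm with the highest
    upper confidence bound, mean + sqrt(2 * ln(t) / n_j)."""
    rng = np.random.default_rng(seed)
    n_arms = len(true_rates)
    counts = np.zeros(n_arms)
    means = np.zeros(n_arms)
    for arm in range(n_arms):            # pull each arm once so counts > 0
        counts[arm] = 1
        means[arm] = float(rng.random() < true_rates[arm])
    for t in range(n_arms, n_trials):
        bonus = np.sqrt(2 * np.log(t) / counts)   # shrinks as an arm is pulled
        arm = int(np.argmax(means + bonus))
        reward = rng.random() < true_rates[arm]
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
    return means, counts

means, counts = ucb1([0.2, 0.5, 0.75])
```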

Finally, we’ll improve on both of those by using a fully Bayesian approach.
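The fully Bayesian version is Thompson sampling: maintain a Beta posterior over each arm’s click-through rate, draw one sample from each posterior, and pull the argmax. A sketch under the same invented rates, again not the course’s exact code:

```python
import numpy as np

def thompson_sampling(true_rates, n_trials=10000, seed=0):
    """Minimal Bernoulli Thompson sampling with Beta(1, 1) priors."""
    rng = np.random.default_rng(seed)
    n_arms = len(true_rates)
    alpha = np.ones(n_arms)   # 1 + successes per arm
    beta = np.ones(n_arms)    # 1 + failures per arm
    counts = np.zeros(n_arms)
    for _ in range(n_trials):
        samples = rng.beta(alpha, beta)           # one draw per posterior
        arm = int(np.argmax(samples))
        reward = rng.random() < true_rates[arm]
        alpha[arm] += reward                       # conjugate Beta update
        beta[arm] += 1 - reward
        counts[arm] += 1
    return alpha, beta, counts

alpha, beta, counts = thompson_sampling([0.2, 0.5, 0.75])
```

Unlike epsilon-greedy, exploration here tapers off automatically: as an arm’s posterior sharpens, it stops winning the sampled argmax unless it really is the best.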

Why is the Bayesian method interesting to us in machine learning?

It’s an entirely different way of thinking about probability.

It’s a paradigm shift.

You’ll probably need to come back to this course several times before it fully sinks in.

It’s also powerful, and many machine learning experts say they “subscribe to the Bayesian school of thought”.

In sum, it’s going to give us a lot of powerful new tools that we can use in machine learning.

The things you’ll learn in this course are not only applicable to A/B testing; rather, we’re using A/B testing as a concrete example of how Bayesian techniques can be applied.

You’ll learn these fundamental tools of the Bayesian method - through the example of A/B testing - and then you’ll be able to carry those Bayesian techniques to more advanced machine learning models in the future.

See you in class!

Suggested Prerequisites:

- calculus
- probability (continuous and discrete distributions, joint, marginal, conditional, PDF, PMF, CDF, Bayes rule)
- Python coding: if/else, loops, lists, dicts, sets
- Numpy, Scipy, Matplotlib

Tips for success:

- Use the video speed changer! Personally, I like to watch at 2x.
- Take handwritten notes. This will drastically increase your ability to retain the information.
- Write down the equations. If you don't, I guarantee it will just look like gibberish.
- Ask lots of questions on the discussion board. The more the better!
- Don't get discouraged if you can't solve every exercise right away. Sometimes it'll take hours, days, or maybe weeks!
- Write code yourself; this is an applied course! Don't be a "couch potato".

You probably already know this, but some of us really and truly appreciate you. BTW, I spent a reasonable amount of time making a learning roadmap based on your courses and have started the journey.

Looking forward to your new stuff.

I am signing up so that I have an easy refresher when needed and can see what you consider important, as well as to support your great work. Thank you.


I wish you a happy and safe holiday season. I am glad you chose to share your knowledge with the rest of us.

And, I couldn't agree more with some of your "rants", and found myself nodding vigorously!

You are an excellent teacher, and a rare breed.

And, your courses are frankly, more digestible and teach a student far more than some of the top-tier courses from ivy leagues I have taken in the past.

(I plan to go through many more courses, one by one!)

I know you must be deluged with complaints in spite of having the best content around. That's just human nature.

Also, satisfied people rarely take the time to write, so I thought I would write in for a change. :)

In the process of completing my Master’s at Hunan University, China, I am writing this feedback to you in order to express my deep gratitude for all the knowledge and skills I have obtained studying your courses and following your recommendations.

The first course of yours I took was on Convolutional Neural Networks (“Deep Learning p.5”, as far as I remember). Answering one of my questions on the Q&A board, you suggested I should start from the beginning – the Linear and Logistic Regression courses. Although I assumed I already knew many of the basics at that time, I overcame my “pride” and decided to start my journey in Deep Learning from scratch. ...


- What's this course all about? (03:55) (FREE preview available)
- Where to get the code for this course (09:21)
- How to Succeed in this Course (03:04)

- Real-World Examples of A/B Testing (06:47)
- What is Bayesian Machine Learning? (11:34)

- Review Section Introduction (01:22)
- Probability and Bayes' Rule Review (05:27)
- Calculating Probabilities - Practice (10:25)
- The Gambler (05:42)
- The Monty Hall Problem (07:01)
- Maximum Likelihood Estimation - Bernoulli (11:42)
- Click-Through Rates (CTR) (02:08)
- Maximum Likelihood Estimation - Gaussian (pt 1) (10:07)
- Maximum Likelihood Estimation - Gaussian (pt 2) (08:40)
- CDFs and Percentiles (09:38)
- Probability Review in Code (10:24)
- Probability Review Section Summary (05:12)
- Beginners: Fix Your Understanding of Statistics vs Machine Learning (06:47)
- Suggestion Box (03:10)

- Confidence Intervals (pt 1) - Intuition (05:09)
- Confidence Intervals (pt 2) - Beginner Level (04:45)
- Confidence Intervals (pt 3) - Intermediate Level (10:25)
- Confidence Intervals (pt 4) - Intermediate Level (11:42)
- Confidence Intervals (pt 5) - Intermediate Level (10:08)
- Confidence Intervals Code (06:32)
- Hypothesis Testing - Examples (07:15)
- Statistical Significance (05:26)
- Hypothesis Testing - The API Approach (09:17)
- Hypothesis Testing - Accept Or Reject? (02:23)
- Hypothesis Testing - Further Examples (04:59)
- Z-Test Theory (pt 1) (08:47)
- Z-Test Theory (pt 2) (08:30)
- Z-Test Code (pt 1) (13:02)
- Z-Test Code (pt 2) (05:54)
- A/B Test Exercise (03:54)
- Classical A/B Testing Section Summary (09:57)

- Section Introduction: The Explore-Exploit Dilemma (10:17)
- Applications of the Explore-Exploit Dilemma (08:00)
- Epsilon-Greedy Theory (07:04)
- Calculating a Sample Mean (pt 1) (05:56)
- Epsilon-Greedy Beginner's Exercise Prompt (05:05)
- Designing Your Bandit Program (04:09)
- Epsilon-Greedy in Code (07:12)
- Comparing Different Epsilons (06:02)
- Optimistic Initial Values Theory (05:40)
- Optimistic Initial Values Beginner's Exercise Prompt (02:26)
- Optimistic Initial Values Code (04:18)
- UCB1 Theory (14:32)
- UCB1 Beginner's Exercise Prompt (02:14)
- UCB1 Code (03:28)
- Bayesian Bandits / Thompson Sampling Theory (pt 1) (12:43)
- Bayesian Bandits / Thompson Sampling Theory (pt 2) (17:35)
- Thompson Sampling Beginner's Exercise Prompt (02:50)
- Thompson Sampling Code (05:03)
- Thompson Sampling With Gaussian Reward Theory (11:24)
- Thompson Sampling With Gaussian Reward Code (06:18)
- Exercise on Gaussian Rewards (01:21)
- Why don't we just use a library? (05:40)
- Nonstationary Bandits (07:11)
- Bandit Summary, Real Data, and Online Learning (06:30)
- (Optional) Alternative Bandit Designs (10:05)

- Exercise: Compare different strategies (02:07)
- Intro to Exercises on Conjugate Priors (06:05)
- Exercise: Die Roll (02:39)
- Exercise: Gaussians (05:42)
- Exercise: Gaussian Implementation (02:04)
- The most important quiz of all - Obtaining an infinite amount of practice (09:27)

- What's this course all about? (02:19)
- Where to get the code for this course (01:18)
- How to succeed in this course (03:27)

- Bayes Rule Review (09:29)
- Simple Probability Problem (02:04)
- The Monty Hall Problem (03:58)
- Imbalanced Classes (04:40)
- Maximum Likelihood - Mean of a Gaussian (04:53)
- Maximum Likelihood - Click-Through Rate (04:24)
- Confidence Intervals (10:18)
- What is the Bayesian Paradigm? (05:47)

- A/B Testing Problem Setup (04:27)
- Simple A/B Testing Recipe (05:08)
- P-Values (03:54)
- Test Characteristics, Assumptions, and Modifications (06:46)
- t-test in Code (03:24)
- t-test Exercise (05:19)
- 0.01 vs 0.011 - Why should we care? (01:47)
- A/B Test for Click-Through Rates (Chi-Square Test) (06:05)
- CTR A/B Test in Code (08:49)
- Chi-Square Exercise (02:34)
- A/B/C/D/... Testing - The Bonferroni Correction (02:21)
- Statistical Power (03:09)
- A/B Testing Pitfalls (04:01)
- Traditional A/B Testing Summary (03:43)

- Explore vs. Exploit (04:01)
- More about the Explore-Exploit Dilemma (07:39)
- The Epsilon-Greedy Solution (02:59)
- UCB1 (04:36)
- Conjugate Priors (07:05)
- Bayesian A/B Testing (04:11)
- Bayesian A/B Testing in Code (08:51)
- The Online Nature of Bayesian A/B Testing (02:32)
- Finding a Threshold Without P-Values (04:53)
- Thompson Sampling Convergence Demo (04:02)
- Confidence Interval Approximation vs. Beta Posterior (05:42)
- Adaptive Ad Server Exercise (05:39)

- Pre-Installation Check (04:13)
- Anaconda Environment Setup (20:21)
- How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow (17:33)

- How to Code Yourself (part 1) (15:55)
- How to Code Yourself (part 2) (09:24)
- Proof that using Jupyter Notebook is the same as not using it (12:29)
- Python 2 vs Python 3 (04:38)

- How to Succeed in this Course (Long Version) (10:25)
- Is this for Beginners or Experts? Academic or Practical? Fast or slow-paced? (22:05)
- What order should I take your courses in? (part 1) (11:19)
- What order should I take your courses in? (part 2) (16:07)

- What is the Appendix? (02:48)
- Where to get discount coupons and FREE deep learning material (05:49)