One of the most common questions I get in my Linear Regression class is, "What if we use the absolute error instead of the squared error?"

The answer is: this is entirely possible, but it requires an entirely different solution method.

These techniques are not usually taught in machine learning, yet they are essential to many fields such as operations research, quantitative finance, engineering, manufacturing, logistics, and more.

The main technique we will learn how to apply is called Linear Programming.
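To give a flavor of what a linear program looks like in code, here is a minimal sketch using SciPy's `linprog` (this is my own toy example, not necessarily the one worked in the course): maximize x + 2y subject to x + y ≤ 4, x ≤ 2, and x, y ≥ 0.

```python
import numpy as np
from scipy.optimize import linprog

# linprog minimizes, so negate the objective coefficients to maximize x + 2y.
c = [-1.0, -2.0]
A_ub = [[1.0, 1.0],   # x + y <= 4
        [1.0, 0.0]]   # x <= 2
b_ub = [4.0, 2.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)     # optimal point (x, y) = (0, 4)
print(-res.fun)  # optimal objective value = 8
```

The feasible region is a polygon, and the optimum always sits at one of its corners; here the solver lands on (0, 4) with objective value 8.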

We will study several alternative loss functions for linear models, such as the L1 (absolute) loss, maximum absolute deviation (MAD), and the exponential loss, for cases where you want your errors to be positive-only or negative-only.
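As a preview of the L1 case, here is a hedged sketch of least-absolute-deviations regression posed as a linear program (variable names are my own, and the course's formulation may differ in detail). The trick is to introduce one auxiliary variable t_i per sample and encode t_i ≥ |y_i − x_i·w| as two linear inequalities:

```python
import numpy as np
from scipy.optimize import linprog

# Synthetic data: y = X @ true_w + small noise.
rng = np.random.default_rng(0)
n, d = 50, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, d - 1))])
true_w = np.array([1.0, 3.0])
y = X @ true_w + rng.normal(scale=0.1, size=n)

# Decision variables z = [w (d entries), t (n entries)]; minimize sum of t.
c = np.concatenate([np.zeros(d), np.ones(n)])
I = np.eye(n)
A_ub = np.block([[ X, -I],    #  x_i.w - t_i <=  y_i  (t_i >= -(y_i - x_i.w))
                 [-X, -I]])   # -x_i.w - t_i <= -y_i  (t_i >=   y_i - x_i.w)
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * d + [(0, None)] * n  # w free, t nonnegative

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
w_hat = res.x[:d]
print(w_hat)  # close to true_w
```

At the optimum each t_i equals the absolute residual |y_i − x_i·w|, so the LP objective is exactly the total absolute error.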

These techniques are part of a broader field of study known as **convex optimization**.

I hope you will join me in learning this essential skill for today's data science and quantitative professionals.

See you in class!

Suggested Prerequisites:

- calculus
- matrix arithmetic (adding, multiplying)
- probability
- be able to derive linear regression on paper and code linear regression in Python
- Python coding: if/else, loops, lists, dicts, sets
- Numpy coding: matrix and vector operations, loading a CSV file

You probably already know this, but some of us really and truly appreciate you. BTW, I spent a reasonable amount of time making a learning roadmap based on your courses and have started the journey.

Looking forward to your new stuff.

I am signing up so that I can easily refresh when needed and see what you consider important, as well as to support your great work. Thank you.


I wish you a happy and safe holiday season. I am glad you chose to share your knowledge with the rest of us.

And, I couldn't agree more with some of your "rants", and found myself nodding vigorously!

You are an excellent teacher, and a rare breed.

And your courses are, frankly, more digestible and teach a student far more than some of the top-tier Ivy League courses I have taken in the past.

(I plan to go through many more courses, one by one!)

I know you must be deluged with complaints in spite of having the best content around. That's just human nature.

Also, satisfied people rarely take the time to write, so I thought I would write in for a change. :)

While completing my Master’s at Hunan University, China, I am writing this feedback to express my deep gratitude for all the knowledge and skills I have obtained studying your courses and following your recommendations.

The first course of yours I took was on Convolutional Neural Networks (“Deep Learning p.5”, as far as I remember). Answering one of my questions on the Q&A board, you suggested I should start from the beginning – the Linear and Logistic Regression courses. Although I assumed I already knew many of the basics at that time, I overcame my “pride” and decided to start my journey in Deep Learning from scratch. ...


- Introduction and Outline (03:59) (FREE preview available)
- Least Squares Review (09:00)

- Linear Programming Example (08:21)
- Linear Programming Example in Code (04:57)
- Absolute Error (L1 Loss) Maximum Likelihood (03:02)
- Absolute Error (L1 Loss) Linear Program (08:03)
- Absolute Error (L1 Loss) Code (06:01)
- Maximum Absolute Deviation Theory (05:05)
- Maximum Absolute Deviation Code (02:45)
- Exponential Maximum Likelihood (04:39)
- Exponential Linear Program (07:32)
- Exponential Code (04:03)

- Linear Programming Example Notebook
- Linear Programming Linear Regression Notebook