One of the most common questions I get in my Linear Regression class is, "What if we use the absolute error instead of the squared error?"
The answer is: this is entirely possible, but it requires a completely different solution method.
These techniques are not usually taught in machine learning, yet they are essential to many fields such as operations research, quantitative finance, engineering, manufacturing, logistics, and more.
The main technique we will learn how to apply is called Linear Programming.
We will study several alternative loss functions for linear models, such as the L1 (absolute) loss, maximum absolute deviation (MAD), and the exponential loss for when you want your error to be positive-only or negative-only.
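To give a taste of the idea, here is a minimal sketch of L1 (absolute-loss) regression posed as a linear program and solved with `scipy.optimize.linprog`. The trick is standard: introduce one auxiliary variable per data point to bound each absolute residual. The toy dataset and all variable names here are my own illustration, not code from the course.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: y = 2x + 1, with one large outlier injected.
x = np.linspace(0, 1, 20)
y = 2 * x + 1
y[5] += 10  # an outlier that would badly skew a squared-error fit

# Design matrix with a bias column.
X = np.column_stack([x, np.ones_like(x)])
n, d = X.shape

# L1 regression as an LP:
#   minimize sum(t)  subject to  -t <= y - Xw <= t,  t >= 0
# Decision vector z = [w (d entries), t (n entries)].
c = np.concatenate([np.zeros(d), np.ones(n)])
A_ub = np.block([[X, -np.eye(n)],    #  Xw - t <=  y
                 [-X, -np.eye(n)]])  # -Xw - t <= -y
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * d + [(0, None)] * n  # w free, t nonnegative

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
w = res.x[:d]
print(w)  # close to [2, 1]: the L1 fit shrugs off the outlier
```

The same template adapts to the other losses mentioned above: for the maximum absolute deviation, replace the `n` auxiliary variables with a single `t` bounding every residual at once.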
These techniques are part of a broader field of study known as convex optimization.
I hope you will join me in learning this essential skill for today's data science and quantitative professionals.
See you in class!
Suggested Prerequisites:
- calculus
- matrix arithmetic (adding, multiplying)
- probability
- be able to derive linear regression on paper and code linear regression in Python
- Python coding: if/else, loops, lists, dicts, sets
- NumPy coding: matrix and vector operations, loading a CSV file