An introduction to logistic regression

Variables can be described as either quantitative or qualitative.
Quantitative variables have a numerical value, e.g. a person’s income, or the price of a house.
Qualitative variables take values from one of a number of different classes or categories, e.g. a person’s gender (male or female), the type of house purchased (villa, flat, penthouse, …), eye colour (brown, blue, green) or a cancer diagnosis.

Linear regression predicts a continuous variable, but sometimes we want to predict a categorical variable, i.e. a variable with a small number of possible discrete outcomes, usually unordered (there is no ordering among the outcomes).

This kind of problem is called classification.

Classification

Given a feature vector X and a qualitative response y taking values from one fixed set of categories C, the classification task is to build a function f(X) that takes the feature vector X as input and predicts the value of y.
Often we are also (or even more) interested in estimating the probability that X belongs to each category in C.
For example, it is more valuable to know the probability that an insurance claim is fraudulent than a plain yes/no classification.

There are many possible classification techniques, or classifiers, available to predict a qualitative response.

We will now look at one of them, called logistic regression.
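As a quick preview of what a logistic regression classifier looks like in practice, here is a minimal sketch using scikit-learn. The tiny fraud-detection-style dataset (claim amounts in thousands, with a made-up fraudulent/legitimate label) is purely illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: one feature (claim amount, in thousands) and a binary
# label (0 = legitimate, 1 = fraudulent). Values are made up.
X = np.array([[0.1], [0.2], [0.3], [0.4], [2.0], [3.0], [4.0], [5.0]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

clf = LogisticRegression()
clf.fit(X, y)

# Predict the class for a new claim of 0.35 (thousand)...
print(clf.predict([[0.35]]))       # → [0]
# ...and, often more useful, the probability of each class.
print(clf.predict_proba([[0.35]])) # probabilities for class 0 and class 1
```

Note that `predict_proba` gives exactly the kind of output discussed above: not just a hard fraudulent/not-fraudulent answer, but a probability for each category.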

Note: this post is part of a series about Machine Learning with Python.
Continue reading “An introduction to logistic regression”

Google and Microsoft blend AI into core products

I recently watched parts of both Google and Microsoft developer conferences (respectively Build 2017 and I/O 2017).
As expected, there was a big emphasis on Artificial Intelligence but, overall, I liked Microsoft’s conference more, while Google’s felt too heterogeneous and without real meat (the new capabilities of Google Lens have been available elsewhere, e.g. at Baidu, for years).

A few things that attracted my curiosity:

Vision plus X is the killer app of AI

At Google I/O, Dr. Fei-Fei Li – the new Chief Scientist of AI/ML at Google Cloud – articulated the most convincing vision: Continue reading “Google and Microsoft blend AI into core products”

Machine Learning Yearning

This morning I got the first 12 draft chapters of Prof. Andrew Ng‘s new book, titled “Machine Learning Yearning – Technical strategy for AI engineers, in the era of deep learning”.

[Image: The book cover]

The book aims to help readers to quickly become better at building AI systems by gaining the practical skills needed when organising a machine learning project.
The book assumes readers are already familiar with machine learning concepts and does not go into technical explanations of how they work.

These first chapters look great; I think this book will help close the gap between machine learning knowledge and proper execution.

My favourite chapter is the ninth, “Optimizing and satisficing metrics”, which suggests how to handle the problem of establishing a single-number evaluation metric when the metrics of interest are not directly comparable:

Suppose you care about both the accuracy and the running time of a learning algorithm.

It seems unnatural to derive a single metric by putting accuracy and running time into a single formula, such as:

Accuracy – 0.5*RunningTime

Here’s what you can do instead:
First, define what is an “acceptable” running time. Let’s say anything that runs in 100ms is acceptable.
Then, maximize accuracy, subject to your classifier meeting the running-time criterion.
Here, running time is a “satisficing metric”— your classifier just has to be “good enough” on this metric, in the sense that it should take at most 100ms. Accuracy is the “optimizing metric.”
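The selection rule above can be sketched in a few lines of Python. The candidate models and their (accuracy, running time) figures are made-up illustrative values:

```python
# Pick the most accurate model among those meeting the running-time constraint.
# The accuracy and running-time figures below are made up for illustration.
candidates = {
    "model_a": {"accuracy": 0.92, "running_time_ms": 80},
    "model_b": {"accuracy": 0.95, "running_time_ms": 150},  # too slow
    "model_c": {"accuracy": 0.90, "running_time_ms": 60},
}

MAX_RUNNING_TIME_MS = 100  # the satisficing threshold: "good enough"

# First filter on the satisficing metric (running time)...
acceptable = {name: m for name, m in candidates.items()
              if m["running_time_ms"] <= MAX_RUNNING_TIME_MS}

# ...then maximize the optimizing metric (accuracy) among the survivors.
best = max(acceptable, key=lambda name: acceptable[name]["accuracy"])
print(best)  # → model_a
```

Note how model_b, despite being the most accurate overall, is excluded because it fails the satisficing constraint.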


P.S. I know, satisficing is not a common word and is marked wrong by spelling checkers, but it really exists! I had to look it up myself; here is the definition:
To satisfice means to choose or adopt the first option one comes across that fulfils all requirements (in contrast to looking for the optimal one).