Machines “think” differently but it’s not a problem (maybe)

Yet another article about the interpretability problem of many AI algorithms, this time on the MIT Technology Review, May/June 2017 issue.

The issue is clear: many of the most successful recent AI technologies revolve around deep learning – complex artificial neural networks, with so many layers of so many neurons transforming so many variables, that they behave like “black boxes” for us.
We can no longer comprehend the model; we don’t know how or why the output for a specific input is obtained.
Is it scary?

In the film Dekalog 1 by Krzysztof Kieślowski – the first of ten short films inspired by the Ten Commandments, the first of which is “I am the Lord your God; you shall have no other gods before me” – Krzysztof lives alone with Paweł, his highly intelligent 12-year-old son, and introduces him to the world of personal computers. Continue reading “Machines “think” differently but it’s not a problem (maybe)”

Agile for managing a research data team


An interesting read: Lessons learned managing a research data science team on the ACMqueue magazine by Kate Matsudaira.

The author describes how she managed a data science team in her role as VP of engineering at a data-mining startup.

When you have a team of people working on hard data science problems, the things that work in traditional software don’t always apply. When you are doing research and experiments, the work can be ambiguous, unpredictable, and the results can be hard to measure.

These are the changes that the team implemented in the process: Continue reading “Agile for managing a research data team”

[Link] Algorithms literature

From the Social Media Collective, part of the Microsoft Research labs, an interesting and comprehensive list of studies about algorithms as social concern.

Our interest in assembling this list was to catalog the emergence of “algorithms” as objects of interest for disciplines beyond mathematics, computer science, and software engineering.

They also try to categorise the studies and add an intriguing timeline visualisation (which shows how much interest algorithms are sparking these days):


Machine Learning Yearning

This morning I got the first 12 draft chapters of Prof. Andrew Ng‘s new book, titled “Machine Learning Yearning – Technical strategy for AI engineers, in the era of deep learning”.

The book cover

The book aims to help readers to quickly become better at building AI systems by gaining the practical skills needed when organising a machine learning project.
The book assumes readers are already familiar with machine learning concepts and does not go into technical explanations of how they work.

These first chapters look great; I think this book will help close the gap between machine learning knowledge and proper execution.

My favourite chapter is the ninth, “Optimizing and satisficing metrics”, which suggests how to handle the problem of establishing a single-number evaluation metric when the metrics you care about are not compatible:

Suppose you care about both the accuracy and the running time of a learning algorithm.

It seems unnatural to derive a single metric by putting accuracy and running time into a single formula, such as:

Accuracy – 0.5*RunningTime

Here’s what you can do instead:
First, define what is an “acceptable” running time. Let’s say anything that runs in 100ms is acceptable.
Then, maximize accuracy, subject to your classifier meeting the running time criteria.
Here, running time is a “satisficing metric”— your classifier just has to be “good enough” on this metric, in the sense that it should take at most 100ms. Accuracy is the “optimizing metric.”
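The selection rule in the quote can be sketched in a few lines. Here is a minimal, hypothetical example (the model names and numbers are invented, not from the book): filter candidates by the satisficing metric, then maximize the optimizing metric among the survivors.

```python
# Pick a model using accuracy as the optimizing metric and
# running time as a satisficing metric (threshold: 100 ms).
# Model names and figures below are made up for illustration.

models = [
    {"name": "model_a", "accuracy": 0.92, "running_time_ms": 80},
    {"name": "model_b", "accuracy": 0.95, "running_time_ms": 150},
    {"name": "model_c", "accuracy": 0.90, "running_time_ms": 60},
]

MAX_RUNNING_TIME_MS = 100  # the "good enough" threshold

def pick_model(candidates, max_ms=MAX_RUNNING_TIME_MS):
    # Keep only the models that satisfy the running-time constraint...
    acceptable = [m for m in candidates if m["running_time_ms"] <= max_ms]
    # ...then maximize accuracy among them.
    return max(acceptable, key=lambda m: m["accuracy"])

best = pick_model(models)
print(best["name"])  # model_a: the most accurate model under 100 ms
```

Note how model_b, despite being the most accurate overall, is never considered: it fails the satisficing criterion, so its accuracy is irrelevant.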

P.S. I know, satisficing is not a common word and is marked wrong by spell checkers, but it really exists! I had to look it up myself; here is the definition:
To satisfice means to choose the first option one comes across that fulfils all requirements (in contrast to looking for the optimal one).

Modern agile?

InfoQ recently posted an article about something called “Modern Agile” by Joshua Kerievsky, which sparked my curiosity. It seems that there is also a website, but I did not find other references besides the author’s articles and keynote.

What is modern agile?

Modern Agile is ultra-light, the opposite of mainstream Agile, which is drowning in a bloated tangle of enterprise tools, scaling frameworks and questionable certificates that yield more bureaucracy than results.

Well, after reading it, it seems to me that it is nothing more than the principles already outlined in the original Agile Manifesto.
Not sure what Modern stands for …

Modern Agile has no roles, responsibilities or anointed practices. Instead, it is defined by four guiding principles

  1. Make People Awesome
  2. Deliver Value Continuously
  3. Make Safety a Prerequisite
  4. Experiment and Learn Rapidly

It then goes on to describe how these four guiding principles match one-to-one with the four Agile values from the manifesto … well, then why do we need them?

I fully agree that some interpretations (is this what is meant by “mainstream”?) of Agile (or rather, Scrum …) are becoming over-engineered with all these frameworks, but Agile in its original conception was already a light process with only four values.

No need for new interpretations.
Let’s just go back to the roots of Agile mindset.


A brief history of chatbots

A chatbot is a computer program which conducts a conversation via auditory or textual methods.

The term “ChatterBot” was originally coined by Michael Mauldin in 1994 to describe these conversational programs, but they are much older: the first one was ELIZA, created by Joseph Weizenbaum at MIT in 1966.

Leaving the academic world, conversational agents have been typically used in dialog systems including customer service or information acquisition.
Many large companies started to use automated online assistants instead of human call centres to provide a first point of contact.
Most of these systems ask you to press a digit corresponding to what you want, or to say what you’re calling about and scan for keywords in the vocal input, then pull the best-matching answer from a database.
These systems are based on simple logic trees (SLT).

An SLT agent relies therefore on a fixed decision tree to gather information and redirect the user.
For example, an insurance bot may ask several questions to determine which policy is ideal for you. Or an airline bot could ask you the departure city, the destination and a time. Or a device diagnostics bot could guide you through the hardware components and tests to find out the issue.
It’s a finite-state dialog: the system completely controls the conversation.
If your input matches what the bot has anticipated, the experience will be seamless. However, if it strays from the answers programmed and stored in the bot’s database, you might hit a dead end. Back to a human to complete the interaction…
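The mechanics described above – a fixed tree, keyword scanning, and a dead end when nothing matches – can be sketched in a few lines. This is a toy illustration, not any real product’s logic; the tree, states and keywords are invented:

```python
# A minimal simple-logic-tree (SLT) dialog agent. Each node asks a fixed
# question and anticipates a few keywords; anything else is a dead end.

TREE = {
    "start": {
        "question": "Are you calling about a booking or a refund?",
        "branches": {"booking": "booking", "refund": "refund"},
    },
    "booking": {"question": "Which city are you departing from?", "branches": {}},
    "refund": {"question": "Please give me your ticket number.", "branches": {}},
}

def step(state, user_input):
    """Scan the user's input for the keywords this node anticipates and
    return the next state; None means a dead end (hand over to a human)."""
    for keyword, next_state in TREE[state]["branches"].items():
        if keyword in user_input.lower():
            return next_state
    return None

print(step("start", "I would like a refund please"))  # refund
print(step("start", "my luggage is lost"))            # None: dead end
```

The second call shows the brittleness: “my luggage is lost” is a perfectly reasonable request, but since the node never anticipated it, the only option left is escalating to a human.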

These were efficient and simple systems, but not really effective.
In normal human-to-human dialogue the initiative shifts back and forth between the participants; it’s not system-only.

A very recent trend is to use natural language processing (NLP) and Machine Learning (ML) algorithms such as you see in smartphone-based personal assistants (Apple Siri, Microsoft Cortana  or Google Now) or when talking to your car or your home automation system (Amazon Alexa) or in some messaging platforms.

Continue reading “Chatbots”