[Link] Algorithms literature

From the Social Media Collective, part of the Microsoft Research labs, an interesting and comprehensive list of studies about algorithms as a social concern.

Our interest in assembling this list was to catalog the emergence of “algorithms” as objects of interest for disciplines beyond mathematics, computer science, and software engineering.

They also try to categorise the studies and add an intriguing timeline visualisation (which shows how much interest algorithms are sparking at the moment):

[timeline visualisation]

Chatbots

A brief history of chatbots

A chatbot is a computer program which conducts a conversation via auditory or textual methods.

The term “ChatterBot” was originally coined by Michael Mauldin in 1994 to describe these conversational programs, but they are much older: the first one was ELIZA, created by Joseph Weizenbaum at MIT in 1966.

Outside the academic world, conversational agents have typically been used in dialog systems such as customer service or information acquisition.
Many large companies started to use automated online assistants instead of call centres with humans, to provide a first point of contact.
Most of these systems ask you to push a digit corresponding to what you want, or to say what you’re calling about; they scan the vocal input for keywords and then pull the best-matching answer from a database.
These systems are based on simple logic trees (SLT).

An SLT agent therefore relies on a fixed decision tree to gather information and redirect the user.
For example, an insurance bot may ask several questions to determine which policy is ideal for you. Or an airline bot could ask you the departure city, the destination and a time. Or a device diagnostics bot could guide you through the hardware components and tests to find out the issue.
It’s a finite-state dialog: the system completely controls the conversation.
If your input matches what the bot has anticipated, the experience will be seamless. However, if it strays from the answers programmed and stored in the bot’s database, you might hit a dead end. Back to a human to complete the interaction…
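To make the idea concrete, here is a minimal sketch of such a finite-state agent in Python. The dialog tree, prompts and keywords are invented for the example; a real SLT system would have a much larger tree and better input handling:

```python
# A hypothetical SLT dialog tree: each state has a prompt and the
# keyword -> next-state transitions the bot has anticipated.
TREE = {
    "start": {"prompt": "Do you want to book a flight or cancel one?",
              "next": {"book": "city", "cancel": "human"}},
    "city":  {"prompt": "Are you flying to London, Paris or Rome?",
              "next": {"london": "done", "paris": "done", "rome": "done"}},
    "human": {"prompt": "I'll transfer you to an operator.", "next": {}},
    "done":  {"prompt": "Great, let me check the available flights.", "next": {}},
}

def run_dialog():
    state = "start"
    while TREE[state]["next"]:                    # a leaf state ends the dialog
        answer = input(TREE[state]["prompt"] + " ").lower()
        # Scan the input for any keyword the current state anticipates.
        matches = [kw for kw in TREE[state]["next"] if kw in answer]
        if not matches:
            # The input strayed from the programmed answers: dead end.
            print("Sorry, I didn't get that. Let me find you a human.")
            return
        state = TREE[state]["next"][matches[0]]
    print(TREE[state]["prompt"])

if __name__ == "__main__":
    run_dialog()
```

The control flow never leaves the tree: the bot asks, the user answers, and any unanticipated input falls through to the human fallback.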

These are simple and efficient systems, but not really effective.
In normal human-to-human dialogue the initiative shifts back and forth between the participants; it is not system-only.

A very recent trend is to use natural language processing (NLP) and Machine Learning (ML) algorithms, such as those you meet in smartphone-based personal assistants (Apple Siri, Microsoft Cortana or Google Now), when talking to your car or your home automation system (Amazon Alexa), or in some messaging platforms.
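In the statistical approach, the mapping from user input to intent is learned from example utterances rather than hand-coded in a tree. Here is a rough sketch of that idea using scikit-learn on an invented toy dataset; production assistants of course use far larger models and training corpora:

```python
# A rough sketch of learning intents from examples instead of a fixed tree.
# The utterances and intent labels below are invented toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "I want to book a flight to Paris",
    "get me a ticket to Rome please",
    "cancel my reservation",
    "I need to cancel the booking",
    "what's the weather like tomorrow",
    "will it rain this weekend",
]
intents = ["book", "book", "cancel", "cancel", "weather", "weather"]

# Turn each utterance into TF-IDF features and fit a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(utterances, intents)

# An unseen phrasing can still map to the right intent, with no keyword table.
print(model.predict(["could you book me a seat to London"]))  # likely ['book']
```

The point of the contrast is that such a system generalises to phrasings it has never seen, instead of dead-ending on anything outside a fixed keyword list.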


[Link] The five keys to a successful Google team

An interesting article from the NY Times about a 2012 Google initiative — code-named Project Aristotle — to study hundreds of Google’s teams and figure out why some stumbled while others soared.

The article itself is a longer and more narrative account of what had been posted earlier by one of the lead researchers, Rozovsky. What follows is a summary with highlights; see the article for the full text.

After months of arranging and looking at the data, Rozovsky and her colleagues were not able to find any patterns, or any evidence that the composition of a team made a difference.

We were dead wrong. Who is on a team matters less than how the team members interact, structure their work, and view their contributions.

As they struggled to figure out what made a team successful, they looked at what are known as ‘‘group norms’’: the traditions, behavioural standards and unwritten rules that govern how we function when we gather.

Team members may behave in certain ways as individuals, but when they gather, the group’s norms typically override individual proclivities and encourage deference to the team.

Project Aristotle’s researchers began searching for instances when team members described a particular behaviour as an ‘‘unwritten rule’’ or explained certain things as part of the ‘‘team’s culture’’, trying to determine which norms mattered most.

There were other behaviors that seemed important as well — like making sure teams had clear goals and creating a culture of dependability. But Google’s data indicated that psychological safety, more than anything else, was critical to making a team work.

Psychological safety is ‘‘a sense of confidence that the team will not embarrass, reject or punish someone for speaking up,’’ Edmondson wrote in a study published in 1999. ‘‘It describes a team climate characterised by interpersonal trust and mutual respect in which people are comfortable being themselves.’’

However, establishing psychological safety is, by its very nature, somewhat messy and difficult to implement.

What Project Aristotle has taught people within Google is that no one wants to put on a ‘‘work face’’ when they get to the office. No one wants to leave part of their personality and inner life at home. We can’t be focused just on efficiency. Rather, when we start the morning by collaborating with a team of engineers and then send emails to our marketing colleagues and then jump on a conference call, we want to know that those people really hear us. We want to know that work is more than just labor.

Finally, Google created a 10-minute exercise that summarises how the team is doing on five key dynamics:

  1. Psychological safety: Can we take risks on this team without feeling insecure or embarrassed?
  2. Dependability: Can we count on each other to do high-quality work on time?
  3. Structure & clarity: Are goals, roles, and execution plans on our team clear?
  4. Meaning of work: Are we working on something that is personally important for each of us?
  5. Impact of work: Do we fundamentally believe that the work we’re doing matters?
