I recently watched parts of both the Google and Microsoft developer conferences (respectively I/O 2017 and Build 2017).
As expected, there was a big emphasis on Artificial Intelligence but, all in all, I liked Microsoft's more, while Google's felt too heterogeneous and without real meat (the new capabilities of Google Lens have been available elsewhere, e.g. at Baidu, for years).
A few things that attracted my curiosity:
Vision plus X is the killer app of AI
At Google I/O, Dr. Fei-Fei Li – the new Chief Scientist of AI/ML at Google Cloud – articulated the most convincing vision:
As it matures, vision is going to be the “killer app of AI”, said Li. “When eyes were first developed in animals, suddenly animal life becomes proactive… evolution has changed,” she explained.
Computer vision, she stressed, is “one of the most important elements of machine intelligence and the transformation of enterprise and companies”.
While the last 10 years have brought significant progress in basic perception tasks, such as object recognition and image tagging, Li called the next big phase “vision plus X”. In other words, the transformation of enterprise will come with combining the capabilities of computer vision with other fields of study.
For instance, she noted that vision plays a “fundamental” role in communication, opening up opportunities to combine vision and language capabilities for functions like indexing videos.
Microsoft showed a preview of Azure IoT Edge, a technology that extends the intelligence of cloud computing to edge devices (in this context, devices: sensors, actuators, and system end-points that compute near the data source).
In early Internet of Things (IoT) solutions, most devices simply sent telemetry to and received commands from the cloud, with the logic residing in the cloud.
As billions of devices get connected and send trillions of messages, it makes sense to move some of the cloud intelligence out to IoT devices themselves. When IoT devices start running cloud intelligence locally, we refer to them as “IoT edge” devices. Enabling intelligence on IoT edge devices means enabling analytics and insights to happen closer to the source of the data, saving customers money and simplifying their solutions.
Think about the advantages when local intelligence predicts a factory equipment failure and the equipment can quickly act autonomously.
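To make the idea concrete, here is a minimal, hypothetical sketch (not the Azure IoT Edge API) of the kind of logic an edge device could run locally: flag an anomalous sensor reading on the device itself, instead of shipping every sample to the cloud first.

```python
# Hypothetical sketch: an edge device detects an anomalous reading
# locally and can act (or alert the cloud) without a round trip.
from statistics import mean, stdev

def is_anomalous(window, reading, z_threshold=3.0):
    """Flag a reading that deviates more than z_threshold
    standard deviations from the recent window of readings."""
    if len(window) < 10 or stdev(window) == 0:
        return False  # not enough history to judge
    z = abs(reading - mean(window)) / stdev(window)
    return z > z_threshold

# Recent vibration readings from a piece of factory equipment
window = [10.0, 10.1, 9.9, 10.2, 10.0, 9.8, 10.1, 10.0, 9.9, 10.1]
print(is_anomalous(window, 10.05))  # False: within normal range
print(is_anomalous(window, 14.0))   # True: spike -> act locally
```

The point is that only the anomaly (or a summary) needs to travel to the cloud; the routine readings stay on the device.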
Edge devices usually have a small footprint (e.g. a Raspberry Pi or smaller), but even smartphones and autonomous vehicles can profit from this.
Those devices could use local models to make predictions (like Google Smart Reply), but standard machine learning approaches require centralising the training data on one powerful machine or in a data center.
Now everyone is trying to bring model training to the device as well. Google, for example, announced Federated Learning, which enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on device, decoupling the ability to do machine learning from the need to store the data in the cloud.
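The core idea behind this approach can be sketched in a few lines. The following is a toy illustration of federated averaging, not Google's actual Federated Learning implementation: each device computes an update on its own private data, and only the model parameters, never the raw data, are averaged on the server.

```python
# Toy federated averaging: devices train locally, the server
# averages parameters. No raw data ever leaves a device.

def local_update(w, local_data, lr=0.1):
    """One gradient step of a 1-D linear model y = w * x,
    computed on a single device's private data."""
    grad = sum(2 * x * (w * x - y) for x, y in local_data) / len(local_data)
    return w - lr * grad

def federated_average(w, devices):
    """Server step: collect locally updated weights and average them."""
    updates = [local_update(w, data) for data in devices]
    return sum(updates) / len(updates)

# Two devices, each holding private samples of the relation y = 2x
devices = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = federated_average(w, devices)
print(round(w, 2))  # converges toward 2.0
```

Real federated learning adds secure aggregation, compression, and careful scheduling on top of this, but the privacy property is visible even in the toy version: the server only ever sees weights.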
The challenge is how to develop, deploy and manage this cloud intelligence for IoT devices in a secure, cross-platform and scalable way. You want devices that can act locally based on the data they generate, while also taking advantage of the cloud to configure, deploy and manage them securely and at scale.
Azure IoT Edge builds upon (and incorporates) the existing Azure Gateway SDK, which allows devices to be connected regardless of protocol and supports multiple hardware platforms and languages for processing the data.
In the image above, the on-premises gateway performs the needed protocol adaptations and filters the data, so that only the most important information is sent on to the cloud.
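A gateway-side filter module could look something like the following sketch. The names here are illustrative, not the Azure Gateway SDK API: raw readings are split into the few messages worth forwarding to the cloud and a compact local summary.

```python
# Hypothetical gateway filter: forward only notable readings to the
# cloud, summarise the rest locally (illustrative, not the SDK API).

def filter_telemetry(readings, threshold=75.0):
    """Split raw readings into cloud-bound messages and a local summary."""
    to_cloud = [r for r in readings if r["value"] > threshold]
    summary = {
        "count": len(readings),
        "max": max(r["value"] for r in readings),
    }
    return to_cloud, summary

readings = [
    {"sensor": "temp-1", "value": 21.5},
    {"sensor": "temp-2", "value": 80.2},  # only this crosses the threshold
    {"sensor": "temp-3", "value": 19.8},
]
to_cloud, summary = filter_telemetry(readings)
print(len(to_cloud), summary["count"])  # 1 message to the cloud, 3 seen
```

With thousands of sensors behind one gateway, this kind of filtering is what keeps bandwidth and cloud-ingestion costs manageable.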
Several modules (e.g. to map MAC addresses to IoT devices, or to send and receive messages to/from IoT devices) and several example apps are available on Microsoft's GitHub repository.
Finally, Microsoft Graph is the fabric that will tie all devices and applications together.
It is an already existing service (one that lets developers get data out of Microsoft's productivity apps, like Office 365, and its other cloud-based productivity services, in a business environment), but Microsoft is now expanding both what it can do and where developers and users can interact with it.
Microsoft Graph creates connections between people and activities; the new possibility is that you can connect devices too. Microsoft says that basically any device where you use Cortana will become part of the graph. Indeed, Cortana looks to be the main conduit for prompting users to pick up a certain activity when they switch between devices.
The new feature will allow users to work seamlessly across different devices, including an iPhone, an Android phone, or a Windows PC, and it goes beyond that, tapping into Microsoft’s cloud storage services and keeping track of almost everything you do on your PC.
You can see it as Microsoft realising that “if you cannot beat your enemies then join them”.
Graph technology is a complement to AI: graphs encode the intelligence that exists in relationships among entities.
The IoT is made for graph-based distributed intelligence: the devices — such as sensor-equipped smartphones, drones, cars, industrial or home appliances — tend to be deployed in complex, non-hierarchical relationships that change frequently, sometimes moment by moment.
When IoT devices are associated with specific people, those devices would inherit those individuals’ interest graphs, opening a wide range of possibilities.
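The inheritance idea can be modeled very simply. In this illustrative sketch (all names are hypothetical), a graph links people to their interests, devices are associated with a person, and a device resolves its owner's interests through that association.

```python
# Illustrative sketch: devices associated with a person inherit that
# person's interest graph, modeled as plain adjacency dictionaries.

# Person -> set of interests (one slice of an interest graph)
interest_graph = {
    "alice": {"jazz", "cycling"},
}

# Device -> associated person; the shared car is linked to alice
# only for the duration of her booking.
device_owner = {
    "alice-phone": "alice",
    "shared-car-42": "alice",
}

def interests_for(device):
    """A device inherits the interests of its associated person."""
    owner = device_owner.get(device)
    return interest_graph.get(owner, set())

print(interests_for("shared-car-42"))  # alice's interests follow her
print(interests_for("unknown-device"))  # set(): no association, no data
```

The interesting engineering problems start exactly here: associations like the shared car's are transient, so the graph must be updated and revoked moment by moment, which is what makes the IoT a natural fit for graph-based systems.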
Imagine you are listening to a song on your smartphone: when you enter a shared vehicle you have just booked, the music is automatically streamed to the car’s speakers, and all kinds of preferences (seat height, air conditioning, …) are set for you.
Now, let’s see if Apple will announce something in this direction too, at their Developers Conference in June.