Microsoft Build 2018 keynote summary

Microsoft Build 2018 kicked off today in Seattle, with Microsoft CEO Satya Nadella taking the stage to present Microsoft’s vision and strategy for the developer ecosystem. Scott Guthrie then took the audience through the main technical novelties, with plenty of help from product managers and Microsoft partners and customers. If you missed the Microsoft Build 2018 keynote, here is a brief summary of what happened, bearing in mind that more than three hours of content are hard to sum up in a few lines.

To start with, here are some forecasts for the tech world that help put the entire Microsoft Build conference in perspective. By 2020 there will be 30 billion connected devices. Individuals will generate an average of 1.5 GB of data each day. Smart homes will generate around 50 GB of data per day. Autonomous vehicles will generate 5 TB of data per day. Smart buildings will generate 150 GB of data per day. With such amounts of data, it comes as no surprise that Microsoft’s strategy concentrates on the intelligent cloud and the intelligent edge. Talking about the intelligent edge, which in my opinion is the core topic of this year’s Microsoft Build conference, Satya Nadella outlined three layers in the world of the intelligent cloud and intelligent edge: ubiquitous computing, artificial intelligence, and multi-sense, multi-device experiences. All talks revolved around these concepts. So let’s dig into some important announcements.

Not directly related to the intelligent cloud and the intelligent edge was the announcement of .NET Core 3.0. The really cool thing announced here was support for desktop applications in .NET Core. That’s right! We’ll be able to build WPF applications, UWP applications, or even Windows Forms applications on .NET Core. Desktop applications running on .NET Core are expected to perform better overall, and there will be full support for the .NET Core CLI tools. A preview version of .NET Core 3.0 will be available later this year, while general availability was announced for 2019.
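For a feel of what this looks like in practice, a WPF project on .NET Core boils down to a small SDK-style project file. This is a sketch based on the early previews; the exact SDK and property names (`Microsoft.NET.Sdk.WindowsDesktop`, `UseWPF`) could still change before release:

```xml
<Project Sdk="Microsoft.NET.Sdk.WindowsDesktop">
  <PropertyGroup>
    <!-- A windowed (non-console) executable -->
    <OutputType>WinExe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <!-- Pulls in the WPF framework references -->
    <UseWPF>true</UseWPF>
  </PropertyGroup>
</Project>
```

From there, the usual .NET Core CLI workflow (`dotnet build`, `dotnet run`) should apply to desktop apps as well.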

Even without major new announcements, there were some great talks and demos on how artificial intelligence can be integrated into almost everything. AI can be integrated into meetings run on a Surface Hub so that at the end of the meeting all participants receive a transcript of the entire discussion. But artificial intelligence doesn’t stop there.

One of the coolest things demonstrated was the better integration between Microsoft Cognitive Services and Azure Cosmos DB. A very powerful way to make the most out of data is to infuse it with AI before it’s stored: you can run text recognition, image recognition, speech recognition, video transcription, and many other AI tools before the data is actually persisted. On top of that enriched data, you can then create a powerful, near-real-time intelligent search experience with Azure Search.
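The enrich-before-store pattern itself is easy to sketch. Below is a minimal, purely illustrative Python version: `detect_text`, `tag_image`, and `save_document` are hypothetical stand-ins for the Cognitive Services calls and the Cosmos DB write, not real SDK functions:

```python
def detect_text(image_bytes):
    # Hypothetical stand-in for an OCR/text-recognition service call.
    return ["invoice", "total: 42.00"]

def tag_image(image_bytes):
    # Hypothetical stand-in for an image-tagging service call.
    return ["document", "paper"]

def enrich(doc):
    """Attach AI-derived metadata to the document before it is stored."""
    image = doc["image"]
    doc["extracted_text"] = detect_text(image)
    doc["tags"] = tag_image(image)
    return doc

def save_document(store, doc):
    # Stand-in for the database write; here just an in-memory dict.
    store[doc["id"]] = doc

store = {}
save_document(store, enrich({"id": "1", "image": b"..."}))
print(store["1"]["tags"])  # ['document', 'paper']
```

Because the enrichment happens on the write path, a search index built over the stored documents can immediately query the extracted text and tags instead of raw bytes.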

And since I mentioned Azure Cosmos DB, another very cool thing was the announcement of multi-master support in Azure Cosmos DB. With multi-master support, you can perform writes on containers of data (for example, collections, graphs, tables) distributed anywhere in the world. You can update data in any region that is associated with your database account, and these updates propagate asynchronously. In addition to fast read and write access to your data, multi-master also provides a practical solution for failover and load balancing. In summary, with Azure Cosmos DB you get write latency of <10 ms at the 99th percentile anywhere in the world, 99.999% write and read availability anywhere in the world, and the ability to scale both write and read throughput anywhere around the world.
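Writing in every region implies conflicts when two regions update the same item concurrently. A common resolution policy for this is last-writer-wins on a timestamp; here is a toy sketch of that idea in Python (a simplified illustration of the concept, not how Cosmos DB is actually implemented):

```python
def last_writer_wins(local, incoming):
    """Resolve a conflicting write by keeping the copy with the
    higher timestamp. `_ts` loosely mimics the per-item timestamp
    a multi-master store would attach to every write."""
    return incoming if incoming["_ts"] > local["_ts"] else local

# Two regions wrote the same item; replication delivers the remote copy.
us_west = {"id": "order-7", "status": "shipped", "_ts": 1001}
eu_north = {"id": "order-7", "status": "cancelled", "_ts": 1005}

winner = last_writer_wins(us_west, eu_north)
print(winner["status"])  # cancelled
```

The point of a deterministic policy like this is that every region converges to the same winner regardless of the order in which replicated writes arrive.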

Even though not necessarily something brand new, the idea of running Azure Functions on edge devices was pretty appealing. Leveraging container technologies, you can deploy an Azure Function to an IoT device and have the code executed on the device itself, not in Azure. That’s how developers can build a lot of logic into devices that otherwise might be fairly “dumb”. This means, in turn, that you can go as far as preparing machine learning training data on the devices themselves. Sure, you won’t be able to train the full ML model there, but you can use a device like a Raspberry Pi to take photos and tag them before sending them to the Custom Vision cognitive service in Azure. This way you can collect streams of data and categorize them quickly.
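The capture-tag-upload flow above can be sketched in a few lines of Python. Everything here is a hypothetical placeholder: `capture_photo` stands in for the Raspberry Pi camera, `quick_label` for a cheap on-device heuristic, and the queue for whatever upload channel ships the data to the cloud:

```python
import queue

def capture_photo():
    # Placeholder for a Raspberry Pi camera capture.
    return b"\x89PNG..."

def quick_label(image_bytes):
    # Cheap on-device labeling heuristic; the heavy lifting (model
    # training) still happens in the cloud, e.g. in Custom Vision.
    return "daylight" if len(image_bytes) > 5 else "night"

upload_queue = queue.Queue()

def on_timer():
    """Runs periodically on the device itself, the way a containerized
    Azure Function on an edge device would."""
    photo = capture_photo()
    upload_queue.put({"label": quick_label(photo), "image": photo})

on_timer()
item = upload_queue.get()
print(item["label"])  # daylight
```

The device only pre-categorizes; the tagged photos still flow to the cloud, where the actual model is trained on the accumulated stream.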

Basically everything revolved around machine learning and artificial intelligence and how you can integrate both into custom software solutions very quickly, without needing to be a machine learning guru. Sure, there were other cool things happening during the Microsoft Build 2018 keynote, like Alexa live on stage being asked about Cortana. It seems Alexa has become very friendly lately:

I like Cortana. We both have experience with light rings, although hers is more of a .

The other side of the coin: Microsoft is also allowing Alexa to run natively on Windows 10 PCs.

Overall, the 2018 Microsoft Build keynote was not full of major, unexpected announcements, but it was a great opportunity to better understand Microsoft’s approach of infusing artificial intelligence into virtually everything. I’m waiting for the rest of the sessions, especially on ASP.NET Core.

Update: One major announcement that I somehow missed from my summary is the preview availability of Azure Blockchain Workbench, the fastest way to get started with blockchain on Azure. This developer tool orchestrates an Azure-supported consortium network with a set of cloud services commonly needed to create working blockchain applications. Developers can link blockchain identities with Active Directory for easier sign-in and collaboration, store secrets and keys securely with Azure Key Vault, and synchronize on-chain data with off-chain storage and databases to more easily query attestations and visualize ledger activity. Workbench also makes it easy to integrate blockchain workflows with existing systems by using Microsoft Flow and Logic Apps, and to extend its capabilities with a REST-based API for client development and a message-based API for system-to-system integration.
