Google just gave its 2019 I/O presentation, live from the Shoreline Amphitheatre in Mountain View, California. The event began with CEO Sundar Pichai taking the stage to address the audience on all the exciting announcements prepared for I/O ’19. The full presentation can be viewed below:
Pichai began the presentation by asking those in attendance to take a moment to open up the I/O app. This briefly showcased the app’s ability to highlight areas of interest through AR technology, helping attendees navigate the show floor.
Pichai then jumped into innovations regarding Gmail and its related services. E-mail improvements shown off included smarter suggested text and the ability to recall sent e-mails. Full Coverage, Google’s news feature, will now be integrated into Google Search. An example was shown where a Google search for “black hole” yielded more meaningful results, thanks in part to Full Coverage and machine learning integration.
Aparna Chennapragada, Google’s VP of AR and Camera, took the stage, showcasing camera integration in Google Search and image results. An example search for the word “muscle” yielded a 3D model, which users will be able to place in their own space through AR and camera integration. Another demo was shown off, where a search for “great white shark” yielded a massive 3D shark render, which was then placed on the stage through AR, at proper scale.
Google Lens allows Google to index the physical world. Chennapragada showcased a new Google Lens feature that transforms the user’s smartphone, allowing, for example, restaurant goers to scan menus and highlight popular dishes and areas of interest. Data accrued from Google reviews and customer data is what fuels Google Lens. The updated Google Lens software will now be able to translate and read aloud text scanned with the user’s Google AI-powered smart device. In other words, Google Lens will give those who have trouble reading, as well as those with a penchant for travel, a new sense of freedom, regardless of their circumstances or physical location. Chennapragada promised these updates will roll out later this May.
Google Duplex has seen expansion as well, with the software becoming available on the web. With Duplex integration, users will be able to book a whole trip simply by asking their assistant. The demo showcased Google Assistant booking a rental vehicle for Pichai, auto-filling all the applicable forms while still allowing the user (Pichai) to take over and make changes if desired.
Scott Huffman took the stage, announcing the next generation of Google Assistant. This next-gen Google AI reduces the massive storage requirements of the software to a more manageable 0.5 GB, allowing the software to live on the user’s smart device and eliminating network-related latency. A live demo was shown off, where the assistant was put through its paces. Huffman promised that this next-generation version of Google Assistant will be available in the next iteration of the Google Pixel.
Smarter suggestions, known as Picks for You, will allow Google Assistant to show users recipes and other personally tailored results that conform to and learn the nuances of the individual. Google also promised to bring Google Assistant into the car. By saying “Hey Google, let’s drive,” Google Assistant will learn the driving habits of the individual, suggesting shortcuts, traffic information and other useful resources, without the need to open additional apps.
New privacy measures at Google were also shown, with Google account settings now living in the top right of your smart device of choice, allowing easy access to the most relevant account details with a tap of the screen. Incognito mode will also be coming to Google Maps, allowing users to obscure their location and search history if so desired.
Federated Learning, a new form of machine learning created to save bandwidth and protect user data, allows models to quickly pick up new and emerging slang or shorthand words, such as YOLO, by training on the device itself and sharing only the resulting model updates, rather than sending raw user data over the network.
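The core idea can be illustrated with a minimal NumPy sketch of federated averaging: each simulated "device" takes a gradient step on its own private data, and only the updated weights are returned for the server to average. All function names and the toy data here are hypothetical, purely for illustration, and not Google's actual implementation.

```python
import numpy as np

def local_update(weights, examples, labels, lr=0.1):
    """One gradient-descent step of linear regression on a device's
    private data. Only the updated weights ever leave the device."""
    preds = examples @ weights
    grad = examples.T @ (preds - labels) / len(labels)
    return weights - lr * grad

def federated_average(global_weights, device_data):
    """One server round: each device trains locally, then the server
    averages the returned weight vectors. Raw data stays on-device."""
    updates = [local_update(global_weights.copy(), X, y) for X, y in device_data]
    return np.mean(updates, axis=0)

# Toy example: three "devices", each holding private samples of y = 2x.
rng = np.random.default_rng(0)
devices = []
for _ in range(3):
    X = rng.normal(size=(20, 1))
    y = 2.0 * X[:, 0]
    devices.append((X, y))

w = np.zeros(1)
for _ in range(50):
    w = federated_average(w, devices)
print(w)  # converges toward the true weight, 2.0
```

The point of the sketch is the communication pattern: the server only ever sees averaged weight vectors, which is how keyboard models can learn new slang without collecting what any individual typed.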
Live Captions, a new feature that essentially provides subtitles for real-life scenarios, makes it possible for users with hearing or communication difficulties to quickly understand verbal information through their smart device. This technology is possible due to a shift from the cloud to on-device learning software already living on the user’s phone.
Stephanie Cuthbertson took the stage, briefly talking about upcoming foldable phones, with, of course, an emphasis on Android Q. 5G network features were shown off, and finally Dark Theme was unveiled, promising a reduction in battery usage.
A new mode for Android users, known as Focus mode, was shown off. This mode allows users to pick and choose which apps to disable or allow for given periods of time. New parental controls, which can now be linked to a family of devices, have opened up their flexibility, allowing parents to see app activity, limit usage time and essentially control their child’s device without directly taking it from them.
Android Q Beta 3 is now available on devices from 23 different vendors, including most OEMs and all Pixel phone variants.
Rick Osterloh, Google’s VP of devices, introduced a new name for Google’s line of AI-powered smart home products, known as the Helpful Home. This new umbrella integrates Google Nest with a new product known as the Nest Hub Max. The Hub Max pulls together all the various smart devices present in the home into one useful hub. It ships with its very own camera, giving users an all-in-one portal to their home. A physical switch on the back of the device electrically disconnects the camera when not in use, giving users assurance that the camera is off when not desired.
The Nest Hub Max uses on-device machine learning to recognize the faces of those in the home, giving appropriate levels of access to residents of the household. By simply raising a hand, users can have the Nest Hub Max pause music or media without verbally asking the Google Assistant. The original Nest Hub will now be available for $129.99, and it is also now available for purchase in 12 new countries. No price or release date was given for the Nest Hub Max.
TensorFlow, Google’s open-source machine learning software, was given some screen time, with a short video demonstrating various third parties and non-Google affiliates utilizing the technology in various fields, including medical institutions. A CT scan of a patient’s lungs, which appeared healthy to 5 out of 6 radiologists given the sample, was found by Google’s next-gen AI to contain early traces of cancer, a year before it became evident and high-risk. According to Google, that one-year difference equates to a 40% increase in the survival rate among those with lung cancer.
Jeff Dean, Google’s AI division lead, took the stage, introducing the audience to a new version of Public Alerts, a prominent feature within Google’s AI for Social Good initiative. The new Public Alerts software simulates areas that would likely be affected by flooding and elevated water levels. Neural networks then correct that information against map data, granting people access to accurate predictions of what to expect in times of natural crisis.
Finally, before ending the press event, Dean touched upon Google’s AI Impact Challenge, an open call which the company launched last year. Today, Dean revealed the 20 selected groups, with a few being present during the live event. This initiative aims to give some of the world’s brightest minds the same resources Google used to deliver some of the innovations discussed during this very event.