#166: Google adds AI everywhere
AI is here, and it's in everything. Is that OK?


Google focuses on AI as a differentiator

This week I attended Google I/O, the company's annual developer event, in Mountain View for a look at the future of the company's products – which turned out to be a dizzying array of updates for consumers.

Many of Google's products are getting "artificial intelligence" injected into them for the first time, now that the company's machine learning has matured. It's betting that this is the key differentiator going forward, and is how it can defend against Apple's hardware chops: sheer software innovation.

Android, for example, will soon understand how you use your phone throughout the day and adjust its performance accordingly to save battery until you go to bed. It will use the same smarts to adjust screen brightness to your liking and a bunch more – all performed locally on the phone.

A future update to Assistant will understand when you're continuing a conversation and keep it flowing without repeating the hotword, and Google News uses AI to understand news stories, group them together, and add additional context on its own.

Google's strategy is clear: get great hardware into your hands, then make it significantly better with software alone, quietly adjusting over time to fit how you actually use it.

Previously, Google built these kinds of smarts in out of the box but didn't talk about them, nor did it add them at the OS level for use beyond the confines of its own devices. That's changing in a big way.

The Pixel 2, for example, is widely regarded as having one of the best cameras available; it uses fairly standard parts, but does some on-device neural magic to make its photos look so good. Google Home has only two microphones to the Amazon Echo's seven, yet performs just as well with less thanks to software-side beamforming that understands the room's acoustics and makes adjustments without the extra components.
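Google hasn't published how Google Home's beamforming works, but the classic technique it alludes to is delay-and-sum: delay each microphone's signal so that sound arriving from a chosen direction lines up, then average. A minimal sketch (whole-sample delays only; real systems interpolate and adapt continuously — the function and parameter names here are illustrative, not Google's):

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second, at room temperature


def delay_and_sum(signals, mic_positions, angle_deg, sample_rate):
    """Steer a microphone array toward angle_deg by delaying and summing.

    signals:       list of equal-length sample lists, one per microphone
    mic_positions: (x, y) coordinates of each mic in metres (only x is
                   used here, assuming mics laid out along the x-axis)
    angle_deg:     steering direction; 0 = along +x, 90 = broadside
    """
    angle = math.radians(angle_deg)
    # Extra path length sound travels to each mic along the steering
    # direction, converted to a delay in whole samples.
    delays = [
        round(x * math.cos(angle) / SPEED_OF_SOUND * sample_rate)
        for x, _ in mic_positions
    ]
    shift = min(delays)
    delays = [d - shift for d in delays]  # make all delays non-negative

    n = len(signals[0])
    out = [0.0] * n
    for sig, d in zip(signals, delays):
        for i in range(n - d):
            out[i + d] += sig[i]  # aligned signals add coherently
    return [s / len(signals) for s in out]
```

Sound from the steered direction adds coherently while off-axis noise partially cancels, which is how clever software substitutes for extra microphones.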

Google's now doubling down, applying this across its products and into its software at the most fundamental level... your operating system.

By understanding how people actually use their phones, and making adjustments over time, devices will become less blunt algorithmic instruments and more tailored to us. Some of these changes are behind the scenes, others are more obvious.

Slices, a new Android feature, uses similar technology to anticipate what you might want to do next and surfaces options from inside apps so you don't need to tap around to get things done. For example, it might offer an instant "order an Uber to Shoreline Amphitheatre" button when your calendar says you should be on your way there.

This is a step change for our devices: they'll soon understand much more about how we live with them and adjust accordingly, rather than just doing exactly what we've configured them to do. The question that remains is what this means for users. How will they know what's going on behind the scenes? Will they care?

That much is unclear, but our devices have reached a level of complexity where this is likely for the better, and it will change how we expect to interact with software.

Google's betting that AI will help us live our lives and reduce technology's complexity. Whether that's true remains to be seen, but it's a fairly different approach from Apple's, which is predicated on yearly hardware improvements. Consumers may not even notice the AI itself, but they'll notice its ambient effects.

Maybe soon we'll have fewer endless menus and configuration options, just devices that get out of the way and help us get stuff done.


Other news

🤖 Android gets a gesture-focused interface

🔓 Critical PGP flaw might expose secure email

👻 Snap rolls out a redesign of its redesign

🕐 Google includes digital wellbeing tools in Android P

🤷 Huge executive shakeup at Facebook


Technology moves fast. Keep up with it, and be the first to know with re:Charged, my new briefing delivered four days a week. I'll tell you what matters and why.

Join 300 others building a better model for news! Sign up here today, and keep the ads away. 💌


Should I give my children a smart speaker?

A few weeks back, Amazon touted a new Echo device built specifically for kids. It's colorful, drop-proof and even has special software designed for kids (Google is now doing this too). The big question: should kids have a personal voice assistant? And when is it OK?

I don't have children, but we grappled with the "what if" scenario on the Charged Podcast last week; it's an interesting question to consider, and the technology isn't going away.

🌎 Medium (if you can't read it, open it in incognito mode)


Other great reads

Mark Zuckerberg doesn't understand journalism (The Atlantic)

I dream of content trash (The Baffler)

Educating the next generation of designers (This post is about Zach Grosser, our new podcast host!)


We've got a new podcast!

If you've been following this for a while, you likely know about Charged Podcast, but I have some exciting news! We've created an all-new podcast format with two hosts, and are now releasing episodes weekly discussing the implications of our ever-shifting technology world.

Jump in, check out the new format, and let me know what you think. We're on Apple Podcasts, Google Podcasts and wherever else is good. 

🎙 Check out the new Charged Podcast


Thanks for reading! You're a part of a community of 15,000 others getting the best in tech news every week.

