Technology giants grapple with privacy in an AI world

What's been clear from Google I/O so far, along with Microsoft's developer event, is that artificial intelligence is becoming a pervasive, core part of everyday tools. The term "AI" is thrown around a lot, often applied to things that aren't really intelligent at all, so it's frequently hard to know what it actually means.

Seeing Google's advances in AI, and experiencing some of them first hand in Android P and Assistant, is wild, because we're beginning to move into a world in which it's difficult to understand the complexity at work behind the scenes. As AI becomes more commonplace and unavoidable, a new question arises: how much data are we giving away in return?

I'm in two minds about where we're headed, and this tension will become more apparent for all of us in the future.

On one hand, the things we're able to do as a result of these AI advances, like having a computer call a hairdresser and book an appointment for us, are amazing. On the other hand, we could be giving up much more than we anticipate in the name of convenience, as Charlie Warzel argues for BuzzFeed:

Google has always offered users a trade-off for its services: Let us know everything about you. We promise it'll be worth your while. But the company's latest vision of the future offers users a more invasive trade: dominion over the choices you make; in some cases, control of your literal words and actions.

These advances are incredible, and while I believe the benefits often outweigh the drawbacks, it's worth keeping these trade-offs in mind as the technology becomes even more pervasive. As AI works its way into our everyday workflows, we should demand more control over, and transparency into, what these systems learn about us and where that data ends up.

A good example of this in action at I/O is Adaptive Battery. Google loudly touted the feature, which learns how you use your phone and adjusts it to stretch each charge as far as possible, but didn't clearly outline how that data will be used. Oddly enough, Google has spent considerable time building these features to run locally, offline on your device, much as Apple does, yet didn't highlight this at its keynote.

AI is going to become impossible to avoid. Given its advances, and the sheer convenience they bring, users won't complain, or even question what's going on underneath. Technology companies are making AI friendly, but the side effect is that users neither understand nor care what happens behind the scenes. Where is that data going? How will it be used? Does it matter? These are simple questions that AI glosses over by sounding futuristic and friendly.

To be clear, I don't think we should try to scare people or push them away from AI, just help them understand the reality of the tools they embrace. For many, computers are still magic, so AI is mind-boggling, and the implications for society if data collection is ignored en masse are scary to consider.

At the very least, we should make people aware of the inner workings behind that magic. Something like the nutrition labels on food, with their breakdown of ingredients, or even just a simple explainer of where the data ends up, would suffice.

For now, however, it's difficult to see behind the magic curtain. It's not too late to change that.