Google bows to pressure with new AI principles
Google capitulated to pressure from its employees and the public last week amid growing concern over its work helping the Pentagon implement machine learning technology: the company backed down, and the contract will end.
The debate raged fiercely because few companies have any sort of public rule set or principles for what they will (or won't) do with AI. Today Google laid out seven principles that it says will guide its future AI projects:
- Be socially beneficial
- Avoid creating or reinforcing unfair bias
- Be built and tested for safety
- Be accountable to people
- Incorporate privacy design principles
- Uphold high standards of scientific excellence
- Be made available for uses that accord with these principles
These are great tenets, worth reading in full because Google goes into detail about what each one means. I was surprised, and happy, to see that Google plans to put privacy at the core of all future machine learning projects, give people a way to opt out, and refuse to use AI for any sort of surveillance in the future.
What's interesting is that while Google says it'll specifically prohibit the use of AI for any sort of harm, it will still work on military-backed projects that don't facilitate warfare.
The problem with that stance, and with the tenets above, is that while they're a great start, they're vaguely worded enough that interpretation over the long haul could become an issue.
That said, Google is the first of the technology giants to acknowledge these issues and publicly declare where its borders lie, even going so far as to say it wouldn't have participated in the government project had this framework already been in place.
Another new page details how AI should be used, and goes into incredible depth on fairness, diversity, and privacy for these tools. What's crazy is that this conversation is only just starting, and it's going to get even tougher as these tools become more powerful.
Facebook UI bug might have made you share publicly
It has now been  days since the last Facebook scandal: a UI bug accidentally set privacy to public by default for 14 million people, who may have unwittingly posted to the wrong audience as a result. Good of them to own up!
The image above gives me anxiety, and I kid you not, it's an actual photo from the data center where the FCC's comment system was hosted in 2014. Yeah, good luck managing those computers.
Not only is that photo absurd, but former FCC chief Tom Wheeler said the claimed DDoS attack during the push for net neutrality comments "didn't happen," and that even independent contractors confirmed it was never an issue.
Amazon's Fire TV Cube combines an Echo, streaming box and universal remote
I would like more of this, please, and less random IoT hubs lying around my house.