Google Crusades for ‘Fairness’

A Google sign is seen during the World Artificial Intelligence Conference (WAIC) in Shanghai, China, September 17, 2018. REUTERS/Aly Song

Project Veritas recently released clips of a conversation it recorded between a Veritas operative and Google’s Head of Responsible Innovation, Jen Gennai. The content of that conversation, combined with internal memos released and explicated by an anonymous whistleblower at the company, was meant to demonstrate a pervasive and insidious left-wing bias at Google. How insidious that bias is remains an open question, but if O’Keefe’s footage and documents are to be believed, there are certainly people at the company promoting intersectional and other critical theories designed to influence algorithms and search outcomes.

Take, for instance, the so-called “Machine Learning Fairness” algorithms used by Google, designed to avoid producing results shaped by what an internal memo describes as the “unjust or prejudicial treatment of people that is related to sensitive characteristics, such as race, income, sexual orientation or gender through algorithmic systems or algorithmically aided decision-making.” Google calls this phenomenon “algorithmic unfairness,” which sounds benign enough; later in the document, however, Google expounds upon precisely what that means in practice.

When a search result “is factually accurate” — or, in other words, when the company’s search algorithm delivers an accurate and precise representation of the world as it is — Google insists that this can “still be algorithmic unfairness.”

The memo lays out an example of this phenomenon: “Imagine that a Google image query for CEOs shows predominantly men. Even if it were a factually accurate representation of the world, it would still be algorithmic unfairness.”
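The memo does not spell out how such an adjustment would be made, but the idea it gestures at resembles what fairness researchers call demographic parity: comparing the makeup of a result set against a chosen target distribution rather than against the underlying data. The sketch below is a hypothetical illustration of that kind of check, not Google’s actual code; the result data, field names, and the 50/50 target are invented for the example.

```python
from collections import Counter

def demographic_parity_gap(results, sensitive_key, target_share):
    """Compare the observed share of each group in a result set
    against a desired target share (hypothetical fairness check)."""
    counts = Counter(item[sensitive_key] for item in results)
    total = sum(counts.values())
    gaps = {}
    for group, desired in target_share.items():
        observed = counts.get(group, 0) / total if total else 0.0
        gaps[group] = observed - desired
    return gaps

# Invented example data: an image-search result set for "CEO".
results = [
    {"title": "CEO portrait 1", "gender": "male"},
    {"title": "CEO portrait 2", "gender": "male"},
    {"title": "CEO portrait 3", "gender": "male"},
    {"title": "CEO portrait 4", "gender": "female"},
]

# A 50/50 target split is an assumption made purely for illustration.
gaps = demographic_parity_gap(results, "gender", {"male": 0.5, "female": 0.5})
print(gaps)  # {'male': 0.25, 'female': -0.25} -> results skew male relative to the target
```

The point of contention in the memo is precisely the choice of target: the gap is measured against a chosen distribution, not against the real-world one, which is why a factually accurate result set can still register as “unfair.”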

Read more: https://www.nationalreview.com/2019/07/google-crusades-for-fairness/
