A bill making its rounds in the Senate would, if passed, require large internet companies to disclose that their results rely on “opaque algorithms” and to offer consumers the option to see search results or content that are not personalized. The bill, called “The Filter Bubble Transparency Act,” was reported on by the Wall Street Journal (WSJ).
Activist and Upworthy co-founder Eli Pariser coined the term “filter bubble,” which describes the socially destructive impact of showing fragmented, highly personalized content to internet users.
Republican Senator John Thune is the main sponsor of the bill, which has support from both major parties. In an interview, he said that the point of the bill is to improve “transparency,” “choice” and “control” for consumers. Here is the general idea behind the bill:
[The platform] provides notice to users of the platform that the platform uses an opaque algorithm that makes inferences based on user specific data to select the content the user sees. Such notice shall be presented in a clear, conspicuous manner on the platform whenever the user interacts with an opaque algorithm for the first time, and may be a one-time notice that can be dismissed by the user.
[The platform] makes available a version of the platform that uses an input-transparent algorithm and enables users to easily switch between the version of the platform that uses an opaque algorithm and the version of the platform that uses the input-transparent algorithm by selecting a prominently placed icon, which shall be displayed wherever the user interacts with an opaque algorithm.
“Personalization” is currently a major focus for marketers and tech companies, but the proposed law would require platforms to offer a version of their services stripped of that algorithmic personalization. Google is widely believed to personalize results heavily, though the company has previously said it does not, with the exception of location and “immediate context from a prior search.”
If that is the case, the bill would have a limited practical impact on Google, but it could have a bigger impact on companies like YouTube, Facebook and possibly even Amazon. Also affected would be websites and apps that use algorithms taking personal data or context into account.
The bill would apply to “any public-facing website, internet application, or mobile application, including a social network site, video sharing service, search engine, or content aggregation service.”
The bill represents an expression of frustration with Google and Facebook in particular, and an attempt to exert some control over how they present content. Republicans believe that “conservative voices” are being “filtered out” by big internet platforms, which they believe are biased. For Democrats, platforms like Facebook are manipulated by bad actors and partly responsible for intensifying the polarization of the electorate.
The bill wouldn’t affect companies with fewer than 500 employees, less than $50 million in revenue or audiences of fewer than one million users.
Currently, it doesn’t seem that the proposed law would require companies to disclose the specific inputs into their algorithms, only that they are using them. U.S. courts have previously ruled, in Search King, Inc. v. Google Technology, Inc. (2003) and Langdon v. Google, Inc. (2007), that search results are protected editorial speech.
Theoretically, this reasoning applies equally to Facebook’s News Feed or to YouTube results and content recommendations. But the Supreme Court hasn’t ruled on the specific question of whether search results are protected speech under the First Amendment.
It isn’t clear whether the First Amendment might be used to challenge the bill’s constitutionality. The WSJ speculated that the bill’s provisions could ultimately be folded into broader congressional digital privacy legislation. If the bill becomes law in some form, the Federal Trade Commission would be in charge of enforcement.