Recently, courts have grappled with the question of whether data is speech for purposes of the First Amendment. Google and other tech giants have defended their algorithmic outputs under the banner of free speech. In a new essay titled "What Happens if Data is Speech," published in the University of Pennsylvania Journal of Constitutional Law Online, I take up the next question in this emerging area of the law: what happens if data is speech? I approach this inquiry from three angles.
First, I explore how affording constitutional protection to data-based outputs affects the validity of data privacy laws. Second, I turn to the power of search engines and consider which poses a greater threat to free expression: the absence of regulation of these powerful intermediaries, or the regulations themselves. As search engines evolve into decision engines, and more of our choices are informed by their algorithmic outputs, this tradeoff grows in importance, shaped by what the search engines choose to reveal and what they choose to obscure.
I conclude by offering a framework for how courts should treat algorithmic outputs for purposes of the First Amendment, based on their nexus with human interaction. The more a human interacts with an output, the closer the resulting communication comes to something the human created herself, and the stronger its claim to protection. In contrast, outputs generated with near-total autonomy and little human involvement depart further from the humanistic expression the First Amendment was designed to protect. Whatever regime the courts settle on must confront this interwoven nature of human-computer interaction.