Monitoring Social Media

Power always has to be limited. American citizens know that checks and balances are key to protecting individuals from abusive uses of state power. Dominant private actors need constraints, too. Sometimes private actors may seem like less of a threat to individual freedoms, and for that reason, they may get less oversight and less public attention. This helps to explain why social media platforms have been minimally regulated for so long, which has given them the chance to gain tremendous power, becoming almost like new sovereigns for our time.

Under the “Cloak of Invisibility,” owners control the most valuable asset on Earth: users’ personal data. This enables them to profile their users, and use or misuse those profiles in ways that might threaten democracy. Information is power, and personal information is especially powerful. Checks and balances are needed to avoid private abuses, just as they are needed to restrain government power.

In what follows, I will explain why social media has so much power. I will consider some ways that this power could be abused. Finally, I will consider how clearer moral or social norms might help to diminish this threat.

The Power of Information

Businesses are expected to seek profit, but monetizing personal data raises ethical concerns. On social media platforms, artificial intelligence and algorithms are used to enhance the user experience, to get to know users, and to target them with specific ads and information that might interest them. On social media, it is not the users who find the content but the content that finds them. This can be convenient in certain ways, but filtered content can also manipulate people in problematic ways.

Profiling enables platform owners to predict users’ decisions, and to profit by selling information to advertisers. Everyone understands that platforms provide a service connecting advertisers’ ads with specific target groups of users. User data fuels the social media system. A huge benefit of this business model is that data is non-rivalrous, as Daniel Castro points out: one party’s use of the data does not reduce the amount of data available for others. This structure requires a permanent group of users to feed the system with data, which is profiled and monitored by algorithms and artificial intelligence. The more data is available, the more profit the relevant services can generate. To maximize the audience (and the profit), owners make profit-oriented decisions, some of which encourage users to spend more time on the platforms, and others of which incentivize users to share more information about themselves.

Learning about users’ educational background, workplace, professional and personal networks, locations and travel destinations, likes, and reactions to others’ content is valuable to advertisers (and not just to advertisers). Not every piece of information is equally valuable, but all of it helps machines to understand users. According to Rahul Telang, “If you are educated or wealthy, advertisers will pay more for you.” Simply put: everything you do or don’t do offers social media platforms more information about you. Machines are able to draw conclusions from the information users share or withhold. Knowing the users helps the algorithms find what triggers further consumption of the platform’s content. Platforms learn what is most attractive to their users, and thus become their main forum to exchange views, get information, and share still more data.

This business model exploits a characteristic of human beings that political scientist Herbert A. Simon called “bounded rationality.” Simon observed that human beings are forced to make decisions based on the information presently available to them. Given our cognitive and time limitations, people are easily misled by unreliable sources of information.

Since people’s resources and capacities are limited, owners can get away with filtering the news for the platforms’ users based on their individual profiles. The potential to manipulate users is built into the system. The filtered news comes from well-selected sources and is delivered directly to users’ newsfeeds. The selected information acts as a subliminal stimulus, targeting users’ unconscious minds and shaping their interpretations of certain topics.

The Costs of Manipulation

It is obvious that social media can threaten individuals, especially children and other vulnerable people. American law has done very little to address this. European data protection rules are currently consolidated in the General Data Protection Regulation (GDPR). The GDPR sets out principles for processing personal data (lawfulness, fairness, and transparency) and lays down legal requirements for collecting and processing users’ personal data. It is particularly concerned with the lawful processing of personal data, carefully considering when processing is necessary for the legitimate interests pursued by the controller or by a third party. Sometimes, even if compelling interests exist, they may be overridden by the interests or fundamental rights and freedoms of the data subject, which require the protection of personal data. This is particularly important when the data subject is a child. There should be checks and balances in the system, even on a self-regulatory basis, to protect users’ privacy and minimize their exposure to potential harm.

Beyond the personal costs, social media can also pose risks to society at large. When multiple social media companies compete to become the most popular outlet on the web, the focus can slowly shift from professional, objective, and ethical information-sharing toward more sensationalist content. The more extreme and negative the content is, the more people it reaches. Because its business model is built to capture people’s attention, social media serves as a significant source of news for Americans today. According to a Pew Research Center survey conducted in January 2021, Facebook and Twitter top the charts for news sharing: 54% of Facebook users and 59% of Twitter users get news from these platforms. Social media platforms have no particular responsibility for ensuring that users get news that is balanced, fair, or even accurate. For them, all that matters is that people continue to click and share. When social media filters the news, users should expect to see more extreme content, and more social polarization. People find themselves in “social media bubbles” where everyone around them sees the same links they do. Those bubbles can shape a person’s worldview in powerful ways.

Perhaps most insidious of all, social media companies can use the information they collect for political purposes, potentially undermining fair elections. We started to understand this in 2008, when Barack Obama harnessed social media in his first presidential campaign to rally a majority of voters and win the election. Around 74% of internet users sought election news online during Obama’s first campaign, representing 55% of the entire adult population at the time, according to the Pew Research Center. Social media is evidently a handy tool for reaching a large audience, and politicians naturally want to use it. However, this empowers platform owners to decide who can get access, what content can be shared, and whose advertisements reach the masses. Their decisions over political content sharing might influence the outcome of an election. According to the Center for Responsive Politics, the 2018 Texas Senate race broke the record for the most money ($93 million) spent in a U.S. Senate election up to that point, much of which was raised by and spent on social media ads and events. Another concern is the banning of certain politicians from social media. Silencing certain viewpoints threatens democracy, as it may distort the public discourse in one direction.

Social media also opens society at large to potential foreign influence, which could pose real risks to national security and the integrity of our electoral system. Ben Smith wrote an informative account of TikTok’s algorithm, its impacts, and how they have been interpreted; it also lays out the Trump administration’s concerns about TikTok. First, the vast trove of data TikTok holds (for instance, the private sexual desires of fans of the app who might end up becoming American public officials) could be used to pressure or blackmail high-level officials. The second concern was whether TikTok censors politically sensitive posts. Although Citizen Lab’s report considered both concerns latent, foreign knowledge of US citizens’ profiles is still a threat. Of course, foreign actors could also use social media to influence US users’ opinions.

Setting Standards

Regulating social media is difficult, but the process may start with a more serious effort to set expectations for social media companies. How do we want them to regulate themselves?

The owners’ morality is a key question in the age of “surveillance capitalism,” Shoshana Zuboff’s name for the “bloodless battle for power and profit as violent as any the world has seen.” But, as Count István Széchenyi, often referred to as the Greatest Hungarian, famously expressed, “ownership and knowledge come with responsibilities.” His 19th-century wisdom is even more relevant in the age of surveillance capitalism. In democracies, no one should control the vast trove of information social media commands without at least cultivating, and being held to, moral standards that might serve as checks and balances within the existing legal framework.

Google attempted to construct a moral compass of sorts when it added the phrase “Don’t be evil” to its corporate code of conduct, which was also its motto for a time. Following Google’s corporate restructuring under the conglomerate Alphabet Inc. in October 2015, Alphabet took “do the right thing” as its new motto, which also forms the opening of its corporate code of conduct. The original “don’t be evil” motto was retained in the code of conduct of Google, now a subsidiary of Alphabet. Its owners recognized that Google’s model is so efficient, and so unprecedented in its power compared to traditional media outlets, that it could easily be used for evil purposes. But what does evil mean in this context?

The code of conduct explained it as “do the right thing: follow the law, act honorably, and treat each other with respect.” It might remind the reader of the principle from Roman law: Honeste vivere, alterum non laedere, suum cuique tribuere. That is, one should aspire to live virtuously, not to injure others, and to give everyone his due. These supreme norms of justice are the underlying principles of law and order in society. However, they may not provide enough guidance in the present case. What do the owners define as the right thing? How do they set their internal compass? What if the law does not answer certain questions, or provides multiple answers?

In the case of social media, owners could do the right thing (or avoid being evil) by allowing all content that does not breach the law. All lawful content should be allowed (thus excluding, for instance, hate speech or incitement), with attention given to such abuses as defamation or the violation of personal rights. Freedom of speech is critical in democracies; it is essential to protect online speech and online spaces for expressing opinions. Standards for free speech should be the same in the online sphere as in the offline sphere. Fairness matters, and fairness requires equal treatment of users whose online behavior stays within legal limits.

Social media raises serious concerns, but every cloud has a silver lining. The good news is that most people prefer normalcy over extremes. Normalcy requires a healthy balance between private and public interests. Applying high privacy standards to protect users, requiring transparent functioning from platforms, and treating users equally in their exercise of online free speech are all key to improving the social media experience. Owners’ morality and proportionate state measures can together prevent social media platforms from becoming the new unelected sovereigns.