Media Policy Design Problem

Ellen P. Goodman
Oct 5, 2018

We have a media policy design problem when it comes to digital speech platforms. Media policy has been entirely privatized, with speech platforms assuming the role of government without transparency or accountability.

I will briefly address how First Amendment (“FA”) doctrine has evolved to place ever greater restrictions on government power over platforms at the same time that platforms have grown more powerful. This is principally an American story, but the platforms have exported this structure globally, and we are seeing pushback in new laws in Europe, South America, and elsewhere that try to rein in platform power.

In the U.S., the result of the legal structure is that: (1) platforms have unfettered and unaccountable authority to make media policy and (2) to the extent that there is any government role, it is exercised via threat and suasion in ways that are also unaccountable and unfettered. Because of the monopoly structure of digital platforms, policymakers need only convince a couple of CEOs to change their speech policies to do privately what the FA would not allow them to do publicly. This is not a good situation whether one favors an entirely unregulated Internet or accountable media policy.

Here I highlight three FA strands to this predicament: (1) the status of platforms as speakers, indeed with special privileges; (2) platform internalization of certain FA principles of “neutrality” and resistance to defining and privileging the press; and (3) secular trends in FA jurisprudence to expand FA coverage and protection for the sake of protecting economic interests against regulation. Then I’ll conclude with some suggestions for the way forward.

1. Status of Platforms as Speakers

The FA applies only to state action to regulate speech. But right now, it is private entities that are most influential in censoring, encouraging, and funding speech. This is not a new phenomenon. But the degree and kind of influence are new. Platforms like Facebook not only control what’s on their platforms, but also shape the discursive space as a whole by purchasing competitors, creating conditions for exits, and intermediating many personal interactions. They in effect make media policy. The ways that digital platforms use data, shape preferences, and control markets are meaningfully different from how big publishers and broadcasters operated in the past.

The critique of platform media policy is essentially three-fold:

1. They allow too much bad speech:

a. Structured to reward extreme, sensational, polarizing speech

b. Structured to recommend content that exploits psychological vulnerabilities and bypasses the rational function

c. Fail to enforce prohibitions against violence, incitement, hate speech, and foreign campaign influence

2. They don’t support good speech: they eviscerate news businesses and fail to pay their share of the costs of producing the information they circulate

3. They are not fair or transparent; they are essentially black boxes, particularly with respect to advertising, data collection and sharing, private censorship, ranking, and bias.

Platforms are not the first powerful speech intermediaries to undergo critique. Private governance of speech is an old story. Whenever private speakers dominate a field of speech, there is tension between FA law, which is essentially laissez faire, and FA values, which may call for interventions to ensure more participation, more “good” speech, transparency, etc.

The last really big technological transformation in speech mediation came in the form of broadcasting. There, the law addressed this tension by relaxing FA strictures to allow for regulation in the name of FA values. Specifically, the U.S. Supreme Court adopted the notion of “scarcity” (Red Lion). The physical scarcity of the airwaves, and the special licenses that broadcasters had to exclusively occupy them, imposed on broadcasters special responsibilities, such as public interest requirements and concentration limits. These requirements are mostly gone now, but largely for political reasons rather than FA reasons.

When the Internet came along in the 1990s, it was seen as a corrective to the platform power of broadcasters and cable. Early court decisions classified Internet speech intermediaries like search and bulletin boards as FA speakers (Reno). This was a foundational move, without which the Internet would not have developed as it did. It certainly made it much more likely that the Internet would be American-born.

Importantly, not only would the full force of the FA apply to Internet intermediaries, but government intervened to boost the power of the emerging platforms with Section 230 of the Communications Decency Act, adopted in 1996. Section 230’s 26 words provide sweeping immunity from civil liability for intermediary platforms: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In other words, they have immunity for circulating speech regardless of its harm. Section 230 bestows on online intermediaries both cost and time-to-market advantages over those who are held accountable for informational harms. It is possible that Internet platforms, with their network effects and winner-take-all structure, would have become essentially monopoly providers no matter what. Section 230 was at the very least an accelerant and possibly a necessary component of dominance.

As we look at the state of FA law today with respect to Internet platforms, there is nothing like the scarcity policy that would relax FA strictures for government attempts to achieve positive FA values. Rules that have been adopted in Europe, like the Right to be Forgotten or hate speech liability, would almost certainly be unconstitutional in the U.S. The same is true for other proposals like search neutrality or preferences for public media. The recently passed Stop Enabling Sex Traffickers Act (SESTA) takes the first bite out of Section 230 by holding platforms criminally liable for the circulation of speech that encourages or facilitates sex work. It’s not clear that even this law is constitutional under current doctrine.

2. Internalized First Amendment Values

The second FA issue here is really how the platforms have disabled themselves from responsible self-regulation by in effect stepping into the shoes of a hands-off government. As private entities, free from liability under Section 230, and free from government regulation under the First Amendment, the platforms are left to regulate themselves. Policymakers and the public are mad at them for failing. One of the frustrations here is that the platforms have trouble admitting that they do invariably regulate speech through design choices and policies. They insist that they’re tech companies, not media companies. They insist that they’re essentially neutral with respect to speech, showing that they’ve internalized FA law and placed themselves in the role of government (without the associated accountability checks). They don’t want to regulate speech, but rather remain “neutral” as the government is supposed to be.

As part of this worldview, they resist defining the press. One of the failures of FA law, in my view, is that it has not fleshed out the meaning of the Press Clause. There is freedom of speech and of the press, but the courts have avoided defining what the press is, for good reason: it’s really hard and could be dangerous. So too, the platforms have, until recently, resisted making distinctions between fact-checked reporting and other speech. This unwillingness to engage in content discrimination, as if they were the government, is one of the sources of private regulatory failure.

3. Lochnerization

Let’s return to the possibility of an outside governmental check on the platforms’ regulatory regimes. As speakers, platforms are well protected by the FA. But FA law is evolving in ways that may immunize companies from regulation even when they’re not engaged in expressive communication. Throughout the 2000s, at the same time that the Internet was developing, FA doctrine has shifted to make it even harder for government to regulate. This is what some scholars call the Lochnerization of the FA (after a famous pre-New Deal case that held labor laws unconstitutional for interfering with freedom of contract).

There are two principal moves: (1) courts are expanding the coverage of the FA to reach communications that hadn’t really been considered speech subject to FA protection (e.g., data about prescribing habits, SEC disclosure requirements, warning labels on products, product descriptions, etc.) and (2) there’s been an increase in the amount of protection for what is covered. In other words, government must satisfy higher and higher levels of scrutiny in order to convince courts that what was once seen as economic or market regulation is not offensive to free speech.

These jurisprudential changes have been the result of a cunning campaign by libertarian groups like the Washington Legal Foundation, sometimes with help from the Chamber of Commerce or Cato. The greatest victory in this effort was probably Citizens United. Lochnerization may not be that consequential for digital speech platforms, since there is no question that they deal in core FA expression. It may be more important for digital platforms like Uber that use data in ways that are not expressive, but that constitute communication all the same.

4. Towards Solutions

Given this state of affairs, what are some regulatory options? Here are some mutually compatible ones:

a. Deemphasize behavioral regulation and focus on structural regulation, like antitrust, interoperability, data portability, and possibly other forms of regulation that could be developed and enforced by a regulatory agency.

b. Develop something like a scarcity rationale focused not on radio frequencies, but perhaps on data, that recognizes the gatekeeping function of digital platforms. This could permit regulation consistent with understandings of the FA.

c. Self-regulation through consortia of platform companies that agree to public rules of behavior and nurture corporate codes of conduct with respect to data analytics and information practices (e.g., humane tech).

d. A new approach to media education and civic responsibility that attacks the problem of disinformation on the demand side and increases demand (and willingness to pay) for high quality information.


Ellen P. Goodman is Distinguished Professor at Rutgers Law, working on information law, media, algorithmic governance, smart cities, free speech, disclosure, and green marketing.