New Approaches to Digital Platform Governance: Regulatory Analogies

Ellen P. Goodman
Nov 1, 2019

The cockfight between Facebook and Twitter over how to handle political advertising on digital platforms — and the all-too-serious consequences — is only the most recent manifestation of Western democracies’ failure to regulate digital platforms. There are a number of regulatory proposals and new laws, especially in Europe, that would create oversight for digital media platforms, content regulations, transparency requirements, and responsibilities for monitoring political communications, among other interventions. For the most part, these moves extend analog media regulatory modalities onto digital platforms, ending the misplaced deference to a passé internet exceptionalism under which digital platform owners could disclaim responsibility for harms on the grounds that they are merely neutral conduits. Commentators have argued for many years that digital platforms are media companies; responsive regulatory reform is finally coming.

However, this reform impulse runs into two obstacles: (1) many of the moves, especially outside the U.S., are content-based and bump up against individual speech liberties, which, at a time of rising illiberalism, need defending from both state and platform power; and (2) the extension of analog regulation does not scale to the speed, volume, and algorithmic channeling of digital flows. New kinds of regulatory intervention are needed to deal with these platform governance challenges, not to mention other applications of AI. Media regulation is not sufficient. Because of both free speech sensitivities and scale problems, the most promising solutions focus less on disinformation and incendiary speech and more on systemic design issues. The following ideas borrow from regulatory modalities outside of media law, drawing analogies to other kinds of systems regulation that might bear on digital platforms and other technologies.

Products liability. Pioneering attorney Carrie Goldberg brought and lost a federal case against Grindr for enabling her client’s ex-boyfriend to harass him by creating fake profiles in his name and then sending men looking for sex to his home and workplace. She claimed that the service should be liable in tort as a defective product. She lost because, under federal law (Section 230), Grindr is immune from liability for content posted by its users. If the scope of Section 230 immunity is this broad, it needs to be amended. Platforms should have some duty of care for harms caused or enabled by their design.

Danielle Citron and Benjamin Wittes have proposed that service providers should have to earn immunity through reasonable content moderation practices. Rather than assessing those practices case by case, courts would look at them as a whole. Karen Kornbluh and I have proposed a similar trimming of Section 230 immunity, but with the involvement of an expert regulatory agency. Any increased liability for digital platforms raises the danger that risk-averse platforms will over-police. That is why liability reform should be part of an ecosystem change that enables more competition and reduces the power of any single platform.

Environmental regulation. For some years, privacy scholars have been agitating to end the notice-and-consent model of privacy incursion and instead treat the promiscuous sharing of data like an environmental harm. This is partly because the negative externalities swamp the individual harm, and partly because meaningful consent is impossible where data harvesting and transmission are constant and ubiquitous. Data practices are at the root of online harms. Microtargeting based on fine-grained data dossiers is what enables YouTube autoplay and the Facebook News Feed to contribute to radicalization and conspiracy theories. Harold Feld at Public Knowledge, building on the work of Omri Ben-Shahar, has proposed a tax on data transactions. If priced properly, such a toll could wean platforms from surveillance capitalism.

Of course, forcing the platforms to internalize the true cost of their data practices would almost certainly put an end to zero-priced platform services. That is why a data pollution tax should be accompanied by subsidies for public goods. Connectivity might have to be subsidized on a larger scale than it is now, perhaps with data tax revenue. Indeed, that revenue could also help support high-quality information, local journalism, data commons, and other services the market does not provide. The environmental analogy extends here too: these are the national parks.

Product safety. Regulators, scholars, and activists have recognized that some practices designed to increase data flows threaten individual freedom of mind. The Council of Europe gestured at this in its 2019 Declaration on the Manipulative Capabilities of Algorithmic Processes, which states that “fine grained, sub-conscious and personalized levels of algorithmic persuasion may have significant effects on the cognitive autonomy of individuals and their right to form opinions and take independent decisions.” Tristan Harris’s Center for Humane Technology identifies design features that confuse or trap users into over-sharing personal information or that drive compulsive use, especially by children and other vulnerable people.

The regulatory response to dangerous product features might be to ban them or to defer to a standard-setting body to regulate them. A piece of legislation introduced by Senators Warner and Fischer, the Deceptive Experiences To Online Users Reduction (DETOUR) Act, would do both with respect to “dark patterns.” The bill “aims to curb manipulative dark pattern behavior by prohibiting the largest online platforms from relying on user interfaces that intentionally impair user autonomy, decision-making, or choice.” It would empower a professional standards body to develop best practices, prohibit features that hook kids, and stop regulated entities from conducting behavioral experiments (often through A/B testing) without informed consent.

These concepts need refinement; working them through for any given technology requires competencies that Congress doesn’t have and few legislatures do. New kinds of regulators or regulatory partnerships may be needed to determine when AI has dangerous features. In the meantime, the precautionary principle may be appropriate to hold the line before bad features are released into the “wild,” where they will do harm. For example, there is no good way yet to cabin the use of facial recognition systems so that they preserve the right of individuals to obscurity as they walk through the world. That may be a reason to ban facial recognition systems — or at least government deployment of them — as cities like San Francisco have done. Governments that procure algorithmic systems for policing or child welfare management are implementing controls on inputs, or ruling out models they simply won’t use, because of accuracy, bias, transparency, or other concerns.

Agriculture regulation. At the start of the 20th century, Upton Sinclair’s description of revolting conditions in Chicago meatpacking plants resulted in a meat inspection law. Later, agricultural inspection was expanded and carried out by the Food and Drug Administration, the Department of Agriculture, and state authorities, which check farm production practices to monitor for pathogens and vectors of contagion. One of the chief critiques of algorithmic systems is that they are black boxes subject to no outside audits. Attempts by independent researchers to “contest” these algorithms are frustrated by lack of access. Sometimes, would-be inspectors even face legal liability under the Computer Fraud and Abuse Act for scraping data.

The inspection of algorithmic systems would allow for an audit of Facebook’s content decisions or Twitter’s trending topics. It would permit assessment of the accuracy of predictions about credit-worthiness, or of promises around data flows. The recent federal court decision in hiQ v. LinkedIn, finding that scraping publicly available data is not a crime even if it violates terms of service, is a positive step in empowering private inspectors. But affirmative steps are needed. Whether that means ensuring that the appropriate regulatory bodies (such as the Federal Trade Commission) have access to proprietary algorithmic systems for inspection or relying on civil society and academic groups for that function, access should be granted and inspections made. The very forces at issue on the farm — contagion and pathogens — are at work in digital flows. A bad sentencing algorithm can spread injustice through an entire state system and beyond. A tweak to the Facebook News Feed algorithm can quickly change the public discourse. Grokking those results, even without the regulatory power to change them, is important, not least for democratizing knowledge about the medium we are swimming in.
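To make the inspection idea a little more concrete, here is a minimal, hypothetical sketch of what an outside audit of a black-box scoring system might compute: given a system’s decisions, the eventual outcomes, and demographic group labels, it reports overall accuracy and how approval rates differ across groups. The data, labels, and the `audit` function itself are illustrative stand-ins, not any particular regulator’s or platform’s method; a real audit would require exactly the access this piece argues for.

```python
# Hypothetical sketch of an outside audit of a black-box credit-scoring system.
# All names and data here are illustrative stand-ins.

from collections import defaultdict

def audit(predictions, outcomes, groups):
    """Report overall accuracy and per-group approval rates.

    predictions: list of 0/1 model decisions (1 = approve)
    outcomes:    list of 0/1 observed results (1 = repaid)
    groups:      list of group labels, one per applicant
    """
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    accuracy = correct / len(predictions)

    by_group = defaultdict(list)
    for p, g in zip(predictions, groups):
        by_group[g].append(p)
    approval_rates = {g: sum(v) / len(v) for g, v in by_group.items()}

    return accuracy, approval_rates

# Toy illustration with made-up data.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
actual = [1, 0, 0, 1, 0, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

acc, rates = audit(preds, actual, group)
print(f"accuracy: {acc:.2f}")                    # 0.75
for g, r in sorted(rates.items()):
    print(f"group {g} approval rate: {r:.2f}")   # A: 0.75, B: 0.25
```

Even a check this simple is impossible without access to decisions and outcomes, which is why the analogy to farm inspection turns on who gets to look inside.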


Ellen P. Goodman

Distinguished Professor, Rutgers Law. Information law, media, algorithmic governance, smart cities, free speech, disclosure, green marketing