Parliamentobserver

Policy recommendations for addressing content moderation in podcasts

September 29, 2022

By Valerie Wirtschafter, Chris Meserole


Charlie Kirk, the conservative activist and podcast host who played an important role in spreading misinformation about the outcome of the 2020 election, speaks at the Conservative Political Action Conference in Orlando, Florida on February 24, 2022. (Photo by Zach D. Roberts/NurPhoto via Reuters Connect)

A great reckoning has arrived for content moderation in podcasts. Just as Facebook, Twitter, YouTube, and other digital platforms have struggled for years with difficult questions about what content to allow on their platforms, podcast apps must now weigh them as well. What speech should be permitted? What speech should be shared? And what principles should inform those decisions?  

Although there are insights to be gleaned from these ongoing discussions, addressing the spread of hate speech, misinformation, and related content via podcasts is different than on other social-media platforms. Whereas digital platforms host user-generated content themselves, most podcasts are hosted on the open web. Podcasting apps typically work by plugging into an external RSS feed, downloading a given podcast, and then playing it. As a result, the main question facing podcasting apps is not what content to host and publish, but instead what content to play and amplify.  
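This fetch-and-play architecture is simple enough to sketch. The Python snippet below shows the core step a podcasting app performs: parsing an RSS 2.0 feed and extracting each episode's audio enclosure URL. The feed, titles, and URLs here are all invented for illustration; a real app would first download the feed over HTTP and then stream each enclosure.

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical podcast RSS 2.0 feed of the kind apps poll for episodes.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Podcast</title>
    <item>
      <title>Episode 1</title>
      <enclosure url="https://example.com/ep1.mp3" type="audio/mpeg" length="12345"/>
    </item>
    <item>
      <title>Episode 2</title>
      <enclosure url="https://example.com/ep2.mp3" type="audio/mpeg" length="23456"/>
    </item>
  </channel>
</rss>"""

def list_episodes(feed_xml: str) -> list[dict]:
    """Extract episode titles and audio URLs from an RSS 2.0 feed."""
    channel = ET.fromstring(feed_xml).find("channel")
    episodes = []
    for item in channel.findall("item"):
        enclosure = item.find("enclosure")
        episodes.append({
            "title": item.findtext("title"),
            "audio_url": enclosure.get("url"),
        })
    return episodes
```

Because the audio file lives at the publisher's own URL, removing an episode from an app's catalog does not take it offline; it only stops that app from surfacing it.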

Making those determinations is far from straightforward, of course, but the challenge is not an intractable one. From new policies and user interfaces to novel regulatory approaches, the podcast ecosystem can and should employ far more robust content-moderation measures. 

Balancing moderation with censorship 

Debates over content moderation in podcasts hinge primarily on whether and how widely to share so-called “lawful but awful” content. Major podcasting apps—the applications commonly used on smartphones, tablets, and computers to listen to and download podcast episodes—already have policies and procedures in place to deal with blatantly illegal content. Spotify or Apple Podcasts won’t knowingly distribute an Islamic State recruitment podcast, since doing so would open them to prosecution for supporting a designated terrorist group. How podcasting apps should handle hate speech, misinformation, and related content that is legal but may have harmful societal effects is far less clear.  

Below the level of blatantly illegal content, the most popular podcasting apps face a daunting challenge. On the one hand, given the scale and reach of apps like Spotify and Apple Podcasts—each now enjoys more than 25 million monthly podcast listeners in the United States—their content moderation policies need to account for the societal harms that can result from the mass distribution of hate speech and misinformation. Popular podcasts played a prominent role in spreading the so-called “Big Lie” in the lead-up to the January 6 assault on the U.S. Capitol, for instance, and have also been a key vector in spreading misinformation related to COVID-19 vaccines, leading to unnecessary deaths. On the other hand, popular podcasting apps also have a responsibility not to curtail speech too aggressively. Since hate speech and misinformation can be difficult to define, excessively restricting the reach of contentious political speech—as China, Russia, and other authoritarian states are wont to do—risks unduly limiting the freedom of expression on which democratic discourse depends.  

Until recently, major podcast applications have largely refrained from balancing free speech with societal harms at all. Whereas major platforms like Facebook and Twitter have developed sophisticated platform policies and interface designs to address “lawful but awful” content, the major players in the podcasting space have yet to establish similarly robust policies and measures. As a result, the guidelines and processes for content moderation in the podcasting ecosystem remain relatively underdeveloped and opaque.  

Podcast app policies 

For starters, podcasting apps need to develop far more nuanced and transparent policies for the kinds of content that users can download and play. Podcasting applications have long argued that because they typically do not host content themselves, they operate more like search engines than a traditional social media network or file-sharing service. That is undeniably true. But major search engines like Google and Bing still have well-developed guidelines for the kinds of content they will surface in search results, and those guidelines go well beyond blocking illegal content alone. By comparison, Apple’s podcast guidelines for illegal or harmful content are enumerated in a paltry 188 words. One of the guidelines includes a prohibition on “defamatory, discriminatory or mean-spirited” content but gives no indication of how these terms are defined. And in stark contrast to YouTube and Spotify, Apple has no policies at all for managing election- and COVID-related misinformation. 

Podcasting applications should also have clear guidelines for what kinds of podcasts the app itself will recommend. Along with word-of-mouth, users tend to discover podcasts through a given app’s “most popular” feature (e.g., Apple’s “Top 100” list) or a “personal recommendations” feature (e.g., Apple’s “You Might Also Like” section). By definition, these features will not recommend content that has already been removed. But without further guidelines, they may recommend so-called “borderline” content that comes close to violating an application’s guidelines without actually doing so. By way of example, consider a podcast that falsely claims vaccines are responsible for mass infertility. Such a podcast would not violate, say, Spotify’s prohibition on podcasts that claim vaccines cause death—and therefore would not be blocked within the Spotify app. But that does not mean Spotify’s algorithms should actively promote and recommend a podcast linking vaccines to infertility to its users. Just as YouTube and other platforms have developed separate guidelines for the kinds of content their recommendation algorithms can promote, so too should major podcasting apps like Spotify and Apple Podcasts develop nuanced policies around the kinds of podcasts they are comfortable playing in their app but not amplifying across their user base. 
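One way to operationalize the distinction between blocking and not amplifying is a tiered policy, under which borderline content remains playable but is excluded from recommendation features. The sketch below assumes hypothetical classifier labels (e.g., `vaccine_infertility_claim`); it illustrates the structure of such a policy, not any app's actual rules.

```python
# Three possible outcomes for an episode under a two-tier moderation policy.
REMOVE = "remove"                   # violates app guidelines outright
DEMOTE = "allow_no_recommend"       # borderline: playable, but never recommended
ALLOW = "allow"                     # eligible for recommendation features

def moderation_action(labels: set[str]) -> str:
    """Map an episode's (hypothetical) content labels to a moderation action."""
    prohibited = {"vaccine_death_claim", "incitement"}              # tier 1
    borderline = {"vaccine_infertility_claim", "election_rumor"}    # tier 2
    if labels & prohibited:
        return REMOVE
    if labels & borderline:
        return DEMOTE
    return ALLOW
```

The key design point is the middle tier: an episode tagged only with a borderline label stays in the catalog and remains searchable, but the recommendation system filters it out before ranking.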

User features 

Podcast apps should also build more robust measures for reporting. Whereas major social media platforms rely on a mix of algorithmic filtering, manual review, and user reporting to screen posts for harmful content, podcast apps frequently do not have well-developed algorithms or in-house moderation teams to identify harmful content at scale. Absent the development of sophisticated, real-time systems that allow for better monitoring of prohibited and borderline content, these apps will remain more dependent on user reporting to identify harmful content.   

Yet clear and easy-to-use mechanisms for user reporting are conspicuously underdeveloped in several podcasting applications. Whereas social media networks typically have reporting features embedded in the primary user interface, not all major podcasting apps employ similar features. On Google’s Podcasts app, users looking to report inappropriate content can “send feedback” via a simple text form. On Spotify, neither the desktop nor iPhone app offers an easy reporting process for users: the interface provides no means to directly report content from a podcast series’ page. 

By comparison, Apple Podcasts offers a more robust reporting experience: at both the series and episode level, users are invited to “Report a Concern.”  

From there, users are directed to a webpage, which delineates specific categories of “concern” that may be in violation of their content moderation policies. Apple’s reporting interface highlights two essential features for leveraging the collective knowledge of users as a tool in content moderation: (1) clear icons in the primary user interface that direct users toward a reporting process; and (2) multiple categories for different types of violations that link to a specific content moderation policy.  
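The second feature can be made concrete with a small data model: each reportable “concern” carries a link to the specific policy it enforces, so a user report arrives already tied to a rule rather than as free-form feedback. The category names and policy URLs below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ReportCategory:
    """One selectable 'concern' in a reporting flow, linked to the policy it enforces."""
    label: str
    policy_url: str  # deep link to the specific content-moderation policy

# Hypothetical categories; a real app would publish its own list and policy links.
CATEGORIES = [
    ReportCategory("Hateful or abusive content", "https://example.com/policies#hate"),
    ReportCategory("Medical misinformation", "https://example.com/policies#medical"),
    ReportCategory("Copyright infringement", "https://example.com/policies#copyright"),
]

def build_report(category_label: str, episode_id: str, note: str) -> dict:
    """Assemble a structured report that cites the policy the user believes was violated."""
    category = next(c for c in CATEGORIES if c.label == category_label)
    return {"episode": episode_id, "category": category.label,
            "policy": category.policy_url, "note": note}
```

Routing reports this way lets a moderation team triage by policy area instead of reading every submission from scratch.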

Finally, in addition to improved user reporting, some podcasting apps may consider experimenting with voting and commenting systems. For example, both Reddit and Stack Overflow, as well as other open forums like Discourse, allow users to both upvote and downvote content and leave comments on posted content. The goal of this approach is to leverage the collective knowledge of the community to ensure that quality content is featured prominently across these platforms. “Wisdom of the crowd” approaches such as these aren’t feasible for every app, and they need to be developed in a way that guards against adversarial or other attempts to game the system. Nonetheless, they offer a promising way to leverage user feedback as a way of moderating content at scale. 
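A standard guard against low-volume gaming of such systems is to rank content by the lower bound of the Wilson score confidence interval rather than by the raw upvote ratio, so that a handful of coordinated votes cannot outrank content with a large, established voting history. A minimal sketch:

```python
import math

def wilson_lower_bound(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for the true upvote fraction.

    Ranking by this bound (instead of upvotes / total) discounts items with
    few votes, which blunts small coordinated voting campaigns.  z = 1.96
    corresponds to a 95% confidence interval.
    """
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)
    return (center - margin) / denom
```

Under this ranking, an episode with 90 upvotes and 10 downvotes scores higher than one with 3 upvotes and none, even though the latter has a perfect raw ratio.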

Regulation  

Regulators and lawmakers also have a role to play in shaping policies in the podcast ecosystem. Regulating podcasts is difficult in part because it requires balancing the right to freedom of expression with the need to preserve societal welfare and protect against social harms. To strike that balance appropriately, regulators and government officials should neither seek to proscribe lawful content outright nor indirectly pressure podcasting applications to interpret their terms of service such that certain content is banned.  

Yet even if government officials should not weigh in on the permissibility of otherwise legal speech, that does not mean they should take a hands-off approach to the podcast ecosystem overall. In particular, for podcast applications with mass reach, policymakers and regulators should push for greater transparency on:  

  • Content guidelines and policies. Regulators should require podcasting apps to clearly disclose what their content moderation policies are. Ideally, the policies would also be easy for users to understand and include either examples or clarifications of how ambiguous terms will be interpreted. For example, Apple Podcasts’ prohibition on “mean-spirited” content should be qualified in more detail: How will “mean-spirited” content be distinguished from merely “critical” content? Clear guidelines about what categories of content will be restricted, and what those categories actually entail, are essential for a vibrant podcast ecosystem. Without public and transparent guidelines, content moderation decisions will appear ad hoc and undermine user trust.  
  • Moderation practices and appeals process. Podcast apps should also be required to publicly and transparently disclose high-level details about their content-moderation practices, as well as their review process. Whether a podcasting app relies on client-side scanning to check a given podcast for harmful content before playing it or instead relies primarily on user-reporting should be disclosed, as users have a right to know what role they or their devices play in the application’s content moderation process. Further, apps should also be required to publish clear guidelines for how to contest a moderation decision: If a podcast episode has been banned, users have a right to know how to appeal that decision and whether the review process will involve an automated or manual review.  
  • Recommendation algorithms. Since users often discover new podcast series and episodes via recommendation algorithms, podcasting apps should be required to disclose the content their recommendation algorithms are amplifying the most, as well as basic details about how those algorithms work. As we documented earlier this year, more than 50% of popular political podcast episodes between the November election and the January 6 assault on the U.S. Capitol contained electoral misinformation. There is a clear public interest in knowing whether those episodes were among the most recommended on Apple Podcasts or Spotify. Likewise, if those episodes were recommended widely, there is also a public interest in understanding why they were recommended. That does not mean podcast apps should be required to disclose user data or detailed information about the architecture of their algorithms, but it does mean they should be required to list basic factors about what kinds of data the algorithm considers when boosting an episode or series.   
  • Funding. At present, advertising represents the primary source of revenue for the podcasting ecosystem. While Apple requires advertising to “be in compliance with applicable law”, and Spotify requires “content providers to comply with applicable laws and regulations,” including sanctions and export regulations, there are few obvious guidelines in place for financial disclosures in podcasting beyond those dictated between sponsor and series. Furthermore, it is unclear how apps might determine if and where to report when a podcast is in fact in violation of “applicable laws.” As a result, anyone could in theory provide financial support for a podcast, including foreign governments or obscure funders. As with radio reporting guidelines, regulators could help bring transparency to this opaque business model by delineating clear public financial reporting processes for podcast series. Given the size of the podcasting ecosystem, these guidelines might be limited to those series that generate a certain minimum revenue or audience size and would most benefit from an additional level of transparency.  
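The four disclosure areas above could be standardized as a periodic, machine-readable transparency report. The sketch below is purely illustrative: every field name and figure is invented, and no regulator has mandated this format.

```python
import json

# Hypothetical shape for a quarterly transparency report covering the four
# disclosure areas discussed above. All names and numbers are illustrative.
report = {
    "period": "2022-Q3",
    "policies": {
        "url": "https://example.com/podcast-guidelines",
        "last_updated": "2022-07-01",
    },
    "moderation": {
        "episodes_reviewed": 12840,
        "episodes_removed": 310,
        "appeals_received": 95,
        "appeals_upheld": 12,
        "review_method": "user_reports_plus_manual_review",
    },
    "recommendations": {
        "ranking_signals": ["play_completions", "subscriptions", "recency"],
        "most_recommended_series": ["series-id-1", "series-id-2"],
    },
    "funding": {"sponsored_series_disclosed": 4210},
}

print(json.dumps(report, indent=2))
```

A common schema would let researchers and regulators compare apps directly rather than reconciling ad hoc blog posts.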

In short, regulators should push podcasting applications to adhere to the emerging standards for transparency enumerated in the Santa Clara Principles and elsewhere. By focusing on transparency, regulators can vastly improve the quality of content moderation in the podcast ecosystem without compromising freedom of expression. And since policymakers in the United States, the EU, and elsewhere are already considering similar transparency requirements for other digital platforms and online service providers, extending those provisions to the podcasting space should be straightforward.  

Mature content-moderation regimes

Today, nearly a quarter of the U.S. population gets their news from podcasts. As that figure continues to rise, the content moderation policies of major podcasting apps will need to mature accordingly. Podcasts are now a mass medium, yet the content moderation policies and reporting mechanisms of many podcasting apps remain remarkably underdeveloped—as do the regulatory frameworks that oversee them.  

Developing a robust content moderation framework for the podcasting ecosystem will not be simple, especially as podcasting business models and architectures evolve. With Spotify, YouTube, and now Substack entering the podcasting market in ways that upend the once-open architecture of the medium, the space now encompasses both more traditional media business models as well as newer, more decentralized ones. As a result, a flexible, broadly applicable approach to moderating content and regulating podcasting platforms will become increasingly critical. By drawing on common principles and practices that have informed content moderation in other digital platforms, the approach briefly outlined above would encourage responsible content moderation without unduly restricting free speech. To get the balance right, podcast apps, users, and regulators all have a role to play—if they embrace it. 

Valerie Wirtschafter is a senior data analyst in the Artificial Intelligence and Emerging Technologies Initiative at the Brookings Institution.
Chris Meserole is a fellow in Foreign Policy at the Brookings Institution and director of research for the Brookings Artificial Intelligence and Emerging Technology Initiative.

Facebook, Google, and Microsoft provide financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research. 
