
Internet Censorship Course / Book Workshop


Automated Content Moderation

Platforms often use some form of automation to perform content moderation. In the realm of copyright, one way of moderating content is to use a matching algorithm, such as matching a hash or fingerprint of the content against a known database of infringing content. There are different ways of performing these types of matches.
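As a rough illustration, the sketch below shows the simplest form of this idea: exact hash matching in Python, using a hypothetical set of known-infringing SHA-256 digests (the digest and filename are placeholders, not real data).

```python
import hashlib
from pathlib import Path

# Hypothetical set of SHA-256 digests for known infringing files; a real
# system would query a much larger database.
KNOWN_INFRINGING_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_flagged(path: Path) -> bool:
    """Return True if the file's digest matches a known infringing item."""
    return sha256_of_file(path) in KNOWN_INFRINGING_HASHES

print(is_flagged(Path("upload.mp4")))  # hypothetical uploaded file
```

Note that an exact hash changes completely if the file is modified in any way, which is why perceptual fingerprints such as the spectral hashing discussed later on this page are used in practice.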

Sentiment Analysis: Large Language Models

There are many tools and libraries that can be used to perform sentiment analysis. More recently, the advent of OpenAI’s GPT and other large language models has opened up new possibilities for sentiment analysis. A large language model can be used to perform a variety of natural language processing tasks; models from OpenAI (e.g., ChatGPT) and Anthropic (e.g., Claude), for example, can perform a broad array of text generation and analysis tasks.

Try asking ChatGPT or Claude to perform sentiment analysis on the same set of phrases that you used with the Perspective API.
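If you would rather script this than paste phrases into the chat interface, the sketch below shows one way to do it with the official openai Python client; it assumes you have an API key in your environment, and the model name is a placeholder you should swap for whatever model you have access to. An analogous call works with Anthropic’s client.

```python
from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY to be set

client = OpenAI()

# Reuse the same phrases you tested with the Perspective API.
PHRASES = [
    "I love this community.",
    "You people are the worst.",
]

for phrase in PHRASES:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Classify the sentiment of the user's text as positive, "
                        "negative, or neutral. Reply with one word."},
            {"role": "user", "content": phrase},
        ],
    )
    print(phrase, "->", response.choices[0].message.content.strip())
```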

Follow Up: Experiment with moderating AI-generated content. For this follow-up, create or collect content that was generated by different AI models (ChatGPT, Claude, Bard/Gemini, etc.), run it through various content moderation systems, and compare how each system classifies it.
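One way to start is to run each collected sample through a hosted moderation endpoint. The sketch below uses OpenAI’s moderation API via the official Python client; the sample texts are placeholders for outputs you collect yourself, and you could do the same with the Perspective API described later on this page.

```python
from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY to be set

client = OpenAI()

# Hypothetical snippets collected from different AI models; replace these with
# real outputs from ChatGPT, Claude, Gemini, etc.
generated_samples = {
    "chatgpt": "Sample text generated by ChatGPT...",
    "claude": "Sample text generated by Claude...",
}

for source, text in generated_samples.items():
    result = client.moderations.create(input=text).results[0]
    print(f"{source}: flagged={result.flagged}")
    print("  categories:", result.categories)
```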

Follow Up: Experiment with local LLMs (e.g., UChicago Phoenix AI). For this follow-up, install and configure a local LLM, such as UChicago Phoenix AI or an open-source model like Llama, Mistral, or Falcon. Then consider how its classifications and behavior compare with those of the hosted models above.
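If you serve the local model with Ollama (one common way to run Llama or Mistral locally), a minimal classification call might look like the sketch below; the URL, model name, and prompt are assumptions about a typical local setup, so adjust them if your environment (e.g., UChicago Phoenix AI) exposes a different interface.

```python
import requests  # pip install requests

# Assumes an Ollama server running locally with a model such as "mistral" or
# "llama3" already pulled; adjust the URL and model name for your setup.
OLLAMA_URL = "http://localhost:11434/api/generate"

def classify_locally(text: str, model: str = "mistral") -> str:
    prompt = (
        "Classify the sentiment of the following text as positive, negative, "
        f"or neutral. Reply with one word.\n\nText: {text}"
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()

print(classify_locally("This new policy is a disaster."))
```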

Follow Up: Analyze a platform’s content moderation policy. As a follow-up activity, you could input a platform’s content moderation policy into an LLM, and subsequently input other types of content into the LLM to see how it might be classified. For this activity:

  1. Select a major platform (e.g., YouTube, Twitter/X, Facebook, TikTok) and locate its official content moderation policies.
  2. Break down these policies into clear, structured guidelines that can be fed to an LLM.
  3. Create a testing protocol where you generate or collect content examples that test different aspects of the policy.
  4. Input both the policy and test content into an LLM, prompting it to evaluate whether the content violates specific policies (see the sketch after this list).
  5. Compare the LLM’s evaluations with how the actual platform has been known to moderate similar content.
  6. Analyze discrepancies and discuss potential reasons for differences between LLM interpretation and actual platform enforcement.
  7. Consider how cultural context, ambiguity in policies, and political factors might influence moderation decisions.
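The sketch below is one way to structure step 4: it embeds a policy excerpt and a piece of test content in a single prompt and asks the model for a ruling. The policy text, prompt wording, test content, and model name are all placeholders to replace with the material you collect.

```python
from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY to be set

client = OpenAI()

# Hypothetical excerpt of a platform's moderation policy; replace it with the
# actual policy text you collected in step 1.
POLICY = """Content that harasses, threatens, or incites violence against an
individual or group is not allowed. Newsworthy commentary and satire may be
permitted with context."""

PROMPT_TEMPLATE = """You are a content moderator. Apply the policy below.

POLICY:
{policy}

CONTENT:
{content}

Answer with 'violates' or 'allowed', followed by a one-sentence justification."""

def evaluate(content: str) -> str:
    """Ask the LLM whether a piece of content violates the given policy."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": PROMPT_TEMPLATE.format(policy=POLICY, content=content)}],
    )
    return response.choices[0].message.content.strip()

print(evaluate("Everyone should go harass that streamer until they quit."))
```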

Sentiment Analysis: Perspective API

The Perspective API aims to help online communities detect and filter out toxic content. It is a machine learning model that can be used to score the likelihood that a comment is toxic. The model is trained on a variety of data sources, including Wikipedia talk page comments, and is able to distinguish between different types of toxicity, such as threats, obscenity, and identity-based hate.

  1. Download and install the Perspective API client library and try it on various text inputs. Here are some instructions for getting started. You can also try the API directly from the website. (A request sketch using the REST endpoint follows this list.)
  2. You might test its effectiveness on the following:
    • Full sentences vs. phrases
    • Words or phrases with two meanings
    • Phrases in foreign languages
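The sketch below shows one way to query the Perspective API’s REST endpoint directly with Python and score a few of the text categories listed above; it assumes you have a Google Cloud API key with the Comment Analyzer API enabled, stored in an environment variable named PERSPECTIVE_API_KEY (the variable name and test phrases are just illustrative choices).

```python
import os
import requests  # pip install requests

# Requires a Google Cloud API key with the Perspective Comment Analyzer API
# enabled; this sketch assumes it is stored in PERSPECTIVE_API_KEY.
API_KEY = os.environ["PERSPECTIVE_API_KEY"]
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Return the TOXICITY summary score (0 to 1) for a piece of text."""
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Compare full sentences, ambiguous phrases, and non-English text.
for text in ["You are a wonderful person.", "shut up", "qué tontería"]:
    print(f"{toxicity_score(text):.2f}  {text}")
```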

Bonus Activity: Spectral Hashing

One approach used for audio is to perform a so-called spectral (or frequency-based) hash, which does not match the content bit-by-bit, but rather matches how the audio “sounds”, by comparing frequencies and beats through spectral analysis.

In this part of the hands-on assignment, you can download or compile the Echoprint code and perform some spectral hashes on audio files (a small comparison sketch follows the steps below).

  1. Download and install the Echoprint code. Setup instructions.
  2. Select an mp3 file and compute the spectral fingerprint for that audio.
  3. Try various modifications to see if Echoprint’s fingerprint is affected:
    • Shorten the clip (e.g., take the first 30 seconds)
    • Find a different version of the same song
  4. More generally, you could try more complex manipulations, including:
    • Change the volume of the audio
    • Change the speed of the audio
    • Change the pitch of the audio
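The sketch below is one way to script steps 2 and 3: it shells out to the echoprint-codegen binary and compares the fingerprint of a full track against that of a shortened clip. It assumes the binary you built is on your PATH, accepts "<file> [start_sec] [duration_sec]" arguments, and prints JSON containing a "code" field; the filename is a placeholder, and the similarity measure is only a crude chunk-overlap proxy, not how Echoprint’s own matching service compares codes.

```python
import json
import subprocess

# Assumes the echoprint-codegen binary built from the Echoprint repository is
# on your PATH; adjust the command-line arguments to match your build.
def fingerprint(path: str, start: int = 0, duration: int = 30) -> str:
    out = subprocess.run(
        ["echoprint-codegen", path, str(start), str(duration)],
        capture_output=True, text=True, check=True,
    ).stdout
    data = json.loads(out)
    record = data[0] if isinstance(data, list) else data
    return record["code"]

def rough_similarity(a: str, b: str, k: int = 8) -> float:
    """Naive overlap of fixed-size chunks of two fingerprint strings; a crude
    illustrative proxy for how similar two fingerprints are."""
    def chunks(s: str) -> set:
        return {s[i:i + k] for i in range(0, len(s) - k + 1, k)}
    ca, cb = chunks(a), chunks(b)
    return len(ca & cb) / max(1, len(ca | cb))

original = fingerprint("song.mp3")              # hypothetical input file
clipped = fingerprint("song.mp3", duration=15)  # first 15 seconds only
print("similarity (full vs. clip):", round(rough_similarity(original, clipped), 2))
```

You can repeat the comparison after changing the volume, speed, or pitch of the audio to see how robust the fingerprint is to each manipulation.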