About This Project

When individual bias is aggregated, does it become "legitimate"?

What Did You Just Take Part In?

You ranked 40 emoji characters by survival priority. Each character keeps only three attributes: skin tone, gender, and age.

Your choices, together with thousands of others, are aggregated into a single "collective ranking."

What Does This Have to Do with AI?

In 2023, Anthropic (the company behind Claude) partnered with the Collective Intelligence Project and asked around 1,000 people to vote on how AI should behave.

They called it "Collective Constitutional AI."

They claimed it represents "democratic input" and "participatory alignment."

CEO Dario Amodei even linked it to the future of democracy itself.

But we want to ask:

  • When individual bias is aggregated, does it become "legitimate"?
  • Does participation itself create legitimacy?
  • Who designed the voting rules? Who chose the options? Who was excluded?

Statistical Legitimacy

The core concept here is Statistical Legitimacy.

It describes a condition where a collective result is treated as legitimate not because it was deliberated or justified, but simply because it is the product of aggregation.

You may have noticed that top-ranked emojis tend to share certain traits.

That is not because "the majority must be right," but because:

  1. Everyone participates with bias
  2. The system mechanically aggregates those biases
  3. The result is presented as "collective will"
  4. Bias then gains procedural legitimacy
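The mechanism in steps 1–3 can be sketched in a few lines of code. This is a minimal illustration, not the site's actual algorithm: the Borda-count scoring rule and the bias model (a shared nudge toward lower-indexed characters) are assumptions chosen to show how a bias present in individual ballots survives aggregation intact.

```python
import random

# Hypothetical sketch of "mechanical aggregation of biased ballots."
# The Borda count and the bias model below are illustrative assumptions,
# not the actual voting rule used by this project.

CHARACTERS = [f"emoji_{i}" for i in range(40)]

def biased_ranking(rng):
    """One participant's ballot: a random order nudged by a shared bias
    (here, a mild preference for lower-indexed characters)."""
    return sorted(CHARACTERS,
                  key=lambda c: int(c.split("_")[1]) * 0.1 + rng.random())

def aggregate(rankings):
    """Borda count: a character earns (n - position) points per ballot;
    the 'collective ranking' is simply the totals, sorted."""
    n = len(CHARACTERS)
    scores = {c: 0 for c in CHARACTERS}
    for ballot in rankings:
        for pos, c in enumerate(ballot):
            scores[c] += n - pos
    return sorted(CHARACTERS, key=lambda c: -scores[c])

rng = random.Random(0)
collective = aggregate([biased_ranking(rng) for _ in range(1000)])
# The shared bias is not cancelled out by volume: in expectation,
# low-indexed characters cluster at the top of the "collective will."
print(collective[:5])
```

No single ballot here is an outlier; each one is "just a preference." Yet because every ballot leans the same way, the aggregate presents that lean as a neutral, procedurally clean result.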

We Are Not Blaming You

Everyone has bias. The people who designed this system do too.

The goal is not to shame you, but to make visible:

  • How your bias is collected
  • How it is aggregated with others' biases
  • How it turns into a seemingly neutral "collective outcome"

When AI companies claim their systems represent "human values," remember what you experienced here.