Investigating speciesism in LLMs

Advances in LLMs have raised questions about how these systems reflect and reproduce human moral judgments. In our new study, we investigate whether LLMs exhibit speciesism, the tendency to assign lower moral worth to individuals on the basis of their species, particularly to "farm animals". We systematically examine this question across three paradigms: (1) SpeciesismBench, a 1,003‑item benchmark assessing recognition and moral evaluation of speciesist statements; (2) established psychological measures comparing model responses with those of human participants; and (3) text‑generation tasks probing elaboration on, or resistance to, speciesist rationalizations. Across these paradigms, our results suggest that while LLMs reflect a mixture of progressive and mainstream human views, they nonetheless reproduce entrenched cultural norms around animal exploitation. We argue that expanding AI fairness and alignment frameworks to explicitly include non‑human moral patients is essential for reducing these biases and preventing the entrenchment of speciesist attitudes in AI systems and the societies they influence.