Using psychology to investigate behavior in large language models

Large language models (LLMs) are currently at the forefront of intertwining AI systems with human communication and everyday life. It is therefore of great importance to thoroughly assess and scrutinize their capabilities. Because current LLMs exhibit increasingly complex and novel behavioral patterns, one way to do this is to treat them as participants in psychology experiments that were originally designed to test humans. To this end, I wrote a new paper introducing the field of “machine psychology”. It aims to discover emergent abilities in LLMs that most traditional natural language processing benchmarks cannot detect. A preprint of the paper can be read here.

Machine intuition in GPT

Together with two colleagues, Sarah Fabi and Michal Kosinski, I wrote a paper about a phenomenon we call “machine intuition”. We probed a state-of-the-art large language model, GPT-3.5, with the Cognitive Reflection Test as well as with semantic illusions, both of which were originally designed to investigate intuitive decision-making in humans. Our results show that GPT-3.5 systematically exhibits “machine intuition”: it produces incorrect responses that are strikingly similar to the intuitive errors humans make on these tasks. The paper is available as an arXiv preprint.
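To illustrate the kind of probing this involves (not the exact protocol from the paper), here is a minimal sketch that poses the classic bat-and-ball CRT item to a GPT-3.5 model and crudely scores whether the response matches the intuitive error (10 cents) or the correct answer (5 cents). It assumes the legacy openai Python client (pre-1.0) and the “text-davinci-003” model; the prompt wording, model choice, and scoring used in the actual study may differ.

```python
# Minimal sketch of probing an LLM with a Cognitive Reflection Test (CRT) item.
# Assumes the legacy openai Python client (<1.0) and the GPT-3.5 model
# "text-davinci-003"; the paper's actual prompts and scoring may differ.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

# Classic CRT item: the intuitive (wrong) answer is 10 cents,
# the correct answer is 5 cents.
PROMPT = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=PROMPT,
    max_tokens=30,
    temperature=0,  # deterministic output for reproducible probing
)
answer = response["choices"][0]["text"].strip().lower()

# Crude string-based scoring: did the model give the intuitive error
# ("10 cents") or the reflective, correct answer ("5 cents")?
if "10" in answer and "5" not in answer:
    print("Intuitive (incorrect) response:", answer)
elif "5" in answer:
    print("Reflective (correct) response:", answer)
else:
    print("Unclassified response:", answer)
```

Setting the temperature to zero makes the model's responses deterministic, which is one simple way to make such probes repeatable across runs.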

Recent papers

Recently, three new papers have been published. Together with Kristof Meding, I conducted an empirical study on industry involvement in AI research, based on an analysis of nearly 11,000 publications from the leading AI conferences. The paper was published in “AI & Society” and can be read here.

A further paper appeared in “Minds and Machines”, in which I argue that AI systems should be given only those “environmental stimuli” for training that result in ethically desirable machine behavior. The idea is to move beyond the Big Data principle of n=all and to use new dimensions of data quality to better select which datafied behaviors are allowed to become training stimuli for machine learning applications in the first place. The paper can be viewed here.

Another paper, co-authored with my colleague Paula Helm, critically addresses AI-based policing software. While research in this area often focuses on predictive policing systems, we explicitly examined software used for criminal prosecution. An overview of these and other publications can be found here.