Machine intuition in GPT

Together with two colleagues, Sarah Fabi and Michal Kosinski, I wrote a paper about a phenomenon we call “machine intuition”. We used a state-of-the-art large language model, namely GPT-3.5, and probed it with the Cognitive Reflection Test as well as with semantic illusions, both of which were originally designed to investigate intuitive decision-making in humans. Our results show that GPT-3.5 systematically exhibits “machine intuition”: it produces incorrect responses that are strikingly similar to the intuitive answers humans give on the Cognitive Reflection Test and on semantic illusions. The paper is available as an arXiv preprint.
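
To make the probing procedure concrete, here is a minimal sketch of how one could present a classic Cognitive Reflection Test item (the well-known bat-and-ball problem) to a model via the OpenAI API. The model name, client setup, and prompt wording are illustrative assumptions, not the exact configuration used in the paper.

```python
# Minimal sketch: posing a classic CRT item to a model (illustrative setup,
# not the paper's exact experimental pipeline).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

crt_item = (
    "A bat and a ball cost $1.10 in total. "
    "The bat costs $1.00 more than the ball. "
    "How much does the ball cost?"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative choice; the paper probed GPT-3.5
    messages=[{"role": "user", "content": crt_item}],
    temperature=0,  # deterministic sampling for reproducibility
)

print(response.choices[0].message.content)
# The intuitive (incorrect) answer is 10 cents; the correct answer is 5 cents.
# "Machine intuition" refers to the model favoring the intuitive answer,
# much like many human respondents do.
```

A response of “10 cents” would be an instance of the intuitive error the Cognitive Reflection Test is designed to elicit, which is the kind of human-like response pattern the paper examines.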