In a recent seminal paper on machine behavior, Rahwan et al. (2019) stress that ML applications “cannot be fully understood without the integrated study of algorithms and the social environments in which algorithms operate.” This essay tentatively addresses this demand by using theories and insights from developmental psychology (as well as related disciplines such as pedagogy or sociology) to assess the environmental influences that shape AI applications.
Developmental psychology focuses on long-term progressions in the experience and behavior of individuals in order to find patterns and regularities that are crucial for the development of intellectually and emotionally sound, mature individuals (Lerner 2015). In the context of (supervised) machine learning, there are three ways in which hereditary and environmental information can be inscribed into algorithms: through design choices made by programmers (Brey 2010; Friedman and Nissenbaum 1996), through particular training stimuli (i.e. data), or through a machine’s own “experience”. Setting aside the difficult question of whether hereditary factors or environmental influences matter more in shaping an individual (or, in this case, an algorithm), one can stress that AI applications combine both: the former through algorithm design, the latter through training stimuli, with both factors interacting with each other. This essay focuses on the latter aspect, combining ethical research and basic research on AI with insights from developmental psychology. The decisive question is: How can theories, empirical evidence, or categorizations from developmental psychology be used for research on AI and inform normative evaluations in AI ethics?
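The distinction drawn above can be made concrete with a minimal, purely illustrative sketch (the data and labels are invented for this example): the learning algorithm itself plays the role of “heredity” via design choices, while the training stimuli play the role of “environment”. The same algorithm, exposed to different environments, exhibits different machine behavior.

```python
# Illustrative sketch: identical algorithm ("heredity" via design choices),
# different training stimuli ("environment") -> different machine behavior.
# All data below is fabricated for illustration.

def nearest_neighbour(train, query):
    """A 1-nearest-neighbour classifier: its behavior is fully
    determined by the training examples it was exposed to."""
    return min(train, key=lambda example: abs(example[0] - query))[1]

# Two "environments": same learner, different experiences.
env_a = [(1.0, "grant loan"), (2.0, "grant loan"), (9.0, "deny loan")]
env_b = [(1.0, "deny loan"), (2.0, "deny loan"), (9.0, "grant loan")]

query = 1.5
print(nearest_neighbour(env_a, query))  # -> grant loan
print(nearest_neighbour(env_b, query))  # -> deny loan
```

The point of the sketch is not the toy classifier but the dependency it exposes: holding the “hereditary” design fixed, behavioral differences can only come from the environment.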
To answer the question above, one should bear in mind that AI applications are based on approaches resembling the theory of behaviorism (Watson 1913) and social cognitive theory (Bandura 1986). Although neither theory can explain the whole range of forms of human learning, their basic theses hold true. For a person’s socialization to succeed, it requires, among other things, a certain range of promoting influences from the social environment, which in turn can be distinguished from harmful influences. Successful development is measured by aspects such as problem-solving abilities, emotional intelligence, cognitive development, prosocial behavior, mental health, educational success, et cetera. As soon as norms for successful personal development are defined, one can roughly differentiate between positive and negative environmental impacts. Negative environmental influences can affect health, gross and fine motor skills, socio-emotional development, the speed of information processing, self-concepts, knowledge, or language behavior; they range from alcohol consumption or stress during pregnancy to residential areas with high crime rates, low educational levels, emotional, physical, or sexual abuse, and a neglectful parenting style (Sullivan and Knutson 2000; Spera 2005). In this context, it may prove useful to look for parallels regarding the influence of environmental factors on the development and design of ML applications. The following table shows several examples of how different (negative or positive) environmental influences affect machine behavior and its ethical implications.
The idea of this essay is to search for parallels between the ontogenesis of children and the training stimuli or “experiences” of AI applications. Data fed into these applications often reflects people’s (e.g., discriminatory) behavior, so that people’s behavior becomes machine behavior (Barocas and Selbst 2016). Thus, when technology ethicists talk about “moral machines” (Wallach and Allen 2009) in the context of AI applications, one also has to ask about “moral people” and “moral people’s data”. As a result, further questions one has to answer are: What are “good” environmental influences or “good” datasets for AI applications? How can one define what a “successful socialization” of an AI application is?
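How people’s behavior becomes machine behavior can be shown in a deliberately simple sketch. The “historical decisions” below are fabricated for illustration; a model that faithfully fits such data reproduces the discriminatory pattern recorded in it.

```python
# Hypothetical sketch: a model fit to records of biased human decisions
# reproduces that bias. The decision records are fabricated.
from collections import Counter

# Fabricated historical hiring decisions: (group, hired?).
history = [("group_a", True), ("group_a", True), ("group_a", False),
           ("group_b", False), ("group_b", False), ("group_b", True)]

def majority_decision(data, group):
    """Predict by majority vote among past decisions for this group,
    i.e. imitate the recorded human behavior as closely as possible."""
    votes = [hired for g, hired in data if g == group]
    return Counter(votes).most_common(1)[0][0]

print(majority_decision(history, "group_a"))  # -> True  (bias learned)
print(majority_decision(history, "group_b"))  # -> False (bias reproduced)
```

Nothing in the algorithm refers to the groups’ merits; the unequal treatment enters solely through the training stimuli, which is precisely the parallel to harmful environmental influences drawn above.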
To answer these questions, one has to focus on identifying data sources that reflect ethically sound behavior, which in turn can be identified by scrutinizing stages of moral development (Levine et al. 1985; Kohlberg et al. 1983), educational backgrounds (Burchinal et al. 2015), different amounts of social and cultural capital (Bourdieu 1984), institutional influences (Foucault 1977), system-1 or system-2 human-computer interactions (Tversky and Kahneman 1974; Lischka and Stöcker 2017), and the like, in order to promote the development of benevolent AI. The matrix of these different evaluation frameworks is then to be linked to the normative evaluation of different data sources. To do this, one has to connect developmental psychology with AI development; to the best of my knowledge, this has not been done yet. Hitherto, AI research has mainly been concerned with fairness, robustness, preventing discrimination, explainability, or preserving privacy (Barocas et al. 2018; Hagendorff 2019). Especially in the field of supervised machine learning, the question of what characterizes, from an ethical perspective, good data contexts remains largely untreated. This is crucial, since morally sound AI applications are in many regards only as sophisticated as their environmental influences or training data. Fruitful research insights can emerge from combining developmental psychology and related disciplines with AI research, for the purpose of innovating not just AI ethics but also value-driven technology development. If you are interested in collaborating on this topic, in answering the above-mentioned questions, and in developing a framework for how developmental psychology and machine learning research can be combined, feel free to contact me via email@example.com.
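The proposed matrix of evaluation frameworks linked to data sources could take a very simple computational form. The following sketch is purely hypothetical: the criterion names, sources, and scores are assumptions invented for illustration, not an existing assessment instrument.

```python
# Hypothetical sketch of the proposed evaluation matrix: candidate data
# sources scored against developmental-psychology-inspired criteria.
# Criterion names, sources, and scores are invented for illustration.
criteria = ["moral_stage", "educational_background", "institutional_context"]

sources = {
    "forum_dump":     {"moral_stage": 1, "educational_background": 1,
                       "institutional_context": 0},
    "curated_corpus": {"moral_stage": 3, "educational_background": 2,
                       "institutional_context": 2},
}

def suitability(scores):
    """Aggregate a source's per-criterion scores into one ranking value."""
    return sum(scores[c] for c in criteria)

# Rank data sources by their normative suitability as "environments".
ranked = sorted(sources, key=lambda s: suitability(sources[s]), reverse=True)
print(ranked)  # higher-scoring source first
```

In a real framework, the scores would of course have to be grounded in the empirical instruments of the cited theories (e.g., Kohlberg’s stage assessments) rather than assigned by hand, and the aggregation would likely need weighting; the sketch only shows the structure of the linkage.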
Bandura, Albert (1986): Social foundations of thought and action. A social cognitive theory. Englewood Cliffs: Prentice-Hall.
Barocas, Solon; Selbst, Andrew D. (2016): Big Data’s Disparate Impact. In California Law Review 104, p. 671.
Barocas, Solon; Hardt, Moritz; Narayanan, Arvind (2018): Fairness and Machine Learning. Available online at https://fairmlbook.org.
Bourdieu, Pierre (1984): Distinction. A Social Critique of the Judgement of Taste. Cambridge, Massachusetts: Harvard University Press.
Brey, Philip (2010): Values in technology and disclosive computer ethics. In Luciano Floridi (Ed.): The Cambridge Handbook of Information and Computer Ethics. Cambridge, Massachusetts: Cambridge University Press, pp. 41–58.
Burchinal, Margaret; Magnuson, Katherine; Powell, Douglas; Hong, Sandra Soliday (2015): Early Childcare and Education. In Richard M. Lerner (Ed.): Handbook of child psychology and developmental science. Hoboken, New Jersey: Wiley, pp. 223–267.
Foucault, Michel (1977): Discipline and punish. New York: Pantheon Books.
Friedman, Batya; Nissenbaum, Helen (1996): Bias in computer systems. In ACM Trans. Inf. Syst. 14 (3), pp. 330–347.
Hagendorff, Thilo (2019): The Ethics of AI Ethics. An Evaluation of Guidelines. In arXiv preprint, pp. 1–15.
Kohlberg, Lawrence; Levine, Charles; Hewer, Alexandra (1983): Moral stages. A current formulation and a response to critics. Basel: Karger.
Lerner, Richard M. (Ed.) (2015): Handbook of child psychology and developmental science. Hoboken, New Jersey: Wiley.
Levine, Charles; Kohlberg, Lawrence; Hewer, Alexandra (1985): The Current Formulation of Kohlberg’s Theory and a Response to Critics. In Human Development 28 (2), pp. 94–100.
Lischka, Konrad; Stöcker, Christian (2017): Digitale Öffentlichkeit. Wie algorithmische Prozesse den gesellschaftlichen Diskurs beeinflussen. Arbeitspapier. Gütersloh: Bertelsmann Stiftung, pp. 1–88.
Rahwan, Iyad; Cebrian, Manuel; Obradovich, Nick; Bongard, Josh; Bonnefon, Jean-François; Breazeal, Cynthia et al. (2019): Machine behaviour. In Nature 568 (7753), pp. 477–486.
Spera, Christopher (2005): A Review of the Relationship Among Parenting Practices, Parenting Styles, and Adolescent School Achievement. In Educ Psychol Rev 17 (2), pp. 125–146.
Sullivan, Patricia M.; Knutson, John F. (2000): Maltreatment and disabilities: a population-based epidemiological study. In Child Abuse & Neglect 24 (10), pp. 1257–1273.
Tversky, Amos; Kahneman, Daniel (1974): Judgment under Uncertainty. Heuristics and Biases. In Science 185 (4157), pp. 1124–1131.
Wallach, Wendell; Allen, Colin (2009): Moral Machines. Teaching Robots Right from Wrong. New York: Oxford University Press.
Watson, John B. (1913): Psychology as the behaviorist views it. In Psychological review 20 (2), pp. 158–177.