BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//132.216.98.100//NONSGML kigkonsult.se iCalcreator 2.20.4//
BEGIN:VEVENT
UID:20251104T023446EST-670158FxHv@132.216.98.100
DTSTAMP:20251104T073446Z
DESCRIPTION:Title: Moral Learning and Artificial Intelligence.\n\nCoffee and biscuits from 3 p.m.\n\nTraditional approaches to moral development have emphasized the explicit teaching of norms\, e.g.\, via parental instruction\, or the acquisition of behavioral dispositions by “social learning”\, e.g.\, via infant imitation and modeling of observed behaviors\, or progression through a fixed set of developmental “stages”. But what if we understood moral learning as closer to causal learning and the development of commonsense physics? Developmental evidence suggests that infants early on begin to model their physical environment and its possibilities (Gopnik & Schulz\, 2004)\, using observation but receiving very limited explicit instruction or external reinforcement. Similarly\, there is evidence that infants early on begin learning a kind of commonsense psychology that enables them to model others’ behavior in terms of intentional states\, once again using observation but very limited explicit instruction or external reinforcement (Wellman\, 2014). These internal models enable infants to interact reasonably successfully with their physical and social environment even if they are unable to articulate the causal or psychological principles involved—the knowledge underlying these capacities is therefore generalizable despite being implicit\, and so is spoken of as intuitive. Internal models are not limited to causal and predictive information\, however\, but also appear to encode evaluative information\, including evaluation of possible actions or third-party social interactions for such features as helpfulness\, harm\, knowledgeability\, and trustworthiness (Hamlin et al.\, 2011\; Doebel & Koenig\, 2013). When combined with an implicit capacity to empathically simulate the mental states of others\, these evaluative capacities can underwrite a kind of intuitive learning of commonsense morality. Such learning occurs without much explicit instruction in moral principles\, yet with a capacity to generalize and with some degree of moral autonomy—so that by age 3-4\, children will resist conforming to imposed rules that involve harm or unfairness toward others (Turiel\, 2002). To be genuinely intelligent\, artificial systems will need to possess the kinds of intuitive knowledge involved in commonsense physics and psychology. And to be both autonomous and trustworthy\, artificial systems will need to be able to evaluate situations\, actions\, and agents in terms of such categories of commonsense morality as helpfulness\, harm\, knowledgeability\, and trustworthiness. Deep learning approaches suggest how intuitive knowledge of the kind involved in predictive learning might be acquired and represented\, without being “programmed in” or explicitly taught. How might further developments of these approaches make possible the acquisition of intuitive evaluative knowledge of the kind involved in commonsense epistemic or moral assessment?
DTSTART:20180118T203000Z
DTEND:20180118T203000Z
LOCATION:Room Z330\, CA\, UdeM\, Pavillon Claire McNicoll
SUMMARY:Peter Railton (University of Michigan)
URL:/mathstat/channels/event/peter-railton-university-michigan-283824
END:VEVENT
END:VCALENDAR