Moral reasoning


Moral reasoning is the study of how people think about right and wrong and how they acquire and apply moral rules. It is a subdiscipline of moral psychology that overlaps with moral philosophy, and is the foundation of descriptive ethics.

Description


Starting from a young age, people can make moral decisions about what is right and wrong. Moral reasoning, however, is a process that occurs both within and between individuals.[1] Prominent contributors to its study include Lawrence Kohlberg and Elliot Turiel. The term is sometimes used in a different sense: reasoning under conditions of uncertainty, such as those that commonly obtain in a court of law. It is this sense that gave rise to the phrase "to a moral certainty";[2] however, this usage is now seldom encountered outside of charges to juries.

Moral reasoning is an important and often daily process that people use when trying to do the right thing. For instance, people are regularly faced with the dilemma of whether or not to lie in a given situation. They make this decision by reasoning about the morality of their potential actions and by weighing those actions against their potential consequences.

A moral choice can be a personal, economic, or ethical one: it may be described by some ethical code or regulated by ethical relationships with others. This branch of psychology is concerned with how such issues are perceived by ordinary people, and so is the foundation of descriptive ethics. There are many different forms of moral reasoning, which are often shaped by culture. Cultural differences in the high-level cognitive functions associated with moral reasoning can be observed by relating the activity of brain networks in people from various cultures to their moral decision making. These differences demonstrate that cultural influences on moral reasoning and decision making have a neural basis.[3]

Distinctions between theories of moral reasoning can be accounted for by evaluating inferences (which tend to be either deductive or inductive) based on a given set of premises.[4] Deductive inference reaches a conclusion that is true provided that the premises preceding it are also true; for example, from the premises "lying is wrong" and "this act is a lie," one can deduce that this act is wrong. Inductive inference, by contrast, goes beyond the information given in the premises, basing the conclusion on the reflection they provoke.[4]

In philosophy


Philosopher David Hume claims that morality is based more on perceptions than on logical reasoning.[4] This means that people's morality is based more on their emotions and feelings than on a logical analysis of any given situation. Hume regards morals as linked to passion, love, happiness, and other emotions and therefore not based on reason.[4] Jonathan Haidt agrees, arguing in his social intuitionist model that reasoning concerning a moral situation or idea follows an initial intuition.[5] Haidt's fundamental stance on moral reasoning is that "moral intuitions (including moral emotions) come first and directly cause moral judgments"; he characterizes moral intuition as "the sudden appearance in consciousness of a moral judgment, including an affective valence (good-bad, like-dislike), without any conscious awareness of having gone through steps of searching, weighing evidence, or inferring a conclusion".[4]

Immanuel Kant had a radically different view of morality. In his view, there are universal laws of morality that one should never break, regardless of emotions.[4] He proposes a four-step system to determine, based on logic and reason, whether a given action is moral. The first step involves formulating "a maxim capturing your reason for an action".[4] In the second step, one "frame[s] it as a universal principle for all rational agents".[4] The third step is assessing "whether a world based on this universal principle is conceivable".[4] If it is, then the fourth step is asking oneself "whether [one] would will the maxim to be a principle in this world".[4] In essence, an action is moral if the maxim by which it is justified is one that could be universalized. For instance, when deciding whether to lie to someone for one's own advantage, one is meant to imagine what the world would be like if everyone always lied, and successfully so. In such a world there would be no purpose in lying, for everybody would expect deceit, rendering the universal maxim of lying whenever it is to one's advantage absurd. Thus, Kant argues that one should not lie under any circumstance. Another example concerns the morality of suicide: imagine if everyone committed suicide; since universal suicide would not be something one could will as good, the act of suicide is immoral. Kant's moral framework, however, operates under the overarching maxim that one should treat each person as an end in themselves, never merely as a means to an end. This overarching maxim must be considered when applying the four aforementioned steps.[4] The procedure is outlined schematically in the sketch below.
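Kant's procedure lends itself to a schematic rendering. The following Python sketch is purely illustrative (the function and its boolean parameters are hypothetical stand-ins for judgments the reasoner must supply; they are not part of Kant's text): it encodes the four steps, plus the overarching humanity constraint, as a conjunction of tests.

```python
# A schematic, illustrative rendering of the four-step test described above.
# The boolean parameters are hypothetical stand-ins for judgments the
# reasoner must make; they are not part of Kant's text.

def passes_categorical_imperative(maxim: str,
                                  world_is_conceivable: bool,
                                  would_will_as_law: bool,
                                  treats_persons_as_ends: bool) -> bool:
    """Return True if an action justified by `maxim` passes the test."""
    # Step 1 is formulating the maxim (the `maxim` argument itself).
    # Step 2: frame it as a universal principle for all rational agents.
    universal_principle = f"All rational agents may act thus: {maxim}"
    print("Testing:", universal_principle)

    # Step 3: is a world based on this universal principle conceivable?
    # Step 4: would one will the maxim to be a principle in that world?
    # Overarching constraint: persons are treated as ends, never merely
    # as means.
    return (world_is_conceivable
            and would_will_as_law
            and treats_persons_as_ends)

# The lying example from the text: a world of universal, successful lying
# is not conceivable, since lying presupposes that others expect the truth.
# The maxim therefore fails at step 3 and the action is impermissible.
print(passes_categorical_imperative(
    "lie whenever it is to my advantage",
    world_is_conceivable=False,
    would_will_as_law=False,
    treats_persons_as_ends=False))  # -> False
```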

Reasoning based on analogy is one form of moral reasoning. In this form, the moral verdict on one situation can be applied to another if the two situations are relevantly similar: similar enough that the same moral reasoning applies. A similar type of reasoning is used in common law when arguing from legal precedent.[a]

In consequentialism (often contrasted with deontology), actions are judged as right or wrong based on the consequences of the action rather than on a property intrinsic to the action itself.

In developmental psychology


Moral reasoning first attracted broad attention from developmental psychologists in the mid-to-late 20th century. Their main theoretical work involved elucidating the stages through which the capacity for moral reasoning develops.

Jean Piaget


Jean Piaget described two phases of moral development, one common among children and the other common among adults. The first is known as the Heteronomous Phase.[7] This phase, more common among children, is characterized by the idea that rules come from authority figures in one's life, such as parents, teachers, and God.[7] It also involves the idea that rules are permanent, no matter what.[7] Thirdly, this phase includes the belief that "naughty" behavior must always be punished and that the punishment will be proportional.[7]

The second phase in Piaget's theory of moral development is referred to as the Autonomous Phase. This phase is more common after one has matured and is no longer a child. In this phase people begin to view the intentions behind actions as more important than their consequences.[7] For instance, if a driver swerves to avoid hitting a dog and in doing so knocks over a road sign, adults are likely to be less angry with the driver than if he or she had done it on purpose, just for fun. Even though the outcome is the same, people are more forgiving because of the good intention of saving the dog. This phase also includes the idea that people have different morals and that morality is not necessarily universal.[7] People in the Autonomous Phase also believe rules may be broken under certain circumstances.[7] For instance, Rosa Parks broke the law by refusing to give up her seat on a bus, an act that many people nonetheless consider moral. In this phase people also stop believing in the idea of immanent justice.[7]

Lawrence Kohlberg


Inspired by Piaget, Lawrence Kohlberg made significant contributions to the field of moral reasoning by creating a theory of moral development.[8] His theory is a "widely accepted theory that provides the basis for empirical evidence on the influence of human decision making on ethical behavior."[9] In Kohlberg's view, moral development consists of the growth of less egocentric and more impartial modes of reasoning about more complicated matters. He believed that the objective of moral education is to encourage children to grow from one stage to the next, and he emphasized the presentation of moral dilemmas, along with opportunities to cooperate in resolving them, as critical tools for doing so.[10] According to his theory, people pass through three main levels of moral development as they grow from early childhood to adulthood: pre-conventional morality, conventional morality, and post-conventional morality.[8] Each of these is subdivided into two stages.[8]

The first stage, in the pre-conventional level, is obedience and punishment. In this stage people, usually young children aged about 5 to 7, avoid certain behaviors only for fear of punishment, not because they see them as wrong; rules are regarded as mandatory, and harm is something to be avoided.[8] The second stage in the pre-conventional level is called individualism and exchange: in this stage, usually around ages 8 to 10, people make moral decisions based on what best serves their needs, and they have some understanding that rules can be arbitrary and are not applied consistently.[8]

The third stage, part of the conventional level, is called interpersonal relationships. In this stage, typical of ages 10 to 12, children are concerned with living up to expectations and with reciprocity; their actions are largely motivated by their parents' praise or reactions. One tries to conform to what is considered moral by the society in which one lives, attempting to be seen by peers as a good person.[8] The fourth stage, also in the conventional level, is called maintaining social order. Children in this stage, typically between ages 12 and 14, come to see conventions as arbitrary social expectations and to base moral decisions on fairness rather than on rules alone. This stage focuses on a view of society as a whole and on following its laws and rules.[8]

The fifth stage, part of the post-conventional level, is called social contract and individual rights. It typically emerges around ages 17 to 20, and relatively few people reach it. People in this stage regard morality as relative to systems of laws and do not consider any one system necessarily superior; they begin to consider differing ideas about morality held by others and feel that rules and laws should be agreed upon by the members of a society.[8] The sixth and final stage, the second in the post-conventional level, is called universal principles. Usually reached at age 21 or later, it involves the view that what is moral consists of values and rights that exist prior to social attachments and contracts. At this stage people develop their own ideas of universal moral principles and consider them the right thing to do regardless of what the laws of a society say.[8]

James Rest


In 1983, James Rest developed the four-component model of morality, which addresses the ways in which moral motivation and behavior occur.[11] The first component is moral sensitivity, which is "the ability to see an ethical dilemma, including how our actions will affect others".[12] The second is moral judgment, which is "the ability to reason correctly about what 'ought' to be done in a specific situation".[12] The third is moral motivation, which is "a personal commitment to moral action, accepting responsibility for the outcome".[12] The fourth and final component is moral character, a "courageous persistence in spite of fatigue or temptations to take the easy way out".[12]

In social cognition


Based on empirical results from behavioral and neuroscientific studies, social and cognitive psychologists attempted to develop a more accurate descriptive (rather than normative) theory of moral reasoning. That is, the emphasis of this research was on how real-world individuals make moral judgments, inferences, decisions, and actions, rather than on what should be considered moral.

Dual-process theory and social intuitionism


Developmental theories of moral reasoning were critiqued for prioritizing the maturation of the cognitive aspect of moral reasoning.[13] From Kohlberg's perspective, people are considered more advanced in moral reasoning as they become more efficient in using deductive reasoning and abstract moral principles to make moral judgments about particular instances.[13][14] For instance, an advanced reasoner may reason syllogistically from the Kantian principle "treat individuals as ends and never merely as means" and a situation in which kidnappers are demanding a ransom for a hostage, to conclude that the kidnappers have violated a moral principle and should be condemned. In this process, reasoners are assumed to be rational and to have conscious control over how they arrive at judgments and decisions.[13]

In contrast with this view, however, Joshua Greene and colleagues argued that laypeople's moral judgments are significantly influenced, if not shaped, by intuition and emotion rather than by the rational application of rules. In their fMRI studies in the early 2000s,[15][16] participants were shown three types of decision scenarios: moral dilemmas that elicited emotional reactions (moral-personal condition), moral dilemmas that did not (moral-impersonal condition), and scenarios with no moral content (non-moral condition). Brain regions such as the posterior cingulate gyrus and the angular gyrus, whose activation is known to correlate with the experience of emotion, were activated in the moral-personal condition but not in the moral-impersonal condition. Meanwhile, regions known to correlate with working memory, including the right middle frontal gyrus and the bilateral parietal lobe, were less active in the moral-personal condition than in the moral-impersonal condition. Moreover, participants' neural activity in response to moral-impersonal scenarios was similar to their activity in response to non-moral decision scenarios.

Another study[14] used variants of the trolley problem that differed in the "personal/impersonal" dimension and surveyed people's permissibility judgments (Scenarios 1 and 2). Across scenarios, participants were presented with the option of sacrificing one person to save five. However, depending on the scenario, the sacrifice involved pushing a person off a footbridge to block the trolley (footbridge dilemma; personal) or simply throwing a switch to redirect the trolley (trolley dilemma; impersonal). The proportions of participants who judged the sacrifice permissible differed drastically: 11% (footbridge dilemma) versus 89% (trolley dilemma). The difference was attributed to the emotional reaction evoked by having to apply personal force to the victim, as opposed to simply throwing a switch without physical contact. Among participants who judged the sacrifice in the trolley dilemma permissible but the sacrifice in the footbridge dilemma impermissible, the majority failed to provide a plausible justification for their differing judgments.[14] Several philosophers have written critical responses to Joshua Greene and colleagues on this matter.[17][18][19][20]

Based on these results, social psychologists proposed the dual process theory of morality. They suggested that emotional intuition and deliberate reasoning are not only qualitatively distinct, but also compete in producing moral judgments and decisions. When an emotionally salient moral judgment is made, an automatic, unconscious, and immediate response is produced first by intuition. More careful, deliberate, and formal reasoning then follows, producing a response that is either consistent or inconsistent with the earlier intuitive response,[13][5][21] paralleling the more general dual process theory of thinking. In contrast with the earlier rational view of moral reasoning, however, the dominance of the emotional process over the rational process was proposed.[5][21] Haidt highlighted the aspects of morality not directly accessible by conscious search in memory, weighing of evidence, or inference. He describes moral judgment as akin to aesthetic judgment, in which an instant approval or disapproval of an event or object is produced upon perception.[5] Hence, once produced, the immediate intuitive response to a situation or person cannot easily be overridden by the rational consideration that follows. The theory holds that in many cases people resolve inconsistencies between the intuitive and rational processes by using the latter for post-hoc justification of the former. Haidt, using the metaphor of "the emotional dog and its rational tail",[5] applied this account of reasoning to contexts ranging from person perception to politics.

A notable illustration of the influence of intuition involves the feeling of disgust. According to Haidt's moral foundations theory, political liberals rely on two dimensions of evaluation (harm/care and fairness/reciprocity) to make moral judgments, whereas conservatives utilize three additional dimensions (ingroup/loyalty, authority/respect, and purity/sanctity).[21][22] Among these, studies have revealed a link between moral evaluations based on the purity/sanctity dimension and the reasoner's experience of disgust. That is, people with higher sensitivity to disgust were more likely to be conservative on political issues such as gay marriage and abortion.[23] Moreover, when researchers reminded participants to keep the lab clean and to wash their hands with antiseptics (thereby priming the purity/sanctity dimension), participants' attitudes were more conservative than in the control condition.[24] Helzer and Pizarro's findings, however, were later challenged by two failed replication attempts.[25]

Other studies have criticized Haidt's interpretation of his data.[26][27] Augusto Blasi also rebuts Jonathan Haidt's theories of moral intuition and reasoning. He agrees with Haidt that moral intuition plays a significant role in the way humans operate, but suggests that people use moral reasoning more than Haidt and other cognitive scientists claim. Blasi advocates moral reasoning and reflection as the foundation of moral functioning; reasoning and reflection, on his view, play a key role in the growth of an individual and the progress of societies.[28]

Alternatives to these dual-process/intuitionist models have been proposed, with several theorists proposing that moral judgment and moral reasoning involve domain-general cognitive processes, e.g., mental models,[29] social learning,[30][31][32] or categorization processes.[33]

Motivated reasoning


A theorization of moral reasoning similar to dual-process theory was put forward with an emphasis on our motivation to arrive at certain conclusions.[34] Ditto and colleagues[35] likened moral reasoners in everyday situations to lay attorneys rather than lay judges: people do not reason from the assessment of individual evidence to a moral conclusion (bottom-up), but from a preferred moral conclusion to the assessment of evidence (top-down). The former resembles the thought process of a judge who is motivated to be accurate, unbiased, and impartial in her decisions; the latter resembles that of an attorney whose goal is to win a dispute using partial and selective arguments.[21][35]

Kunda proposed motivated reasoning as a general framework for understanding human reasoning.[34] She emphasized the broad influence of physiological arousal, affect, and preference (which constitute the essence of motivation and cherished beliefs) on general cognitive processes, including memory search and belief construction. Importantly, biases in memory search, hypothesis formation, and evaluation result in confirmation bias, making it difficult for reasoners to critically assess their beliefs and conclusions. When such biases go unchecked, confirmation bias can become the driving force of reasoning; actors ranging from media outlets and governments to extremist groups and cults may exploit this by selectively presenting information and stripping out context, so that a preferred narrative stands in place of real evidence and appears reasonable, steering the judgments of individuals, groups, or entire populations away from an honest, logical assessment.[34]

Applied to the moral domain, our strong motivation to favor people we like leads us to recollect beliefs and interpret facts in ways that favor them. In Alicke (1992, Study 1),[36] participants made responsibility judgments about an agent who drove over the speed limit and caused an accident. When the motive for speeding was described as moral (hiding a gift for his parents' anniversary), participants assigned less responsibility to the agent than when the motive was immoral (hiding a vial of cocaine). Even though the causal attribution of the accident may technically fall under the domain of objective, factual understanding of the event, it was nevertheless significantly affected by the perceived intention of the agent (which was presumed to have determined the participants' motivation to praise or blame him).

Another paper, by Simon, Stenstrom, and Read (2015, Studies 3 and 4),[37] used a more comprehensive paradigm that measures various aspects of participants' interpretation of a moral event, including factual inferences, emotional attitudes toward agents, and motivations toward the outcome of the decision. Participants read about a case of purported academic misconduct and were asked to role-play as a judicial officer who must provide a verdict. A student named Debbie had been accused of cheating in an exam, but the overall situation of the incident was kept ambiguous to allow participants to reason in a desired direction. The researchers then attempted to manipulate participants' motivation to support either the university (concluding that she cheated) or Debbie (that she did not cheat). In one condition, the scenario stressed that, through previous incidents of cheating, the efforts of honest students had gone unrewarded and the reputation of the university had suffered (Study 4, Pro-University condition); in another, the scenario stated that Debbie's brother had died in a tragic accident a few months earlier, eliciting participants' motivation to support and sympathize with Debbie (Study 3, Pro-Debbie condition). Behavioral and computer-simulation results showed an overall shift in reasoning (factual inference, emotional attitude, and moral decision) depending on the manipulated motivation. That is, when the motivation to favor the university or Debbie was elicited, participants' holistic understanding and interpretation of the incident shifted in a way that favored that side. In these reasoning processes, situational ambiguity was shown to be critical for reasoners to arrive at their preferred conclusions.[34][37][38]

From a broader perspective, Holyoak and Powell interpreted motivated reasoning in the moral domain as a special pattern of reasoning predicted by the coherence-based reasoning framework.[39] This general framework of cognition, initially theorized by the philosopher Paul Thagard, argues that many complex, higher-order cognitive functions are made possible by computing the coherence between (or satisfying the constraints among) psychological representations such as concepts, beliefs, and emotions.[40] The coherence-based reasoning framework draws symmetrical links between consistent (things that co-occur) and inconsistent (things that do not co-occur) psychological representations and uses them as constraints, thereby providing a natural way to represent conflicts between irreconcilable motivations, observations, behaviors, beliefs, and attitudes, as well as moral obligations.[37][39] Importantly, Thagard's framework was highly comprehensive in that it provided a computational basis for modeling reasoning processes using moral and non-moral facts and beliefs, as well as variables related to both "hot" and "cold" cognitions.[39][40][41] A minimal constraint-satisfaction sketch in this spirit follows.
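The following Python sketch illustrates the constraint-satisfaction idea in miniature, loosely inspired by Thagard-style connectionist models and the Debbie scenario above. The element names, weights, and update rule are illustrative assumptions, not the parameters of any published model.

```python
# A minimal constraint-satisfaction sketch of coherence-based reasoning.
# Consistent elements share excitatory (positive) links; inconsistent
# elements share inhibitory (negative) links. Activations are updated
# until the network settles into a coherent interpretation.

elements = ["debbie_cheated", "debbie_innocent", "sympathy_for_debbie"]

weights = {  # symmetric constraints between pairs of elements
    ("debbie_cheated", "debbie_innocent"): -1.0,      # mutually inconsistent
    ("debbie_innocent", "sympathy_for_debbie"): 0.5,  # motivated link
}

activation = {e: 0.01 for e in elements}
activation["sympathy_for_debbie"] = 1.0  # clamped motivational element

def weight(a: str, b: str) -> float:
    return weights.get((a, b)) or weights.get((b, a)) or 0.0

for _ in range(100):  # settle by repeated local updating
    for e in ("debbie_cheated", "debbie_innocent"):  # keep sympathy clamped
        net = sum(weight(e, other) * activation[other]
                  for other in elements if other != e)
        # Bounded update toward the net input from the constraints.
        activation[e] = max(-1.0, min(1.0, activation[e] + 0.1 * net))

print(activation)
# The two inconsistent interpretations settle on opposite sides, with the
# clamped motivation pulling "debbie_innocent" toward acceptance.
```

In the settled state the motivational element biases which of the two inconsistent interpretations wins, mirroring the motivated shift in holistic interpretation reported by Simon and colleagues.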

Causality and intentionality


Classical theories of social perception were offered by psychologists including Fritz Heider (the model of intentional action)[42] and Harold Kelley (attribution theory).[43] These theories highlighted how laypeople understand another person's action based on their causal knowledge of the internal (intention and ability of the actor) and external (environment) factors surrounding that action. That is, people assume a causal relationship between an actor's disposition or mental states (personality, intention, desire, belief, ability; internal causes), the environment (external causes), and the resulting action (the effect). In later studies, psychologists discovered that moral judgment of an action or actor is critically linked with this causal understanding and with knowledge about the actor's mental states.

Bertram Malle and Joshua Knobe conducted survey studies to investigate laypeople's understanding and use of the word "intentionality" and its relation to action (the folk concept of intentionality).[44] Their data suggested that people think of the intentionality of an action in terms of several psychological constituents: desire for the outcome, belief about the expected outcome, intention to act (a combination of desire and belief), skill to bring about the outcome, and awareness of the action while performing it. Consistent with this view, as well as with our moral intuitions, studies have found significant effects of the agent's intention, desire, and beliefs on various types of moral judgments. Using factorial designs to manipulate the content of scenarios, Cushman showed that the agent's belief and desire regarding a harmful action significantly influenced judgments of wrongness, permissibility, punishment, and blame. Whether the action actually brought about the negative consequence, however, affected only blame and punishment judgments, not wrongness and permissibility judgments.[45][46] Another study also provided neuroscientific evidence for the interplay between theory of mind and moral judgment.[47]

Through another set of studies, Knobe showed a significant effect in the opposite direction: intentionality judgments are significantly affected by the reasoner's moral evaluation of the actor and the action.[48][49] In one of his scenarios, the CEO of a corporation hears about a new program designed to increase profit. The program is also expected to benefit or harm the environment as a side effect, to which the CEO responds by saying "I don't care". The side effect was judged as intentional by the majority of participants in the harm condition, but the response pattern was reversed in the benefit condition.

Many studies of moral reasoning have used fictitious scenarios involving anonymous strangers (e.g., the trolley problem) so that external factors irrelevant to the researcher's hypothesis can be ruled out. However, criticisms have been raised about the external validity of experiments in which the reasoners (participants) and the agent (the target of judgment) are not associated in any way.[50][51] In contrast with the previous emphasis on the evaluation of acts, Pizarro and Tannenbaum stressed our inherent motivation to evaluate the moral character of agents (e.g., whether an actor is good or bad), citing Aristotelian virtue ethics. On their view, learning the moral character of the agents around us must have been a primary concern for primates and humans from the early stages of their evolution, because the ability to decide whom to cooperate with in a group was crucial to survival.[50][52] Furthermore, observed acts are no longer interpreted separately from their context, as reasoners are viewed as simultaneously engaging in two tasks: evaluating (inferring) the moral character of the agent and evaluating the agent's moral act. This person-centered approach to moral judgment seems consistent with results from some previous studies that involved implicit character judgment. For instance, in Alicke's (1992)[36] study, participants may have immediately judged the moral character of the driver who sped home to hide cocaine as negative, and that inference may have led them to assess the causality surrounding the incident accordingly (e.g., a person as immoral as he is could well have been speeding).[52]

In order to account for laypeople's understanding and use of causal relations between psychological variables, Sloman, Fernbach, and Ewing proposed a causal model of intentionality judgment based on a Bayesian network.[53] Their model formally postulates that the agent's character is a cause of the agent's desire for the outcome and of the belief that the action will result in the consequence; that desire and belief are causes of the intention toward the action; and that the agent's action is caused by both that intention and the skill to produce the consequence. Combining computational modeling with ideas from theory-of-mind research, this model can provide predictions for inferences in the bottom-up direction (from action to intentionality, desire, and character) as well as in the top-down direction (from character, desire, and intentionality to action), as the sketch below illustrates.
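A toy version of that causal graph can be written down directly. In the Python sketch below, all probabilities are invented for illustration (they are not the authors' parameters); the enumeration computes one bottom-up inference, the probability that the agent acted intentionally given that the action occurred.

```python
# A toy version of the causal graph described above:
#   character -> desire, belief;  desire, belief -> intention;
#   intention, skill -> action.
# Probabilities are illustrative assumptions only.

import itertools

P_bad_character = 0.5                           # prior on (toy) bad character
P_desire_given_char = {True: 0.8, False: 0.1}   # P(desire | character)
P_belief_given_char = {True: 0.7, False: 0.4}   # P(belief | character)
P_skill = 0.8                                   # marginal probability of skill

def p_intention(i: bool, desire: bool, belief: bool) -> float:
    p = 0.9 if (desire and belief) else 0.05    # intention needs both parents
    return p if i else 1.0 - p

def p_action(a: bool, intention: bool, skill: bool) -> float:
    p = 0.95 if (intention and skill) else 0.02
    return p if a else 1.0 - p

# Enumerate the joint distribution and condition on action = True,
# a bottom-up inference from observed action back to intentionality.
numerator = denominator = 0.0
for c, d, b, i, s in itertools.product([True, False], repeat=5):
    joint = ((P_bad_character if c else 1.0 - P_bad_character)
             * (P_desire_given_char[c] if d else 1.0 - P_desire_given_char[c])
             * (P_belief_given_char[c] if b else 1.0 - P_belief_given_char[c])
             * p_intention(i, d, b)
             * (P_skill if s else 1.0 - P_skill)
             * p_action(True, i, s))
    denominator += joint
    if i:
        numerator += joint

print(f"P(intention | action) = {numerator / denominator:.2f}")
```

The same joint distribution supports top-down queries (e.g., the probability of action given bad character) simply by changing which variables are conditioned on and which are summed over.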

Gender difference


At one time psychologists believed that men and women differ in moral values and reasoning. This was based on the idea that men and women often think differently and would react to moral dilemmas in different ways. Some researchers hypothesized that women would favor care reasoning, considering issues of need and sacrifice, while men would be more inclined toward fairness and rights, known as justice reasoning.[54] However, some researchers also noted that men and women simply face different moral dilemmas on a day-to-day basis, which might explain the perceived difference in their moral reasoning.[54] With these two ideas in mind, researchers based their experiments on moral dilemmas that both men and women face regularly. To reduce situational differences and discern how both genders use reason in their moral judgments, they ran the tests on parenting situations, since both genders can be involved in child rearing.[54] The research showed that women and men use the same form of moral reasoning and that the only difference lies in the moral dilemmas they encounter day to day.[54] When faced with moral decisions common to both, men and women often chose the same solution as the moral choice. This research suggests that a gender division in morality does not actually exist and that reasoning between genders is the same in moral decisions.

Notes

  1. ^ See, for example, the section "Reasoning by analogy" in Morrow (2017).[6]

References

  1. ^ Raine, A. & Yang, Y. (2006). Neural foundations of moral reasoning and antisocial behavior. Social Cognitive and Affective Neuroscience, 1(3), 203-213. doi:10.1093/scan/nsl033
  2. ^ Victor v. Nebraska (92-8894), 511 U.S. 1 (1994), from the syllabus, holding (c) and throughout, available in the Cornell Law School Supreme Court Collection
  3. ^ Sachdeva, S., Singh, P., & Medin, D. (2011). Culture and the quest for universal principles in moral reasoning. International Journal of Psychology, 46(3), 161-176. doi:10.1080/00207863.2011.568486
  4. ^ Bucciarelli, Monica; Khemlani, Sangeet; Johnson-Laird, P. N. (February 2008). "The psychology of moral reasoning" (PDF). Judgment and Decision Making. 3 (2): 121–139. doi:10.1017/S1930297500001479. Retrieved 20 July 2011.
  5. ^ Haidt, Jonathan (2001). "The emotional dog and its rational tail: A social intuitionist approach to moral judgment". Psychological Review. 108 (4): 814–834. doi:10.1037/0033-295x.108.4.814. PMID 11699120.
  6. ^ David R. Morrow (June 2017). Moral Reasoning: A Text and Reader on Ethics and Contemporary Moral Issues. Oxford University Press. ISBN 978-0-19-023585-7.
  7. ^ Walsh, Keiron. "Piaget's Theory of Moral Development". Development of Moral Understanding. Retrieved 11 October 2014.
  8. ^ Cherry, Kendra. "Kohlberg's Theory of Moral Development". about.com psychology. Retrieved 20 July 2011.
  9. ^ Tsui, Judy; Windsor, Carolyn (May 2001). "Some Cross-Cultural Evidence on Ethical Reasoning". Journal of Business Ethics. 31 (2): 143–150. doi:10.1023/A:1010727320265.
  10. ^ Musschenga, Albert W. (2009). "Moral Intuitions, Moral Expertise and Moral Reasoning". Journal of Philosophy of Education. 43 (4): 597–613. doi:10.1111/j.1467-9752.2009.00707.x. Retrieved 19 December 2012.
  11. ^ Lincoln, S. H. & Holmes, E. K. (2011). Ethical decision making: a process influenced by moral intensity. Journal of Healthcare, Science and the Humanities, 1(1), 55-69. doi:10.1111/j.1559-2011.tb02661.x
  12. ^ Lynn E. Swaner, "Ethical and Moral Reasoning," Educating for Personal and Social Responsibility, Position Paper, American Council of Colleges and Universities, September 13, 2004 (pdf), citing James Rest, "Morality," in Cognitive Development, ed. John H. Flavell and Ellen M. Markman, Handbook of Child Psychology volume 3, 4th ed. New York: Wiley, 1983, ISBN 978-0-471-09064-9, pp. 556–629.
  13. ^ Cushman, Fiery; Young, Liane; Greene, Joshua D. (2010). "Multi-system Moral Psychology". The Moral Psychology Handbook. Oxford University Press. pp. 47–71. doi:10.1093/acprof:oso/9780199582143.003.0003. ISBN 978-0-19-958214-3.
  14. ^ Hauser, Marc; Cushman, Fiery; Young, Liane; Kang-Xing Jin, R.; Mikhail, John (2007). "A Dissociation Between Moral Judgments and Justifications". Mind & Language. 22 (1): 1–21. doi:10.1111/j.1468-0017.2006.00297.x.
  15. ^ Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105-2108.
  16. ^ Greene, Joshua; Haidt, Jonathan (2002). "How (and where) does moral judgment work?". Trends in Cognitive Sciences. 6 (12): 517–523. doi:10.1016/s1364-6613(02)02011-9. PMID 12475712.
  17. ^ Lott, Micah (October 2016). "Moral Implications from Cognitive (Neuro)Science? No Clear Route". Ethics. 127 (1): 241–256. doi:10.1086/687337.
  18. ^ Königs, Peter (April 3, 2018). "Two types of debunking arguments". Philosophical Psychology. 31 (3): 383–402. doi:10.1080/09515089.2018.1426100.
  19. ^ Meyers, C. D. (May 19, 2015). "Brains, trolleys, and intuitions: Defending deontology from the Greene/Singer argument". Philosophical Psychology. 28 (4): 466–486. doi:10.1080/09515089.2013.849381.
  20. ^ Kahane, Guy (2012). "On the Wrong Track: Process and Content in Moral Psychology". Mind & Language. 27 (5): 519–545. doi:10.1111/mila.12001. PMC 3546390. PMID 23335831.
  21. ^ Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. New York, NY: Pantheon.
  22. ^ Graham, Jesse; Haidt, Jonathan; Nosek, Brian A. (2009). "Liberals and conservatives rely on different sets of moral foundations". Journal of Personality and Social Psychology. 96 (5): 1029–1046. doi:10.1037/a0015141. PMID 19379034.
  23. ^ Inbar, Yoel; Pizarro, David A.; Bloom, Paul (2009). "Conservatives are more easily disgusted than liberals". Cognition & Emotion. 23 (4): 714–725. doi:10.1080/02699930802110007.
  24. ^ Helzer, Erik G.; Pizarro, David A. (2011). "Dirty Liberals! Reminders of Physical Cleanliness Influence Moral and Political Attitudes". Psychological Science. 22 (4): 517–522. doi:10.1177/0956797611402514. PMID 21421934.
  25. ^ Burnham, B. R. (2020). "Are liberals really dirty? Two failures to replicate Helzer and Pizarro's (2011) study 1, with meta-analysis". Journal of Personality and Social Psychology. 119 (6): e38–e42. doi:10.1037/pspa0000238. PMID 32551744.
  26. ^ Guglielmo, Steve (January 2018). "Unfounded dumbfounding: How harm and purity undermine evidence for moral dumbfounding". Cognition. 170: 334–337. doi:10.1016/j.cognition.2017.08.002. PMID 28803616.
  27. ^ Royzman, Edward B.; Kim, Kwanwoo; Leeman, Robert F. (2015). "The curious tale of Julie and Mark: Unraveling the moral dumbfounding effect". Judgment and Decision Making. 10 (4): 296–313. doi:10.1017/S193029750000512X.
  28. ^ Blasi, Augusto (2009). "The Moral Functioning of Mature Adults and the Possibility of Fair Moral Reasoning". In Narvaez, Darcia; Lapsley, Daniel K. (eds.), Personality, Identity, and Character. Cambridge University Press. pp. 396–440. doi:10.1017/cbo9780511627125.019. ISBN 978-0-511-62712-5.
  29. ^ Bucciarelli, Monica; Khemlani, Sangeet; Johnson-Laird, Philip N. (2008). "The psychology of moral reasoning". Judgment and Decision Making. 3 (2): 121–139. doi:10.1017/S1930297500001479.
  30. ^ Railton, Peter (2017). "Moral Learning: Conceptual foundations and normative relevance". Cognition. 167: 172–190. doi:10.1016/j.cognition.2016.08.015. PMID 27601269.
  31. ^ Magid, Rachel W.; Schulz, Laura E. (October 2017). "Moral alchemy: How love changes norms". Cognition. 167: 135–150. doi:10.1016/j.cognition.2017.03.003. PMID 28404207.
  32. ^ Kleiman-Weiner, Max; Saxe, Rebecca; Tenenbaum, Joshua B. (October 2017). "Learning a commonsense moral theory". Cognition. 167: 107–123. doi:10.1016/j.cognition.2017.03.005. PMID 28351662.
  33. ^ McHugh, Cillian; McGann, Marek; Igou, Eric Raymond; Kinsella, Elaine Louise (2021). "Moral Judgment as Categorization (MJAC)". Perspectives on Psychological Science. 17 (1): 131–152. doi:10.1177/1745691621990636. PMC 8785282. PMID 34264152.
  34. ^ Kunda, Ziva (1990). "The case for motivated reasoning". Psychological Bulletin. 108 (3): 480–498. doi:10.1037/0033-2909.108.3.480. PMID 2270237.
  35. ^ Ditto, P. H., Pizarro, D. A., & Tannenbaum, D. (2009). Motivated moral reasoning. In B. H. Ross (Series Ed.) & D. M. Bartels, C. W. Bauman, L. J. Skitka, & D. L. Medin (Eds.), Psychology of learning and motivation, Vol. 50: Moral judgment and decision making (pp. 307-338). San Diego, CA: Academic Press.
  36. ^ Alicke, Mark D. (1992). "Culpable causation". Journal of Personality and Social Psychology. 63 (3): 368–378. doi:10.1037/0022-3514.63.3.368.
  37. ^ Simon, Dan; Stenstrom, Douglas M.; Read, Stephen J. (2015). "The coherence effect: Blending cold and hot cognitions". Journal of Personality and Social Psychology. 109 (3): 369–394. doi:10.1037/pspa0000029. PMID 26167800.
  38. ^ Holyoak, Keith J.; Simon, Dan (1999). "Bidirectional reasoning in decision making by constraint satisfaction". Journal of Experimental Psychology: General. 128 (1): 3–31. doi:10.1037/0096-3445.128.1.3.
  39. ^ Holyoak, Keith J.; Powell, Derek (2016). "Deontological coherence: A framework for commonsense moral reasoning". Psychological Bulletin. 142 (11): 1179–1203. doi:10.1037/bul0000075. PMID 27709981.
  40. ^ Thagard, Paul (1989). "Explanatory coherence". Behavioral and Brain Sciences. 12 (3): 435–467. doi:10.1017/s0140525x00057046.
  41. ^ Thagard, Paul (2006). Hot Thought: Mechanisms and Applications of Emotional Cognition. Cambridge, MA: MIT Press.
  42. ^ Heider, F. (1958). The psychology of interpersonal relations. New York: Wiley.
  43. ^ Kelley, Harold H. (1973). "The processes of causal attribution". American Psychologist. 28 (2): 107–128. doi:10.1037/h0034225.
  44. ^ Malle, Bertram F.; Knobe, Joshua (1997). "The Folk Concept of Intentionality". Journal of Experimental Social Psychology. 33 (2): 101–121. doi:10.1006/jesp.1996.1314.
  45. ^ Cushman, Fiery (2008). "Crime and punishment: Distinguishing the roles of causal and intentional analyses in moral judgment". Cognition. 108 (2): 353–380. doi:10.1016/j.cognition.2008.03.006. PMID 18439575.
  46. ^ Malle, Bertram F.; Guglielmo, Steve; Monroe, Andrew E. (2014). "A Theory of Blame". Psychological Inquiry. 25 (2): 147–186. doi:10.1080/1047840x.2014.877340.
  47. ^ Young, L.; Cushman, F.; Hauser, M.; Saxe, R. (2007). "The neural basis of the interaction between theory of mind and moral judgment". Proceedings of the National Academy of Sciences. 104 (20): 8235–8240. doi:10.1073/pnas.0701408104. PMC 1895935. PMID 17485679.
  48. ^ Knobe, J. (2003). "Intentional action and side effects in ordinary language" (PDF). Analysis. 63 (3): 190–194. doi:10.1093/analys/63.3.190.
  49. ^ Knobe, Joshua (2003). "Intentional action in folk psychology: An experimental investigation" (PDF). Philosophical Psychology. 16 (2): 309–324. doi:10.1080/09515080307771.
  50. ^ Pizarro, D. A. & Tannenbaum, D. (2011). Bringing character back: How the motivation to evaluate character influences judgments of moral blame. In M. Mikulincer & P. Shaver (Eds.), The social psychology of morality: Exploring the causes of good and evil. APA Press.
  51. ^ Bloom, Paul (2011). "Family, community, trolley problems, and the crisis in moral psychology". The Yale Review. 99 (2): 26–43. doi:10.1111/j.1467-9736.2011.00701.x.
  52. ^ Uhlmann, Eric Luis; Pizarro, David A.; Diermeier, Daniel (2015). "A Person-Centered Approach to Moral Judgment". Perspectives on Psychological Science. 10 (1): 72–81. doi:10.1177/1745691614556679. PMID 25910382.
  53. ^ Sloman, Steven A.; Fernbach, Philip M.; Ewing, Scott (2012). "A Causal Model of Intentionality Judgment". Mind & Language. 27 (2): 154–180. doi:10.1111/j.1468-0017.2012.01439.x.
  54. ^ Clopton, Nancy A.; Sorell, Gwendolyn T. (March 1993). "Gender Differences in Moral Reasoning: Stable or Situational?". Psychology of Women Quarterly. 17 (1): 85–101. doi:10.1111/j.1471-6402.1993.tb00678.x.

Further reading
