PHILOSOPHY 2018-2019 JELLE DE BOER Lecture 1


SLIDE 1

PHILOSOPHY

2018-2019 JELLE DE BOER

Lecture 1

SLIDE 2

This lecture, today

■ Practical matters ■ Introduction ■ Values – Wellbeing, happiness ■ Subjectivism ■ Relativism

SLIDE 3

Grade components

■ Multiple Choice Exam: 60% ■ Duo Essay: 40% ■ Multiple Choice Exam is about the prescribed literature and the lecture slides. ■ Duo essay, 1000-1500 words: ethical reflection on a suitable subject from your bachelor thesis or on something else.

SLIDE 4

Literature

■ The Elements of Moral Philosophy, Rachels – book, to be bought ■ An Introduction to Decision Theory, Peterson – Canvas ■ Social Cost Benefit Analysis – Canvas ■ The Distribution of Responsibility, Van de Poel & Royakkers – Canvas ■ Values in Science, Staley – Canvas

SLIDE 5

Programme – 6 lectures

  • 1. Introduction, Values, Wellbeing; Subjectivism and Relativism
  • 2. Decision Theory and Game Theory
  • 3. Consequentialism and Utilitarianism
  • 4. Deontology; Social Contract Theory
  • 5. Applied Ethics: Social Cost-Benefit Analysis; Distribution of Responsibility; TBA
  • 6. Values in Science, Scientific Integrity
SLIDE 6

What is ethics?

■ Discipline in philosophy that studies morality: ethics = moral philosophy ■ Morality can be studied in different ways:

  • Descriptively (psychology, sociology, anthropology, history): how do people behave, what causes their behavior, what are the mechanisms?
  • Normatively: how ought people to behave, how to justify their behavior?
  • Meta-ethically: what kind of statements are moral statements, do they have truth values; what are moral properties?

SLIDE 7

Some moral issues: self-driving cars

  • What if an accident happens, and the only options are: steer to the left and kill one pedestrian, or steer to the right and crash, thereby killing the two people in the car?
  • What if a fatal accident happens because the car does not recognize some vehicle for what it is? Who is responsible? The car owner, the manufacturer, the designer?

SLIDE 8

Protest against online censorship

■ Europe: Tech platforms like Facebook, YouTube must restrict sharing of music, art, journalistic content, illegal downloading ■ Block illegal content by upload filters → That is censorship! → protest in the streets, March 23 ■ Axel Voss, German member of the European Parliament: “Google and Facebook spread disinformation and use young people as a mere means.” – What does Voss mean by this? And what exactly is so bad about this?

SLIDE 9

Machine learning

■ Predict criminal behavior; prevent terrorist attacks; hire the best workers; diagnose illnesses; legal analysis ■ Amplify biases against Black people, Muslims, women; enhance discrimination ■ What values are at stake? How to understand these values? How to weigh them?

SLIDE 10

Scientific conduct

■ How to use data? ■ Can you leave out certain data, e.g. outliers? ■ Must your study be replicable? ■ What to do if a senior colleague asks you to commit scientific fraud?

SLIDE 11

Values

How do these values relate to each other?

  • Monism: the different values reduce to one fundamental intrinsic value, e.g. happiness or wellbeing; the other values are only instrumental.
  • Pluralism: the different values are irreducible; they are all intrinsic.

Freedom, Knowledge, Friendship, Love, Beauty, Equality, Happiness/wellbeing, …

SLIDE 12

Focus on Wellbeing/happiness

■ E.g. how do social media affect people’s lives? ■ Somebody likes your message or photo → dopamine. Isn’t that nice? ■ Or is it harmful? In what sense? → To determine answers to these questions one must first have a concept of wellbeing/happiness

SLIDE 13

Theories of wellbeing/happiness

  • 1. Hedonism: wellbeing = sum of pleasure – pain
  • 2. Preference satisfaction: wellbeing = satisfaction of preferences
  • 3. Objective list: wellbeing = items on an objective list
SLIDE 14

Hedonism

■ Hedonism: wellbeing = sum of pleasure – pain → a feeling, a psychological state ■ Epicurus (341 BC – 270 BC), Bentham (1748 - 1832), Mill (1806 - 1873) ■ Source of wellbeing is irrelevant

SLIDE 15

Hedonism – objections

■ Wellbeing does not always come down to an inner feeling. E.g. when you look at a beautiful painting, or when you try to master something difficult. ■ The experience machine of Robert Nozick: do you go in? According to Nozick: Surely not! Since people want: a) to live a real life (compare Charles Bovary) b) to be a person (instead of a mere heap of organic matter) c) to do things (instead of merely experiencing them)

SLIDE 16

■ Wellbeing has distinct forms. Wellbeing [dancing] ≠ Wellbeing [writing a book] ≠ Wellbeing [being with friends] John Stuart Mill: some of these are better: it is better to be Socrates dissatisfied than a pig satisfied. Jeremy Bentham disagreed: pushpin (a simple game) is just as good as poetry.

SLIDE 17

■ Hedonic treadmill:

  • i. Habituation: the same stimuli provide less pleasure (recall the dopamine)
  • ii. Seeking stronger stimuli
SLIDE 18

■ Measurement problem i): how to measure one’s hedonic state? → Important drive for the development of the preference satisfaction approach. (But currently there are revivals: the happiness-indicator industry) ■ Measurement problem ii): how to compare hedonic states between people?

SLIDE 19

Preference satisfaction

Wellbeing = preference satisfaction. N.B.: do not interpret this hedonistically! Satisfying as in satisfying requirements. → modern theory: preference satisfaction = utility (decision theory, Lecture 2). How to determine this: ordinal scale, Von Neumann–Morgenstern interval scale

SLIDE 20

Criticism and discussion

■ Uninformed preferences. E.g. you take a medicine unaware of the side effects. You use Facebook unaware of its possibly addictive effect. ■ Adaptive preferences. Preferences that adapt to the circumstances.

  • Happy slaves.
  • Treadmill, as for hedonism.
  • Modification: only rational and informed preferences count (tends towards objective list).

SLIDE 21

■ Malevolent preferences. Should sadistic preferences count for someone’s wellbeing? ■ Experience matters. E.g. the stranger in the train. ■ Are all types of preference satisfaction on an equal footing? preference satisfaction [dancing] ≠ preference satisfaction [mathematical proof] ≠ preference satisfaction [playing tennis] ≠ preference satisfaction [collecting bottle caps] ■ Measurement problem: how to compare utility/wellbeing among people?

SLIDE 22

Objective list

■ Objective list of Basic Needs, e.g.: – food – drink – income – shelter – social relations ■ Objective list of things that make people flourish, e.g.: – education – culture – sport – freedom – have a voice – clubs

SLIDE 23

Objective list of Capabilities = what people can do – Sen, Nussbaum

■ Physical health ■ Bodily integrity ■ Making use of the senses ■ Imagination and thought ■ Expressing emotions ■ Practical reasoning ■ Social relations and self-respect ■ Living in nature and among animals ■ Laughing and playing ■ Political and material control over one’s environment

SLIDE 24

Illustrating the difference

  • Basic needs – income: everybody gets the same amount
  • Capabilities – making use of the senses: someone with bad eyes gets extra money to buy glasses

Objective lists, in general: ■ No intrapersonal or interpersonal measurement problem ■ Relatively easy to use for policy makers

SLIDE 25

Objections

■ Are the items on the list the correct items? ■ How to justify the items? – By saying that people want them? = preference satisfaction theory ■ The items do not constitute wellbeing, they are sources of wellbeing. ■ People have authority over their own wellbeing. ■ Not sensitive to differences between people.

SLIDE 26

Subjectivism

■ Moral statements are mere expressions of personal opinion or taste. ■ They do not convey matters of fact. ■ They do not have truth values: they cannot be true or false. In meta-ethics, the position that moral statements do not have truth values is more commonly known as: non-cognitivism. Early (and simple) version: emotivism

SLIDE 27

Emotivism

■ Moral statements are expressions of emotions: approval & disapproval – “This is morally good” = hooray! – “This is morally bad” = boo! ■ These statements do not have truth values. ■ Moral disagreement is a conflict of attitudes. ■ Explains why some disagreements run deep and are hard to reconcile. ■ Differences in moral judgements are explained by the variety of attitudes. ■ Morality motivates: a difficulty for cognitivists, not for emotivists

SLIDE 28

Emotivism - objections

  • 1. Moral reasoning between people is an exchange of arguments, not attitudes.

Moral reasoning does not look like a combination of expressions of emotions, e.g. – Murder is morally wrong – If murder is morally wrong, then euthanasia is morally wrong – Therefore, euthanasia is morally wrong Because, how to construct this in an emotivist way? – Boo! [murder] – Hooray! [boo! (murder) & boo! (euthanasia)] – Boo! [euthanasia] The “conclusion” does not necessarily follow. Does not reflect the logical structure. (Frege-Geach problem)

SLIDE 29
  • 2. How to distinguish moral statements from other evaluative statements, e.g. esthetic ones? In a non-circular way?

Modern non-cognitivism

■ More sophisticated – Norm expressivism (Allan Gibbard) – Quasi-realism (Simon Blackburn)

SLIDE 30

Relativism

■ Cultural relativism: different cultures vary in their systems of moral norms ■ Does it follow that there is no culture-independent universal morality? ■ No, not necessarily: – Perhaps there is, and somehow no culture has discovered this system of universal norms – Or the varying cultures and their systems of norms are somehow rooted in a more fundamental system of universal norms

SLIDE 31

Moral relativism

■ Variant of cognitivism. ■ Moral statements have truth values, they are true or false. ■ They are true or false relative to a specific culture.

SLIDE 32

Moral relativism - objections

■ Certain values and norms are common to all cultures. ■ No objective standpoint to criticize the morality of a specific culture, or to decide a moral discussion between members of different cultures. ■ Is the idea of moral progress still possible?

SLIDE 33

Normative relativism?

■ “each culture should have its own morality” ■ “one should be tolerant of different cultures”

  • These are universal claims.
  • And do not follow from moral relativism.

A moral relativist can also say that one should not be tolerant.

SLIDE 34

DECISION THEORY & GAME THEORY

Lecture 2

SLIDE 35

Decision theory - branches

■ Individual decision theory: studies decision making when actors are confronted with various ‘states of nature’. (sometimes ‘decision theory’ in a narrower sense) ■ Game theory: studies decision making when actors interact with each other. ■ Social choice theory: studies how to derive a collective decision from individual preferences. (not addressed in this course)

SLIDE 36

Rational actor

Mental states, two basic categories: ■ Beliefs: mind-to-world direction of fit → Mental content must mirror the world ■ Desires: world-to-mind direction of fit → World must mirror the mental content

SLIDE 37

Example: mental content

Belief [glass of beer]: representation of a glass of beer in the world. → Mind-to-world direction of fit. Desire [glass of beer]: bring about a change in the world (e.g. I ask the bartender for a glass of beer) so that the world comes to match this mental state. → World-to-mind direction of fit. Elizabeth Anscombe: a desire is like a shopping list, a belief is like an inventory list.

SLIDE 38

Actors have

desires beliefs  + rationality Formalised in decision theory: preferences over outcomes assign probabilities to outcomes  + these satisfy consistency requirements (axioms of the theory)

SLIDE 39

Descriptive - Normative

Decision making can be studied:

  • Descriptively: psychology, behavioral economics → study how people actually make choices, in the lab or in the field.
  • Normatively: decision theory → studies how people should make decisions.

SLIDE 40

■ Conception of rationality: means-ends rationality → Not about the ends or goals that a person sets himself (substantive rationality) → external to the analysis → But, given these goals, what would be the rational thing to do?

SLIDE 41

Formalize decision problem

1. Acts 2. States 3. Outcomes

  • Action: function (state) = outcome

Can be done in a matrix (or table), tree or vector.

SLIDE 42

What is the decision table?

You contemplate studying medicine or going to a dance academy. You reason that going to a dance academy may result in an exciting life, but only when the economy is not in a recession, because then budgets for culture will be cut and you will end up poor. Becoming a doctor in a growing economy gets you a good life, and under a recession it will still offer you a reasonably good life.
SLIDE 43

                 Recession         No recession
Dance academy    poor              exciting
Medicine         reasonably good   good

SLIDE 44

Decision making under ignorance

Various rules: ■ Dominance ■ Maximin  we will only look at this one – leximin ■ Maximax ■ Minimax regret ■ Insufficient reason ■ Optimism-pessimism

SLIDE 45

Maximin – avoid the worst case scenario

      S1    S2    S3    S4
A1     1    -3     5     6
A2     2     2     3     3
A3     4     6   -10     5

Row minima: A1: -3, A2: 2, A3: -10. 2 is the highest → select A2
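The maximin rule above can be sketched in a few lines of Python, using the table from this slide:

```python
# Maximin: for each act, find the worst-case outcome across the states,
# then pick the act whose worst case is best (values from the slide).
table = {
    "A1": [1, -3, 5, 6],    # outcomes under states S1..S4
    "A2": [2, 2, 3, 3],
    "A3": [4, 6, -10, 5],
}

worst = {act: min(outcomes) for act, outcomes in table.items()}
best_act = max(worst, key=worst.get)

print(worst)     # {'A1': -3, 'A2': 2, 'A3': -10}
print(best_act)  # A2
```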

SLIDE 46

Decision making under risk

Knowledge about the probabilities → standard rule: maximize expected utility = max Σ {prob × utility}. Can also be done with e.g. money (or time, or..), if utility is a linear function of that factor. (But for many people money has decreasing marginal utility)
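A minimal sketch of the expected-utility rule, applied to the dance academy / medicine example from slide 42; the probabilities and utility numbers are invented for illustration, not given on the slides:

```python
# Expected utility: EU(act) = sum over states of p(state) * u(outcome).
def expected_utility(probs, utils):
    return sum(p * u for p, u in zip(probs, utils))

probs = [0.3, 0.7]                # assumed P(recession), P(no recession)
acts = {
    "dance academy": [0, 10],     # assumed utilities: poor, exciting
    "medicine": [6, 8],           # assumed utilities: reasonably good, good
}

eu = {act: expected_utility(probs, utils) for act, utils in acts.items()}
best = max(eu, key=eu.get)
print(best)  # medicine
```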

SLIDE 47

Relation Income – Happiness, countries

SLIDE 48

Utility scales and axioms

Ordinal utility function & interval utility function

Ordinal scale: preferences must satisfy 3 axioms

  • asymmetry
  • completeness
  • transitivity

Interval scale: 2 extra axioms

  • independence
  • continuity
SLIDE 49

Von Neumann and Morgenstern interval scale

Construct a scale by taking two extremes – say, a top item and a lousy item – and compare the choice alternatives with lotteries over these extremes.

Example: first rank the alternatives: Porsche, Volkswagen, Skoda. Now choose a top item & a lousy item to construct the scale, e.g. Ferrari & Honda

SLIDE 50

Ask the actor what lottery over the Ferrari (F) and the Honda (H) would leave him/her indifferent to a Porsche / Volkswagen / Skoda for certain. A says: Porsche ~ 0.8 F, 0.2 H; Volkswagen ~ 0.5 F, 0.5 H; Skoda ~ 0.2 F, 0.8 H

SLIDE 51

Porsche ~ 0.8 F, 0.2 H; Volkswagen ~ 0.5 F, 0.5 H; Skoda ~ 0.2 F, 0.8 H. Assume U(Ferrari) = 100, U(Honda) = 0. Then U(Porsche) = 0.8 × 100 + 0.2 × 0 = 80; U(Volkswagen) = 0.5 × 100 + 0.5 × 0 = 50; U(Skoda) = 0.2 × 100 + 0.8 × 0 = 20
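The calculation above can be sketched in Python: each alternative's utility is the expected utility of the lottery over the two scale endpoints that leaves the actor indifferent.

```python
# VNM interval scale: each alternative is matched with an indifferent
# lottery over the top item (Ferrari, u = 100) and the lousy item (Honda, u = 0).
U_TOP, U_BOTTOM = 100, 0

# Indifference probabilities elicited from actor A (from the slide).
indifference_prob = {"Porsche": 0.8, "Volkswagen": 0.5, "Skoda": 0.2}

utility = {car: p * U_TOP + (1 - p) * U_BOTTOM
           for car, p in indifference_prob.items()}
print(utility)
```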

SLIDE 52

So when the preferences of an actor over a set of alternatives satisfy: asymmetry, completeness, transitivity, independence, and continuity, then one can derive a cardinal (interval) VNM utility function: then one can assign interval numbers to the alternatives.

SLIDE 53

Application VNM: Health Utilities

Policy makers in health care need a measure for the quality of health states from the perspective of patients.

  • For example for the QALY: Quality-Adjusted Life Year = life expectancy × quality of remaining years
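A small sketch of the QALY formula; the patient profile and quality weights are invented for illustration (in practice the weights would come from a VNM-style elicitation as above):

```python
# QALY sketch: quality-adjusted life years = sum over remaining life of
# years lived in a health state times that state's quality weight in [0, 1].
def qaly(years_with_quality):
    """years_with_quality: list of (years, quality weight) pairs."""
    return sum(years * q for years, q in years_with_quality)

# Hypothetical patient: 10 years at full health, then 8 years at weight 0.75.
print(qaly([(10, 1.0), (8, 0.75)]))  # 16.0
```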

SLIDE 54

Other method to measure this quality:

  • Rating scale, e.g. a visual analogue scale running from Death to 100% Healthy, on which Illness P and Illness Q are marked

Validity relatively weak:

  • Sensitive to end-of-the-scale bias (people tend to avoid the extremes of the scale)
  • Sensitive to spreading bias (people tend to spread outcomes equally over the scale)
SLIDE 55

Game theory

Analyses the interaction structure between individuals, and solution concepts. Instead of states of nature: other individuals

SLIDE 56

Prisoner’s Dilemma

             Cooperate   Defect
Cooperate    2, 2        0, 3
Defect       3, 0        1, 1
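The dilemma in this matrix can be checked mechanically: defecting strictly dominates cooperating for each player, yet mutual defection pays less than mutual cooperation.

```python
# Prisoner's Dilemma payoffs from the slide: (row player, column player).
payoff = {
    ("C", "C"): (2, 2), ("C", "D"): (0, 3),
    ("D", "C"): (3, 0), ("D", "D"): (1, 1),
}

# Defect strictly dominates cooperate for the row player...
for other in ("C", "D"):
    assert payoff[("D", other)][0] > payoff[("C", other)][0]

# ...yet (D, D) is collectively worse than (C, C).
print(payoff[("D", "D")], "<", payoff[("C", "C")])
```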

SLIDE 57

Sequential: first one actor chooses, then the other

Game tree

SLIDE 58

Repeated game

Repeating the game alters its strategic nature. A one-shot PD leads to mutual defection and a collectively suboptimal equilibrium. A repeated PD offers cooperative possibilities (when indefinitely repeated)

SLIDE 59

Rational strategies in a repeated PD

Always cooperate? → susceptible to exploitation by a defecting actor. Always defect? → Equilibrium strategy, but does not reap cooperative benefits.

SLIDE 60

Tit for tat, direct reciprocity

  • Start with cooperation.
  • In each next round mirror what the other player did in the previous round.

Axelrod (1984): Tit for Tat most successful strategy in computer tournament.
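The two-rule description of Tit for Tat can be sketched as a small repeated-PD simulation, using the payoff matrix from slide 56:

```python
# Repeated PD: tit-for-tat opens with C, then mirrors the opponent's
# previous move. Payoffs per round as in the slide's one-shot PD.
PAYOFF = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
          ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(s1, s2, rounds=10):
    h1, h2, total1, total2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h2), s2(h1)          # each sees the other's history
        p1, p2 = PAYOFF[(m1, m2)]
        total1, total2 = total1 + p1, total2 + p2
        h1.append(m1); h2.append(m2)
    return total1, total2

print(play(tit_for_tat, tit_for_tat))    # (20, 20): sustained cooperation
print(play(tit_for_tat, always_defect))  # (9, 12): exploited once, then mutual D
```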

SLIDE 61
  • Reputation plays a role
  • Information can also come from third parties (indirect reciprocity)

Multiple strategies are possible: Defect; Tit for tat; 50% cooperate / 50% defect. Which strategies succeed is often tested by evolutionary simulations. → Win-stay, lose-shift:

  • both C in previous round → C
  • both D in previous round → C with prob. β
  • other C, I D in previous round → D
  • other D, I C in previous round → D
  • etc.

SLIDE 62

Stag Hunt game

Two hunters: hunt stag or hunt hare

         Stag    Hare
Stag     3, 3    0, 2
Hare     2, 0    1, 1

What to do?

SLIDE 63

2 rational considerations in a Stag Hunt:

1. Maximize payoff 2. Risk avoidance. Cooperation requires trust. Stag Hunt game a.k.a. Assurance Game. In evolutionary simulations with random pairing: hare hunters take over the population, stag hunters go extinct.

SLIDE 64

Nash equilibrium

A combination of strategies is a Nash Equilibrium (NE) if neither party has a reason to unilaterally change its strategy. Stag Hunt: [Stag, Stag] & [Hare, Hare] are both Nash Equilibria.

SLIDE 65

What are the Nash equilibria (in pure strategies)?

Game 1:
        C1      C2
R1      2, 2    1, 3
R2      3, 1    0, 0

Game 2:
        C1      C2
R1      2, 1    0, 0
R2      0, 0    1, 2

Game 3:
        C1      C2
R1      2, 1    1, 0
R2      3, 0    0, 1
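The definition from the previous slide can be turned into a checker: a cell is a pure-strategy Nash equilibrium when each player's strategy is a best response to the other's. A sketch (shown here on the Stag Hunt, whose equilibria the slides already give; the exercise games can be fed in the same way):

```python
# Pure-strategy Nash equilibria of a 2x2 game: a cell is a NE if neither
# player gains by unilaterally switching row/column.
def pure_nash(game):
    rows, cols = ("R1", "R2"), ("C1", "C2")
    nash = []
    for r in rows:
        for c in cols:
            u_r, u_c = game[(r, c)]
            best_row = all(u_r >= game[(r2, c)][0] for r2 in rows)
            best_col = all(u_c >= game[(r, c2)][1] for c2 in cols)
            if best_row and best_col:
                nash.append((r, c))
    return nash

# Stag Hunt from slide 62 (R1/C1 = Stag, R2/C2 = Hare).
stag_hunt = {("R1", "C1"): (3, 3), ("R1", "C2"): (0, 2),
             ("R2", "C1"): (2, 0), ("R2", "C2"): (1, 1)}
print(pure_nash(stag_hunt))  # [('R1', 'C1'), ('R2', 'C2')]
```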

SLIDE 66

Evolutionary game theory

Payoff = number of offspring, reproduction. Individuals do not make choices but follow fixed strategies. After each round there is reproduction, new generations (older generations die). Evolutionarily stable strategy (ESS): a population whose members follow this strategy cannot be invaded by individuals that follow another strategy.

SLIDE 67

An ESS is always also a Nash Equilibrium. However, not every Nash Equilibrium is an ESS.

Hi-Lo game:
        C1      C2
R1      1, 1    0, 0
R2      0, 0    2, 2

→ a way to reduce the number of Nash equilibria (and get a unique solution). Evolutionary game theory can also be used for players who are boundedly rational and act on the basis of conditioning (stimulus-response), trial & error learning → gradually towards equilibrium.
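The claim from slide 63, that with random pairing hare hunters take over, can be illustrated with a small simulation. The payoffs are the Stag Hunt from the slides; the replicator-dynamics update rule and the chosen starting share are my own illustration, not part of the lecture:

```python
# Replicator-dynamics sketch for the Stag Hunt with random pairing.
# x = share of stag hunters in the population.
def step(x, dt=0.1):
    f_stag = 3 * x                 # expected payoff of hunting stag
    f_hare = 2 * x + (1 - x)       # expected payoff of hunting hare
    avg = x * f_stag + (1 - x) * f_hare
    return x + dt * x * (f_stag - avg)   # grow if above-average fitness

x = 0.4                            # assumed start, below the 50% basin boundary
for _ in range(2000):
    x = step(x)
print(round(x, 4))  # 0.0: stag hunters go extinct, hare hunters take over
```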

SLIDE 68

CONSEQUENTIALISM AND UTILITARIANISM

Lecture 3

SLIDE 69

Case: Data-driven innovation: Big Data for Growth and Well-being

“data-driven innovation has become a key pillar of 21st-century growth, with the potential to significantly enhance productivity, resource efficiency, economic competitiveness, and social well-being.”

Source: The Organisation for Economic Co-operation and Development (OECD) report “Data-driven innovation”

SLIDE 70

Normative ethical theories

  • 1. Consequentialism, Utilitarianism
  • 2. Deontology
  • 3. Social contract theory
  • 4. Virtue ethics
SLIDE 71

Person → Action → Consequences
  ↑ Virtue ethics    ↑ Deontology    ↑ Consequentialism / Utilitarianism

Interdependency of actors (as in game theory) → Social contract theory

SLIDE 72

Consequentialism

Consequentialism: moral worth lies in the consequences of an action.

  • That is, in the value(s) that are realized (e.g. freedom, wellbeing/happiness/utility, knowledge, beauty, etc.)
  • An action is morally good if it has good consequences, given the possible actions.
  • Can be monistic or pluralistic in terms of values.
  • Value(s) can be maximized, but not necessarily (another possibility would be e.g. egalitarian).

SLIDE 73

Utilitarianism

■ Subset of consequentialism. ■ Monistic: only utility (= wellbeing) counts ■ Maximizes / promotes utility ■ What is utility or wellbeing? → lecture 1 – Hedonism – Preference satisfaction (as in decision theory, lecture 2) – Objective list

SLIDE 74

Prominent utilitarians

Jeremy Bentham (1748-1832), John Stuart Mill (1806-1873), Henry Sidgwick (1828-1900), Derek Parfit (1942 – 2017), Peter Singer (1946-) Bentham: ..“this fundamental axiom, it is the greatest happiness of the greatest number that is the measure of right and wrong.” Mill: “Utility, or the Greatest Happiness Principle, holds that actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness.”

SLIDE 75

Procedure

  • 1. Define the concept of utility: hedonism / preference satisfaction / objective list.
  • 2. What are the possible alternative actions?
  • 3. Determine for each alternative action its total expected utility. Expected utility = probability × utility; total = aggregate over all those who are involved.
  • 4. The action that max [total utility] = the morally good action = one’s obligation to perform.
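The four-step procedure can be sketched as code; the actions, probabilities, and per-person utilities below are invented purely for illustration:

```python
# Utilitarian procedure sketch: for each action, aggregate expected utility
# over everyone involved, then pick the action that maximizes the total.
def total_expected_utility(outcomes):
    """outcomes: list of (probability, utilities-per-person) pairs."""
    return sum(p * sum(utils) for p, utils in outcomes)

# Hypothetical choice between two public projects, three people affected.
actions = {
    "build park":    [(0.9, [5, 5, 5]), (0.1, [0, 0, 0])],
    "build parking": [(1.0, [2, 2, 8])],
}

totals = {a: total_expected_utility(o) for a, o in actions.items()}
print(max(totals, key=totals.get))  # build park
```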

SLIDE 76

Characteristics

  • Impartiality: no one is privileged. – Anyone who can be more or less happy (can suffer) belongs to the moral community. – Argumentative basis for: women’s right to vote, the abolition of slavery, animal welfare/rights
  • Traditional moral rules (thou shalt not steal, thou shalt not lie) are not absolute. Those rules must be interpreted as flexible, e.g. lying is good if it increases total utility.

SLIDE 77
  • Forward looking: consequences are ahead, in the future. The past is irrelevant. – E.g., punish somebody because that increases future utility in society, not because he deserves punishment.
  • Insatiable: any further increase of utility is better, and morally required. – Variant that drops this: satisficing utilitarianism.

SLIDE 78

Case: Big data for growth and wellbeing

Some facts from:

https://worldhappiness.report/ed/2019/big-data-and-well-being/

■ Number of likes on Facebook → correlates with individual Life Satisfaction (i.L.S.) (but not strongly) ■ Sentiment analysis (positive emotion terms, negative emotion terms) on Twitter → correlates with i.L.S. (but not strongly) ■ Drug prescriptions from administrative datasets of a population → correlates with i.L.S. (more strongly)

SLIDE 79

■ Google Trends data on the frequency of positive terms to do with work, health, family → correlates with i.L.S. (more strongly) ■ Sentiment analysis of Twitter in Mexico → correlation with events (more strongly) ■ Aggregate sentiment data → correlates with between-countries / between-groups variation (more strongly)

SLIDE 80
SLIDE 81

How can these data be used?

■ Reduces the reliance on expensive large surveys. ■ Governments and companies can target the low mood / life satisfaction areas with specific policies.

SLIDE 82

Philosophical issues

■ Which concept of well-being? For which use? ■ How to interpret the low correlation of mood/sentiment measures with life satisfaction? ■ Targeting the low mood / life satisfaction areas with specific policies seems to presuppose a utilitarian calculus: is that justified? ■ Most data are retrieved without consent.

SLIDE 83

■ The ability to measure some proxies well may (unwittingly) move other important things to the background. ■ How to deal with those other important things, e.g. freedom? Possible answers: – No need! Everything is already incorporated in the well-being measure. – Can be measured, e.g. in terms of opportunity sets, but cannot be compared with well-being (e.g. must have a threshold value, must be prioritized) – Can be measured and compared (to a sufficient extent): a utility function can be constructed.

SLIDE 84

General criticism and discussion

1. Is utility all that matters? Aren’t there other intrinsic values?

  • Is the completeness axiom correct?

2. Rules like ‘thou shalt not steal’ are inflexible. They concern fundamental rights that cannot be traded off against considerations of utility/wellbeing. E.g. it is wrong to sacrifice innocent people in order to max [utility]. No exploitation of minorities. 3. Heavy information processing: for each situation, calculate expected utility. 4. Integrity (and separateness) of persons: individuals are more than carriers of utility.

SLIDE 85

5. Backward-looking reasons are important. E.g. one deserves punishment for what one has done.

6. Special relations are important: family and friends have a higher priority than strangers.

SLIDE 86

Responses utilitarianism

→ Bite the bullet: e.g. Peter Singer: most criticism is an irrational product of our evolutionary and cultural past. → Modifications: e.g. indirect / rule utilitarianism

  • Utilitarian argument: total utility when everybody calculates in utilitarian fashion < total utility when everybody follows rules
  • System of rules that apply to all in a society.
SLIDE 87

Indirect / rule utilitarianism - discussion

■ Problem for indirect/rule utilitarianism: rule fetishism: must a rule always be followed, no matter the circumstances? Even when it is obvious that it does not yield max [U]? – Response: the rules are rules of thumb, plans for the future. Utilitarian calculus → design a system of global rules to max [U]. Follow these rules as long as there is no reason to reconsider (and to recalculate and redesign).

SLIDE 88

■ Other problem: what to do in an actual situation is derived from a hypothetical situation. ■ Yet another problem: does it provide the appropriate moral justification? Example: I save my own child instead of 2 strange children. Why? Well, because this rule is an element of a system that max [U]… Isn’t that one thought too many? (Bernard Williams)

SLIDE 89

Contemporary utilitarian Peter Singer

https://www.ted.com/talks/peter_singer_the_why_and_how_of_effec tive_altruism?language=nl

SLIDE 90

DEONTOLOGY AND SOCIAL CONTRACT THEORY

Lecture 4

SLIDE 91

Thought-experiment in ethics: trolley problem

Are you going to throw the switch?

SLIDE 92

Trolley problem part 2

Are you going to push the fat man?

SLIDE 93

Deontology

Founding father: Immanuel Kant (1724-1804) Kant: moral worth is not to be found in the consequences of an action. E.g. lying or stealing or killing is not bad because of the bad consequences that these actions may happen to have but because they are bad actions, period. How to understand this?

SLIDE 94

Example

X helps Y to cross the street. Is moral worth to be found in the consequences? Suppose X does it because he:

  • actually wants to gain approval from Y and bystanders?
  • actually sympathizes with Y?
  • actually expects something in return from Y?
  • actually experiences pleasure from doing this?
SLIDE 95

In such cases the consequences are the same, but the action is not good: the person does not act out of duty but only dutifully, according to duty. What makes an action good then, if not the consequences? “I helped her cross the street because that is the right thing to do.” “But this is circular!” Patience… Moral worth shows itself most clearly when other motives are (somehow) absent, e.g. when someone’s mood is clouded – and one still does the right thing.

SLIDE 96

Doing the right thing looks pretty formal now. Kant: that is exactly right! Principle underlying the right intention = lawlike, like a natural law. Only this law is a law that humans impose on themselves.

SLIDE 97

Difference between humans and animals

Kant: the rational nature of human creatures. Animals are driven by inclinations and impulses → subject to natural laws But humans can also impose laws on themselves, and follow them. (This gives us freedom)

SLIDE 98

Kant and Newton

Newton: everything in the universe is subject to natural laws. Kant: morality has universal scope and necessity → just like Newton’s laws. Only: humans impose the laws on themselves.

SLIDE 99

Categorical imperative (1)

Universal law formulation Act only according to that maxim by which you can at the same time will that it should become a universal law Categorical: not contingent on one’s own desires (such imperatives Kant calls ‘hypothetical’) and not on the circumstances. Kant’s idea: moral reasons are universally binding, irrespective of time, place, person.

SLIDE 100

Example: lying, breaking a promise

Can this be action-guiding for you, and can you at the same time want that everybody acts like this? That would be self-defeating.

SLIDE 101

Categorical imperative (2)

Humanity formulation: Act so that you treat humanity, whether in your own person or in that of another, always as an end and never merely as a means.

Humans are fundamentally unlike commodities: they do not have a price. They have dignity.

This formulation also prohibits lying and breaking promises / contracts, because then you treat another person only as an instrument for your own purposes.
SLIDE 102

How are (1) and (2) related?

Kant argued that the various versions of the Categorical Imperative are equivalent. (But this is not very clear) One argument: a rational creature (= a creature that imposes laws upon himself, who self-governs) must respect this distinguishing feature of himself, must respect his rational nature.

Stanford: par. 9. The Unity of the Formulas

SLIDE 103

Criticism and discussion

1. Absoluteness of moral rules. Aren’t some lies sometimes permitted, e.g. to divert a murderer? 2. Wellbeing/happiness does not have moral status.

  • Response: striving towards wellbeing/happiness is allowed, as long as it is not disallowed by the categorical imperative.

3. Creatures who are less than rational (children, the cognitively impaired, animals) are not part of the moral community. 4. What if rules conflict?

SLIDE 104

Utilitarianism: utility / wellbeing → maximize, promote
Deontology: autonomy → side constraint on that maximization → individual rights
Kantian ethics: philosophical foundation of universal human rights

SLIDE 105

Machine learning

■ Predict criminal behavior; prevent terrorist attacks; hire the best workers; diagnose illnesses; legal analysis ■ Amplify biases against Black people, Muslims, women; enhance discrimination ■ Values to be protected or promoted: human lives, well-being, privacy ■ How to weigh these? – Utilitarian: tradeoffs – Kantian: constraints

SLIDE 106

Self driving cars

  • What if a fatal accident happens because the car makes a mistake? Who is then responsible? The car owner, the manufacturer, the designer?
  • Possibly a “Problem of Many Hands” (lecture 5)
SLIDE 107

Lessons from trolleyology?

  • What if an accident happens, and the only options are: steer to the left and kill a pedestrian, or steer to the right and crash, thereby killing the two passengers?
  • Consequentialist and deontological engineers will come up with different design specifications

slide-108
SLIDE 108

May we hack a car

■ With terrorists in it? → lock the doors, slow the car down, stop it / drive it to the police station
■ For the benefit of us all
■ Violation of autonomy
■ What if this capacity falls into bad hands?

slide-109
SLIDE 109

Self organize in platoons

By means of communication between the cars ■ Reduces emissions ■ Reduces traffic jams ■ Loss of autonomy for the “driver”

slide-110
SLIDE 110

Social Contract Theory

■ Morality is social → the moral reasons of people are interdependent
■ Morality = a system of mutual expectations and preferences by which people can solve cooperation problems, as in n-person Prisoner's Dilemma problems – most notably: the provision of collective goods in a society

slide-111
SLIDE 111

Collective goods

■ Goods that can only be produced by cooperation ■ Once produced, everyone can benefit ■ Vulnerable to free riding ■ Examples: infrastructure, national defense, health care, public schools, dykes, clean air ■ Martin Tisne in MIT Technology Review (Dec. 2018) argues that data should be conceptualized as a collective good, and that data ownership is a wrong idea https://www.technologyreview.com/s/612588/its-time-for-a-bill-of-data-rights/

slide-112
SLIDE 112

In a PD – in, e.g., material outcomes – it is individually rational to defect: → D,D equilibrium, while C,C is Pareto superior. All players have an interest in reaching C,C instead of D,D.

slide-113
SLIDE 113

Problem: C,C is not stable, not an equilibrium. At the same time, the players would have a collective reason to go for C,C. Derive from this an individual reason to do C = a moral reason.
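The claim in the two slides above can be checked mechanically: in a one-shot PD, D,D is the only Nash equilibrium even though C,C is Pareto superior. A minimal sketch, with invented payoffs obeying the PD ordering T=5 > R=3 > P=1 > S=0 (the numbers are illustrative, not from the slides):

```python
from itertools import product

C, D = "C", "D"
payoff = {  # (row move, col move) -> (row payoff, col payoff)
    (C, C): (3, 3),  # mutual cooperation: Pareto-superior outcome
    (C, D): (0, 5),  # the cooperator is exploited
    (D, C): (5, 0),
    (D, D): (1, 1),  # mutual defection
}

def is_nash(row, col):
    """A profile is a Nash equilibrium if no player gains by deviating alone."""
    r, c = payoff[(row, col)]
    no_row_dev = all(payoff[(alt, col)][0] <= r for alt in (C, D))
    no_col_dev = all(payoff[(row, alt)][1] <= c for alt in (C, D))
    return no_row_dev and no_col_dev

equilibria = [p for p in product((C, D), repeat=2) if is_nash(*p)]
print(equilibria)  # only (D, D): individually rational, collectively suboptimal
```

C,C fails the test because each player gains by unilaterally switching to D, which is exactly why the slide calls it "not stable."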

slide-114
SLIDE 114

■ The individual moral reason to do C is dependent on the expectation that others also C.
■ If you have a good reason to expect that the others will D, then the moral obligation dissolves.
■ C,D: there is no moral obligation to let yourself be exploited by others.

slide-115
SLIDE 115

Thomas Hobbes

■ 1588-1679, founding father of social contract theory.
■ Wrote Leviathan during the English Civil War, the conflict between Royalists and Parliamentarians.
■ Central question: people must reach a mutual agreement to stop / avoid the war of all against all. What set of rules, and how to ensure that they are followed?
■ War of all against all = state of nature = D,D

slide-116
SLIDE 116

Central question in general

■ People must reach a mutual agreement to avoid suboptimal outcomes. What set of rules, and how to ensure that they are followed?
■ Multiple solutions possible: multiple Nash equilibria
■ Law is a system of rules that reduces the various Nash equilibria
■ Martin Tisne: unrestricted use of data in the aggregate → bad for everybody – Restricted use by everybody = the cooperative outcome
■ Necessary: a bill of data rights

slide-117
SLIDE 117

Tisne’s proposal - Bill of data rights

■ The right of the people to be secure against unreasonable surveillance shall not be violated. ■ No person shall have his or her behavior surreptitiously manipulated. ■ No person shall be unfairly discriminated against on the basis of data.

slide-118
SLIDE 118

PD situations often have a repeated nature. In an indefinitely repeated PD, a cooperative equilibrium is possible without morality. E.g. people can play Tit for Tat = an equilibrium strategy. Still:

  • Often enough one-shot (jackpot) situations.
  • Often enough imperfect monitoring, registering, memorizing.

→ Moral motivation to C enhances cooperation.
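The repeated case can be illustrated with a small simulation. This is a sketch assuming the same invented PD payoffs as before (R=3, P=1, T=5, S=0): mutual Tit for Tat sustains cooperation every round, while Tit for Tat is exploited only in the first round by an unconditional defector:

```python
def play(strat_a, strat_b, rounds=100):
    """Play two strategies against each other; return total payoffs."""
    pay = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}  # illustrative payoffs
    hist_a, hist_b = [], []
    tot_a = tot_b = 0
    for _ in range(rounds):
        # each strategy sees only the opponent's history
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = pay[(a, b)]
        tot_a += pa
        tot_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return tot_a, tot_b

tit_for_tat = lambda opp: "C" if not opp else opp[-1]  # cooperate first, then mirror
always_defect = lambda opp: "D"

print(play(tit_for_tat, tit_for_tat))    # (300, 300): cooperation sustained
print(play(tit_for_tat, always_defect))  # (99, 104): exploited once, then mutual D
```

In a one-shot (jackpot) situation, or with imperfect monitoring, this reciprocity mechanism breaks down, which is the slide's point about moral motivation.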

slide-119
SLIDE 119

Criticism and discussion

■ The social contract is not real. – Response: understand the social contract as tacit or hypothetical.
■ People / creatures who do not participate in the cooperative project can also have moral status: future generations, the cognitively impaired, animals, children. – Response: that is another part of morality.

slide-120
SLIDE 120

Hobbesian social contract theory

■ Derive moral rules from rational self-interest.
■ Cooperation problems are often (in part) negotiable, and in real life negotiations are sensitive to starting positions: – the resources or capital that parties bring to the bargaining table – wealth, goods, talents, skill sets, networks, etc.
■ Moral rules must therefore be sensitive to the various starting positions.
■ Example, how to distribute the cooperative surplus in a repeated PD or a Stag Hunt: according to the Nash Bargaining Solution.

slide-121
SLIDE 121

Non-Hobbesian: Hume, Rousseau

■ Morality is a combination of utilitarian or Kantian motives & interdependency
■ John Rawls: modern Kantian social contract theorist. – Talents and skills are a product of genes and upbringing → morally arbitrary: they should not play a role in designing a just society. – Therefore hypothesize a contract situation in which everybody reasons behind a veil of ignorance: you have no information about your resources.

slide-122
SLIDE 122

■ Decision making under ignorance: Rawls: maximin → 2a
■ Rawls' two principles of justice
1. Principle of Equal Liberty: Each person has an equal right to the most extensive liberties compatible with similar liberties for all.
2. Difference Principle: Social and economic inequalities should be arranged so that they are both (a) to the greatest benefit of the least advantaged persons, and (b) attached to offices and positions open to all under conditions of equality of opportunity.
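The maximin rule mentioned above can be stated in one line: pick the arrangement whose worst-off position is best. A sketch with invented payoff numbers for three hypothetical social arrangements (the numbers and labels are illustrative assumptions, not Rawls' own):

```python
# payoffs for the possible social positions you might end up in,
# from best-off to worst-off (invented numbers)
acts = {
    "laissez-faire":        [100, 40, 5],
    "difference-principle": [80, 55, 40],   # inequality, but worst-off does best
    "strict-equality":      [35, 35, 35],
}

def maximin(options):
    """Maximin under ignorance: choose the act with the highest minimum payoff."""
    return max(options, key=lambda a: min(options[a]))

print(maximin(acts))  # "difference-principle": its worst case (40) beats 35 and 5
```

Behind the veil of ignorance you do not know which position you will occupy, so on this rule inequalities are acceptable exactly when they raise the floor, which is the intuition behind the Difference Principle.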
slide-123
SLIDE 123

Once more, self-driving cars

Study in Science (2015), The Social Dilemma of Autonomous Vehicles
■ Most people express a moral preference for utilitarian AVs that minimize casualties.
■ They also prefer a self-protecting vehicle for themselves.
■ Prisoner's Dilemma: a reason for government regulation.
■ Most people do not want the government to regulate this.
■ Look at existing jurisprudence. Killing innocent pedestrians by cars: not excused. Car owners have special duties. Cf. the lifeboat case, US v. Holmes (1842)

slide-124
SLIDE 124

APPLICATIONS

COST BENEFIT ANALYSIS, MANY HANDS PROBLEM Lecture 5

slide-125
SLIDE 125

Social cost benefit analysis

■ Governments must compare policy alternatives, e.g. build a bridge or dig a tunnel, build a new airport or keep the status quo, impose a tobacco ban or not
■ Compare in terms of future advantages and disadvantages
■ Use a metric to compare – Utility? Most often: money
■ Monetize all (or most of) the benefits and costs
■ Recalculate to the same year (with an interest rate)
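The last step, recalculating all flows to the same year, is standard discounting. A minimal sketch with invented cash flows and an assumed 4% rate (nothing here comes from the 2011 report discussed later):

```python
def present_value(flows, rate):
    """flows[t] is the amount in year t (year 0 = now); discount to year 0."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(flows))

# invented streams, in millions: construction cost now, benefits and
# maintenance in later years
benefits = [0, 50, 50, 50]   # e.g. monetized travel-time gains per year
costs    = [120, 5, 5, 5]    # e.g. construction now, maintenance later
rate = 0.04                  # assumed discount (interest) rate

npv = present_value(benefits, rate) - present_value(costs, rate)
print(round(npv, 2))  # ≈ 4.88: positive, so the project passes this test
```

Note how sensitive the verdict is to the rate: a higher discount rate shrinks the future benefits and can flip the sign of the NPV, which is why the choice of discount rate is itself an ethical question (see the criticism slide below on discount rates).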

slide-126
SLIDE 126

Benefits and costs that do not have a market price

■ Casualties, wounded ■ CO2 emissions ■ Noise ■ Nature, environment

  • Different methods
  • What are the costs of reducing it? E.g. plant trees to reduce CO2
  • Willingness to pay
  • Willingness to accept
slide-127
SLIDE 127

CBA of CBA…

■ Rationalizes public decision making ■ Less subjective ■ Forces one to include all relevant considerations ■ Provides common ground ■ Never complete ■ Some estimates are very uncertain

slide-128
SLIDE 128

CBA of increase speed limit to 130 km/h

2011, Dutch Ministry of Infrastructure and Water Management
→ Benefits?
→ Costs?

slide-129
SLIDE 129

Determination

Benefits

■ Travel time → average salary × time

Costs

■ Fuel costs → price × liters
■ Emissions → plant trees
■ Nature → WTP / WTA / intrinsic value?
■ Deaths → 2 million/person, via WTP
■ Wounded → hospital costs
■ Noise → WTP / WTA / costs of building a noise barrier

slide-130
SLIDE 130

Weighing

■ Utilitarian: add and subtract, or calculate the B/C ratio ■ Non-utilitarian: constraints, thresholds
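The two weighing styles can be contrasted in a few lines. A sketch with invented numbers: the utilitarian test aggregates everything into one B/C ratio, while the non-utilitarian test checks side constraints that no monetized gain may override:

```python
def bc_ratio(benefits, costs):
    """Utilitarian weighing: one aggregate benefit/cost ratio."""
    return sum(benefits) / sum(costs)

def passes_constraints(effects, thresholds):
    """Non-utilitarian weighing: every constrained effect must stay
    at or below its threshold, regardless of the monetized balance."""
    return all(effects[k] <= thresholds[k] for k in thresholds)

# invented figures for a speed-limit increase
benefits = [140, 20]   # monetized travel-time gains, extra fuel-tax revenue
costs = [90, 25]       # monetized emissions, noise barriers

# a hypothetical side constraint on casualties
effects = {"extra_deaths_per_year": 4}
thresholds = {"extra_deaths_per_year": 3}

print(bc_ratio(benefits, costs) > 1)            # utilitarian verdict: go ahead
print(passes_constraints(effects, thresholds))  # constraint verdict: don't
```

The same project can pass one test and fail the other, which is why the slides treat the choice of weighing scheme as itself a moral choice rather than a technical one.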

slide-131
SLIDE 131

Actual 2011 report

■ Nature: impact on species / habitats ■ Emissions: European norms ■ Actual investments to build noise barriers ■ Actual investments to mitigate casualties

  • Justification of the analysis is not explicit and can be criticized:
  • Do human lives have a price?
  • Are all values commensurable?
  • Value of nature: dependent on our WTP?
  • Future generations?
  • Discount rate = market interest rate?
  • etc.
slide-132
SLIDE 132

Problem of Many Hands

■ Something bad happens due to collective human conduct
■ But it is difficult or impossible to pinpoint individual responsibility
→ Problem: collective responsibility but no individual responsibility

slide-133
SLIDE 133

Criteria moral responsibility

A person is morally responsible when something goes wrong if:
1. He did something wrong – Wrongdoing
2. He did not act under coercion and could have acted differently – Freedom
3. He caused the bad state of affairs – Causality
4. He could have known that his action would cause the bad state of affairs – Knowledge

slide-134
SLIDE 134

Responsibility

Can also sometimes be assigned to collectives like
■ Organizations (firms, NGOs, governments)
■ Groups (people playing soccer in the park)
■ Occasional collections (bystanders who can prevent something together)
→ These have / ought to have a collective aim.

slide-135
SLIDE 135

Importance of assigning responsibility

■ Retribution ■ Correction ■ Prevention

slide-136
SLIDE 136

PMH: a gap between collective responsibility and individual responsibility, for example

Collective

✓ Wrongdoing ✓ Freedom ✓ Causality ✓ Knowledge

Individual

✓ Wrongdoing ✓ Freedom ✓ Causality ✗ Knowledge
slide-137
SLIDE 137

Examples

■ Oil spill Mexico BP ■ Herald Free Enterprise ■ Citicorp building ■ Climate change

slide-138
SLIDE 138

BP Oil Spill: Who is to Blame?

https://www.youtube.com/watch?v=txmb-Tzxyd8
BP, Halliburton and Transocean blamed each other.
National Commission, installed by Obama: "clear mistakes" [but] "though it is tempting to single out one crucial misstep or point the finger at one bad actor (..) any such explanation provides a dangerously incomplete picture"
BP: "a complex and interlinked series of mechanical failures, human judgments, engineering design, operational implementation and team interfaces came together to allow (..) the accident."

slide-139
SLIDE 139

How to deal with PMH cases? Three models:

■ Hierarchical model: top management is responsible ■ Collective model: each member is responsible for the whole ■ Individual model: each member is responsible in relation to his/her contribution (see section 9.4 from the chapter on this)

slide-140
SLIDE 140

VALUES IN SCIENCE SCIENTIFIC INTEGRITY

Lecture 6

slide-141
SLIDE 141

Ethics in science

Robert Merton (1910-2003): American sociologist
The Normative Structure of Science (1942): describes the ethos of science
The CUDOS norms:
1. Communism
2. Universalism
3. Disinterestedness
4. Organized Skepticism

slide-142
SLIDE 142

Communism

■ Science = collective property, not individual property – Boolean operators are not Boole's, Gödel's incompleteness theorem is not Gödel's, the Nash equilibrium is not Nash's. – Scientific findings are broadly shared and made accessible. – No patents.

slide-143
SLIDE 143

Universalism

■ Acceptance/ refutation of claims occurs on impersonal grounds – Personal or social circumstances are irrelevant – No ethnocentrism – No selection on the basis of gender – Careers are open for talents

slide-144
SLIDE 144

Disinterestedness

■ No other goals than the interest of science itself – On an institutional level – "Exacting scrutiny" by other scientists – "Virtual absence of fraud"

slide-145
SLIDE 145

Organized skepticism

■ No dogmatism, nothing is sacred – Perhaps dogmatism on an individual level, but not as a community – Temporary suspension of judgment and detached scrutiny of beliefs

slide-146
SLIDE 146

Criticism, e.g. John Ziman

■ John Ziman (1925-2005), theoretical physicist
■ Real Science: What It Is and What It Means. Cambridge University Press, 2000
■ CUDOS does not cover industrial and government research labs

slide-147
SLIDE 147

Back to Merton CUDOS

■ Can also be interpreted normatively: perhaps it is not descriptive of current practice, but this is how it OUGHT to be → Ethical code of conduct: VSNU

→ https://www.vu.nl/en/about-vu-amsterdam/academic-integrity/index.aspx

slide-148
SLIDE 148

VSNU Netherlands code of conduct for research integrity

Core values

■ Honesty ■ Scrupulousness ■ Transparency ■ Independence ■ Responsibility

slide-149
SLIDE 149

Breaches or problems with Scientific Integrity

  • Plagiarism
  • Data manipulation
  • Data fabrication
  • Authorship
  • Conflict of interest
slide-150
SLIDE 150

Plagiarism

“I know it was your idea, but it was my idea to use your idea.”

slide-151
SLIDE 151

Plagiarism

A teacher has written a study book intended for first-year students. To increase its readability he has not used source references, offering instead a list of further reading recommendations per chapter. In writing the book, he nevertheless made extensive use of the work of colleagues from all over the world. Should he have made detailed mention of this?

Source: The Netherlands Code of Conduct for Scientific Practice Principles of good scientific teaching and research 2004, revision 2012 Association of Universities in the Netherlands

slide-152
SLIDE 152

Robert A. Millikan 1868 - 1953

American experimental physicist who won the Nobel Prize in Physics in 1924 for

■ his measurement of the elementary electronic charge and ■ his work on the photoelectric effect.

Felix Ehrenhaft 1879- 1952

Austrian physicist who contributed to:

  • atomic physics
  • optical properties of metal colloids
  • the measurement of electrical charges

slide-153
SLIDE 153

Manipulation of data?

■ Millikan had chosen between drops ■ Out of 189 observations only 140 were presented in the paper ■ Still he wrote in his paper: "It is to be remarked that this is not a selected group of drops but represents all of the drops experimented upon during 60 consecutive days"

slide-154
SLIDE 154

Controversy with Ehrenhaft

■ His results were contested by Ehrenhaft, who claimed to have found subelectrons ■ Looking back, we know Millikan was right and Ehrenhaft wrong. Does that matter?

slide-155
SLIDE 155
slide-156
SLIDE 156

Data fabrication – The Diederik Stapel affair

  • Experimental finding by Stapel, Vonk, Zeelenberg:

■ Correlation between meat eating & a-social behavior
■ High school students who had looked at pictures of a steak scored much higher on competitiveness and a-social behavior in a subsequent task than other students who had looked at pictures of trees and clouds.
■ Earlier research: people associate meat with traits like toughness and self-confidence.
■ Earlier opinion piece by Vonk: meat eaters are brutes

slide-157
SLIDE 157

Other example: ■ “Coping with chaos” in Science by Stapel and Lindenberg (2011) ■ Messy environment induces stereotypical thinking and discrimination ■ Earlier research by Lindenberg, Keizer and Steg (2008): “The spreading of disorder” – a messy environment induces norm breaking behavior

slide-158
SLIDE 158

■ Number of fraudulent publications: 69 ■ Many co-authors (ca. 30), from senior faculty to PhD students ■ Discovered by 3 junior researchers in 2011

  • 1. Data sets that were too good to be true
  • 2. Stapel got tested himself: "how would thinking of the financial crisis influence charity?" → a month later: confirmation with a perfect data set, but with far too low consistency among the answers
  • 3. Replication of earlier studies: no effects
slide-159
SLIDE 159

Levelt Committee: not just Stapel, the whole field → sloppy science

An experiment fails to yield the expected statistically significant results. The experiment is repeated, often with minor changes in the manipulation or other conditions, and the only experiment subsequently reported is the one that did yield the expected results. It is unclear why in theory the changes made should yield the expected results. The article makes no mention of this exploratory method; the impression created is of a one-off experiment performed to check the a priori expectations. It should be clear, certainly with the usually modest numbers of experimental subjects, that using experiments in this way can easily lead to an accumulation of chance findings. It is also striking in this connection that the research materials for some studies show the use of several questionnaire versions, but that the researchers no longer knew which version was used in the article.

slide-160
SLIDE 160

A variant of the above method: a given experiment does not yield statistically significant differences between the experimental and control groups. The experimental group is then compared with a control group from a different experiment – reasoning that 'they are all equivalent random groups after all' – and thus the desired significant differences are found. This fact likewise goes unmentioned in the article.

slide-161
SLIDE 161

The removal of experimental conditions. For example, the experimental manipulation in an experiment has three values. Each of these conditions (e.g. three different colours of the otherwise identical stimulus material) is intended to yield a certain specific difference in the dependent variable relative to the other two. Two of the three conditions perform in accordance with the research hypotheses, but a third does not. With no mention in the article of the omission, the third condition is left out, both in theoretical terms and in the results.

Related to the above is the observed verification procedure in which the experimental conditions are expected to have certain effects on different dependent variables. The only effects on these dependent variables that are reported are those that support the hypotheses, usually with no mention of the insignificant effects on the other dependent variables and no further explanation.

slide-162
SLIDE 162

Authorship

■ How to establish whether somebody has co-authored a paper? ■ How to establish the order of authors: first author, second author, … last author?

slide-163
SLIDE 163

Example: Guidelines ICMJE

1. Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; AND
2. Drafting the work or revising it critically for important intellectual content; AND
3. Final approval of the version to be published; AND
4. Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

slide-164
SLIDE 164

All those designated as authors should meet all four criteria for authorship, and all who meet the four criteria should be identified as authors. Those who do not meet all four criteria should be acknowledged. Examples of activities that alone (without other contributions) do not qualify a contributor for authorship are: acquisition of funding; general supervision of a research group or general administrative support; and writing assistance, technical editing, language editing, and proofreading.

slide-165
SLIDE 165

Conflict of interest - Example Glyphosate ingredient in weedkiller

  • Professor Alan Boobis chairs the joint FAO/WHO panel: "probably not carcinogenic"
  • Companies that produce weedkillers with glyphosate are not hampered, e.g. Monsanto
  • Boobis got a $1,000,000 donation from Monsanto

slide-166
SLIDE 166

Remedies?