PHILOSOPHY
2018-2019 JELLE DE BOER
Lecture 1
This lecture, today
■ Practical matters ■ Introduction ■ Values – Wellbeing, happiness ■ Subjectivism ■ Relativism
Grade components
■ Multiple Choice Exam: 60% ■ Duo Essay: 40% ■ The Multiple Choice Exam covers the prescribed literature and the lecture slides. ■ Duo essay, 1000-1500 words: an ethical reflection on a suitable subject from your bachelor thesis, or on something else.
■ The Elements of Moral Philosophy, Rachels – book, to be bought ■ An Introduction to Decision Theory, Peterson – Canvas ■ Social Cost Benefit Analysis – Canvas ■ The Distribution of Responsibility, Van de Poel & Royakkers – Canvas ■ Values in Science, Staley – Canvas
Responsibility; TBA
■ Discipline in philosophy that studies morality – ethics = moral philosophy ■ Morality can be studied in different ways:
– Empirically/descriptively: how do people behave, what causes their behavior, what are the mechanisms?
– Meta-ethically: do moral statements have truth values; what are moral properties?
– E.g. a self-driving car causes an accident: who is responsible? The car owner, the manufacturer, the designer?
■ Europe: tech platforms like Facebook and YouTube must restrict the sharing of music, art, journalistic content, and illegal downloading ■ Block illegal content by upload filters – That is censorship! Protest in the streets, March 23 ■ Axel Voss, German member of the European Parliament: “Google and Facebook spread disinformation and use young people as a mere means.” – What does Voss mean by this? And what exactly is so bad about this?
■ Predict criminal behavior; prevent terrorist attacks; hire the best workers; diagnose illnesses; legal analysis ■ Amplify biases against Blacks, Muslims, women; enhance discrimination ■ What values are at stake? How to understand these values? How to weigh them?
■ How to use data? ■ Can you leave out certain data? E.g.
■ Must your study be replicable? ■ What to do if a senior colleague asks you to commit scientific fraud?
How do these values relate to each other?
■ Monism: there is one fundamental intrinsic value, e.g. happiness; all other values are instrumental.
■ Pluralism: values are irreducible; they are all intrinsic.
Freedom Knowledge Friendship Love Beauty Equality Happiness, wellbeing …
■ E.g. how do social media affect people’s lives? ■ Somebody likes your message or photo → dopamine. Isn’t that nice? ■ Or is it harmful? In what sense? To determine answers to these questions we need a concept of wellbeing/happiness
preferences
■ Hedonism: wellbeing = sum of pleasure – pain; a feeling, a psychological state ■ Epicurus (341 BC – 270 BC), Bentham (1748-1832), Mill (1806-1873) ■ Source of wellbeing is irrelevant
■ Wellbeing does not always come down to an inner feeling. E.g. when you look at a beautiful painting, or when you try to master something difficult. ■ The experience machine of Robert Nozick: do you go in? According to Nozick: Surely not! Since people want: a) to live a real life (compare Charles de Bovary) b) to be a person (instead of a mere heap of organic matter) c) to do things (instead of merely experiencing them)
■ Wellbeing has distinct forms. Wellbeing [dancing] ≠ wellbeing [writing a book] ≠ wellbeing [being with friends] John Stuart Mill: some of these are better: it is better to be Socrates dissatisfied than a pig satisfied. Jeremy Bentham disagreed: pushpin (a simple game) is just as good as poetry.
■ Hedonic treadmill:
■ Measurement problem i): how to measure one’s hedonic state? Important drive for development of preference satisfaction approach. (But currently there are revivals: the happiness indicator industry) ■ Measurement problem ii): how to compare hedonistic states between people?
Wellbeing = preference satisfaction. N.B.: do not interpret this hedonistically! Satisfying as in satisfying requirements. → modern theory: preference satisfaction = utility (decision theory, Lecture 2). How to determine this: ordinal scale, Von Neumann-Morgenstern interval scale
■ Uninformed preferences. E.g. you take a medicine unaware of the side effects. You use Facebook unaware of its possibly addictive effect. ■ Adaptive preferences. Preferences that adapt to the circumstances.
(tends towards objective list).
■ Malevolent preferences. Should sadistic preferences count for someone’s wellbeing? ■ Experience matters. E.g. stranger in the train. ■ Are all types of preference satisfaction on equal footing? preference satisfaction [dancing] ≠ preference satisfaction [mathematical proof] ≠ preference satisfaction [play tennis] ≠ preference satisfaction [collecting bottle caps] ■ Measurement problem how to compare utility/wellbeing among people?
■ Objective list of basic needs, e.g.: – food – drink – income – shelter – social relations ■ Objective list of things that make people flourish, e.g.: – education – culture – sport – freedom – having a voice – clubs
■ Physical health ■ Bodily integrity ■ Making use of senses ■ Imagination and thought ■ Express emotions ■ Practical reasoning ■ Social relations and self respect ■ Live in nature and among animals ■ Laughing and playing ■ Political and material control over environment
extra money to buy glasses Objective lists - in general: ■ No intrapersonal, no interpersonal measurement problem ■ Relatively easy to use for policy makers
■ Are the items on the list the correct items? ■ How to justify the items? – By saying that people want them? = preference satisfaction theory ■ The items do not constitute wellbeing, they are sources
■ People have authority over their wellbeing. ■ Not sensitive to differences between people.
■ Moral statements are mere expressions of personal opinion or taste. ■ They do not convey matters of fact. ■ They do not have truth values: they cannot be true or false. In meta-ethics, the position that moral statements do not have truth values is more commonly known as: non-cognitivism. Early (and simple) version: emotivism
■ Moral statements are expressions of emotions: approval & disapproval – “This is morally good” = hooray! – “This is morally bad” = boo! ■ These statements do not have truth values. ■ Moral disagreement is a conflict of attitudes. ■ Explains why some disagreements run deep and are hard to reconcile. ■ Differences in moral judgements are explained by a variety of attitudes. ■ Morality motivates: a difficulty for cognitivists, not for emotivists
Moral reasoning does not look like a combination of expressions of emotions, e.g. – Murder is morally wrong – If murder is morally wrong, then euthanasia is morally wrong – Therefore, euthanasia is morally wrong Because, how to construct this in an emotivist way? – Boo! [murder] – Hooray! [boo! (murder) & boo! (euthanasia)] – Boo! [euthanasia] The “conclusion” does not necessarily follow. Does not reflect the logical structure. (Frege-Geach problem)
How to distinguish moral statements from other expressive statements, e.g. aesthetic ones? In a non-circular way? Modern non-cognitivism
■ More sophisticated – Norm expressivism (Alan Gibbard) – Quasi realism (Simon Blackburn)
■ Cultural relativism: different cultures vary in their systems of moral norms ■ Does it follow that there is no culture-independent universal morality? ■ No, not necessarily: – Perhaps there is, and somehow no culture has discovered this system of universal norms – Or the varying cultures and their systems of norms are somehow rooted in a more fundamental system of universal norms
■ Variant of cognitivism. ■ Moral statements have truth values, they are true or false. ■ They are true or false relative to a specific culture.
■ Certain values and norms are common to all cultures. ■ No objective standpoint to criticize the morality of a specific culture. Or to decide a moral discussion between members from different cultures. ■ The idea of moral progress still possible?
■ “each culture should have its own morality” ■ “one should be tolerant of different cultures”
A moral relativist can also say that one should not be tolerant.
Lecture 2
■ Individual decision theory: studies decision making when actors are confronted with various ‘states of nature’. (Sometimes ‘decision theory’ in a narrower sense.) ■ Game theory: studies decision making when actors interact with each other. ■ Social choice theory: studies how to derive a collective decision from individual preferences. (Not addressed in this course.)
Mental states, two basic categories: ■ Beliefs: mind-to-world direction of fit → Mental content must mirror the world ■ Desires: world-to-mind direction of fit → World must mirror the mental content
Belief [glass of beer]: representation of a glass of beer in the world. Mind-to-world direction of fit. Desire [glass of beer]: bring about a change in the world (e.g. I ask the bartender for a glass of beer) so that the world comes to match this mental state. World-to-mind direction of fit. Elizabeth Anscombe: a desire is like a shopping list, a belief is like an inventory list.
desires beliefs + rationality Formalised in decision theory: preferences over outcomes assign probabilities to outcomes + these satisfy consistency requirements (axioms of the theory)
Decision making can be studied:
– Descriptively/empirically: how people actually make choices. In the lab or in the field.
– Normatively: how people ought to make decisions.
■ Conception of rationality: means-ends rationality Not about the ends or goals that a person sets himself (substantive rationality) → external to the analysis But, given these goals, what would be the rational thing to do?
1. Acts 2. States 3. Outcomes
Can be done in a matrix (or table), tree or vector.
You contemplate studying medicine or going to a dance academy. You reason that going to a dance academy may result in an exciting life, but only when the economy is not in a recession, because in a recession budgets for culture will be cut and you will end up poor. Becoming a doctor in a growing economy gets you a good life, and under a recession it will still be reasonably good.
                Recession        No recession
Dance academy   poor             exciting
Medicine        reasonably good  good
Various rules: ■ Dominance ■ Maximin (we will only look at this one) – leximin ■ Maximax ■ Minimax regret ■ Insufficient reason ■ Optimism-pessimism
States S1–S4:
A1: 1 5 6
A2: 2 2 3 3
A3: 4 6 5
Minima per act: A1: -3, A2: 2, A3: -10. 2 is the highest → select A2.
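The maximin selection above can be sketched in a few lines of Python. Only the three minima (-3, 2, -10) come from the slide; the remaining per-state payoffs are made-up values chosen so that those minima hold:

```python
# Maximin: for each act, find the worst-case payoff across the states,
# then pick the act whose worst case is best.
payoffs = {
    "A1": [-3, 5, 6],    # illustrative values; only the minima match the slide
    "A2": [2, 2, 3],
    "A3": [-10, 4, 6],
}

def maximin(acts):
    worst = {act: min(vals) for act, vals in acts.items()}  # worst case per act
    return max(worst, key=worst.get)                        # best worst case

print(maximin(payoffs))  # A2
```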
Knowledge about the probabilities → standard rule: maximize expected utility = max Σ (probability × utility). Can also be done with e.g. money (or time or ...), if utility is a linear function of this factor. (But for many people money has decreasing marginal utility.)
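A minimal sketch of the expected-utility rule, applied to the dance-academy example; the probabilities and utility numbers here are assumptions, not given on the slides:

```python
# Expected utility of an act = sum over states of P(state) * U(outcome).
def expected_utility(probs, utils):
    return sum(p * u for p, u in zip(probs, utils))

probs = [0.3, 0.7]            # P(recession), P(no recession) -- assumed
acts = {
    "dance academy": [1, 9],  # utility of 'poor' vs 'exciting' -- assumed
    "medicine": [6, 8],       # 'reasonably good' vs 'good' -- assumed
}
eu = {act: expected_utility(probs, u) for act, u in acts.items()}
best = max(eu, key=eu.get)
print(eu, best)  # with these numbers, medicine has the higher EU
```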
Ordinal utility function & interval utility function
Preferences must satisfy 3 axioms
2 extra axioms
Construct a scale by taking two extremes – say, a top item and a lousy item – and compare the choice alternatives with lotteries over these extremes.
Example: firstly rank the alternatives: Porsche Volkswagen Skoda Now choose a top item & a lousy item to construct the scale, e.g. Ferrari & Honda
Ask the actor what lottery over the Ferrari (F) and the Honda (H) would leave him/her indifferent to a Porsche / Volkswagen / Skoda for certain. A says: Porsche ~ 0.8 F, 0.2 H Volkswagen ~ 0.5 F, 0.5 H Skoda ~ 0.2 F, 0.8 H
Porsche ~ 0.8 F, 0.2 H
Volkswagen ~ 0.5 F, 0.5 H
Skoda ~ 0.2 F, 0.8 H
Assume U(Ferrari) = 100, U(Honda) = 0, then
U(Porsche) = 0.8 × 100 + 0.2 × 0 = 80
U(Volkswagen) = 0.5 × 100 + 0.5 × 0 = 50
U(Skoda) = 0.2 × 100 + 0.8 × 0 = 20
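The VNM construction above reduces to one formula; the indifference probabilities 0.8 / 0.5 / 0.2 are the ones given on the slide:

```python
# Von Neumann-Morgenstern scale: fix U(top) = 100 and U(lousy) = 0, then
# score each alternative by its indifference probability p over the extremes:
# U(x) = p * U(top) + (1 - p) * U(lousy).
U_TOP, U_LOUSY = 100, 0
indifference = {"Porsche": 0.8, "Volkswagen": 0.5, "Skoda": 0.2}

utilities = {car: p * U_TOP + (1 - p) * U_LOUSY
             for car, p in indifference.items()}
print(utilities)  # Porsche 80, Volkswagen 50, Skoda 20
```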
So when preferences over a set of alternatives of an actor satisfy: Asymmetry Completeness Transitivity Independence Continuity Then one can derive a cardinal (interval) VNM utility function: then one can assign interval numbers to the alternatives.
Policy makers in health care need a measure for the quality of health states from the perspective of patients.
Quality Adjusted Life Year (QALY) = life expectancy × quality of the remaining years
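A toy illustration of the QALY formula; the treatments and all numbers are hypothetical:

```python
# QALY = life expectancy (years) x quality weight of those years (0..1).
def qalys(years, quality):
    return years * quality

# Hypothetical comparison of two treatments (made-up numbers):
treatment_a = qalys(10, 0.6)  # 10 years at quality 0.6 -> 6.0 QALYs
treatment_b = qalys(6, 0.9)   # 6 years at quality 0.9  -> 5.4 QALYs
print(treatment_a, treatment_b)
```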
Other method to measure this quality:
A scale from Death to 100% Healthy, on which patients place Illness P and Illness Q. Validity relatively weak (of this scale).
Analyses the interaction structure between individuals, and solution concepts. Instead of states of nature: other individuals.
            Cooperate   Defect
Cooperate   2, 2        0, 3
Defect      3, 0        1, 1
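A quick check that, in the matrix above, Defect strictly dominates Cooperate for the row player: whatever the other player does, defecting pays more.

```python
# One-shot Prisoner's Dilemma payoffs to the row player, from the slide.
payoff = {
    ("C", "C"): 2, ("C", "D"): 0,
    ("D", "C"): 3, ("D", "D"): 1,
}

# Defect dominates: against each opponent move, D beats C.
for other in ("C", "D"):
    assert payoff[("D", other)] > payoff[("C", other)]
print("Defect strictly dominates Cooperate")
```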
Game tree
Repeating the game alters its strategic nature. A one-shot PD leads to mutual defection and a collectively suboptimal equilibrium. A repeated PD offers cooperative possibilities (when indefinitely repeated).
Always cooperate? → susceptible to exploitation by a defecting actor. Always defect? Equilibrium strategy, but does not reap the cooperative benefits.
Axelrod (1984): Tit for Tat most successful strategy in computer tournament.
Multiple strategies possible: – Defect – Tit for Tat – 50% cooperate, 50% defect – Win-Stay, Lose-Shift Which strategies succeed is often tested by evolutionary simulations.
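A sketch of such a tournament (cf. Axelrod), with the slide's PD payoffs; the 100-round match length is an arbitrary choice:

```python
# Each strategy maps the opponent's previous move (None on round 1) to a move.
PAYOFF = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
          ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opp_last):      # cooperate first, then copy the opponent
    return opp_last if opp_last else "C"

def always_defect(opp_last):
    return "D"

def always_cooperate(opp_last):
    return "C"

def play(s1, s2, rounds=100):
    last1 = last2 = None
    score1 = score2 = 0
    for _ in range(rounds):
        m1, m2 = s1(last2), s2(last1)
        p1, p2 = PAYOFF[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
        last1, last2 = m1, m2
    return score1, score2

print(play(tit_for_tat, tit_for_tat))    # (200, 200): mutual cooperation
print(play(tit_for_tat, always_defect))  # TfT loses only the first round
print(play(always_cooperate, always_defect))  # unconditional C is exploited
```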
Etc.
Two hunters: hunt stag or hunt hare
        Stag    Hare
Stag    3, 3    0, 2
Hare    2, 0    1, 1
What to do?
1. Maximize payoff 2. Avoid risk Cooperation requires trust – Stag Hunt game a.k.a. Assurance Game. In evolutionary simulations with random pairing: hare hunters take over the population, stag hunters go extinct.
A combination of strategies is a Nash Equilibrium (NE) if neither party has a reason to unilaterally change its strategy. Stag Hunt: [Stag, Stag] & [Hare, Hare] are both Nash Equilibria.
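The NE definition can be checked by brute force over the Stag Hunt matrix: a pair is an equilibrium exactly when neither player gains by deviating unilaterally.

```python
from itertools import product

# (row move, column move) -> (row payoff, column payoff), from the slide.
STAG_HUNT = {("Stag", "Stag"): (3, 3), ("Stag", "Hare"): (0, 2),
             ("Hare", "Stag"): (2, 0), ("Hare", "Hare"): (1, 1)}
STRATS = ("Stag", "Hare")

def nash_equilibria(game):
    eqs = []
    for r, c in product(STRATS, STRATS):
        ur, uc = game[(r, c)]
        # no profitable unilateral deviation for either player
        row_ok = all(game[(r2, c)][0] <= ur for r2 in STRATS)
        col_ok = all(game[(r, c2)][1] <= uc for c2 in STRATS)
        if row_ok and col_ok:
            eqs.append((r, c))
    return eqs

print(nash_equilibria(STAG_HUNT))  # [('Stag', 'Stag'), ('Hare', 'Hare')]
```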
Game 1:            Game 2:            Game 3:
     C1    C2           C1    C2           C1    C2
R1   2,2   1,3      R1   2,1   0,0     R1   2,1   1,0
R2   3,1   0,0      R2   0,0   1,2     R2   3,0   0,1
Payoff = number of offspring (reproduction). Individuals do not make choices but follow fixed strategies. After each round there is reproduction and new generations (older generations die). Evolutionary stable strategy (ESS): a population whose members follow this strategy cannot be invaded by individuals following another strategy.
An ESS is always also a Nash Equilibrium. However, not every Nash Equilibrium is an ESS.
Hi-Lo game:
1,1   0,0
0,0   2,2
→ a way to reduce the number of Nash equilibria (and get a unique solution). Evolutionary game theory can also be used for players who are boundedly rational and act on the basis of conditioning (stimulus-response): trial & error learning, gradually moving towards equilibrium.
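A replicator-dynamics sketch of such an evolutionary simulation, applied to the Stag Hunt with random pairing: strategies reproduce in proportion to their average payoff. The initial 40% share of stag hunters is an assumption; with these payoffs, any starting share below 50% drives stag hunting extinct, as the slides claim.

```python
# Discrete replicator update for the Stag Hunt (payoffs 3,3 / 0,2 / 2,0 / 1,1).
def step(p_stag):
    # average payoff against a randomly drawn opponent
    f_stag = 3 * p_stag + 0 * (1 - p_stag)
    f_hare = 2 * p_stag + 1 * (1 - p_stag)
    mean = p_stag * f_stag + (1 - p_stag) * f_hare
    return p_stag * f_stag / mean  # share grows with relative fitness

p = 0.4                  # initial share of stag hunters -- assumed
for _ in range(200):
    p = step(p)
print(round(p, 4))       # 0.0: hare hunters take over the population
```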
Lecture 3
“data-driven innovation has become a key pillar of 21st-century growth, with the potential to significantly enhance productivity, resource efficiency, economic competitiveness, and social well-being.”
Source: The Organisation for Economic Co-operation and Development (OECD) report “Data-driven innovation”
Person         →  Action      →  Consequences
   ↑                 ↑                ↑
Virtue ethics    Deontology     Consequentialism
Utilitarianism
Interdependency actors (as in game theory) → Social contract theory
Consequentialism: moral worth is in the consequences of an action.
■ Good consequences in terms of value (wellbeing/happiness/utility, knowledge, beauty, etc.)
■ The right action = the one with the best consequences among the possible actions.
■ Aggregate by maximizing (another possibility would be e.g. egalitarian)
■ Subset of consequentialism. ■ Monistic: only utility (= wellbeing) counts ■ Maximizes / promotes utility ■ What is utility or wellbeing? → lecture 1 – Hedonism – Preference satisfaction (as in decision theory, lecture 2) – Objective list
Jeremy Bentham (1748-1832), John Stuart Mill (1806-1873), Henry Sidgwick (1828-1900), Derek Parfit (1942 – 2017), Peter Singer (1946-) Bentham: ..“this fundamental axiom, it is the greatest happiness of the greatest number that is the measure of right and wrong.” Mill: “Utility, or the Greatest Happiness Principle, holds that actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness.”
■ Modern version: maximize expected utility.
■ The utility-maximizing action = the action one is obligated to perform.
– Anyone who can be more or less happy (can suffer) belongs to the moral community. – Argumentative basis for: women’s right to vote, the abolition of slavery, animal welfare/rights
not absolute.
if it increases total utility.
– E.g., punish somebody because that increases future utility in society, not because he deserves punishment.
– Variant that drops this: satisficing utilitarianism.
Some facts from:
https://worldhappiness.report/ed/2019/big-data-and-well-being/
■ Number of likes on Facebook correlates with individual Life Satisfaction (i.L.S.) (but not strongly) ■ Sentiment analysis (positive and negative emotion terms) on Twitter correlates with i.L.S. (but not strongly) ■ Drug prescriptions from administrative datasets of a population correlate with i.L.S. (more strongly)
■ Google Trends data on the frequency of positive terms to do with work, health, and family correlate with i.L.S. (more strongly) ■ Sentiment analysis of Twitter in Mexico correlates with events (more strongly) ■ Aggregate sentiment data correlate with between-country / between-group variation (more strongly)
■ Reduces the reliance on expensive large surveys. ■ Governments and companies can target the low mood / life satisfaction areas with specific policies.
■ Which concept of well-being? For which use? ■ How to interpret low correlation mood/sentiment measures with life satisfaction? ■ Target the low mood / life satisfaction areas with specific policies seems to presuppose utilitarian calculus: justified? ■ Most data are retrieved without consent.
■ The ability to measure some proxies well may (unintentionally) move other important things to the background. ■ How to deal with those other important things, e.g. freedom? Possible answers: – No need! Everything is already incorporated in the well-being measure. – Can be measured, e.g. in terms of opportunity sets, but cannot be compared with well-being (e.g. must have a threshold value, must be prioritized) – Can be measured and compared (to a sufficient extent): a utility function can be constructed.
1. Is utility all that matters? Aren’t there other intrinsic values?
2. Rules like ‘thou shalt not steal’ are inflexible. They concern fundamental rights that cannot be traded against considerations of utility/wellbeing. E.g. it is wrong to sacrifice innocent people in order to max [utility]. No exploitation of minorities. 3. Heavy information processing: for each situation calculate expected utility. 4. Integrity (and separateness) of persons: individuals are more than carriers of utility.
5. Backward-looking reasons are important. E.g. one deserves punishment for what one has done.
6. Special relations are important: family and friends have a higher priority than strangers.
Bite the bullet: e.g. Peter Singer: most criticism is an irrational product of our evolutionary and cultural past. Modifications: e.g. indirect / rule utilitarianism
total utility [everybody calculates in utilitarian fashion] < total utility [everybody follows rules]
■ Problem for indirect/rule utilitarianism: rule fetishism: must a rule always be followed, no matter the circumstances? Even when it is
– Response: the rules are rules of thumb, plans for the future. utilitarian calculus → design system of global rules to max [U] Follow these rules as long as there is no reason to reconsider (and to recalculate and redesign).
■ Other problem: what to do in an actual situation is derived from a hypothetical situation. ■ Yet another problem: does it provide the appropriate moral justification? Example: I save my own child instead of two unknown children. Why? Well, because this rule is an element of a system that max [U]… … Isn’t that one thought too many? (Bernard Williams)
https://www.ted.com/talks/peter_singer_the_why_and_how_of_effec tive_altruism?language=nl
Lecture 4
Are you going to throw the switch?
Are you going to push the fat man?
Founding father: Immanuel Kant (1724-1804) Kant: moral worth is not to be found in the consequences of an action. E.g. lying or stealing or killing is not bad because of the bad consequences that these actions may happen to have but because they are bad actions, period. How to understand this?
X helps Y to cross the street. Is moral worth to be found in the consequences? Suppose X does it because he:
In such cases, the consequences are the same but the action is not good: the person does not act out of duty but merely in accordance with duty. What makes an action good then, if not the consequences? “I helped her cross the street because that is the right thing to do.” “But this is circular!” Patience… Moral worth shows itself most clearly when other motives are (somehow) absent, e.g. when someone’s mood is clouded – and one still does the right thing.
Doing the right thing looks pretty formal now. Kant: that is exactly right! Principle underlying the right intention = lawlike, like a natural law. Only this law is a law that humans impose on themselves.
Kant: the rational nature of human creatures. Animals are driven by inclinations and impulses → subject to natural laws But humans can also impose laws on themselves, and follow them. (This gives us freedom)
Newton: everything in the universe is subject to natural laws. Kant: morality has universal scope and necessity → just like Newton’s laws. Only: humans impose the laws on themselves.
Universal law formulation Act only according to that maxim by which you can at the same time will that it should become a universal law Categorical: not contingent on one’s own desires (such imperatives Kant calls ‘hypothetical’) and not on the circumstances. Kant’s idea: moral reasons are universally binding, irrespective of time, place, person.
Can this be action-guiding for you & can you at the same time will that everybody acts like this? If not, the maxim is self-defeating.
Humanity formulation Act so that you treat humanity, whether in your own person or in that of another, always as an end and never merely as a means
Humans are fundamentally unlike commodities: they do not have a price but a dignity
This formulation also prohibits lying and breaking promises / contracts, because then you treat another person only as an instrument for your own purposes
Kant argued that the various versions of the Categorical Imperative are equivalent. (But this is not very clear) One argument: a rational creature (= a creature that imposes laws upon himself, who self-governs) must respect this distinguishing feature of himself, must respect his rational nature.
Stanford: par. 9. The Unity of the Formulas
1. Absoluteness of moral rules. Aren’t some lies sometimes permitted, e.g. to divert a murderer? 2. Wellbeing/happiness does not have moral status; it may be pursued as long as it is not disallowed by the categorical imperative. 3. Creatures who are less than rational (children, the cognitively impaired, animals) are not part of the moral community. 4. What if rules conflict?
Utilitarianism: utility / wellbeing → maximize, promote
Deontology: autonomy → side constraint → individual rights
Kantian ethics: philosophical foundation of universal human rights
■ Predict criminal behavior; prevent terrorist attacks; hire the best workers; diagnose illnesses; legal analysis ■ Amplify biases against Blacks, Muslims, women; enhance discrimination ■ Values to be protected or promoted: human lives, well-being, privacy ■ How to weigh these? – Utilitarian: tradeoffs – Kantian: constraints
A self-driving car causes an accident: who is then responsible? The car owner, the manufacturer, the designer?
An accident is unavoidable and the only options are: steer to the left and kill a pedestrian, or stay on course, thereby killing the two passengers?
Utilitarian and deontological engineers will come up with different design specifications
■ With terrorists in it? → lock the doors, slow the car down, stop it / drive it to the police station ■ For the benefit of us all ■ Violation of autonomy ■ What if this capacity falls into the wrong hands?
By means of communication between the cars ■ Reduces emissions ■ Reduces traffic jams ■ Loss of autonomy for the “driver”
Morality is social → the moral reasons of people are interdependent. Morality = a system of mutual expectations and preferences by which people can solve cooperation problems, as in n-person Prisoner’s Dilemma problems; most notably: the provision of collective goods in a society.
■ Goods that can only be produced by cooperation ■ Once produced, everyone can benefit ■ Vulnerable to free riding ■ Examples: infrastructure, national defense, health care, public schools, dykes, clean air ■ Martin Tisne in MIT Technology Review (Dec. 2018) argues that data should be conceptualized as a collective good, and that data ownership is a wrong idea https://www.technologyreview.com/s/612588/its-time-for-a-bill-of-data-rights/
In a PD – in e.g. material outcomes: it is individually rational to defect → D,D equilibrium, while C,C is Pareto superior. All players have an interest in getting to C,C instead of D,D.
Problem: C,C is not stable, not an equilibrium. At the same time, the players have a collective reason for C,C. Derive from this: an individual reason to do C = a moral reason.
■ The individual moral reason to do C is dependent on the expectation that others also C. ■ If you have a good reason to expect that the others will D, then the moral obligation dissolves. ■ C,D: there is no moral obligation to let yourself be exploited by others.
■ 1588-1679, founding father of social contract theory. ■ Wrote Leviathan during the English Civil War, the conflict between Royalists and Parliamentarians. ■ Central question: people must reach a mutual agreement to stop / avoid the war of all against all. What set of rules, and how to ensure that they are followed? ■ War of all against all = state of nature = D,D
■ People must reach a mutual agreement to avoid suboptimal outcomes. What set of rules, and how to ensure that they are followed? ■ Multiple solutions possible: multiple Nash equilibria ■ Law is a system of rules that reduces the number of Nash equilibria ■ Martin Tisne: unrestricted use of data is in the aggregate bad for everybody – Restricted use by everybody = the cooperative outcome ■ Necessary: a bill of data rights
■ The right of the people to be secure against unreasonable surveillance shall not be violated. ■ No person shall have his or her behavior surreptitiously manipulated. ■ No person shall be unfairly discriminated against on the basis of data.
PD situations often have a repeated nature. Indefinitely repeated PD: a cooperative equilibrium is possible without morality. E.g. people can play Tit for Tat = an equilibrium strategy. Still:
→ Moral motivation to C enhances cooperation.
■ The social contract is not real. – Response: understand the social contract as tacit or hypothetical. ■ People / creatures who do not participate in the cooperative project can also have moral status: future generations, the cognitively impaired, animals, children. – Response: other part of morality.
■ Derive moral rules from rational self-interest. ■ Since cooperation problems are often (in part) negotiable, and negotiations in real life are sensitive to starting positions: – the resources or capital that parties bring to the bargaining table – wealth, goods, talents, skill sets, networks etc. ■ Moral rules must therefore be sensitive to the various starting positions. ■ Example, how to distribute the cooperative surplus in a repeated PD or a Stag Hunt: according to the Nash Bargaining Solution.
■ Morality is a combination of utilitarian or Kantian motives & interdependency ■ John Rawls: modern Kantian social contract theorist. – Talents and skills are a product of genes and upbringing → morally arbitrary: they should not play a role in designing a just society. – Therefore hypothesize the contract situation as if everybody reasons behind a veil of ignorance: you have no information about your resources.
■ Decision making under ignorance: Rawls: maximin → 2a ■ Rawls’ two principles of justice 1. Principle of Equal Liberty: each person has an equal right to the most extensive liberties compatible with similar liberties for all. 2. Difference Principle: social and economic inequalities should be arranged so that they are both (a) to the greatest benefit of the least advantaged persons, and (b) attached to offices and positions open to all under conditions of equality
Study in Science (2015): The Social Dilemma of Autonomous Vehicles ■ Most people express a moral preference for utilitarian AVs that minimize casualties. ■ But they also prefer a self-protective vehicle for themselves. ■ Prisoner’s Dilemma: a reason for government regulation. ■ Most people do not want the government to regulate this. ■ Look at existing jurisprudence. Killing innocent pedestrians with cars: not excused. Car owners have special duties. Cf. the lifeboat case, United States v. Holmes (1842).
COST BENEFIT ANALYSIS, MANY HANDS PROBLEM Lecture 5
■ Governments must compare policy alternatives, e.g. build a bridge or dig a tunnel, build a new airport or keep the status quo, impose a tobacco ban or not ■ Compare in terms of future advantages and disadvantages ■ Use a metric to compare – Utility? Most often: money ■ Monetize all (or most of) the benefits and costs ■ Recalculate to the same year (discounting, with an interest rate)
■ Casualties, wounded ■ CO2 emissions ■ Noise ■ Nature, environment
Pros: ■ Rationalizes public decision making ■ Less subjective ■ Forces one to include all relevant considerations ■ Provides common ground
Cons: ■ Never complete ■ Some estimates are very uncertain
2011, Dutch Ministry of Infrastructure and Water Management. Benefits? Costs?
Benefits
■ Travel time saved: average salary × time
Costs
■ Fuel costs: price × liters ■ Emissions: cost of planting trees ■ Nature: WTP / WTA / intrinsic value? ■ Deaths: 2 million/person, via WTP ■ Wounded: hospital costs ■ Noise: WTP / WTA / cost of building a noise barrier
■ Utilitarian: add and subtract or calculate B/C ratio ■ Non-utilitarian: constraints, thresholds
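The utilitarian add-and-subtract step, with discounting to the same year, can be sketched as follows; all amounts and the 3% discount rate are made-up numbers:

```python
# Cost-benefit sketch: discount future cash flows to year 0, then compare
# the net present value (NPV) or the benefit/cost ratio.
def present_value(amount, year, rate=0.03):
    return amount / (1 + rate) ** year

benefits = [(1, 100.0), (2, 100.0), (3, 100.0)]  # (year, amount) -- assumed
costs    = [(0, 250.0)]                          # upfront investment -- assumed

pv_b = sum(present_value(a, y) for y, a in benefits)
pv_c = sum(present_value(a, y) for y, a in costs)
print(round(pv_b - pv_c, 2), round(pv_b / pv_c, 3))  # NPV and B/C ratio
```

With these numbers the project passes the test: the discounted benefits exceed the upfront cost, so the NPV is positive and the B/C ratio is above 1.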
■ Nature: impact on species / habitats ■ Emissions: European norms ■ Actual investments to build noise barriers ■ Actual investments to mitigate casualties
■ Something bad happens due to collective human conduct ■ But difficult or impossible to pinpoint individual responsibility Problem: collective responsibility but no individual responsibility
A person is morally responsible when something goes wrong if: 1. He did something wrong – Wrongdoing 2. He did not act under coercion and could have acted differently – Freedom 3. He caused the bad state of affairs – Causality 4. He could have known that his action would cause the bad state of affairs – Knowledge
Can also sometimes be assigned to collectives, like: ■ Organizations (firms, NGOs, governments) ■ Groups (people playing soccer in the park) ■ Occasional collections (bystanders who can prevent something together) These have / ought to have a collective aim.
■ Retribution ■ Correction ■ Prevention
Collective: Wrongdoing – Freedom – Causality – Knowledge
Individual: Wrongdoing – Freedom – Causality
■ BP oil spill, Gulf of Mexico ■ Herald of Free Enterprise ■ Citicorp building ■ Climate change
https://www.youtube.com/watch?v=txmb-Tzxyd8 BP, Halliburton and Transocean blamed each other. The National Commission, installed by Obama: “clear mistakes” [but] “though it is tempting to single out one crucial misstep or point the finger at one bad actor (..) any such explanation provides a dangerously incomplete picture”. BP: “a complex and interlinked series of mechanical failures, human judgments, engineering design, operational implementation and team interfaces came together to allow (..) the accident.”
■ Hierarchical model: top management is responsible ■ Collective model: each member is responsible for the whole ■ Individual model: each member is responsible in relation to his/her contribution (see section 9.4 from the chapter on this)
Lecture 6
Robert Merton (1910-2003): American sociologist The normative structure of science (1942): describes the ethos of science The CUDOS norms: 1. Communism 2. Universalism 3. Disinterestedness 4. Organized Skepticism
■ Science = collective property, not individual property – Boolean operators are not Boole’s, Gödel’s incompleteness theorem is not Gödel’s, the Nash equilibrium is not Nash’s. – Scientific findings are broadly shared and made accessible. – No patents.
■ Acceptance / refutation of claims occurs on impersonal grounds – Personal or social circumstances are irrelevant – No ethnocentrism – No selection on the basis of gender – Careers are open to talent
■ No other goals than the interest of science itself – On an institutional level – “Exacting scrutiny” by other scientists – “Virtual absence of fraud”
■ No dogmatism, nothing is sacred – Perhaps dogmatism on an individual level, but not as a community – Temporary suspension of judgment and detached scrutiny of beliefs
■ John Ziman (1925-2005), theoretical physicist ■ Real Science: What It Is and What It Means. Cambridge University Press, 2000 ■ CUDOS does not cover industrial and government research labs
■ Can also be interpreted normatively: perhaps it is not descriptive of current practice, but this is how it OUGHT to be. Ethical code of conduct: VSNU
https://www.vu.nl/en/about-vu-amsterdam/academic-integrity/ index.aspx
Core values
■ Honesty ■ Scrupulousness ■ Transparency ■ Independence ■ Responsibility
“I know it was your idea, but it was my idea to use your idea.”
A teacher has written a study book intended for first-year students. He has not used source references, offering instead a list of further reading recommendations per chapter. In writing the book, he nevertheless made extensive use of the work of colleagues from all over the world. Should he have made detailed mention of this?
Source: The Netherlands Code of Conduct for Scientific Practice: Principles of Good Scientific Teaching and Research (2004, revised 2012), Association of Universities in the Netherlands (VSNU)
Robert A. Millikan 1868 - 1953
American experimental physicist who won the Nobel Prize in Physics in 1923 for
■ his measurement of the elementary electronic charge and ■ his work on the photoelectric effect.
Felix Ehrenhaft 1879- 1952
Austrian physicist who contributed to the study of colloids and of electrical charges.
■ Millikan had selected among his drops ■ Out of 189 observations, only 140 were presented in the paper ■ Still he wrote in his paper: “It is to be remarked that this is not a selected group of drops but represents all of the drops experimented upon during 60 consecutive days”
■ The results were contested by Ehrenhaft, who claimed to have found subelectrons ■ Looking back, we know Millikan was right and Ehrenhaft wrong. Does that matter?
■ Correlation between meat eating & antisocial behavior ■ High school students who had looked at pictures of a steak scored much higher on competitiveness and antisocial behavior in a subsequent task than students who had looked at pictures of trees and clouds. ■ Earlier research: people associate meat with traits like toughness and self-confidence. ■ Earlier opinion piece by Vonk: meat eaters are brutes
Other example: ■ “Coping with chaos” in Science by Stapel and Lindenberg (2011) ■ A messy environment induces stereotypical thinking and discrimination ■ Earlier research by Lindenberg, Keizer and Steg (2008): “The spreading of disorder” – a messy environment induces norm-breaking behavior
■ Number of fraudulent publications: 69 ■ Many co-authors (ca. 30), from senior faculty to PhD students ■ Discovered by 3 junior researchers in 2011
“[Does] the financial crisis influence charity?” → a month later: confirmation with a perfect data set, but with far too low consistency among the answers
An experiment fails to yield the expected statistically significant results. The experiment is repeated, often with minor changes in the manipulation or the sample, until a version is found that did yield the expected results. It is unclear why, in theory, the changes made should yield the expected results. The article makes no mention of this exploratory method; the impression created is of a one-off experiment performed to check the a priori expectations. It should be clear, certainly with the usually modest numbers of experimental subjects, that using experiments in this way can easily lead to an accumulation of chance findings.
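The “accumulation of chance” mechanism can be illustrated with a small simulation (not from the lecture or the code of conduct; all names and numbers below are illustrative). Both groups are drawn from the same distribution, so any “significant” difference is a false positive; rerunning the experiment until one version “works” inflates the false-positive rate well beyond the nominal 5%:

```python
import random
import statistics

def t_statistic(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

def one_experiment(n=20, threshold=2.0):
    """One null experiment: both groups come from the same distribution.
    Counted as 'significant' if |t| exceeds ~2 (roughly p < .05)."""
    treat = [random.gauss(0, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    return abs(t_statistic(treat, control)) > threshold

def repeat_until_signif(max_tries):
    """Rerun the experiment up to max_tries times with 'minor changes';
    report success if ANY run happens to come out 'significant'."""
    return any(one_experiment() for _ in range(max_tries))

random.seed(0)
runs = 2000
single = sum(one_experiment() for _ in range(runs)) / runs
persistent = sum(repeat_until_signif(5) for _ in range(runs)) / runs
print(f"false-positive rate, one try:    {single:.2f}")   # around 0.05
print(f"false-positive rate, five tries: {persistent:.2f}")  # far higher, roughly 1 - 0.95**5
```

With five tries per “finding”, the chance of at least one spurious success is about 1 − 0.95⁵ ≈ 0.23, which is why unreported repetition so easily produces chance results.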
Re-examination of some studies showed the use of several questionnaire versions, and that the researchers no longer knew which version was used in the article.
A variant of the above method: a given experiment does not yield statistically significant differences between the experimental and control groups. The experimental group is then compared with a control group from a different experiment – reasoning that “they are all equivalent random groups after all” – and thus the desired significant differences are found. This fact likewise goes unmentioned in the article.
The removal of experimental conditions. For example, the experimental manipulation in an experiment has three values. Each of these conditions (e.g. three different colours of the otherwise identical stimulus material) is intended to yield a certain specific difference in the dependent variable relative to the other two. Two of the three conditions perform in accordance with the research hypotheses, but the third does not. With no mention in the article of the omission, the third condition is left out, both in theoretical terms and in the
verification procedure in which the experimental conditions are expected to have certain effects on different dependent variables. The only effects on these dependent variables that are reported are those that support the hypotheses, usually with no mention of the insignificant effects on the other dependent variables and no further explanation.
■ How to establish whether somebody has co-authored a paper? ■ How to establish the order of authors: first author, second author, … last author?
Example: Guidelines ICMJE
1. Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; AND 2. Drafting the work or revising it critically for important intellectual content; AND 3. Final approval of the version to be published; AND 4. Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
All those designated as authors should meet all four criteria for authorship, and all who meet the four criteria should be identified as authors. Those who do not meet all four criteria should be acknowledged. Examples of activities that alone (without other contributions) do not qualify a contributor for authorship are: acquisition of funding; general supervision of a research group or general administrative support; and writing assistance, technical editing, language editing, and proofreading.
■ Joint FAO/WHO panel: glyphosate “probably not carcinogenic” ■ Companies that produce weedkillers with glyphosate (e.g. Monsanto) were not hampered ■ Professor Alan Boobis got a $1,000,000 donation from Monsanto