

SLIDE 1

Outline

Personalized Medicine and Artificial Intelligence

Michael R. Kosorok, Ph.D.

Department of Biostatistics University of North Carolina at Chapel Hill

Summer, 2012

SLIDE 2

Outline

1. Overview of Personalized Medicine: Introduction; Current Approaches
2. Progress on Single-Decision Regime Discovery: Methodology; Theoretical Results; Simulation Studies and Data Analysis; Comments
3. Progress on Multi-Decision (Dynamic) Regime Discovery: Framework; Example; New Developments
4. Overall Conclusions and Open Questions

SLIDE 3

Introduction Current Approaches

Part I Overview of Personalized Medicine

SLIDE 4

Personalized Medicine

What is Personalized Medicine? Customized healthcare decisions and practices for the individual patient.

Why Do We Need Personalized Medicine? Multiple active treatments are available, and responses are heterogeneous:
1. Across patients: what works for one may not work for another.
2. Within a patient: what works now may not work later.

SLIDE 5

Personalized Medicine

Goal: “Providing meaningful improved health outcomes for patients by delivering the right drug at the right dose at the right time.”

How Do We Apply Personalized Medicine? Learn individualized treatment rules: tailor treatments based on patient characteristics.

When Do We Apply Personalized Medicine? Single-decision setup; multi-decision setup.

SLIDE 6

Nonpsychotic Chronic Major Depressive Disorder (Single-Decision)

The goal of the Nefazodone-CBASP clinical trial (Keller et al., 2000) was to determine the best treatment choice among:
- Pharmacotherapy (nefazodone).
- Psychotherapy (the cognitive behavioral-analysis system of psychotherapy, CBASP).
- The combination of both.

681 patients, with 50 prognostic variables measured on each patient.

Further Goal: Can we reduce depression by creating individualized treatment rules based on prognostic data?

SLIDE 7

Late Stage Non-Small Cell Lung Cancer (Multi-Decision)

In treating advanced non-small cell lung cancer, patients typically experience two or more lines of treatment.

[Diagram: possible treatment options at the 1st and 2nd lines.]

Problem of Interest Can we improve survival by personalizing the treatment at each decision point (at the beginning of a treatment line) based on prognostic data?

SLIDE 8

The Basic Process

Current approaches to developing personalized medicine typically include five key elements:
1. obtaining patient genetic/genomic data using array and other high-throughput technology;
2. identifying one or more biomarkers;
3. developing new or selecting available therapies;
4. measuring the relationship between biomarkers and clinical outcomes, including prognosis and response to therapy; and
5. verifying the relationship in a prospective randomized clinical trial.

SLIDE 9

Review of Personalized Medicine (2006-2010)

We now summarize studies on personalized medicine published in six high-impact journals — Journal of the American Medical Association, Journal of the National Cancer Institute, Lancet, Nature, Nature Medicine, and the New England Journal of Medicine — from 2006 to 2010. All papers were manually selected and reviewed based on specified inclusion and exclusion criteria.

SLIDE 10

76 articles meeting these criteria were selected, but two have since been retracted and were excluded, leaving 74 articles in our sample, 53 of which were cancer-related. In all 74, a biomarker was used to stratify patients for differential treatment.

SLIDE 11

Data Driven versus Knowledge Driven

Because of the so-called “curse of dimensionality,” identifying potential biomarkers from patient genomic profiles is a tremendous challenge. In the studies reviewed, two main approaches were uncovered for identifying the needed biomarkers:

a data-driven approach using primarily empirical methods and a knowledge-driven approach using existing biological knowledge about functions of genes, proteins, pathways and mechanisms.

Of the 56 papers that developed new biomarkers, 16 used a data-driven approach, 36 a knowledge-driven approach, and 4 a hybrid of the two.

SLIDE 12

Prognostic vs. Predictive Biomarkers

Two types of relationships between biomarkers and clinical outcomes were observed in the reviewed studies:
- association between biomarkers and patient prognosis (prognostic biomarkers), and
- association between biomarkers and response to treatment (predictive biomarkers).

In the reviewed studies: 19 compared different treatments for one patient group; 33 studied the same therapy across different groups; and 16 made both types of comparisons.

SLIDE 13

Reliability and Reproducibility

A continuing controversy of personalized medicine focuses on its reliability and reproducibility (two of the studies reviewed were retracted because of non-replicability). The complexity of the data and statistical analyses involved make study of reproducibility of results both difficult and important:

- datasets must be made publicly available for verification;
- biomarkers need to be validated in a different group of patients;
- quality data management is another important issue;
- creative statistical methods are needed.

Several recommendations regarding these issues have been made and more are to come.

SLIDE 14

Statistical and Computational Task and Challenges

Task: Develop statistically efficient clinical trial designs and analysis methods for discovering individualized treatment rules.

Predictors: medical records, diagnostic tests, demographics, imaging, genetics, genomics, proteomics, ....

Challenges:
- Identify the optimal individualized treatment rule using training data in which the optimal treatment is unknown.
- High-dimensional predictors; arbitrary-order nonparametric interactions.
- Longitudinal data: sequentially dependent.

SLIDE 15

Methodology Theoretical Results Simulation Studies and Data Analysis Comments

Part II Progress on Single-Decision Regime Discovery

SLIDE 16

Single Decision: Data and Goal

Observe independent and identically distributed training data (Xi, Ai, Ri), i = 1, ..., n:
- Xi: baseline variables, X ∈ R^d.
- Ai: binary treatment option, A ∈ {−1, 1}.
- Ri: outcome (larger is better), R ∈ R+, R bounded.

The study is randomized with known randomization probability of the treatment. We construct an individualized treatment rule (ITR) D : R^d → {−1, 1}.

Goal: Maximize the expected outcome if the ITR is implemented in the future.

SLIDE 17

Standard Approach and Challenges

Standard approach:

Use regression and/or machine learning (e.g., support vector regression (SVR)) to estimate Q(x, a) = E(R | X = x, A = a), then set D̂n(x) = argmax_a Q̂n(x, a).

Issues:
- For right-censored outcomes, we developed improved random forests (Zhu and Kosorok, 2012, JASA) and SVR (Goldberg and Kosorok, 2012, submitted).
- The current approach is indirect, since we must estimate Q(x, a) and invert it to estimate D(x).
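As a minimal sketch of this regression-and-invert approach on simulated data (scikit-learn's random forest stands in for the methods cited above; all data-generating choices and names here are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Simulated randomized trial: X baseline covariates, A in {-1, +1}, R the outcome.
n, d = 500, 5
X = rng.uniform(-1, 1, size=(n, d))
A = rng.choice([-1, 1], size=n)
R = 1 + X[:, 0] + 0.8 * (X[:, 1] - X[:, 0]) * A + rng.normal(0, 0.5, n)

# Step 1: regress R on (X, A) to estimate Q(x, a) = E(R | X = x, A = a).
Q_hat = RandomForestRegressor(n_estimators=200, random_state=0)
Q_hat.fit(np.column_stack([X, A]), R)

# Step 2: invert -- the estimated ITR assigns the treatment with the larger predicted Q.
def itr(x):
    q_plus = Q_hat.predict(np.column_stack([x, np.ones(len(x))]))
    q_minus = Q_hat.predict(np.column_stack([x, -np.ones(len(x))]))
    return np.where(q_plus > q_minus, 1, -1)

rules = itr(X)   # estimated optimal treatment for each training patient
```

The indirectness noted above is visible here: two model evaluations and a comparison are needed to recover D(x) from Q(x, a).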

SLIDE 18

Optimal Individualized Treatment Rule Discovery

Traditional approach (regression-based):
(X, A, R) → predict E(R | A, X) by minimizing prediction error → optimal ITR = argmax_{A ∈ {−1,1}} Ê(R | A, X).

Problem: mismatch between minimizing the prediction error and maximizing the value function.

Our approach:
(X, A, R) → optimal ITR, found by directly maximizing V(D).

Can we directly estimate the decision rule which maximizes the value function?

SLIDE 19

Value Function and Optimal Individualized Treatment Rule

1. Let P denote the distribution of (X, A, R) where treatments are randomized, and P^D denote the distribution of (X, A, R) where treatments are chosen according to D. The value function of D (Qian & Murphy, 2011) is

   V(D) = E^D(R) = ∫ R dP^D = ∫ R (dP^D/dP) dP = E[ I(A = D(X)) R / P(A|X) ].

2. Optimal individualized treatment rule: D* ∈ argmax_D V(D), with

   E(R|X, A = 1) > E(R|X, A = −1) ⇒ D*(X) = 1,
   E(R|X, A = 1) < E(R|X, A = −1) ⇒ D*(X) = −1.
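A small Monte Carlo illustration of the inverse-probability-weighted form of the value function, assuming a randomized design with P(A|X) = 0.5 (the data-generating model and names are hypothetical, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated randomized study: P(A = 1 | X) = 0.5, true optimal rule D*(x) = sign(x).
n = 2000
X = rng.uniform(-1, 1, size=n)
A = rng.choice([-1, 1], size=n)
R = 1 + 0.5 * X * A + rng.normal(0, 0.1, n)

def value_ipw(D, X, A, R, prob=0.5):
    """IPW estimate of V(D) = E[ I(A = D(X)) R / P(A|X) ]."""
    return np.mean((A == D(X)) * R / prob)

v_opt = value_ipw(np.sign, X, A, R)                 # follow D*(x) = sign(x): true value 1.25
v_bad = value_ipw(lambda x: -np.sign(x), X, A, R)   # opposite rule: true value 0.75
```

Only patients whose observed treatment agrees with the rule contribute, reweighted by 1/P(A|X), which makes the estimator unbiased for the value the rule would achieve if deployed.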

SLIDE 20

Classification Perspective

Intuition: classification (artificial intelligence and statistical learning). Given a new observation Xnew, predict the class label D*(Xnew). We have no direct information on the true class labels D*. Can we assign the right treatment based on the observed information?

[Diagram: for a new patient Xnew similar to patients X with large outcomes, give the same treatment; for Xnew similar to patients with small outcomes, give the opposite treatment.]

SLIDE 21

Outcome Weighted Learning (OWL)

The optimal individualized treatment rule D* maximizes the value E[ I(A = D(X)) R / P(A|X) ], or equivalently minimizes the risk E[ I(A ≠ D(X)) R / P(A|X) ].

For any rule D, D(X) = sign(f(X)) for some function f.

Empirical approximation to the risk function:

   (1/n) Σ_{i=1}^n [Ri / P(Ai|Xi)] I(Ai ≠ sign(f(Xi))).

Computational challenges: non-convexity and discontinuity of the 0-1 loss.
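The weighted 0-1 risk above is straightforward to compute; a toy sketch (the numbers are made up, not trial data):

```python
import numpy as np

def owl_empirical_risk(f_values, A, R, prob):
    """Weighted 0-1 risk: (1/n) * sum_i [R_i / P(A_i|X_i)] * 1{A_i != sign(f(X_i))}."""
    return np.mean(R / prob * (A != np.sign(f_values)))

# Toy data: 4 patients, randomized 50/50.
A = np.array([1, -1, 1, -1])
R = np.array([2.0, 1.0, 3.0, 0.5])
prob = np.full(4, 0.5)

risk_match = owl_empirical_risk(A.astype(float), A, R, prob)   # rule agrees with A everywhere: 0.0
risk_flip = owl_empirical_risk(-A.astype(float), A, R, prob)   # rule disagrees everywhere: mean(R/0.5) = 3.25
```

The non-convexity issue is visible in the indicator: small changes in f can move the risk discontinuously, which motivates the convex surrogate on the next slide.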

SLIDE 22

Convex Surrogate Loss: Hinge Loss

[Figure: the 0-1 loss and the hinge loss as functions of Af(X).]

Hinge loss: φ(Af(X)) = (1 − Af(X))+, where x+ = max(x, 0).

SLIDE 23

Outcome Weighted Support Vector Machine (SVM)

Objective function (regularization framework):

   min_f  (1/n) Σ_{i=1}^n [Ri / P(Ai|Xi)] φ(Ai f(Xi)) + λn ||f||².   (1)

Here ||f|| is some norm of f, and λn controls the severity of the penalty on the functions. A linear decision rule: f(X) = Xᵀβ + β0, with ||f|| the Euclidean norm of β. Estimated individualized treatment rule: D̂n = sign(f̂n(X)), where f̂n is the solution to (1).
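One practical way to fit a weighted hinge-loss classifier of this form is to pass the weights Ri / P(Ai|Xi) to an off-the-shelf SVM as per-sample weights. A hedged sketch on simulated data (this uses scikit-learn's SVC as a stand-in, not the authors' implementation, and relies on R being positive, as assumed earlier with R ∈ R+):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Simulated randomized trial; the true optimal rule is D*(x) = sign(x1 + x2).
n = 400
X = rng.uniform(-1, 1, size=(n, 2))
A = rng.choice([-1, 1], size=n)
R = 3 + (X[:, 0] + X[:, 1]) * A + rng.normal(0, 0.2, n)
R = np.maximum(R, 0.05)          # keep weights positive (OWL assumes R in R+)
prob = np.full(n, 0.5)           # known randomization probability P(A|X)

# Outcome weighted SVM: classify A with per-patient weights R_i / P(A_i|X_i),
# so patients with large outcomes pull the rule toward their observed treatment.
owl = SVC(kernel="linear", C=1.0)
owl.fit(X, A, sample_weight=R / prob)

D_hat = owl.predict(X)           # estimated individualized treatment rule on the training set
```

Replacing `kernel="linear"` with `kernel="rbf"` gives the nonlinear (Gaussian-kernel) variant discussed on the next slide.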

SLIDE 24

Computation and Kernel Trick

The dual problem is a convex optimization problem: quadratic programming; Karush-Kuhn-Tucker conditions. Linear decision rules may be insufficient. Kernel trick: a kernel k : R^d × R^d → R yields a nonlinear decision rule f(x) + β0 with f in the reproducing kernel Hilbert space (RKHS) Hk with norm ||·||k:

   Hk = { g(x) = Σ_{i=1}^m αi k(xi, x) }.

A linear kernel yields a linear decision rule.

SLIDE 25

Risk Bound and Convergence Rates of the OWL Estimator

Goal: understand the accuracy of the OWL procedure.
- Fisher consistency, consistency, and general risk bounds.
- A precise risk bound under certain regularity conditions.
- The value converges to the optimal value surprisingly fast, almost as fast as n^{-1}.
- Similar to rate results in the SVM literature (Tsybakov, 2004).

SLIDE 26

Empirical Study

OWL with Gaussian kernel has two tuning parameters:
- λn: the penalty parameter.
- σn: the inverse bandwidth of the kernel.

Methods for comparison:
- OWL with linear kernel.
- Regression-based methods: l1-penalized least squares (l1-PLS; Qian & Murphy, 2011) and ordinary least squares (OLS), each with basis functions (1, X, A, XA).

Values are evaluated in terms of mean squared error (MSE). 1000 replications; each training data set is of size 100, 200, 400, or 800, with an independent validation set of size 10000.

SLIDE 27

Data Generation

X = (X1, ..., X50) ∼ U[−1, 1]^50. A ∈ {−1, 1}, with P(A = 1) = P(A = −1) = 0.5. The response R ∼ N(Q0, 1), where Q0 = 1 + 2X1 + X2 + 0.5X3 + T0(X, A), with three scenarios for the treatment-interaction term:

1. T0(X, A) = 0.442(1 − X1 − X2)A.
2. T0(X, A) = (X2 − 2X1³ − 0.1)A.
3. T0(X, A) = (0.5 − X1² − X2²)(X1² + X2² − 0.3)A.
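The three scenarios can be simulated as follows. Note that the scenario 2 and 3 interaction terms are reconstructed from garbled slide text and should be treated as assumptions:

```python
import numpy as np

def generate_data(n, scenario=1, rng=None):
    """Simulate one training set following the simulation design above."""
    rng = np.random.default_rng(rng)
    X = rng.uniform(-1, 1, size=(n, 50))
    A = rng.choice([-1, 1], size=n)
    if scenario == 1:
        T0 = 0.442 * (1 - X[:, 0] - X[:, 1]) * A
    elif scenario == 2:  # reconstructed form; treat as an assumption
        T0 = (X[:, 1] - 2 * X[:, 0] ** 3 - 0.1) * A
    else:                # scenario 3, reconstructed form; treat as an assumption
        T0 = (0.5 - X[:, 0] ** 2 - X[:, 1] ** 2) * (X[:, 0] ** 2 + X[:, 1] ** 2 - 0.3) * A
    Q0 = 1 + 2 * X[:, 0] + X[:, 1] + 0.5 * X[:, 2] + T0
    R = rng.normal(Q0, 1)
    return X, A, R

X, A, R = generate_data(200, scenario=1, rng=0)
```

Only X1-X3 affect the outcome; the remaining 47 covariates are noise, which is what makes the high-dimensional comparison meaningful.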

SLIDE 28

Simulation Results

[Figure: optimal decision boundary in (X1, X2) (regions D* = −1 and D* = 1) and MSE of the estimated value versus sample size (100-800) for OLS, l1-PLS, OWL-linear, and OWL-Gaussian.]

Scenario 1: T0(X, A) = 0.442(1 − X1 − X2)A

SLIDE 29

Simulation Results

[Figure: optimal decision boundary in (X1, X2) and MSE of the estimated value versus sample size (100-800) for OLS, l1-PLS, OWL-linear, and OWL-Gaussian.]

Scenario 2: T0(X, A) = (X2 − 2X1³ − 0.1)A

SLIDE 30

Simulation Results

[Figure: optimal decision boundary in (X1, X2) and MSE of the estimated value versus sample size (100-800) for OLS, l1-PLS, OWL-linear, and OWL-Gaussian.]

Scenario 3: T0(X, A) = (0.5 − X1² − X2²)(X1² + X2² − 0.3)A

SLIDE 31

Simulation Results: Misclassification

[Figure: misclassification rates versus sample size (100-800) in Scenario 3 for OLS, l1-PLS, OWL-linear, and OWL-Gaussian.]

SLIDE 32

Nefazodone-CBASP clinical trial (Keller et al., 2000)

681 patients with non-psychotic chronic major depressive disorder (MDD), randomized in a 1:1:1 ratio to nefazodone, the cognitive behavioral-analysis system of psychotherapy (CBASP), or the combination of the two. Primary outcome: score on the 24-item Hamilton Rating Scale for Depression (HRSD); lower is better. 50 baseline variables: demographics, psychological diagnostics, etc.

SLIDE 33

Nefazodone-CBASP clinical trial (Keller et al., 2000)

Pairwise comparisons: OWL with Gaussian kernel; l1-PLS and OLS with basis functions (1, X, A, XA). Values calculated with a 5-fold cross-validation type analysis.

Table 1: Mean HRSD (lower is better) from the cross-validation procedure with different methods.

                             OLS    l1-PLS    OWL
Nefazodone vs CBASP        15.87     15.95  15.74
Combination vs Nefazodone  11.75     11.28  10.71
Combination vs CBASP       12.22     10.97  10.86

SLIDE 34

Comments

The outcome weighted learning procedure:
- Discovers an optimal individualized therapy to improve the expected outcome.
- Its nonparametric approach sidesteps the inversion step and invokes statistical learning techniques directly.

Some open questions: How should censoring be handled? How can sample size formulas be generated to enable practical phase II designs?

SLIDE 35

Framework Example New Developments

Part III Progress on Multi-Decision (Dynamic) Regime Discovery

SLIDE 36

Dynamic Treatment Regimes (DTR)

Observe data on n individuals, with T stages for each individual:

   X1, A1, X2, A2, ..., XT, AT, XT+1.

- Xt: observation available at the t-th stage.
- At: treatment at the t-th stage, At ∈ {−1, 1}.
- Ht: history available at the t-th stage, Ht = {X1, A1, X2, ..., At−1, Xt}.
- Rt: outcome following the t-th stage, Rt = rt(Ht+1).

A DTR is a sequence of decision rules D = (D1(H1), ..., DT(HT)), with Dt(Ht) ∈ {−1, 1}.

Goal: Maximize the expected sum of outcomes if the DTR is implemented in the future.

SLIDE 37

Value Function and Optimal DTR for Two Stages

The value function: V(D) = E^D(R1 + R2). Optimal DTR: D* = argmax_D V(D).

Constructing optimal DTRs based on Q-functions:

   Q2(h2, a2) = E(R2 | H2 = h2, A2 = a2),   D2*(h2) = argmax_{a2} Q2(h2, a2),
   Q1(h1, a1) = E(R1 + max_{a2} Q2(H2, a2) | H1 = h1, A1 = a1),   D1*(h1) = argmax_{a1} Q1(h1, a1).

Q-learning with regression: estimate the Q-functions from data using regression and then find the optimal DTR.

SLIDE 38

Non-Small Cell Lung Cancer (Yufan Zhao et al., 2011)

The clinical setting: there are two to three lines of therapy, but very few patients receive three, so we focus on two here. Decisions are made at two treatment times: (1) at the beginning of the first line and (2) at the end of the first line. At time (1), we decide which of several agent options is best (the simulation considers two). At time (2), we decide when to start the second line (three choices, for simplicity) and which of two agents to assign. The reward is overall survival, which is right-censored.

SLIDE 39

Performance of Optimal Personalized Versus Fixed Regimens

[Figure: overall survival under each of 12 fixed regimens (A1A31 through A2A43), with means ranging from 8.90 to 11.29, versus 17.48 under the optimal personalized regimen.]

SLIDE 40

Standard Approach and Challenges

Standard approach:
Use regression and/or machine learning (e.g., SVR) to estimate the Q-functions sequentially backwards: at stage t, use as the outcome the estimated pseudo-value Rt + max_{a_{t+1}} Q̂_{t+1}(H_{t+1}, a_{t+1}).

Issues:
- For right-censored outcomes, we developed Q-learning for censored data with a possibly irregular number and spacing of decision times (Goldberg and Kosorok, 2012, AOS).
- As before, the approach is indirect, since we must estimate Qt(h, a) and invert it to estimate Dt(h).

SLIDE 41

Backwards Outcome Weighted Learning (BOWL)

Problems with Q-learning:
- Mismatch between estimating the optimal Q-function and the goal of maximizing the value function (Murphy, 2005).
- Non-smooth maximization operation.
- High-dimensional covariate space.

BOWL:
- A generalization of OWL to the multi-decision setup.
- Finds the optimal decision rule by directly maximizing the value function for each stage, working backwards.
- Consistency and a risk bound for the BOWL estimator.

SLIDE 42

Simulation Study

Generative model (Chakraborty et al., 2010):
- X1 ∼ U[−1, 1]^50, X2 = X1.
- A1, A2 ∈ {−1, 1}, P(A1 = 1) = P(A2 = 1) = 0.5.
- R1 = 0; R2 | H2, A2 ∼ N(−0.5A1 + 0.5A2 + 0.5A1A2, 1).

Training data sample sizes n = 100, 200, 400; testing data sample size 10000; 500 replications.
Methods: BOWL with Gaussian/linear kernel; Q-learning with linear regression.
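As a sketch of how such a generative model is used, the snippet below simulates the model above and evaluates one fixed regime by Monte Carlo (the function and rule choices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Monte Carlo evaluation of a fixed two-stage regime under the generative model above.
n = 100_000
X1 = rng.uniform(-1, 1, size=(n, 50))   # X2 = X1, so no separate draw is needed

def value_of_regime(d1, d2, rng):
    """Estimate V(D) = E^D(R1 + R2); R1 = 0 under this model."""
    A1 = d1(X1)
    A2 = d2(X1, A1)
    R2 = rng.normal(-0.5 * A1 + 0.5 * A2 + 0.5 * A1 * A2, 1)
    return float(np.mean(R2))

# Fixed regime: A1 = -1 for everyone, then A2 = +1; its mean outcome is 0.5 + 0.5 - 0.5 = 0.5.
v = value_of_regime(lambda x: -np.ones(len(x)),
                    lambda x, a1: np.ones(len(x)), rng)
```

Because R2 depends only on (A1, A2) here, the 50-dimensional X1 is pure noise: a method that chases it (as Q-learning with a rich linear basis can) pays a price at small n, which is the point of the comparison.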

SLIDE 43

Simulation Results

[Figure: distributions of the achieved value for Q-learning (linear) and BOWL (linear) at sample sizes n = 100, 200, and 400, each panel marked with the optimal value.]

Note: Q learning encounters difficulties with small sample sizes.

SLIDE 44

Open Issues for BOWL

- Multicategory/continuous treatments: multiple therapies; a continuous range of dose levels.
- Optimizing the timing of treatment switches in multi-stage trials.

[Diagram: possible treatments and initial timings across the 1st and 2nd lines, with transitions at immediate start, progression, or death.]

SLIDE 45

Conclusions

Part IV Overall Conclusions and Open Questions

SLIDE 46

Conclusions

- Single- and multi-decision personalized medicine trials can discover effective individualized regimens that improve significantly over standard approaches.
- Artificial intelligence and statistical learning tools play a significant role in new developments.
- The sample sizes required are usually reasonable.
- For the multi-decision setting, good dynamic models (both mechanistic and stochastic) are needed to construct virtual patients and virtual trials before designing trials.
- The advantage is the discovery of effective new treatments that could be missed by conventional approaches.

SLIDE 47

Open Questions

- Better tools for high-dimensional data: interpretability and simplicity.
- Inference for individualized treatment regimes: limiting distribution of the value function and sample size formulas in both the single- and multi-decision setups.
- Survival data (for OWL and BOWL, etc.).
- Missing data.
- Observational studies.

SLIDE 48

Acknowledgments

Yingqi Zhao, Yufan Zhao, Zheng Ren, Yair Goldberg, Donglin Zeng, Eric Laber, Mark A. Socinski, A. John Rush, Richard M. Goldberg, Marie Davidian, Stephen L. George, Fred A. Wright, Anastasios A. Tsiatis, Min Qian, and Lacey Gunter.

SLIDE 49

References

Chakraborty, B., et al. (2010). Inference for non-regular parameters in optimal dynamic treatment regimes. Statistical Methods in Medical Research 19:317-343.

Goldberg, Y., & Kosorok, M. R. (2012). Q-learning with censored data. Annals of Statistics 40:529-560.

Keller, M. B., et al. (2000). A comparison of nefazodone, the cognitive behavioral-analysis system of psychotherapy, and their combination for the treatment of chronic depression. NEJM 342(20):1462-1470.

Murphy, S. A. (2005). A generalization error for Q-learning. Journal of Machine Learning Research 6:1073-1097.

SLIDE 50

References (continued)

Qian, M., & Murphy, S. A. (2011). Performance guarantees for individualized treatment rules. Annals of Statistics 39:1180-1210.

Tsybakov, A. B. (2004). Optimal aggregation of classifiers in statistical learning. Annals of Statistics 32:135-166.

Zhao, Yingqi, et al. (2012). Estimating individualized treatment rules using outcome weighted learning. Journal of the American Statistical Association, in press.

Zhao, Yufan, et al. (2011). Reinforcement learning strategies for clinical trials in non-small cell lung cancer. Biometrics 67:1422-1433.

Zhu, R., & Kosorok, M. R. (2012). Recursively imputed survival trees. Journal of the American Statistical Association 107:331-340.
