SLIDE 1

Lecture: Continuous Time Models with Investment Applications

Simon Gilchrist, Boston University and NBER

EC 745

Fall, 2013

SLIDE 2

Brownian Motion

Brownian motion (Wiener process): a continuous-time stochastic process with three properties:

  • Markov process: the probability distribution of all future values depends only on the current value.
  • Independent increments: the probability distribution of the change in the process over a time interval is independent of the change over any other non-overlapping interval.
  • Changes in the process over any finite interval are normally distributed, with a variance that increases linearly with the length of the interval.

SLIDE 3

Formal Definition

If z(t) is a Wiener process, then any change in z, ∆z, corresponding to a time interval ∆t satisfies:

∆z = ε_t √∆t, where ε_t ∼ N(0, 1) and E(ε_t ε_s) = 0 for t ≠ s.

Intuition: consider the change in z(t) over a finite interval T. Divide T into n = T/∆t subintervals:

∆z = z(s + T) − z(s) = Σ_{i=1}^{n} ε_i √∆t

so E(∆z) = 0 and V(∆z) = n∆t = T.
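The construction above can be checked by direct simulation. A minimal sketch (step size, horizon, and seed are arbitrary choices): draw ε ∼ N(0,1), accumulate ε√∆t over n = T/∆t steps, and verify that ∆z has mean near 0 and variance near T.

```python
import math
import random

def simulate_dz(T, dt, rng):
    """Accumulate z(s+T) - z(s) = sum of eps_i * sqrt(dt), eps_i ~ N(0,1)."""
    n = int(T / dt)
    return sum(rng.gauss(0.0, 1.0) * math.sqrt(dt) for _ in range(n))

rng = random.Random(0)
T = 1.0
draws = [simulate_dz(T, dt=0.01, rng=rng) for _ in range(2000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
print(mean, var)  # mean ≈ 0, var ≈ T = 1
```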

SLIDE 4

Brownian motion with drift

Brownian motion with drift: dx = α dt + σ dz, where dz is a Wiener process. Over any finite interval ∆t, ∆x is normally distributed with E(∆x) = α∆t and V(∆x) = σ²∆t.

SLIDE 5

Random walk representation of Brownian motion:

Show that dx is the limit of a discrete-time random walk with drift. Suppose

∆x = +∆h with probability p, and ∆x = −∆h with probability q = 1 − p.

Then E(∆x) = (p − q)∆h and

V(∆x) = E(∆x²) − E(∆x)² = (1 − (p − q)²)∆h² = 4pq∆h²

SLIDE 6

Binomial distribution

Let a time interval t have n = t/∆t discrete steps. Then x_t − x_0 is the outcome of a series of n independent trials, with ∆h a success occurring with probability p and −∆h a failure occurring with probability q = 1 − p. So x_t − x_0 has a binomial distribution with:

E(x_t − x_0) = n(p − q)∆h = t(p − q)∆h/∆t

V(x_t − x_0) = n(1 − (p − q)²)∆h² = 4pq t ∆h²/∆t

SLIDE 7

Random walk representation of Brownian motion:

Choose ∆h, p, q so that the mean and variance of x_t − x_0 depend only on t and not on the step size ∆t or the jump ∆h:

∆h = σ√∆t, p = ½[1 + (α/σ)√∆t], q = ½[1 − (α/σ)√∆t]

Then p − q = (α/σ)√∆t = (α/σ²)∆h. This implies

E(x_t − x_0) = t (α/σ²) ∆h²/∆t = αt

and

V(x_t − x_0) = t [1 − (α/σ)²∆t] σ²

so lim_{∆t→0} V(x_t − x_0) = tσ².
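The convergence can be illustrated numerically. A sketch (the step size and number of paths are arbitrary choices): simulate the binomial walk with ∆h = σ√∆t and p = ½[1 + (α/σ)√∆t], and compare the sample mean and variance of x_t − x_0 with αt and σ²t.

```python
import math
import random

def binomial_walk(alpha, sigma, t, dt, rng):
    """Random walk with step dh = sigma*sqrt(dt) and up-probability
    p = 0.5*(1 + (alpha/sigma)*sqrt(dt)); converges to BM with drift."""
    dh = sigma * math.sqrt(dt)
    p = 0.5 * (1.0 + (alpha / sigma) * math.sqrt(dt))
    x = 0.0
    for _ in range(int(t / dt)):
        x += dh if rng.random() < p else -dh
    return x

rng = random.Random(1)
alpha, sigma, t = 0.5, 1.0, 2.0
draws = [binomial_walk(alpha, sigma, t, dt=0.01, rng=rng) for _ in range(3000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
print(mean, var)  # mean ≈ alpha*t = 1.0, var ≈ sigma^2*t = 2.0
```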

SLIDE 8

Comments:

Brownian motion is the limit of a discrete-time random walk whose mean and variance are independent of the step size ∆t and jump ∆h. This limiting process has the property that variance grows linearly per unit of time.

For any finite interval, the total distance travelled is infinite as ∆t → 0: |∆x| = ∆h, so E|∆x| = ∆h and

n E|∆x| = t∆h/∆t = tσ/√∆t → ∞

Brownian motion is not differentiable in the conventional sense:

∆x/∆t = ±∆h/∆t = ±σ/√∆t → ∞

so dx/dt does not exist and we cannot compute E(dx/dt). We can, however, compute E(dx) and (1/dt)E(dx).
SLIDE 9

Ito Processes

Generalize Brownian motion (Ito processes): dx = a(x, t)dt + b(x, t)dz, where dz is a Wiener process and a(x, t), b(x, t) are non-random functions of the state.

E(dx) = a(x, t)dt, so a(x, t) is the instantaneous rate of drift. The instantaneous variance is

V(dx) = E(dx²) − E(dx)² = a(x, t)²dt² + 2E(a(x, t)b(x, t)dt dz) + b(x, t)²V(dz)

The first two terms are of order dt² and dt^(3/2), so

V(dx) = b(x, t)²V(dz) = b(x, t)²dt

SLIDE 10

Example 1: Geometric Brownian motion

Let dx = αx dt + σx dz. If x is a geometric Brownian motion, then F(x) = ln(x) is a Brownian motion with drift:

dF = (α − σ²/2)dt + σ dz

This implies ln(x_t/x_0) ∼ N((α − σ²/2)t, σ²t). Using the properties of the log-normal we have

E(x_t) = x_0 e^(αt)

V(x_t) = x_0² e^(2αt) (e^(σ²t) − 1)
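These log-normal moments are easy to verify by Monte Carlo. A sketch with illustrative parameters, using the exact transition x_t = x_0 exp((α − σ²/2)t + σ√t ε):

```python
import math
import random

rng = random.Random(1)
x0, alpha, sigma, t = 1.0, 0.05, 0.2, 1.0

def draw_xt():
    """Exact GBM transition: x_t = x0 exp((alpha - sigma^2/2) t + sigma sqrt(t) eps)."""
    eps = rng.gauss(0.0, 1.0)
    return x0 * math.exp((alpha - 0.5 * sigma ** 2) * t + sigma * math.sqrt(t) * eps)

draws = [draw_xt() for _ in range(200000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
mean_theory = x0 * math.exp(alpha * t)
var_theory = x0 ** 2 * math.exp(2 * alpha * t) * (math.exp(sigma ** 2 * t) - 1)
print(mean, mean_theory)  # both ≈ 1.051
print(var, var_theory)    # both ≈ 0.045
```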
SLIDE 11

Present values

We also have the present value expression:

E ∫_0^∞ x(t)e^(−rt)dt = ∫_0^∞ x_0 e^(−(r−α)t)dt = x_0/(r − α)

The drift rate α can be interpreted as the dividend growth rate.

SLIDE 12

Example 2: Continuous time AR(1) (Ornstein-Uhlenbeck)

Let dx = η(µ − x)dt + σ dz. Then

E(x_t) = µ + (x_0 − µ)e^(−ηt) → µ as t → ∞

V(x_t) = (σ²/2η)(1 − e^(−2ηt)) → σ²/2η as t → ∞

As η → ∞, x becomes a constant. We therefore need to adjust both σ and η to vary the degree of mean reversion.
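Both limits can be checked with a simple Euler discretization of the process (parameter values and step size are illustrative choices):

```python
import math
import random

def ou_end(x0, eta, mu, sigma, t, dt, rng):
    """Euler discretization of dx = eta*(mu - x) dt + sigma dz."""
    x = x0
    for _ in range(int(t / dt)):
        x += eta * (mu - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x

rng = random.Random(3)
x0, eta, mu, sigma, t = 2.0, 1.0, 0.0, 0.5, 5.0
draws = [ou_end(x0, eta, mu, sigma, t, 0.01, rng) for _ in range(2000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
# Theory: E = mu + (x0-mu)e^{-eta t} ≈ 0.013; V = sigma^2/(2 eta)(1 - e^{-2 eta t}) ≈ 0.125
print(mean, var)
```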

SLIDE 13

Ito’s Lemma:

An Ito process is continuous but not differentiable. What about functions F(x, t) of x, where dx = a(x, t)dt + b(x, t)dz?

Consider a Taylor-series expansion of F(x, t) (ignoring higher-order derivatives in t):

dF = (∂F/∂x)dx + (∂F/∂t)dt + ½(∂²F/∂x²)dx² + (1/6)(∂³F/∂x³)dx³ + …

We want all terms of order dt. Since dz is of order √dt, dx contains a term of order √dt, and

(dx)² = b(x, t)²dt + higher-order terms

SLIDE 14

Ito’s Lemma

This implies

dF = (∂F/∂x)dx + (∂F/∂t)dt + ½(∂²F/∂x²)dx²

dF = [∂F/∂t + a(x, t)(∂F/∂x) + ½b(x, t)²(∂²F/∂x²)]dt + b(x, t)(∂F/∂x)dz

Taking expectations we have:

E(dF) = [∂F/∂t + a(x, t)(∂F/∂x) + ½b(x, t)²(∂²F/∂x²)]dt

Because of uncertainty, the term ½b(x, t)²(∂²F/∂x²) is of first order. I.e., owing to Jensen’s inequality, if the function is concave at x, uncertainty lowers the value of E(dF).

SLIDE 15

Example: Geometric Brownian motion

Let dx = αx dt + σx dz and let F(x) = ln(x). Then

dF = [a(x)F′(x) + ½b(x)²F″(x)]dt + b(x)F′(x)dz
   = [αx(1/x) + ½σ²x²(−1/x²)]dt + σx(1/x)dz
   = (α − ½σ²)dt + σ dz

The log of a geometric Brownian motion is a Brownian motion with drift.
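The cancellation of x in the drift can be checked mechanically: for F(x) = ln x, a(x)F′(x) + ½b(x)²F″(x) should equal α − σ²/2 at every x. A small sketch (parameter values arbitrary):

```python
alpha, sigma = 0.08, 0.3

def ito_drift(x):
    """Drift of F(x)=ln(x) under dx = alpha*x dt + sigma*x dz:
    a(x)F'(x) + 0.5 b(x)^2 F''(x), with F' = 1/x and F'' = -1/x^2."""
    a, b = alpha * x, sigma * x
    return a * (1.0 / x) + 0.5 * b ** 2 * (-1.0 / x ** 2)

drifts = [ito_drift(x) for x in (0.5, 1.0, 7.3, 100.0)]
print(drifts)  # each ≈ alpha - 0.5*sigma^2 = 0.035, independent of x
```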

SLIDE 16

Dynamic programming in continuous time:

Start with the discrete time problem:

F(x, t) = max_u { π(x, u, t)∆t + [1/(1 + ρ∆t)] E[F(x′, t + ∆t)|x, u] }

where π(x, u, t) is flow profit given state x and policy u. Rearrange to get

ρ∆t F(x, t) = max_u { π(x, u, t)∆t + E[F(x′, t + ∆t) − F(x, t)|x, u] }

Divide by ∆t and take the limit as ∆t → 0:

ρF(x, t) = max_u { π(x, u, t) + (1/dt)E[dF|x, u] }
SLIDE 17

Suppose x follows an Ito process: dx = a(x, u, t)dt + b(x, u, t)dz. Then, up to o(∆t),

E[F(x′, t + ∆t) − F(x, t)|x, u] = [F_t(x, t) + a(x, u, t)F_x(x, t) + ½b²(x, u, t)F_xx(x, t)]∆t

We now have that the return equation satisfies:

ρF(x, t) = max_u { π(x, u, t) + F_t(x, t) + a(x, u, t)F_x(x, t) + ½b²(x, u, t)F_xx(x, t) }

SLIDE 18

Hamilton-Jacobi-Bellman equation

If there is an infinite horizon and a(·) and b(·) do not depend explicitly on time, then the value F(x) satisfies the ordinary differential equation:

ρF(x) = max_u { π(x, u) + a(x, u)F′(x) + ½b²(x, u)F″(x) }

This is the continuous time equivalent of the Bellman equation.

SLIDE 19

Optimal stopping problem: Discrete time

Let π(x) denote the flow profit of a machine and let Ω(x) denote the terminal payoff. Assume that π(x) − [ρ/(1 + ρ)]Ω(x) is increasing in x.

Assume that the distribution function Φ(x′|x) satisfies first-order stochastic dominance (i.e. an increase in x shifts the probability distribution of x′ to the right); examples: AR(1), random walk.

SLIDE 20

Optimal policy

Value function:

F(x) = max{ Ω(x); π(x) + [1/(1 + ρ)] E[F(x′)|x] }

Solution: stop if x < x∗ for some value x∗ to be determined.
SLIDE 21

Optimal stopping problem in continuous time

Assume an Ito process for x: dx = a(x)dt + b(x)dz. Profit relative to the flow value of the terminal payoff, π(x) − ρΩ(x), is increasing in x. Return function:

F(x) = max{ Ω(x); π(x)dt + [1/(1 + ρ dt)] E[F(x + dx)|x] }

Solution: stop if x < x∗ for some value x∗ to be determined.
SLIDE 22

Value on continuation region

If x > x∗, the return function satisfies ρF(x) = π(x) + (1/dt)E(dF), which implies:

ρF(x) = π(x) + a(x)F′(x) + ½b²(x)F″(x) for x > x∗

Because x∗ is endogenous, we need two boundary conditions to solve this differential equation.

SLIDE 23

Optimality conditions

Value matching: F(x∗) = Ω(x∗). Smooth pasting: F_x(x∗) = Ω_x(x∗).

Suppose Ω(x) = 0. At the boundary: 0 = π(x∗) + ½b²(x∗)F″(x∗). The solution implies waiting until π(x) ≤ π(x∗) < 0 before stopping (it is worthwhile to incur some loss before closing down the machine).
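To make the boundary conditions concrete, here is a worked special case of my own choosing (not from the slides): π(x) = x, Ω(x) = 0, dx = σ dz (so a = 0, b = σ). The continuation ODE ρF = x + ½σ²F″ has the bounded solution F(x) = x/ρ + A e^(−θx) with θ = √(2ρ)/σ, and value matching plus smooth pasting pin down x∗ = −σ/√(2ρ) < 0, confirming that some losses are absorbed before stopping.

```python
import math

# Illustrative special case (my assumption): pi(x) = x, Omega = 0, dx = sigma dz
rho, sigma = 0.05, 0.4
theta = math.sqrt(2.0 * rho) / sigma           # decay rate of the homogeneous solution
x_star = -1.0 / theta                          # threshold from the two optimality conditions
A = math.exp(theta * x_star) / (rho * theta)   # from smooth pasting: A e^{-theta x*} = 1/(rho theta)

def F(x):
    """Continuation value: particular part x/rho plus decaying homogeneous part."""
    return x / rho + A * math.exp(-theta * x)

h = 1e-4
vm = F(x_star)                                  # value matching: should equal Omega(x*) = 0
sp = (F(x_star + h) - F(x_star - h)) / (2 * h)  # smooth pasting: should equal Omega'(x*) = 0
Fpp = (F(x_star + h) - 2 * F(x_star) + F(x_star - h)) / h ** 2
bc = x_star + 0.5 * sigma ** 2 * Fpp            # boundary condition 0 = pi(x*) + b^2 F''(x*)/2
print(x_star, vm, sp, bc)  # x* < 0; the remaining three values ≈ 0
```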

SLIDE 24

Heuristic argument for smooth pasting:

Suppose F_x < Ω_x at x∗, so that the value function has an upward kink. Then there exists an x∗∗ > x∗ such that Ω(x∗∗) > F(x∗∗), and we should instead stop at x∗∗.

Suppose F_x > Ω_x at x∗, so that the value function has a downward kink. Then the payoff is convex at the optimum, and there is value to waiting to learn the realized value of x.

SLIDE 25

Smooth pasting – slightly less heuristic

Assume a = 1, b = 1. Over the interval ∆t, x rises by ∆h with probability p = ½[1 + √∆t] and falls by ∆h with probability q = ½[1 − √∆t]. Consider the alternative policy of waiting until ∆t to take action.

SLIDE 26

Return from waiting

The return is

G = π(x∗)∆t + [1/(1 + ρ∆t)][pF(x∗ + ∆h) + qΩ(x∗ − ∆h)]
  = π(x∗)∆t + [1/(1 + ρ∆t)][p(F(x∗) + F_x(x∗)∆h) + q(Ω(x∗) − Ω_x(x∗)∆h)] + higher-order terms

Let ∆t → 0, recognizing that ∆t is of order ∆h² and goes to zero faster than ∆h. Applying the value matching condition, we get

G = F(x∗) + ½[F_x(x∗) − Ω_x(x∗)]∆h > F(x∗)

whenever F_x(x∗) > Ω_x(x∗), so waiting would strictly improve on stopping at x∗.

SLIDE 27

Option to invest

Assume the value of a project evolves according to a geometric Brownian motion (log-normal dividends): dV = αV dt + σV dz. A project manager can pay I to exercise an option and receive V. Let F(V) denote the value of the investment opportunity:

F(V) = max_T E[(V_T − I)e^(−ρT)]

Here V_T denotes the payoff to investing at T. Assume α < ρ (otherwise it pays to wait forever).

SLIDE 28

Deterministic case: (σ = 0)

Value of the payoff, given initial value V_0: V(t) = V_0 e^(αt). The value of investing at time T is therefore:

F(V) = (V e^(αT) − I)e^(−ρT)

Suppose α < 0. In this case, invest now if V > I; otherwise, never invest. Suppose 0 < α < ρ. Then F(V) > 0 even if V < I, since V is growing exponentially.

SLIDE 29

Optimality

First-order condition:

dF(V)/dT = −(ρ − α)V e^(−(ρ−α)T) + ρI e^(−ρT) = 0

Invest now (set T = 0) if V > V∗ = [ρ/(ρ − α)]I > I.

Suppose [ρ/(ρ − α)]I > V > I. The project has positive net present value at T = 0, but you should still wait to invest. Intuition: the cost of investing is discounted at a higher rate (ρ) than the benefit, which is discounted at ρ − α.

SLIDE 30

Solution

The solution is therefore

T∗ = max{ (1/α) ln[ρI/((ρ − α)V)], 0 }

and

F(V) = [αI/(ρ − α)] [(ρ − α)V/(ρI)]^(ρ/α) for V < V∗
F(V) = V − I for V ≥ V∗
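A quick numerical sanity check (parameter values are illustrative): compare the closed-form T∗ with a brute-force grid search over the payoff (V e^(αT) − I)e^(−ρT), and confirm that waiting beats investing immediately even though NPV at T = 0 is positive.

```python
import math

rho, alpha, I, V = 0.10, 0.04, 1.0, 1.2   # chosen so that I < V < V* = rho*I/(rho - alpha)

def payoff(T):
    """Value today of committing to invest at date T."""
    return (V * math.exp(alpha * T) - I) * math.exp(-rho * T)

T_star = max(math.log(rho * I / ((rho - alpha) * V)) / alpha, 0.0)
T_grid = max((k * 0.001 for k in range(100001)), key=payoff)   # search T in [0, 100]
F_formula = alpha * I / (rho - alpha) * ((rho - alpha) * V / (rho * I)) ** (rho / alpha)
print(T_star, T_grid)               # both ≈ 8.21
print(payoff(0.0), payoff(T_star))  # positive NPV now, but waiting is worth more
print(F_formula)                    # matches payoff(T_star)
```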

SLIDE 31

Stochastic case: (σ > 0)

When is it optimal to invest I in return for an asset worth V? Assume V follows a geometric Brownian motion: dV = αV dt + σV dz.

Investment rule: invest when V exceeds an optimal cutoff V∗. In the continuation region, the value of the investment project is determined by the capital gain:

ρF dt = E(dF)

SLIDE 32

Continuation value

Apply Ito’s Lemma: dF = F′(V)dV + ½F″(V)(dV)². Since V is a geometric Brownian motion:

dF = αVF′(V)dt + σVF′(V)dz + ½σ²V²F″(V)dt

Take expectations:

E(dF) = [αVF′(V) + ½σ²V²F″(V)]dt

Bellman’s equation holds in the continuation region:

ρF(V) = αVF′(V) + ½σ²V²F″(V)

SLIDE 33

Solution

We are looking for a solution to the differential equation

ρF(V) = αVF′(V) + ½σ²V²F″(V)

which is satisfied on the continuation region V < V∗. Because V∗ is endogenous, we have a free-boundary problem.

SLIDE 34

Boundary conditions

Value matching: F(V∗) = V∗ − I. Smooth pasting: F′(V∗) = 1. We also have F(0) = 0, i.e. if V = 0, a geometric Brownian motion remains at zero.

SLIDE 35

Waiting to invest

Rewriting the value matching condition we have V∗ = I + F(V∗) > I. This implies that the manager will wait to invest, even if the project has positive net present value. Reasons:

  • Dividend growth (as in the non-stochastic case).
  • Uncertainty: higher uncertainty raises the option value F(V∗) and delays investment.

SLIDE 36

Explicit solution:

Guess: F(V) = AV^β1. Value matching implies A(V∗)^β1 = V∗ − I. Smooth pasting implies β1 A(V∗)^(β1−1) = 1. Combining these we get

V∗ = [β1/(β1 − 1)] I and A = (V∗ − I)/(V∗)^β1

SLIDE 37

Solving for coefficients

We now need to solve for β1. Plug the guess into the differential equation and let δ = ρ − α. The equation is satisfied if β is a root of

½σ²β(β − 1) + (ρ − δ)β − ρ = 0

There are two roots to this equation. The positive root satisfies

β1 = ½ − (ρ − δ)/σ² + √{ [(ρ − δ)/σ² − ½]² + 2ρ/σ² } > 1
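The root formula can be verified directly: plug β1 back into the quadratic and check that the residual is zero, that β1 > 1, and that the implied V∗ exceeds I. A sketch with illustrative parameter values:

```python
import math

def beta1(rho, delta, sigma):
    """Positive root of 0.5 sigma^2 b(b-1) + (rho - delta) b - rho = 0."""
    s2 = sigma ** 2
    a = (rho - delta) / s2
    return 0.5 - a + math.sqrt((a - 0.5) ** 2 + 2.0 * rho / s2)

rho, delta, sigma, I = 0.08, 0.04, 0.3, 1.0
b1 = beta1(rho, delta, sigma)
residual = 0.5 * sigma ** 2 * b1 * (b1 - 1.0) + (rho - delta) * b1 - rho
V_star = b1 / (b1 - 1.0) * I
print(b1, residual, V_star)  # b1 ≈ 1.39 > 1, residual ≈ 0, V* ≈ 3.56 > I
```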

SLIDE 38

Comparative statics:

β1 is decreasing in σ, so V∗ is increasing in σ: as uncertainty increases, we wait longer to invest (the wedge between V∗ and I increases).

β1 is increasing in δ, so V∗ is decreasing in δ: as the growth-adjusted discount rate for profits increases, we invest sooner.

β1 is decreasing in ρ (holding δ constant): the more we discount costs relative to growth-adjusted benefits, the longer we wait to invest.
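The first two comparative statics can be traced out numerically (illustrative parameter values, with β1 defined by the quadratic root above):

```python
import math

def beta1(rho, delta, sigma):
    """Positive root of 0.5 sigma^2 b(b-1) + (rho - delta) b - rho = 0."""
    s2 = sigma ** 2
    a = (rho - delta) / s2
    return 0.5 - a + math.sqrt((a - 0.5) ** 2 + 2.0 * rho / s2)

def v_star(rho, delta, sigma, I=1.0):
    b1 = beta1(rho, delta, sigma)
    return b1 / (b1 - 1.0) * I

rho, delta = 0.08, 0.04
vs = [v_star(rho, delta, s) for s in (0.1, 0.2, 0.3, 0.4, 0.5)]
print(vs)  # increasing in sigma: more uncertainty, higher hurdle

vd = [v_star(rho, d, 0.3) for d in (0.02, 0.04, 0.06)]
print(vd)  # decreasing in delta: invest sooner
```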

SLIDE 39

Limiting behavior:

As σ → ∞, V∗ → ∞.

As σ → 0, if α > 0: β1 → ρ/(ρ − δ) and V∗ → (ρ/δ)I > I.

As σ → 0, if α ≤ 0: β1 → ∞ and V∗ → I.

SLIDE 40

Implications for user cost:

Assume the profit flow of a machine is a geometric Brownian motion: dπ = απ dt + σπ dz. The value of the profit stream is

V_t = E_t ∫_t^∞ π_s e^(−ρ(s−t))ds = π_t/(ρ − α)

The investment rule is

π_t > π∗ = [β1/(β1 − 1)](ρ − α)I > (ρ − α)I

The quadratic equation implies [β1/(β1 − 1)](ρ − α)I = (ρ + ½σ²β1)I, so the critical value of profits satisfies:

π∗ = (ρ + ½σ²β1)I > ρI
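The equivalence of the markup form and the user-cost form of the threshold follows from the quadratic; a numerical check with illustrative parameters:

```python
import math

def beta1(rho, delta, sigma):
    """Positive root of 0.5 sigma^2 b(b-1) + (rho - delta) b - rho = 0, delta = rho - alpha."""
    s2 = sigma ** 2
    a = (rho - delta) / s2
    return 0.5 - a + math.sqrt((a - 0.5) ** 2 + 2.0 * rho / s2)

rho, alpha, sigma, I = 0.08, 0.04, 0.3, 1.0
delta = rho - alpha
b1 = beta1(rho, delta, sigma)
lhs = b1 / (b1 - 1.0) * (rho - alpha) * I   # markup form of the profit threshold
rhs = (rho + 0.5 * sigma ** 2 * b1) * I     # user-cost form
print(lhs, rhs, rho * I)                    # lhs = rhs, and both exceed rho*I
```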
slide-41
SLIDE 41

Comments:

Uncertainty increases the hurdle rate – the effective user cost of capital that should be applied when evaluating a given project. With no uncertainty, the user cost is ρ and does not depend on α. In other words, although without uncertainty, we may still wait to invest, our decision is based on standard user cost arguments – i.e. waiting for the flow value of profits to exceed the flow cost of the investment.

SLIDE 42

Abel and Eberly:

Generalized adjustment cost framework including fixed costs and irreversibility.

Operating profit: π(k_t, ε_t), where k_t is current capital and ε_t is a random shock to profits. Assume π_k > 0, π_kk ≤ 0.

Shock process: dε_t = µ(ε_t)dt + σ(ε_t)dz, where z is a Brownian motion.

Capital accumulation: dk_t = (I_t − δk_t)dt

SLIDE 43

Investment costs:

  • Purchase/sale costs: P_k⁻ I · 1(I < 0) + P_k⁺ I · 1(I ≥ 0), with P_k⁺ > P_k⁻ (the purchase price exceeds the resale price).
  • Adjustment costs: continuous, strictly convex, twice differentiable, minimized at I = 0.
  • Fixed costs: non-negative, incurred whenever I ≠ 0.

SLIDE 44

Augmented adjustment cost function:

Express the augmented adjustment cost function as vC(I, K), where

v = 1 if I ≠ 0, and v = 0 if I = 0.

Limits:

lim_{I→0⁻} C(I, K) = lim_{I→0⁺} C(I, K) = C(0, K)

where C(0, K) is the fixed cost. Also: C_I(0, K)⁺ ≥ 0 and C_I(0, K)⁺ ≥ C_I(0, K)⁻.

SLIDE 45

Value of the firm:

Hamilton-Jacobi-Bellman equation:

rV(k, ε) = max_{I,v} { π(k, ε) − vC(I, k) + (1/dt)E(dV) }

Taylor expansion:

dV = V_k dk + V_ε dε + ½V_kk (dk)² + ½V_εε (dε)² + V_kε dk dε + …

Note that dk = (I − δk)dt, so that (dk)² = o(dt). Also (dε)² = σ²(ε)dt + o(dt).

SLIDE 46

Expected firm value

This implies

dV = V_k(I − δk)dt + V_ε(µ(ε)dt + σ(ε)dz) + ½V_εε σ²(ε)dt

Taking expectations:

E(dV) = [V_k(I − δk) + V_ε µ(ε) + ½V_εε σ²(ε)]dt
SLIDE 47

Bellman’s equation

Let q = V_k. Then

rV = max_{I,v} { π(k, ε) − vC(I, k) + q(I − δk) + µ(ε)V_ε + ½σ²(ε)V_εε }

Bellman’s equation says that we can choose I, v to solve

max_{I,v} [ qI − vC(I, k) ]

SLIDE 48

Optimal investment

First consider v = 1. Let

Ψ(q, k) = max_I [ qI − C(I, k) ]

Let I∗(q, k) satisfy:

C_I(I∗(q, k), k) = q for q < C_I(0, k)⁻ or q > C_I(0, k)⁺
I∗(q, k) = 0 for C_I(0, k)⁻ ≤ q ≤ C_I(0, k)⁺

SLIDE 49

Comments:

C_II > 0 implies that I∗(q, k) is strictly increasing in q over the range of action.

If C(I, k) is differentiable at I = 0, then C_I(I∗(q, k), k) = q for all q and there is no range of inaction. If C(I, k) is non-differentiable at zero, we have a range of inaction:

I∗(q, k) < 0 if q < C_I(0, k)⁻
I∗(q, k) = 0 if C_I(0, k)⁻ < q < C_I(0, k)⁺
I∗(q, k) > 0 otherwise

These results imply that I∗(q, k) is non-decreasing in q.
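As one concrete, hypothetical parameterization of the non-differentiable case (my choice, not from the slides), take C(I, k) = γ|I| + (c/2)I²/k, so that C_I(0, k)⁻ = −γ and C_I(0, k)⁺ = +γ. The implied policy is zero on [−γ, γ] and increasing outside it:

```python
def I_star(q, k, gamma=0.2, c=1.0):
    """Optimal investment under the illustrative cost C(I,k) = gamma*|I| + (c/2) I^2 / k,
    so that C_I(0,k)- = -gamma and C_I(0,k)+ = +gamma (kink at I = 0)."""
    if q > gamma:        # marginal value exceeds the upper marginal cost: invest
        return (q - gamma) * k / c
    if q < -gamma:       # disinvest
        return (q + gamma) * k / c
    return 0.0           # range of inaction

qs = [-0.5, -0.2, 0.0, 0.1, 0.2, 0.5]
print([I_star(q, k=1.0) for q in qs])  # ≈ [-0.3, 0, 0, 0, 0, 0.3]: non-decreasing, flat around 0
```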

SLIDE 50

Optimal choice of v

If v = 0, then I = 0 and qI − vC(I, k) = 0. If v = 1, then Ψ(q, k) = qI∗(q, k) − C(I∗(q, k), k).

Look at the shape of Ψ(q, k):

Ψ_q(q, k) = I∗(q, k) < 0 if q < C_I(0, k)⁻
Ψ_q(q, k) = 0 if C_I(0, k)⁻ < q < C_I(0, k)⁺
Ψ_q(q, k) = I∗(q, k) > 0 otherwise

Also Ψ_qq(q, k) = I∗_q(q, k) > 0 in the action range.

Result: Ψ(q, k) is convex in q and attains its minimum on the interval C_I(0, k)⁻ < q < C_I(0, k)⁺.

SLIDE 51

Optimal policy:

Let q1 ≤ q2 be the roots of Ψ(q, k) = 0. The optimal policy is then:

I(q, k) = I∗(q, k) < 0 if q < q1
I(q, k) = 0 if q1 ≤ q ≤ q2
I(q, k) = I∗(q, k) > 0 if q > q2

Possible cases:

  • Unique root: occurs with no fixed costs and differentiability. Implies no range of inaction.
  • Exactly two roots: only occurs with fixed costs. Implies a range of inaction.
  • Continuum of roots: occurs if there is no fixed cost but non-differentiability. Implies a range of inaction.

SLIDE 52

Comments:

The range of inaction depends on adjustment costs, not on the π function or the ε process. If there are fixed costs or non-differentiability of C(·, k) at zero, we have a non-degenerate range of inaction. With fixed costs, the optimal policy I(q, k) has a discontinuity.

SLIDE 53

Solving for q:

Differentiate the Bellman equation with respect to k:

rV_k = π_k(k, ε) − vC_k(I, k) − δq + q_k(I − δk) + µ(ε)V_εk + ½σ²(ε)V_εεk

Get E(dq) using Ito’s Lemma:

E(dq) = [q_k(I − δk) + µ(ε)q_ε + ½σ²(ε)q_εε]dt

Here we use the fact that q = V_k, so q_ε = V_kε and q_εε = V_kεε. From the Bellman equation we now have:

(r + δ)q = π_k(k, ε) − vC_k(I, k) + (1/dt)E(dq)

i.e. the required return on the marginal unit of capital equals its marginal product, net of marginal adjustment costs, plus the expected capital gain.

SLIDE 54

Solution

Lemma: Suppose x_t is a diffusion process and a > 0. Then

x_t = E_t ∫_0^∞ g_{t+s} e^(−as)ds

is a solution to (1/dt)E_t(dx) − a x_t + g_t = 0. This implies

q_t = E_t ∫_0^∞ [π_k(k_{t+s}, ε_{t+s}) − v_{t+s}C_k(I_{t+s}, k_{t+s})] e^(−(r+δ)s)ds

So q_t is the expected present discounted value of the marginal return to capital.

SLIDE 55

Relating q to observables:

If π and vC(I, k) are linearly homogeneous, then q_0 = V_0/k_0. To show this, consider

(1/dt)E(d(qk)) = (1/dt)E(dq)k + q(dk/dt)
              = [(r + δ)q − π_k + vC_k]k + q(I − δk)

Apply linear homogeneity to get

(1/dt)E(d(qk)) = rqk − (π − vC(I, k)) + qI − vC_I I

If I ≠ 0, then v = 1 and q = C_I; if I = 0, then v = 0. So the last two terms cancel, and

(1/dt)E(d(qk)) = r(qk) − (π − vC(I, k))

Again apply the lemma:

q_0 k_0 = E_0 ∫_0^∞ [π(k_s, ε_s) − v_s C(I_s, k_s)] e^(−rs)ds = V_0
SLIDE 56

Example: Linear homogeneity and fixed cost = bk.

Assume C(I, k) = k c(I/k, 1) = k G(I/k). Then

I/k = G′⁻¹(q) < 0 if q ≤ q1
I/k = 0 if q1 < q < q2
I/k = G′⁻¹(q) > 0 if q ≥ q2

Example of π:

π(k, p) = max_L { p L^α k^(1−α) − wL } = h p^θ k

where h = (1 − α)α^(α/(1−α)) w^(−α/(1−α)) > 0 and θ = 1/(1 − α) > 1. Then

q_t = h E_t ∫_0^∞ p_{t+s}^θ e^(−(r+δ)s)ds

SLIDE 57

Specific example

Suppose dp = σp dz, which implies ln(p_{t+s}/p_t) ∼ N(−½σ²s, σ²s) and

E_t(p_{t+s}^θ) = p_t^θ exp(½θ(θ − 1)σ²s)

so that

q_t = h p_t^θ / (r + δ − ½θ(θ − 1)σ²)

In this case, as σ increases, Tobin’s q will increase. So will investment.
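A quick evaluation of this formula with illustrative parameters (note the formula requires r + δ > ½θ(θ−1)σ² for the present value to converge):

```python
def q_marginal(p, h, theta, r, delta, sigma):
    """q_t = h p^theta / (r + delta - 0.5 theta (theta - 1) sigma^2),
    valid while the denominator remains positive."""
    denom = r + delta - 0.5 * theta * (theta - 1.0) * sigma ** 2
    assert denom > 0, "present value diverges"
    return h * p ** theta / denom

h, theta, r, delta, p = 1.0, 2.0, 0.06, 0.08, 1.0
qs = [q_marginal(p, h, theta, r, delta, s) for s in (0.0, 0.1, 0.2, 0.3)]
print(qs)  # ≈ [7.14, 7.69, 10.0, 20.0]: increasing in sigma
```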

SLIDE 58

Intuition

Profit functions are convex in prices. Although the total profit function is linearly homogeneous, given a quasi-fixed factor (capital), the flexibility of labor implies convexity with respect to prices. Thus, a mean-preserving spread in the price implies higher expected variable profits and therefore more investment.

Comment: this is true even if there are irreversibilities and fixed costs to investing. Why? In this model, fixed costs are flow fixed costs, i.e. whenever investment is non-zero, you must pay the cost. There isn’t really an option-to-invest aspect to the model (hence the range of inaction does not depend explicitly on the stochastic process for ε). The model does, however, nicely illustrate the point made by Hartman, Abel and others that investment may increase with uncertainty owing to the fact that profit functions are convex in prices.

SLIDE 59

References:

Dixit, A. and R. Pindyck, Investment under Uncertainty, Princeton University Press, 1994, chapters 2-5.

Abel, Andrew and Janice Eberly, “A Unified Model of Investment under Uncertainty”, American Economic Review, 1994.

Pindyck, Robert, “Irreversibility, Uncertainty and Investment”, Journal of Economic Literature, Vol. XXIX, 1991, 1110-1148.

Sodal, Sigbjorn, “A Simplified Exposition of Smooth Pasting”, Economics Letters, 58, 1998, 217-223.