SLIDE 1

On the Military Use of Artificial Intelligence: Ethical, International-Law, and Technical Problems

  • Prof. Dr. rer. nat. Wolfgang Koch

dies academicus 2019, Universität Bonn

  • May 15, 2019, 4:15 pm, Hörsaal I, Hauptgebäude

Artificial intelligence (AI) has long pervaded every part of the technosphere surrounding us. Increasingly this also holds for defence and security. What risks arise from this? Do they remain manageable, not only technically but also ethically and under international law? Just one example is the Future Combat Air System (FCAS), the European air combat system of the future, on whose development Germany and France agreed in 2017. Unlike previous systems, FCAS is far more than a combat aircraft: it is a complex system of systems that will be controllable only by means of algorithms. Like hardly any large-scale project before it, the integration of unmanned subsystems into FCAS raises the question of how ethical and international-law rules are to be implemented technically. Above all, the requirement must be fulfilled that "any weapon engagement by unmanned aerial vehicles takes place exclusively under human control", as the German government's "Militärische Luftfahrtstrategie" (Military Aviation Strategy) stipulates. For being killed by a machine is the ultimate human indignity. The coalition agreement also applies: "We reject autonomous weapon systems that are beyond human control."

The discussion of the ethical, legal, and engineering problems raised not only by the military use of AI must be embedded in a society-wide discourse, to which this lecture aims to contribute. That discourse will stay with us for a long time.

Sensor Data Fusion - Methods and Applications, 4th Lecture on May 8, 2019 — slide 1

SLIDE 2

More Precise Formulation of the BAYESian Approach

Consider a set of measurements Z_l = \{z_j^l\}_{j=1}^{m_l} of a single or a multiple target state x_l at time instants t_l, l = 1, \ldots, k, and the time series:

Z^k = \{Z_k, m_k, Z_{k-1}, m_{k-1}, \ldots, Z_1, m_1\} = \{Z_k, m_k, Z^{k-1}\}!

Based on Z^k, what can be learned about the object states x_l at t_1, \ldots, t_k, t_{k+1}, \ldots, i.e. for the past, present, and future? Evidently the answer is given by calculating the pdf p(x_l | Z^k)!

Multiple sensor measurement fusion: Calculate p(x | Z_1^k, \ldots, Z_N^k)! Prerequisites:

  • communication lines
  • common coordinate system: sensor registration

SLIDE 3

How to calculate the pdf p(x_l|Z^k)?

Consider at first the present time: l = k. An observation: BAYES' rule:

p(x_k|Z^k) = p(x_k | Z_k, m_k, Z^{k-1}) = \frac{p(Z_k, m_k | x_k, Z^{k-1})\; p(x_k | Z^{k-1})}{\int dx_k\; p(Z_k, m_k | x_k, Z^{k-1})\; p(x_k | Z^{k-1})}

SLIDE 4

How to calculate the pdf p(x_l|Z^k)?

Consider at first the present time: l = k. An observation: BAYES' rule:

p(x_k|Z^k) = p(x_k | Z_k, m_k, Z^{k-1}) = \frac{\overbrace{p(Z_k, m_k | x_k, Z^{k-1})}^{\text{likelihood function}}\;\; \overbrace{p(x_k | Z^{k-1})}^{\text{prediction}}}{\int dx_k\; p(Z_k, m_k | x_k, Z^{k-1})\; p(x_k | Z^{k-1})}

  • p(x_k|Z^{k-1}) is a prediction of the target state at time t_k based on all measurements in the past.

  • p(Z_k, m_k|x_k) ∝ ℓ(x_k; Z_k, m_k) describes what the current sensor output Z_k, m_k can say about the current target state x_k and is called the likelihood function.

SLIDE 5

  • p(x_k|Z^{k-1}) is a prediction for time t_k based on all measurements in the past. Idea: iteration!

p(x_k|Z^{k-1}) = \int dx_{k-1}\; \underbrace{p(x_k, x_{k-1}|Z^{k-1})}_{\text{marginal pdf}} = \int dx_{k-1}\; \underbrace{p(x_k|x_{k-1}, Z^{k-1})}_{\text{object dynamics!}}\; p(x_{k-1}|Z^{k-1}) \quad \text{(notion of a conditional pdf)}

sometimes: p(x_k|x_{k-1}) = \mathcal{N}\big(x_k;\; \underbrace{F_{k|k-1}\, x_{k-1}}_{\text{deterministic}},\; \underbrace{D_{k|k-1}}_{\text{random}}\big) \quad \text{(linear GAUSS-MARKOV)}

  • p(Z_k, m_k|x_k) ∝ ℓ(x_k; Z_k, m_k) describes what the current sensor output Z_k, m_k can say about the current target state x_k and is called the likelihood function.

sometimes: ℓ(x_k; z_k) = \mathcal{N}(z_k;\; H_k x_k,\; R_k) \quad \text{(1 target, 1 measurement)}

iteration formula:

p(x_k|Z^k) = \frac{\ell(x_k; z_k)\, \int dx_{k-1}\; p(x_k|x_{k-1})\; p(x_{k-1}|Z^{k-1})}{\int dx_k\; \ell(x_k; z_k)\, \int dx_{k-1}\; p(x_k|x_{k-1})\; p(x_{k-1}|Z^{k-1})}
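The iteration formula above can be made concrete on a discrete grid. The following is a minimal 1-D sketch (all numbers and names are illustrative, not from the lecture): a random-walk transition density plays the role of p(x_k|x_{k-1}), a Gaussian likelihood the role of ℓ(x_k; z_k), and the integrals become sums over the grid.

```python
import numpy as np

# Grid-based illustration of the Bayesian iteration formula
#   p(x_k|Z^k) ∝ l(x_k; z_k) ∫ dx_{k-1} p(x_k|x_{k-1}) p(x_{k-1}|Z^{k-1})
grid = np.linspace(-10.0, 10.0, 401)       # 1-D state grid
dx = grid[1] - grid[0]

def gauss(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

q, r = 0.5, 1.0                            # process / measurement variance
# transition kernel: trans[i, j] = p(grid[i] | grid[j]), random walk
trans = gauss(grid[:, None], grid[None, :], q)

posterior = gauss(grid, 0.0, 25.0)         # broad prior p(x_0)
for z in [1.0, 1.5, 2.2]:                  # a few simulated measurements
    prediction = trans @ posterior * dx    # Chapman-Kolmogorov integral
    posterior = gauss(z, grid, r) * prediction  # multiply by likelihood
    posterior /= posterior.sum() * dx      # normalize (the denominator)

estimate = (grid * posterior).sum() * dx   # posterior mean after 3 updates
```

The same loop structure (predict, multiply by the likelihood, normalize) reappears below in closed form as the Kalman filter.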

SLIDE 6

A Popular Model for Object Evolutions

Piecewise Constant White Acceleration Model

Consider state vectors x_k = (r_k^\top, \dot r_k^\top)^\top (position, velocity). For known x_{k-1} and without external influences we have, with \Delta T_k = t_k - t_{k-1}:

x_k = \begin{pmatrix} I & \Delta T_k\, I \\ O & I \end{pmatrix} \begin{pmatrix} r_{k-1} \\ \dot r_{k-1} \end{pmatrix} =: F_{k|k-1}\, x_{k-1},

see blackboard! Assume during the interval \Delta T_k a constant acceleration a_k causing the state evolution:

\begin{pmatrix} \tfrac{1}{2}\Delta T_k^2\, I \\ \Delta T_k\, I \end{pmatrix} a_k =: G_k a_k,

a linear transform! Let a_k be a GAUSSian RV with pdf p(a_k) = \mathcal{N}(a_k;\; o,\; \Sigma_k^2 I); we therefore have:

p(G_k a_k) = \mathcal{N}(G_k a_k;\; o,\; \Sigma_k^2\, G_k G_k^\top).
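The blocks F_{k|k-1} and G_k can be built directly in NumPy. A short numerical sketch (dimension d = 2 and the numeric values are illustrative assumptions) verifying that \Sigma_k^2 G_k G_k^\top reproduces the block matrix given on the next slides:

```python
import numpy as np

# Piecewise constant white acceleration model in d = 2 dimensions.
d, dT, Sigma = 2, 0.5, 3.0
I, O = np.eye(d), np.zeros((d, d))

F = np.block([[I, dT * I],
              [O, I]])                    # deterministic evolution F_{k|k-1}
G = np.vstack([0.5 * dT**2 * I, dT * I])  # gain of the acceleration a_k
D = Sigma**2 * (G @ G.T)                  # process noise covariance D_{k|k-1}

# block matrix expected from the slide-8 formula
D_expected = Sigma**2 * np.block(
    [[0.25 * dT**4 * I, 0.5 * dT**3 * I],
     [0.5 * dT**3 * I, dT**2 * I]])
```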

SLIDE 7

Remember: Affine Transforms of GAUSSian RVs

\mathcal{N}(x;\; \bar x,\; P) \;\xrightarrow{\;y = t + Tx\;}\; \mathcal{N}(y;\; t + T\bar x,\; TPT^\top)

p(y) = \int dx\; p(x, y) = \int dx\; p(y|x)\, p(x) = \int dx\; \delta(y - t - Tx)\, p(x)

A possible representation: \delta(y - t - Tx) = \mathcal{N}(y;\; t + Tx,\; R) with R \to O!

p(y) = \int dx\; \mathcal{N}(y - t;\; Tx,\; R)\; \mathcal{N}(x;\; \bar x,\; P) = \mathcal{N}(y;\; t + T\bar x,\; TPT^\top + R) \quad \text{for } R \to O;

product formula! Also true if dim(x) ≠ dim(y)!

SLIDE 8

Therefore: p(x_k|x_{k-1}) = \mathcal{N}(x_k;\; F_{k|k-1}\, x_{k-1},\; D_{k|k-1}) with

F_{k|k-1} = \begin{pmatrix} I & \Delta T_k\, I \\ O & I \end{pmatrix}, \qquad D_{k|k-1} = \Sigma_k^2 \begin{pmatrix} \tfrac{1}{4}\Delta T_k^4\, I & \tfrac{1}{2}\Delta T_k^3\, I \\ \tfrac{1}{2}\Delta T_k^3\, I & \Delta T_k^2\, I \end{pmatrix}

SLIDE 9

Therefore: p(x_k|x_{k-1}) = \mathcal{N}(x_k;\; F_{k|k-1}\, x_{k-1},\; D_{k|k-1}) with

F_{k|k-1} = \begin{pmatrix} I & \Delta T_k\, I \\ O & I \end{pmatrix}, \qquad D_{k|k-1} = \Sigma_k^2 \begin{pmatrix} \tfrac{1}{4}\Delta T_k^4\, I & \tfrac{1}{2}\Delta T_k^3\, I \\ \tfrac{1}{2}\Delta T_k^3\, I & \Delta T_k^2\, I \end{pmatrix}

Exercise 4.1

Consider x_k = (r_k^\top, \dot r_k^\top, \ddot r_k^\top)^\top (position, velocity, acceleration). Show that F_{k|k-1} and D_{k|k-1} = \Sigma_k^2\, G_k G_k^\top (constant acceleration rates) are given by:

F_{k|k-1} = \begin{pmatrix} I & \Delta T_k\, I & \tfrac{1}{2}\Delta T_k^2\, I \\ O & I & \Delta T_k\, I \\ O & O & I \end{pmatrix}, \qquad D_{k|k-1} = \Sigma_k^2 \begin{pmatrix} \tfrac{1}{4}\Delta T_k^4\, I & \tfrac{1}{2}\Delta T_k^3\, I & \tfrac{1}{2}\Delta T_k^2\, I \\ \tfrac{1}{2}\Delta T_k^3\, I & \Delta T_k^2\, I & \Delta T_k\, I \\ \tfrac{1}{2}\Delta T_k^2\, I & \Delta T_k\, I & I \end{pmatrix}

with \Delta T_k = t_k - t_{k-1}. Reasonable choice: \tfrac{1}{2}\, q_{\max} \le \Sigma_k \le q_{\max}
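The stated result of Exercise 4.1 (the algebra itself is left to the exercise) can at least be checked numerically: build G_k = (\tfrac{1}{2}\Delta T_k^2 I,\ \Delta T_k I,\ I)^\top and compare \Sigma_k^2 G_k G_k^\top with the block matrix above. Dimensions and values below are illustrative.

```python
import numpy as np

# Numerical check of the 3-block (position, velocity, acceleration) result.
d, dT, Sigma = 2, 0.7, 2.0
I, O = np.eye(d), np.zeros((d, d))

F = np.block([[I, dT * I, 0.5 * dT**2 * I],
              [O, I, dT * I],
              [O, O, I]])

G = np.vstack([0.5 * dT**2 * I, dT * I, I])  # constant acceleration rates
D = Sigma**2 * (G @ G.T)

D_expected = Sigma**2 * np.block(
    [[0.25 * dT**4 * I, 0.5 * dT**3 * I, 0.5 * dT**2 * I],
     [0.5 * dT**3 * I, dT**2 * I, dT * I],
     [0.5 * dT**2 * I, dT * I, I]])
```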

SLIDE 10

A More Insightful Look at a Data Fusion Algorithm

KALMAN filter: x_k = (r_k^\top, \dot r_k^\top)^\top, \quad Z^k = \{z_k, Z^{k-1}\}

initiation: p(x_0) = \mathcal{N}(x_0;\; x_{0|0},\; P_{0|0}), \quad initial ignorance: P_{0|0} 'large'

prediction: \mathcal{N}(x_{k-1};\; x_{k-1|k-1},\; P_{k-1|k-1}) \;\xrightarrow[F_{k|k-1},\, D_{k|k-1}]{\text{dynamics model}}\; \mathcal{N}(x_k;\; x_{k|k-1},\; P_{k|k-1})

  x_{k|k-1} = F_{k|k-1}\, x_{k-1|k-1}
  P_{k|k-1} = F_{k|k-1}\, P_{k-1|k-1}\, F_{k|k-1}^\top + D_{k|k-1}

filtering: \mathcal{N}(x_k;\; x_{k|k-1},\; P_{k|k-1}) \;\xrightarrow[\text{sensor model: } H_k,\, R_k]{\text{current measurement } z_k}\; \mathcal{N}(x_k;\; x_{k|k},\; P_{k|k})

  x_{k|k} = x_{k|k-1} + W_{k|k-1}\, \nu_{k|k-1}, \quad \nu_{k|k-1} = z_k - H_k\, x_{k|k-1}
  P_{k|k} = P_{k|k-1} - W_{k|k-1}\, S_{k|k-1}\, W_{k|k-1}^\top, \quad S_{k|k-1} = H_k\, P_{k|k-1}\, H_k^\top + R_k
  W_{k|k-1} = P_{k|k-1}\, H_k^\top\, S_{k|k-1}^{-1} \quad \text{'KALMAN gain matrix'}
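The recursion above translates almost line by line into code. A minimal sketch (function names and the constant-velocity example values are illustrative, not from the lecture code base):

```python
import numpy as np

def predict(x, P, F, D):
    """Prediction: N(x_{k-1|k-1}, P_{k-1|k-1}) -> N(x_{k|k-1}, P_{k|k-1})."""
    return F @ x, F @ P @ F.T + D

def update(x, P, z, H, R):
    """Filtering step with measurement z and sensor model (H, R)."""
    nu = z - H @ x                     # innovation nu_{k|k-1}
    S = H @ P @ H.T + R                # innovation covariance S_{k|k-1}
    W = P @ H.T @ np.linalg.inv(S)     # Kalman gain matrix W_{k|k-1}
    return x + W @ nu, P - W @ S @ W.T

# one cycle for a 2-D constant-velocity model with position measurements
dT = 1.0
F = np.block([[np.eye(2), dT * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])
D = 0.1 * np.eye(4)
H = np.hstack([np.eye(2), np.zeros((2, 2))])
R = 25.0 * np.eye(2)

x, P = np.zeros(4), 1e4 * np.eye(4)    # initial ignorance: P_{0|0} 'large'
x, P = predict(x, P, F, D)
x, P = update(x, P, np.array([10.0, -5.0]), H, R)
```

With a 'large' initial covariance the first update essentially adopts the measurement, as expected from the gain formula.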

SLIDE 11

KALMAN filter recursion (initiation, prediction, filtering) as on slide 10.

Exercise 4.2

In your sensor simulator, choose a sensor at position r_s, for example r_s = (0, 0)^\top, that produces measurements z_k of the Cartesian target positions H x_k from your ground truth generator. Use the measurement covariance matrix R = \sigma_c^2\, \mathrm{diag}[1, 1], \sigma_c = 50 m.

Program your first KALMAN filter using a constant acceleration or the van Keuk model. Visualize your results nicely! Compare the ground truth, the measurements, and the estimates!
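One possible setup for Exercise 4.2 is sketched below, with assumptions labelled: the ground truth follows the white acceleration model of slide 6, a single sensor at r_s = (0, 0)^\top produces Cartesian position measurements with \sigma_c = 50 m, and plotting is left out.

```python
import numpy as np

# Exercise 4.2 sketch (modelling choices here are assumptions).
rng = np.random.default_rng(0)
dT, sigma_c, Sigma = 1.0, 50.0, 1.0
I2, O2 = np.eye(2), np.zeros((2, 2))

F = np.block([[I2, dT * I2], [O2, I2]])
G = np.vstack([0.5 * dT**2 * I2, dT * I2])
D = Sigma**2 * (G @ G.T)               # white acceleration model
H = np.hstack([I2, O2])                # sensor at (0,0) measures positions
R = sigma_c**2 * I2

# ground truth generator and sensor simulator
truth = [np.array([0.0, 0.0, 100.0, 50.0])]   # pos (0,0), vel (100,50)
for _ in range(50):
    truth.append(F @ truth[-1] + G @ rng.normal(0.0, Sigma, 2))
meas = [H @ x + rng.normal(0.0, sigma_c, 2) for x in truth]

# Kalman filter, initialized with 'large' covariance
x, P = np.zeros(4), 1e6 * np.eye(4)
errs = []
for z in meas:
    x, P = F @ x, F @ P @ F.T + D                  # prediction
    S = H @ P @ H.T + R
    W = P @ H.T @ np.linalg.inv(S)
    x, P = x + W @ (z - H @ x), P - W @ S @ W.T    # filtering
    errs.append(np.linalg.norm(x[:2] - truth[len(errs)][:2]))
```

Comparing `errs` with the raw measurement errors shows the filtering gain once the transient has died out.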

SLIDE 12

S_k Sensors Producing Target Measurements at the Same Time

One possibility:

H_k x_k = \begin{pmatrix} H_k^1 \\ \vdots \\ H_k^{S_k} \end{pmatrix} x_k, \qquad R_k = \mathrm{diag}\big[R_k^1, \ldots, R_k^{S_k}\big]

SLIDE 13

S_k Sensors Producing Target Measurements at the Same Time

One possibility:

H_k x_k = \begin{pmatrix} H_k^1 \\ \vdots \\ H_k^{S_k} \end{pmatrix} x_k, \qquad R_k = \mathrm{diag}\big[R_k^1, \ldots, R_k^{S_k}\big]

Alternatively, provided that H_k^i = H_k, i = 1, \ldots, S_k:

p(z_k^1, z_k^2 | x_k) = \underbrace{p(z_k^1|x_k)\; p(z_k^2|x_k)}_{\text{independent sensors}} = \mathcal{N}(z_k^1;\; H_k x_k,\; R_k^1)\; \mathcal{N}(z_k^2;\; H_k x_k,\; R_k^2)


SLIDE 15

A Useful Product Formula for GAUSSians

\mathcal{N}(z;\; Hx,\; R)\; \mathcal{N}(x;\; y,\; P) = \underbrace{\mathcal{N}(z;\; Hy,\; S)}_{\text{independent of } x} \times \mathcal{N}\big(x;\; y + W\nu,\; P - WSW^\top\big)

with \mathcal{N}\big(x;\; y + W\nu,\; P - WSW^\top\big) = \mathcal{N}\big(x;\; Q(P^{-1}y + H^\top R^{-1} z),\; Q\big) and

\nu = z - Hy, \quad S = HPH^\top + R, \quad W = PH^\top S^{-1}, \quad Q^{-1} = P^{-1} + H^\top R^{-1} H.
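The product formula is an exact identity and can be checked numerically by evaluating both sides at random points. The dimensions and matrices below are illustrative (dim x = 2, dim z = 1, which also exercises the case dim(x) ≠ dim(z)):

```python
import numpy as np

def gauss(u, mean, C):
    """Multivariate Gaussian density N(u; mean, C)."""
    u, mean, C = np.atleast_1d(u), np.atleast_1d(mean), np.atleast_2d(C)
    d = u - mean
    return np.exp(-0.5 * d @ np.linalg.solve(C, d)) / np.sqrt(
        np.linalg.det(2 * np.pi * C))

rng = np.random.default_rng(1)
H = np.array([[1.0, 0.0]])                 # 1x2: z observes first component
R = np.array([[0.5]])
y = np.array([1.0, -2.0])
P = np.array([[2.0, 0.3], [0.3, 1.0]])

S = H @ P @ H.T + R
W = P @ H.T @ np.linalg.inv(S)

ok = True
for _ in range(5):
    x, z = rng.normal(size=2), rng.normal(size=1)
    nu = z - H @ y
    lhs = gauss(z, H @ x, R) * gauss(x, y, P)
    rhs = gauss(z, H @ y, S) * gauss(x, y + W @ nu, P - W @ S @ W.T)
    ok = ok and np.isclose(lhs, rhs)
```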

SLIDE 16

S_k Sensors Producing Target Measurements at the Same Time

One possibility:

H_k x_k = \begin{pmatrix} H_k^1 \\ \vdots \\ H_k^{S_k} \end{pmatrix} x_k, \qquad R_k = \mathrm{diag}\big[R_k^1, \ldots, R_k^{S_k}\big]

Alternatively, provided that H_k^i = H_k, i = 1, \ldots, S_k:

p(z_k^1, z_k^2 | x_k) = \underbrace{p(z_k^1|x_k)\; p(z_k^2|x_k)}_{\text{independent sensors}}
  = \mathcal{N}(z_k^1;\; H_k x_k,\; R_k^1)\; \mathcal{N}(z_k^2;\; H_k x_k,\; R_k^2)
  = \mathcal{N}(H_k x_k;\; z_k^1,\; R_k^1)\; \mathcal{N}(z_k^2;\; H_k x_k,\; R_k^2)
  \propto \mathcal{N}\Big(H_k x_k;\; \underbrace{R_k\big({R_k^1}^{-1} z_k^1 + {R_k^2}^{-1} z_k^2\big)}_{=\, z_k},\; \underbrace{\big({R_k^1}^{-1} + {R_k^2}^{-1}\big)^{-1}}_{=\, R_k}\Big)
  = \mathcal{N}(z_k;\; H_k x_k,\; R_k)

SLIDE 17

Exercise 4.3

Generalize to the case S_k > 2 (induction argument)!

One possible fusion strategy: create a single effective measurement by preprocessing the individual sensor measurements!

z_k = R_k \sum_{s=1}^{S_k} \big(R_k^s\big)^{-1} z_k^s \qquad \text{(weighted arithmetic mean of the measurements)}

R_k = \Big(\sum_{s=1}^{S_k} \big(R_k^s\big)^{-1}\Big)^{-1} \qquad \text{(harmonic mean of the measurement covariances)}

A typical structure for fusion equations!
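The preprocessing rule above is a few lines of NumPy. As a sanity check, with equal measurement covariances it reduces to the plain arithmetic mean with covariance shrunk by the number of sensors (function name and values are illustrative):

```python
import numpy as np

def fuse(zs, Rs):
    """Fuse measurements zs[s] (covariances Rs[s]) of the same H_k x_k."""
    R_inv = [np.linalg.inv(R) for R in Rs]
    R_fused = np.linalg.inv(sum(R_inv))              # harmonic-mean covariance
    z_fused = R_fused @ sum(Ri @ z for Ri, z in zip(R_inv, zs))
    return z_fused, R_fused                          # effective measurement

# equal covariances: fused z is the arithmetic mean, covariance R/2
zs = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
Rs = [4.0 * np.eye(2), 4.0 * np.eye(2)]
z_fused, R_fused = fuse(zs, Rs)
```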

SLIDE 18

KALMAN filter recursion (initiation, prediction, filtering) as on slide 10.

Exercise 4.4

In your sensor simulator, choose an arbitrary number S of sensors at positions r_s, s = 1, \ldots, S, and produce measurements z_k^s, s = 1, \ldots, S, of the Cartesian target positions H x_k from your ground truth generator. Use both algorithms (stacked measurements and preprocessing)! Discuss pros & cons!

SLIDE 19

Towards real-world sensors: range, azimuth data

  • GAUSSian measurements in polar coordinates:

z_k^p = (r_k, \varphi_k)^\top, \quad \text{error covariance matrix: } R^p = \begin{pmatrix} \sigma_r^2 & 0 \\ 0 & \sigma_\varphi^2 \end{pmatrix}, \quad r, \varphi \text{ independent}

  • transformation into Cartesian target positions:

z_k^c = t[z_k^p] = r_k \begin{pmatrix} \cos\varphi_k \\ \sin\varphi_k \end{pmatrix}

A non-affine transformation! A TAYLOR-series approximation of t[z_k^p] up to first order, however, would be affine!

SLIDE 20

Towards real-world sensors: range, azimuth data

  • GAUSSian measurements in polar coordinates:

z_k^p = (r_k, \varphi_k)^\top, \quad \text{error covariance matrix: } R^p = \begin{pmatrix} \sigma_r^2 & 0 \\ 0 & \sigma_\varphi^2 \end{pmatrix}, \quad r, \varphi \text{ independent}

  • TAYLOR approximation around r_{k|k-1} = (r_{k|k-1}, \varphi_{k|k-1})^\top:

z_k^c = t[z_k^p] = r_k \begin{pmatrix} \cos\varphi_k \\ \sin\varphi_k \end{pmatrix} \approx t[r_{k|k-1}] + T\,(z_k^p - r_{k|k-1})

T = \frac{\partial t[r_{k|k-1}]}{\partial r_{k|k-1}} = \begin{pmatrix} \cos\varphi_{k|k-1} & -r_{k|k-1}\sin\varphi_{k|k-1} \\ \sin\varphi_{k|k-1} & r_{k|k-1}\cos\varphi_{k|k-1} \end{pmatrix} = \underbrace{\begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix}}_{\text{rotation } D_\varphi} \underbrace{\begin{pmatrix} 1 & 0 \\ 0 & r \end{pmatrix}}_{\text{dilation } S_r}

(writing r = r_{k|k-1}, \varphi = \varphi_{k|k-1} for short)

SLIDE 21

Towards real-world sensors: range, azimuth data

  • GAUSSian measurements in polar coordinates:

z_k^p = (r_k, \varphi_k)^\top, \quad \text{error covariance matrix: } R^p = \begin{pmatrix} \sigma_r^2 & 0 \\ 0 & \sigma_\varphi^2 \end{pmatrix}, \quad r, \varphi \text{ independent}

  • TAYLOR approximation around r_{k|k-1} = (r_{k|k-1}, \varphi_{k|k-1})^\top:

z_k^c = t[z_k^p] = r_k \begin{pmatrix} \cos\varphi_k \\ \sin\varphi_k \end{pmatrix} \approx t[r_{k|k-1}] + T\,(z_k^p - r_{k|k-1})

T = \frac{\partial t[r_{k|k-1}]}{\partial r_{k|k-1}} = \underbrace{\begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix}}_{\text{rotation } D_\varphi} \underbrace{\begin{pmatrix} 1 & 0 \\ 0 & r \end{pmatrix}}_{\text{dilation } S_r}

  • Cartesian error covariance (time dependent):

T R^p T^\top = D_\varphi S_r\, R^p\, S_r D_\varphi^\top = D_\varphi \begin{pmatrix} \sigma_r^2 & 0 \\ 0 & (r\sigma_\varphi)^2 \end{pmatrix} D_\varphi^\top

  • sensor fusion: the sensor-to-target geometry enters into T R^p T^\top
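The decomposition T = D_\varphi S_r can be verified against a finite-difference Jacobian of the polar-to-Cartesian map, and the resulting Cartesian covariance computed; the expansion point and standard deviations below are illustrative:

```python
import numpy as np

def t(z):
    """Polar-to-Cartesian map t[(r, phi)] = r (cos phi, sin phi)^T."""
    r, phi = z
    return r * np.array([np.cos(phi), np.sin(phi)])

r0, phi0 = 1000.0, np.deg2rad(30.0)        # illustrative expansion point
D_phi = np.array([[np.cos(phi0), -np.sin(phi0)],
                  [np.sin(phi0), np.cos(phi0)]])   # rotation
S_r = np.diag([1.0, r0])                           # dilation
T = D_phi @ S_r                                    # claimed Jacobian

# central finite-difference Jacobian for comparison
eps, z0 = 1e-6, np.array([r0, phi0])
T_num = np.column_stack(
    [(t(z0 + eps * e) - t(z0 - eps * e)) / (2 * eps) for e in np.eye(2)])

# Cartesian error covariance for sigma_r = 10 m, sigma_phi = 0.2 deg
Rp = np.diag([10.0**2, np.deg2rad(0.2)**2])
R_cart = T @ Rp @ T.T                              # T R^p T^T
```

Note how `R_cart` stretches with range: the cross-range standard deviation is r \sigma_\varphi, which is what makes the sensor-to-target geometry matter for fusion.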

SLIDE 22

[Figure: sensor-to-target geometry for three sensors s1, s2, s3]

Multiple sensor fusion: the sensor-to-target geometry enters into T R^p T^\top. Typical of radar, sonar, laser scanner (lidar), cameras, microphones, ...

SLIDE 23

KALMAN filter recursion (initiation, prediction, filtering) as on slide 10.

Exercise 4.5

Do the same as in Exercise 4.4, but use sensors that produce range and azimuth measurements of the target positions.