Adversarial AI in Cyber Security WHO AM I - - PowerPoint PPT Presentation



SLIDE 1

Adversarial AI in Cyber Security (資訊安全中的人工智能對抗)

張佳彥

SLIDE 2

WHO AM I

  • Joined Trend Micro in 2009
      – Infra Developer
      – Threat Researcher
      – Machine Learning Researcher
  • Joined the XGen ML project in 2015
  • Now leading the Machine Learning Research/Operations team of XGen
SLIDE 3

Agenda

  • What is Machine Learning?
  • What is Adversarial Machine Learning?
  • Adversarial ML Methodologies
  • Possible countermeasures
  • Conclusions
SLIDE 4

Machine Learning & Adversarial Machine Learning

SLIDE 5

XGen ML – Layered Protection

SLIDE 6

What is Machine Learning

SLIDE 7

What is Adversarial Machine Learning

Adversarial machine learning is a technique employed in the field of machine learning which attempts to fool models through malicious input.

  — Wikipedia
SLIDE 8

What is Adversarial Machine Learning

Image Recognition
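The image-recognition examples rest on one principle: nudge each input dimension in the direction that pushes the model's score across the decision boundary. A minimal sketch of this FGSM-style idea, using an invented 4-"pixel" image and hand-picked linear weights (both hypothetical stand-ins for a real image model):

```python
# Toy illustration of an adversarial example: a small perturbation, aligned
# against the sign of each weight (the FGSM idea), flips a linear classifier.
# The 4-"pixel" image and the weights are invented for this sketch.

def predict(weights, pixels):
    """Linear score: positive -> class A, negative -> class B."""
    return sum(w * p for w, p in zip(weights, pixels))

def fgsm_perturb(weights, pixels, epsilon):
    """Shift each pixel by epsilon against the sign of its weight."""
    return [p - epsilon * (1 if w > 0 else -1) for w, p in zip(weights, pixels)]

weights = [0.5, -0.3, 0.8, 0.1]   # hypothetical trained weights
image = [0.2, 0.4, 0.1, 0.9]      # scores positive: correctly classified

adversarial = fgsm_perturb(weights, image, epsilon=0.3)

print(predict(weights, image) > 0)        # True: original class
print(predict(weights, adversarial) > 0)  # False: decision flipped
```

Each pixel moved by at most 0.3, yet the classification flipped; on real image models the equivalent perturbation can be imperceptible to a human.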

SLIDE 9

What is Adversarial Machine Learning

Image Recognition

SLIDE 10

What is Adversarial Machine Learning

Spam Detection

Spam content padded with a "word salad" of unrelated benign words to evade detection
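The word-salad trick can be shown against the simplest possible filter: a score based on the fraction of known spam keywords. The keyword list and the 0.3 threshold below are invented for this sketch, not any real filter's values:

```python
# Toy "word salad" evasion: a naive spam score (fraction of known spam
# keywords in the message) is diluted by padding with benign filler words.
# The keyword list and the 0.3 threshold are invented for this sketch.

SPAM_WORDS = {"free", "winner", "prize", "lottery", "viagra"}

def spam_score(message):
    words = message.lower().split()
    return sum(w in SPAM_WORDS for w in words) / len(words)

spam = "free prize winner claim your lottery"
salad = spam + " meeting agenda quarterly report schedule lunch invoice weather forecast update"

print(spam_score(spam) > 0.3)   # True: flagged as spam
print(spam_score(salad) > 0.3)  # False: the word salad slips past the filter
```

The same dilution effect applies to real bag-of-words classifiers such as naive Bayes, which is why word salad was a common spam evasion technique.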

SLIDE 11

Adversarial ML Methodologies

SLIDE 12

Adversarial ML Methodologies

  • Evasion Attack
      – Black box
      – White box
  • Model Stealing
  • Poisoning Attack
SLIDE 13

Adversarial ML Methodologies

[Diagram: Training set → Train → Model → Predict → Prediction (classification); an evasion attack perturbs inputs at prediction time to cause a misclassification]

SLIDE 14

Adversarial ML Methodologies

[Diagram: Training set → Train → Model → Predict → Prediction (classification); a poisoning attack injects tainted samples (e.g. mislabelled cats/dogs) into the training set to cause misclassification]

SLIDE 15

Evasion

  • Black Box
      – The attacker can only probe the model with inputs and observe its outputs
  • White Box
      – The attacker knows the model's parameters and architecture in detail

[Diagram: black box — only Input → Model → Output is visible; white box — the model's internals are also visible]

SLIDE 16

Black Box Evasion: Iterative Random Attack

Evasion success rate ≈ 1/1000
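The iterative random attack is just a loop: mutate the sample at random, query the model, and stop once detection fails. The feature vector, mutation range, and threshold "model" below are invented stand-ins for a real black-box classifier queried over its input/output interface (this toy target evades far more easily than the roughly 1-in-1000 rate on the slide):

```python
import random

# Sketch of the iterative random black-box attack: repeatedly mutate the
# sample and query the model until it no longer detects it. The feature
# vector, mutation range, and threshold "model" are invented stand-ins.

def model_flags_malware(features):
    return sum(features) > 2.0  # hypothetical detection rule

def random_attack(features, tries=1000, seed=0):
    rng = random.Random(seed)
    for attempt in range(1, tries + 1):
        mutated = [f + rng.uniform(-0.5, 0.5) for f in features]
        if not model_flags_malware(mutated):  # evaded detection
            return attempt
    return None  # no evasive variant found within the query budget

sample = [1.0, 0.9, 0.8]  # currently detected: sum is 2.7 > 2.0
print(random_attack(sample))
```

Because every probe is independent and undirected, the expected number of queries grows quickly as the model gets harder to evade, which motivates the guided search on the next slide.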

SLIDE 17

Black Box Evasion: Genetic Algorithm

  • Baseline (seed): define n possible changes (the "DNA")
  • Random: mutate the seed to produce the 1st generation
  • Probe: query the model and select the lowest-scoring candidates
  • Random: mutate the survivors into the next generation, repeating for N generations…

Evasion success rate ≈ 1/100
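The steps above can be sketched as a small loop: mutate, probe, select the lowest-scoring survivors, repeat. The scoring "model", mutation range, and population sizes below are all invented for this sketch:

```python
import random

# Sketch of the genetic-algorithm evasion loop: start from a baseline
# sample, apply random changes (the "DNA"), probe the model for a score,
# keep the lowest-scoring survivors, and mutate them for the next
# generation. The scoring "model" and all numbers are invented stand-ins.

def model_score(features):
    """Stand-in detection score; evasion succeeds below 0.5."""
    return sum(features) / len(features)

def genetic_evasion(seed_sample, generations=20, population=10, rng_seed=1):
    rng = random.Random(rng_seed)
    survivors = [seed_sample]
    for _ in range(generations):
        # Randomly mutate survivors to form the next generation.
        candidates = [
            [max(0.0, f + rng.uniform(-0.2, 0.1)) for f in rng.choice(survivors)]
            for _ in range(population)
        ]
        # Probe the model and keep the lowest-scoring candidates.
        candidates.sort(key=model_score)
        survivors = candidates[:3]
        if model_score(survivors[0]) < 0.5:
            return survivors[0]  # evasive variant found
    return None

evasive = genetic_evasion([0.9, 0.8, 0.85])
print(evasive is not None)
```

Selection makes each generation at least as evasive as the last, which is why the slide's guided search succeeds about an order of magnitude more often than purely random probing.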

SLIDE 18

Poison Attack

  • Online-training systems are particularly exposed, since attackers can inject poisoned samples into the retraining data stream
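A minimal sketch of why online training is exposed, using a one-feature centroid classifier with invented numbers (a stand-in for any model that updates on every new labelled sample):

```python
# Sketch of a poisoning attack against online training: a centroid
# classifier updates its class mean with every new labelled sample, so an
# attacker who can submit mislabelled samples drags the "benign" centroid
# toward malicious territory. All numbers are invented for this sketch.

class OnlineCentroid:
    def __init__(self, start):
        self.mean = start
        self.count = 1

    def train(self, value):
        # Incremental (online) update of the running mean.
        self.count += 1
        self.mean += (value - self.mean) / self.count

benign = OnlineCentroid(0.2)  # benign-class centroid on a single feature
before = benign.mean

# Attacker repeatedly submits malicious-looking samples labelled "benign".
for _ in range(20):
    benign.train(0.9)

print(before, round(benign.mean, 2))  # centroid drifts from 0.2 toward 0.9
```

After the drift, genuinely malicious samples near 0.9 sit close to the "benign" centroid and are misclassified, which is the effect the cats/dogs diagram illustrates.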
SLIDE 19

Countermeasures

SLIDE 20

Adversarial ML Countermeasures

  • Evasion Attack – Black box
      – Abuse Protection
      – Model Retrain
          · Reactive
          · Proactive (GAN)
  • Evasion Attack – White box
      – Data/feature/model protection
  • Poisoning Attack
      – Data/Label quality control
SLIDE 21

Adversarial ML Countermeasures

  • Evasion Attack – Black box
      – Abuse Protection
      – Model Retrain
          · Reactive
          · Proactive (GAN)
  • Evasion Attack – White box
      – Data/feature/model protection
  • Poisoning Attack
      – Data/Label quality control
SLIDE 22

Adversarial ML Countermeasures

SLIDE 23

Adversarial ML Countermeasures

  • Evasion Attack – Black box
      – Abuse Protection
      – Model Retrain
          · Reactive
          · Proactive
  • Evasion Attack – White box
      – Data/feature/model protection
  • Poisoning Attack
      – Data/Label quality control
SLIDE 24

Adversarial ML Countermeasures

The hacker generates malware to cheat the classifier; the security company builds a model to identify malware.

SLIDE 25

Adversarial ML Countermeasures

Reactive model retrain

SLIDE 26

Adversarial ML Countermeasures

Proactive model retrain
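The deck's proactive retraining generates adversarial samples itself (e.g. with a GAN) before attackers do. This sketch swaps in the simplest possible generator, a fixed perturbation against a one-feature threshold detector, with all samples and numbers invented:

```python
# Sketch of proactive retraining on a one-feature threshold detector: the
# threshold sits midway between the benign and malware means, an attacker
# evades by shaving the feature just under it, and retraining on such
# self-generated adversarial samples lowers the threshold to catch them.
# All samples and numbers are invented for this sketch.

def fit_threshold(benign, malware):
    """Detect as malware when the feature exceeds this threshold."""
    return (sum(benign) / len(benign) + sum(malware) / len(malware)) / 2

benign = [0.1, 0.2, 0.15]
malware = [0.8, 0.9, 0.85]
t = fit_threshold(benign, malware)  # midpoint of the two class means

evasive = t - 0.05                  # adversarial sample, slips just under t

# Proactive step: add self-generated evasive variants, correctly labelled.
t_hard = fit_threshold(benign, malware + [evasive] * 3)

print(evasive > t)       # False: evades the original model
print(evasive > t_hard)  # True: caught after proactive retraining
```

The same idea scales up to adversarial training of neural networks, where the generator is a GAN or a gradient-based attack rather than a fixed offset.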

SLIDE 27

Adversarial ML Countermeasures

What if the hair length is an important feature?

SLIDE 28

Adversarial ML Countermeasures

  • Trade-offs
      – Robustness vs. Accuracy
      – Proactive vs. Reactive
      – Fast vs. Confidence

SLIDE 29

Adversarial ML Countermeasures

  • Trade-offs
      – Robustness vs. Accuracy
      – Proactive vs. Reactive
      – Fast vs. Confidence

SLIDE 30

Adversarial ML Countermeasures

  • Evasion Attack – Black box
      – Abuse Protection
      – Model Retrain
          · Reactive
          · Proactive (GAN)
  • Evasion Attack – White box
      – Data/feature/model protection
  • Poisoning Attack
      – Data/Label quality control
SLIDE 31

Adversarial ML Countermeasures

  • Evasion Attack – Black box
      – Abuse Protection
      – Model Retrain
          · Reactive
          · Proactive (GAN)
  • Evasion Attack – White box
      – Data/feature/model protection
  • Poisoning Attack
      – Data/Label quality control
SLIDE 32

Conclusions

SLIDE 33

Conclusions

  • Almost all models can be cheated
  • Find the possible vulnerabilities and take the proper actions
  • This is an endless battle
      – Pros: global visibility and excellent operations
      – Cons: a single false negative (FN) can cause damage
SLIDE 34

Conclusions

  • There is no silver bullet for Cyber Security
  • Dynamic & Fast Response are the key points
SLIDE 35

Thank You