

SLIDE 1

Learning Nonstationary Models of Normal Network Traffic for Detecting Novel Attacks

Matthew V. Mahoney Philip K. Chan

SLIDE 2

Intrusion Detection Systems. Is data x hostile?

Signature - models known hostile behavior: P(x|attack)
  Good: low false alarm rate
  Bad: cannot detect new attacks
Anomaly - models normal behavior: P(x|normal)
  Good: can detect new attacks
  Bad: high false alarm rate

Combined system: Odds(attack|x) = Odds(attack) P(x|attack) / P(x|normal)
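The combined rule above is a standard Bayes odds update. A minimal sketch, with made-up illustrative numbers (not values from the paper):

```python
# Sketch of the combined detector rule on this slide:
#   Odds(attack|x) = Odds(attack) * P(x|attack) / P(x|normal)
# The numeric inputs below are hypothetical, chosen only for illustration.

def combined_odds(prior_odds: float,
                  p_x_given_attack: float,
                  p_x_given_normal: float) -> float:
    """Bayes odds update: prior odds times the likelihood ratio."""
    return prior_odds * p_x_given_attack / p_x_given_normal

# An event that is rare under the normal model but matches a signature:
odds = combined_odds(prior_odds=0.001,         # attacks are rare a priori
                     p_x_given_attack=0.5,     # signature (attack) model
                     p_x_given_normal=0.0001)  # anomaly (normal) model
```

Even with a small prior, a large likelihood ratio (probable under the attack model, improbable under the normal model) pushes the posterior odds well above 1.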

SLIDE 3

IDS Examples

Host Signature: virus detectors
Host Anomaly:
  File system: COPS, Tripwire
  System calls: Forrest et al.
Network Signature: BRO, SNORT
Network Anomaly (fixed model): most firewalls
Network Anomaly (adaptive model): SPADE, ADAM, NIDES - model user behavior (IP addresses/ports)

We add protocol modeling to adaptive network anomaly detection.

SLIDE 4

The DARPA IDS Evaluation Data Set

201 attacks on a simulated network (SunOS, Solaris, NT, Linux, Cisco router, Ethernet, Internet gateway) in 2 weeks.

Test data: inside/outside network traffic, BSM system call logs, audit logs, file system dumps.

Training data: 2 weeks attack free for anomaly detection, 1 week of labeled attacks (some held back) for signature detection.

SLIDE 5

Example Attacks - Probes

System configuration: ipsweep, portsweep, ls, resetscan
Operating system: queso
Known vulnerabilities: satan, mscan, ntinfoscan
Data collection: illegalsniffer

SLIDE 6

Denial of Service Attacks

Floods: neptune, mailbomb, processtable, smurf, udpstorm
Malformed IP packets: land, teardrop, pod (ping of death)
Malformed client requests: apache2, back, crashiis, dosnuke, syslogd
Network disruption: arppoison, tcpreset
Unauthorized use: warezmaster, warezclient
Malformed user command: selfping

SLIDE 7

Remote to Local (R2L) attacks

Password guessing: dict (FTP, telnet, POP3), guest, snmpget
Password stealing: sshtrojan, xlock, xsnoop
Trojans: framespoofer, ppmacro
Buffer overflows: imap, named, ncftp, sendmail
Misconfiguration/bugs: ftpwrite, phf
Backdoors: httptunnel, netbus, netcat

SLIDE 8

User to Root (U2R) and Data Attacks

NT bugs: anypw, casesen, sechole, yaga
UNIX bugs:
  Buffer overflow: eject, fdformat, ffbconfig, xterm
  Other: loadmodule, perl, ps
Restricted shell escape: sqlattack
Data (security policy violation): secret, ntfsdos

SLIDE 9

Example Attack: apache2

Normal HTTP request

GET /welcome.htm HTTP/1.0
Connection: Keep-Alive
User-Agent: Mozilla/4.51 [en] (WinNT; I)
Host: www.eyrie.af.mil
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, image/png, */*
Accept-Encoding: gzip
Accept-Language: en
Accept-Charset: iso-8859-1,*,utf-8

Malicious request

User-Agent: sioux
User-Agent: sioux
User-Agent: sioux
... (repeated thousands of times)

SLIDE 10

1999 DARPA Evaluation Method

Must identify source or destination IP address
Must identify time within 60 seconds
Duplicate detections (but not false alarms) are discarded
Alarm scores: threshold at 100 false alarms (10/day)
May restrict domain of detectable attacks by category, operating system, or data examined
Attacks detected outside of domain do not count

SLIDE 11

1999 DARPA Evaluation Results

8 participating organizations, 18 systems evaluated

Top Systems   Detections
Expert 1      85/169 (50%)
Expert 2      81/173 (47%)
Dmine         41/102 (40%)
Forensics     15/27  (55%)

SLIDE 12

Nonstationary Modeling of Network Traffic

Events tend to occur in bursts
Counts (average frequency of events) are irrelevant
Time since last event determines probability

Example: 00000000000000000000000011111110000000000000
P(next bit is 1) does not depend on the average rate.
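The contrast can be made concrete with a toy comparison on the bit string above. The 1/t rule here is only an illustrative stand-in for "recent events make another event likely", not the exact model from the slides:

```python
# Toy contrast between a frequency (count) estimate and a time-since-last-event
# estimate on the bursty bit string from the slide.

bits = "00000000000000000000000011111110000000000000"

# Stationary assumption: fraction of 1s over the whole history.
freq_estimate = bits.count("1") / len(bits)

# Nonstationary view: weight by the time since the last 1 (illustrative 1/t
# rule; smaller t means the burst is recent, so another 1 is more likely).
t = len(bits) - bits.rindex("1")  # positions since the last 1
nonstationary_estimate = 1.0 / t

# The two disagree: the count-based rate is fixed by the totals no matter
# where the burst occurred, while the time-based estimate decays as the
# burst recedes into the past.
```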

SLIDE 13

Two Models

Packet Header Anomaly Detection (PHAD)
  Examines network packets in isolation
  Models Ethernet, IP, TCP, UDP, ICMP
Application Layer Anomaly Detection (ALAD)
  Reassembles incoming server TCP streams
  Models application text, source and destination addresses and ports, and TCP open/close flags

SLIDE 14

PHAD Model after Training

Attribute         r/n            Allowed Values
Ethernet Size     508/12814738   42 60-1181 1182...
Ether Dest Hi     9/12814738     x0000C0 x00105A...
Ether Dest Lo     12/12814738    x000009 x09B949...
Ether Src Hi      6/12814738     x0000C0 x00105A...
Ether Src Lo      9/12814738     x09B949 x13E981...
Ether Protocol    4/12814738     x0136 x0800 x0806...
IP Header Len     1/12715589     x45
IP TOS            4/12715589     x00 x08 x10 xC0
IP Length         527/12715589   38-1500
IP Frag ID        4117/12715589  0-65461 65462...
IP Frag Ptr       2/12715589     x0000 x4000
IP TTL            10/12715589    2 32 60 62-64...
IP Protocol       3/12715589     1 6 17
IP Checksum       1/12715589     xFFFF
IP Source Addr    293/12715589   12.2.169.104...
IP Dest Addr      287/12715589   0.67.97.110...
TCP Source Port   3546/10617293  20-135 139 515...
TCP Dest Port     3545/10617293  20-135 139 515...
TCP Seq Num       5455/10617293  0-395954185...
TCP Ack Num       4235/10617293  0-395954185...
TCP Header Len    2/10617293     x50 x60
TCP Flags         9/10617293     x02 x04 x10...
TCP Window Size   1016/10617293  0-5374 5406-10028...
TCP Checksum      1/10617293     xFFFF
TCP URG Ptr       2/10617293     0 1
TCP Options       2/611126       x02040218 x020405B4
UDP Source Port   6052/2091127   53 123 137-138...
UDP Dest Port     6050/2091127   53 123 137-138...
UDP Length        128/2091127    25 27 29...
UDP Checksum      2/2091127      x0000 xFFFF
ICMP Type         3/7169         0 3 8
ICMP Code         3/7169         0 1 3
ICMP Checksum     1/7169         xFFFF
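The r/n statistics in the table can be accumulated with a very simple per-attribute model. This is a simplified sketch: real PHAD clusters numeric values into ranges (the "60-1181" style entries above), while here exact values are kept:

```python
# Sketch of how a PHAD-style attribute model accumulates r/n during training.
# Simplification: we store exact values rather than clustering them into
# ranges as PHAD does.

class AttributeModel:
    def __init__(self):
        self.allowed = set()  # values seen in training
        self.n = 0            # total training observations
        self.r = 0            # number of "novel value" events in training

    def train(self, value):
        self.n += 1
        if value not in self.allowed:
            self.allowed.add(value)
            self.r += 1

# Hypothetical stream of IP Protocol values (1=ICMP, 6=TCP, 17=UDP):
model = AttributeModel()
for v in [6, 6, 17, 6, 1, 17, 6]:
    model.train(v)
# n = 7 observations, r = 3 distinct values {1, 6, 17} -- the same shape
# as the "IP Protocol  3/12715589  1 6 17" row above, at toy scale.
```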

SLIDE 15

ALAD

P(source IP | dest IP)  (dest = local server)
P(source IP | dest IP, dest port)
P(dest IP, dest port)
P(TCP open/close flags | dest port)
P(keyword | dest port)

Port        r/n        Allowed Values
80 (HTTP)   13/83650   Accept-Charset: Accept-Encoding: Accept-Language: Accept: Cache-Control: Connection: GET Host: If-Modified-Since: Negotiate: Pragma: Referer: User-Agent:
25 (SMTP)   34/142610  (34 values...)
21 (FTP)    11/16822   (11 values...)

SLIDE 16

PHAD/ALAD Score

Score = Sum t*n/r, where
  t = time since last anomaly
  n = number of training observations
  r = number of different values in training set
  n/r = 1/P(anomaly in training)
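The score can be sketched as follows. Field names, timestamps, and the dictionary layout are hypothetical; only the t*n/r rule comes from the slide:

```python
# Sketch of the PHAD/ALAD anomaly score: for each attribute whose current
# value falls outside the training set, add t*n/r, where t is the time since
# that attribute last produced an anomaly. The attribute name "ip_proto" and
# the times are illustrative.

def score_packet(fields, models, now):
    """fields: {attr: value}
    models: {attr: (allowed_values, n, r, last_anomaly_time)}"""
    total = 0.0
    for attr, value in fields.items():
        allowed, n, r, last = models[attr]
        if value not in allowed:
            t = now - last
            total += t * n / r
            models[attr] = (allowed, n, r, now)  # reset the anomaly clock
    return total

# Model for one attribute, using the IP Protocol row from the PHAD table:
models = {"ip_proto": ({1, 6, 17}, 12_715_589, 3, 0.0)}

# A packet with an unseen protocol number, 60 seconds after the last anomaly:
s = score_packet({"ip_proto": 99}, models, now=60.0)
```

Note how the two factors interact: a long quiet period (large t) in a highly predictable field (large n/r) produces the highest scores, which is exactly the nonstationary behavior described on slide 12.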

SLIDE 17

Results

70 of 180 attacks detected

PHAD detects mostly DOS and Probes ALAD detects mostly R2L and U2R

Many PHAD detections were due to simulation artifacts in the TTL (time to live) field. These were removed manually.

SLIDE 18

Analysis of Detections

Anomaly            Det/70    Attacks Detected
Learned Signature  24 (34%)  PROBE: ipsweep, mscan, 2 ntinfoscan, 3 queso, 2 satan; DOS: crashiis, 4 dosnuke, 4 pod, 3 teardrop; R2L: ncftp, 2 sendmail
Induced            5 (7%)    DOS: apache2, 3 arppoison, tcpreset
Evasion            3 (4%)    PROBE: 3 portsweep
Attacker Error     10 (14%)  DOS: apache2, 3 mailbomb, 2 udpstorm; R2L: 2 dict, phf; U2R: yaga
User Behavior      38 (54%)  PROBE: mscan; DOS: 3 apache2, 5 crashiis, mailbomb, processtable, smurf, warezclient, warezmaster; R2L: dict, mailbomb, 4 ncftp, 2 netbus, 2 netcat, 2 phf, ppmacro, 2 sendmail, sshtrojan; U2R: 2 casesen, 2 fdformat, ffbconfig, sechole, xterm, yaga

SLIDE 19

False Alarms

Distribution is similar to true detections
No easy way for the user to distinguish

Anomaly                                     False alarms
TCP source IP address                       35
Keyword (7 SMTP, 4 FTP, 3 auth, 2 HTTP)     16
TTL (time to live, simulation artifact)     9
TCP checksum (simulation artifact)          8
Outgoing TCP connection on server port      7
TOS (type of service)                       7
Urgent data pointer or URG flags            7
Bad TCP connection (3 no SYN, no FIN, RST)  5
Destination address/port                    5
Packet size (Ethernet, IP, UDP)             3
Other (2 IP fragments, 2 TCP options)       4

SLIDE 20

Attacks in Training Data (PHAD only, includes artifacts)

Attacks in training mask similar attacks in testing
Shorten training period to reduce number of attacks
Result: 50% loss of detections

Training set                         Detections
Days 1-7 (no attacks)                72
Day 1                                71
Day 2                                62
Day 3                                60
Day 4                                60
Day 5                                61
Day 6                                76
Day 7                                42
Average of days 1-7                  62
Previous day (on-line with attacks)  36

SLIDE 21

Overlap with Existing IDS

1999 DARPA:
  Best: 85/169 (50%) using signature + anomaly, blind
  PHAD/ALAD: 70/180 (39%) using anomaly only
  (Evaluation methods similar but not identical)

Hard to detect attacks (< 50% by all DARPA participants):
  Original: 15/77 (19%)
  PHAD/ALAD: 23/65 (35%, 30% of original)

SLIDE 22

Conclusions

PHAD/ALAD is the first adaptive network anomaly detector to go beyond user modeling.

Model is nonstationary. The most significant anomalies are the initial novel events (large t) in highly predictable contexts (large n/r).

Conventional user modeling detects about half of attacks. The remainder are:
  Attempts to exploit software bugs in the target
  Symptoms of a successful attack
  Attempts to elude the IDS
  Bugs in the attacking software

Significant non-overlap with existing systems should allow them to be merged.

Lack of clean training data degrades performance by half.