jVPFS: Adding Robustness to a Secure Stacked File System with Untrusted Local Storage Components - PowerPoint PPT Presentation

SLIDE 1

Department of Computer Science, Institute of Systems Architecture, Operating Systems Group

Carsten Weinhold, Hermann Härtig

jVPFS: Adding Robustness to a Secure Stacked File System with Untrusted Local Storage Components

SLIDE 2

TU Dresden jVPFS

INTRODUCTION


VPN: Confidentiality, Integrity, Availability

[Figure: VPN analogy: two secure networks connected across the untrusted Internet]

SLIDE 3

INTRODUCTION

[Figure: TCB comparison: Linux kernel as large TCB vs. microkernel / micro-hypervisor hosting the app]

VPFS: Confidentiality, Integrity, Availability

[1] Weinhold, Härtig: "VPFS: Building a Virtual Private File System With a Small Trusted Computing Base", EuroSys'08

Secure File System: 4,600 SLOC; Commodity File System: 50,000+ SLOC

SLIDE 4

INTRODUCTION

VPFS: Confidentiality, Integrity, Availability

[1] Weinhold, Härtig: "VPFS: Building a Virtual Private File System With a Small Trusted Computing Base", EuroSys'08

[Figure: the Secure File System and its Secure File System Proxy layered above the Commodity File System]

SLIDE 5

OUTLINE

■ Introduction
■ VPFS: Virtual Private File System
■ jVPFS: Adding robustness securely
■ Evaluation
■ Lessons learned

SLIDE 6

VPFS STACK

[Figure: VPFS stack: Secure File System (trusted), Secure File System Proxy, Commodity File System (untrusted)]

SLIDE 7

FILES & FILE CONTAINERS

[Figure: the Virtual Private File System (TCB) keeps data (D), metadata (M), and hash (H) blocks in the reused commodity file system (untrusted)]

■ Encrypted files in commodity file system
■ Merkle hash tree to detect tampering
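The tamper-detection idea can be sketched as a toy Merkle tree. This is a minimal sketch; jVPFS's actual tree layout, hash function, and on-disk block format are not specified here:

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 stand-in for whatever hash the real system uses."""
    return hashlib.sha256(data).digest()

def merkle_root(blocks: list) -> bytes:
    """Hash every data block, then pairwise-combine hashes up to a single root."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last hash on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

If the root is kept in trusted storage, any modification of a block by the untrusted commodity file system changes the recomputed root and is detected.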

SLIDE 8

UPDATING HASH TREE

[Figure: a changed data block (D) forces new hashes (H) along the path through metadata (M) up to the root]

■ High overhead: many writes + crypto ops
■ Hash tree updates must be atomic
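The "many writes" overhead follows from the tree shape: one changed block dirties every hash on its leaf-to-root path. A back-of-the-envelope sketch, assuming a binary tree (the real tree arity is not stated here):

```python
import math

def hashes_touched(num_leaves: int) -> int:
    """Hash values that must be recomputed (and eventually rewritten)
    when a single data block changes in a binary hash tree:
    one per level on the leaf-to-root path."""
    if num_leaves <= 1:
        return 1
    return math.ceil(math.log2(num_leaves)) + 1
```

All of these hash updates must become visible atomically; a crash in between leaves the tree inconsistent with the data.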

SLIDE 9

CONSISTENCY PROBLEM

■ File-system consistency is a complex problem:
  ■ Correct implementation is difficult [2,3]
  ■ Bugs often in corner cases, error checking [4]
  ■ Widely used file systems affected, too
■ Goal: keep complexity out of the TCB

[2] Yang et al.: "Using Model Checking to Find Serious File System Errors", ACM TOCS Vol. 24, Issue 4, 2006
[3] Prabhakaran et al.: "Model-Based Failure Analysis of Journaling File Systems", DSN'05
[4] Gunawi et al.: "EIO: Error Handling is Occasionally Correct", FAST'08

SLIDE 10

HASH TREE + JOURNAL

[Figure: hash tree with new hash sums (H) additionally recorded in the journal]

■ Record new hash sums in journal
■ Recovery: valid hash either in tree or journal
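The recovery rule on this slide can be sketched directly; the names and data structures below are illustrative, not jVPFS's actual interfaces:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def block_accepted(block: bytes, tree_hash: bytes, journal_hashes: set) -> bool:
    """A block read back after a crash is accepted as genuine if its hash
    matches either the (possibly stale) hash tree or a newer hash sum
    recorded in the journal."""
    digest = sha256(block)
    return digest == tree_hash or digest in journal_hashes
```

So both the pre-crash and the freshly journaled version of a block verify, while anything else is rejected as tampering or corruption.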

SLIDE 11

[Legend: security-critical vs. critical only for availability]

BLOCK WRITE BACK

■ Calculate hash + encrypt block
■ Put ciphertext + hash into shared ring buffer
■ Do ordered write to legacy file system:
  ■ Append hash sums to journal
  ■ Write blocks afterwards
■ Optimizations
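The ordering above, hash sums reaching the journal before the ciphertext reaches the legacy file system, can be sketched like this. The ring buffer, the toy XOR cipher, and all names are illustrative assumptions, not the real implementation:

```python
import hashlib
from collections import deque

def toy_xor_cipher(data: bytes, key: bytes) -> bytes:
    """Illustrative XOR 'cipher' only; NOT secure, stands in for real encryption."""
    pad = hashlib.sha256(key).digest()
    return bytes(b ^ pad[i % len(pad)] for i, b in enumerate(data))

class OrderedWriteBack:
    def __init__(self, key: bytes):
        self.key = key
        self.ring = deque()   # shared ring buffer: trusted side -> untrusted proxy
        self.journal = []     # journal file in the untrusted file system (append-only)
        self.blocks = {}      # block store in the untrusted file system

    def stage(self, offset: int, plaintext: bytes) -> None:
        """Trusted side: hash, encrypt, hand ciphertext + hash to the proxy."""
        digest = hashlib.sha256(plaintext).digest()
        self.ring.append((offset, digest, toy_xor_cipher(plaintext, self.key)))

    def flush(self) -> None:
        """Untrusted proxy: journal the hash sums first, write blocks afterwards,
        so a crash between the two steps is recoverable from the journal."""
        staged = list(self.ring)
        self.ring.clear()
        for offset, digest, _ in staged:       # step 1: hash sums into the journal
            self.journal.append((offset, digest))
        for offset, _, ciphertext in staged:   # step 2: ciphertext blocks
            self.blocks[offset] = ciphertext
```

The point of the ordering is that a crash after step 1 but before step 2 leaves the journal ahead of the block store, which the recovery rule (hash valid in tree or journal) tolerates.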

SLIDE 12

JOURNALING METADATA

■ Approach: log operations, not blocks:
  ■ Code reuse: replay during recovery via API
  ■ Simple dependency tracking
  ■ Non-intrusive implementation
■ Dependencies:
  ■ New files: inode, name, parent dir
  ■ Updated files: file size, hash sums
  ■ Unlinked / moved files: name, parent dir
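Logging API-level operations means recovery is just re-execution through the normal interface. A minimal sketch, where `MiniFS` is a stand-in for the secure file system's own API and the record names follow the journal-content slide:

```python
class MiniFS:
    """Stand-in for the secure file system's own API."""
    def __init__(self):
        self.files = {}

    def create(self, path: str) -> None:
        self.files[path] = {"size": 0, "hashes": []}

    def update_block(self, path: str, block_hash: bytes, size: int) -> None:
        self.files[path]["hashes"].append(block_hash)
        self.files[path]["size"] = size

    def unlink(self, path: str) -> None:
        del self.files[path]

def replay(fs: MiniFS, journal: list) -> None:
    """Recovery: re-execute each logged operation via the API;
    no block-level repair code is needed in the TCB."""
    for op, args in journal:
        if op == "File_create":
            fs.create(args["path"])
        elif op == "Update_block":
            fs.update_block(args["path"], args["hash"], args["size"])
        elif op == "File_unlink":
            fs.unlink(args["path"])
```

Because replay goes through the same code paths as normal operation, the journaling layer stays small and non-intrusive.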

SLIDE 13

TRACKING NEW FILES

[Figure: file table mapping fd 1..511 to File* entries, plus a new-file table mapping each fd to (p_fd, p_inode, name), e.g. "dir", "file23", "file42"; pathname elements to log: .../dir/file42]
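The new-file table links each open file descriptor to its parent, so the pathname elements to log can be collected by walking those links. The field names and the walk below are a sketch of the idea, not jVPFS's actual data structures:

```python
def pathname_elements(new_file_table: dict, fd: int) -> list:
    """Collect the name elements that must be logged for a newly created
    file by following parent-fd links back toward the root."""
    parts = []
    while fd in new_file_table:
        entry = new_file_table[fd]
        parts.append(entry["name"])
        fd = entry["p_fd"]
    return list(reversed(parts))
```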

SLIDE 14

JOURNAL CONTENT

[Figure: journal records referencing blocks in the hash tree]

File_create: "dir"
File_create: "file42"
Update_block: H(block), file size
Update_block: H(block), file size
Update_block: H(block), file size

SLIDE 15

JOURNAL SECURITY

■ Confidentiality:
  ■ Filenames, inodes, etc.: encrypted
  ■ Block offset, type: plaintext
■ Tamper detection:
  ■ Anchor of journal in "Sealed Memory"
  ■ Journal is continuously MAC'd
■ Record groups:
  ■ All records between two MACs

[Figure: journal layout: a root record followed by record groups, each closed by a MAC]
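Continuous MAC'ing can be sketched as a chain: each record group's MAC also covers the previous MAC, anchored at a value kept in sealed memory, so groups cannot be dropped, reordered, or forged undetected. Key handling and record encoding here are illustrative assumptions:

```python
import hmac
import hashlib

def group_mac(key: bytes, prev_mac: bytes, records: list) -> bytes:
    """MAC over one record group, chained to the previous group's MAC."""
    m = hmac.new(key, prev_mac, hashlib.sha256)
    for record in records:
        m.update(record)
    return m.digest()

def verify_journal(key: bytes, anchor: bytes, groups: list) -> list:
    """Return the prefix of record groups whose MAC chain verifies,
    starting from the anchor in sealed memory; replay stops at the
    first incomplete or tampered group."""
    prev, valid = anchor, []
    for records, mac in groups:
        if not hmac.compare_digest(group_mac(key, prev, records), mac):
            break
        valid.append(records)
        prev = mac
    return valid
```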

SLIDE 16


RECOVERY PROCEDURE

• 1. Recover legacy file system
• 2. Find complete record groups in journal
• 3. Restore pre-journal versions of metadata blocks
• 4. Read root info (aka superblock)
• 5. Replay:
  a) Get complete record groups
  b) Check integrity + decrypt
  c) Re-execute operations: open(), unlink(), ...
  d) Repeat from a)


SLIDE 17

COOPERATION


■ Extensive reuse:
  ■ Complete commodity file system
  ■ Existing consistency primitives:
    ■ Journaling, copy-on-write, ...
    ■ Write ordering, snapshots
■ More details in paper:
  ■ Checkpoints + journal truncation
  ■ Flushing metadata blocks

SLIDE 18

SOURCE COMPLEXITY

Subsystem            ReiserFS    Ext4       jVPFS
Journal + replay     ~3,200      ~5,000     325
Basic persistency    16,500+     24,000+    404
Core functionality   16,500+     24,000+    2,444
Crypto algorithms    -           -          667
(all numbers in SLOC)

SLIDE 19

TESTING RECOVERY

■ Testcase for recovery:

■ Unpack tar archive (3,000+ files, 70 MB)
■ Power-cycle machine, interrupt write back
■ Recover jVPFS + try to open + read all files:
  ■ NILFS+Flash: successful
  ■ ReiserFS+HDD: successful
■ Example run: replay 1.2 MB journal in 5.1 s
■ Restored: 2,710 files, 40 MB user data

SLIDE 20

PERFORMANCE

[Chart: runtimes from 0 s to 30 s for PM-2 and untar workloads on ReiserFS/HDD and NILFS/Flash, comparing Native Linux, jVPFS without journal, and jVPFS with journal]

SLIDE 21

LESSONS LEARNED

■ jVPFS: less than 350 SLOC in TCB to make the secure file system robust
■ Security-critical core for journaling + replay:
  ■ Log API-level operations, replay via API
  ■ Code reuse, simple dependency tracking
■ Move complexity to untrusted file system
■ Reuse existing consistency primitives