SLIDE 1

Memory Management


Disclaimer: some slides are adopted from book authors’ slides with permission

slide-2
SLIDE 2

Roadmap

  • CPU management

– Process, thread, synchronization, scheduling

  • Memory management

– Virtual memory

  • Disk management
  • Other topics

SLIDE 3

Administrative

  • Project 2 is out

– Due: 11/06

SLIDE 4

Memory Management

  • Goals

– Easy-to-use abstraction

  • Same virtual memory space for all processes

– Isolation among processes

  • Don’t corrupt each other

– Efficient use of capacity-limited physical memory

  • Don’t waste memory

SLIDE 5

Concepts to Learn

  • Virtual address translation
  • Paging and TLB
  • Page table management
  • Swap

SLIDE 6

Virtual Memory (VM)

  • Abstraction

– 4GB linear address space for each process

  • Reality

– 1GB of actual physical memory shared with 20 other processes
  • How?

SLIDE 7

Virtual Memory

  • Hardware support

– MMU (memory management unit)
– TLB (translation lookaside buffer)

  • OS support

– Manage MMU (sometimes TLB)
– Determine address mapping

  • Alternatives

– No VM: many real-time OS (RTOS) don’t have VM

SLIDE 8

Virtual Address

[Figure: processes A, B, and C each map their virtual addresses through the MMU onto shared physical memory]
SLIDE 9

MMU

  • Hardware unit that translates virtual address

to physical address

[Figure: the CPU issues a virtual address; the MMU translates it to a physical address sent to memory]
SLIDE 10

A Simple MMU

  • BaseAddr: base register
  • Paddr = Vaddr + BaseAddr
  • Advantages

– Fast

  • Disadvantages

– No protection
– Wasteful

[Figure: processes P1–P3 loaded contiguously in physical memory at different base addresses, e.g., 14000 and 28000]
SLIDE 11

A Better MMU

  • Base + Limit approach

– If Vaddr ≥ limit, then trap to report error
– Else Paddr = Vaddr + BaseAddr

SLIDE 12

A Better MMU

  • Base + Limit approach

– If Vaddr ≥ limit, then trap to report error
– Else Paddr = Vaddr + BaseAddr

  • Advantages

– Supports protection
– Supports variable-size partitions

  • Disadvantages

– Fragmentation
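The base + limit check above can be sketched in a few lines of Python (the function name and argument values are illustrative, not from the slides):

```python
def translate(vaddr, base, limit):
    """Base + limit MMU: relocate vaddr, trapping on out-of-range access."""
    if vaddr >= limit:              # beyond the partition size: trap
        raise MemoryError("address out of range")
    return vaddr + base             # relocate by the base register

# A process loaded at base 0x14000 with a 0x4000-byte partition:
print(hex(translate(0x100, base=0x14000, limit=0x4000)))  # 0x14100
```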

[Figure: variable-size partitions for P1, P2, and P3 in physical memory]
SLIDE 13

Fragmentation

  • External fragmentation

– total available memory space exists to satisfy a request, but it is not contiguous

[Figure: P1–P4 fill memory; freeing P2 and P4 leaves two separate holes, so allocating P5 fails even though enough total free space exists]
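The scenario above can be simulated with a toy first-fit contiguous allocator (a sketch; segment sizes and names are made up for illustration):

```python
# Memory as a list of (owner, size) segments; owner None = free hole.
mem = [("P1", 10), ("P2", 10), ("P3", 10), ("P4", 10)]

def free(owner):
    """Mark all segments of `owner` as free holes."""
    global mem
    mem = [(None if o == owner else o, s) for o, s in mem]

def alloc(owner, size):
    """First-fit: needs ONE contiguous hole of at least `size`."""
    for i, (o, s) in enumerate(mem):
        if o is None and s >= size:
            mem[i] = (owner, size)
            if s > size:                       # split off the remainder
                mem.insert(i + 1, (None, s - size))
            return True
    return False          # fails even if total free space >= size

free("P2"); free("P4")    # 20 units free, but split into two 10-unit holes
print(alloc("P5", 15))    # False: external fragmentation
```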

SLIDE 14

Modern MMU

  • Paging approach

– Divide physical memory into fixed-sized blocks called frames (e.g., 4KB each)
– Divide logical memory into blocks of the same size called pages (page size = frame size)
– Pages are mapped onto frames via a table → the page table
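With 4KB pages, splitting a virtual address into a page number and an in-page offset is just a shift and a mask, as this small sketch shows:

```python
PAGE_SIZE = 4096                      # 4KB pages, as on the slide
PAGE_SHIFT = 12                       # log2(4096)

def split(vaddr):
    """Split a virtual address into (page number, offset)."""
    return vaddr >> PAGE_SHIFT, vaddr & (PAGE_SIZE - 1)

page, offset = split(0x12345678)
print(hex(page), hex(offset))         # 0x12345 0x678
```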

SLIDE 15

Modern MMU

  • Paging hardware

SLIDE 16

Modern MMU

  • Memory view

SLIDE 17

Virtual Address Translation

[Figure: translating virtual address 0x12345678 — page # 0x12345, offset 0x678; the page table maps page 0x12345 to frame 0xabcde; appending the offset gives physical address 0xabcde678]

SLIDE 18

Advantages of Paging

  • No external fragmentation

– Efficient use of memory
– Internal fragmentation (waste within a page) still exists

SLIDE 19

Issues of Paging

  • Translation speed

– Each load/store instruction requires a translation
– The table is stored in memory
– Memory is slow to access

  • ~100 CPU cycles to access DRAM

SLIDE 20

Translation Lookaside Buffer (TLB)

  • Cache frequent address translations

– So that the CPU doesn’t need to access the page table on every reference
– Much faster
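The TLB's role can be sketched as a small cache in front of the page table (purely illustrative: a real TLB is an associative hardware structure with a bounded number of entries):

```python
page_table = {0x12345: 0xabcde}       # backing page table (slow path)
tlb = {}                              # cache of recent translations

def lookup(page):
    """Return the frame # for a page, consulting the TLB first."""
    if page in tlb:                   # TLB hit: no page-table access
        return tlb[page]
    frame = page_table[page]          # TLB miss: walk the table
    tlb[page] = frame                 # cache the translation
    return frame

lookup(0x12345)                       # first access misses and fills the TLB
print(0x12345 in tlb)                 # True: later accesses hit the cache
```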

SLIDE 21

Issues of Paging

  • Page size

– Small: minimizes space waste, but requires a large table
– Big: can waste lots of space, but the table size is small
– Typical size: 4KB
– How many pages are needed for 4GB (32-bit)?

  • 4GB/4KB = 1M pages

– What is the required page table size?

  • assume one page table entry (PTE) is 4 bytes
  • 1M * 4 bytes = 4MB

– Btw, this is for each process. What if you have 100 processes? Or what if you have a 64bit address?
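The arithmetic on this slide, checked directly:

```python
ADDR_BITS = 32
PAGE_SIZE = 4 * 1024                        # 4KB
PTE_SIZE = 4                                # bytes per page table entry

num_pages = (1 << ADDR_BITS) // PAGE_SIZE   # 2^32 / 2^12 = 2^20 = 1M pages
table_size = num_pages * PTE_SIZE           # 4MB of page table per process

print(num_pages, table_size)                # 1048576 4194304
```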

SLIDE 22

Paging

  • Advantages

– No external fragmentation

  • Two main Issues

– Translation speed can be slow

  • TLB

– Table size is big

SLIDE 23

Multi-level Paging

  • Two-level paging

SLIDE 24

Two Level Address Translation

[Figure: two-level translation — the base pointer plus the 1st-level index selects a 2nd-level page table; the 2nd-level index selects a frame #; frame # + offset form the physical address]
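The two-level walk above can be sketched with nested dictionaries; only 2nd-level tables that are actually used need to exist, which is where the space saving comes from. The 10/10/12 bit split matches a 32-bit address with 4KB pages (the mapping shown is a hypothetical one chosen to reproduce the earlier example):

```python
PAGE_SHIFT = 12
LVL_BITS = 10                         # 10 + 10 + 12 = 32-bit address

# Sparse two-level table: only one 2nd-level table is allocated.
top = {0x48: {0x345: 0xabcde}}        # indices for vaddr 0x12345678

def translate(vaddr):
    """Two memory references: 1st-level entry, then 2nd-level entry."""
    l1 = vaddr >> (PAGE_SHIFT + LVL_BITS)               # top 10 bits
    l2 = (vaddr >> PAGE_SHIFT) & ((1 << LVL_BITS) - 1)  # middle 10 bits
    offset = vaddr & ((1 << PAGE_SHIFT) - 1)            # low 12 bits
    frame = top[l1][l2]
    return (frame << PAGE_SHIFT) | offset

print(hex(translate(0x12345678)))     # 0xabcde678
```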

SLIDE 25

Multi-level Paging

  • Can save table space
  • How, why?

SLIDE 26

Summary

  • MMU

– Virtual address → physical address
– Various designs are possible, but modern MMUs use paging

  • Paged MMU

– Memory is divided into fixed-sized pages
– A page table stores the translations
– No external fragmentation: i.e., efficient space utilization
