

SLIDE 1

Data-Intensive Distributed Computing

Part 2: MapReduce Algorithm Design (2/3)

431/451/631/651 (Fall 2020), Ali Abedi

These slides are available at https://www.student.cs.uwaterloo.ca/~cs451/

SLIDE 2

Although we argued for an abstraction layer that hides the complexities of the underlying infrastructure, today we want to take a quick look at the architecture of datacenters. This will help us later to understand the performance trade-offs of different algorithms. It also makes us appreciate these systems more ☺

Abstraction

[Figure: HDFS and MapReduce form an abstraction layer over a cluster of computers (storage/computing); above the layer lies "blissful ignorance", below it the "unpleasant truth".]

SLIDE 3

A quick review of data center architecture

SLIDE 4

Left: top view of a server. Right, top: two views of the front of the server with two storage configurations, (1) sixteen 2.5-inch drives and (2) eight 3.5-inch drives. Right, bottom: the back of the server, where we can see the network interfaces (7).

The anatomy of a server

SLIDE 5

We put multiple servers in a server rack. A network switch connects the servers within the rack; this switch also connects the rack to other racks.

The anatomy of a server rack

SLIDE 6

Clusters of racks of servers make up a data center. This is a very simplistic view of a data center.

The anatomy of a data center

SLIDE 7

Capacity, latency, and bandwidth for reading data change depending on where the data is. The lowest latency and highest bandwidth are achieved when the data we need is on our local server. We can increase capacity by utilizing other servers, but at the cost of higher latency and lower bandwidth.

Storage Hierarchy

  • Local machine: L1/L2/L3 cache, memory, SSD, magnetic disks
  • Remote machine, same rack
  • Remote machine, different rack
  • Remote machine, different datacenter

[Figure: moving down this hierarchy, capacity grows while latency rises and bandwidth drops.]

SLIDE 8

Latency numbers every programmer should know

Demo: https://colin-scott.github.io/personal_website/research/interactive_latency.html
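For a rough sense of scale, the classic numbers behind that page (popularized by Jeff Dean; approximate, and shrinking year over year):

  • L1 cache reference: ~0.5 ns
  • Main memory reference: ~100 ns
  • Read 1 MB sequentially from memory: ~250 µs
  • Round trip within the same datacenter: ~500 µs
  • Read 1 MB sequentially from SSD: ~1 ms
  • Disk seek: ~10 ms
  • Read 1 MB sequentially from disk: ~20 ms
  • Packet round trip California ↔ Netherlands ↔ California: ~150 ms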

SLIDE 9

The anatomy of a data center

Google’s data center video: https://youtu.be/XZmGGAbHqa0

SLIDE 10

Abstraction

[Figure: the abstraction layer over a cluster of computers (storage/computing).]

SLIDE 11

Distributed File System

How can we store a large file on a distributed system?

SLIDE 12

Assume that we have 20 identical networked servers, each with 100 TB of disk space. How would you store a file on these servers? This is the fundamental question in distributed file systems.

How do you store this file?

[Figure: a 200 TB file, File.txt, above twenty servers S1 … S20, each with 100 TB of disk.]

SLIDE 13

We can split the file into smaller chunks.

Divide into smaller chunks

[Figure: File.txt divided into chunks, above the twenty 100 TB servers S1 … S20.]

SLIDE 14

And assign the chunks (e.g., randomly) to the servers.

Assign chunks to servers

[Figure: chunks 1–8 of File.txt assigned to servers among S1 … S20.]

SLIDE 15

We need to track where each chunk is stored so that we can retrieve the file.

Keep track of the chunks using a master server

[Figure: a master server holds the chunk map (1 → S1, 2 → S3, …, 8 → S19) for File.txt on servers S1 … S20.]

SLIDE 16

If a server that contains one of the chunks fails, the file becomes corrupted. Since the failure rate of commodity servers is high, we need to figure out a solution.

What happens when a server fails?!

[Figure: the same chunk map (1 → S1, 2 → S3, …, 8 → S19), with one of the servers failed.]

SLIDE 17

If each chunk is stored on multiple servers, then when a server fails there is a backup copy. The number of copies determines how much resilience we get.

Store each chunk on multiple servers

FAULT TOLERANCE via REPLICATION

[Figure: chunks 1–8 of File.txt, each stored on multiple servers among S1 … S20.]
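A minimal Python sketch of the made-up file system built up over the last few slides: split a file into fixed-size chunks, scatter each chunk across randomly chosen servers, and record the placement in a master index. All names and sizes here are illustrative, not Hadoop's API; the 64 MB chunk size anticipates HDFS's typical block size.

```python
import random

CHUNK_SIZE = 64 * 1024 * 1024               # 64 MB, anticipating HDFS's block size
SERVERS = [f"S{i}" for i in range(1, 21)]   # twenty 100 TB servers
REPLICATION = 3                             # copies per chunk, for fault tolerance

def store_file(name, size, master_index):
    """Split a file into chunks and record chunk -> servers in the master index."""
    num_chunks = -(-size // CHUNK_SIZE)     # ceiling division
    for chunk in range(num_chunks):
        # Each chunk goes to REPLICATION distinct, randomly chosen servers.
        master_index[(name, chunk)] = random.sample(SERVERS, REPLICATION)

master = {}                                 # the master server's chunk map
store_file("File.txt", 10 * CHUNK_SIZE, master)
print(master[("File.txt", 0)])              # e.g. ['S7', 'S2', 'S19']
```

Losing any one server is now survivable: a chunk disappears only if all of its randomly chosen replicas fail at once.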

SLIDE 18

From our made-up distributed file system to a real one

SLIDE 19

Hadoop Distributed File System (HDFS)

Adapted from Erik Jonsson (UT Dallas)

SLIDE 20

Goals of HDFS

  • Very Large Distributed File System
      • 10K nodes, 100 million files, 10 PB
  • Assumes Commodity Hardware
      • Files are replicated to handle hardware failure
      • Detect failures and recover from them
  • Optimized for Batch Processing
      • Provides very high aggregate bandwidth
SLIDE 21

HDFS is not like a typical file system you would use on Windows or Linux; it was designed specifically for Hadoop. It cannot perform some of the operations that other file systems can, such as random writes. Instead, it is optimized for large sequential reads and append-only writes.

Distributed File System

  • Data Coherency
      • Write-once-read-many access model
      • Client can only append to existing files
  • Files are broken up into blocks
      • Typically 64 MB block size
      • Each block replicated on multiple DataNodes
  • Intelligent Client
      • Client can find location of blocks
      • Client accesses data directly from DataNode
SLIDE 22

Note that the namenode is relatively lightweight: it just stores where the data is located on the datanodes, not the actual data. There may still be a redundant namenode standing by in case the primary one fails. The HDFS client gets location information from the namenode and then interacts with the datanodes directly to fetch the data. Note that the namenode has to communicate with the datanodes to ensure consistency and redundancy of the data (e.g., when a new replica needs to be created).

HDFS Architecture

Adapted from (Ghemawat et al., SOSP 2003)

[Figure: an Application uses the HDFS Client. The client sends (file name, block id) to the HDFS namenode, which maintains the file namespace (e.g., /foo/bar → block 3df2), and receives (block id, block location). The client then sends (block id, byte range) to an HDFS datanode and receives block data. The namenode sends instructions to the datanodes and receives datanode state; each datanode stores blocks in its local Linux file system.]
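To make the message flow in the figure concrete, here is a hedged Python sketch of the read path; Namenode and Datanode are plain stand-in objects, not the real HDFS classes or RPC interfaces.

```python
class Namenode:
    def __init__(self):
        # File namespace: file name -> list of block ids.
        self.namespace = {"/foo/bar": ["3df2"]}
        # Block id -> datanodes holding a replica.
        self.locations = {"3df2": ["datanode1", "datanode2", "datanode3"]}

    def lookup(self, file_name, block_index):
        """(file name, block index) -> (block id, block locations)."""
        block_id = self.namespace[file_name][block_index]
        return block_id, self.locations[block_id]

class Datanode:
    def __init__(self, blocks):
        self.blocks = blocks  # block id -> bytes on the local Linux file system

    def read(self, block_id, start, length):
        """(block id, byte range) -> block data."""
        return self.blocks[block_id][start:start + length]

namenode = Namenode()
datanodes = {"datanode1": Datanode({"3df2": b"hello, hdfs!"})}

# The client asks the namenode where the block lives, then reads it directly
# from a datanode; the actual bytes never pass through the namenode.
block_id, where = namenode.lookup("/foo/bar", 0)
print(datanodes[where[0]].read(block_id, 0, 5))  # b'hello'
```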

SLIDE 23

Functions of a NameNode

  • Manages File System Namespace
      • Maps a file name to a set of blocks
      • Maps a block to the DataNodes where it resides
  • Cluster Configuration Management
  • Replication Engine for Blocks
SLIDE 24

NameNode Metadata

  • Metadata in Memory
      • The entire metadata is in main memory
      • No demand paging of metadata
  • Types of metadata
      • List of files
      • List of blocks for each file
      • List of DataNodes for each block
      • File attributes, e.g., creation time, replication factor
  • A Transaction Log
      • Records file creations, file deletions, etc.
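A small sketch of what these structures might look like, with all names hypothetical; reads are served purely from memory, while the transaction log records only mutations.

```python
import time

files = {}        # file name -> list of block ids
blocks = {}       # block id -> list of DataNodes holding a replica
attributes = {}   # file name -> file attributes
log = []          # transaction log: file creations, deletions, etc.

def create_file(name, replication=3):
    files[name] = []
    attributes[name] = {"ctime": time.time(), "replication": replication}
    log.append(("create", name))   # mutations are logged...

def lookup(name):
    return files.get(name)         # ...but reads never touch the log
```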
SLIDE 25

DataNode

  • A Block Server
      • Stores data in the local file system (e.g., ext3)
      • Stores metadata of a block (e.g., CRC)
      • Serves data and metadata to Clients
  • Block Report
      • Periodically sends a report of all existing blocks to the NameNode
  • Facilitates Pipelining of Data
      • Forwards data to other specified DataNodes
SLIDE 26

Block Placement

  • Current Strategy
      • One replica on local node
      • Second replica on a remote rack
      • Third replica on same remote rack
      • Additional replicas are randomly placed
  • Clients read from nearest replicas
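A sketch of that strategy, assuming a made-up cluster map (rack_of); the real placement code also accounts for free space and load, which is omitted here.

```python
import random

def place_replicas(writer, nodes, rack_of, n=3):
    """Pick n nodes for a block written at `writer` (illustrative only)."""
    replicas = [writer]                                  # 1st: local node
    remote = [m for m in nodes if rack_of[m] != rack_of[writer]]
    second = random.choice(remote)                       # 2nd: a remote rack
    replicas.append(second)
    peers = [m for m in remote
             if rack_of[m] == rack_of[second] and m != second]
    if n >= 3 and peers:
        replicas.append(random.choice(peers))            # 3rd: same remote rack
    while len(replicas) < n:                             # extras: random
        m = random.choice(nodes)
        if m not in replicas:
            replicas.append(m)
    return replicas

rack_of = {"n1": "r1", "n2": "r1", "n3": "r2",
           "n4": "r2", "n5": "r3", "n6": "r3"}
print(place_replicas("n1", list(rack_of), rack_of))  # e.g. ['n1', 'n5', 'n6']
```

The design intuition: one local copy for fast writes, and two copies on a single remote rack so a whole-rack failure cannot destroy all replicas while only one cross-rack transfer is needed.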
SLIDE 27

Heartbeats

  • DataNodes send heartbeats to the NameNode
      • Once every 3 seconds
  • NameNode uses heartbeats to detect DataNode failure
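A toy version of this failure detector: the 3-second interval is from the slide, while the dead-after threshold (HDFS's default is on the order of ten minutes) is an assumption here.

```python
import time

HEARTBEAT_INTERVAL = 3.0   # seconds, per the slide
DEAD_AFTER = 630.0         # assumed threshold; HDFS's default is ~10 minutes

last_heard = {}            # DataNode id -> timestamp of last heartbeat

def on_heartbeat(datanode):
    """Called whenever a heartbeat arrives from a DataNode."""
    last_heard[datanode] = time.time()

def failed_datanodes(now=None):
    """DataNodes presumed dead: silent for longer than DEAD_AFTER."""
    now = time.time() if now is None else now
    return [d for d, t in last_heard.items() if now - t > DEAD_AFTER]
```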
SLIDE 28

Replication Engine

  • NameNode detects DataNode failures
      • Chooses new DataNodes for new replicas
      • Balances disk usage
      • Balances communication traffic to DataNodes
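A sketch of the re-replication step only: when a DataNode is declared dead, every block it held is under-replicated, and a new target is chosen, here simply the least-full node that lacks the block (the traffic-balancing part is omitted).

```python
def re_replicate(dead, block_locations, disk_used):
    """block_locations: block id -> set of DataNodes; disk_used: node -> bytes."""
    for block, nodes in block_locations.items():
        if dead in nodes:
            nodes.discard(dead)
            candidates = [m for m in disk_used if m not in nodes and m != dead]
            target = min(candidates, key=disk_used.get)  # balance disk usage
            nodes.add(target)  # a surviving replica is then copied to target

# Hypothetical usage: node "n2" dies; its blocks get new homes.
locations = {"b1": {"n1", "n2"}, "b2": {"n2", "n3"}}
re_replicate("n2", locations, {"n1": 10, "n3": 20, "n4": 5})
print(locations)  # b1 and b2 each gain a replica on a surviving node
```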
SLIDE 29

HDFS Demo

SLIDE 30

Google File System (GFS)

Terminology differences:

  • GFS master = Hadoop namenode
  • GFS chunkservers = Hadoop datanodes

Implementation differences:

  • Different consistency model for file appends
  • Implementation language
  • Performance

SLIDE 31

Abstraction

[Figure: HDFS and MapReduce as the abstraction layer over a cluster of computers (storage/computing).]

SLIDE 32

Hadoop Cluster Architecture

SLIDE 33

SAN: Storage Area Network

How do we get data to the workers?

Let’s consider a typical supercomputer…

[Figure: compute nodes connected to a SAN.]

SLIDE 34

This makes sense for compute-intensive tasks, as the computations (for some chunk of data) are likely to take a long while even on such sophisticated hardware, so the communication costs are greatly outweighed by the computation costs. For data-intensive tasks, the computations (for some chunk of data) aren’t likely to take nearly as long, so the computation costs are greatly outweighed by the communication costs. We are likely to experience latency and bottlenecks even with high-speed transfers.

Compute-Intensive vs. Data-Intensive

Why does this make sense for compute-intensive tasks?
What’s the issue for data-intensive tasks?

[Figure: compute nodes connected to a SAN.]

SLIDE 35

If a server is responsible for both data storage and processing, Hadoop can do a lot of optimization. For example, when assigning MapReduce tasks to servers, Hadoop considers which servers hold which parts of the file locally, to minimize copying over the network. If all of the data can be processed locally where it is stored, there is no need to move the data at all.

What’s the solution?

Don’t move data to workers… move workers to the data!
Key idea: co-locate storage and compute

Start up workers on the nodes that hold the data (see the sketch below).
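A hedged sketch of that idea: given the block → replica map from HDFS, prefer launching each map task on a free node that already stores its input block. The real Hadoop scheduler also falls back to rack-local placement, which is left out here; the example assumes at least as many free nodes as blocks.

```python
def assign_tasks(blocks, replicas, free_nodes):
    """blocks: input block ids; replicas: block id -> set of nodes holding it."""
    free, assignment = set(free_nodes), {}
    for block in blocks:
        local = replicas[block] & free       # nodes already holding the data
        node = local.pop() if local else free.pop()  # else: forces a network copy
        assignment[block] = node
        free.discard(node)
    return assignment

replicas = {"b1": {"n1", "n2"}, "b2": {"n2", "n3"}, "b3": {"n1", "n3"}}
print(assign_tasks(["b1", "b2", "b3"], replicas, {"n1", "n2", "n3"}))
# Every task lands on a node that already stores its input: no data movement.
```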

SLIDE 36

This figure shows how computation and storage are co-located on a Hadoop cluster. The Node Manager manages the tasks running on a node (e.g., if we have spare resources, run the next task assigned to us). The Resource Manager is responsible for managing the available resources in the cluster.

Putting everything together…

[Figure: a Hadoop cluster in which each worker node runs a Node Manager and a DataNode over the local Linux file system, alongside a NameNode and a Resource Manager.]