Distributed Systems

Author: Sukumar Ghosh

Publisher: Taylor & Francis

Format: ePub

Print ISBN: 9780367659127

Edition: 2

Publication year: 2015

Price: 6,690 ISK

Table of Contents

  • Front Matter
  • CHAPMAN & HALL/CRC: COMPUTER and INFORMATION SCIENCE SERIES
  • PUBLISHED TITLES
  • Dedication
  • Preface
  • Acknowledgments
  • Author
  • I Background Materials
  • CHAPTER 1 Introduction
  • 1.1 WHAT IS A DISTRIBUTED SYSTEM?
  • 1.2 WHY DISTRIBUTED SYSTEMS?
  • 1.3 EXAMPLES OF DISTRIBUTED SYSTEMS
  • 1.4 IMPORTANT ISSUES IN DISTRIBUTED SYSTEMS
  • FIGURE 1.1 Examples of network topologies: (a) ring, (b) directed tree, and (c) 3D cube. Each node represents a process, and each edge connecting a pair of nodes represents a channel.
  • 1.5 COMMON SUBPROBLEMS
  • 1.6 IMPLEMENTING A DISTRIBUTED SYSTEM
  • 1.7 PARALLEL VERSUS DISTRIBUTED SYSTEMS
  • 1.8 BIBLIOGRAPHIC NOTES
  • EXERCISES
  • FIGURE 1.2 Robot B crossing a road and trying to avoid collision with robot A.
  • CHAPTER 2 Interprocess Communication: An Overview
  • 2.1 INTRODUCTION
  • 2.1.1 Processes and Threads
  • 2.1.2 Client–Server Model
  • 2.1.3 Middleware
  • FIGURE 2.1 Understanding middleware.
  • 2.2 NETWORK PROTOCOLS
  • 2.2.1 Ethernet
  • 2.2.2 Wireless Networks
  • FIGURE 2.2 (a) Nodes 0, 1, 2 are within the range of node 3, but outside the ranges of nodes 4, 5, 6. Nodes 3, 4, 6 are within the range of node 5. Nodes 0, 1, 2 are within one another’s range and so are nodes 4, 6. (b) The topology of the network.
  • 2.2.3 OSI Model
  • FIGURE 2.3 The seven-layer OSI model.
  • 2.2.4 IP
  • 2.2.5 Transport Layer Protocols
  • FIGURE 2.4 LANs connected to WANs that serve as the backbone.
  • 2.2.6 Interprocess Communication Using Sockets
  • 2.3 NAMING
  • FIGURE 2.5 A naming hierarchy.
  • 2.3.1 Domain Name Service
  • 2.3.2 Naming Service for Mobile Clients
  • 2.4 REMOTE PROCEDURE CALL
  • 2.4.1 Implementing RPC
  • FIGURE 2.6 An RPC.
  • 2.4.2 Sun ONC/RPC
  • 2.5 REMOTE METHOD INVOCATION
  • FIGURE 2.7 Remote object invocation scheme.
  • 2.6 MESSAGES
  • 2.6.1 Transient and Persistent Messages
  • 2.6.2 Streams
  • 2.7 WEB SERVICES
  • 2.8 EVENT NOTIFICATION
  • 2.9 VIRTUALIZATION: CLOUD COMPUTING
  • 2.9.1 Classification of Cloud Services
  • 2.9.2 MapReduce
  • FIGURE 2.8 The architecture of MapReduce: the mappers receive their inputs from the files that store data. Each box is physically mapped to a computing node.
  • 2.9.3 Hadoop
  • 2.10 MOBILE AGENTS
  • 2.11 BASIC GROUP COMMUNICATION SERVICES
  • 2.12 CONCLUDING REMARKS
  • 2.13 BIBLIOGRAPHIC NOTES
  • EXERCISES
  • II Foundational Topics
  • CHAPTER 3 Models for Communication
  • 3.1 NEED FOR A MODEL
  • 3.2 MESSAGE-PASSING MODEL FOR INTERPROCESS COMMUNICATION
  • 3.2.1 Process Actions
  • 3.2.2 Channels
  • FIGURE 3.1 Channel from a process P to a process Q.
  • 3.2.3 Synchronous versus Asynchronous Systems
  • 3.2.4 Real-Time Systems
  • 3.3 SHARED VARIABLES
  • FIGURE 3.2 (a) The state-reading model: a directed edge (2, 3) means that 3 can read the state of 2. (b) The link register model: 1 and 3 access the information from the link registers R21 and R23, respectively, which are updated by 2. The contents of these registers are limited by how much 2 wants to share with them.
  • 3.3.1 Linda
  • 3.4 MODELING MOBILE AGENTS
  • 3.5 RELATIONSHIP AMONG MODELS
  • 3.5.1 Strong and Weak Models
  • 3.5.2 Implementing a FIFO Channel Using a Non-FIFO Channel
  • FIGURE 3.3 Implementation of a channel of capacity max from P to Q.
  • 3.5.3 Implementing Message Passing Using Shared Memory
  • 3.5.4 Implementing Shared Memory Using Message Passing
  • FIGURE 3.4 (a) A shared-memory location X; (b) its equivalent representation in message-passing model.
  • 3.5.5 Impossibility Result with Channels
  • 3.6 CLASSIFICATION BASED ON SPECIAL PROPERTIES
  • 3.6.1 Reactive versus Transformational Systems
  • 3.6.2 Named versus Anonymous Systems
  • 3.7 COMPLEXITY MEASURES
  • Example 3.1: Multicasting in a Hypercube
  • FIGURE 3.5 Multicasting in a 3-cube: process 0 is the initiator.
  • Example 3.2
  • 3.8 CONCLUDING REMARKS
  • 3.9 BIBLIOGRAPHIC NOTES
  • EXERCISES
  • FIGURE 3.6 Pipeline of n processes 1 through n.
  • CHAPTER 4 Representing Distributed Algorithms: Syntax and Semantics
  • 4.1 INTRODUCTION
  • 4.2 GUARDED ACTIONS
  • FIGURE 4.1 The state transitions in program uncertain.
  • 4.3 NONDETERMINISM
  • 4.4 ATOMIC OPERATIONS
  • 4.5 FAIRNESS
  • 4.6 CENTRAL VERSUS DISTRIBUTED SCHEDULERS
  • FIGURE 4.2 A system of two processes.
  • FIGURE 4.3 Overlapped actions with distributed schedulers.
  • Theorem 4.1
  • 4.7 CONCLUDING REMARKS
  • 4.8 BIBLIOGRAPHIC NOTES
  • EXERCISES
  • FIGURE 4.4 Three studious philosophers 0, 1, 2.
  • CHAPTER 5 Program Correctness
  • 5.1 INTRODUCTION
  • FIGURE 5.1 The history of a distributed system: the circles represent states and the arcs represent actions causing state transitions.
  • 5.2 CORRECTNESS CRITERIA
  • 5.2.1 Safety Properties
  • 5.2.2 Liveness Properties
  • Example 5.1
  • FIGURE 5.2 A system of four processes: each process tries to acquire a color that is different from the colors of its neighbors.
  • FIGURE 5.3 A partial history of the system in Figure 5.2 where the edges are labeled with the identifiers of the processes causing that transition: it shows an infinite behavior ABCDEFGHIJKLA. Note that X, Y are terminal states and are reachable, but there is no guarantee that the adversary will choose the transitions leading to those states.
  • 5.3 CORRECTNESS PROOFS
  • 5.3.1 Quick Review of Propositional Logic
  • Example
  • FIGURE 5.4 The basic axioms of propositional logic.
  • 5.3.2 Brief Overview of Predicate Logic
  • FIGURE 5.5 Some widely used axioms in predicate logic.
  • 5.4 ASSERTIONAL REASONING: PROVING SAFETY PROPERTIES
  • Example 5.2
  • FIGURE 5.6 A two-process system.
  • 5.5 PROVING LIVENESS PROPERTIES USING WELL-FOUNDED SETS
  • Example 5.3: Phase Synchronization Problem
  • FIGURE 5.7 An array of three-phase clocks: every clock ticks as 0, 1, 2, 0, 1, 2,….
  • 5.6 PROGRAMMING LOGIC
  • Example 5.4
  • Example 5.5
  • Example 5.6
  • Example 5.7
  • 5.7 PREDICATE TRANSFORMERS
  • 5.8 CONCLUDING REMARKS
  • 5.9 BIBLIOGRAPHIC NOTES
  • EXERCISES
  • CHAPTER 6 Time in a Distributed System
  • 6.1 INTRODUCTION
  • 6.1.1 Physical Time
  • 6.1.2 Sequential and Concurrent Events
  • 6.2 LOGICAL CLOCKS
  • FIGURE 6.1 A space–time view of events in a distributed system consisting of three processes P, Q, R: the horizontal lines indicate the timelines of the individual processes, and the diagonal lines represent the flow of messages between processes. Each event is tagged with its logical clock value.
  • FIGURE 6.2 A network of three processes connected by FIFO channels.
  • 6.3 VECTOR CLOCKS
  • FIGURE 6.3 Example of vector time stamps.
  • 6.4 PHYSICAL CLOCK SYNCHRONIZATION
  • 6.4.1 Preliminary Definitions
  • FIGURE 6.4 The clock reading when the drawing of this diagram was completed.
  • FIGURE 6.5 The cumulative drift between two clocks drifting apart at the rate r is brought closer after every resynchronization interval R.
  • 6.4.2 Clock Reading Error
  • 6.4.3 Algorithms for Internal Synchronization
  • FIGURE 6.6 The readings of the clocks (a) before and (b) after one round of the Berkeley algorithm: A is the leader, and C is an outlier whose value lies outside the permissible limit of 0:00:06 chosen for this system.
  • FIGURE 6.7 Two nonfaulty clocks i and j reading the value of a faulty clock k.
  • 6.4.4 Algorithms for External Synchronization
  • FIGURE 6.8 An illustration of Cristian’s algorithm for external synchronization: three different possibilities are shown.
  • FIGURE 6.9 A network of time servers used in NTP. The top-level devices (stratum 0) have the highest precision.
  • FIGURE 6.10 The exchange of messages between two time servers.
  • 6.5 CONCLUDING REMARKS
  • 6.6 BIBLIOGRAPHIC NOTES
  • EXERCISES
  • FIGURE 6.11 A sample communication between processes P and Q.
  • III Important Paradigms
  • CHAPTER 7 Mutual Exclusion
  • 7.1 INTRODUCTION
  • 7.2 SOLUTIONS ON MESSAGE-PASSING SYSTEMS
  • 7.2.1 Lamport’s Solution
  • 7.2.2 Ricart–Agrawala’s Solution
  • 7.2.3 Maekawa’s Solution
  • FIGURE 7.1 An example showing the first two phases of Maekawa’s modified algorithm. Process 2 sends an ack to 5 and a failed message to 1. But when 2 later receives a request with a time stamp 12, it sends an inquire message to 5 to find out if it indeed entered its CS.
  • 7.3 TOKEN-PASSING ALGORITHMS
  • 7.3.1 Suzuki–Kasami Algorithm
  • 7.3.2 Raymond’s Algorithm
  • FIGURE 7.2 Two configurations in Raymond’s algorithm: (a) process 3 holds the token and 1 and 4 make requests in that order; (b) the token is transferred to 1.
  • 7.4 SOLUTIONS ON THE SHARED-MEMORY MODEL
  • 7.4.1 Peterson’s Algorithm
  • 7.5 MUTUAL EXCLUSION USING SPECIAL INSTRUCTIONS
  • 7.5.1 Solution Using Test-and-Set
  • 7.5.2 Solution Using Load-Linked and Store-Conditional
  • 7.6 GROUP MUTUAL EXCLUSION
  • 7.7 CONCLUDING REMARKS
  • 7.8 BIBLIOGRAPHIC NOTES
  • EXERCISES
  • FIGURE 7.3 Cars crossing a narrow bridge on a river.
  • CHAPTER 8 Distributed Snapshot
  • 8.1 INTRODUCTION
  • FIGURE 8.1 A token circulating through a system of three processes numbered 0, 1, and 2.
  • 8.2 PROPERTIES OF CONSISTENT SNAPSHOTS
  • 8.2.1 Cuts and Consistent Cuts
  • FIGURE 8.2 Two cuts of a distributed system. The broken lines represent cuts, and the solid directed edges represent message transmission.
  • 8.3 CHANDY–LAMPORT ALGORITHM
  • FIGURE 8.3 An example illustrating colors of action and messages: 0 is the initiator, W, white message; R, red message; and the star represents a marker.
  • Lemma 8.1
  • FIGURE 8.4 (a) The ideal view of the snapshot state: w(i) and r(i) denote, respectively, the last white action and the first red action by process i. (b) An observed sequence is being reduced to the ideal view via a number of swap actions.
  • Theorem 8.1
  • 8.3.1 Two Examples
  • FIGURE 8.5 A sequence of global states in a system of two communicating state machines.
  • FIGURE 8.6 A partial history of the system in Figure 8.5.
  • 8.4 LAI–YANG ALGORITHM
  • 8.5 DISTRIBUTED DEBUGGING
  • FIGURE 8.7 Illustration of distributed debugging. (a) Two communicating processes P and Q updating variables x and y. (b) In the lattice of states, Sij denotes a consistent global state after i actions by process P and j actions by process Q. Inconsistent states are left out.
  • 8.5.1 Constructing the State Lattice
  • 8.5.2 Evaluating Predicates
  • 8.6 CONCLUDING REMARKS
  • 8.7 BIBLIOGRAPHIC NOTES
  • EXERCISES
  • FIGURE 8.8 The events in a system of three processes numbered 0, 1, and 2.
  • FIGURE 8.9 A pair of communicating processes S and T.
  • CHAPTER 9 Global State Collection
  • 9.1 INTRODUCTION
  • 9.2 ELEMENTARY ALGORITHM FOR ALL-TO-ALL BROADCASTING
  • Lemma 9.1
  • Lemma 9.2
  • FIGURE 9.1 Two processes, i and k, connected by a channel.
  • Theorem 9.1
  • 9.3 TERMINATION-DETECTION ALGORITHMS
  • FIGURE 9.2 A computation graph with active (black) and passive (white) processes. (a) 1 engaged 2 and both are active, (b) 1 turned passive but 2, 3 are active, (c) 1, 4 became passive, but 5 is active and is trying to engage 2.
  • 9.3.1 Dijkstra–Scholten Algorithm
  • 9.3.2 Termination Detection on a Unidirectional Ring
  • FIGURE 9.3 A ring of n processes: process 0 is the initiator. Process n − 2, after turning passive and releasing the token to its successor, received the message m from 2.
  • Theorem 9.2
  • 9.3.3 Credit-Recovery Algorithm for Termination Detection
  • 9.4 WAVE ALGORITHMS
  • 9.4.1 Propagation of Information with Feedback
  • FIGURE 9.4 The execution of a PIF algorithm with node 0 as the initiator: the broken lines reflect the parent relationship, and the dark shade of the nodes indicates the reception of the message M through all the incident channels. (a)–(f) show the different phases of the algorithm.
  • 9.5 DISTRIBUTED DEADLOCK DETECTION
  • 9.5.1 Resource Deadlock and Communication Deadlock
  • 9.5.2 Detection of Resource Deadlock
  • FIGURE 9.5 The WFG with five processes. Processes 0, 3, 4 are deadlocked.
  • Theorem 9.4
  • 9.5.3 Detection of Communication Deadlock
  • FIGURE 9.6 An example of a communication deadlock.
  • 9.6 CONCLUDING REMARKS
  • 9.7 BIBLIOGRAPHIC NOTES
  • EXERCISES
  • FIGURE 9.7 A WFG using the OR model.
  • CHAPTER 10 Graph Algorithms
  • 10.1 INTRODUCTION
  • 10.2 ROUTING ALGORITHMS
  • 10.2.1 Computation of Shortest Path
  • FIGURE 10.1 Shortest path computation in a weighted graph: For each node i > 0, the directed edge (represented by a broken line) points to its parent node.
  • Lemma 10.1
  • 10.2.1.1 Complexity Analysis
  • FIGURE 10.2 Analysis of the complexity of asynchronous Bellman–Ford algorithm.
  • 10.2.1.2 Chandy–Misra Modification of the Shortest Path Algorithm
  • 10.2.2 Distance-Vector Routing
  • FIGURE 10.3 (a) Node i updates D(i, k) from multiple advertisements received by it. (b) The distance vectors when link (2, 3) crashes.
  • 10.2.3 Link-State Routing
  • 10.2.4 Interval Routing
  • FIGURE 10.4 An illustration of compact routing.
  • 10.2.4.1 Interval Routing Rule
  • FIGURE 10.5 An example of interval routing: (a) ports and message destinations. (b) A labeled tree of 11 nodes. Node labels appear inside the circles, and port numbers are assigned to each port connecting to a neighbor.
  • FIGURE 10.6 (a) An optimal labeling scheme on a ring of six nodes: Each node i has two ports with labels (i + 1) mod 6 and (i + 3) mod 6. (b) A network for which no linear interval-labeling scheme exists.
  • 10.2.5 Prefix Routing
  • FIGURE 10.7 Prefix routing in a network. The broken lines denote the nontree edges and the directed edges point to the parent of a node.
  • 10.3 GRAPH TRAVERSAL
  • 10.3.1 Spanning Tree Construction
  • FIGURE 10.8 A spanning tree generated by Chang’s algorithm. The directed edge from each nonroot node points to its parent.
  • 10.3.2 Tarry’s Graph Traversal Algorithm
  • FIGURE 10.9 A possible traversal route 0 1 2 5 3 1 4 6 2 6 4 1 3 5 2 1 0. The directed edges show the parent relationship, and these edges induce a spanning tree.
  • Lemma 10.2
  • Lemma 10.3
  • 10.3.3 Minimum Spanning Tree Construction
  • Lemma 10.4
  • FIGURE 10.10 An example showing two fragments T1 and T2 being joined by a minimum cost edge e into a larger fragment.
  • 10.3.3.1 Overall Strategy
  • 10.3.3.2 Detecting the Least Weight Outgoing Edge
  • Lemma 10.5
  • FIGURE 10.11 (a) Merge operation. (b) Absorb operation.
  • FIGURE 10.12 An example of MST formation using [GHS83]: The shaded nodes are the roots of the fragments and the thick lines denote the tree edges. In part (a), node 3 sends a join request to 5, but 5 does not respond until it has formed a fragment by joining with node 2 (part b). Part (c) shows the final absorb operation that leads to the formation of the MST.
  • Lemma 10.6
  • 10.3.3.3 Message Complexity
  • 10.4 GRAPH COLORING
  • FIGURE 10.13 A graph that can be colored with two colors 0 and 1.
  • 10.4.1 (D + 1)-Coloring Algorithm
  • Theorem 10.1
  • Theorem 10.2
  • 10.4.2 6-Coloring of Planar Graphs
  • Theorem 10.3
  • Corollary 10.1
  • FIGURE 10.14 An example of generating a dag from a planar graph: The core nodes are shaded. (a) The core node 0 executes its action. (b) Core nodes 1, 3, 2 execute their actions. In fact, now all nodes are core nodes, and the execution of their actions in any order will lead to the final dag.
  • Theorem 10.4
  • 10.5 COLE–VISHKIN REDUCTION ALGORITHM FOR TREE COLORING
  • FIGURE 10.15 One step of the execution of algorithm reduce. (a) Initial colors. (b) After one step.
  • Theorem 10.5
  • FIGURE 10.16 Illustration of the shift-down and the palette reduction steps.
  • 10.6 MAXIMAL INDEPENDENT SET: LUBY’S ALGORITHM
  • FIGURE 10.17 Examples of independent sets and MISs. Note: {a, d, h} is an independent set; {a, c, d, f, g, j} is a maximal independent set; {a, d, h, f} is also a maximal independent set.
  • Lemma 10.7
  • Lemma 10.8
  • Lemma 10.9
  • Theorem 10.6
  • FIGURE 10.18 At least 1/3 of the edges incident on a bad node are bad.
  • 10.7 CONCLUDING REMARKS
  • 10.8 BIBLIOGRAPHIC NOTES
  • EXERCISES
  • FIGURE 10.19 Compute a spanning tree of this graph using Chang’s algorithm.
  • FIGURE 10.20 A graph for interval routing in problem 10.8.
  • FIGURE 10.21 (a) 3-cube. (b) 4-cube.
  • FIGURE 10.22 A tree for linear interval routing in problem 10.9.
  • CHAPTER 11 Coordination Algorithms
  • 11.1 INTRODUCTION
  • 11.2 LEADER ELECTION
  • 11.2.1 Bully Algorithm
  • Theorem 11.1
  • 11.2.2 Maxima Finding on a Ring
  • 11.2.2.1 Chang–Roberts Algorithm
  • FIGURE 11.1 Illustration of the execution of the Chang–Roberts election algorithm: the token from process 3 reached process (n − 1) and processes 2 and 1 turned black.
  • 11.2.2.2 Franklin’s Algorithm
  • Theorem 11.2
  • 11.2.2.3 Peterson’s Algorithm
  • FIGURE 11.2 Execution of Franklin’s algorithm: The shaded processes are black. After two rounds, process 9 is identified as the leader (maxima).
  • FIGURE 11.3 One round of execution of Peterson’s algorithm: For every process, its id appears inside the circle, and its alias appears outside the circle. Shaded circles represent black processes. After one more round, only 5 remains red.
  • Lemma 11.2
  • Theorem 11.3
  • 11.2.3 Election in Arbitrary Networks
  • 11.2.4 Election in Anonymous Networks
  • 11.3 SYNCHRONIZERS
  • 11.3.1 ABD Synchronizer
  • FIGURE 11.4 Simulation of a clock tick by an ABD synchronizer: (a) 1 and 3 spontaneously initiate the synchronizer operation, initialize themselves, and send start signals to the noninitiators 0 and 2; (b) 0 and 2 wake up and complete the initialization. This completes the action of tick 0.
  • 11.3.2 Awerbuch’s Synchronizers
  • 11.3.2.1 α-Synchronizer
  • FIGURE 11.5 Partial trace of the execution of an α-synchronizer: the numbers inside the circles indicate the tick number that they are simulating. The process at the top starts the computation. The message m(0) sent by it wakes up its neighbors. (a)–(f) show six different phases.
  • 11.3.2.2 β-Synchronizer
  • 11.3.2.3 γ-Synchronizer
  • FIGURE 11.6 Clusters in a γ-synchronizer: The shaded nodes are the leaders in the clusters, and the thick lines are the intercluster edges.
  • TABLE 11.1 Timetable of the Control Signals in a γ-Synchronizer
  • 11.3.2.4 Performance of Synchronizer-Based Algorithms
  • 11.4 CONCLUDING REMARKS
  • 11.5 BIBLIOGRAPHIC NOTES
  • EXERCISES
  • FIGURE 11.7 A network of processes.
  • IV Faults and Fault-Tolerant Systems
  • CHAPTER 12 Fault-Tolerant Systems
  • 12.1 INTRODUCTION
  • 12.2 CLASSIFICATION OF FAULTS
  • 12.3 SPECIFICATION OF FAULTS
  • Example 12.1
  • Example 12.2
  • Example 12.3
  • 12.4 FAULT-TOLERANT SYSTEMS
  • 12.4.1 Masking Tolerance
  • 12.4.2 Nonmasking Tolerance
  • FIGURE 12.1 An illustration of fault recovery.
  • 12.4.3 Fail-Safe Tolerance
  • 12.4.4 Graceful Degradation
  • 12.4.5 Detection of Failures in Synchronous Systems
  • 12.5 TOLERATING CRASH FAILURES
  • 12.5.1 Double and Triple Modular Redundancy
  • FIGURE 12.2 (a) DMR that tolerates a single crash. (b) TMR that tolerates a single failure of arbitrary type: V is the voting unit.
  • 12.6 TOLERATING OMISSION FAILURES
  • FIGURE 12.3 Implementation of a reliable channel from S to R.
  • 12.6.1 Stenning’s Protocol
  • Theorem 12.1
  • 12.6.2 Sliding Window Protocol
  • FIGURE 12.4 The trace of a sliding window protocol with window size 4.
  • Theorem 12.2
  • Theorem 12.3
  • 12.6.3 Alternating Bit Protocol
  • FIGURE 12.5 A global state of the alternating bit protocol.
  • 12.6.4 How TCP Works
  • FIGURE 12.6 The exchange of messages in TCP.
  • 12.7 CONCLUDING REMARKS
  • 12.8 BIBLIOGRAPHIC NOTES
  • EXERCISES
  • CHAPTER 13 Distributed Consensus
  • 13.1 INTRODUCTION
  • Example 13.1
  • Example 13.2
  • Example 13.3
  • Example 13.4
  • 13.2 CONSENSUS IN ASYNCHRONOUS SYSTEMS
  • 13.2.1 Bivalent and Univalent States
  • Lemma 13.1
  • Lemma 13.2
  • TABLE 13.1 Array of Possible Initial States for a System of n Processes
  • Lemma 13.3
  • FIGURE 13.1 The transition from bivalent to univalent states.
  • FIGURE 13.2 The different scenarios of Lemma 13.3. (a) p reads and q writes. (b) Both p and q write, but on different variables.
  • Theorem 13.1
  • 13.3 CONSENSUS IN SYNCHRONOUS SYSTEMS: BYZANTINE GENERALS PROBLEM
  • 13.3.1 Solution with No Traitor
  • 13.3.2 Solution with Traitors: Interactive Consistency Criteria
  • 13.3.3 Consensus with Oral Messages
  • FIGURE 13.3 (a) Commander is loyal. (b) Commander is a traitor.
  • 13.3.3.1 Impossibility Result
  • Theorem 13.2
  • FIGURE 13.4 (a) and (b) Two look-alike systems.
  • Corollary 13.1
  • 13.3.3.2 OM(m) Algorithm
  • FIGURE 13.5 An illustration of OM(1) with four generals and one traitor: the messages at the upper level reflect the opening messages of OM(1), and those at the lower level reflect the OM(0) messages that are triggered by the upper level messages. (a) Lieutenant 3 is the traitor. (b) Commander 0 is the traitor.
  • FIGURE 13.6 A partial trace of the OM(2) algorithm with seven generals and two traitors: The messages received by the traitors at the lowest level are not shown, since they do not matter.
  • Lemma 13.4
  • FIGURE 13.7 The OM(r + 1) algorithm with n generals and m traitors.
  • Theorem 13.3
  • 13.3.4 Consensus Using Signed Messages
  • FIGURE 13.8 Interactive consistency using signed messages. (a) Lieutenant 1 detects that the message from 2 is forged, (b) both 1 and 2 accept all messages since no lieutenant forged any message.
  • Theorem 13.4
  • 13.4 PAXOS ALGORITHM
  • FIGURE 13.9 The setup in Paxos: each circle is a process that can act as any one of the three agents: proposer, acceptor, or learner.
  • 13.4.1 Safety Properties
  • 13.4.2 Liveness Properties
  • 13.5 FAILURE DETECTORS
  • 13.5.1 Solving Consensus Using Failure Detectors
  • FIGURE 13.10 A scenario with n = 6 and t = 2 showing a fraction of the communications: Process 4 sends data to 5 in round 1 and then crashes. Process 5 sends data in round 2 to 0 and 1 and then crashes. In spite of the crashes, processes 0–3 receive the input values of 4 and 5.
  • 13.5.1.1 Consensus Using P
  • 13.5.1.2 Consensus Using S
  • 13.5.1.3 Rationale
  • 13.5.1.4 Implementing a Failure Detector
  • 13.6 CONCLUDING REMARKS
  • 13.7 BIBLIOGRAPHIC NOTES
  • EXERCISES
  • CHAPTER 14 Distributed Transactions
  • 14.1 INTRODUCTION
  • 14.2 CLASSIFICATION OF TRANSACTIONS
  • 14.2.1 Flat Transactions
  • FIGURE 14.1 (a) A flat transaction and (b) a nested transaction.
  • 14.2.2 Nested Transactions
  • 14.2.3 Distributed Transactions
  • 14.3 IMPLEMENTING TRANSACTIONS
  • FIGURE 14.2 The handling of distributed transactions.
  • 14.4 CONCURRENCY CONTROL AND SERIALIZABILITY
  • 14.4.1 Testing for Serializability
  • Theorem 14.1
  • 14.4.2 Two-Phase Locking
  • Theorem 14.2
  • 14.4.3 Concurrency Control via Time Stamp Ordering
  • 14.5 ATOMIC COMMIT PROTOCOLS
  • FIGURE 14.3 Examples of concurrency control using time stamp ordering. Here, r[x] and w[x] refer to read and write operations on the shared variable x. (a)–(c) illustrate three different scenarios.
  • 14.5.1 One-Phase Commit
  • 14.5.2 Two-Phase Commit
  • FIGURE 14.4 (a) Implementation of 2PC and (b) participant 2 is blocked.
  • 14.5.3 Three-Phase Commit
  • 14.6 RECOVERY FROM FAILURES
  • 14.6.1 Stable Storage
  • FIGURE 14.5 The model of a stable storage: P performs the update operation and Q performs the inspect operation.
  • 14.6.2 Checkpointing and Rollback Recovery
  • FIGURE 14.6 An example of domino effect in uncoordinated checkpointing: The dark circles represent the local states of processes saved on stable storage. If R crashes after sending its last message m7 to Q, then the global state of the system will eventually roll back to (p0, q0, r0).
  • 14.6.3 Message Logging
  • FIGURE 14.7 Messages m1 and m3 have been logged by their receiving processes, but not m2. Process 0 crashes and then recovers. From the message log, m1 will be replayed, but not m2. This means the sending of m3, which is causally dependent on m2, may not be accounted for, and process 3 becomes an orphan.
  • 14.7 CONCLUDING REMARKS
  • 14.8 BIBLIOGRAPHIC NOTES
  • EXERCISES
  • FIGURE 14.8 Three concurrent transactions: T1, T2, and T3.
  • FIGURE 14.9 An example of a nested transaction.
  • FIGURE 14.10 A set of checkpoints in three concurrent processes: P, Q, and R.
  • CHAPTER 15 Group Communication
  • 15.1 INTRODUCTION
  • 15.2 ATOMIC MULTICAST
  • 15.3 IP MULTICAST
  • FIGURE 15.1 Two distribution trees for multicast in a network of seven nodes: the thick lines are the tree edges. (a) A source tree rooted at B and (b) a shared tree with E as the RP.
  • 15.3.1 Reverse Path Forwarding
  • FIGURE 15.2 An example of RPF with the source node S: The recipients discard the incoming packets represented by the broken arrows.
  • 15.4 APPLICATION LAYER MULTICAST
  • FIGURE 15.3 Two examples of application layer multicast with host 0 as the source. (a) The stress on the link from host 0 to router A is 3. (b) The stress on no link exceeds 2.
  • 15.5 ORDERED MULTICASTS
  • 15.5.1 Implementing Total Order Multicast
  • 15.5.1.1 Implementation Using a Sequencer
  • 15.5.1.2 Distributed Implementation
  • FIGURE 15.4 Every process will deliver the messages from the senders in the order (p, r, q).
  • 15.5.2 Implementing Causal Order Multicast
  • FIGURE 15.5 An example illustrating the delivery of messages per causal order: P2 postpones the delivery of m2 until it receives m1.
  • 15.6 RELIABLE MULTICAST
  • 15.6.1 Scalable Reliable Multicast
  • 15.6.2 Reliable Ordered Multicast
  • Theorem 15.1
  • 15.7 OPEN GROUPS
  • 15.7.1 View-Synchronous Group Communication
  • FIGURE 15.6 Visualizing view-synchronous group communication.
  • FIGURE 15.7 An unacceptable schedule of message delivery in a group of changing size.
  • 15.8 OVERVIEW OF TRANSIS
  • FIGURE 15.8 Two different scenarios in the delivery of a safe message. (a) All but C sent ack to A, (b) C sent ack to A, and also received a copy of the acks sent by A and B before the partition occurred.
  • 15.9 CONCLUDING REMARKS
  • 15.10 BIBLIOGRAPHIC NOTES
  • EXERCISES
  • CHAPTER 16 Replicated Data Management
  • 16.1 INTRODUCTION
  • 16.1.1 Reliability versus Availability
  • FIGURE 16.1 Two different ways of sharing a file F: (a) Users sharing a single copy and (b) each user has a local replica of the file.
  • 16.2 ARCHITECTURE OF REPLICATED DATA MANAGEMENT
  • 16.2.1 Passive versus Active Replication
  • FIGURE 16.2 Passive replication of servers.
  • FIGURE 16.3 An illustration of the primary-backup protocol. The broken lines represent the heartbeat messages.
  • FIGURE 16.4 An example of active replication with n = 4 clients and k = 4 servers. Each client’s message is multicast to every other server.
  • 16.2.2 Fault-Tolerant State Machines
  • 16.3 DATA-CENTRIC CONSISTENCY MODELS
  • 16.3.1 Strict Consistency
  • FIGURE 16.5 A DSM with four processes P, Q, R, S sharing a read–write object X.
  • 16.3.2 Linearizability
  • 16.3.3 Sequential Consistency
  • FIGURE 16.6 Two traces: (a) is linearizable but (b) is not linearizable. Here, ts denotes the time stamp of each action. Initially, x = y = 0.
  • FIGURE 16.7 (a) Sequential consistency is satisfied. (b) Sequential consistency is violated.
  • 16.3.4 Causal Consistency
  • FIGURE 16.8 The trace is causally consistent, but not sequentially consistent.
  • 16.3.5 FIFO Consistency
  • FIGURE 16.9 The trace satisfies FIFO consistency, but not causal consistency.
  • 16.4 CLIENT-CENTRIC CONSISTENCY PROTOCOLS
  • 16.4.1 Eventual Consistency
  • 16.4.2 Consistency Models for Mobile Clients
  • 16.4.2.1 Read-After-Read Consistency
  • 16.4.2.2 Write-After-Write Consistency
  • 16.4.2.3 Read-After-Write Consistency
  • 16.4.2.4 Write-After-Read Consistency
  • 16.5 IMPLEMENTATION OF DATA-CENTRIC CONSISTENCY MODELS
  • 16.6 QUORUM-BASED PROTOCOLS
  • FIGURE 16.10 A quorum system with N = 7, R = 4, W = 4. The old (value, version) of the data is (0, 0) and the updated (value, version) is (5, 1).
  • 16.7 REPLICA PLACEMENT
  • 16.8 BREWER’S CAP THEOREM
  • FIGURE 16.11 A value x is being updated to x′ across all the replicas in the system when the system partitioned: (a) before partition and (b) after partition.
  • 16.9 CASE STUDIES
  • 16.9.1 Replication Management in Coda
  • 16.9.2 Replication Management in Bayou
  • 16.9.3 Amazon Dynamo
  • FIGURE 16.12 (a) The key K is stored in the coordinating server SG and is also replicated in the next few servers like SH and SA. (b) The evolution of multiversion data is reflected by the values of the vector clocks.
  • 16.10 CONCLUDING REMARKS
  • 16.11 BIBLIOGRAPHIC NOTES
  • EXERCISES
  • FIGURE 16.13 A history of read and writes by two processes P, Q.
  • FIGURE 16.14 A composition of two different traces: trace (c) combines (a) and (b).
  • FIGURE 16.15 A shared-memory multiprocessor with write buffers.
  • CHAPTER 17 Self-Stabilizing Systems
  • 17.1 INTRODUCTION
  • FIGURE 17.1 The states and state transitions in a stabilizing system.
  • 17.2 THEORETICAL FOUNDATIONS
  • 17.3 STABILIZING MUTUAL EXCLUSION
  • 17.3.1 Mutual Exclusion on a Unidirectional Ring
  • FIGURE 17.2 A unidirectional ring of n processes.
  • Lemma 17.1
  • Lemma 17.2
  • Lemma 17.3
  • 17.3.2 Mutual Exclusion on a Bidirectional Array
  • FIGURE 17.3 An array of n processes.
  • TABLE 17.1 Possible Changes Caused by an Action of Process i
  • Lemma 17.4
  • Lemma 17.5
  • Lemma 17.6
  • Lemma 17.7
  • FIGURE 17.4 An illustration of convergence of the four-state algorithm.
  • 17.4 STABILIZING GRAPH COLORING
  • Lemma 17.8
  • 17.5 STABILIZING SPANNING TREE PROTOCOL
  • FIGURE 17.5 (a) A spanning tree with correct values of L and P for each node. (b) P(2) is corrupted and a cycle is created. The edge (2, 5) is not well formed.
  • 17.6 STABILIZING MAXIMAL MATCHING
  • 17.7 DISTRIBUTED RESET
  • FIGURE 17.6 A steady state of the wave layer: The reset wave with seq = 3 has made partial progress. Node 4 now initiates a fresh request for reset.
  • 17.8 STABILIZING CLOCK PHASE SYNCHRONIZATION
  • FIGURE 17.7 A sample step of the [ADG91] clock synchronization protocol: (a) initial configuration—an arrow from j to k indicates that clock j is now comparing its value with clock k. (b) The updated clock values after one step.
  • 17.9 CONCLUDING REMARKS
  • 17.10 BIBLIOGRAPHIC NOTES
  • EXERCISES
  • FIGURE 17.8 A network of four processes: Each process has two states 0, 1.
  • V Real-World Issues
  • CHAPTER 18 Distributed Discrete-Event Simulation
  • 18.1 INTRODUCTION
  • 18.1.1 Event-Driven Simulation
  • FIGURE 18.1 Two bank tellers in a bank.
  • TABLE 18.1 Partial List of Events in the Bank
  • 18.2 DISTRIBUTED SIMULATION
  • 18.2.1 Challenges
  • FIGURE 18.2 A network of four LPs simulating the events in the bank: The message output of each LP indicates the time of the most recent event simulated by that LP.
  • FIGURE 18.3 List of events in the four LPs.
  • FIGURE 18.4 A network of LPs showing the life of a process alternating between the CPU and I/O.
  • 18.2.2 Correctness Issues
  • 18.3 CONSERVATIVE SIMULATION
  • 18.4 OPTIMISTIC SIMULATION AND TIME WARP
  • 18.4.1 Global Virtual Time
  • Theorem 18.1
  • FIGURE 18.5 The progress of local and global virtual clocks in optimistic simulation.
  • 18.5 CONCLUDING REMARKS
  • 18.6 BIBLIOGRAPHIC NOTES
  • EXERCISES
  • FIGURE 18.6 An implementation of XOR using NAND gates: For each gate, the delay is shown in nanoseconds.
  • FIGURE 18.7 An M/M/1 queuing network.
  • CHAPTER 19 Security in Distributed Systems
  • 19.1 INTRODUCTION
  • 19.2 SECURITY MECHANISMS
  • 19.3 COMMON SECURITY ATTACKS
  • 19.3.1 Eavesdropping
  • 19.3.2 Denial of Service
  • 19.3.3 Data Tampering
  • 19.3.4 Masquerading
  • 19.3.5 Man in the Middle
  • 19.3.6 Malicious Software
  • 19.3.6.1 Virus
  • 19.3.6.2 Worms
  • 19.3.6.3 Spyware
  • 19.4 ENCRYPTION
  • FIGURE 19.1 A scheme for secure communication.
  • 19.5 SECRET KEY CRYPTOSYSTEM
  • 19.5.1 Confusion and Diffusion
  • Table 19.1 Lookup Table of a (6 × 4) S-Box
  • 19.5.2 DES
  • FIGURE 19.2 One stage of transformation in DES.
  • 19.5.3 3DES
  • 19.5.4 AES
  • 19.5.5 One-Time Pad
  • 19.5.6 Stream Ciphers
  • FIGURE 19.3 The encryption mechanism in RC4 stream cipher.
  • 19.5.7 Steganography
  • 19.6 PUBLIC KEY CRYPTOSYSTEMS
  • 19.6.1 Rivest–Shamir–Adleman Cryptosystem
  • 19.6.2 ElGamal Cryptosystem
  • 19.7 DIGITAL SIGNATURES
  • 19.7.1 Signatures in Secret-Key Cryptosystems
  • 19.7.2 Signatures in Public-Key Cryptosystems
  • 19.8 HASHING ALGORITHMS
  • 19.8.1 Birthday Attack
  • 19.9 ELLIPTIC CURVE CRYPTOGRAPHY
  • FIGURE 19.4 An elliptic curve.
  • 19.10 AUTHENTICATION SERVER
  • 19.10.1 Authentication Service for Secret-Key Cryptosystems
  • 19.10.2 Authentication Server for Public-Key Systems
  • 19.11 DIGITAL CERTIFICATES
  • 19.12 CASE STUDIES
  • 19.12.1 Kerberos
  • FIGURE 19.5 The components of Kerberos.
  • 19.12.2 Pretty Good Privacy
  • 19.12.3 Secure Socket Layer
  • 19.13 VIRTUAL PRIVATE NETWORKS AND FIREWALLS
  • 19.13.1 Virtual Private Network
  • 19.13.2 Firewall
  • 19.14 SHARING A SECRET
  • 19.15 CONCLUDING REMARKS
  • 19.16 BIBLIOGRAPHIC NOTES
  • EXERCISES
  • Programming Exercises
  • CHAPTER 20 Sensor Networks
  • 20.1 VISION
  • 20.2 ARCHITECTURE OF SENSOR NODES
  • 20.2.1 MICA Mote
  • FIGURE 20.1 The architecture of a MICA mote.
  • 20.2.2 ZigBee-Enabled Sensor Nodes
  • FIGURE 20.2 The three types of topologies supported by ZigBee: (a) star, (b) mesh, and (c) cluster tree. The white circles represent RFDs, the gray circles represent FFDs, and the black circles denote network coordinators (base stations).
  • 20.2.3 TinyOS® Operating System
  • FIGURE 20.3 A component of TinyOS.
  • 20.3 CHALLENGES IN WIRELESS SENSOR NETWORKS
  • 20.3.1 Energy Conservation
  • FIGURE 20.4 (a) The radio range of a sensor node P is a disk, and (b) if Ed = K · dn and n > 2, then the path ACB is more energy efficient than the shortest path AB.
  • 20.3.2 Fault Tolerance
  • 20.3.3 Routing
  • 20.3.4 Time Synchronization
  • 20.3.5 Location Management
  • 20.3.6 Middleware Design
  • 20.3.7 Security
  • 20.4 ROUTING ALGORITHMS
  • 20.4.1 Directed Diffusion
  • FIGURE 20.5 Directed diffusion in a sensor network. The route in bold lines has links with the highest gradient and is the preferred route for data transfer from a source to the sink.
  • 20.4.2 Cluster-Based Routing
  • 20.4.2.1 LEACH
  • FIGURE 20.6 (a) Cluster-based routing in LEACH: the cluster heads are shown as dark circles. (b) Data transmission in PEGASIS.
  • 20.4.2.2 PEGASIS
  • 20.4.3 Metadata-Based Routing: SPIN
  • 20.5 TIME SYNCHRONIZATION USING REFERENCE BROADCAST
  • 20.5.1 Reference Broadcast
  • FIGURE 20.7 (a) Ref 0 broadcasts to sensors 1, 2, and 3 (b.delta1≈b.delta2≈b.delta3). (b) RBS-based time synchronization over two broadcast zones A and B: node 1 receives a message at time T1 = 100 ms after receiving the broadcast from Ref A, node 3 receives a message at time T3 = 600 ms before receiving the broadcast from Ref B, and node 2 receives the broadcast from Ref A 1000 ms before receiving the broadcast from Ref B.
  • 20.6 LOCALIZATION ALGORITHMS
  • 20.6.1 RSSI-Based Ranging
  • 20.6.2 Ranging Using Time Difference of Arrival
  • 20.6.3 Anchor-Based Ranging
  • FIGURE 20.8 The sensor k receives signals from the beacons A, B, and C.
  • 20.7 SECURITY IN SENSOR NETWORKS
  • 20.7.1 SPIN for Data Security
  • 20.7.1.1 Overview of SNEP
  • 20.7.1.2 Overview of μTESLA
  • 20.7.2 Attacks on Routing
  • FIGURE 20.9 Broadcasting in μTESLA using a chain of MAC keys.
  • 20.7.2.1 Hello Flood
  • 20.8 APPLICATIONS
  • 20.8.1 Health-Care Applications
  • 20.8.2 Environment Monitoring and Control
  • 20.8.3 Citizen Sensing
  • 20.8.4 Pursuer–Evader Game
  • FIGURE 20.10 Two stages (a) and (b) of the pursuit as the evader moves to a new location.
  • Lemma 20.1
  • Theorem 20.1
  • 20.9 CONCLUDING REMARKS
  • 20.10 BIBLIOGRAPHIC NOTES
  • EXERCISES
  • FIGURE 20.11 Seven sensor nodes placed on a 7 × 7 grid.
  • FIGURE 20.12 (a) A set of sensor nodes using TDMA to avoid MAC level interference. (b) The available time slots.
  • FIGURE 20.13 Four beacons—A, B, C, and D—used for the localization of a sensor node.
  • FIGURE 20.14 Find the best location of the base station here.
  • FIGURE 20.15 Four sensor nodes in a topology control exercise.
  • FIGURE 20.16 Identify the minimum-energy broadcast tree with the initiator as the root.
  • Programming Exercises
  • CHAPTER 21 Social and Peer-to-Peer Networks
  • 21.1 INTRODUCTION TO SOCIAL NETWORKS
  • 21.1.1 Milgram’s Experiment
  • FIGURE 21.1 (a) A friendship graph with 10 persons: the distance between A and H is 4. (b) Another friendship graph with 5 nodes and (c) the distances between the different pairs of persons for the friendship group of (b).
  • 21.2 METRICS OF SOCIAL NETWORKS
  • 21.2.1 Clustering Coefficient
  • 21.2.2 Diameter
  • 21.3 MODELING SOCIAL NETWORKS
  • 21.3.1 Erdös–Rényi Model
  • 21.3.2 Small-World Model
  • FIGURE 21.2 Watts–Strogatz construction of a sample small-world graph: (a) A regular ring lattice with n nodes, each of degree k. (b) Each node replaces an existing neighbor by a randomly chosen long-range neighbor (see the broken lines) with a very low probability p ≈ 0.1. The resulting graph has a low diameter but a high clustering coefficient, which are the characteristics of a large class of social networks.
  • 21.3.3 Power-Law Graphs
  • FIGURE 21.3 Degree distributions in (a) ER graph and (b) power-law graph.
  • FIGURE 21.4 The evolution of power-law distribution via the rich-get-richer model: the probability of the new node connecting to one of the existing nodes is proportional to the degree of that node.
  • 21.4 CENTRALITY MEASURES IN SOCIAL NETWORKS
  • 21.4.1 Degree Centrality
  • 21.4.2 Closeness Centrality
  • 21.4.3 Betweenness Centrality
  • FIGURE 21.5 Example of betweenness centrality.
  • 21.5 COMMUNITY DETECTION
  • 21.5.1 Girvan–Newman Algorithm
  • FIGURE 21.6 Illustration of a step of Girvan–Newman algorithm: the removal of the edge (4, 5) of highest betweenness splits the network into two partitions.
  • 21.6 INTRODUCTION TO PEER-TO-PEER NETWORKS
  • 21.7 FIRST-GENERATION P2P SYSTEMS
  • 21.7.1 Napster
  • 21.7.2 Gnutella
  • 21.8 SECOND-GENERATION P2P SYSTEMS
  • 21.8.1 KaZaA
  • 21.8.2 Chord
  • FIGURE 21.7 Node with key 8 queries for an object hosted by a node with key 51. No real machine maps to the keys 9 and 10 represented by blank circles.
  • 21.8.3 Content-Addressable Network
  • FIGURE 21.8 Three stages in CAN: The square boxes are objects. In (a), machine B joins the network and takes over three objects from machine A. In (b), machine C joins the network. In (c), two more nodes, E and D, join the network. The corresponding interconnection networks are shown in the bottom row.
  • 21.8.4 Pastry
  • FIGURE 21.9 (a) The routing table of a hypothetical Pastry node. X denotes a wildcard entry. (b) An example of routing in Pastry.
  • 21.9 KOORDE AND DE BRUIJN GRAPH
  • Theorem 21.1
  • FIGURE 21.10 (a) A De Bruijn graph with n = 8 and k = 2. (b) A route from 011 to 100.
  • 21.10 SKIP GRAPH
  • FIGURE 21.11 (a) A skip list. (b) A skip graph—only three levels are shown. Under each node in level 0, its membership vector is shown.
  • 21.11 REPLICATION MANAGEMENT
  • 21.12 BITTORRENT AND FREE RIDING
  • FIGURE 21.12 The states of the peers in BitTorrent: the shaded boxes denote the pieces that have been downloaded. (a) Each of the four peers has acquired a few of the eight pieces of the file. (b) The states of the peers after peer 2 downloads piece 4 from peer 3 and peer 4 downloads piece 6 from peer 1.
  • 21.13 CENSORSHIP RESISTANCE, ANONYMITY
  • 21.14 CONCLUDING REMARKS
  • 21.15 BIBLIOGRAPHIC NOTES
  • EXERCISES
  • FIGURE 21.13 Four possible degree distributions of nodes.
  • FIGURE 21.14 Two networks.
  • Back Matter
  • References
  • Index