Distributed Systems, edited by Sape Mullender

As a consequence, the material in this second edition is almost entirely new. Only one chapter, in fact, has survived unscathed from the first edition. This book, I believe, has now become a coherent treatment of an increasingly important research area. Teachers can use this book to refresh their knowledge of distributed systems.

Introduction to distributed systems

S.J. Mullender, based on a lecture by Michael D. Schroeder. (The text of this paper was taken from Distributed Systems, S.J. Mullender, ed.)

The first four decades of computer technology are each characterized by a different approach to the way computers were used. In the 1950s, programmers would reserve time on the computer and have it all to themselves while they were using it. In the 1960s, batch processing came about: people would submit their jobs, which were queued for processing.

They would be run one at a time and the owners would pick up their output later. Time-sharing became the way people used computers in the 1970s, so that users could share a computer under the illusion that they had it to themselves. The 1980s are the decade of personal computing: people have their own dedicated machine on their desks. The evolution that took place in operating systems has been made possible by a combination of economic considerations and technological developments. Batch systems could be developed because computer memories became large enough to hold an operating system as well as an application program; they were developed because they made it possible to make better use of expensive computer cycles.

Time-sharing systems were desirable because they allowed programmers to be more productive. They could be developed because computer cycles became cheaper and computers more powerful. Very large scale integration and the advent of local networks have made workstations an affordable alternative to time sharing, with the same guaranteed computer capacity at all times. Today, a processor of sufficient power to serve most needs of a single person costs less than one tenth of a processor powerful enough to serve ten.

Time sharing, in fact, is no longer always a satisfactory way to use a computer system: the arrival of bit-mapped displays with graphical interfaces demands instant visual feedback from the graphics subsystem, feedback which can only be provided by a dedicated, and thus personal, processor. The workstations of the 1990s will be even more powerful than those of today. Network interfaces will allow communication at rates matching the requirements of several channels of real-time video transmission.

A time-sharing system provides the users with a single, shared environment in which resources such as printers, storage space, software and data can be shared. To give workstation users access to such services, workstations are often connected by a network, and workstation operating-system software allows copying files and remote login over the network from one workstation to another.

Users must know the difference between local and remote objects, and they must know on which machine remote objects are held. In large systems with many workstations, this can become a serious problem. The problem of system management is an enormous one.

In the time-sharing days, the operators could back up the file system every night; the system administrators could allocate the available processor cycles where they were most needed, and the systems programmers could simply install new or improved software. In the workstation environment, however, each user must be an operator, a system administrator and a systems programmer. In a building with a hundred autonomous workstations, the operators can no longer go round making backups, and the systems programmers can no longer install new software by simply putting it on the file system.

Some solutions to these problems have been attempted, but none of the current solutions are as satisfactory as the shared environment of a time-sharing system. For example, a common and popular approach consists of the addition of network-copy commands with which files can be transferred from one workstation to another, or — as a slightly better alternative — of a network file system, which allows some real sharing of files.

In all but a very few solutions, however, the user remains aware of the difference between local and remote operations. For such environments, a distributed operating system is required.

The 1990s will be the decade of distributed systems. In distributed systems, the user makes no distinction between local and remote operations. Programs do not necessarily execute on the workstation where the command to run them was given. There is one file system, shared by all the users.

Peripherals can be shared. Processors can be allocated dynamically where the resource is needed most. A distributed system is a system with many processing elements and many storage devices, connected together by a network.

Potentially, this makes a distributed system more powerful than a conventional, centralized one in two ways. First, it can be more reliable, because every function is replicated several times. When one processor fails, another can take over the work.

Each file can be stored on several disks, so a disk crash does not destroy any information. Second, a distributed system can do more work in the same amount of time, because many computations can be carried out in parallel. These two properties, fault tolerance and parallelism, give a distributed system the potential to be much more powerful than a traditional operating system.
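To make the two properties concrete, here is a minimal Python sketch, not taken from the book: the replica directories and function names are invented for the example, with local directories standing in for independent disks and operating-system processes standing in for processors.

    import concurrent.futures
    import pathlib

    REPLICAS = [pathlib.Path(f"disk{i}") for i in range(3)]  # stand-ins for three disks

    def store(name, data):
        # Fault tolerance: write the same file to every replica,
        # so the crash of one disk destroys no information.
        for disk in REPLICAS:
            disk.mkdir(exist_ok=True)
            (disk / name).write_bytes(data)

    def fetch(name):
        # Read from the first replica that still answers.
        for disk in REPLICAS:
            try:
                return (disk / name).read_bytes()
            except OSError:  # this replica has "crashed"; try the next
                continue
        raise IOError(f"all replicas of {name} lost")

    def parallel_sum(chunks):
        # Parallelism: carry out independent computations at the
        # same time, one per processor.
        with concurrent.futures.ProcessPoolExecutor() as pool:
            return sum(pool.map(sum, chunks))

    if __name__ == "__main__":
        store("greeting", b"hello")
        assert fetch("greeting") == b"hello"
        print(parallel_sum([range(1000), range(1000, 2000)]))

What the sketch leaves out is the hard part: keeping the replicas consistent when updates and failures overlap.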

In distributed systems projects around the world, researchers attempt to build systems that are fault tolerant and exploit parallelism.

How does one recognize a distributed system? Definitions are hard to give. One often-quoted definition states that a distributed operating system is one that looks to its users like an ordinary, centralized operating system, but runs on multiple, independent processors. The key concept here is transparency; in other words, the use of multiple processors should be invisible (transparent) to the user. According to this definition, a multiprocessor operating system, such as the versions of Unix for Encore or Sequent multiprocessors, would be a distributed operating system.

Even the dual-processor IBM and CDC number crunchers of many years ago would satisfy the definition, since one cannot tell whether a program runs on the master or the slave processor. A distributed operating system must also have no single points of failure: no single part failing should bring the whole system down.

This is not an easy condition to fulfil in practice. Just for starters, it means a distributed system should have many power supplies; if it had only one, and it failed, the whole system would stop.

If you count a fire in the computer room as a failure, it should not even be in one physical place, but it should be geographically distributed.

But one can carry failure transparency too far. It is dangerous to attempt an exact definition of a distributed system. Instead, Schroeder gave a list of symptoms of a distributed system. If your system has all of the symptoms listed below, it is probably a distributed system.

The symptoms are multiple processing elements; interconnection hardware; independent failure of the processing elements; and shared state that survives the failure of individual elements. Because each processing element must be able to run on its own, each processing element, or node, must contain at least a CPU and memory. Likewise, a distributed system cannot be fault tolerant if all nodes always fail simultaneously.

In practice, this implies that the interconnections are unreliable as well. When a node fails, it is likely that messages will be lost. To see more clearly what constitutes a distributed system, we shall look at some examples of systems.

Multiprocessor computer with shared memory

A shared-memory multiprocessor has several of the characteristics of a distributed system. It has multiple processing elements, and an interconnect via shared memory, interprocessor interrupt mechanisms and a memory bus.

The communication between processing elements is reliable, but this does not in itself mean that a multiprocessor cannot be considered as a distributed system. What disqualifies multiprocessors is that there is no independent failure: when one processor crashes, the whole system stops working.

However, it may well be that manufacturers, inspired by distributed systems research, will design multiprocessors that are capable of coping with partial failure; to my knowledge, only Tandem currently manufactures such machines.

Ethernet with packet-filtering bridges

A bridge is a processor with local memory that can send and receive packets on two Ethernet segments. The bridges are interconnected via these segments, and they share the state which is necessary for routing packets over the internet formed by the bridges and the cable segments.
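The forwarding decision such a bridge makes can be sketched in a few lines of Python. This is a toy model, not the algorithm of any particular product: addresses are plain strings, there are exactly two ports, and table entries never age out.

    class LearningBridge:
        def __init__(self):
            self.table = {}  # source address -> port it was last seen on

        def handle(self, src, dst, in_port):
            # Learn where the sender lives, then decide where to send the frame.
            self.table[src] = in_port
            out = self.table.get(dst)
            if out is None:                  # unknown destination: flood
                return [p for p in (0, 1) if p != in_port]
            if out == in_port:               # destination on the same segment: filter
                return []
            return [out]                     # known destination: forward

    bridge = LearningBridge()
    print(bridge.handle("A", "B", 0))  # [1]: B is unknown, so the frame is flooded
    print(bridge.handle("B", "A", 1))  # [0]: A was learned on port 0
    print(bridge.handle("A", "B", 0))  # [1]: B is now known on port 1

The shared state the text mentions is precisely such routing tables, kept consistent among the bridges.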

When a bridge fails, or when one is added to or removed from the network, the other bridges detect this and modify their routing tables to take the changed circumstances into account. Therefore, an Ethernet with packet-filtering bridges can be viewed as a distributed system.

Diskless workstations with NFS file servers

Each workstation and file server has a processor and memory, and a network interconnects the machines.

Workstations and servers fail independently: when a workstation crashes, the other workstations and the file servers continue to work. When a file server crashes, its client workstations do not crash, although client processes may hang until the server comes back up. But there is no shared state: when a server crashes, the information in it is inaccessible until the server comes back up; and when a client crashes, all of its internal state is lost. A network of diskless workstations using NFS file servers, therefore, is not a distributed system.
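The reason a client merely hangs, rather than fails, is that it keeps retransmitting its request until the server answers. A hedged sketch of that loop, with send_read_request standing in for the real RPC layer (it is not an actual NFS interface):

    import socket
    import time

    def read_block(send_read_request, handle, offset, timeout=1.0):
        # Retransmit until the server answers; the caller blocks
        # ("hangs") for as long as the server stays down.
        while True:
            try:
                return send_read_request(handle, offset, timeout)
            except socket.timeout:
                time.sleep(timeout)  # server down or unreachable: try again

Because a read of a given block is idempotent, repeating it after a server crash is harmless; this is exactly what the stateless NFS server design relies on.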

People are distributed, information is distributed

Distributed systems often evolve from networks of workstations. The owners of the workstations connect their systems together because of a desire to communicate and to share information and resources.

Information generated in one place is often needed in another. The costs of both processors and memories are generally going down. Each year, the same money buys a more powerful workstation. That is, in real terms, computers are getting cheaper and cheaper. The cost of communication depends on the bandwidth of the communication channel and the length of the channel.

Bandwidth increases, but not beyond the limits set by the cables and interfaces used. Wide area network cables have to be used for decades, because exchanging them is extremely expensive. Communication costs, therefore, are going down much less rapidly than computer costs.

As computers become more powerful, demands on the man-machine bandwidth go up. Five or ten years ago, most computer users had a terminal on their desk, capable of displaying 24 lines of text of 80 characters each. The communication speed between computer and terminal was measured in characters per second. Today, we consider a bit-mapped display as normal, and even a colour display is hardly a luxury any more.

The communication speed between computer and screen has gone up a few orders of magnitude, especially in graphical applications. Soon, voice and animation will be used on workstations, increasing the man-machine bandwidth even more. Man-machine interfaces are also becoming more interactive.
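A back-of-the-envelope calculation makes the "few orders of magnitude" concrete; the display parameters below are illustrative assumptions, not figures from the text.

    # Bytes needed to repaint one full screen.
    terminal = 24 * 80                # one byte per character
    mono_bitmap = 1024 * 768 // 8     # one bit per pixel
    colour_bitmap = 1024 * 768        # eight bits per pixel

    print(mono_bitmap // terminal)    # ~51 times a terminal screen
    print(colour_bitmap // terminal)  # ~409 times a terminal screen

Combined with much more frequent screen updates, the required computer-to-screen bandwidth is indeed several orders of magnitude higher than that of a text terminal.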

Users want instant visual or audible feedback from their user interface, and the latency caused by distances of more than a few kilometres of network is often too high already. These effects make distributed systems not only economic, but necessary.

Modularity

In a distributed system, interfaces between parts of the system have to be much more carefully designed than in a centralized system. As a consequence, distributed systems must be built in a much more modular fashion than centralized systems.

One of the things that one typically does in a distributed system is to run important services on their own machines. The interfaces between modules are usually remote procedure call interfaces, which automatically impose a certain standard for inter-module interfaces.
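As an illustration of such an interface, here is a minimal remote procedure call example using Python's standard xmlrpc modules; the name service and its lookup function are invented for the example, and the book does not prescribe any particular RPC system.

    # server.py: a toy "name service" running on its own machine
    from xmlrpc.server import SimpleXMLRPCServer

    def lookup(name):
        # Map a service name to the host that provides it.
        return {"printer": "host-17"}.get(name, "unknown")

    server = SimpleXMLRPCServer(("localhost", 8000))
    server.register_function(lookup, "lookup")
    server.serve_forever()

    # client.py: the remote call reads like a local one
    import xmlrpc.client

    proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
    print(proxy.lookup("printer"))

The procedure signature is the entire contract between the two machines, which is what imposes the modular discipline described above.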

This modular structure also allows incremental growth: to increase the storage or processing capacity of a distributed system, one can add file servers or processors one at a time.

Availability

Since distributed systems replicate data and have built-in redundancy in all resources that can fail, they have the potential to be available even when arbitrary single-point failures occur. Ideally, distributed systems have no centralized components, so that no single failure can take the whole system down, and no restriction is placed on the maximum size to which the system can grow.

Reliability

Availability is but one aspect of reliability.
