INTRODUCTION TO DISTRIBUTED SYSTEMS


CENTRALIZED SYSTEM:
Definition:
                “A type of system where all users connect to a central server, which acts as the agent for all communications.”
The centralized device is normally known as the server, and it is responsible for managing all processing and control.
>   Only the administrator has access to the server; all the other clients on the network request services from the server.
>   The administrator is in charge of assigning rights to the different users/clients that request these services.
>   A centralized system runs on a single computer and does not interact with other computer systems.
>   In contrast, a distributed system is, in short, “physically separate computers working together.”


Examples of centralized systems:

Simple LAN network.

Cloud system.

Distributed System:

                                 “A collection of independent computers that appears to its users as a single coherent system”

A distributed system consists of a collection of autonomous computers, connected through a network and distribution middleware, which enables computers to coordinate their activities and to share the resources of the system.

A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs.

A distributed system may have a common goal, such as solving a large computational problem.

Alternatively, each computer may have its own user with individual needs, and the purpose of the distributed system is to coordinate the use of shared resources or provide communication services to the users.



  
Each computer has its own local memory, and information can be exchanged only by passing messages from one node to another over the available communication links.
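To make message passing concrete, here is a minimal sketch (an illustration added here, not part of the lab programs) in which two processes exchange a message through a pipe; in a real distributed system, the pipe would be replaced by a communication link such as a network socket between separate machines.

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main()
{
        int fd[2];
        char buf[32];

        pipe(fd);               /* fd[0] = read end, fd[1] = write end */
        if (fork() == 0)
        {
                /* child: send a message and exit */
                const char *msg = "hello from child";
                write(fd[1], msg, strlen(msg) + 1);
                return 0;
        }
        /* parent: receive and print the message */
        read(fd[0], buf, sizeof(buf));
        printf("parent received: %s\n", buf);
        return 0;
}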
  
Advantages of Distributed Computing
Reliability
An important advantage of a distributed computing system is reliability: it is more reliable than a single system. If one machine in the system crashes, the remaining computers are unaffected and the system as a whole can survive.
Incremental Growth
In distributed computing, computing power can be added in small increments, i.e. new machines can be added incrementally as requirements for processing power grow.
Sharing of Resources
Many applications, such as banking, reservation systems and computer-supported cooperative work, require shared data. Because data and resources are shared in a distributed system, it is well suited to such applications.
Flexibility
Because the system is very flexible, it is easy to install, implement and debug new services. Each service is equally accessible to every client, whether remote or local.
Speed
A distributed computing system can have more total computing power than a mainframe; this speed sets it apart from other kinds of systems.
Open system
Being an open system, it can communicate with other systems at any time. This gives it an advantage over self-contained and closed systems.
Performance
Yet another advantage of a distributed computing system: the collection of processors in the system can provide higher performance than a centralized computer.
Disadvantages of Distributed Computing
Troubleshooting
Troubleshooting and diagnosing problems are the biggest drawbacks of a distributed computing system, since analysis may require connecting to remote nodes or inspecting communication between nodes.


Software
Limited software support is a main disadvantage of distributed computing. Because a distributed system is built from many software components, there are more opportunities for errors to occur.
Networking
The underlying network in a distributed computing system can cause several problems, such as transmission failures, overloading, and loss of messages. The problems created by the network infrastructure are therefore a disadvantage of distributed computing.
Security
The ease of distributed access in a distributed computing system increases the security risk, and the sharing of data creates problems of data security.


EXAMPLES OF DISTRIBUTED SYSTEMS:

Ø  INTERNET: It is an interconnected collection of computer networks of many different types, e.g. WAN, MAN and LAN.

Ø  INTRANET: An intranet is a portion of the internet that is separately administered and has a boundary that can be configured to enforce local security policies. It is composed of several local area networks linked by backbone connections. The intranet is connected to the internet via a router, which allows users inside the intranet to access services elsewhere, such as the web or email.

Ø  MOBILE FACILITY:
It is the integration of small, portable computing devices into a distributed system.
Examples of mobile computing devices are:
a) Laptop computers
b) Handheld devices such as PDAs (Personal Digital Assistants), mobile phones, pagers, video cameras and digital cameras.

Ø  NETWORK OF WORKSTATIONS:
This is a computer network that connects several computer workstations together with special software forming a cluster.

Ø  AUTOMATIC BANKING (teller machine) SYSTEM:
In a teller machine system, data is shared by servers on different machines of the banks: all the data of a bank's server is stored and also shared across the network, which is a distributed system concept.



Distributed Systems RKU Lab work


TUTORIAL 2 Parallel Processing


Q=1. Program to create n child processes from a parent process and print the id of each of them.



#include <stdio.h>      /* printf */
#include <unistd.h>     /* fork   */

/* Fork n-1 child processes; return a logical id:
 * 0 in the parent, 1..n-1 in each child. */
int process_Fork(int n)
{
        int i;
        for (i = 1; i < n; i++)
        {
                if (fork() == 0)
                {
                        /* fork() returns 0 in the child */
                        return i;
                }
        }
        return 0;       /* parent */
}

int main()
{
        int id;
        id = process_Fork(3);
        /* the three processes print in no fixed order */
        printf("Hello = %d\n", id);
        return 0;
}

OUTPUT:-

[08ce55@linux ~]$ ./a.out
 Hello = 1
 Hello = 2
 Hello = 0

Q=2. Program to join the processes. N child processes are forked from the parent process. The parent waits in process_Join() until the child processes finish their task and terminate.


#include <stdio.h>      /* printf */
#include <stdlib.h>     /* exit   */
#include <unistd.h>     /* fork   */
#include <sys/wait.h>   /* wait   */

/* Fork n-1 child processes; return a logical id:
 * 0 in the parent, 1..n-1 in each child. */
int process_Fork(int n)
{
        int i;
        for (i = 1; i < n; i++)
        {
                if (fork() == 0)
                {
                        return i;       /* child */
                }
        }
        return 0;                       /* parent */
}

/* Parent (id 0) waits for the n-1 children;
 * each child simply exits. */
void process_Join(int id, int n)
{
        int i;
        if (id == 0)
        {
                for (i = 1; i < n; i++)
                {
                        wait(NULL);     /* reap one child */
                }
        }
        else
        {
                exit(0);
        }
}

int main()
{
        int id;
        id = process_Fork(3);
        printf("Hello = %d\n", id);
        process_Join(id, 3);
        return 0;
}

OUTPUT:-

[08ce55@linux ~]$ ./a.out
 Hello = 1
 Hello = 2
 Hello = 0

 


TUTORIAL 1 Parallel Processing



Q.1. Write a program for creating a child process using the fork() system call.

PROGRAM:-


#include <stdio.h>      /* printf */
#include <unistd.h>     /* fork   */

int main()
{
        int id;
        /* fork() returns 0 in the child and the
         * child's pid in the parent */
        id = fork();
        printf("ID:%d\n", id);
        return 0;
}

OUTPUT:-

[08ce55@linux ~]$ ./a.out
ID:0
ID:6057

Q.2. Explain the concept of parallel programming.

A parallel programming model is a concept that enables the expression of parallel programs which can be compiled and executed. The value of a programming model is usually judged on its generality: how well a range of different problems can be expressed and how well they execute on a range of different architectures. The implementation of a programming model can take several forms such as libraries invoked from traditional sequential languages, language extensions, or complete new execution models.
Consensus on a particular programming model is important, as it enables software expressed within it to be portable between different architectures. The von Neumann model has facilitated this for sequential architectures, as it provides an efficient bridge between hardware and software: high-level languages can be efficiently compiled to it, and it can be efficiently implemented in hardware.
Main classifications and paradigms
Classifications of parallel programming models can be divided broadly into two areas: process interaction and problem decomposition.
Process interaction
Process interaction relates to the mechanisms by which parallel processes are able to communicate with each other. The most common forms of interaction are shared memory and message passing, but it can also be implicit.
Shared memory
In a shared memory model, parallel tasks share a global address space which they read and write to asynchronously. This requires protection mechanisms such as locks, semaphores and monitors to control concurrent access. Shared memory can be emulated on distributed-memory systems, but non-uniform memory access (NUMA) times can come into play.
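As a minimal sketch of the shared memory model (using POSIX threads, which is an assumption for illustration and not prescribed by the text above), two threads increment one shared counter, and a mutex provides the protection mechanism just mentioned:

#include <stdio.h>
#include <pthread.h>

int counter = 0;                /* data shared by all threads */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
        int i;
        for (i = 0; i < 100000; i++)
        {
                pthread_mutex_lock(&lock);      /* enter critical section */
                counter++;
                pthread_mutex_unlock(&lock);    /* leave critical section */
        }
        return NULL;
}

int main()
{
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", counter);      /* 200000 with the lock */
        return 0;
}

Compile with gcc -pthread. Without the lock, the two threads' updates could interleave and the final count would be unpredictable.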
Message passing
In a message passing model, parallel tasks exchange data through passing messages to one another. These communications can be asynchronous or synchronous. The Communicating Sequential Processes (CSP) formalisation of message-passing employed communication channels to 'connect' processes, and led to a number of important languages such as Joyce, occam and Erlang.
Implicit
In an implicit model, no process interaction is visible to the programmer, instead the compiler and/or runtime is responsible for performing it. This is most common with domain-specific languages where the concurrency within a problem can be more prescribed.
Problem decomposition
Any parallel program is composed of simultaneously executing processes; problem decomposition relates to the way in which these processes are formulated. This classification may also be referred to as algorithmic skeletons or parallel programming paradigms.
Task parallelism
A task-parallel model focuses on processes, or threads of execution. These processes will often be behaviourally distinct, which emphasises the need for communication. Task parallelism is a natural way to express message-passing communication. It is usually classified as MIMD/MPMD or MISD.
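As a small sketch of task parallelism (again assuming POSIX threads for illustration), two behaviourally distinct tasks run concurrently:

#include <stdio.h>
#include <pthread.h>

void *compute_task(void *arg)
{
        long i, sum = 0;
        for (i = 1; i <= 1000; i++)
                sum += i;
        printf("compute task: sum = %ld\n", sum);
        return NULL;
}

void *log_task(void *arg)
{
        printf("log task: system running\n");
        return NULL;
}

int main()
{
        pthread_t t1, t2;
        pthread_create(&t1, NULL, compute_task, NULL);  /* task A */
        pthread_create(&t2, NULL, log_task, NULL);      /* task B */
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
}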
Data parallelism
A data-parallel model focuses on performing operations on a data set which is usually regularly structured in an array. A set of tasks will operate on this data, but independently on separate partitions. In a shared memory system the data will be accessible to all, but in a distributed-memory system it will be divided between memories and worked on locally. Data parallelism is usually classified as SIMD/SPMD.
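For comparison, a data-parallel sketch under the same assumptions: each thread performs the same summing operation, but on its own partition of a single array, and the partial results are combined at the end.

#include <stdio.h>
#include <pthread.h>

#define N 8
#define NTHREADS 2

int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};
long partial[NTHREADS];                 /* one slot per thread */

void *sum_partition(void *arg)
{
        long id = (long)arg;
        int i, chunk = N / NTHREADS;
        for (i = id * chunk; i < (id + 1) * chunk; i++)
                partial[id] += data[i]; /* same operation, own partition */
        return NULL;
}

int main()
{
        pthread_t t[NTHREADS];
        long i, total = 0;
        for (i = 0; i < NTHREADS; i++)
                pthread_create(&t[i], NULL, sum_partition, (void *)i);
        for (i = 0; i < NTHREADS; i++)
        {
                pthread_join(t[i], NULL);
                total += partial[i];
        }
        printf("total = %ld\n", total); /* 36 */
        return 0;
}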

