Client–server architecture


The client–server model, or client–server architecture, is an approach to computer network programming in which computers in a network assume one of two roles: The server selectively shares its resources, and the client initiates contact with a server in order to use those resources.[1]

The client–server model is prevalent in computer networks. Email, network printing, and the World Wide Web all apply the client–server model.

How clients and servers communicate

Clients and servers exchange messages in a request-response messaging pattern: The client sends a request, and the server returns a response. This exchange of messages is an example of inter-process communication. To communicate, the computers must have a common language, and they must follow rules so that both the client and the server know what to expect. The language and rules of communication are defined in a communications protocol. All client–server protocols operate in the application layer. The application-layer protocol defines the basic patterns of the dialogue. To formalize the data exchange even further, the server may implement an API (such as a web service).[2] The API is an abstraction layer for such resources as databases and custom software. By restricting communication to a specific content format, it facilitates parsing. By abstracting access, it facilitates cross-platform data exchange.[3]
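A minimal sketch of this request-response pattern in Python, using TCP sockets on the local machine (the port number and the one-line "protocol" are invented for illustration): the client initiates contact, and the server returns one response per request.

```python
# A minimal request-response exchange over TCP (port and "protocol" invented).
import socket
import threading

HOST, PORT = "127.0.0.1", 8000  # hypothetical endpoint for this sketch

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, PORT))
srv.listen()  # the server waits; it never initiates contact

def serve_once():
    conn, _ = srv.accept()                # a client has initiated contact
    with conn:
        request = conn.recv(1024)         # the request
        conn.sendall(b"OK: " + request)   # the response

threading.Thread(target=serve_once, daemon=True).start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))             # the client initiates contact
    cli.sendall(b"GET greeting")          # request, in a toy dialogue pattern
    print(cli.recv(1024))                 # b'OK: GET greeting'

srv.close()
```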

A server may receive requests from many different clients in a very short period of time. Because the computer can perform a limited number of tasks at any moment, it relies on a scheduling system to prioritize incoming requests from clients in order to accommodate them all in turn. To prevent abuse and maximize uptime, the server's software limits how a client can use the server's resources. Even so, a server is not immune from abuse. A denial-of-service attack exploits a server's obligation to process requests by bombarding it with requests incessantly. This inhibits the server's ability to respond to legitimate requests.
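No single limiting scheme is prescribed; as one illustration, the sketch below implements a simple sliding-window rate limit that caps how many requests each client may make per second (the scheme, limit, window, and client identifiers are all hypothetical).

```python
# Illustrative per-client rate limit (scheme, limit, and window are invented).
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow each client at most `limit` requests per `window` seconds."""

    def __init__(self, limit=5, window=1.0):
        self.limit, self.window = limit, window
        self.history = defaultdict(deque)  # client id -> recent request times

    def allow(self, client_id):
        now = time.monotonic()
        recent = self.history[client_id]
        while recent and now - recent[0] > self.window:
            recent.popleft()               # forget requests outside the window
        if len(recent) < self.limit:
            recent.append(now)
            return True                    # serve this request
        return False                       # refuse: client exceeded its share

limiter = RateLimiter(limit=3, window=1.0)
print([limiter.allow("client-a") for _ in range(5)])
# [True, True, True, False, False]
```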

What does a server serve?

Servers are classified by the services they provide. A web server serves web pages; a file server serves computer files. However, by requesting a web page or a file, a client implicitly makes use of the server's resources, like memory, CPU cycles, and software. A resource is virtually any of the computer's software and electronic components, from programs and data to processors and storage devices. Collectively, shared resources on a server constitute a service.

Because the service is an abstraction of computer resources, the client does not have to be concerned with how the server formulates a response. The client only has to understand the response—which it can do because the client and server use the same communications protocols. Communications protocols are the languages computers use to communicate over a network. The data transmitted in this way is divided into units called network packets. In addition to the data itself (the payload), each packet is encoded with information essential for transmission, such as the source and destination. Because network packets don't exceed a maximum size, the payload may either be the entire message or just a part of it.
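As an illustration of this chunking, the sketch below splits a message into toy packets, each carrying a source, a destination, and a fragment of the payload (the field layout and the 16-byte maximum are invented; real formats such as IP define these precisely).

```python
# Splitting a message into toy packets (field layout and size limit invented).
from dataclasses import dataclass

MAX_PAYLOAD = 16  # toy maximum; real networks allow far larger packets

@dataclass
class Packet:
    source: str        # where the packet came from
    destination: str   # where it is going
    seq: int           # this fragment's position within the message
    payload: bytes     # the data itself, or a part of it

def packetize(message: bytes, src: str, dst: str) -> list:
    return [Packet(src, dst, i, message[off:off + MAX_PAYLOAD])
            for i, off in enumerate(range(0, len(message), MAX_PAYLOAD))]

packets = packetize(b"a message longer than one packet", "client", "server")
reassembled = b"".join(p.payload for p in sorted(packets, key=lambda p: p.seq))
assert reassembled == b"a message longer than one packet"
```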

The roles of client and server

Whether a computer is a client, a server, or both, it can serve multiple functions. For example, a single computer can run web server and file server software at the same time to serve different data to clients making different kinds of requests. Client software can also communicate with server software within the same computer.[4] Communication between servers, such as to synchronize data, is sometimes called inter-server or server-to-server communication.
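For illustration, the following sketch runs two toy services in one process, each listening on its own port, so client software on the same computer can reach either one over the loopback interface (the ports and message formats are invented).

```python
# One host, two services: a sketch with invented ports and message formats.
import socket
import socketserver
import threading

class WebHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # stands in for web server software
        self.request.sendall(b"web: " + self.request.recv(1024))

class FileHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # stands in for file server software
        self.request.sendall(b"file: " + self.request.recv(1024))

web = socketserver.ThreadingTCPServer(("127.0.0.1", 8080), WebHandler)
files = socketserver.ThreadingTCPServer(("127.0.0.1", 9090), FileHandler)
for srv in (web, files):
    threading.Thread(target=srv.serve_forever, daemon=True).start()

# Client software on the same computer reaches either service via loopback.
with socket.create_connection(("127.0.0.1", 8080)) as c:
    c.sendall(b"GET /index.html")
    print(c.recv(1024))  # b'web: GET /index.html'
```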

Real-world example

When a bank customer accesses online banking services with a web browser (the client), she initiates a request to the bank's web server. Since the customer's login credentials are stored in a database, the web server runs a program to access a database server. This database server may, in turn, fetch financial transaction records from another database server. An application server interprets the returned data by following the bank's business logic, and provides the output to the web server. Finally, the web server sends the result to the web browser, which interprets the data.

Each server listed above acts as a client when it submits data in a request to another server for processing. In each step of this sequence of client–server message exchanges, a computer processes a request and returns data. This is the request-response messaging pattern. When all the requests are met, the sequence is complete and the web browser presents the data to the customer.

This example illustrates a design pattern applicable to the client–server model: Separation of concerns.
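A hypothetical sketch of those tiers, with each tier as a separate Python function so the concerns stay separated (all names and data are invented; in a real deployment each tier would run as its own process or machine):

```python
# Each tier is a server to the tier above it and a client of the tier below.
def database_server(account_id):
    # stands in for a database query returning transaction records
    records = {"12345": ["deposit +100.00", "withdrawal -40.00"]}
    return records.get(account_id, [])

def application_server(account_id):
    # applies the bank's business logic; acts as a client of the database
    transactions = database_server(account_id)
    return {"account": account_id,
            "transaction_count": len(transactions),
            "transactions": transactions}

def web_server(request_path):
    # handles the browser's request; acts as a client of the application tier
    account_id = request_path.rsplit("/", 1)[-1]
    return application_server(account_id)

print(web_server("/accounts/12345"))  # what the browser would render
```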

Early history

While formulating the client–server model in the 1960s and 1970s, computer scientists building ARPANET (at the Stanford Research Institute) used the terms server-host (or serving host) and user-host (or using-host).[5][6]

One context in which researchers used these terms was in the design of a computer network programming language called Decode-Encode Language (DEL).[5] The purpose of this language was to accept commands from one computer (the user-host), which would return status reports to the user as it encoded the commands in network packets. Another DEL-capable computer, the server-host, received the packets, decoded them, and returned formatted data to the user-host. A DEL program on the user-host received the results to present to the user. This is a client–server transaction. Development of DEL was just beginning in 1969, the year that the United States Department of Defense established ARPANET (the predecessor of the Internet).

Client-host and server-host

Client-host and server-host have subtly different meanings than client and server. A host is any computer connected to a network. Whereas the words server and client may refer either to a computer or to a computer program, server-host and user-host always refer to computers. The host is a versatile, multifunction computer; clients and servers are just programs that run on a host. In the client–server model, a server is more likely to be devoted to the task of serving.

An early use of the word client occurs in "Separating Data from Function in a Distributed File System", a 1978 paper by Xerox PARC computer scientists Howard Sturgis, James Mitchell, and Jay Israel. The authors are careful to define the term for readers, and explain that they use it to distinguish between the user and the user's network node (the client).[7] (By 1992, the word server had entered into general parlance.)[8][9]

The concepts of server-host and user-host predate the Internet protocol suite. In the early 1980s, there were several variations of the layered network model, which is now standardized as the OSI model. One of these, the ARPANET Reference Model,[1] is a host-oriented model that combines the network layer and the transport layer into a host-to-host communication layer.

Centralized computing

Centralized computing applies the client–server model to offload most computation to a single computer in a network, such as a mainframe or minicomputer. The more computation is offloaded to the central computer, the simpler the client hosts can be.[10]

A thin client has few resources other than input devices and output devices.[11] It relies heavily on network resources (servers and infrastructure) for computation and storage. A diskless node loads even its operating system from the network, and a computer terminal has no operating system at all; it is only an input/output interface to the server. In contrast, a fat client, such as a personal computer, has many resources, and does not rely on a server for essential functions.

As microcomputers decreased in price and increased in power from the 1980s to the late 1990s, many organizations transitioned computation from centralized servers to fat clients.[12] This afforded greater, more individualized dominion over computer resources, but complicated information technology management.[10][11][13] During the 2000s, web applications matured enough to rival application software developed for a specific microarchitecture. This maturation, inexpensive mass storage, and the advent of service-oriented architecture were among the factors that gave rise to the cloud computing trend of the 2010s.[14]

Comparison with peer-to-peer architecture

Main article: Peer-to-peer

In modern applications of the client–server model, the server is often designed to be a centralized system that serves many clients. The more simultaneous clients a server has, the more resources it needs. In a peer-to-peer network, two or more computers (called peers) pool their resources and communicate in a decentralized system. Peers are coequal nodes in a non-hierarchical network. Collectively, lesser-powered computers can share the load and provide redundancy.

Since most peers are personal information appliances, their shared resources may not be available consistently. Although an individual node may have variable uptime, the resource remains available as long as one or more other nodes offer it. As the availability of nodes changes, an application-layer protocol reroutes requests.
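The following toy sketch illustrates such rerouting: a resource remains reachable so long as at least one peer in a (hypothetical) registry still offers it.

```python
# Toy rerouting sketch: a resource stays available while any peer offers it.
import random

peers_offering = {"fileX": {"peer-a", "peer-b", "peer-c"}}  # invented registry

def fetch(resource):
    available = peers_offering.get(resource, set())
    if not available:
        raise LookupError(f"no peer currently offers {resource!r}")
    return random.choice(sorted(available))  # route to any live peer

peers_offering["fileX"].discard("peer-a")  # peer-a drops off the network
print(fetch("fileX"))                      # still served by peer-b or peer-c
```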

The client–server and peer-to-peer models alike are used in distributed computing applications.
