Market analysts have noticed a trend toward developing multitier applications that are distributed over Internet-standard networks, and predict rapid growth in these distributed systems in the coming years. Some predict that, by 2005, the familiar architecture of client/server applications will be replaced by super-suites of interconnected components, operating in frameworks of widely available distributed systems. In other words, applications will be assembled from reusable building blocks, using a variety of cooperating subsystems.
Before delving into the implementation details of building Web applications, it might be helpful to take a brief look at the architecture of the Web from a historical perspective, beginning with the traditional client/server architecture.
Client/Server Revisited: Cooperating and communicating applications have typically been categorized as either client or server applications. The client application requests services using Microsoft Distributed Component Object Model (DCOM) or remote procedure calls (RPCs), while the server application responds to client requests. Traditional client/server interactions, shown in the figure below, are often data-centric and combine most (if not all) of the processing (or business) logic and the user interface within the client application. The server's task is simply to process requests for data storage and retrieval.
Client/server (two-tier) applications have usually performed many of the functions of stand-alone systems; that is, they present a user interface, gather and process user input, perform the requested processing, and report the status of the request. Because the server provides only access to the data, the client uses its local resources to process it. Out of necessity, the client application must know where the data resides and how it is laid out in the database. Once the server transmits the data, the client is responsible for formatting and displaying it to the user.
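This division of labor can be sketched in a few lines. The following is an illustrative sketch only (the names, the in-memory "database", and the order schema are all hypothetical): the server merely stores and retrieves raw rows, while the client must know the column layout and performs all processing and presentation itself.

```python
# "Server" side: a bare data store that only answers storage/retrieval
# requests -- no formatting, no business rules.
ORDERS = [
    (1, "widget", 3, 9.99),   # (id, product, quantity, unit_price)
    (2, "gadget", 1, 24.50),
]

def server_fetch(order_id):
    """Return the raw row for an order, or None if it does not exist."""
    for row in ORDERS:
        if row[0] == order_id:
            return row
    return None

# "Client" side: must know how the data is laid out (column order) and
# uses its local resources for all processing and display.
def client_display(order_id):
    row = server_fetch(order_id)
    if row is None:
        return "Order not found"
    _, product, quantity, unit_price = row
    total = quantity * unit_price          # business logic lives in the client
    return f"{quantity} x {product} @ ${unit_price:.2f} = ${total:.2f}"

print(client_display(1))
```

Note that the business rule (computing the total) and the presentation both live in the client; every client application deployed against this server would have to duplicate them.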
The primary advantage of two-tier applications over monolithic, single-tier applications is that they give multiple users access to the same data simultaneously, thereby creating a kind of interprocess communication. Updates from one computer are instantly available to all computers that have access to the server.
However, the server must trust clients to modify data appropriately; unless data integrity rules are enforced, there is no protection against errors in client logic. Furthermore, client/server connections are difficult to manage: the server is forced to open one connection per client. Finally, because much of the business logic is spread throughout a suite of client applications, changes in business processes usually lead to expensive and time-consuming alterations to source code.
Although two-tier design still continues to drive many small-scale business applications, an increasing need for faster and more reliable data access, coupled with decreasing development time lines, has persuaded system developers to seek out a new distributed application design.
Multi-Tier Design: The new system design logically divides computing tasks across the application. Viewed from a purely functional standpoint, most applications perform three main tasks: gathering user input, storing the input as data, and manipulating the data as dictated by established operational procedures. These tasks can be grouped into three or more tiers, which is why the new system design provides for three-tier, or multitier, applications. The application tiers, shown in the figure below, are:
Client Tier: The user interface or presentation layer. Through this topmost layer, the user can input data, view the results of requests, and interact with the underlying system. On the Web, the browser performs these user interface functions. In non-Web-based applications, the client tier is a stand-alone, compiled front-end application.
Middle Tier: Components that encapsulate an organization's business logic. These processing rules closely mimic everyday business tasks, and can be single-task-oriented, or part of a more elaborate series of tasks in a business workflow. In a Web application, the middle tier might consist of Microsoft Component Object Model (COM) components registered as part of a transactional application or instantiated by a script in Active Server Pages (ASP).
Third Tier: A database management system (DBMS) such as a Microsoft SQL Server database, an unstructured data store such as Microsoft Exchange, or a transaction-processing mechanism such as Transaction Services or Message Queuing. A single application can enlist the services of one or more of these data providers.
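The three tiers listed above can be sketched as separate layers with narrow interfaces, so that presentation, business rules, and storage can change independently. This is an illustrative sketch only; the class and method names are hypothetical, and an in-memory dictionary stands in for the third-tier data store.

```python
# Third tier: data services -- storage and retrieval only.
class OrderStore:
    def __init__(self):
        self._rows = {}
        self._next_id = 1

    def insert(self, product, quantity):
        order_id = self._next_id
        self._rows[order_id] = {"product": product, "quantity": quantity}
        self._next_id += 1
        return order_id

    def fetch(self, order_id):
        return self._rows.get(order_id)

# Middle tier: business logic -- each rule is enforced once, here,
# rather than duplicated across every client application.
class OrderService:
    def __init__(self, store):
        self._store = store

    def place_order(self, product, quantity):
        if quantity <= 0:                      # business rule
            raise ValueError("quantity must be positive")
        return self._store.insert(product, quantity)

    def describe(self, order_id):
        row = self._store.fetch(order_id)
        return "not found" if row is None else f'{row["quantity"]} x {row["product"]}'

# Client tier: presentation only -- it calls the middle tier and never
# touches the data layout directly.
service = OrderService(OrderStore())
order_id = service.place_order("widget", 3)
print(service.describe(order_id))
```

Because the client tier holds no business logic, a change to the validation rule, or a move from the in-memory store to a real DBMS, would leave the client code untouched.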
Application tiers don't always correspond to physical locations on the network. For example, the middle and third tiers may coexist on the same server running both IIS 5.0 and SQL Server, or they could be separate. The middle tier alone may tie together several computers, and sometimes the server becomes a client itself.
Separating the application into layers isolates each major area of functionality. The presentation is independent of the business logic, which in turn is separate from the data. Designing applications in this way has its trade-offs: it requires a little more analysis and design at the start, but greatly reduces maintenance costs and increases functional flexibility in the end.
The explosive growth of the Internet is a strong motivation for organizations to adopt n-tier architectures in their products. However, organizations still face challenges. How can they take advantage of new technologies while preserving existing investments in people, applications, and data? How can they build modern, scalable computing solutions that are dynamic and easy to change? How can they lower the overall cost of computing while making complex computing environments work? One solution is Microsoft Windows Distributed interNet Applications (DNA).
The Future of Applications on the Internet: Customers are beginning to demand global access to the information they need, both public and personal. Users increasingly want to use a single client application for their information access needs, and they rely on the versatility of the network and servers to provide content and services. Users will come to depend on these applications and want them to be universally available; they might even want to replace local applications on their desktop systems.
Consequently, there is likely to be an explosion of HTML-based server applications to feed the ubiquitous availability of the powerful Internet client. Applications will be factored into user-interface-only client components (with little software required beyond the standard Internet browser) and a middle tier of server components that have no user interface and that provide services to the local desktop or across the Internet.