Client/server computing was a technology ahead of its time—and more importantly ahead of the enterprise network infrastructure capabilities of the time.
While vendors and users largely agreed on the value of the model and on the best methods to implement the model, hardware and software vendors’ desire to lock users into their product lines as well as the stirrings of the Internet and World Wide Web standards all made for an interesting time in the technology reporting business.
The client/server model emerged during the boom years of the PC industry. Client/server computing also helped fuel the adoption of Novell NetWare as a means to harness the increasing power of distributed personal computers alongside the central servers that delivered application management, data and storage, all while trying to bring some adult CPU supervision to the unbridled demands for PC access to data center resources.
Those demands came flooding in from the corporate desktops that were connected to local-area networks that were fairly speedy for the time as well as from laptop users who were dialing in via modems. The concept behind client/server computing was solid and an extension of what had been happening in mainframes and dumb terminals ever since the first command line started blinking.
The idea was that if you could distribute the compute workload, you could reach a broader computing audience, get results to the people doing the work and—as was said at the time—move intelligence to the computer periphery.
However, the era of client/server withered due to its own success. Think of 100 people working on editing documents both within corporate offices and on the road via modem. Tracking changes, setting levels of access priority and solving that old database problem of handling changes from many people working on one document and one person on many documents quickly produced client/server overload.
Take that simple document example and scale it up to enterprise resource planning applications or financial systems, and the infrastructure sagged and often simply ground to a halt. Now take those workloads and try to mesh them with other, incompatible systems from vendors that weren’t in sharing mode, and you can see why the 1990s version of client/server computing wasn’t going to scale.
When you look at the client/server-related articles in the '90s, you’ll find lots of arguments about networking and application standards, as well as questions about whether PC operating systems were really enterprise-ready and how convoluted licensing practices were holding back the advent of robust enterprise applications. With only a few new buzzwords inserted, the arguments sound very much like those of today concerning cloud computing.
A 1995 article looking at the differing client/server visions of Microsoft and IBM “focuses on the approaches of International Business Machines Corp. and Microsoft Corp. in client/server computing.” Another article from the same year compared “Microsoft's plans to build, buy or license a transaction-processing monitor for Windows NT; IBM's Customer Information Control System TP monitor and Novell Inc.'s SuperNOS strategy.”
The Wikipedia.org entry on client/server offers a decent definition: “The client/server model is a distributed application structure in computing that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. Often, clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system.
“A server is a host that is running one or more server programs which share their resources with clients," according to Wikipedia. "A client does not share any of its resources, but requests a server's content or service function. Clients, therefore, initiate communication sessions with servers which await incoming requests.”
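The roles in that definition — a server sharing a service, a client initiating the session and awaiting a reply — can be sketched in a few lines with Python's standard socket library. This is a minimal illustration, not period-accurate 1990s code; the upper-casing "service" is an invented stand-in for whatever resource a real server would share, and both ends run on one machine, which the definition explicitly allows.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 0  # port 0: let the OS pick a free port

def run_server(sock):
    """Server role: share a resource (here, an upper-casing service) with a client."""
    conn, _ = sock.accept()          # await an incoming request
    with conn:
        data = conn.recv(1024)       # read the client's request
        conn.sendall(data.upper())   # respond with the service's result

# Set up a listening server socket on an OS-assigned port.
server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind((HOST, PORT))
server_sock.listen()
port = server_sock.getsockname()[1]

t = threading.Thread(target=run_server, args=(server_sock,))
t.start()

# Client role: initiate the communication session and request the service.
with socket.create_connection((HOST, port)) as client:
    client.sendall(b"hello, server")
    reply = client.recv(1024)

t.join()
server_sock.close()
print(reply.decode())  # HELLO, SERVER
```

Note how the asymmetry in the Wikipedia definition shows up directly: the server binds, listens and waits, while the client connects and asks; the client shares nothing of its own.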