Most people with management training have heard some mention of the Hawthorne effect, often loosely summarized as the tendency of people to work better when they think someone is paying attention.
The actual outcomes, and the resulting insights, of Western Electric's productivity studies at its Hawthorne facility in Cicero, Ill., in the 1920s and 1930s are a good deal more complex.
For example, some of those Hawthorne studies found that social norms discouraged any worker from doing too much more than the rest of the group. In short, there's more to those results than the facile conclusion that you'll get more work from people if you make them feel special.
What's generally true, though, about the Hawthorne studies is that they triggered a breakthrough in the recognition that the workplace is a social system. Post-Hawthorne, managers became more likely to understand that it was a mistake to treat the worker as merely a component of a process and to seek to optimize that process by technical means alone, as practitioners such as Frederick Taylor had urged in the late 1890s and early 1900s.
The “scientific management” efforts of Taylor and others were effective in getting a worker to shovel more coal or even to assemble more electrical relays, but they were less likely to produce good results in more complex task domains.
This brings me to the subject of this column: the resurgence of the mainframe computer. Bear with me—I'll be the first to admit that the connection isn't obvious, but the Hawthorne studies seem to me to be the missing link that explains why anyone ever thought that the mainframe was destined for the scrap heap.
The mainframe computer is the place where today's neat things are happening. If that statement had fallen through a time warp to show up on someone's breakfast plate back in 1991, we might not have had to listen to predictions at that time that “the last mainframe computer will be unplugged in five years,” as one such prognosticator, former pundit and now venture capitalist Stewart Alsop, put it.
Alsop had the grace to confess in 1996, “I admit it: We're stuck with mainframes for my lifetime.” Unfortunately, he then said, “I still think that eventually there will be no mainframes,” and there he parted company, not only with me but also with the members of eWEEK's Corporate Partner Advisory Board. That group, in a June roundtable discussion that we'll summarize in an upcoming issue of eWEEK, told me that not only is the mainframe alive, but it's also growing in its appeal as a playground for the best and the brightest of the graduates of academic IT programs.
Utility computing, hardware virtualization, self-healing systems and the synergies of massive search and data mining capabilities are the sorts of things that get done on large-scale systems. The problems are inherently those of complex machine organization and operation, and the paybacks come from radically reducing the human time spent to deliver a unit of IT performance.
The question is, Why did things ever seem otherwise? Why was it ever fashionable to assert that the agility of independent thought and creativity on PCs, combined with Moore's Law rates of performance improvement, made PCs the future growth path for all serious work?
I suggest that this resulted from a pre-Hawthorne kind of confusion between the technical and the social. When the PC was novel and costly, you had to be the kind of person who embraces complexity and invests in opportunity to be on the leading edge of that revolution. The early years of the PC were great because the first wave of PC people were great, but as the user base started to look more like the general population, the downside of the PC—its need for costly handholding and other maintenance—became much more obvious.
Mainframes are better than ever, but they need a new generation of talent to make the most of them. No tool is any better than the talent it attracts.
Peter Coffee can be reached at peter_coffee@ziffdavis.com.