In December 2012, Linux developer Ingo Molnar sent Linus Torvalds a note, suggesting that he drop support of Intel’s iconic i386 processor from Linux.
The problem, explained Molnar, a Red Hat employee, was that the work involved in continuing to support the processor far outweighed any benefit. The “complexity has plagued us with extra work whenever we wanted to change SMP primitives, for years,” Molnar wrote.
“Unfortunately there’s a nostalgic cost: your old original 386 DX33 system from early 1991 won’t be able to boot modern Linux kernels anymore,” he told Torvalds. “Sniff.”
Torvalds reportedly shot back, “I’m not sentimental. Good riddance.”
There might not be a lot of nostalgia for the i386 or its follow-on, the i486, but when they hit the market in 1985 and 1989, respectively, they represented a significant step forward for Intel and its x86 architecture at a time when RISC (Reduced Instruction Set Computing) designs dominated high-end systems.
The chips brought the processing power and memory capacity that would help Microsoft’s Windows operating system run some of the larger data center applications. They also helped set the stage for the client/server computing model, fueled the worldwide adoption of PCs and enabled Intel to grow into the world’s largest chip maker. Torvalds would develop Linux on this platform in 1991.
The i386—or the 386, as it later became known—was the first Intel chip to offer 32-bit computing capabilities, a step up from its 16-bit predecessor, the 286. Intel wouldn’t boost its x86 CPUs to 64-bit until the 2000s (following on the heels of Advanced Micro Devices’ move to produce its 64-bit x86-64 processors). The 486 would be the first x86-based CPU to contain a million transistors.
At the time, IBM-compatible PCs powered by Intel’s x86 chips were running Windows, but RISC was the top architecture, found in the massive servers and workstations of the day. Those chips were too powerful and expensive for most desktop systems, however, leaving the field wide open for an architecture like the x86.
The 386 was the third generation of Intel’s processors, and the one that made the big difference in PCs. It would take a while, but Windows would come to love the 386 and 486, particularly as Microsoft added greater sophistication to its graphical user interface (GUI) along with support for multitasking.
“When the chips first came out, the conventional wisdom was that nobody would need that much processing power for a desktop computer,” Nathan Brookwood, principal analyst for Insight 64, told eWEEK. “So the first systems to use it were servers.”
The 386, which eventually hit speeds of up to 33MHz, brought a range of capabilities and features the earlier 286 lacked. Not only was it 32-bit, but it also could support operating systems that used virtual memory. It offered hardware debugging and operated in three modes: a real mode that made it backward-compatible with the 8086 and 8088, a protected mode that extended the 286’s protected mode to 32 bits, and a virtual 8086 mode for multitasking older software.
The chip also used a memory-segment architecture similar to that of earlier processor models, but Intel increased the maximum size of a memory segment to 4GB, a significant jump from the 64KB segments of real mode.
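The scale of that jump can be sketched in a few lines of Python. This is purely illustrative (the function name `real_mode_address` and the constants are my own, not anything from the article): it shows the 8086-style segment:offset calculation that the 386’s real mode preserved, and how much larger a 386 protected-mode segment could be.

```python
# Illustrative sketch: 8086 real-mode addressing vs. the 386's segment limits.

def real_mode_address(segment: int, offset: int) -> int:
    """8086-style physical address: the 16-bit segment is shifted left 4 bits
    and added to the 16-bit offset, giving a 20-bit (1MB) address space."""
    assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
    return (segment << 4) + offset

# A real-mode segment spans at most 64KB (16-bit offsets)...
MAX_REAL_SEGMENT = 0xFFFF + 1      # 64KB
# ...while a 386 protected-mode segment limit can cover 4GB (32-bit offsets).
MAX_386_SEGMENT = 2**32            # 4GB

print(hex(real_mode_address(0xB800, 0x0000)))  # prints 0xb8000
print(MAX_386_SEGMENT // MAX_REAL_SEGMENT)     # prints 65536
```

The ratio in the last line is the point: a single 386 segment could be 65,536 times larger than anything the 8086 or 286 real mode could describe.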
It took Microsoft several years to embrace the capabilities of the 386 in Windows, but the features in the chip provided the processing power to support Windows 3.0, 3.1 and 95.
These versions allowed Windows to become the dominant graphical PC operating platform around the world in the 1990s.
“Transforming from 16 [bit] to 32 bit would have been very important to Microsoft for the graphical interface,” Roger Kay, principal analyst with Endpoint Technologies Associates, told eWEEK.
Intel eventually released different versions of the chip, including the 386DX, with a full 32-bit data bus, and the lower-cost 386SX, which ran 32-bit code internally but used a 16-bit external data bus. Two other versions were the low-power 386SL and the 386EX chip for embedded systems.
Intel eventually discontinued the 386 in 2007, though as late as last year, the chips were still being used, primarily in embedded systems.
The 486 featured 8KB of Level 1 cache (later doubled to 16KB) and an integrated floating-point unit, which made it less expensive and more efficient than its predecessors, which had to rely on a separate coprocessor for floating-point capabilities, according to Insight 64’s Brookwood.
The 486 reached a performance of 20 MIPS and initial clock speeds of 25MHz and 33MHz. It also offered features, such as pipelining, that had previously been found only in mainframe systems. These helped the 486 execute one instruction per cycle, as long as the data was already in the cache.
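Why pipelining gets a processor close to one instruction per cycle can be shown with a toy throughput model. This is an illustration under assumed numbers (a five-stage pipeline; the function names are my own), not 486 microarchitecture data: without a pipeline each instruction occupies the whole processor for all of its stages, while a full pipeline retires one instruction per cycle after the first has worked its way through.

```python
# Toy model: cycle counts with and without an N-stage pipeline,
# assuming every instruction hits in the cache (no stalls).

def cycles_unpipelined(n_instructions: int, stages: int = 5) -> int:
    """Each instruction runs start to finish before the next begins."""
    return n_instructions * stages

def cycles_pipelined(n_instructions: int, stages: int = 5) -> int:
    """The first instruction fills the pipeline; every later one
    retires one cycle after the previous one."""
    return stages + (n_instructions - 1)

print(cycles_unpipelined(100))  # prints 500
print(cycles_pipelined(100))    # prints 104
```

For 100 instructions the pipelined count is 104 cycles rather than 500, i.e. close to one instruction per cycle; a cache miss would stall the pipeline and erode exactly that advantage, which is why the caveat about data already being in the cache matters.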
Like the 386, there were a number of versions of the 486 produced, including the 486DX, which had the integrated floating-point unit, and the 486SX, which didn’t.
The 486 represented a key turning point in Intel’s history. In a situation echoing Intel’s later struggles developing the non-x86 Itanium platform for 64-bit server computing, the company in the late 1980s was working on two different and incompatible 32-bit chips. In his book “Only the Paranoid Survive,” published in 1996, former Intel CEO Andy Grove described how the company, while developing the 486—which was based primarily on CISC (Complex Instruction Set Computing) technology and compatible with all PC software—was also working on the 32-bit i860 CPU, which was based on RISC. The i860 was fast, but not compatible with most software, Grove said.
In a passage in his book he outlined the problems facing Intel.
“We didn’t know what to do,” Grove wrote. “So we introduced both, figuring we’d let the marketplace decide. However, things were not that simple. Supporting a microprocessor architecture with all the necessary computer-related products—software, sales, and technical support—takes enormous resources. Even a company like Intel had to strain to do an adequate job with just one architecture. And now we had two different and competing efforts, each demanding more and more internal resources.”
Internally, the company was conflicted—which project should get the resources? Which chip should be pitched to customers? And it caused confusion among system makers.
“The fight for resources and for marketing attention … led to internal debates that were fierce enough to tear apart our microprocessor organization,” Grove wrote. “Meanwhile, our equivocation caused our customers to wonder what Intel really stood for, the 486 or i860?”
Eventually, the market did decide, with vendors such as Compaq urging Intel to stay with the x86-based 486 (though Microsoft fell into the i860 camp). Intel officials chose x86, as the company did again more than a decade later when it followed AMD’s lead and brought 64-bit capabilities to its x86-based Xeon server chips.