Microsoft’s new Windows Server 2016 operating system (OS) is just being launched. IBM has its mainframe and Power operating systems, Oracle has Solaris, and those are just a few of the OSs that still abound in the market. Let’s focus on Intel-architected systems, as other chip architectures have slightly different approaches.

The OS exists for a reason: without some sort of base-level process, the hardware would not be set up correctly to support anything else layered on top of it.

But as the whole computer ecosystem grew, the size of the OS ballooned as it introduced new functionality to try to manage that whole environment, while removing virtually nothing as people moved on and stopped using things like modems.

However, there is now often a hypervisor as well, such as ESX, Hyper-V or KVM, which also has to be initiated to support virtualisation.
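As a purely illustrative aside, here is a minimal sketch, assuming a Linux host, of one way to check whether the CPU exposes the hardware virtualisation extensions (Intel VT-x or AMD-V) that a hypervisor such as KVM relies on; the function name is our own, not part of any product mentioned here.

    # Illustrative sketch, not from the article: detect hardware
    # virtualisation support on a Linux host by reading /proc/cpuinfo.
    def cpu_has_virtualisation(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
        """Return True if the vmx (Intel) or svm (AMD) CPU flag is present."""
        try:
            with open(cpuinfo_path) as f:
                for line in f:
                    if line.startswith("flags"):
                        flags = line.split(":", 1)[1].split()
                        return "vmx" in flags or "svm" in flags
        except OSError:
            pass  # not a Linux host, or /proc is unavailable
        return False

    if __name__ == "__main__":
        print("Hardware virtualisation available:", cpu_has_virtualisation())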

As Quocirca wrote almost two years ago, a “hardware-assisted software-defined” approach makes sense, but it does require hardware to operate at a high-function, highly standardised level.

So, let’s forget about the OS and focus on where the next-generation platform needs to go.

A means of initiating the hardware is still required.

The BIOS has moved on to the unified extensible firmware interface (UEFI), but it is still a core link in the chain of getting a server up and running.
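As another hedged illustration, the sketch below, again assuming a Linux host, reports whether the machine was started through UEFI or legacy BIOS; the kernel exposes /sys/firmware/efi only when it was booted via UEFI firmware.

    # Illustrative sketch, not from the article: report which firmware
    # interface was used to boot a Linux host.
    from pathlib import Path

    def firmware_interface() -> str:
        """Return the boot firmware type: UEFI or legacy BIOS."""
        # /sys/firmware/efi is created by the kernel only on UEFI boots.
        return "UEFI" if Path("/sys/firmware/efi").exists() else "legacy BIOS"

    if __name__ == "__main__":
        print("Firmware interface:", firmware_interface())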

Collapsing these layers into a more unified platform would bring benefits. Updates to a single layer are less likely to break an existing function at a different level in the overall stack.


Management becomes easier: rather than having to manage these different layers separately, a more unified platform enables a simpler environment in which root cause analysis of problems is easier to carry out.

Patching and updating are easier, and functions can be standardised across and through the platform.

After all, a cloud platform is the ultimate abstraction layer.
