OS kernels

Kernel (operating system)
https://share.summari.com/kernel-operating-system?utm_source=Chrome

Core of a computer operating system

  • The kernel is the portion of the operating system code that is always resident in memory and facilitates interactions between hardware and software components.
  • On most systems, the kernel is one of the first programs loaded on startup. It handles the rest of startup as well as memory, peripherals, and input/output (I/O) requests from software, translating them into data-processing instructions for the central processing unit.

Random-access memory

  • Used to store both program instructions and data
  • Often multiple programs will want access to memory, frequently demanding more memory than the computer has available.
  • The kernel decides which memory each process can use, and determines what to do when not enough memory is available.

Input/output devices

  • These include peripherals such as keyboards, mice, disk drives, printers, USB devices, network adapters, and display devices.

Resource management

  • The kernel defines the execution domain (address space) and the protection mechanism used to mediate access to resources within a domain.
  • Kernels also provide methods for synchronization and inter-process communication (IPC).
  • The kernel must provide IPC so that programs can reach the facilities other programs provide, and it must also give running programs a method to request access to these facilities.
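
As a concrete illustration of kernel-mediated IPC, here is a minimal sketch using the POSIX pipe(2) primitive: the kernel owns the channel, and the two processes exchange data only by issuing system calls against their file descriptors.

```c
/* Minimal sketch of kernel-mediated IPC using a POSIX pipe.
 * The kernel owns the channel; the processes only hold file
 * descriptors and issue system calls to move data through it. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];                     /* fds[0]: read end, fds[1]: write end */
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                 /* child: reads from the pipe */
        char buf[32] = {0};
        close(fds[1]);
        read(fds[0], buf, sizeof buf - 1);
        printf("child received: %s\n", buf);
        return 0;
    }
    /* parent: writes into the pipe */
    close(fds[0]);
    const char *msg = "hello via the kernel";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);
    wait(NULL);                     /* reap the child */
    return 0;
}
```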

Memory management

  • The kernel has full access to the system's memory and must allow processes to safely access this memory as they require it
  • Often the first step in doing this is virtual addressing, usually achieved by paging and/or segmentation.
  • Virtual addressing makes a given physical address appear to be another address, the virtual address. This allows every program to behave as if it is the only one (apart from the kernel) running and thus prevents applications from crashing each other.
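
The translation itself can be sketched in a few lines. The page size, single-level table, and software walk below are toy simplifications (real MMUs walk multi-level tables in hardware), but the split of a virtual address into a virtual page number plus an offset is the core idea:

```c
/* Toy single-level page table: translate a virtual address to a
 * physical one. Real MMUs do this in hardware with multi-level
 * tables; this sketch only shows the address arithmetic. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE  4096u      /* 4 KiB pages */
#define PAGE_SHIFT 12
#define NUM_PAGES  16         /* toy 64 KiB virtual address space */

/* page_table[vpn] = physical frame number; 0 means "unmapped"
 * (frame 0 is reserved in this toy so 0 can signal a fault). */
static uint32_t page_table[NUM_PAGES];

static int translate(uint32_t vaddr, uint32_t *paddr) {
    uint32_t vpn    = vaddr >> PAGE_SHIFT;      /* virtual page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);  /* offset within page */
    if (vpn >= NUM_PAGES || page_table[vpn] == 0)
        return -1;            /* page fault: the kernel decides what happens next */
    *paddr = (page_table[vpn] << PAGE_SHIFT) | offset;
    return 0;
}

int main(void) {
    page_table[1] = 3;                          /* map virtual page 1 -> physical frame 3 */
    uint32_t pa;
    if (translate(0x1234, &pa) == 0)            /* vpn 1, offset 0x234 */
        printf("virtual 0x1234 -> physical 0x%x\n", pa);  /* prints 0x3234 */
    return 0;
}
```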

Device Management

  • To perform useful functions, processes need access to the peripherals connected to the computer, which are controlled by the kernel through device drivers.
  • At the hardware level, common abstractions of device drivers include:
      • Interfacing directly
      • Using a high-level interface (Video BIOS)
      • Using a lower-level device driver (file drivers using disk drivers)
      • Simulating work with hardware, while doing something entirely different
  • At the software level, device driver abstractions include:
      • Allowing the operating system direct access to hardware resources
      • Implementing only primitives
      • Implementing an interface for non-driver software such as TWAIN
      • Implementing a language
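
The common thread in these abstractions is a uniform table of operations that the kernel dispatches through, loosely in the spirit of Unix character-device interfaces. The struct and the console-backed driver below are invented for illustration:

```c
/* Sketch of the "driver as a table of operations" abstraction,
 * loosely modeled on Unix character-device interfaces. The struct
 * and the console driver are invented for this example. */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

struct device_ops {                       /* uniform interface the kernel calls */
    int  (*open)(void);
    long (*read)(char *buf, size_t len);
    long (*write)(const char *buf, size_t len);
};

/* One concrete driver: a "console" backed by stdio for the demo. */
static int  console_open(void) { return 0; }
static long console_read(char *buf, size_t len) {
    return fgets(buf, (int)len, stdin) ? (long)strlen(buf) : -1;
}
static long console_write(const char *buf, size_t len) {
    return (long)fwrite(buf, 1, len, stdout);
}

static const struct device_ops console_driver = {
    .open = console_open, .read = console_read, .write = console_write,
};

int main(void) {
    /* The caller never needs to know what the device is; it just
     * dispatches through the ops table. */
    const struct device_ops *dev = &console_driver;
    if (dev->open() == 0)
        dev->write("hello, device\n", 14);
    return 0;
}
```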

System calls

  • A system call is how a process requests a service from the operating system's kernel that it does not normally have permission to run.
  • Most operations that interact with the system require permissions not available to a user-level process; e.g., I/O with a device present on the system, or any form of communication with other processes, requires the use of system calls.
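
On Linux, such a request can be made explicitly through the generic syscall(2) wrapper; the sketch below performs the same write(2) that printf() would eventually issue. (The mechanism differs on other systems.)

```c
/* Issuing a system call directly on Linux: the same write that
 * printf() eventually performs, requested explicitly from the
 * kernel via the generic syscall(2) wrapper. */
#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    const char msg[] = "hello from a raw system call\n";
    /* Control transfers to kernel mode; the kernel performs the
     * privileged I/O on the process's behalf and returns. */
    syscall(SYS_write, STDOUT_FILENO, msg, sizeof msg - 1);
    return 0;
}
```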

Kernel design decisions

Protection

  • An important consideration in the design of a kernel is the support it provides for protection from faults (fault tolerance) and from malicious behaviors (security).
  • The mechanisms or policies provided by the kernel can be classified according to several criteria, including: static (enforced at compile time) or dynamic (enforced at run time); pre-emptive or post-detection.
  • Support for hierarchical protection domains is typically implemented using CPU modes
  • Kernel security mechanisms play a critical role in supporting security at higher levels
  • One approach is to use firmware and kernel support for fault tolerance and build the security policy for malicious behavior on top of that
  • Another is to delegate the responsibility of checking access-rights for every memory access to the memory management unit (MMU)
  • A common misconception in computer security is that any security policy can be implemented in an application regardless of kernel support
  • If the firmware does not support protection mechanisms, it is possible to simulate protection at a higher level

Hardware- or language-based protection

  • Advantages of this approach include:
      • No need for separate address spaces. Switching between address spaces is a slow operation that causes a great deal of overhead, and much optimization work is currently performed in existing operating systems to prevent unnecessary switches.
      • Flexibility. Changes to the protection scheme do not require new hardware.
  • Disadvantages include:
      • Longer application startup time. Applications must be verified when they are started to ensure they have been compiled by the correct compiler, or may need recompiling either from source code or from bytecode.
      • Inflexible type systems.

Process cooperation

  • Edsger Dijkstra proved that from a logical point of view, atomic lock and unlock operations operating on binary semaphores are sufficient primitives to express any functionality of process cooperation.
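
A minimal sketch of this result in practice, using POSIX unnamed semaphores initialized to 1 (i.e., binary semaphores): sem_wait plays the role of Dijkstra's atomic lock (P) and sem_post the unlock (V). Compile with -pthread on Linux or similar systems.

```c
/* Dijkstra-style process cooperation with a binary semaphore:
 * sem_wait is the atomic "lock" (P) and sem_post the "unlock" (V).
 * A POSIX unnamed semaphore initialized to 1 acts as the binary
 * semaphore described above. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;                 /* binary semaphore guarding counter */
static long  counter = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);           /* P: atomically acquire */
        counter++;                  /* critical section */
        sem_post(&mutex);           /* V: atomically release */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    sem_init(&mutex, 0, 1);         /* initial value 1 => binary semaphore */
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    sem_destroy(&mutex);
    return 0;
}
```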

I/O device management

  • Similar to physical memory, allowing applications direct access to controller ports and registers can cause the controller to malfunction or the system to crash.
  • To avoid this, it is important to provide a more abstract interface to manage the device; this interface is normally provided by a device driver or hardware abstraction layer.

Kernel-wide design approaches

  • The principle of separation of mechanism and policy is the substantial difference between micro and monolithic kernels.
  • A mechanism is the support that allows the implementation of many different policies, while a policy is a particular "mode of operation" (the scheduler sketch after this list illustrates the distinction).
  • In a minimal microkernel, just some very basic policies are included.
  • This allows what is running on top of the kernel to decide which policies to adopt.
  • A monolithic kernel tends to include many policies, thereby constraining the rest of the system to rely on them.
  • The failure to properly fulfill this separation is one of the major causes of the lack of substantial innovation in existing operating systems.
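
A sketch of the distinction for a scheduler, with all names invented for illustration: the dispatch mechanism is fixed, while the policy that picks the next task is a pluggable function.

```c
/* Separation of mechanism and policy, sketched for a scheduler:
 * the *mechanism* (dispatching whichever task is chosen) is fixed,
 * while the *policy* (how to choose) is a pluggable function. */
#include <stdio.h>

struct task { const char *name; int priority; int remaining; };

/* Policy: given the task list, return the index to run next, or -1. */
typedef int (*sched_policy)(struct task *tasks, int n);

static int pick_round_robin(struct task *t, int n) {
    static int last = -1;
    for (int i = 1; i <= n; i++)                 /* next runnable after last */
        if (t[(last + i) % n].remaining > 0) return last = (last + i) % n;
    return -1;
}

static int pick_highest_priority(struct task *t, int n) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (t[i].remaining > 0 && (best < 0 || t[i].priority > t[best].priority))
            best = i;
    return best;
}

/* Mechanism: run whatever the policy selects until nothing is left. */
static void dispatch(struct task *tasks, int n, sched_policy choose) {
    for (int i; (i = choose(tasks, n)) >= 0; ) {
        printf("running %s\n", tasks[i].name);
        tasks[i].remaining--;                    /* one time slice */
    }
}

int main(void) {
    struct task tasks[] = { {"editor", 1, 2}, {"daemon", 3, 2} };
    puts("-- priority policy --");
    dispatch(tasks, 2, pick_highest_priority);

    struct task tasks2[] = { {"editor", 1, 2}, {"daemon", 3, 2} };
    puts("-- round-robin policy --");
    dispatch(tasks2, 2, pick_round_robin);       /* same mechanism, different policy */
    return 0;
}
```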

Monolithic kernels

  • These types of kernels consist of the core functions of the operating system and the device drivers, with the ability to load modules at runtime (a minimal module skeleton follows this list).
  • They provide rich and powerful abstractions of the underlying hardware, with operating-system services running together in a single address space.
  • The main disadvantages of monolithic kernels are the dependencies between system components and the fact that large kernels can become very difficult to maintain.
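
For a modular monolithic kernel such as Linux, runtime loading looks like the following minimal module skeleton; built against the kernel headers, it can be loaded with insmod and removed with rmmod.

```c
/* Minimal Linux loadable kernel module: code added to a running
 * monolithic kernel at runtime. Build against the kernel headers,
 * then load with `insmod hello.ko` and unload with `rmmod hello`. */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

static int __init hello_init(void) {
    pr_info("hello: loaded into kernel space\n");
    return 0;                       /* nonzero would abort the load */
}

static void __exit hello_exit(void) {
    pr_info("hello: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example of a runtime-loadable module");
```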

Microkernel

  • Microkernel (also abbreviated μK or uK) is the term describing an approach to operating system design by which the functionality of the system is moved out of the traditional “kernel” into a set of “servers” that communicate through a “minimal” kernel.
  • The microkernel approach consists of defining a simple abstraction over the hardware, with minimal OS services such as memory management, multitasking, and inter-process communication, and implementing those services in user-space programs, referred to as servers.
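
Structurally, such a server is a receive/dispatch/reply loop. The sketch below is schematic: the message format and the ipc_receive stand-in are invented, since real microkernels (the L4 family, for example) each define their own IPC primitives.

```c
/* Schematic microkernel server loop: a user-space "file server"
 * that receives request messages and acts on them. The message
 * format and ipc_receive() are invented stand-ins for a real
 * kernel's IPC primitives (send/receive being the minimal
 * kernel's job). */
#include <stdio.h>

enum request { REQ_OPEN, REQ_READ, REQ_SHUTDOWN };

struct message {
    enum request op;
    char         payload[64];
};

/* Stand-in for the kernel IPC primitive: here we just pull
 * messages from a canned queue so the sketch is self-contained. */
static int ipc_receive(struct message *m) {
    static const struct message queue[] = {
        { REQ_OPEN, "/etc/motd" },
        { REQ_READ, "" },
        { REQ_SHUTDOWN, "" },
    };
    static size_t next = 0;
    if (next >= sizeof queue / sizeof queue[0]) return -1;
    *m = queue[next++];
    return 0;
}

int main(void) {
    struct message m;
    while (ipc_receive(&m) == 0 && m.op != REQ_SHUTDOWN) {
        switch (m.op) {             /* the service's policy lives in the server */
        case REQ_OPEN: printf("server: open %s\n", m.payload); break;
        case REQ_READ: printf("server: read -> reply sent\n"); break;
        default:       break;
        }
    }
    return 0;
}
```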

Monolithic kernels vs microkernels

  • As the computer kernel grows, so grows the size and vulnerability of its trusted computing base; and, besides reducing security, there is the problem of enlarging the memory footprint.
  • To reduce the kernel's footprint, extensive editing has to be performed to carefully remove unneeded code, which can be very difficult with non-obvious interdependencies between parts of a kernel with millions of lines of code.

Performance

  • The performance of microkernels was poor in both the 1980s and early 1990s
  • These studies did not analyze the reasons for the inefficiency
  • It remained to be studied whether the solution to building an efficient microkernel was, unlike previous attempts, to apply the correct construction techniques.
  • On the other hand, the hierarchical protection architecture that leads to the design of a monolithic kernel carries a significant performance drawback each time there is an interaction between different levels of protection.

Hybrid (or modular) kernels

  • These are microkernels that have some "non-essential" code in kernel-space so that the code runs more quickly than it would in user-space.
  • A few advantages of the modular (or hybrid) kernel include:
      • Faster development time for drivers that can operate from within modules
      • On-demand capability, versus spending time recompiling a whole kernel for things like new drivers or subsystems
      • Faster integration of third-party technology
  • The corresponding cost is more interfaces to pass through, and hence more opportunities for bugs.

Exokernels

  • A still-experimental approach to operating system design
  • They differ from other types of kernels in limiting their functionality to the protection and multiplexing of the raw hardware
  • This separation of hardware protection from hardware management enables application developers to determine how to make the most efficient use of the available hardware for each specific program

Multikernels

  • A multikernel operating system treats a multi-core machine as a network of independent cores, as if it were a distributed system. It does not assume shared memory but rather implements inter-process communications as message-passing.

History of kernel development

Early operating system kernels

  • An operating system is not required to run a computer.
  • Programs can be directly loaded and executed on the "bare metal" machine provided that the authors of those programs are willing to work without any hardware abstraction or operating system support
  • Most early computers operated this way during the 1950s and early 1960s; they were reset and reloaded between the execution of different programs.

Time-sharing operating systems

  • In the decade preceding Unix, computers had grown enormously in power, to the point where computer operators were looking for new ways to get people to use the spare time on their machines.
  • One of the major developments during this era was time-sharing, whereby a number of users would get small slices of computer time, at a rate at which it appeared they were each connected to their own, slower, machine
  • Security and access control became a major focus of the Multics project

Amiga

  • The AmigaOS kernel uses a microkernel message-passing design, but there are other kernel components, like graphics.library, that have direct access to the hardware.

Unix

  • In the Unix model, the operating system consists of two parts: a huge collection of utility programs that drive most operations and a kernel that runs the programs.
  • The kernel is a program, running in supervisor mode, that acts as a program loader and supervisor for the small utility programs making up the rest of the system, and that provides locking and I/O services for these programs.

Mac OS

  • Apple first launched its classic Mac OS in 1984

History of Microsoft Windows

  • Windows was first released in 1985 as an add-on to MS-DOS
  • Initial releases of Windows, prior to Windows 95, were considered an operating environment
  • This product line continued to evolve through the 1980s and 1990s, with the Windows 9x series adding 32-bit addressing and pre-emptive multitasking
  • Microsoft also developed Windows NT, an operating system with a very similar interface, but intended for high-end and business users

IBM Supervisor

  • Supervisory program or supervisor is a computer program, usually part of an operating system, that controls the execution of other routines and regulates work scheduling, input/output operations, error actions, and similar functions.
  • Historically, this term was essentially associated with IBM's line of mainframe operating systems starting with OS/360.

Development of microkernels

  • Mach, developed by Richard Rashid at Carnegie Mellon University, is the best-known general-purpose microkernel.
  • The L4 microkernel family (mainly the L3 and the L4 kernels) was created to demonstrate that microkernels are not necessarily slow.
  • Newer implementations such as Fiasco and Pistachio are able to run Linux next to other L4 processes in separate address spaces.