Which protection ring has the highest privilege level and acts directly with the hardware?

Chapter 14 System Hardening & Baselines

TRUE/FALSE

1. (pg 467-468) Service pack is the term given to a small software update designed to address a specific

3. (pg 485) Windows Defender is now standard with all versions of the Windows desktop operating systems.

4. (pg 501) Protecting data while in use is a much trickier proposition than protecting it in transit or in

6. (p. 461) What term refers to the process of establishing a system’s operational state?

7. (p. 464-465) Which protection ring has the highest privilege level and acts directly with the physical hardware?

8. (p. 465) The security kernel is also known as a __________.

Network Survivability

Rajiv Ramaswami, ... Galen H. Sasaki, in Optical Networks (Third Edition), 2010

Problems

9.1

Consider a shared protection ring with two types of restoration possible. In the first scheme, the connection is rerouted by the source and destination around the ring in the event of a failure. In the second, the connection is rerouted around the ring by the nodes adjacent to the failed link (as in a BLSR). Give an example of a traffic pattern where the first scheme uses less ring bandwidth than the second. Give another example where the two require the same amount of bandwidth.

9.2

Show that in a ring architecture if the protection capacity is less than the working capacity, then service cannot be restored under certain single failure conditions.

9.3

Compare the performance of UPSRs and BLSR/2s in cases where all the traffic is between a hub node and the other nodes. Assume the same ring speed in both cases. Is a BLSR/2 any more efficient than a UPSR in traffic-carrying capacity in this scenario?
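As a quick way to experiment with this comparison, the following Python sketch computes the minimum ring line rate for both architectures under hub traffic. The topology and demand values are arbitrary illustrations, and the capacity model is the standard textbook one: a UPSR line rate must cover the sum of all demands (each connection circles the whole ring), while a BLSR/2 line rate must cover twice the peak working-link load (half the line rate is reserved for protection).

```python
# Ring capacity comparison sketch. UPSR: every connection consumes
# bandwidth all the way around the ring, so the line rate must cover
# the sum of all demands. BLSR/2: working traffic takes the shorter
# way around, but half the line rate is reserved for protection, so
# the line rate must cover twice the maximum per-link working load.

N = 6                                        # ring nodes 0..5, hub at node 0
demands = {(0, v): 1 for v in range(1, N)}   # one unit hub<->each node

def shortest_arc(a, b, n):
    """Links (as (i, i+1) pairs) on the shorter way around the ring."""
    cw = [((a + k) % n, (a + k + 1) % n) for k in range((b - a) % n)]
    ccw = [((b + k) % n, (b + k + 1) % n) for k in range((a - b) % n)]
    return cw if len(cw) <= len(ccw) else ccw

upsr_rate = sum(demands.values())            # all demands circle the ring

load = {}
for (a, b), h in demands.items():
    for link in shortest_arc(a, b, N):
        load[link] = load.get(link, 0) + h
blsr2_rate = 2 * max(load.values())          # protection reserves half

print(upsr_rate, blsr2_rate)
```

For this hub-centered pattern the BLSR/2 needs at least as much line rate as the UPSR, consistent with the point of the exercise: spatial reuse buys nothing when all routes converge on one node.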

9.4

Construct a traffic distribution for which the traffic-carrying capacity of a BLSR/4 is maximized. What is this capacity as a multiple of the bit rate on the working fibers?

9.5

Assuming a uniform traffic distribution, compute the traffic-carrying capacity of a BLSR/4 as a multiple of the bit rate on the working fibers.

9.6

Consider the topology shown in Figure 9.28 over which STS-1s are to be transported as dictated by the bandwidth demands specified in the table below for each node pair. Assume all the bandwidth requirements are bidirectional.

Figure 9.28. Network topology for Problem 9.6.

STS-1    B    C    D    E
  A     12    6    4   12
  B      -    8   10    6
  C      -    -   12    2
  D      -    -    -    8

Given the fiber topology and the STS-1-based bandwidth requirements, we will utilize a two-fiber OC-N SONET ring architecture, but we need to determine which SONET ring architecture is the most suitable for the given network—the UPSR or the BLSR/2.

(a)

Provide a detailed illustration of how the six STS-1s between nodes A and C would be transported by a UPSR and a BLSR/2. Redraw Figure 9.28 to begin each illustration.

(b)

Suppose that a backhoe cuts the fiber pair between nodes B and C. Again, redrawing Figure 9.28 and referencing your illustrations above, provide a detailed illustration of how the six STS-1s between nodes A and C would be transported just after this failure for the UPSR and the BLSR/2. Use dashed lines to highlight any differences in the routing from normal operation.

(c)

Using the bandwidth demands given in the table above, design best-case ring routing plans for the UPSR and the BLSR/2. Illustrate the routing on the network topology of Figure 9.28. In addition, specify the quantity of STS-1s being transported over each fiber link for both cases.

(d)

Assuming that we want to use a single OC-N ring, what would be the minimum standard value of N in each case for the designed UPSR and BLSR/2?

(e)

Given all of this information, which ring architecture is better suited for this application? Briefly explain your reasoning.

9.7

The UPSR, BLSR/4, and BLSR/2 are designed primarily to handle single failures. However, they can handle some cases of simultaneous multiple failures as well. Carefully characterize the types of multiple link/node failure combinations that these different architectures can handle.

9.8

The 1 + 1 protection in a SONET UPSR is not implemented at a fiber level but at an individual SONET connection level: for each connection, the receiver picks the better of the two paths. An alternative and simpler approach would be to have the receiver simply pick the better of the two fiber lines coming in, say, based on the bit error rate. In this case, the receiver would not have to look at the individual connections in order to make its decision, but rather would look at the error rate of the composite signal on the fiber. Why doesn't this work?

9.9

Suppose you had only two fibers but could use two wavelengths, say, 1.3 μm and 1.55 μm, over each fiber. This can be used to deploy a BLSR/4 ring in three different ways: (1) the two working links could be multiplexed over one fiber and the two protection links over the other, (2) a working link and a protection link in the same direction could be multiplexed over one fiber, or (3) a working link and a protection link in the opposite direction could be multiplexed over one fiber. Which option would you choose?

9.10

Consider a four-fiber BLSR that uses both span and ring switching. What are the functions required in network management to (a) coordinate span and ring switching mechanisms and (b) allow multiple failures to be restored?

9.11

Consider the example shown in Figure 9.18. Carefully characterize the set of simultaneous multiple fiber cuts that can be handled by this arrangement.

9.12

Consider a five-node optical ring with one hub node and four access nodes. The traffic to be supported is one lightpath between each access node and the hub node. You can deploy either a two-fiber OCh-DPRing or a two-fiber OCh-SPRing in this application. No wavelength conversion is allowed inside the network, so each lightpath must use the same wavelength on every link along its path. Compare the amount of protection and working capacity needed for each case. Using a wavelength on a link counts as one unit of capacity. Would your answer change if wavelength conversion was allowed in both types of rings at any node in the ring?

9.13

Develop computer software that performs the following functions:

(a)

Allows you to input a network topology graph and a set of lightpaths (source-destinations).

(b)

Routes the lightpaths using a shortest-path algorithm.

(c)

Computes protection bandwidth in the network for two cases: 1 + 1 OCh protection and OCh shared mesh protection.

For 1 + 1 OCh protection, use an algorithm to provide two disjoint shortest paths for each lightpath, such as the one in [ST84]. For shared mesh protection, use the following algorithm: for each failure i, determine the amount of protection capacity, C_i(l), that would be required on each link l in the network. Prove that the total protection capacity needed on link l is then simply max_i C_i(l).

(d)

Experiment with a variety of topologies, traffic patterns, and different routing/protection computation algorithms. Summarize your conclusions.
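The shared mesh computation in part (c) can be sketched as follows. This is an illustrative Python sketch: the lightpath list, link names, and dictionary layout are assumptions, not from the text, but the reservation rule is the one stated above (reserve max over single failures of the per-failure requirement on each link).

```python
# Sketch of shared mesh protection capacity: for each single-link
# failure, compute the protection capacity C_i(l) needed on every
# link l, then reserve max_i C_i(l) on each link. Capacity can be
# shared because single failures are assumed non-simultaneous.

from collections import defaultdict

# Hypothetical inputs: each lightpath has a working path and a
# link-disjoint backup path, both given as lists of link names.
lightpaths = [
    {"working": ["AB", "BC"], "backup": ["AD", "DC"]},
    {"working": ["AB"],       "backup": ["AD", "DC", "CB"]},
]

# c[i][l]: protection units needed on link l when link i fails.
c = defaultdict(lambda: defaultdict(int))
for lp in lightpaths:
    for failed in lp["working"]:          # failure of any working link
        for l in lp["backup"]:            # activates the whole backup path
            c[failed][l] += 1

# Shared reservation on each link: worst case over all single failures.
protection = {}
for failed, caps in c.items():
    for l, units in caps.items():
        protection[l] = max(protection.get(l, 0), units)

print(protection)
```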

URL: https://www.sciencedirect.com/science/article/pii/B9780123740922500175

Dinkar Sitaram, Geetha Manjunath, in Moving To The Cloud, 2012

Trap and Emulate Virtualization

At a very high level, all three types of hypervisors described earlier operate in a similar manner. In each case, the guests continue execution until they try to access a shared physical resource of the hardware (such as an I/O device), or an interrupt is received. When this happens, the hypervisor regains control and mediates access to the hardware, or handles the interrupt.

To accomplish this functionality, hypervisors rely on a feature of modern processors known as the privilege level or protection ring. The basic idea behind privilege levels is that all instructions that modify the physical hardware configuration are permitted at the highest level; at lower levels, only restricted sets of instructions can be executed. Figure 9.3 shows the protection rings in the Intel x86 architecture [6], as an example. Other hardware architectures have similar concepts. There are four rings, numbered from 0 to 3. Programs executing in Ring 0 have the highest privileges and are allowed to execute any instructions or access any physical resources, such as memory pages or I/O devices. Guests are typically made to execute in Ring 3. This is accomplished by setting the Current Privilege Level (CPL) register of the processor to 3 before starting execution of the guest. If the guest tries to access a protected resource, such as an I/O device, an interrupt takes place, and the hypervisor regains control. The hypervisor then emulates the I/O operation for the guest. The exact details depend upon the particular hypervisor (e.g., Xen or Hyper-V) and are described in detail later. Note that in order to emulate the I/O operation, it is necessary for the hypervisor to have maintained the state of the guest and its virtual resources.
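The trap-and-emulate control flow described above can be sketched in miniature. In this toy Python model the instruction set, guest state, and handler names are invented for illustration; it only mimics the shape of the mechanism, with the "hypervisor" regaining control whenever a privileged operation traps:

```python
# Toy trap-and-emulate model: the guest runs at CPL 3; any instruction
# that touches protected hardware raises a trap, and the "hypervisor"
# emulates it against the guest's virtual device state.

RING0, RING3 = 0, 3
PRIVILEGED = {"out", "in", "hlt"}   # hypothetical privileged opcodes

class Trap(Exception):
    pass

def execute(opcode, cpl):
    """Raise a trap if a privileged opcode runs outside ring 0."""
    if opcode in PRIVILEGED and cpl != RING0:
        raise Trap(opcode)
    return f"executed {opcode}"

def run_guest(program):
    virtual_io_log = []             # hypervisor-maintained guest state
    for opcode in program:
        try:
            execute(opcode, cpl=RING3)      # guest runs at ring 3
        except Trap as t:
            # Hypervisor (ring 0) regains control and emulates the I/O.
            virtual_io_log.append(f"emulated {t.args[0]}")
    return virtual_io_log

print(run_guest(["add", "out", "mov", "in"]))
```

Unprivileged instructions run directly, while the two I/O opcodes trap and are emulated, which is exactly the division of labor the text describes.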

Figure 9.3. X86 protection rings.

URL: https://www.sciencedirect.com/science/article/pii/B9781597497251000093

Wavelength-Division-Multiplexed Passive Optical Networks (WDM PONs)

Y.C. Chung, Y. Takushima, in Optical Fiber Telecommunications (Sixth Edition), 2013

23.6.2.4 Network protection by using ring topology for WDM PON

It is well known that the ring topology provides excellent restoration capability for the optical network. As described in Section 23.3, there have been some efforts to apply the self-healing ring architecture to WDM PON as well [164–168,240–246]. In these networks, the CO is connected with ONUs through single- or dual-fiber rings. At the ONU, an OADM is used to add/drop its corresponding upstream/downstream signals to and from the ring. When the WDM PON is implemented in a dual-fiber ring topology, the disrupted signals are restored through the protection ring in case of a fiber failure [240–242]. However, this technique can be too costly for use in access networks. Thus, there have been several attempts to implement the WDM PON in a single-fiber ring topology [167,243–246]. In these networks, protection is achieved by rerouting the disrupted signals in the opposite direction using optical switches installed at the ONUs. However, when the WDM PON is implemented in a ring topology, the maximum number of ONUs that can be supported is usually very limited due to the large insertion loss of the ONU. To overcome this limitation, it is necessary to install an optical amplifier at every ONU to compensate for its large insertion loss. However, to make this approach cost-effective enough for use in the WDM PON, it is essential to develop inexpensive photonic integrated circuits consisting of an OADM and optical amplifiers.

URL: https://www.sciencedirect.com/science/article/pii/B9780123969606000237

Case Studies

Marise Bafleur, ... Nicolas Nolhier, in ESD Protection Methodologies, 2017

5.1 Case 1: Interaction between two types of protection

This case study centers on the failure analysis of a test vehicle intended for the development and optimization of an ESD protection strategy for an analog 1.2 μm CMOS technology [TRE 04a]. The test vehicle is an inverter circuit with an input IN, an output OUT, and a power supply (VDD and VSS). As shown in Figure 5.1, the inverter also includes input ESD protection, a double π-type protection composed of two NMOS transistors (GCNMOS) M1 and M3, a resistor, a diode D2, and a PMOS transistor M4. The central protection between the VDD and VSS power supplies is an NMOS transistor M2 with gate capacitive coupling (GCNMOS). These protection components were independently optimized and follow design rules that give them an HBM robustness equal to or greater than 6 kV.

Figure 5.1. Electrical diagram of an ESD test vehicle

Before its implementation on silicon, the protection circuit was first simulated and provided efficient protection for the different stress configurations. The target protection level is 4 kV HBM. The test circuit was then characterized with an HBM tester for different pin combinations, and the results are presented in Table 5.1. Contrary to expectations, although each protection structure has an HBM robustness greater than the targeted 4 kV, one combination, between IN and VDD, does not reach the specification: its robustness is only 3 kV.

Table 5.1. Results of HBM tests between each pin. A pin from the first column is positively stressed with respect to a pin from the first row

In order to understand the origin of this premature failure, a non-destructive failure analysis was carried out. The electrical signature of the failure is a leakage current between VDD and VSS. To locate the origin of the leakage, the EMMI technique was first used, but without success. An OBIRCH analysis, however, allowed the defect to be localized, as shown in Figure 5.2. It is located in the latch-up ring of the protection transistor M1. This defect suggests that the parasitic NPN transistor, formed by the drain of transistor M1 (emitter), the substrate (base), and the N+ latch-up protection ring (collector), was activated during the discharge.

Figure 5.2. Location of the failure in the circuit (circle) by OBIRCH

Protection rings against latch-up protect the integrated circuit from the triggering of a parasitic thyristor that can lead to the destruction of the circuit. They consist of diffusions surrounding the active components that are connected either to the power supply or to the ground. Under standard operating conditions, they are therefore biased and collect any carriers injected into the substrate by the active components. In the case of an ESD test, however, depending on the chosen pin configuration, these rings can get in the way of the discharge. That is the case here, where two parasitic transistors, Q1 and Q3, are formed, as shown in the electrical diagram of Figure 5.3.

Figure 5.3. Electrical diagram of the ESD test vehicle with its parasite bipolar components Q1 and Q3 associated with the latch-up rings

Figure 5.4 shows a magnified view of the defective zone and the parasitic component responsible. During a positive discharge between IN and VDD, this parasitic NPN transistor is activated: diode D1 (the drain-substrate diode of GGNMOS M1, marked on the diagram of Figure 5.1) conducts in forward mode and acts as the emitter of the parasitic NPN transistor. As the ring is not designed to absorb high currents, this leads to thermal runaway and localized melting of the silicon.

Figure 5.4. Magnified view of the location of the failure and a cross-sectional diagram A-A’ of the parasitic bipolar NPN transistor associated with the latch-up ring. For a color version of this figure, see www.iste.co.uk/bafleur/esd.zip

In order to validate our hypothesis, we carried out SPICE electrical simulations, adding the parasitic bipolar transistors to the model, and compared the results with EMMI observations.

For a TLP current of 370 mA, as shown in Figure 5.5, the current flows primarily through the central protection M2. The active path therefore goes through D1, M2, and the drain/substrate diode D3 at the output stage. The emission of the two forward-biased diodes is not visible on the figure because it is extremely weak in relation to transistor M2, which operates in the bipolar mode with its base–collector junction being reverse-biased.

Figure 5.5. EMMI simulation and observation for a TLP current of 370 mA. For a color version of this figure, see www.iste.co.uk/bafleur/esd.zip

A TLP current of 480 mA must be reached before the M1 protection can conduct, as shown in Figure 5.6. The current no longer flows through the central protection M2.

Figure 5.6. EMMI simulation and observation for a TLP current of 480 mA. For a color version of this figure, see www.iste.co.uk/bafleur/esd.zip

For a current greater than 2.3 A, the current still flows through the input protection M1, but there is an emission at the latch-up ring, which the simulation also reproduces, as shown in Figure 5.7. There is therefore another discharge path through Q1, the parasitic transistor formed by the latch-up ring with diode D1. As the latter is not optimized to conduct high currents, the circuit is destroyed at a current level far lower than what protection M1 alone can dissipate.

Figure 5.7. EMMI simulation and observation for a TLP current of 2.3 A. For a color version of this figure, see www.iste.co.uk/bafleur/esd.zip

In order to avoid this premature failure, a corrective action can consist of modifying the design rules of this latch-up ring by inserting a ballast resistor (increasing the size of the ring) [TRE 04b]. This minor modification of the layout improves the robustness of the circuit while conforming to the initial predictions.

This simple example highlights the importance of a global protection approach in order to avoid harmful interactions between different types of protection. It also shows the importance of modeling the protection system in its entirety, without forgetting parasitic components. One of the challenges for circuit design tools is the automated extraction of potential parasitic components and their modeling.

URL: https://www.sciencedirect.com/science/article/pii/B9781785481222500054

Michał Pióro, Deepankar Medhi, in Routing, Flow, and Capacity Design in Communication and Computer Networks, 2004

3.6 SONET/SDH RINGS: RING BANDWIDTH DESIGN

An important alternative to the mesh SDH networks discussed in the previous section is SDH ring networks (see [Muk97], [RS02], [Wu92]), where the restoration mechanisms are intrinsic to the network functionality. This is contrary to the mesh case, where restoration requires inter-DCS signaling. Self-healing rings have been heavily deployed due to their ability to restore service within 50 ms of a single-link failure. The nodes of a ring network are called ADMs and are capable of inserting or extracting any VC-12 container out of the set of all containers circulating around the ring. Figure 3.7 depicts a bi-directional line-switched ring (BLSR) with four optical fibers (because of the four fibers, these are also referred to as BLSR/4). Now assume that this ring is based on the STM-1 transmission system, i.e., the system with 63 VC-12 containers. The ring is divided into two pairs of fibers: one basic pair and one protection pair. Note that BLSR protection can be classified as link protection (refer to Section 9.1.2). The VC-12 containers destined to a particular node are extracted from the incoming basic fiber (for example, the outermost fiber in Figure 3.7), while the originating containers are inserted into the outgoing (second outermost) basic fiber. The design question is: given the inherent routing nature of a SONET/SDH ring and the demand volume, how do we determine the minimal number and type of (parallel) rings needed?

FIGURE 3.7. Bi-Directional Line-Switched Ring (BLSR)

For example, if demand d = 1 between ADMs v = 1 and v = 2 has a volume of h1 = 3 VC-12s, then three selected numbers k, l, and n from the set {1, 2, …, 63} are reserved between nodes v = 1 and v = 2, and the VCs with these numbers are used to realize the 90 circuits required between the two considered nodes to form a trunk group d. If the path between the two nodes of a demand consists of more segments of the ring (a segment is the part of the ring between two consecutive nodes), then the containers with the same numbers are reserved along all the segments of the path. When one of the ring segments fails, the protection switches automatically re-route the STM-1 module over the unaffected part of the inner protection pair of fibers.

The allocation of the demand flows in a BLSR for the given set of demands is not a trivial task, as each demand volume can be split into two parts (in general, not equal), and each part is realized on one of the two complementary parts of the ring. Suppose that nodes are numbered from 1 to V in the clockwise manner starting from some distinguished node v = 1. Then it is natural to number the segments from 1 to V (the number of segments E is equal to the number of nodes V as we deal with a ring!) such that segment e = 1 connects nodes v = 1 and v = 2, segment e = 2 connects nodes v = 2 and v = 3, and so on, with segment e = V connecting nodes v = V and v = 1 (Figure 3.8). If we assume undirected demands, then demand volume hvw between nodes v and w, with v < w can be realized on two paths: clockwise from v to w and clockwise from w to v. Denote the flow on the first path by uvw and on the second path by zvw. The corresponding design problem is as follows:

FIGURE 3.8. Node and Segment Labeling of BLSR

(3.6.1)

\[
\begin{aligned}
\underset{u,z,r}{\text{minimize}} \quad & r \\
\text{subject to} \quad & u_{vw} + z_{vw} = h_{vw}, && v, w = 1, 2, \ldots, V,\ v < w \\
& \sum_{v < w} \bigl[ \delta_{evw}\, u_{vw} + (1 - \delta_{evw})\, z_{vw} \bigr] \le M r, && e = 1, 2, \ldots, E \\
& u_{vw},\ z_{vw},\ r \ \text{non-negative integers.}
\end{aligned}
\]

In the formulation, each segment-demand incidence coefficient δevw reflects the position of the end nodes of demand d = 〈v,w〉 in the ring, and specifies if segment e belongs to the clockwise path from v to w. Clearly,

\[
\delta_{evw} =
\begin{cases}
1 & \text{if } v \le e < w, \\
0 & \text{otherwise.}
\end{cases}
\]

In the example considered, M = 63, although for other types of SDH/SONET rings this module can be different (e.g., M = 252 in an STM-4-based BLSR). The self-healing feature of the BLSR means that when one segment of the ring is cut, the broken basic segment is restored along the (long) path on the surviving part of the inner protection pair of fibers (Figure 3.7). This is done automatically by means of a functionality called protection switching.
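Formulation (3.6.1) can be checked numerically on a small instance. The following Python sketch (tiny ring, illustrative demand volumes, brute-force enumeration rather than a real integer-programming solver) splits each demand between its clockwise and counterclockwise paths and finds the minimum number of M-capacity modules r:

```python
# Sketch of the BLSR bandwidth design (3.6.1) for a tiny ring:
# enumerate clockwise/counterclockwise splits of each demand and
# pick the split minimizing r, the number of M-capacity modules.

from itertools import product

V = 4                      # nodes 1..V; segment e joins nodes e and e+1
M = 63                     # VC-12 containers per STM-1 module
demands = {(1, 3): 70, (2, 4): 40}    # h_vw, illustrative values

def delta(e, v, w):
    """1 if segment e lies on the clockwise path from v to w (v < w)."""
    return 1 if v <= e < w else 0

best = None
# u_vw units go clockwise, h_vw - u_vw counterclockwise (the z_vw flow)
for split in product(*[range(h + 1) for h in demands.values()]):
    u = dict(zip(demands, split))
    loads = []
    for e in range(1, V + 1):
        load = sum(delta(e, v, w) * u[(v, w)]
                   + (1 - delta(e, v, w)) * (demands[(v, w)] - u[(v, w)])
                   for (v, w) in demands)
        loads.append(load)
    r = -(-max(loads) // M)            # ceiling division: modules needed
    if best is None or r < best[0]:
        best = (r, u)

print(best)
```

For these demand volumes a balanced split keeps every segment load at or below 63 VC-12s, so a single STM-1 ring suffices even though the total demand exceeds one module's capacity.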

Similar to BLSR, there is another SONET/SDH ring protection mechanism called unidirectional path-switched rings (UPSR) where the protection mechanism can be classified as path protection (refer to Section 9.1.2). The bandwidth design formulation problem is left as an exercise (Exercise 3.6).

URL: https://www.sciencedirect.com/science/article/pii/B9780125571890500056

SAN Security

John McGowan, ... John McDonald, in Computer and Information Security Handbook (Third Edition), 2013

Review Topology and Architecture Options

Fabric security augments overall application security. It is not sufficient on its own; host and disk security are also required. You should consider each portion of the customer's SAN when determining the correct security configuration. Review the current security infrastructure and discuss present and future needs. Listed here are the most common discussion segments:

SAN management access—Secure access to management services.

Fabric access—Secure device access to fabric service.

Target access—Secure access to targets and logical unit numbers (LUNs).

SAN protocol—Secure switch-to-switch communication protocols.

IP storage access—Secure Fibre Channel over TCP/IP (FCIP) and Internet Small Computer System Interface (iSCSI) services.

Data integrity and secrecy—Encryption of data both in transit and at rest.

Additional subjects to include in a networked storage strategy involve the following:

Securing storage networking ports and devices

Securing transmission and ISL interfaces

Securing management tools and interfaces [Simple Network Management Protocol (SNMP), Telnet, IP interfaces]

Securing storage resources and volumes

Disabling SNMP management interfaces not used or needed

Restricting use and access to Telnet and File Transfer Protocol (FTP) for components

There are several major areas of focus for securing storage networks. These include securing the fabric and its access, securing the data and where it is stored, securing the components, securing the transports, and securing the management tools and interfaces. This part of the chapter describes the following components:

Protection rings (see sidebar, “Security and Protection”)

Restricting access to storage

Access control lists (ACLs) and policies

Port blocks and port prohibits

Zoning and isolating resources

File system permissions for network-attached storage (NAS) file access using Network File System (NFS) and Common Internet File System (CIFS)

Operating system access control and management interfaces

Control and monitor root and other supervisory access

Physical and logical security and protection

Virus protection and detection on management servers and PCs

Security and Protection

Establish an overall security perimeter that is both physical and logical to restrict access to components and applications. Physical security includes placing equipment in locked cabinets and facilities that have access monitoring capabilities. Logical security involves securing those applications, servers, and other interfaces, including management consoles and maintenance ports, from unauthorized access. Also, consider who has access to backup and removable media and where the company stores them as part of an overall security perimeter and defense.

Secure your networks, including local area networks (LANs), metropolitan area networks (MANs), and wide area networks (WANs), with various subnets and segments including Internet, intranet, and extranets with firewalls and (de-militarized zone) DMZ access where applicable.

Secure your servers so that if someone attempts to access them using other applications, using public or private networks, or simply by walking up to a workstation or console, the servers are protected. Server protection is important; it is one of the most common points that an attacker will target. Make sure that the server has adequate protection on usernames, passwords, files, and application access permissions.

Control and monitor who has access to root and other supervisory modes of access, as well as who can install and configure software.

Protect your storage network interfaces, including Fibre Channel, Ethernet for iSCSI, and NAS, as well as management ports, interfaces, and tools. Tools including zoning, ACLs, binding, segmentation, authorization, and authentication should be deployed within the storage network.

Protect your storage subsystems using zoning and LUN/volume mapping and masking. Your last line of defense should be the storage system itself, so make sure it is adequately protected.

Protect wide area interfaces when using Internet Fibre Channel Protocol (iFCP), Fibre Channel over Internet Protocol (FCIP), iSCSI, Synchronous Optical Networking/Synchronous Digital Hierarchy (SONET/SDH), Asynchronous Transfer Mode (ATM), and other means for moving data between locations. Set up VPNs to help guard data while it is in transit, along with compression and encryption. Maintain access control and audit trails of the management interface tools, and make sure that you change their passwords.

URL: https://www.sciencedirect.com/science/article/pii/B9780128038437000612

An Introduction to Virtualization

In Virtualization for Security, 2009

How Does Virtualization Work?

While there are various ways to virtualize computing resources using a true VMM, they all have the same goal: to allow operating systems to run independently and in isolation, exactly as if each were running directly on top of the hardware platform. But how exactly is this accomplished? While hardware virtualization still exists that fully virtualizes and abstracts hardware, similar to how the System/370 did, such hardware-based virtualization technologies tend to be less flexible and costly. As a result, a slew of software hypervisors and VMMs have cropped up to perform virtualization through software-based mechanisms. They ensure a level of isolation where the low-level, nucleus core of the CPU architecture is brought up closer to the software levels of the architecture, allowing each virtual machine to have its own dedicated environment. In fact, the relationship between the CPU architecture and the virtualized operating systems is the key to how virtualization actually works.

OS Relationships with the CPU Architecture

Ideal hardware architectures are those in which the operating system and CPU are designed and built for each other and are tightly coupled. Proper use of complex system calls requires careful coordination between the operating system and CPU. This symbiotic relationship between the OS and CPU architecture provides many advantages in security and stability. One such example was the MULTICS time-sharing system, which was designed for a special CPU architecture, which in turn was designed for it.

What made MULTICS so special in its day was its approach to segregating software operations to eliminate the risk or chance of a compromise or instability in a failed component from impacting other components. It placed formal mechanisms, called protection rings, in place to segregate the trusted operating system from the untrusted user programs. MULTICS included eight of these protection rings, a quite elaborate design, allowing different levels of isolation and abstraction from the core nucleus of the unrestricted interaction with the hardware. The hardware platform, designed in tandem by GE and MIT, was engineered specifically for the MULTICS operating system and incorporated hardware “hooks” enhancing the segregation even further. Unfortunately, this design approach proved to be too costly and proprietary for mainstream acceptance.

The most common CPU architecture used in modern computers is the IA-32, or x86-compatible, architecture. Beginning with the 80286 chipset, the x86 family provided two main methods of addressing memory: real mode and protected mode. In the 80386 chipset and later, a third mode was introduced called virtual 8086 mode, or VM86, that allowed for the execution of programs written for real mode but circumvented the real-mode rules without having to raise them into protected mode. Real mode, which is limited to a single megabyte of memory, quickly became obsolete; and virtual mode was locked in at 16-bit operation, becoming obsolete when 32-bit operating systems became widely available for the x86 architecture. Protected mode, the saving grace for x86, provided numerous new features to support multitasking. These included segmenting processes, so they could no longer write outside their address space, along with hardware support for virtual memory and task switching.

In the x86 family, protected mode uses four privilege levels, or rings, numbered 0 to 3. System memory is divided into segments, and each segment is assigned and dedicated to a particular ring. The processor uses the privilege level to determine what can and cannot be done with code or data within a segment. The term “rings” comes from the MULTICS system, where privilege levels were visualized as a set of concentric rings. Ring-0 is considered to be the innermost ring, with total control of the processor. Ring-3, the outermost ring, is provided only with restricted access, as illustrated in Figure 1.5.


Figure 1.5. Privilege Rings of the x86 Architecture


The same concept of protection rings exists in modern OS architecture. Windows, Linux, and most Unix variants all use rings, although they have reduced the four-ring structure to a two-layer approach that uses only Rings 0 and 3. Ring-0 is commonly called Supervisor Mode, while Ring-3 is known as User Mode. Security mechanisms in the hardware enforce restrictions on Ring-3 by limiting code access to segments, paging, and input/output. If a user program running in Ring-3 tries to address memory outside of its segments, a hardware interrupt stops code execution. Some assembly language instructions are not even available for execution outside of Ring-0 due to their low-level nature.
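The two-ring split described above can be sketched as a toy model. The classes, names, and messages below are invented for illustration only; real enforcement happens in the CPU's segmentation and paging hardware, not in software like this:

```python
# Toy model of ring-based memory protection (illustration only; real
# enforcement is performed by the CPU, not by application code).

class ProtectionFault(Exception):
    """Stands in for the hardware interrupt raised on a violation."""

class Segment:
    def __init__(self, name, ring):
        self.name = name
        self.ring = ring  # ring the segment is assigned to (0 = most privileged)

def access(segment, current_ring):
    """Allow access only if the running code is at least as privileged
    (numerically lower or equal ring) as the segment requires."""
    if current_ring > segment.ring:
        raise ProtectionFault(
            f"ring-{current_ring} code cannot touch {segment.name} "
            f"(ring {segment.ring})")
    return f"{segment.name} accessed from ring {current_ring}"

kernel_data = Segment("kernel_data", ring=0)
user_heap = Segment("user_heap", ring=3)

print(access(user_heap, 3))    # user code touching its own segment: allowed
print(access(kernel_data, 0))  # supervisor-mode access: allowed
try:
    access(kernel_data, 3)     # user code touching a ring-0 segment
except ProtectionFault as e:
    print("fault:", e)
```

In the real architecture the "fault" is a hardware exception (e.g., a general protection fault) that transfers control to a Ring-0 handler, rather than a software exception.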

The Virtual Machine Monitor and Ring-0 Presentation

Supervisor Mode is the execution mode on an x86 processor that enables the execution of all instructions, including privileged instructions such as I/O and memory management operations. It is in Supervisor Mode (Ring-0) that the operating system normally runs. Since Ring-3 depends on Ring-0, any compromise or instability in Ring-0 directly impacts User Mode code running in Ring-3. In order to isolate Ring-0 for each virtualized guest, it becomes necessary to move the guests' Ring-0 closer to them. By doing so, a Ring-0 failure for one virtualized guest does not impact the Ring-0, or consequently the Ring-3, of any other guest. The perceived Ring-0 for guests can reside in Ring-1, -2, or -3 on x86 architectures. Of course, the further the perceived Ring-0 is from the true Ring-0, the more distant it is from executing direct hardware operations, leading to reduced performance and independence.

Virtualization moves Ring-0 up the privilege rings model by placing the Virtual Machine Monitor, or VMM, in one of the rings, which in turn presents the Ring-0 implementation to the hosted virtual machines. It is upon this presented Ring-0 that guest operating systems run, while the VMM handles the actual interaction with the underlying hardware platform for CPU, memory, and I/O resource access. There are two types of VMMs that address the presentation of Ring-0 as follows:

Type 1 VMM: Software that runs directly on top of a given hardware platform, in the true Ring-0. Guest operating systems then run at a higher level above the hardware, allowing for true isolation of each virtual machine.

Type 2 VMM: Software that runs within an operating system, usually in Ring-3. Since there are no additional rings above Ring-3 in the x86 architecture, the presented Ring-0 on which the virtual machines run is as distant from the actual hardware platform as it can be. Although this offers some advantages, it usually comes with performance-impeding factors, as calls to the hardware must traverse many layers before the operations are returned to the guest operating system.

The VMM Role Explored

To create virtual partitions in a server, a thin software layer called the Virtual Machine Monitor (VMM) runs directly on the physical hardware platform. One or more guest operating systems and application stacks can then be run on top of the VMM. Figure 1.6 expands our original illustration of a virtualized environment presented in Figure 1.1.


Figure 1.6. The OS and Application Stack Managed by the VMM Software Layer

The VMM is the center of server virtualization. It manages hardware resources and arbitrates the requests of the multiple guest operating systems and application stacks. It presents a virtual set of CPU, memory, I/O, and disk resources to each guest, either based on the actual physical hardware or on a standard and consistent selection of custom hardware. This section further discusses the role of the VMM and the design considerations that go into building one.

The Popek and Goldberg Requirements

Often referred to as the original reference source for VMM criteria, the Popek and Goldberg Virtualization Requirements define the conditions for a computer architecture to support virtualization. Written in 1974 for the third-generation computer systems of those days, they generalized the conditions that the software that provides the abstraction of a virtual machine, or VMM, must satisfy. These conditions, or properties, are

Equivalence: A program running under the VMM should exhibit predictable behavior essentially identical to that demonstrated when running directly on the underlying hardware platform. This is sometimes referred to as fidelity.

Resource Control: The VMM must be in complete control of the actual hardware resources virtualized for the guest operating systems at all times. This is sometimes referred to as safety.

Efficiency: An overwhelming majority of machine instructions must be executed without VMM intervention or, in other words, by the hardware itself. This is sometimes referred to as performance.

According to Popek and Goldberg, the problem that VMM developers must address is creating a VMM that satisfies the preceding conditions when operating within the characteristics of the Instruction Set Architecture (ISA) of the targeted hardware platform. The ISA can be classified into three groups of instructions: privileged, control-sensitive, and behavior-sensitive. Privileged instructions are those that trap if the processor is in User Mode and do not trap if it is in Supervisor Mode. Control-sensitive instructions are those that attempt to change the configuration of actual resources in the hardware platform. Behavior-sensitive instructions are those whose behavior or result depends on the configuration of resources.
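The interplay between these instruction classes and the VMM can be sketched as a toy trap-and-emulate loop. The instruction names and the dispatch logic below are invented for illustration and do not model a real ISA:

```python
# Toy trap-and-emulate dispatch (illustrative; the mnemonics and the
# privileged set are invented, not taken from a real instruction set).

PRIVILEGED = {"lgdt", "hlt", "out"}  # trap in user mode, run in supervisor mode

def execute(instr, mode, vmm_log):
    """Run one guest instruction. In a classically virtualizable ISA,
    privileged instructions executed by a deprivileged guest trap to the
    VMM, which emulates them; everything else runs directly on hardware."""
    if instr in PRIVILEGED and mode == "user":
        vmm_log.append(f"trap: emulating {instr} on behalf of guest")
        return "emulated"
    return "direct"

log = []
stream = ["add", "mov", "hlt", "mov", "out"]
results = [execute(i, "user", log) for i in stream]
print(results)  # most instructions run directly
print(log)      # only the privileged ones reached the VMM
```

The point of the sketch is Popek and Goldberg's efficiency property: the common, innocuous instructions run directly on the hardware, and only the privileged ones pay the cost of a trap to the VMM.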

VMMs must work with each group of instructions while maintaining the conditions of equivalence, resource control, and efficiency. Virtually all modern-day VMMs satisfy the first two: equivalence and resource control. They do so by effectively managing the guest operating system and hardware platform underneath through emulation, isolation, allocation, and encapsulation, as explained in Table 1.3.

Table 1.3. VMM Functions and Responsibilities

Emulation: Emulation is important for all guest operating systems. The VMM must present a complete hardware environment, or virtual machine, for each software stack, whether it is an operating system or an application. Ideally, the OS and applications are completely unaware they are sharing hardware resources with other applications. Emulation is key to satisfying the equivalence property.

Isolation: Isolation, though not required, is important for a secure and reliable environment. Through hardware abstraction, each virtual machine should be sufficiently separated and independent from the operations and activities of other virtual machines. Faults that occur in a single virtual machine should not impact the others, thus providing high levels of security and availability.

Allocation: The VMM must methodically allocate platform resources to the virtual machines it manages. Resources for processing, memory, network I/O, and storage must be balanced to optimize performance and align service levels with business requirements. Through allocation, the VMM satisfies the resource control property and, to some extent, the efficiency property as well.

Encapsulation: Encapsulation, though not mandated by the Popek and Goldberg requirements, enables each software stack (OS and application) to be highly portable, able to be copied or moved from one platform running the VMM to another. In some cases, this level of portability even allows live, running virtual machines to be migrated. Encapsulation must include state information in order to maintain the integrity of the transferred virtual machine.

The Challenge: VMMs for the x86 Architecture

Referring back to the IA-32 (x86) architecture, all software runs in one of the four privilege rings. The OS traditionally runs in Ring-0, which affords privileged access to the widest range of processor and platform resources. Individual applications usually run in Ring-3, which restricts certain functions (such as memory mapping) that might impact other applications. In this way, the OS retains control to ensure smooth operation.

Since the VMM must have privileged control of platform resources, the usual solution is to run the VMM in Ring-0, and guest operating systems in Ring-1 or Ring-3. However, modern operating systems have been specifically designed to run in Ring-0. This creates certain challenges. In particular, there are 17 “privileged” instructions that control critical platform resources. These instructions are used occasionally in most existing OS versions. When an OS is not running in Ring-0, any one of these instructions can create a conflict, causing either a system fault or an incorrect response. The challenge faced by VMMs for the IA-32 (x86) architecture is maintaining the Popek and Goldberg requirements while working with the IA-32 ISA.


Cloud Resource Virtualization

Dan C. Marinescu, in Cloud Computing (Second Edition), 2018

10.4 Hardware Support for Virtualization

In the early 2000s it became obvious that hardware support for virtualization was necessary, and Intel and AMD started working on the first generation of virtualization extensions of the x86 architecture. In 2005 Intel released two Pentium 4 models supporting VT-x, and in 2006 AMD announced Pacifica and then several Athlon 64 models.

The Virtual Machine Extension (VMX) was introduced by Intel in 2006 and AMD responded with the Secure Virtual Machine (SVM) instruction set extension. The Virtual Machine Control Structure (VMCS) of VMX tracks the host state and the guest VMs as control is transferred between them. Three types of data are stored in VMCS:

Guest state. Holds virtualized CPU registers (e.g., control registers or segment registers) automatically loaded by the CPU when switching from kernel mode to guest mode on VMEntry.

Host state. Data used by the CPU to restore register values when switching back from guest mode to kernel mode on VMExit.

Control data. Data used by the hypervisor to inject events such as exceptions or interrupts into VMs and to specify which events should cause a VMExit; it is also used by the CPU to specify the VMExit reason.

VMCS is shadowed in hardware to overcome the performance penalties of nested hypervisors discussed in Section 10.8. This allows a guest hypervisor to access the VMCS directly, without disrupting the root hypervisor in the case of nested virtualization. VMCS shadow access is almost as fast as in a non-nested hypervisor environment. VMX includes several instructions [250]:

1. VMXON – enter VMX operation;
2. VMXOFF – leave VMX operation;
3. VMREAD – read from the VMCS;
4. VMWRITE – write to the VMCS;
5. VMCLEAR – clear the VMCS;
6. VMPTRLD – load the VMCS pointer;
7. VMPTRST – store the VMCS pointer;
8. VMLAUNCH/VMRESUME – launch or resume a VM; and
9. VMCALL – call to the hypervisor.
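On a Linux host, whether the processor advertises these extensions can be checked from user space by reading the CPU feature flags. The sketch below is Linux-specific and only shows what the CPU reports; actually using VMX (e.g., executing VMXON) still requires kernel-mode code:

```python
# Linux-only sketch: look for the CPU feature flags that advertise
# Intel VT-x ("vmx") or AMD-V/SVM ("svm") in /proc/cpuinfo.

def hw_virt_flag(cpuinfo_path="/proc/cpuinfo"):
    """Return "vmx", "svm", or None, based on the first 'flags' line."""
    try:
        with open(cpuinfo_path) as f:
            text = f.read()
    except OSError:
        return None  # not Linux, or /proc unavailable
    for line in text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if "vmx" in flags:
                return "vmx"
            if "svm" in flags:
                return "svm"
            return None
    return None  # no x86-style flags line (e.g., ARM systems)

print("hardware virtualization flag:", hw_virt_flag())
```

Note that a flag may be absent even on capable hardware if virtualization is disabled in firmware, or when running inside a VM that does not expose nested virtualization.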

A 2006 paper [356] analyzes the challenges of virtualizing Intel architectures and then presents the VT-x and VT-i virtualization architectures for the x86 and Itanium architectures, respectively. Software solutions at that time addressed some of the challenges, but a hardware solution could improve not only performance but also security and, at the same time, simplify the software systems. The problems faced by virtualization of the x86 architecture are:

Ring deprivileging. This means that a hypervisor forces a guest VM, including an OS and an application, to run at a privilege level greater than 0. Recall that the x86 architecture provides four protection rings, 0–3. Two solutions are then possible:

1. The (0/1/3) mode, in which the hypervisor, the guest OS, and the application run at privilege levels 0, 1, and 3, respectively; this mode is not feasible for x86 processors in 64-bit mode, as we shall see shortly.

2. The (0/3/3) mode, in which the hypervisor, a guest OS, and applications run at privilege levels 0, 3, and 3, respectively.

Ring aliasing. Such problems arise when a guest OS is forced to run at a privilege level other than the one it was originally designed for. For example, when the CS register is PUSHed, the current privilege level held in the CS register is also stored on the stack [356].

Address space compression. A hypervisor uses parts of the guest address space to store several system data structures such as the interrupt-descriptor table and the global-descriptor table. Such data structures must be protected, but the guest software must have access to them.

Non-faulting access to privileged state. The instructions LGDT, LIDT, LLDT, and LTR, which load the registers GDTR, IDTR, LDTR, and TR, can only be executed by software running at privilege level 0, because these registers point to data structures that control the CPU operation. The corresponding store instructions, SGDT, SIDT, SLDT, and STR, however, do not fault when executed at a lower privilege level; a guest OS can therefore read this privileged state without the hypervisor having an opportunity to intercept the access.

Guest system calls. Two instructions, SYSENTER and SYSEXIT, support low-latency system calls. The first causes a transition to privilege level 0; the second causes a transition from privilege level 0 and fails if executed at a level higher than 0. The hypervisor must then emulate every guest execution of either of these instructions, which has a negative impact on performance.

Interrupt virtualization. In response to a physical interrupt, the hypervisor generates a “virtual interrupt” and delivers it later to the target guest OS. But every OS has the ability to mask interrupts, thus the virtual interrupt can only be delivered to the guest OS when the interrupt is not masked. Keeping track of all guest OS attempts to mask interrupts greatly complicates the hypervisor and increases the overhead.

Access to hidden state. Elements of the system state, e.g., descriptor caches for segment registers, are hidden; there is no mechanism for saving and restoring the hidden components when there is a context switch from one VM to another.

Ring compression. Paging and segmentation are the two mechanisms that protect hypervisor code from being overwritten by the guest OS and applications. Systems running in 64-bit mode can only use paging, but paging does not distinguish between privilege levels 0, 1, and 2; thus the guest OS must run at privilege level 3, in the so-called (0/3/3) mode. Privilege levels 1 and 2 cannot be used, hence the name ring compression.

Frequent access to privileged resources increases hypervisor overhead. The task-priority register (TPR) is frequently used by a guest OS; the hypervisor must protect the access to this register and trap all attempts to access it. That can cause a significant performance degradation.
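The interrupt virtualization problem described above can be sketched as a toy model in which the hypervisor queues virtual interrupts until the guest unmasks them. The classes and names below are invented for illustration; real VT-x handles this with VMCS control bits such as interrupt-window exiting, discussed later in the chapter:

```python
# Toy model of interrupt virtualization: the hypervisor queues "virtual
# interrupts" for a guest and delivers them only once the guest has
# unmasked interrupts (illustrative; not real VT-x behavior).

class Guest:
    def __init__(self):
        self.masked = True      # guests boot with interrupts masked here
        self.delivered = []     # interrupt vectors the guest has received

class Hypervisor:
    def __init__(self):
        self.pending = {}       # guest -> list of queued virtual interrupts

    def inject(self, guest, vector):
        """Queue a virtual interrupt and attempt immediate delivery."""
        self.pending.setdefault(guest, []).append(vector)
        self.try_deliver(guest)

    def try_deliver(self, guest):
        """Deliver pending interrupts only if the guest has them unmasked."""
        if not guest.masked:
            guest.delivered.extend(self.pending.pop(guest, []))

hv, g = Hypervisor(), Guest()
hv.inject(g, 0x21)       # guest has interrupts masked: stays pending
print(g.delivered)       # nothing delivered yet
g.masked = False         # guest unmasks (the moment VT-x can signal via
hv.try_deliver(g)        # an interrupt-window exit)
print(g.delivered)       # the queued vector arrives
```

The overhead the text mentions comes from the hypervisor having to notice every mask and unmask operation in every guest; hardware support moves that bookkeeping into the processor.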

Similar problems exist for the Itanium architecture discussed in Section 10.10.

A major architectural enhancement provided by the VT-x is the support for two modes of operation and a new data structure, VMCS, including host-state and guest-state areas, see Figure 10.3:


Figure 10.3. (A) The two modes of operation of VT-x, and the two operations to transit from one to another; (B) VMCS includes host-state and guest-state areas which control the VM entry and VM exit transitions.

VMX root: intended for hypervisor operations, and very close to the x86 without VT-x.

VMX non-root: intended to support a VM.

When executing a VMEntry operation, the processor state is loaded from the guest-state area of the VM scheduled to run; then control is transferred from the hypervisor to the VM. A VMExit saves the processor state in the guest-state area of the running VM, loads the processor state from the host-state area, and finally transfers control to the hypervisor. All VMExit operations use a common entry point to the hypervisor.

Each VMExit operation saves in the VMCS the reason for the exit and possibly some qualifying information. Some of this information is stored as bitmaps. For example, the exception bitmap specifies which of 32 possible exceptions caused the exit. The I/O bitmap contains one entry for each port in the 16-bit I/O space.
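As a small illustration of the bitmap layout described above, the helper below decodes a 32-bit exception bitmap into the exception vectors whose bits are set. The function is a sketch for illustration, not part of any VT-x API:

```python
# Decode a 32-bit exception bitmap of the kind the text describes:
# a set bit i means exception vector i is of interest (e.g., should
# cause a VM exit).

def set_vectors(bitmap):
    """Return the exception vectors whose bits are set in a 32-bit bitmap."""
    return [i for i in range(32) if bitmap & (1 << i)]

# On x86, vector 6 is invalid opcode (#UD) and vector 14 is page fault (#PF).
bitmap = (1 << 14) | (1 << 6)
print(set_vectors(bitmap))   # [6, 14]
print(set_vectors(0))        # []
```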

The VMCS area is referenced with a physical address, and its layout is not fixed by the architecture but can be optimized by a particular implementation. The VMCS includes control bits that facilitate the implementation of virtual interrupts. For example, external-interrupt exiting, when set, causes a VM exit on any external interrupt; moreover, the guest is not allowed to mask these interrupts. When interrupt-window exiting is set, a VM exit operation is triggered as soon as the guest is ready to receive interrupts.

Processors based on two new virtualization architectures, VT-d and VT-c, have been developed. The first supports I/O Memory Management Unit (I/O MMU) virtualization and the second network virtualization.

Also known as PCI pass-through, I/O MMU virtualization gives VMs direct access to peripheral devices. VT-d supports:

DMA address remapping: address translation for device DMA transfers.

Interrupt remapping: isolation of device interrupts and their routing to VMs.

I/O device assignment: devices can be assigned by an administrator to a VM in any configuration.

Reliability features: reporting and recording of DMA and interrupt errors that may otherwise corrupt memory and impact VM isolation.


A comprehensive survey on internet outages

Giuseppe Aceto, ... Antonio Pescapé, in Journal of Network and Computer Applications, 2018

In this paragraph, we present solutions that mainly operate at the data-link layer. We focus on the adaptation of SONET/SDH-like resilience techniques to IP networks. These solutions can be set up in a reactive or proactive manner.

In (Suwala and Swallow, 2004), Suwala and Swallow argue that IP traffic can be protected using techniques below layer 3, by considering SONET/SDH-like mechanisms. The paper covers linear SONET Automatic Protection Switching (APS) for routers, Resilient Packet Ring protection (RPRP), IP interface bundling (IPIB), and MPLS fast reroute (MPLS-FRR). The key motivation is that layer 3 solutions are limited by the need to communicate among multiple routers, whereas the aforementioned techniques are not subject to this constraint. APS and RPRP are mechanisms that aim at exploiting redundant paths in a fast and efficient way; they are based on the existence of protection links. IP interface bundles are used to group several physical links into a single virtual link, i.e., a logical link. If one or more physical links fail, traffic can be quickly shifted to the other links in the bundle. This mechanism is transparent to the routing protocol. Nevertheless, disruptive outages are likely to cause the failure of all the links in the bundle.

MPLS-FRR aims at repairing damaged tunnels by creating “bypass tunnels” that replace the failed links. Bypass tunnels are simply other MPLS TE tunnels; they can be set up in a reactive or proactive manner, and usually work by adding one or more labels to the packets traveling on a primary tunnel, in order to divert traffic onto the bypass tunnel. However, as observed in (Wang et al., 2010), while MPLS-FRR is the major technique currently deployed to handle network failures, practical limitations still exist in terms of complexity, congestion, and performance predictability.
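The local-repair idea behind MPLS-FRR can be sketched in a few lines: when the protected link fails, the point of local repair pushes the bypass tunnel's label on top of the packet's label stack. The label values and the function below are invented for illustration:

```python
# Toy sketch of MPLS fast reroute local repair: on link failure, push
# the pre-signaled bypass tunnel's label onto the label stack so packets
# are diverted around the failed link (labels here are made up).

def forward(label_stack, link_up, bypass_label):
    """Return the label stack actually placed on the wire."""
    if link_up:
        return label_stack
    # Local repair: push the bypass tunnel's label on top of the stack.
    # The penultimate hop of the bypass pops it, restoring the original
    # stack before the packet rejoins the primary tunnel.
    return [bypass_label] + label_stack

packet = [100]                       # primary-tunnel label
print(forward(packet, True, 999))    # primary path, stack unchanged
print(forward(packet, False, 999))   # diverted onto the bypass tunnel
```

Because the repair is purely local to the node adjacent to the failure, no routing-protocol convergence is needed, which is exactly the advantage over layer 3 restoration that the excerpt describes.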


Which ring is the most privileged ring?

Other hardware architectures have similar concepts. There are four rings, numbered from 0 to 3. Programs executing in Ring 0 have the highest privileges and are allowed to execute any instructions or access any physical resources, such as memory pages or I/O devices. Guests are typically made to execute in Ring 3.

Which ring is the least privilege mode in ring architecture?

Ring 3. User processes running in user mode have access to Ring 3. Therefore, this is the least privileged ring.

What are privilege rings in a processor?

CPU protection rings are structural layers that limit interaction between installed applications on a computer and core processes. They typically range from the outermost layer, which is Ring 3, to the innermost layer, which is Ring 0, also referred to as the kernel. Ring 0 is at the core of all system processes.

What does Level 0 represent in a ring protection scheme?

There are basically four levels, ranging from 0, which is the most privileged, to 3, which is the least privileged. Most operating systems use level 0 for the kernel or executive and level 3 for application programs.