Networking devices

Peng Zhang, in Advanced Industrial Control Technology, 2010

11.1.1 Overview

A hub (sometimes referred to as a concentrator) is a network device that receives a data packet from one network node and retransmits it to all other connected nodes. In its simplest form, a hub works by duplicating the data packets received at its entry port and making them available to all other ports, thereby allowing data sharing between all devices connected to the hub. It also centralizes network traffic coming from multiple hosts and propagates the signal. A hub therefore needs enough ports to link machines to one another, usually 4, 8, 16, or 32 (Figure 11.2(A) shows some hubs). As with a repeater, a hub operates on layer 1 of the OSI reference model, the physical layer, which is why it is sometimes called a multiple-port repeater.

Figure 11.2. Some hubs and switches used in networks: (A) from the top down are a 4-port hub, an 8-port hub, and a 16-port hub; (B) from the top down are a wireless controllable network switch, a typical network switch for small office, and an Ethernet switch.

A switch (sometimes called a switching hub) is a network device that filters and forwards data packets across a network. A switch is normally a multiple-port device (it can have 48 or more ports; Figure 11.2(B) shows some switches) and is an active element working on layer 2 of the OSI model. Unlike a standard hub, which simply replicates what it receives, a switching hub keeps a record of the medium access control (MAC) addresses of the devices attached to it. When the switch receives a data packet, it forwards it directly to the recipient device by looking up the MAC address. The switch analyses the frames coming in on its entry ports and filters the data in order to deliver them solely to the right ports; in this respect it acts as a filter for known traffic and falls back to hub-like flooding only when a frame must reach every port. A network switch can utilize the full throughput potential of a network connection for each device, making it preferable to a standard hub.
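To make the hub/switch contrast concrete, the following C sketch shows the learning-and-forwarding logic just described. It is a minimal illustration under stated assumptions, not the internals of any real switch: the table layout, the port count, and the send_on_port() hook are all invented for the example.

#include <stdint.h>
#include <string.h>

#define NUM_PORTS  8
#define TABLE_SIZE 256
#define PORT_NONE  (-1)

typedef struct {
  uint8_t mac[6];
  int     port;    /* port this MAC address was last seen on */
  int     valid;
} MacEntry;

static MacEntry mac_table[TABLE_SIZE];

/* Stand-in for the hardware transmit path. */
static void send_on_port(int port) { (void)port; }

/* Find which port a destination MAC lives on; PORT_NONE if unknown. */
static int lookup_port(const uint8_t mac[6])
{
  for (int i = 0; i < TABLE_SIZE; i++)
    if (mac_table[i].valid && memcmp(mac_table[i].mac, mac, 6) == 0)
      return mac_table[i].port;
  return PORT_NONE;
}

/* Learn (or refresh) the source MAC -> ingress port mapping. */
static void learn(const uint8_t mac[6], int in_port)
{
  int free_slot = -1;
  for (int i = 0; i < TABLE_SIZE; i++) {
    if (mac_table[i].valid && memcmp(mac_table[i].mac, mac, 6) == 0) {
      mac_table[i].port = in_port;  /* refresh: the station may have moved */
      return;
    }
    if (!mac_table[i].valid && free_slot < 0)
      free_slot = i;
  }
  if (free_slot >= 0) {
    memcpy(mac_table[free_slot].mac, mac, 6);
    mac_table[free_slot].port = in_port;
    mac_table[free_slot].valid = 1;
  }
}

/* A switch forwards to the one right port when it can; a hub would take
 * the flooding branch unconditionally. */
void handle_frame(int in_port, const uint8_t dst[6], const uint8_t src[6])
{
  learn(src, in_port);
  int out = lookup_port(dst);
  if (out != PORT_NONE && out != in_port) {
    send_on_port(out);
  } else {
    for (int p = 0; p < NUM_PORTS; p++)  /* unknown destination: flood */
      if (p != in_port)
        send_on_port(p);
  }
}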

The following discussion examines how hubs and switches perform their functions in networks.

(1) A network hub or repeater is a fairly unsophisticated broadcast device. It does not manage any of the traffic that comes through it: any packet entering any port is broadcast out on every other port except the entry port. If two or more nodes try to send packets at the same time, a collision is said to occur, and the network nodes have to go through a routine to resolve the conflict. The process is prescribed by the Ethernet Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol. Each Ethernet adapter has both a receiver and a transmitter. If the adapters did not have to listen with their receivers for collisions, they would be able to send data at the same time they are receiving it (full duplex). Because they have to operate at half duplex (data flows one way at a time) and a hub retransmits data from one node to all of the nodes, the maximum bandwidth is shared by all of the nodes connected to the hub.
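The conflict-resolution routine that CSMA/CD prescribes is truncated binary exponential backoff. The C sketch below outlines it; the medium-access hooks (wait_until_medium_idle(), transmit_and_watch(), sleep_us()) are hypothetical stand-ins for adapter hardware, and the slot time shown is the classic 10 Mbit/s Ethernet value.

#include <stddef.h>
#include <stdlib.h>

#define SLOT_TIME_US  51   /* one 10 Mbit/s Ethernet slot time (~51.2 us) */
#define MAX_ATTEMPTS  16
#define BACKOFF_LIMIT 10

/* Hypothetical hardware hooks, stubbed for illustration. */
static void wait_until_medium_idle(void) { /* carrier sense */ }
static int  transmit_and_watch(const void *frame, size_t len)
{ (void)frame; (void)len; return 0; /* nonzero would mean a collision */ }
static void sleep_us(long us) { (void)us; }

/* After the nth collision, wait a random number of slot times drawn from
 * 0 .. 2^min(n,10) - 1; give up after 16 attempts. Returns 0 on success,
 * -1 on excessive collisions. */
int csma_cd_send(const void *frame, size_t len)
{
  for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
    wait_until_medium_idle();
    if (transmit_and_watch(frame, len)) {
      int k = attempt < BACKOFF_LIMIT ? attempt : BACKOFF_LIMIT;
      sleep_us((rand() % (1L << k)) * SLOT_TIME_US);  /* random backoff */
      continue;
    }
    return 0;
  }
  return -1;
}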

A hub network behaves like a shared medium, that is, only one device can successfully transmit at a time and each host remains responsible for collision detection and retransmission. Some hubs have special stack ports, allowing them to be combined in a way that allows more hubs than simple chaining through Ethernet cables, but even so a large Fast Ethernet network is likely to require switches to avoid the chaining limits of hubs. It is possible to connect several hubs together in order to centralize a larger number of machines; this is sometimes called a daisy chain. To do this, the hubs are connected using crossover cable, a cable in which the transmit lines at one end are wired to the receive lines at the other. Hubs generally have a special port called an uplink for connecting two hubs together using a patch cable. There are also hubs which can cross or uncross their ports automatically depending on whether they are connected to a host or a hub.

(2) A network switch typically includes a set of input ports for receiving packets arriving on the buses, a set of output ports for forwarding packets outward on the buses, and a switch fabric such as a cross-point switch for routing packets from each input switch port to the output switch ports that are to forward them. Network switch input and output ports often include buffer memories for storing packets until they can be forwarded through the switch fabric or outward on a network bus. An output port’s buffer allows it to receive data faster than it can forward it, at least until the buffer fills up. When the buffer is full, incoming data are lost. A network switch port often uses one or more SDRAMs to implement its buffer memory since they are inexpensive. Some input switch ports include protocol processors for converting each incoming packet to a sequence of cells of uniform size. The input port stores the cells in its buffer memory until it can forward them through the switch fabric to one of the output ports. Each output switch port in turn stores cells received via the switch fabric in its buffer memory and later forwards them to another protocol processor, which reassembles them into a packet to be forwarded outward on a network bus.
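As a rough illustration of the cell segmentation performed by an input port's protocol processor, consider the C sketch below. The 48-byte cell payload is an ATM-like choice made purely for illustration, and enqueue_to_fabric() is a hypothetical stand-in for the port's buffer/fabric interface.

#include <string.h>

#define CELL_PAYLOAD 48   /* fixed payload bytes per cell (illustrative) */

typedef struct {
  int  seq;                  /* position of this cell within its packet  */
  int  last;                 /* set on the final cell so the output port
                                knows when to reassemble                 */
  int  len;                  /* valid payload bytes (short on the tail)  */
  char payload[CELL_PAYLOAD];
} Cell;

/* Stand-in for the input port's interface to the switch fabric. */
static void enqueue_to_fabric(const Cell *c) { (void)c; }

/* Chop a variable-length packet into uniform cells and hand each one to
 * the fabric, as the input port's protocol processor does. */
void segment_packet(const char *pkt, int pkt_len)
{
  int seq = 0;
  for (int off = 0; off < pkt_len; off += CELL_PAYLOAD, seq++) {
    Cell c;
    c.seq  = seq;
    c.len  = (pkt_len - off < CELL_PAYLOAD) ? pkt_len - off : CELL_PAYLOAD;
    c.last = (off + c.len >= pkt_len);
    memcpy(c.payload, pkt + off, (size_t)c.len);
    enqueue_to_fabric(&c);
  }
}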

The traffic manager in a network switch forwards packets of the same flow in the order that it receives them, but may forward high-priority packets preferentially. An output switch port may include a traffic manager for storing the cells received via the switch fabric in its buffer memory and for forwarding them later to another protocol processor. The output port’s protocol processor reassembles each cell into a packet and forwards the packet outward on a network bus. A network switch output port may be able to forward a packet outward to another network switch or to a network node on a selected channel of any of several different network buses, and the traffic manager of a network switch output port decodes the packet’s flow identification number (FIN) to determine which output bus or bus channel is to convey the packet away from the port. The output port’s traffic manager may also decode a packet’s FIN to determine the packet’s minimum and maximum forwarding rates and priority. The network switch also includes an address translation system which relates a network destination address to an output port that can forward the packet to that network address. When an input port receives an incoming packet it stores it, reads its network destination address, consults the address translation system to determine which output port is to forward the packet, and then sends a routing request to the switch’s arbitration system.
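The FIN decoding step can be pictured as a table lookup, as in the hedged C sketch below; the field layout and the example entries are illustrative assumptions, not a description of any particular switch's tables.

#include <stddef.h>

typedef struct {
  unsigned fin;       /* flow identification number               */
  int      out_bus;   /* output bus/channel that carries the flow */
  int      priority;
  long     min_rate;  /* guaranteed forwarding rate, bits/s       */
  long     max_rate;  /* ceiling on the forwarding rate, bits/s   */
} FlowEntry;

/* Illustrative entries; a real table is provisioned by management software. */
static const FlowEntry flow_table[] = {
  { 1, 0, 7, 1000000L, 10000000L },
  { 2, 1, 3,  500000L,  2000000L },
};

/* Resolve a packet's FIN to its forwarding parameters; NULL if unknown. */
const FlowEntry *decode_fin(unsigned fin)
{
  for (size_t i = 0; i < sizeof(flow_table) / sizeof(flow_table[0]); i++)
    if (flow_table[i].fin == fin)
      return &flow_table[i];
  return NULL;
}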

Hubs and switches are used to divide up networks into a number of subnetworks. For example, if a plant floor is dynamically exchanging large amounts of data across the network, its traffic will slow down the network for other users. To solve this problem, two switches can be used, with a floor’s computers being connected to form one network while the remaining computers are connected to form another. The two switches can then be connected to the router that sits between the internal network and the internet. The floor’s traffic is only seen by the computers on that network, but if they need to connect to a computer on the other network, data are sent through the router in the middle.

Modern networking equipment combines the multiple connectivity of the hub with the selective routing of data packets from different protocol networks with the help of bridges (see section 11.3). Modern switches also have plug-and-play capability. This means that the switches are capable of learning the unique addresses of devices attached to them, even if those devices are plugged into a hub which in turn is then attached to the switch, without any programming. If a computer or an industrial controller is plugged directly into a switch, that switch would only allow traffic addressed to that device to be sent to it. By controlling the flow of information between ports, switches achieve major advantages over current shared environments:

(a)

When all devices are directly connected into a switch port, the opportunity for collision between ports is eliminated. This ensures that packets arrive with much greater certainty than in a shared environment.

(b)

Each port has more bandwidth available to it. In a shared environment, any port in the system could consume the entire bandwidth at any given time. This means that during a traffic peak, the network availability of any other node is greatly reduced. In a completely port-switched environment, however, the only traffic flowing down the wire between any node and the switch is either traffic destined for, or created by, that particular node.

In conclusion, switches and hubs provide industrial users with much of the functionality that, in the past, could only be provided by wiring distinct, proprietary control networks. The elimination of collisions by connecting every node to a switched port, coupled with the ability to keep control and office traffic from interacting unwantedly while still using one physical network, allows industrial users to enjoy the open architecture and massive bandwidth and speed of Ethernet without compromising the integrity of their control traffic.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9781437778076100117

Circuit collection, volume V

Richard Markell, Editor, in Analog Circuit Design, 2013

Introduction

With the explosive growth in data networking equipment has come the need to support many different serial protocols using only one connector. The problem facing interface designers is to make the circuitry for each serial protocol share the same connector pins without introducing conflicts. The main source of frustration is that each serial protocol requires a different line termination that is not easily or cheaply switched.

With the introduction of the LTC1343 and LTC1344, a complete software-selectable serial interface port using an inexpensive DB-25 connector becomes possible. The chips form a serial interface port that supports the V.28 (RS232), V.35, V.36, RS449, EIA-530, EIA-530A or X.21 protocols in either DTE or DCE mode and is both NET1 and NET2 compliant. The port runs from a single 5V supply and supports an echoed clock and loop-back configuration that helps eliminate glue logic between the serial controller and the line transceivers.

A typical application is shown in Figure 38.29. Two LTC1343s and one LTC1344 form the interface port using a DB-25 connector, shown here in DTE mode.

Figure 38.29. LTC1343/LTC1344 Typical Application

Each LTC1343 contains four drivers and four receivers and the LTC1344 contains six switchable resistive terminators. The first LTC1343 is connected to the clock and data signal lines along with the diagnostic LL (local loopback) and TM (test mode) signals. The second LTC1343 is connected to the control-signal lines along with the diagnostic RL (remote loopback) signal. The single-ended driver and receiver could be separated to support the RI (ring-indicate) signal. The switchable line terminators in the LTC1344 are connected only to the high-speed clock and data signals. When the interface protocol is changed via the digital mode-selection pins (not shown), the drivers and receivers are automatically reconfigured and the appropriate line terminators are connected.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780123978882000389

State of the Art on Technology and Practices for Improving the Energy Efficiency of Data Storage

Marcos Dias de Assunção, Laurent Lefèvre, in Advances in Computers, 2012

4.3.5 Consolidation at the Storage and Fabric Layers

Consolidation of both data storage and networking equipment can lead to substantial savings in floor space requirements and energy consumption. Some manufacturers argue that by providing multi-protocol network equipment, the network fabric can be consolidated on fewer resources, hence reducing floor space, power consumption, and cooling requirements. In addition, the increasing use of blade servers and migration of virtual machines encourage the use of networked storage, which then allows for improvements in storage efficiency by means of consolidation [35].

Storage consolidation is not a recent topic. In fact, SANs have been providing some level of storage consolidation and improved efficiency for several years by permitting the sharing of arrays of disks across multiple servers over a local private network, avoiding islands of data. Hence, moving from direct-attached storage (DAS) to networked storage systems offers a range of benefits, which can increase energy efficiency. These benefits include [35]:

Capacity sharing: administrators can improve storage utilization by pooling storage capacity and allocating it to servers as needed. Hence, it helps reduce the storage islands caused by direct-attached storage.

Storage provisioning: storage can be provisioned in a more granular way. Volumes can be provided at any increment, in contrast to allocating physical capacity or entire disks to a particular server. In addition, volumes can be resized as needed without incurring server downtime.

Network boot: this allows administrators to move not only the servers’ data to the networked storage, but also the server boot images. Boot volumes can be created and accessed at boot time, without the need for local storage at the server.

Improved management: storage consolidation removes many of the individual tasks for backup, data recovery, and software updates. These tasks can be carried out centrally using only one set of tools.

Manufacturers of storage equipment have provided various consolidated solutions, generally under the banner of unified storage. Traditionally, enterprise storage uses different storage systems for each storage function. One solution might be deployed for online network-attached storage, another for backup and archival, while yet a third is used for secondary or near-line storage. These systems can use different technologies and protocols. With the goal of minimizing cost by reducing floor space and power requirements, unified-storage solutions usually accommodate multiple protocols and offer transparent and unified access to a storage pool regardless of the storage tier where the data is located [38] (e.g., NetApp’s Data ONTAP, EMC’s Celerra Unified Storage Platforms). Software systems are used to migrate data across different storage tiers according to their reuse patterns.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780123965288000043

Data center architectures

C. DeCusatis, in Optical Interconnects for Data Centers, 2017

1.4 Application architectures

Within the data center, an application architecture describes how functions are spread across servers in order to deliver end services to data center users. Historically, in the early 1960s when compute power was expensive, applications processing was done on a large centralized computer; users interacted with this system using so-called “dumb terminals” (keyboards and displays with no processing power of their own). Data networking was limited to modems which communicated between dumb terminals and large computers over the public telephone network, often at very low speeds (perhaps 10–56 kbits/second). With the development of microprocessors, the first personal computers appeared in the mid-1970s. Although these could be configured to emulate a terminal connection to a larger computer, many new applications emerged which were supported on desktop or laptop computers. As more data became distributed across individual user computers, the need for an efficient file sharing mechanism quickly became apparent. The process of copying files to disk, manually carrying the disk to another computer, and reinstalling the files became far too cumbersome for most users. This led to the development of local area networks (LANs) in the late 1980s and early 1990s, which enabled file sharing between computers.

LANs enabled a new type of architecture called client−server computing, as shown in Fig. 1.5. Processing work could be divided between the personal computer client and a larger, centralized server. Generally, the client performance was less than the server for most applications, although the steady improvements in client performance with each technology generation shifted the nature of applications that would preferably run on the server. Client−server architectures were widely adopted during and after the 1990s, and are still used today by many large enterprises. In a traditional client−server design, the client may be a personal computer while the server is a mainframe or enterprise-class computer. The server-centric approach offered all the benefits of centralized processing and control. Since data communication flowed through the central server, implementation of policy-based control and security was simplified. Some enterprise architectures attempted to deploy “thin clients” (essentially dumb terminals) to reduce cost. More recently, with the increasing demands of a mobile and telecommuting workforce and the falling cost of computing hardware, many users needed the flexibility of a more powerful client device. In early client−server deployments, a more powerful client was often under-utilized, wasting storage, processing power, and bandwidth. The modern client−server model has evolved to leverage Internet connectivity into cloud computing or application as a service model. The role of clients has expanded to include smart phones and other mobile devices, which are served by VMs within warehouse-scale cloud data centers worldwide.

Figure 1.5. Client−server architecture.

An alternative design is the peer-to-peer system depicted in Fig. 1.6. Servers work directly with each other as peers to accomplish all or part of the workload, without the assistance of a central server. This is possible because of the increased processing power of low-cost, commodity-architecture servers, whose capacity may be under-utilized in a client−server design. Well-known examples of peer-to-peer architectures include the file sharing service BitTorrent, Skype’s voice over IP system, and distributed processing applications such as SETI@home [21]. Similar designs have also begun to see adoption in large business environments. In a file sharing program such as BitTorrent, there is no concept of downloading files from a central server to a client. Instead, each computer hosts a client program and a set of files. When one computer requests a file, a group of computers which contain all or parts of the file is assembled, and different parts of the file are downloaded simultaneously, in parallel, from multiple computers in this group; segments of the file are reassembled on the target computer to form the complete file. As more users share the same file, download speeds will increase for that file, since a computer can take smaller and smaller pieces of the file from each computer in the group. Another popular example of peer-to-peer architecture (with one exception that is technically a centralized server) is the Skype voice over IP system, which offers free or inexpensive phone calls over the Internet. This design is actually a hybrid, since it requires a central login server where users authenticate to the system. All other operations of Skype are done using a peer-to-peer design. To look up the name and address of someone you wish to call, Skype uses a directory search process. Compute nodes on the Skype network can be promoted to act as “super nodes” if they have enough memory, bandwidth, and processor capacity. The Skype directory search (as well as other signaling functions) is a peer-to-peer process performed on the super nodes. Transport of the call data is done by routing voice packets between the two host computers at either end of the call.

Figure 1.6. Peer-to-peer architecture.
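The piece-by-piece parallel download described above can be sketched in a few lines of C. This is a deliberate simplification: a real BitTorrent client selects the rarest pieces first and tracks which peers actually hold each piece, whereas this illustration merely spreads requests round-robin over the peer group, and request_piece_from_peer() is a hypothetical transport call.

/* Stand-in for the network request sent to one peer. */
static void request_piece_from_peer(int piece, int peer)
{ (void)piece; (void)peer; }

/* Spread piece requests across the peer group so that different parts of
 * the file arrive in parallel and can be reassembled on the target. */
void assign_pieces(int num_pieces, int num_peers)
{
  for (int piece = 0; piece < num_pieces; piece++)
    request_piece_from_peer(piece, piece % num_peers);
}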

In addition to the hardware architecture, data centers also employ various software architectures. A detailed discussion of software architecture is beyond the scope of this chapter, although we will mention a few examples for the sake of completeness. Modern cloud computing systems can employ a service oriented architecture, which is actually an application development methodology which creates solutions by integrating one or more web services [3]. Each web service is treated as a subroutine or function call designed to accomplish a specific task (e.g., processing a credit card or checking airline departure times). These web services are treated as remote procedure calls or application program interfaces (APIs). Cloud computing environments may develop software-as-a-service or platform-as-a-service offerings based on these approaches. Software architectures may include agile development methodologies, particularly the collaborative approach between developers, IT professionals, and quality assurance known as DevOps. These approaches emphasize principles, such as fail fast, and borrow from other continuous improvement development processes such as the Deming Cycle (also known as Plan, Do, Check, Act, or PDCA) [22].

The most common general purpose application architecture for enterprise data centers is a multitier model, as shown in Fig. 1.7. Based on the layered designs supporting enterprise resource planning and content resource management solutions, this design includes tiers of servers hosting web, application, and database systems. Multitier server farms provide improved resiliency by running redundant processes on separate servers within the same application tier. In this way, one server can be taken out of service without interrupting execution of the process. Resiliency is also encouraged by load balancing between the tiers. Using server virtualization, the web and application servers can be implemented as VMs on a common physical server (assuming this meets resiliency goals). Conventional database servers deploy separate physical machines rather than using virtualization, due to performance concerns. The data center tiers can be segregated from each other by using different physical routers within each tier, or by provisioning virtual local area networks (VLANs). Often VLAN-aware firewalls and load balancers will also be employed between tiers. Physically separate networks may achieve better performance, with the tradeoff of higher deployment cost and more devices to manage. The main advantages of VLANs are the reduced complexity and cost. System performance requirements and traffic patterns will often help determine whether physical or virtual network segmentation is preferred for a given design.

Figure 1.7. Traditional data center network architecture.

Source: From Handbook of Fiber Optic Data Communication, Chapter 1, Figure 1.1. After InfiniBand Trade Association [Online], //www.infinibandta.org/content/pages.php?pg=technology_overview [accessed 29.01.13].

The multitier approach creates clusters of servers, storage, and networking equipment which are used to achieve high availability, load balancing, and improved security. Resource clustering is a general principle which can be applied to other computing applications. For example, there is a particular class of high performance computing (HPC) applications which combine multiple processors to form a unified, high performance system, leveraging special software and high-speed networking. Examples of HPC clusters can be found in scientific and technical research (including meteorology, seismology, and aerodynamics), real-time financial trading analytics, rendering of high resolution graphics, and many other fields. HPCs are available in many different types, using both commodity, off-the-shelf hardware, and custom designed processors. There are three main categories of HPC which are generally recognized by the industry:

HPC Type 1 (parallel message passing, or tightly coupled): Applications run on all compute nodes simultaneously, in parallel, while a master node determines workload allocations for each compute node.

HPC Type 2 (distributed I/O processing, or search engines): Rapid response to client requests is achieved by balancing the requests across master nodes; the requests are then sprayed across many compute nodes for parallel processing (current unicast systems are gradually being replaced by multicast).

HPC Type 3 (parallel file processing or loosely coupled): Data files are divided into segments and distributed across a server pool for parallel processing; partial results are later recombined.

These clusters can be large or small (up to 1000 servers), organized into subgroups with inter-processor communication between subgroups. A regularly updated list of the top 500 supercomputers in the world [23] provides an overview of the server, storage, and network interconnects currently in use. Most cluster networks are based on variations of Ethernet, although other proprietary network fabrics are also used. Topologies can include variations on a hypercube, torus, hypertree, and full or partial meshes (to provide equal latency and shortest paths to all compute nodes). HPC networks may include four-way or eight-way Equal Cost Multi-Pathing (ECMP), and distributed forwarding based on both Layer 3 and Layer 4 port hashing.
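As an illustration of the Layer 3/Layer 4 hashing used for ECMP, the C sketch below maps a packet's 5-tuple to one of n equal-cost paths. The hash mix itself is arbitrary (real switches use their own, often proprietary, hash functions); the point is that packets of the same flow always hash to the same path and therefore stay in order.

#include <stdint.h>

/* Pick one of num_paths equal-cost paths from the L3/L4 header fields. */
int ecmp_select_path(uint32_t src_ip, uint32_t dst_ip,
                     uint16_t src_port, uint16_t dst_port,
                     uint8_t proto, int num_paths)
{
  uint32_t h = src_ip;
  h = h * 31 + dst_ip;
  h = h * 31 + (((uint32_t)src_port << 16) | dst_port);
  h = h * 31 + proto;
  h ^= h >> 16;  /* fold the high bits down */
  return (int)(h % (uint32_t)num_paths);
}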

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780081005125000012

Optical Amplifiers for Next Generation WDM Networks: A Perspective and Overview

Atul Srivastava, John Zyskind, in Optically Amplified WDM Networks, 2011

1.4.12 Chapter 13. Low-cost optical amplifiers (Bruce Nyman and Greg Cowle)

Cost reduction of optical amplifiers is of increasing concern because of continual pressure on the pricing of optical networking equipment, because of changes in applications and network architectures which are extending the range of applications of amplifiers beyond the line amplifier repeaters of the core network, and because the dominant EDFA technology is not as easily amenable to cost reduction through integration as other technologies such as semiconductors.

This chapter examines the issues involved in lowering the cost of optical amplifiers, focusing on single-stage optical amplifiers because, by and large, they will be used in the highest-volume, most cost-sensitive applications, such as metro and access network line amplifiers, single-channel amplification for high-speed, advanced modulation format channels, cable television (CATV) distribution booster amplifiers, and ASE sources for WDM passive optical networks (PONs). The alternative technologies for low-cost amplifiers, such as semiconductor optical amplifiers and erbium-doped waveguide amplifiers (EDWAs), are covered. EDFAs, which are the dominant technology, comprise multiple components with different features and are based on different technologies. The challenges and opportunities of reducing the costs of the primary components of EDFAs and the labor costs of assembling EDFAs are discussed. EDWAs offer opportunities for cost reduction by integrating the features of many of the components required for optical amplifiers. However, the lower efficiency of converting pump-to-signal power in erbium-doped planar waveguides, compared with erbium-doped fiber, poses an obstacle to the commercial realization of the potential cost advantages of EDWAs. A recent approach is the PLC erbium-doped fiber amplifier, in which many of the passive devices are integrated on a PLC but the gain is provided by an erbium-doped fiber. This approach combines the cost advantages of PLC integration with the performance and pump efficiency of erbium-doped fiber and is especially advantageous for complex amplifier architectures requiring many optical components.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780123749659100019

Networking

Colin Walls, in Embedded Software (Second Edition), 2012

8.2 Who Needs a Web Server?

It is becoming quite common for web servers—or, more correctly, HTTP servers—to be incorporated into routers, gateways, and other networking equipment. There is every reason to consider doing this for a multitude of other types of embedded systems. In a piece written for the “Nucleus Reactor” newsletter in 2000, Neil Henderson (CEO of Accelerated Technology) gave a very good outline of what was possible and how to get started. This provided the basis for this article. It may be interesting to compare the use of HTTP with SNMP because in this context, their objectives are quite similar. Take a look at the article “Introduction to SNMP” later in this chapter.

CW

8.2.1 Introduction

You may view web servers in the same manner I did before I understood what they could do in an embedded system. In my mind, web servers were located on machines with huge disk drive capacity and served up pages to web browsers. Well, huge disk drives are not necessary, and web servers can do much more than just serve up web pages!

With an embedded web server you can, of course, serve up pages. But did you know you can use a web server to provide an interactive user interface for your embedded system? Did you realize that you could program that interface once and be able to use it independent of the type of machine your user has? Did you further know that you could monitor and control your embedded system from any web browser with very little programming? All of these things are made possible by this very small, very efficient, and very powerful piece of software.

In this paper I will provide information that you will most likely be able to use on your embedded system. All you need is a TCP/IP networking stack, an embedded HTTP server (i.e., web server), and a little imagination. So, let’s get started.

8.2.2 Three Primary Capabilities

Web servers are capable of performing three basic functions:

Serve web pages to a web browser

Monitor the device within which they are embedded

Control the device within which they are embedded.

We will be examining these functions in more detail in the remainder of this chapter. Here, I will give you a brief introduction to each of these functions so that you can better understand the sections that follow.

Serve Web Pages to a Web Browser

This is the most fundamental capability of a web server. The web server waits on the network for a web browser to connect. Once connected, the web browser provides a filename to the web server, and the web server downloads that page to the web browser.

In the most basic case, the web server can download simple HTML files (simple because there are no inherent capabilities other than to show information) from within its file system to the web browser. This feature is ideal for downloading user documentation from the embedded system so that it can be used in a web browser.

A more sophisticated and extremely powerful capability is for the web server to download Java programs or applets (encapsulated in an HTML file) to the web browser. Once loaded in the web browser, the Java program or applet executes and can communicate with the target (that contains the web server) using the TCP/IP protocol. The power of this capability lies in the ability to:

Support legacy applications (existing TCP/IP applications can communicate with a Java application running in a browser, rather than requiring proprietary applications to be written for different desktop operating systems).

Write sophisticated TCP/IP-based applications between a host and server where you control both sides regardless of where the host is running.

Monitor a Device

Often there is a need to retrieve (i.e., monitor) information about how an embedded system is functioning. Monitoring can range from determining the current pixel resolution of a digital camera to receiving vital signs from a medical device.

By embedding certain commands within an HTML page, dynamic information can be inserted into the HTML stream that is sent to the web browser. As the web server retrieves the file from the file system, it scans the text for special comments. These comments indicate functions to be performed on the target. These functions then format dynamic information into HTML text and include the text into the HTML stream being sent to the web browser.

Control a Device

HTML has the capability to maintain “forms.” If you have ever browsed the web and tried to download something, you probably have seen a form. A form is a collection of “widgets” such as text entry fields, radio buttons, and single-action buttons that can be assembled to collect virtually any type of data.

By constructing an HTML page with a group of widgets, information can be collected from the user in a web browser. That information can then be transmitted to the target and used to adjust or alter its behavior. For example, an HTML page could be constructed to configure a robot arm to move in certain sequences to perform some necessary function (e.g., to bend a piece of sheet metal). This could be done by placing specific text entry boxes in the HTML page that instruct the user to enter a number of specific data points. After being sent to the web server, the data points can be analyzed by the embedded system’s application, validated, and then acted upon to move the robot arm in the proper directions (or, if the data is invalid, the user can be asked to reenter it).

8.2.3 Web Servers at Work

Once again we will explore the use of the web server based on the three primary capabilities. We will look at the processing that is done on the web server and how information is supplied both from and to the web browser. I will discuss the ability to serve pages to a web browser. Then, we will progress into the more complex tasks that can be achieved by implementing an embedded web server—namely, using the web server to provide dynamic information to a web browser and using a web server to control your embedded system.

Communication between the web server and the web browser is controlled by HTTP (HyperText Transfer Protocol). HTTP supplies the rules for coordinating the requests for pages to the web server from the web browser and vice versa. The pages are transferred in HTML (HyperText Markup Language) format.

Serving Pages

As discussed previously, the simplest use of the web server is providing HTML pages from the web server to the web browser. This is a straightforward operation where the server maintains a directory structure containing a series of files. The user, from the web browser, specifies the URL (Uniform Resource Locator) that includes the IP address of the web server and the name of the file to be retrieved. The web browser transmits an HTTP packet to the web server with the requested filename. The web server locates the file and sends it to the browser via the HTTP protocol. Finally, the web browser displays the page to the user.

This feature can be used to supply information such as the device’s user manual from the embedded system to the user on a web browser. In most web server implementations, the ability to serve pages up to a web browser can be included in an embedded system with little or no coding effort.
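To give a feel for the server side of that exchange, here is a pared-down C sketch of GET handling on an already-accepted socket. It is a toy, not any vendor's embedded server: the HTTP headers, error handling, and security checks (a real server must validate the requested path) are reduced to a minimum.

#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Handle one request on a connection returned by accept(). */
void serve_request(int conn_fd)
{
  char req[512], path[256], buf[512];

  ssize_t n = read(conn_fd, req, sizeof(req) - 1);
  if (n <= 0)
    return;
  req[n] = '\0';

  /* The request line looks like: GET /index.html HTTP/1.0 */
  if (sscanf(req, "GET /%255s", path) != 1)
    return;

  FILE *f = fopen(path, "rb");
  if (!f) {
    const char *nf = "HTTP/1.0 404 Not Found\r\n\r\n";
    write(conn_fd, nf, strlen(nf));
    return;
  }

  const char *ok = "HTTP/1.0 200 OK\r\n\r\n";
  write(conn_fd, ok, strlen(ok));

  size_t r;
  while ((r = fread(buf, 1, sizeof(buf), f)) > 0)
    write(conn_fd, buf, r);
  fclose(f);
}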

Using the Web Server to Provide Dynamic Information to a Web Browser

By manipulating the HTML page that is sent to the web browser, the embedded system employing the web server can supply dynamic information to the user. The web server on the embedded device scans every HTML file that is sent to the web browser. If a certain string is encountered during the scanning process, the web server knows to call a function within the embedded system. The called function then knows how to format the dynamic information in HTML and append it to the buffer being sent to the web browser.

Let’s assume, for example, that our embedded system is a router. Let’s further assume that we want to display the router’s IP address. The complete HTML file to display this information may look something like this:

<BODY>The IP Address of the Router is: <!-# IPADDR> </BODY>

As the web server scans this HTML, it encounters the <!-# symbol, performs a lookup on the string IPADDR, and determines that a function display_IP_addr(Token *env, Request *req) is to be called.

display_IP_addr() may look something like this:

/* Create a temporary buffer. */
char ubuf[600];

void display_IP_addr(Token *env, Request *req)
{
  unsigned char *p;

  /* Get the IP address. */
  p = req->ip;

  /* Convert the IP addr to a string and place in ubuf. */
  sprintf(ubuf, "%d.%d.%d.%d", p[0], p[1], p[2], p[3]);

  /* Include the IP string in the buffer on its way to the browser. */
  ps_net_write(req, ubuf, (strlen(ubuf)), PLUGINDATA);
}

Let’s quickly review what we have just done. In the HTML file, we indicate that we want to display the string “The IP Address of the Router is.” Additionally, there is a command to display the value of IPADDR. It is not evident in what we see here, but the IPADDR reference is actually in a table on the target. In the table, IPADDR has a corresponding element named display_IP_addr that is a pointer to the function call of the same name.

In the code, we assume that the web server has already found the <!-#> string and has located the IPADDR element in the table. This has resulted in the call to display_IP_addr().

display_IP_addr() simply fetches the IP address from the req structure, formats it into the easily recognizable four-part IP number and then places the resultant string into the buffer that is on its way to the web browser.

From this simple example we can begin to see the power the web server possesses to transmit dynamic information to a web browser. By using more sophisticated HTML information, elaborate user displays can be created that are exciting and informative.
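One plausible shape for the token table mentioned in this example is sketched below. The Token and Request types and display_IP_addr() come from the listings above; the TokenEntry structure, the table name, and the dispatch_token() helper are assumptions made for illustration, not the actual data structures of any particular embedded web server.

#include <stddef.h>
#include <string.h>

/* Hypothetical target-side table tying each token to its plug-in. */
typedef struct {
  const char *token;                          /* string after <!-#      */
  void (*handler)(Token *env, Request *req);  /* emits the token's HTML */
} TokenEntry;

static const TokenEntry token_table[] = {
  { "IPADDR", display_IP_addr },
  /* further tokens would be registered here */
};

/* Called when the scanner finds a <!-# token in an outgoing page. */
void dispatch_token(const char *name, Token *env, Request *req)
{
  for (size_t i = 0; i < sizeof(token_table) / sizeof(token_table[0]); i++) {
    if (strcmp(token_table[i].token, name) == 0) {
      token_table[i].handler(env, req);
      return;
    }
  }
}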

Using the Web Server to Control an Embedded System

For years, developers of network-enabled products (e.g., printers, routers, bridges) have had to develop multiple programs to remotely configure these devices. Since, in many cases, the products can be used on the Windows, Mac OS, and Linux operating systems, developers of these types of systems are forced to write applications for all three platforms. Using a web server can reduce this programming effort to developing one or more HTML pages and writing some code for the target. Using this paradigm, the users of the printers, routers, bridges, and so forth simply connect to the device using a web browser. I recently bought a SOHO router that had this capability. An IP address was specified in the literature that came with the router. I used that IP address in my web browser to communicate to the router’s web server. It supplied a full screen of options to configure the router for my particular circumstances. Let’s take a minute to look at a simple example of how something similar to this might be accomplished with an HTML file and a little code.

The HTML file will look as follows:

<BODY> Use DHCP to acquire IP Address? </BODY>
<br>
<br>
<INPUT TYPE="RADIO" NAME="RADIOB" VALUE="YES" CHECKED>YES
<br>
<INPUT TYPE="RADIO" NAME="RADIOB" VALUE="NO">NO
<br>
<br>
<INPUT TYPE="SUBMIT" VALUE="SUBMIT">

The code that will be used to process this request may be as follows:

int use_DHCP_flag;

void use_DHCP(Token *env, Request *req)
{
  /* Verify that we are looking at the right "command". */
  if (strcmp(req->pg_args->arg.name, "RADIOB") == 0)
    /* Should we use DHCP? */
    if (strncmp(req->pg_args->arg.value, "YES", 3) == 0)
      /* Yes, use DHCP. */
      use_DHCP_flag = TRUE;
}

Once again, we will review the elements just illustrated. First of all, let’s look at the HTML. We have three sections in this file separated by two line breaks. The first section is simply a prompt for the user. The second section is the code necessary to display the radio buttons in the browser. The third section serves two functions. First, it dictates the drawing of the Submit button. Second, once the button is clicked, it triggers the browser to send the information from this screen to the web server. For our discussion, the format of the data in the packet sent from the web browser to the web server does not matter. However, as you can see in the preceding function use_DHCP(), the information is easily provided to a function that is capable of acting upon the user’s request, in this case having the router use DHCP to acquire its IP address.

8.2.4 Brief Summary of the Web Server’s Capabilities

We have looked at three distinct capabilities of the web server: transmitting HTML pages to a web browser, providing HTML files with dynamic information in them to a web browser, and using a web browser to command or control an embedded system. The examples and explanation of these features are simple. However, their use is limitless!

I have given presentations to hundreds of people on the benefits of an embedded web server. In those presentations, I always emphasize the importance of imagination in the use of this software. For about 20 K of code and a little effort, you can build systems that have sophisticated user interfaces allowing your users to understand, utilize, and control your embedded system.

What has been discussed thus far in this paper are the basic capabilities of the web server. In the section that follows, we will look at some additional capabilities that a specific implementation of a web server may or may not have.

8.2.5 What Else Should You Consider?

As you continue to pursue information about and use embedded web servers, you will find that vendors of commercial packages will vary in their offerings. Some things that you should look out for are:

Authentication (security)

Utilities for embedding HTML files

File compression

File upload capabilities.

HTTP 1.0 provides for basic network authentication. If you have ever tried to access a web page and received a dialog box that asks you to enter your network ID and password, you have seen the use of this capability. You should verify that the package provides the ability to add and delete users to the username/password database in the web server. In some cases, you will be required to add the code to do this.
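As a rough sketch of that add-user capability, the C fragment below keeps a fixed-size, in-memory username/password table. It is illustrative only: a real server should store password hashes rather than plaintext, and the base64 decoding of the HTTP basic-authentication header is omitted here.

#include <string.h>

#define MAX_USERS 16

typedef struct {
  char name[32];
  char pass[32];   /* a real server should store a hash, not plaintext */
  int  used;
} UserEntry;

static UserEntry users[MAX_USERS];

/* Add a user; returns 0 on success, -1 if the table is full. */
int add_user(const char *name, const char *pass)
{
  for (int i = 0; i < MAX_USERS; i++) {
    if (!users[i].used) {
      strncpy(users[i].name, name, sizeof(users[i].name) - 1);
      strncpy(users[i].pass, pass, sizeof(users[i].pass) - 1);
      users[i].used = 1;
      return 0;
    }
  }
  return -1;
}

/* Check credentials supplied via HTTP basic authentication. */
int check_credentials(const char *name, const char *pass)
{
  for (int i = 0; i < MAX_USERS; i++)
    if (users[i].used &&
        strcmp(users[i].name, name) == 0 &&
        strcmp(users[i].pass, pass) == 0)
      return 1;
  return 0;
}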

In general, most embedded web servers will be using a very simple file system that resides in memory. Vendors that provide support for this should also provide support for building that file system on your desktop so that it can be included in ROM or Flash on your target. Furthermore, the vendor should also supply an ability to use a more full-featured file system that is capable of handling the myriad of offline storage capabilities available for embedded systems.

If a vendor supports the building of an in-memory file system (files included in ROM or Flash) as just discussed, they should also provide a file compression capability. HTML files can become large and consume a lot of space in an in-memory file system. The compression capability should be able to compress the files while building the in-memory file system and uncompress the file when requested from the web server.

HTML 3.2 provides for the uploading of files from the web browser’s host machine to the web server. A vendor supplying a reasonable implementation of a web server should also provide the ability to support this feature.

8.2.6 Conclusion

Web servers will continue to proliferate in embedded systems. The capabilities afforded by this technology are as broad as the imagination of embedded developers like you. This is a technology that can be harnessed to build sophisticated user interfaces to embedded systems, maintain a local repository for user documentation, allow users of the embedded system to control it, and much more.

As the web grows ever more ubiquitous, the number of embedded devices connected to it will increase. We know of no better method to monitor and control an embedded system right now than a web server. I hope you have the same excitement for this technology as I do. Most of all, I hope that you will be able to employ it in your system to its maximum advantage.

This paper was written primarily to introduce you to this technology and give you some idea how it might be beneficial. Hopefully it has encouraged you to get those creative juices flowing and find a way to use this great technology.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780124158221000088

Cloud-based approach in data centers

In Cloud Control Systems, 2020

14.2.1 Server level control

There are many control variables available at the server level for IT, power, and cooling management. The “server” in this case means all the IT devices, including the computing servers, the storage units, and the networking equipment. The computing resources, such as central processing unit (CPU) cycles, memory capacity, storage access bandwidth, and networking bandwidth, are all local resources that can be dynamically tuned, especially in a virtualized environment. Power control can be performed from either the demand side or the supply side, even at the server level. The power consumption of servers can be controlled by active management of the workload hosted by the server, for instance through admission control, load balance, and by workload migration or consolidation. On the other hand, power consumption can be tuned through physical control variables such as dynamic voltage and frequency scaling (DVFS) and through the on-off state control [515], [516], [517], [518], [519], [520], [521], [522], [523]. DVFS has already been implemented in many operating systems, for example the “CPU governors” in Linux systems. CPU utilization usually drives the DVFS controller, which adapts the power consumption to the varying workload.
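A utilization-driven DVFS controller of the kind described can be caricatured in a few lines of C, in the spirit of the Linux “ondemand” governor: jump to the top frequency under heavy load, step down gradually when mostly idle. The thresholds, the frequency ladder, and the set_cpu_frequency_khz() hook are illustrative assumptions, not the actual governor code.

#define NUM_FREQS 4

/* Illustrative frequency ladder in kHz; real ladders come from the
 * platform's cpufreq driver. */
static const long freq_khz[NUM_FREQS] = { 800000, 1200000, 1800000, 2400000 };
static int level = NUM_FREQS - 1;

/* Hypothetical platform hook that programs the clock and voltage. */
static void set_cpu_frequency_khz(long khz) { (void)khz; }

/* Called periodically with the measured CPU utilization in percent. */
void dvfs_tick(int cpu_utilization_pct)
{
  if (cpu_utilization_pct > 80)
    level = NUM_FREQS - 1;              /* heavy load: go straight to max */
  else if (cpu_utilization_pct < 30 && level > 0)
    level--;                            /* mostly idle: step down a notch */
  set_cpu_frequency_khz(freq_khz[level]);
}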

Previous work has focused on how to deal with the trade-off between power consumption and IT performance. For instance, Varma et al. [524] discuss a control-theory approach to DVFS. Cho et al. [515] discuss a control algorithm that varies both the clock frequency of a microprocessor and the clock frequency of the memory. Leverich et al. [525] propose a control approach to reduce static power consumption of the chips in a server through dynamic per-core power gating control.

Cooling control at the server level is usually implemented through active server fan tuning to cool down the servers [526]. Similar to power management, the thermal status of the servers (e.g., the temperature of the processors) can also be affected by active control of the workload or power. As one example, Mutapcic et al. [520] focus on the maximization of the processing capabilities of a multicore processor subject to a given set of thermal constraints. In another example, Cohen et al. [519] propose strategies to control the power consumption of a processor via DVFS so as to enforce the given constraints on the chip temperature and on the workload execution.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780128187012000226

Software-Defined Networking and OpenFlow

Saurav Das, ... Rob Sherwood, in Handbook of Fiber Optic Data Communication (Fourth Edition), 2013

17.5 Example application: WAN TE

All Tier 1 and several Tier 2 Internet Service Providers (ISPs) use some form of traffic engineering (TE) in their WAN infrastructures today. Providing greater determinism and better utilization of network resources are the primary goals of TE, and MPLS-TE networks are the preferred solution, mainly because plain-vanilla IP networks are incapable of providing the same service, and older ways of providing TE using ATM or Frame Relay networks are no longer used.

But MPLS-TE networks are costly and complex; and they do not provide carriers with the level of determinism, optimality, and flexibility they need [14]. Consider the following: In MPLS-TE, a tunnel’s reserved bandwidth is usually an estimate of the potential usage of the tunnel, made by the network operator. But traffic matrices vary over time in unpredictable ways. And so, a given tunnel’s reservation could be very different from its actual usage at a given time, leading to an unoptimized network.

Router vendors try to get around this problem via mechanisms like auto-bandwidth, but it is at best a local optimization. Each router is only aware of the tunnels it originates and, to some extent, the tunnels that pass through it. For all other tunnels, the router is only aware of the aggregate bandwidth reserved by these tunnels on links. In other words, even though the router builds a map giving global TE-link state, it only has a local view of tunnel state (or TE-LSP state). As a result, local optimizations performed by multiple decision makers (tunnel head-ends) lead to considerable network churn.

Another option is the use of a Path Computation Element (PCE). The PCE can globally optimize all tunnels as it has a full view of tunnel and link state. But the PCE is an offline tool. The results of the PCE calculation are difficult to implement in live networks. Head-end routers have to be locked one by one, and CLI scripts have to be executed carefully to avoid misconfiguration errors. This process is cumbersome enough that it is attempted only infrequently (e.g., once a month).

With SDN and OpenFlow, we can get the best of both approaches by making the PCE tool “online.” We benefit from the global optimization afforded by the PCE tool, and then have the results of those optimizations directly and dynamically update forwarding state (like the routers can do). The net effect is that the network operator can run a network with much greater utilization because of frequent optimization (perhaps every day or every hour), without network churn and the operational hassles of CLI scripts, due to the programmatic interface of the SDN TE platform and online PCE application.

17.5.1 Google’s OpenFlow-based WAN

At the time of this writing, perhaps the best known deployment of OpenFlow and SDN in a production network is Google’s deployment of centralized TE in its inter–data center WAN.

In terms of traffic scale, Google’s networks are equivalent in size to the world’s largest carriers [14]. Google’s WAN infrastructure is organized as two core networks. One of them is the I-Scale network, which attaches to the Internet and carries user traffic (e.g., searches and Gmail) to and from their data centers. The other is the G-Scale network that carries traffic between their global data centers. The G-Scale network runs 100% on OpenFlow.

Google built its own switches for the G-Scale network. They were designed to have the minimal support needed for this solution, including support for OpenFlow and the ability to collectively switch terabits of bandwidth between sites. By deploying a cluster of these switches and a cluster of OpenFlow controllers at each site, they created a WAN “fabric” on which they implemented a centralized TE service. The TE service (or application) collects real-time state and resource information from the network and interacts with applications at the edge requesting resources from the network. Because it is aware of the demand and has a global view of the supply, it can optimally compute paths for incoming traffic flows and have the results of those computations programmed into the WAN “fabric” via OpenFlow.

Based on their production deployment experience, Google cites several benefits for SDN usage in the WAN [9,14]. As mentioned in the introduction, nearly all the advantages can be categorized into the three major benefits that SDN provides in any network:

Simpler control with greater flexibility:

Dynamic TE with global view allowed them to run their networks “hot”—at a highly efficient (and previously unheard of) utilization of 95%. Typical carrier WANs usually run at utilizations of 30%.

Faster convergence to target optimum after network failures. This was directly a result of greater determinism and tighter control afforded by OpenFlow/SDN when compared to traditional distributed protocols.

SDN allowed them to move all control logic to external high-performance servers with more powerful CPUs instead of depending on the less capable CPUs embedded in networking equipment. These systems can also be upgraded more easily and independent of the data plane forwarding equipment.

Lower total cost of operations (TCO):

Traditionally, CapEx cost/bit should go down as networks are scaled, but in reality it does not. With SDN, Google was able to separate control from hardware and optimize them separately; they were able to choose the hardware based on the features they needed (and no more) and create the software based on the (TE) service requirements (instead of distributed protocol requirements in traditional network control planes), thereby reducing CapEx costs.

By separating the management, monitoring, and operation from the network elements, OpEx costs can be reduced by managing the WAN as a system instead of a collection of individual boxes.

Speed of innovation and faster time to market:

Once their backbone network was OpenFlow enabled, it took merely 2 months to roll out a production-grade centralized TE solution on it.

Google could perform software upgrades and include new features “hitlessly,” that is, without incurring packet losses or capacity degradations, because in most instances the features do not “touch” the switch—they are completely handled in the decoupled control plane.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780124016736000179

Energy-Efficient Telecommunications

Daniel C. Kilper, Rodney S. Tucker, in Optical Fiber Telecommunications (Sixth Edition), 2013

17.6 Conclusion

The six installments of this series of books have highlighted the importance of optical fiber technologies in advanced telecommunications systems. Combined with advances in electronic switching and signal processing, and with advanced protocols and network management systems, today’s telecommunications network has reached high levels of capacity, reach, reliability, flexibility, and affordability to users. For many years, the prime drivers behind advances in telecommunications have been considerations of capacity and cost. But recently, concerns about the rising energy use of telecommunications networks have brought the issue of energy efficiency into the mix, both for equipment vendors and for network operators.

We have identified several reasons for this recent increase in interest in energy efficiency. First, as networking equipment such as optical transceivers and network switches and routers grows in capacity, there is a need to increase the density of active devices in order to maintain an acceptably small footprint for the equipment. This expanding density of devices has resulted in challenges associated with heat dissipation from equipment racks. Improved thermal engineering can help to alleviate some of these problems, but ultimately there is a need to improve the energy efficiency of the active devices. Second, operational expenses (OpEx) associated with energy consumption of equipment are becoming an increasingly important part of the total OpEx of network operators. In the past, energy costs were such a small portion of an operator’s total OpEx that many operators paid little or no attention to energy. But this is now rapidly changing. Third, as the telecommunications network continues to expand to satisfy the ever-increasing demand for new services, new applications, and to accommodate an increasing user base, the energy consumption of the network has a small but growing impact on global GHG emissions.

We further provided a detailed analysis of the energy use of the different core elements of a telecommunication system, including switching and transport. The basic energy relationships were used to describe a lower bound on the energy use of a minimal network based upon practical technologies today and anticipated in the next decade. This result was discussed relative to estimates for energy use in networks based on technology projections for commercial systems. The four orders of magnitude separating these trends depend not only on the technology efficiencies, but also on the many functional and performance requirements on commercial systems today. Thus, progress on energy-efficient telecommunications will require a combination of technology improvements together with new, intelligent, service aware capability that can realize essential performance or functionality at lower energy.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780123969606000171

Role of Blockchain Technology in IoT Applications

Aafaf Ouaddah, in Advances in Computers, 2019

2.3 Summary

In both the centralized approach and the decentralized approach with a trusted entity, all devices are identified, authenticated, and connected through cloud-based servers that offer huge processing and storage capacities. While this model has been used for decades to connect standard computing devices and will continue to fit small-scale IoT networks [32], it severely struggles to respond to the growing needs of the huge IoT ecosystems of tomorrow [33–35], for the following reasons.

Cost: Existing IoT solutions are expensive for two main reasons. (1) High maintenance cost: on the manufacturer's side, the centralized clouds, large server farms, and networking equipment are costly to maintain, given the need to distribute software updates to millions of devices for years after they have been discontinued [36]. (2) High infrastructure cost: with tens of billions of IoT devices, the infrastructure will have to cater to a very high volume of messages (communication costs), data generated by the devices (storage costs), and analytical processes (server costs).

Bottleneck and single point of failure: cloud servers and farms will remain a bottleneck and point of failure that can disrupt the entire network. This is particularly important when it is directly tied to critical IoT services such as healthcare services.

Scalability: within the centralized paradigm, cloud-based IoT application platforms acquire information from entities located in data acquisition networks, and provide raw data and services to other entities. These application platforms control the reception of the whole information flow. This centralized control creates a bottleneck to scaling IoT solutions to the exponentially growing number of devices and the amount of data generated and processed by those devices (i.e., the concept of “Big Data”).

Insufficient security: The tremendous amount of data collected from millions of devices raises information security and privacy concerns for individuals, corporations, and governments alike. As proven by recent denial-of-service attacks on IoT devices [37], the huge number of low-cost and insecure devices connected to the internet is proving to be a major challenge in assuring IoT security.

Privacy breaches and lack of transparency: in the centralized models, from the consumer's side, there is an undeniable lack of trust in the service providers that get access to the data collected by billions of entities. There is a need for a “security through transparency” approach allowing users to retain their anonymity in this super-connected world.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/S0065245818300676
