A number of tools exist to generate computer network diagrams. Broadly, there are four types of tools that help create network maps and diagrams:
Hybrid tools
Network Mapping tools
Network Monitoring tools
Drawing tools
Network mapping and drawing software helps IT systems managers understand the hardware and software services on a network and how they are interconnected. Network maps and diagrams are a component of network documentation. They are artifacts required to manage IT systems' uptime, performance, and security risks, and to plan network changes and upgrades.
== Hybrid tools ==
These tools combine capabilities of drawing tools and network monitoring tools. They are more specialized than general drawing tools, giving network engineers and IT systems administrators a higher level of automation and the ability to develop more detailed network topologies and diagrams. Typical capabilities include, but are not limited to:
Displaying port / interface information on connections between devices on the maps
Visualizing VLANs / subnets
Visualizing virtual servers and storage
Visualizing flow of network traffic across devices and networks
Displaying WAN and LAN maps by location
Importing network configuration files to generate topologies automatically
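To illustrate the last capability above, here is a minimal sketch in Python of turning an exported neighbor table into an adjacency list that a topology renderer could consume. The input format is invented for the example and is not any real vendor's export syntax.

```python
# Build a topology (adjacency list) from a simplified, hypothetical neighbor
# table, such as one might export from collected LLDP/CDP data.
from collections import defaultdict

def build_topology(neighbor_lines):
    """Each line: '<local_device> <local_port> <remote_device> <remote_port>'."""
    topology = defaultdict(list)
    for line in neighbor_lines:
        local_dev, local_port, remote_dev, remote_port = line.split()
        topology[local_dev].append((local_port, remote_dev, remote_port))
    return dict(topology)

lines = [
    "core-sw1 Gi0/1 access-sw1 Gi0/24",
    "core-sw1 Gi0/2 access-sw2 Gi0/24",
    "access-sw1 Gi0/1 host-a eth0",
]
print(build_topology(lines))
```

A real hybrid tool would parse genuine device configurations or live discovery data, but the core step of deriving connections between named devices and ports is the same.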
== Network mapping tools ==
These tools are specifically designed to generate network topology maps automatically by scanning the network using network discovery protocols. Some of these tools integrate with documentation and monitoring tools. Typical capabilities include, but are not limited to:
Automatically scanning the network using SNMP, SSH, WMI, etc.
Scanning Windows and Unix servers
Scanning virtual hosts
Scanning routing protocols
Performing scheduled scans
Tracking changes to the network
Notifying users of changes to the network
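As an illustration of the scanning step, the following Python sketch enumerates the candidate host addresses of a subnet using only the standard library. A real mapping tool would then query each address via SNMP, SSH, or WMI; that part is omitted here.

```python
# Enumerate the usable host addresses of a subnet -- the address-generation
# step that precedes probing each host with a discovery protocol.
import ipaddress

def candidate_hosts(cidr):
    """Return the usable host addresses in a subnet, as strings."""
    return [str(ip) for ip in ipaddress.ip_network(cidr).hosts()]

print(candidate_hosts("192.168.1.0/29"))  # the 6 usable addresses .1 through .6
```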
== Network monitoring tools ==
Some network monitoring tools generate visual maps by automatically scanning the network using network discovery protocols. The maps are well suited to viewing network monitoring status and issues at a glance. Typical capabilities include, but are not limited to:
Automatically scanning the network using SNMP, WMI, etc.
Scanning Windows and Unix servers
Scanning virtual hosts
Scanning routing protocols
Scanning connection speeds
Performing scheduled scans
Tracking changes to the network
== Drawing tools ==
These tools help users create network topology diagrams by adding icons to a canvas and drawing lines and connectors between nodes. This category of tools is similar to general drawing and painting tools. Typical capabilities include, but are not limited to:
Libraries of icons for devices
Ability to add shapes and annotations to maps
Ability to create free-form diagrams
== List of network monitoring tools that generate network maps ==
Some notable tools (may not be an exhaustive list):
== List of drawing tools ==
Some notable tools (may not be an exhaustive list):
== See also ==
Computer network diagram
== External links ==
"Cisco Brand Center / Network Topology Icons". www.cisco.com. Retrieved 2018-04-09.
"Cisco Unified Communications System for IP Telephony: Microsoft Visio network topology diagrams (resp. diagram templates)". www.cisco.com. Retrieved 2018-04-09.
In computer networking, a reliable protocol is a communication protocol that notifies the sender whether or not the delivery of data to intended recipients was successful. Reliability is a synonym for assurance, which is the term used by the ITU and ATM Forum, and leads to fault-tolerant messaging.
Reliable protocols typically incur more overhead than unreliable protocols, and as a result, function more slowly and with less scalability. This often is not an issue for unicast protocols, but it may become a problem for reliable multicast protocols.
Transmission Control Protocol (TCP), the main protocol used on the Internet, is a reliable unicast protocol; it provides the abstraction of a reliable byte stream to applications. UDP is an unreliable protocol and is often used in computer games, streaming media or in other situations where speed is an issue and some data loss may be tolerated because of the transitory nature of the data.
Often, a reliable unicast protocol is also connection oriented. For example, TCP is connection oriented, with the virtual-circuit ID consisting of source and destination IP addresses and port numbers. However, some unreliable protocols are connection oriented, such as Asynchronous Transfer Mode and Frame Relay. In addition, some connectionless protocols, such as IEEE 802.11, are reliable.
== History ==
Building on the packet switching concepts proposed by Donald Davies, the first communication protocol on the ARPANET was a reliable packet delivery procedure to connect its hosts via the 1822 interface. A host computer simply arranged the data in the correct packet format, inserted the address of the destination host computer, and sent the message across the interface to its connected Interface Message Processor (IMP). Once the message was delivered to the destination host, an acknowledgment was delivered to the sending host. If the network could not deliver the message, the IMP would send an error message back to the sending host.
Meanwhile, the developers of CYCLADES and of ALOHAnet demonstrated that it was possible to build an effective computer network without providing reliable packet transmission. This lesson was later embraced by the designers of Ethernet.
If a network does not guarantee packet delivery, then it becomes the host's responsibility to provide reliability by detecting and retransmitting lost packets. Subsequent experience on the ARPANET indicated that the network itself could not reliably detect all packet delivery failures, and this pushed responsibility for error detection onto the sending host in any case. This led to the development of the end-to-end principle, which is one of the Internet's fundamental design principles.
== Reliability properties ==
A reliable service is one that notifies the user if delivery fails, while an unreliable one does not notify the user if delivery fails. For example, Internet Protocol (IP) provides an unreliable service. Together, Transmission Control Protocol (TCP) and IP provide a reliable service, whereas User Datagram Protocol (UDP) and IP provide an unreliable one.
In the context of distributed protocols, reliability properties specify the guarantees that the protocol provides with respect to the delivery of messages to the intended recipient(s).
An example of a reliability property for a unicast protocol is "at least once", i.e. at least one copy of the message is guaranteed to be delivered to the recipient.
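A toy simulation can make the "at least once" property concrete: the sender retransmits until it sees an acknowledgment, so a lost acknowledgment (as opposed to lost data) produces a duplicate at the receiver. This sketch is an illustration, not any particular protocol.

```python
# Simulate "at least once" delivery. The sender retransmits until an ACK
# arrives. If the data frame is lost, nothing is delivered and the sender
# retries; if only the ACK is lost, the receiver ends up with a duplicate --
# hence "at least once", not "exactly once".
def deliver_at_least_once(message, data_lost, ack_lost):
    """data_lost / ack_lost: per-attempt loss flags (False once exhausted)."""
    received = []
    attempt = 0
    while True:
        d_lost = data_lost[attempt] if attempt < len(data_lost) else False
        a_lost = ack_lost[attempt] if attempt < len(ack_lost) else False
        attempt += 1
        if d_lost:
            continue                  # data frame lost: no copy, no ACK, retry
        received.append(message)      # receiver now has a copy and sends an ACK
        if not a_lost:
            return received, attempt  # ACK made it back: sender stops
        # ACK lost: sender times out and retransmits

print(deliver_at_least_once("m1", data_lost=[], ack_lost=[True]))  # (['m1', 'm1'], 2)
```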
Reliability properties for multicast protocols can be expressed on a per-recipient basis (simple reliability properties), or they may relate the fact of delivery or the order of delivery among the different recipients (strong reliability properties). In the context of multicast protocols, strong reliability properties express the guarantees that the protocol provides with respect to the delivery of messages to different recipients.
An example of a strong reliability property is last copy recall, meaning that as long as at least a single copy of a message remains available at any of the recipients, every other recipient that does not fail eventually also receives a copy. Strong reliability properties such as this one typically require that messages are retransmitted or forwarded among the recipients.
An example of a reliability property stronger than last copy recall is atomicity. The property states that if at least a single copy of a message has been delivered to a recipient, all other recipients will eventually receive a copy of the message. In other words, each message is always delivered to either all or none of the recipients.
One of the most complex strong reliability properties is virtual synchrony.
Reliable messaging is the concept of message passing across an unreliable infrastructure whilst being able to make certain guarantees about the successful transmission of the messages. For example, that if the message is delivered, it is delivered at most once, or that all messages successfully delivered arrive in a particular order.
Reliable delivery can be contrasted with best-effort delivery, where there is no guarantee that messages will be delivered quickly, in order, or at all.
== Implementations ==
A reliable delivery protocol can be built on an unreliable protocol. An extremely common example is the layering of Transmission Control Protocol on the Internet Protocol, a combination known as TCP/IP.
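The layering idea can be sketched with a stop-and-wait (alternating-bit) scheme over a simulated lossy channel: sequence numbers let the receiver discard duplicates caused by retransmission, yielding in-order, exactly-once delivery on top of an unreliable medium. This is a didactic toy, not TCP.

```python
# Stop-and-wait reliability over a lossy channel. Every message is
# retransmitted until acknowledged; the receiver uses the sequence number
# to drop duplicates, so the delivered stream is complete and in order.
import random

def send_reliably(messages, drop_prob=0.3, seed=42):
    rng = random.Random(seed)            # seeded so the run is repeatable
    delivered, expected_seq = [], 0
    for seq, msg in enumerate(messages):
        while True:
            if rng.random() < drop_prob:  # data frame lost in transit
                continue                  # sender times out and retransmits
            if seq == expected_seq:       # new frame: accept it
                delivered.append(msg)
                expected_seq += 1
            # duplicate frames (seq < expected_seq) are silently discarded;
            # in both cases the receiver (re)sends an ACK
            if rng.random() < drop_prob:  # ACK lost: sender retransmits
                continue
            break                         # ACK received: move to next message
    return delivered

print(send_reliably(["a", "b", "c"]))  # ['a', 'b', 'c'] despite simulated losses
```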
Strong reliability properties are offered by group communication systems (GCSs) such as IS-IS, Appia framework, JGroups or QuickSilver Scalable Multicast. The QuickSilver Properties Framework is a flexible platform that allows strong reliability properties to be expressed in a purely declarative manner, using a simple rule-based language, and automatically translated into a hierarchical protocol.
One protocol that implements reliable messaging is WS-ReliableMessaging, which handles reliable delivery of SOAP messages.
The ATM Service-Specific Coordination Function provides for transparent assured delivery with AAL5.
IEEE 802.11 attempts to provide reliable service for all traffic. The sending station will resend a frame if the sending station does not receive an ACK frame within a predetermined period of time.
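The 802.11-style retry rule can be sketched as follows; the retry limit and outcome encoding are made up for the illustration and are not taken from the standard.

```python
# Resend a frame until an ACK arrives, giving up after a retry limit.
def send_with_retries(ack_outcomes, retry_limit=4):
    """ack_outcomes: True if an ACK arrives for that (re)transmission."""
    for attempt, acked in enumerate(ack_outcomes[:retry_limit + 1], start=1):
        if acked:
            return ("delivered", attempt)  # ACK seen: stop retransmitting
    return ("failed", min(len(ack_outcomes), retry_limit + 1))

print(send_with_retries([False, False, True]))  # ('delivered', 3)
```

Note that a bounded retry limit means 802.11's reliability is best-effort at the link layer: after the limit is reached, recovery is left to higher layers.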
== Real-time systems ==
There is, however, a problem with the definition of reliability as "delivery or notification of failure" in real-time computing. In such systems, failure to deliver the real-time data will adversely affect the performance of the systems, and some systems, e.g. safety-critical, safety-involved, and some secure mission-critical systems, must be proved to perform at some specified minimum level. This, in turn, requires that a specified minimum reliability for the delivery of the critical data be met. Therefore, in these cases, it is only the delivery that matters; notification of the failure to deliver does not ameliorate the failure. In hard real-time systems, all data must be delivered by the deadline or it is considered a system failure. In firm real-time systems, late data is still valueless, but the system can tolerate some amount of late or missing data.
There are a number of protocols that are capable of addressing real-time requirements for reliable delivery and timeliness:
MIL-STD-1553B and STANAG 3910 are well-known examples of such timely and reliable protocols for avionic data buses. MIL-1553 uses a 1 Mbit/s shared media for the transmission of data and the control of these transmissions, and is widely used in federated military avionics systems. It uses a bus controller (BC) to command the connected remote terminals (RTs) to receive or transmit this data. The BC can, therefore, ensure that there will be no congestion, and transfers are always timely. The MIL-1553 protocol also allows for automatic retries that can still ensure timely delivery and increase the reliability above that of the physical layer. STANAG 3910, also known as EFABus in its use on the Eurofighter Typhoon, is, in effect, a version of MIL-1553 augmented with a 20 Mbit/s shared media bus for data transfers, retaining the 1 Mbit/s shared media bus for control purposes.
The Asynchronous Transfer Mode (ATM), the Avionics Full-Duplex Switched Ethernet (AFDX), and Time Triggered Ethernet (TTEthernet) are examples of packet-switched networks protocols where the timeliness and reliability of data transfers can be assured by the network. AFDX and TTEthernet are also based on IEEE 802.3 Ethernet, though not entirely compatible with it.
ATM uses connection-oriented virtual channels (VCs) which have fully deterministic paths through the network, and usage and network parameter control (UPC/NPC), which are implemented within the network, to limit the traffic on each VC separately. This allows the usage of the shared resources (switch buffers) in the network to be calculated from the parameters of the traffic to be carried in advance, i.e. at system design time. That they are implemented by the network means that these calculations remain valid even when other users of the network behave in unexpected ways, i.e. transmit more data than they are expected to. The calculated usages can then be compared with the capacities of these resources to show that, given the constraints on the routes and the bandwidths of these connections, the resource used for these transfers will never be over-subscribed. These transfers will therefore never be affected by congestion and there will be no losses due to this effect. Then, from the predicted maximum usages of the switch buffers, the maximum delay through the network can also be predicted. However, for the reliability and timeliness to be proved, and for the proofs to be tolerant of faults in and malicious actions by the equipment connected to the network, the calculations of these resource usages cannot be based on any parameters that are not actively enforced by the network, i.e. they cannot be based on what the sources of the traffic are expected to do or on statistical analyses of the traffic characteristics (see network calculus).
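Usage parameter control of this kind is often explained as a token-bucket policer: cells conforming to the contracted rate pass, while excess cells are dropped (or tagged), so downstream buffer usage can be bounded at design time. The sketch below is illustrative and does not reproduce the formal GCRA of the ATM standard.

```python
# Token-bucket policing sketch: the bucket refills at the contracted rate
# up to a burst depth; each forwarded cell spends one token.
def police(cell_times, rate, burst):
    """cell_times: arrival times (seconds); rate: tokens/sec; burst: depth."""
    tokens, last_t, passed = float(burst), 0.0, []
    for t in cell_times:
        tokens = min(burst, tokens + (t - last_t) * rate)  # refill since last cell
        last_t = t
        if tokens >= 1.0:        # conforming cell: spend a token, forward it
            tokens -= 1.0
            passed.append(t)
        # non-conforming cell: dropped (or tagged) by the network
    return passed

# A burst of 4 closely spaced cells against rate=1 cell/s, burst=2, then a
# late cell after the bucket has refilled:
print(police([0.0, 0.1, 0.2, 0.3, 4.0], rate=1.0, burst=2))  # [0.0, 0.1, 4.0]
```

Because the policer is enforced inside the network, the worst-case buffer occupancy it implies holds regardless of how a misbehaving source actually transmits, which is exactly the property the proof of timeliness relies on.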
AFDX uses frequency-domain bandwidth allocation and traffic policing, which allow the traffic on each virtual link to be limited so that the requirements for shared resources can be predicted, congestion prevented, and the critical data thus proved to be unaffected. However, the techniques for predicting the resource requirements and proving that congestion is prevented are not part of the AFDX standard.
TTEthernet provides the lowest possible latency in transferring data across the network by using time-domain control methods – each time triggered transfer is scheduled at a specific time so that contention for shared resources is controlled and thus the possibility of congestion is eliminated. The switches in the network enforce this timing to provide tolerance of faults in, and malicious actions on the part of, the other connected equipment. However, "synchronized local clocks are the fundamental prerequisite for time-triggered communication". This is because the sources of critical data will have to have the same view of time as the switch, in order that they can transmit at the correct time and the switch will see this as correct. This also requires that the sequence with which a critical transfer is scheduled has to be predictable to both source and switch. This, in turn, will limit the transmission schedule to a highly deterministic one, e.g. the cyclic executive.
However, low latency in transferring data over the bus or network does not necessarily translate into low transport delays between the application processes that source and sink this data. This is especially true where the transfers over the bus or network are cyclically scheduled (as is commonly the case with MIL-STD-1553B and STANAG 3910, and necessarily so with AFDX and TTEthernet) but the application processes are not synchronized with this schedule.
With both AFDX and TTEthernet, there are additional functions required of the interfaces, e.g. AFDX's Bandwidth Allocation Gap control, and TTEthernet's requirement for very close synchronization of the sources of time-triggered data, that make it difficult to use standard Ethernet interfaces. Other methods for control of the traffic in the network that would allow the use of such standard IEEE 802.3 network interfaces are a subject of current research.
== See also ==
Robustness of complex networks – Ability of a complex network to withstand failures and perturbations
Efficiency (network science) – measure of how efficiently a network exchanges information
Cascading failure – Systemic risk of failure
== References ==
Comparison of user features of operating systems refers to a comparison of the general user features of major operating systems in a narrative format. It does not encompass a full exhaustive comparison or description of all technical details of all operating systems. It is a comparison of basic roles and the most prominent features. It also includes the most important features of the operating system's origins, historical development, and role.
== Overview ==
An operating system (OS) is system software that manages computer hardware, software resources, and provides common services for computer programs.
Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources.
For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is usually executed directly by the hardware and frequently makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers.
As of June 2024, the dominant general-purpose desktop operating system is Microsoft Windows with a market share of around 72.91%. macOS by Apple Inc. is in second place (14.93%), and the varieties of Linux are collectively in third place (4.04%). In the mobile sector, including both smartphones and tablets, Android is dominant with a market share of 71%, followed by Apple's iOS with 28%; for smartphones alone, Android has 72% and iOS has 28%. Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems (special-purpose operating systems), such as embedded and real-time systems, exist for many applications. Security-focused operating systems also exist. Some operating systems have low system requirements (e.g. lightweight Linux distributions), while others may have higher system requirements.
Some operating systems require installation or may come pre-installed on purchased computers (OEM installation), whereas others may run directly from media (e.g. a live CD) or flash memory (e.g. a USB stick).
== MS-DOS ==
=== Overview ===
MS-DOS (acronym for Microsoft Disk Operating System) is an operating system for x86-based personal computers mostly developed by Microsoft. Collectively, MS-DOS, its rebranding as IBM PC DOS, and some operating systems attempting to be compatible with MS-DOS, are sometimes referred to as "DOS" (which is also the generic acronym for disk operating system). MS-DOS was the main operating system for IBM PC compatible personal computers during the 1980s, from which point it was gradually superseded by operating systems offering a graphical user interface (GUI), in various generations of the graphical Microsoft Windows operating system.
IBM licensed and re-released it in 1981 as PC DOS 1.0 for use in its PCs. Although MS-DOS and PC DOS were initially developed in parallel by Microsoft and IBM, the two products diverged after twelve years, in 1993, with recognizable differences in compatibility, syntax, and capabilities.
During its lifetime, several competing products were released for the x86 platform, and MS-DOS went through eight versions, until development ceased in 2000. Initially, MS-DOS was targeted at Intel 8086 processors running on computer hardware using floppy disks to store and access not only the operating system, but application software and user data as well. Progressive version releases delivered support for other mass storage media in ever greater sizes and formats, along with added feature support for newer processors and rapidly evolving computer architectures. Ultimately, it was the key product in Microsoft's development from a programming language company to a diverse software development firm, providing the company with essential revenue and marketing resources. It was also the underlying basic operating system on which early versions of Windows ran as a GUI.
== Microsoft Windows ==
=== Overview ===
Microsoft Windows, commonly referred to as Windows, is a group of several proprietary graphical operating system families, all of which are developed and marketed by Microsoft. Each family caters to a certain sector of the computing industry. Active Microsoft Windows families include Windows NT and Windows IoT; these may encompass subfamilies (e.g. Windows Server or Windows Embedded Compact (Windows CE)). Defunct Microsoft Windows families include Windows 9x, Windows Mobile, and Windows Phone.
Microsoft announced an operating environment named Windows on 10 November 1983, as a graphical operating system shell for MS-DOS in response to the growing interest in graphical user interfaces (GUIs); Windows 1.0 first shipped on 20 November 1985. Microsoft Windows came to dominate the world's personal computer (PC) market with over 90% market share, overtaking Mac OS, which had been introduced in 1984. By 2020, however, Microsoft had lost its dominance of the overall consumer operating system market: in its home market, the US, Windows was down to a 30% share across all devices, lower than Apple's 31% (counting desktop operating systems only, Windows held 65% against Apple's 28%); globally, where Google's Android leads, Windows held 32% across all devices (77% for desktops).
Apple came to see Windows as an unfair encroachment on their innovation in GUI development as implemented on products such as the Lisa and Macintosh (eventually settled in court in Microsoft's favor in 1993). As of January 2023, on PCs, Windows is still the most popular operating system in all countries. However, in 2014, Microsoft admitted losing the majority of the overall operating system market to Android, because of the massive growth in sales of Android smartphones. In 2014, the number of Windows devices sold was less than 25% that of Android devices sold. This comparison, however, may not be fully relevant, as the two operating systems traditionally target different platforms. Still, figures for server use of Windows (which are comparable across competitors) show about a one-third market share, similar to that for end-user use.
As of October 2020, the most recent version of Windows for PCs, tablets and embedded devices is Windows 10, version 20H2. The most recent version for server computers is Windows Server, version 20H2. A specialized version of Windows also runs on the Xbox One video game console.
=== Windows 95 ===
Windows 95 introduced a redesigned shell based around a desktop metaphor; file shortcuts (also known as shell links) were introduced, and the desktop was re-purposed to hold shortcuts to applications, files and folders, reminiscent of Mac OS.
In Windows 3.1 the desktop was used to display icons of running applications. In Windows 95, the currently running applications were displayed as buttons on a taskbar across the bottom of the screen. The taskbar also contained a notification area used to display icons for background applications, a volume control and the current time.
The Start menu, invoked by clicking the "Start" button on the taskbar or by pressing the Windows key, was introduced as an additional means of launching applications or opening documents. While maintaining the program groups used by its predecessor Program Manager, it also displayed applications within cascading sub-menus.
The previous File Manager program was replaced by Windows Explorer, and the Explorer-based Control Panel and several other special folders were added, such as My Computer, Dial-Up Networking, Recycle Bin, Network Neighborhood, My Documents, Recent documents, Fonts, Printers, and My Briefcase. AutoRun was introduced for CD drives.
The user interface looked dramatically different from prior versions of Windows, but its design language did not have a special name like Metro, Aqua or Material Design. Internally it was called "the new shell" and later simply "the shell". The subproject within Microsoft to develop the new shell was internally known as "Stimpy".
In 1994, Microsoft designers Mark Malamud and Erik Gavriluk approached Brian Eno to compose music for the Windows 95 project. The result was The Microsoft Sound, the six-second start-up sound of the Windows 95 operating system, first included in May 1995 in the Windows 95 May Test Release, build 468.
When released for Windows 95 and Windows NT 4.0, Internet Explorer 4 came with an optional Windows Desktop Update, which modified the shell to provide several additional updates to Windows Explorer, including a Quick Launch toolbar, and new features integrated with Internet Explorer, such as Active Desktop (which allowed Internet content to be displayed directly on the desktop).
Some of the user interface elements introduced in Windows 95, such as the desktop, taskbar, Start menu and Windows Explorer file manager, remained fundamentally unchanged on future versions of Windows.
=== Windows 10 ===
A new iteration of the Start menu is used on the Windows 10 desktop, with a list of places and other options on the left side, and tiles representing applications on the right. The menu can be resized, and expanded into a full-screen display, which is the default option in Tablet mode. A new virtual desktop system was added. A feature known as Task View displays all open windows and allows users to switch between them, or switch between multiple workspaces. Universal apps, which previously could be used only in full screen mode, can now be used in self-contained windows similarly to other programs. Program windows can now be snapped to quadrants of the screen by dragging them to the corner. When a window is snapped to one side of the screen, Task View appears and the user is prompted to choose a second window to fill the unused side of the screen (called "Snap Assist"). Windows' system icons were also changed.
Charms have been removed; their functionality in universal apps is accessed from an App commands menu on their title bar. In its place is Action Center, which displays notifications and settings toggles. It is accessed by clicking an icon in the notification area, or dragging from the right of the screen. Notifications can be synced between multiple devices. The Settings app (formerly PC Settings) was refreshed and now includes more options that were previously exclusive to the desktop Control Panel.
Windows 10 is designed to adapt its user interface based on the type of device being used and available input methods. It offers two separate user interface modes: a user interface optimized for mouse and keyboard, and a "Tablet mode" designed for touchscreens. Users can toggle between these two modes at any time, and Windows can prompt or automatically switch when certain events occur, such as disabling Tablet mode on a tablet if a keyboard or mouse is plugged in, or when a 2-in-1 PC is switched to its laptop state. In Tablet mode, programs default to a maximized view, and the taskbar contains a back button and hides buttons for opened or pinned programs by default; Task View is used instead to switch between programs. The full screen Start menu is used in this mode, similarly to Windows 8, but scrolls vertically instead of horizontally.
== Apple Macintosh ==
=== Apple Classic MacOS ===
==== Overview ====
The classic Mac OS (System Software) is the series of operating systems developed for the Macintosh family of personal computers by Apple Inc. from 1984 to 2001, starting with System 1 and ending with Mac OS 9. The Macintosh operating system is credited with having popularized the graphical user interface concept. It was included with every Macintosh that was sold during the era in which it was developed, and many updates to the system software were done in conjunction with the introduction of new Macintosh systems.
Apple released the original Macintosh on 24 January 1984. The first version of the system software, which had no official name, was partially based on the Lisa OS, which Apple previously released for the Lisa computer in 1983. As part of an agreement allowing Xerox to buy shares in Apple at a favorable price, it also used concepts from the Xerox PARC Alto computer, which former Apple CEO Steve Jobs and other Lisa team members had previewed. This operating system consisted of the Macintosh Toolbox ROM and the "System Folder", a set of files that were loaded from disk. The name Macintosh System Software came into use in 1987 with System 5. Apple rebranded the system as Mac OS in 1996, starting officially with version 7.6, due in part to its Macintosh clone program. That program ended after the release of Mac OS 8 in 1997. The last major release of the system was Mac OS 9 in 1999.
Initial versions of the System Software ran one application at a time. With the Macintosh 512K, a system extension called the Switcher was developed to use the additional memory to allow multiple programs to remain loaded. Each loaded program used its memory exclusively, and a program, including the Finder's desktop, appeared only when activated by the Switcher. With the Switcher, the now familiar Clipboard feature allowed cut and paste between the loaded programs across switches, including the desktop.
With the introduction of System 5, a cooperative multitasking extension called MultiFinder was added, which allowed content in windows of each program to remain in a layered view over the desktop, and was later integrated into System 7 as part of the operating system along with support for virtual memory. By the mid-1990s, however, contemporary operating systems such as Windows NT, OS/2, and NeXTSTEP had all brought pre-emptive multitasking, protected memory, access controls, and multi-user capabilities to desktop computers. The Macintosh's limited memory management, and its susceptibility to conflicts among extensions that provide additional functionality such as networking or support for a particular device, led to significant criticism of the operating system and were a factor in Apple's declining market share at the time.
After two aborted attempts at creating a successor to the Macintosh System Software, called Taligent and Copland, and a four-year development effort spearheaded by Steve Jobs' return to Apple in 1997, Apple replaced Mac OS in 2001 with a new operating system named Mac OS X, the X signifying the underlying Unix base it shared with the NeXTSTEP operating system Jobs had developed at NeXT. It retained most of the user interface design elements of the classic Mac OS, and there was some overlap of application frameworks for compatibility, but the two operating systems otherwise have completely different origins and architectures.
The final updates to Mac OS 9 released in 2001 provided interoperability with Mac OS X. The name "Classic" that now signifies the historical Mac OS as a whole is a reference to the Classic Environment, a compatibility layer that helped ease the transition to Mac OS X (now macOS).
=== Apple macOS ===
==== Overview ====
macOS (previously Mac OS X and later OS X) is a series of proprietary graphical operating systems developed and marketed by Apple Inc. since 2001. It is the primary operating system for Apple's Mac computers. Within the market of desktop, laptop and home computers, and by web usage, it is the second most widely used desktop OS, after Microsoft Windows.
macOS is the direct successor to the classic Mac OS, the line of Macintosh operating systems with nine releases from 1984 to 1999. macOS adopted a Unix-based core and inherited technologies developed between 1985 and 1997 at NeXT, the company that Apple co-founder Steve Jobs created after leaving Apple in 1985. Releases from Mac OS X 10.5 Leopard and thereafter are UNIX 03 certified. Apple's mobile operating system, iOS, has been considered a variant of macOS.
Mac OS X 10.0 (code named Cheetah) was the first major release and version of macOS, Apple's desktop and server operating system. Mac OS X 10.0 was released on 24 March 2001 for a price of US$129. It was the successor of the Mac OS X Public Beta and the predecessor of Mac OS X 10.1 (code named Puma).
Mac OS X 10.0 was a radical departure from the classic Mac OS and was Apple's long-awaited answer for a next generation Macintosh operating system. It introduced a brand new code base completely separate from Mac OS 9's as well as all previous Apple operating systems, and had a new Unix-like core, Darwin, which features a new memory management system. Unlike releases of Mac OS X 10.2 to 10.8, the operating system was not externally marketed with the name of a big cat.
==== Apple macOS Components ====
The Finder is a file browser allowing quick access to all areas of the computer, which has been modified throughout subsequent releases of macOS. Quick Look has been part of the Finder since version 10.5. It allows for dynamic previews of files, including videos and multi-page documents without opening any other applications. Spotlight, a file searching technology which has been integrated into the Finder since version 10.4, allows rapid real-time searches of data files; mail messages; photos; and other information based on item properties (metadata) and/or content. macOS makes use of a Dock, which holds file and folder shortcuts as well as minimized windows.
Apple added Exposé in version 10.3 (called Mission Control since version 10.7), a feature which includes three functions to help accessibility between windows and desktop. Its functions are to instantly display all open windows as thumbnails for easy navigation to different tasks, display all open windows as thumbnails from the current application, and hide all windows to access the desktop. FileVault is optional encryption of the user's files with the 128-bit Advanced Encryption Standard (AES-128).
Features introduced in version 10.4 include Automator, an application designed to create an automatic workflow for different tasks; Dashboard, a full-screen group of small applications called desktop widgets that can be called up and dismissed in one keystroke; and Front Row, a media viewer interface accessed by the Apple Remote. Sync Services allows applications to access a centralized extensible database for various elements of user data, including calendar and contact items. The operating system then managed conflicting edits and data consistency.
All system icons are scalable up to 512×512 pixels as of version 10.5 to accommodate various places where they appear in larger size, including for example the Cover Flow view, a three-dimensional graphical user interface included with iTunes, the Finder, and other Apple products for visually skimming through files and digital media libraries via cover artwork. That version also introduced Spaces, a virtual desktop implementation which enables the user to have more than one desktop and display them in an Exposé-like interface; an automatic backup technology called Time Machine, which allows users to view and restore previous versions of files and application data; and Screen Sharing was built in for the first time.
In more recent releases, Apple has developed support for emoji characters by including the proprietary Apple Color Emoji font. Apple has also connected macOS with social networks such as Twitter and Facebook through the addition of share buttons for content such as pictures and text. Apple has brought several applications and features that originally debuted in iOS, its mobile operating system, to macOS in recent releases, notably the intelligent personal assistant Siri, which was introduced in version 10.12 of macOS.
== Unix and Unix-like systems ==
=== Unix ===
Unix (; trademarked as UNIX) is a family of multitasking, multiuser computer operating systems that derive from the original AT&T Unix, whose development started in the 1970s at the Bell Labs research center by Ken Thompson, Dennis Ritchie, and others.
Initially intended for use inside the Bell System, AT&T licensed Unix to outside parties in the late 1970s, leading to a variety of both academic and commercial Unix variants from vendors including University of California, Berkeley (BSD), Microsoft (Xenix), Sun Microsystems (SunOS/Solaris), HP/HPE (HP-UX), and IBM (AIX). In the early 1990s, AT&T sold its rights in Unix to Novell, which then sold its Unix business to the Santa Cruz Operation (SCO) in 1995. The UNIX trademark passed to The Open Group, an industry consortium founded in 1996, which allows the use of the mark for certified operating systems that comply with the Single UNIX Specification (SUS). However, Novell continues to own the Unix copyrights, which the SCO Group, Inc. v. Novell, Inc. court case (2010) confirmed.
Unix systems are characterized by a modular design that is sometimes called the "Unix philosophy". According to this philosophy, the operating system should provide a set of simple tools, each of which performs a limited, well-defined function. A unified filesystem (the Unix filesystem) and an inter-process communication mechanism known as "pipes" serve as the main means of communication, and a shell scripting and command language (the Unix shell) is used to combine the tools to perform complex workflows.
Unix distinguishes itself from its predecessors as the first portable operating system: almost the entire operating system is written in the C programming language, which allows Unix to operate on numerous platforms.
macOS, described above, is a Unix-like system, and, beginning with Mac OS X Leopard, is certified to comply with the SUS.
=== Linux ===
Linux is a family of open-source Unix-like operating systems based on the Linux kernel, an operating system kernel first released on 17 September 1991, by Linus Torvalds. Linux is typically packaged in a Linux distribution.
Distributions include the Linux kernel and supporting system software and libraries, many of which are provided by the GNU Project. Many Linux distributions use the word "Linux" in their name, but the Free Software Foundation uses the name "GNU/Linux" to emphasize the importance of GNU software, causing some controversy.
Popular Linux distributions include Debian, Fedora, and Ubuntu. Commercial distributions include Red Hat Enterprise Linux and SUSE Linux Enterprise Server. Desktop Linux distributions include a windowing system such as X11 or Wayland, and a desktop environment such as GNOME or KDE Plasma. Distributions intended for servers may omit graphics altogether, or include a solution stack such as LAMP. Because Linux is freely redistributable, anyone may create a distribution for any purpose.
Linux was originally developed for personal computers based on the Intel x86 architecture, but has since been ported to more platforms than any other operating system. Because of the dominance of the Linux-based Android on smartphones, as of January 2023, Linux also has the largest installed base of all general-purpose operating systems. Although it is, as of January 2023, used by only around 2.9 percent of desktop computers, the Chromebook, which runs the Linux kernel-based ChromeOS, dominates the US K–12 education market and represents nearly 20 percent of sub-$300 notebook sales in the US. Linux is the leading operating system on servers (over 96.4% of the top 1 million web servers' operating systems are Linux), leads other large systems such as mainframe computers, and is the only OS used on TOP500 supercomputers (since November 2017, having gradually eliminated all competitors).
Linux also runs on embedded systems, i.e. devices whose operating system is typically built into the firmware and is highly tailored to the system. This includes routers, automation controls, smart home technology (such as Google Nest), televisions (Samsung and LG Smart TVs use Tizen and WebOS, respectively), automobiles (for example, Tesla, Audi, Mercedes-Benz, Hyundai, and Toyota all rely on Linux), digital video recorders, video game consoles, and smartwatches. The Falcon 9's and the Dragon 2's avionics use a customized version of Linux.
Linux is one of the most prominent examples of free and open-source software collaboration. The source code may be used, modified and distributed commercially or non-commercially by anyone under the terms of its respective licenses, such as the GNU General Public License.
90% of all cloud infrastructure, including that of supercomputers and cloud providers, is powered by Linux, and 74% of smartphones in the world are Linux-based.
==== KDE Plasma 5 ====
KDE Plasma 5 is the fifth and current generation of the graphical workspaces environment created by KDE primarily for Linux systems. KDE Plasma 5 is the successor of KDE Plasma 4 and was first released on 15 July 2014. It includes a new default theme, known as "Breeze", as well as increased convergence across different devices. The graphical interface was fully migrated to QML, which uses OpenGL for hardware acceleration; this resulted in better performance and reduced power consumption.
=== FreeBSD ===
FreeBSD is a free and open-source Unix-like operating system descended from the Berkeley Software Distribution (BSD), which was based on Research Unix. The first version of FreeBSD was released in 1993. In 2005, FreeBSD was the most popular open-source BSD operating system, accounting for more than three-quarters of all installed, permissively licensed BSD systems.
FreeBSD has similarities with Linux, with two major differences in scope and licensing. First, FreeBSD maintains a complete system: the project delivers a kernel, device drivers, userland utilities, and documentation, whereas Linux delivers only a kernel and drivers and relies on third parties for system software. Second, FreeBSD source code is generally released under a permissive BSD license, as opposed to the copyleft GPL used by Linux.
The FreeBSD project includes a security team overseeing all software shipped in the base distribution. A wide range of additional third-party applications may be installed using the pkg package management system or FreeBSD Ports, or by compiling source code.
Much of FreeBSD's codebase has become an integral part of other operating systems such as Darwin (the basis for macOS, iOS, iPadOS, watchOS, and tvOS), TrueNAS (an open-source NAS/SAN operating system), and the system software for the PlayStation 3 and PlayStation 4 game consoles.
== Google ChromeOS ==
ChromeOS (formerly Chrome OS, sometimes styled as chromeOS) is a Gentoo Linux-based operating system designed by Google. It is derived from the free software ChromiumOS and uses the Google Chrome web browser as its principal user interface. However, ChromeOS is proprietary software.
Google announced the project in July 2009, conceiving it as an operating system in which both applications and user data reside in the cloud: hence ChromeOS primarily runs web applications. Source code and a public demo came that November. The first ChromeOS laptop, known as a Chromebook, arrived in May 2011. Initial Chromebook shipments from Samsung and Acer occurred in July 2011.
ChromeOS has an integrated media player and file manager. It supports Chrome Apps, which resemble native applications, as well as remote access to the desktop. Reception was initially skeptical, with some observers arguing that a browser running on any operating system was functionally equivalent. As more ChromeOS machines have entered the market, the operating system is now seldom evaluated apart from the hardware that runs it.
Android applications started to become available for the operating system in 2014, and in 2016, access to Android apps in Google Play's entirety was introduced on supported ChromeOS devices. Support for a Linux terminal and applications, known as Project Crostini, was released to the stable channel in ChromeOS 69. This was made possible via a lightweight Linux kernel that runs containers inside a virtual machine.
ChromeOS is only available pre-installed on hardware from Google manufacturing partners, but there are unofficial methods that allow it to be installed in other equipment. Its open-source upstream, ChromiumOS, can be compiled from downloaded source code. Early on, Google provided design goals for ChromeOS, but has not otherwise released a technical description.
== See also ==
== Notes ==
== References ==
A transport network, or transportation network, is a network or graph in geographic space, describing an infrastructure that permits and constrains movement or flow.
Examples include but are not limited to road networks, railways, air routes, pipelines, aqueducts, and power lines. The digital representation of these networks, and the methods for their analysis, is a core part of spatial analysis, geographic information systems, public utilities, and transport engineering. Network analysis is an application of the theories and algorithms of graph theory and is a form of proximity analysis.
== History ==
The applicability of graph theory to geographic phenomena was recognized at an early date. Many of the early problems and theories undertaken by graph theorists were inspired by geographic situations, such as the Seven Bridges of Königsberg problem, which was one of the original foundations of graph theory when it was solved by Leonhard Euler in 1736.
In the 1970s, the connection was reestablished by the early developers of geographic information systems, who employed it in the topological data structures of polygons (which is not of relevance here) and in the analysis of transport networks. Early works, such as Tinkler (1977), focused mainly on simple schematic networks, likely due to the lack of significant volumes of linear data and the computational complexity of many of the algorithms. Full implementations of network analysis algorithms did not appear in GIS software until the 1990s, but quite advanced tools are generally available today.
== Network data ==
Network analysis requires detailed data representing the elements of the network and its properties. The core of a network dataset is a vector layer of polylines representing the paths of travel, either precise geographic routes or schematic diagrams, known as edges. In addition, information is needed on the network topology, representing the connections between the lines, thus enabling the transport from one line to another to be modeled. Typically, these connection points, or nodes, are included as an additional dataset.
Both the edges and nodes are attributed with properties related to the movement or flow:
Capacity, measurements of any limitation on the volume of flow allowed, such as the number of lanes in a road, telecommunications bandwidth, or pipe diameter.
Impedance, measurements of any resistance to flow or to the speed of flow, such as a speed limit or a forbidden turn direction at a street intersection.
Cost accumulated through individual travel along the edge or through the node, commonly elapsed time, in keeping with the principle of friction of distance. For example, a node in a street network may require a different amount of time to make a particular left turn or right turn. Such costs can vary over time, such as the pattern of travel time along an urban street depending on diurnal cycles of traffic volume.
Flow volume, measurements of the actual movement taking place. This may be specific time-encoded measurements collected using sensor networks such as traffic counters, or general trends over a period of time, such as Annual average daily traffic (AADT).
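These edge and node attributes map naturally onto a small data structure. A minimal sketch in Python; the field names and values are illustrative, not taken from any particular GIS schema:

```python
from dataclasses import dataclass

@dataclass
class Edge:
    """A directed network edge carrying the flow-related attributes above."""
    source: str
    target: str
    capacity: float     # limit on flow volume, e.g. number of lanes
    impedance: float    # resistance to flow, e.g. derived from a speed limit
    cost: float         # accumulated travel cost, e.g. minutes of travel time
    flow_volume: float  # measured movement, e.g. AADT for a road segment

# A tiny street network: two segments along a main street
edges = [
    Edge("A", "B", capacity=2, impedance=0.5, cost=1.5, flow_volume=12000),
    Edge("B", "C", capacity=2, impedance=0.5, cost=2.0, flow_volume=11000),
]

# Nodes (junctions) can carry turn costs, e.g. a left-turn penalty at B
node_costs = {"B": {"left": 0.5, "right": 0.1}}
```

The same split between edge attributes and node (turn) costs is what lets routing engines model a left turn as more expensive than a right turn at the same intersection.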
== Analysis methods ==
A wide range of methods, algorithms, and techniques have been developed for solving problems and tasks relating to network flow. Some of these are common to all types of transport networks, while others are specific to particular application domains. Many of these algorithms are implemented in commercial and open-source GIS software, such as GRASS GIS and the Network Analyst extension to Esri ArcGIS.
=== Optimal routing ===
One of the simplest and most common tasks in a network is to find the optimal route connecting two points along the network, with optimal defined as minimizing some form of cost, such as distance, energy expenditure, or time. A common example is finding directions in a street network, a feature of almost any web street mapping application such as Google Maps. The most popular method of solving this task, implemented in most GIS and mapping software, is Dijkstra's algorithm.
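The shortest-path search described above can be sketched with Dijkstra's algorithm in a few lines of Python; the street network and travel times below are invented for illustration:

```python
import heapq

def dijkstra(graph, start, goal):
    """Return (cost, path) of the least-cost route from start to goal.
    graph: dict mapping node -> list of (neighbor, edge_cost)."""
    queue = [(0.0, start, [start])]  # (accumulated cost, node, path so far)
    best = {}                        # cheapest known cost to each settled node
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in best and best[node] <= cost:
            continue  # already settled via a cheaper route
        best[node] = cost
        for neighbor, edge_cost in graph.get(node, []):
            heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
    return float("inf"), []

# A toy street network: travel times in minutes between intersections
streets = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("C", 1.0), ("D", 4.0)],
    "C": [("D", 1.0)],
}
cost, route = dijkstra(streets, "A", "D")  # 4.0 via A-B-C-D
```

The priority queue always expands the cheapest frontier node first, which is why the first time the goal is popped its cost is guaranteed minimal (for non-negative edge costs).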
In addition to the basic point-to-point routing, composite routing problems are also common. The Traveling salesman problem asks for the optimal (least distance/cost) ordering and route to reach a number of destinations; it is an NP-hard problem, but somewhat easier to solve in network space than unconstrained space due to the smaller solution set. The Vehicle routing problem is a generalization of this, allowing for multiple simultaneous routes to reach the destinations. The Route inspection or "Chinese Postman" problem asks for the optimal (least distance/cost) path that traverses every edge; a common application is the routing of garbage trucks. This turns out to be a much simpler problem to solve, with polynomial time algorithms.
=== Location analysis ===
This class of problems aims to find the optimal location for one or more facilities along the network, with optimal defined as minimizing the aggregate or mean travel cost to (or from) another set of points in the network. A common example is determining the location of a warehouse to minimize shipping costs to a set of retail outlets, or the location of a retail outlet to minimize the travel time from the residences of its potential customers. In unconstrained (cartesian coordinate) space, this is an NP-hard problem requiring heuristic solutions such as Lloyd's algorithm, but in a network space it can be solved deterministically.
Particular applications often add further constraints to the problem, such as the location of pre-existing or competing facilities, facility capacities, or maximum cost.
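Because a network restricts candidate sites to its nodes, the 1-median variant of this problem can be solved deterministically by scoring each node against precomputed shortest-path distances. A minimal sketch with made-up distances (in practice the distance table would come from repeated shortest-path runs):

```python
# Shortest-path distances between nodes of a small network (illustrative).
dist = {
    "A": {"A": 0, "B": 2, "C": 3},
    "B": {"A": 2, "B": 0, "C": 1},
    "C": {"A": 3, "B": 1, "C": 0},
}
demand = ["A", "B", "C"]  # locations of customers / retail outlets

def one_median(dist, demand):
    """Pick the node minimizing aggregate travel cost to all demand points."""
    return min(dist, key=lambda node: sum(dist[node][d] for d in demand))

best = one_median(dist, demand)  # "B": total cost 2+0+1=3 beats A (5) and C (4)
```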
=== Service areas ===
A network service area is analogous to a buffer in unconstrained space, a depiction of the area that can be reached from a point (typically a service facility) in less than a specified distance or other accumulated cost. For example, the preferred service area for a fire station would be the set of street segments it can reach in a small amount of time. When there are multiple facilities, each edge would be assigned to the nearest facility, producing a result analogous to a Voronoi diagram.
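A service area can be computed with a cost-limited variant of Dijkstra's traversal: expand outward from the facility and keep every node reached within the budget. A sketch with invented travel times:

```python
import heapq

def service_area(graph, facility, max_cost):
    """Return {node: cost} for all nodes reachable within max_cost.
    graph: dict mapping node -> list of (neighbor, edge_cost)."""
    reached = {}
    queue = [(0.0, facility)]
    while queue:
        cost, node = heapq.heappop(queue)
        if node in reached or cost > max_cost:
            continue  # already settled, or beyond the cost budget
        reached[node] = cost
        for neighbor, edge_cost in graph.get(node, []):
            heapq.heappush(queue, (cost + edge_cost, neighbor))
    return reached

# Street segments with traversal times (minutes); fire station at "S"
streets = {
    "S": [("A", 2.0), ("B", 3.0)],
    "A": [("C", 2.0)],
    "B": [("C", 1.0)],
    "C": [("D", 5.0)],
}
area = service_area(streets, "S", max_cost=4.0)  # reaches S, A, B, C but not D
```

Running this once per facility and assigning each node to the facility that reaches it cheapest yields the Voronoi-like partition mentioned above.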
=== Fault analysis ===
A common application in public utility networks is the identification of possible locations of faults or breaks in the network (which is often buried or otherwise difficult to directly observe), deduced from reports that can be easily located, such as customer complaints.
=== Transport engineering ===
Traffic has been studied extensively using statistical physics methods.
=== Vertical analysis ===
To ensure the railway system is as efficient as possible, a complexity/vertical analysis should also be undertaken. This analysis aids in evaluating future and existing systems, which is crucial to ensuring a system's sustainability (Bednar, 2022, pp. 75–76). Vertical analysis consists of knowing the operating activities (day-to-day operations) of the system, problem prevention, control activities, development of activities and coordination of activities.
== See also ==
Braess's paradox
Flow network
Heuristic routing
Interplanetary Transport Network
Network science
Percolation theory
Street network
Rail network
Highway dimension
Multimodal transport
Supply chain
Logistics
== References ==
Blockmodeling is a set of methods, or a coherent framework, used for analyzing social structure and for setting procedures for partitioning (clustering) a social network's units (nodes, vertices, actors), based on specific patterns that form a distinctive structure through interconnectivity. It is primarily used in statistics, machine learning and network science.
As an empirical procedure, blockmodeling assumes that all the units in a specific network can be grouped together to the extent to which they are equivalent. Equivalence can be structural, regular or generalized. Using blockmodeling, a network can be analyzed via newly created blockmodels, which transform a large and complex network into a smaller and more comprehensible one. At the same time, blockmodeling is used to operationalize social roles.
While some contend that blockmodeling is simply a collection of clustering methods, Bonacich and McConaghy state that "it is a theoretically grounded and algebraic approach to the analysis of the structure of relations". Blockmodeling's unique ability lies in the fact that it considers the structure not just as a set of direct relations, but also takes into account all other possible compound relations that are based on the direct ones.
The principles of blockmodeling were first introduced by François Lorrain and Harrison C. White in 1971. Blockmodeling is considered "an important set of network analytic tools", as it deals with the delineation of role structures (the well-defined places in social structures, also known as positions) and with discerning the fundamental structure of social networks. According to Batagelj, the primary "goal of blockmodeling is to reduce a large, potentially incoherent network to a smaller comprehensible structure that can be interpreted more readily". Blockmodeling was at first used for analysis in sociometry and psychometrics, but has since spread to other sciences.
== Definition ==
A network as a system is composed of (or defined by) two different sets: one set of units (nodes, vertices, actors) and one set of links between the units. Using both sets, it is possible to create a graph, describing the structure of the network.
During blockmodeling, the researcher is faced with two problems: how to partition the units (e.g., how to determine the clusters (or classes), that then form vertices in a blockmodel) and then how to determine the links in the blockmodel (and at the same time the values of these links).
In the social sciences, the networks are usually social networks, composed of several individuals (units) and selected social relationships among them (links). Real-world networks can be large and complex; blockmodeling is used to simplify them into smaller structures that can be easier to interpret. Specifically, blockmodeling partitions the units into clusters and then determines the ties among the clusters. At the same time, blockmodeling can be used to explain the social roles existing in the network, as it is assumed that the created cluster of units mimics (or is closely associated with) the units' social roles.
Blockmodeling can thus be defined as a set of approaches for partitioning units into clusters (also known as positions) and links into blocks, which are further defined by the newly obtained clusters. A block (also blockmodel) is defined as a submatrix that shows the interconnectivity (links) between nodes present in the same or different clusters. Each of these positions in the cluster is defined by a set of (in)direct ties to and from other social positions. These links (connections) can be directed or undirected; there can be multiple links between the same pair of objects, or the links can have weights on them. If there are no multiple links in a network, it is called a simple network.
A matrix representation of a graph is composed of ordered units, in rows and columns, based on their names. The ordered units with similar patterns of links are partitioned together in the same clusters. Clusters are then arranged so that units from the same clusters are placed next to each other, thus preserving interconnectivity. In the next step, the units (from the same clusters) are transformed into a blockmodel. With this, several blockmodels are usually formed, one being a core cluster and the others cohesive clusters; a core cluster is always connected to the cohesive ones, while cohesive clusters cannot be linked together. Clustering of nodes is based on equivalence, such as structural and regular equivalence. The primary objective of the matrix form is to visually present relations between the persons included in the cluster. These ties are coded dichotomously (as present or absent), with the rows in the matrix form indicating the source of the ties and the columns representing the destination of the ties.
Equivalence can have two basic approaches: the equivalent units have the same connection pattern to the same neighbors or these units have same or similar connection pattern to different neighbors. If the units are connected to the rest of network in identical ways, then they are structurally equivalent. Units can also be regularly equivalent, when they are equivalently connected to equivalent others.
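The structural-equivalence test described above can be checked directly on a binary adjacency matrix: two units are structurally equivalent when their ties to every other unit coincide. A minimal sketch on a toy network (the matrix is illustrative):

```python
def structurally_equivalent(A, i, j):
    """True if units i and j have identical ties to every other unit in
    adjacency matrix A (entries between i and j themselves are ignored)."""
    n = len(A)
    for k in range(n):
        if k in (i, j):
            continue
        # compare outgoing ties (rows) and incoming ties (columns) at unit k
        if A[i][k] != A[j][k] or A[k][i] != A[k][j]:
            return False
    return True

# Units 0 and 1 both send ties to units 2 and 3 and receive none:
A = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
# structurally_equivalent(A, 0, 1) is True; units 2 and 3 are also equivalent
```

Grouping mutually equivalent units then yields the clusters (positions) from which the blockmodel is built.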
With blockmodeling, it is necessary to consider the issue of results being affected by measurement errors in the initial stage of acquiring the data.
== Different approaches ==
The appropriate blockmodeling approach depends on the kind of network being analyzed. Networks can be one-mode or two-mode: in the former, all units are of the same type and any unit can be connected to any other, while in the latter, units are connected only to units of a different type. Regarding relationships between units, networks can be single-relational or multi-relational. Furthermore, networks can be temporal or multilevel, and also binary (only 0 and 1) or signed (allowing negative ties)/valued (other values are possible) networks.
Different approaches to blockmodeling can be grouped into two main classes: deterministic blockmodeling and stochastic blockmodeling approaches. Deterministic blockmodeling is then further divided into direct and indirect blockmodeling approaches.
Among direct blockmodeling approaches are structural equivalence and regular equivalence. Structural equivalence is a state in which units are connected to the rest of the network in identical ways, while regular equivalence occurs when units are equally related to equivalent others (the units do not necessarily share neighbors, but have neighbors that are themselves similar).
Indirect blockmodeling approaches, where partitioning is dealt with as a traditional cluster analysis problem (measuring (dis)similarity results in a (dis)similarity matrix), are:
conventional blockmodeling,
generalized blockmodeling:
generalized blockmodeling of binary networks,
generalized blockmodeling of valued networks and
generalized homogeneity blockmodeling,
prespecified blockmodeling.
According to Brusco and Steinley (2011), the blockmodeling can be categorized (using a number of dimensions):
deterministic or stochastic blockmodeling,
one–mode or two–mode networks,
signed or unsigned networks,
exploratory or confirmatory blockmodeling.
== Blockmodels ==
Blockmodels (sometimes also block models) are structures in which:
vertices (e.g., units, nodes) are assembled within a cluster, with each cluster identified as a vertex; from such vertices a graph can be constructed;
combinations of all the links (ties) are represented in a block as a single link between positions, with one tie constructed for each block. In a case when there are no ties in a block, there will be no ties between the two positions that define the block.
Computer programs can partition the social network according to pre-set conditions. When empirical blocks can be reasonably approximated in terms of ideal blocks, such blockmodels can be reduced to a blockimage, which is a representation of the original network, capturing its underlying 'functional anatomy'. Thus, blockmodels can "permit the data to characterize their own structure", while not seeking to manifest a preconceived structure imposed by the researcher.
Blockmodels can be created indirectly or directly, based on the construction of the criterion function. Indirect construction refers to a function based on a "compatible (dis)similarity measure between pairs of units", while direct construction is "a function measuring the fit of real blocks induced by a given clustering to the corresponding ideal blocks with perfect relations within each cluster and between clusters according to the considered types of connections (equivalence)".
=== Types ===
Blockmodels can be specified regarding the intuition, substance or insight into the nature of the studied network; this can result in models such as the following:
parent-child role systems,
organizational hierarchies,
systems of ranked clusters,...
== Specialized programs ==
Blockmodeling is done with specialized computer programs, dedicated to the analysis of networks or blockmodeling in particular, as:
Pajek (Vladimir Batagelj and Andrej Mrvar),
R–package Blockmodeling (Aleš Žiberna),
Socnet.se: The blockmodeling console app (Win/Linux/Mac) (Carl Nordlund)
StOCNET (Tom Snijders),...
BLOCKS (Tom Snijders),
CONCOR,
Model and Model2 (Vladimir Batagelj),
== See also ==
Stochastic block model
Mathematical sociology
Role assignment
Multiobjective blockmodeling
Blockmodeling linked networks
== References ==
In network science, reciprocity is a measure of the likelihood of vertices in a directed network to be mutually linked. Like the clustering coefficient, scale-free degree distribution, or community structure, reciprocity is a quantitative measure used to study complex networks.
== Motivation ==
In real network problems, people are interested in determining the likelihood of double links (with opposite directions) occurring between vertex pairs. This problem is fundamental for several reasons. First, in networks that transport information or material (such as email networks, the World Wide Web (WWW), the World Trade Web, or Wikipedia), mutual links facilitate the transportation process. Second, when analyzing directed networks, people often treat them as undirected ones for simplicity; therefore, the information obtained from reciprocity studies helps to estimate the error introduced when a directed network is treated as undirected (for example, when measuring the clustering coefficient). Finally, detecting nontrivial patterns of reciprocity can reveal possible mechanisms and organizing principles that shape the observed network's topology.
== Definitions ==
=== Traditional definition ===
A traditional way to define the reciprocity r is as the ratio of the number of links pointing in both directions, L^{<->}, to the total number of links L:

r = \frac{L^{<->}}{L}

With this definition, r = 1 for a purely bidirectional network, while r = 0 for a purely unidirectional one. Real networks have an intermediate value between 0 and 1.

However, this definition of reciprocity has some defects. It cannot tell the relative difference of reciprocity compared with a purely random network with the same number of vertices and edges. The useful information from reciprocity is not the value itself, but whether mutual links occur more or less often than expected by chance. Besides, in networks containing self-linking loops (links starting and ending at the same vertex), the self-linking loops should be excluded when calculating L.
=== Garlaschelli and Loffredo's definition ===
In order to overcome the defects of the above definition, Garlaschelli and Loffredo defined reciprocity as the correlation coefficient between the entries of the adjacency matrix of a directed graph (a_{ij} = 1 if a link from i to j exists, and a_{ij} = 0 if not):
\rho \equiv \frac{\sum_{i\neq j}(a_{ij}-\bar{a})(a_{ji}-\bar{a})}{\sum_{i\neq j}(a_{ij}-\bar{a})^{2}},
where the average value is \bar{a} \equiv \frac{\sum_{i\neq j}a_{ij}}{N(N-1)} = \frac{L}{N(N-1)}.
The quantity \bar{a} measures the ratio of observed to possible directed links (the link density), and self-linking loops are now excluded from L since i is not equal to j.
The definition can be written in the following simple form:

\rho = \frac{r-\bar{a}}{1-\bar{a}}
The new definition of reciprocity gives an absolute quantity which directly allows one to distinguish between reciprocal (ρ > 0) and antireciprocal (ρ < 0) networks, with mutual links occurring more and less often than random, respectively.
If all the links occur in reciprocal pairs, ρ = 1; if r = 0, then ρ = ρ_min, where
\rho_{min} \equiv \frac{-\bar{a}}{1-\bar{a}}.
This is another advantage of using ρ, since it incorporates the idea that complete antireciprocity is more statistically significant in denser networks, while it must be regarded as a less pronounced effect in sparser networks.
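Since ρ can be written as (r − ā)/(1 − ā), it is straightforward to compute from the same edge list used for the traditional measure. A minimal sketch, with illustrative names:

```python
def reciprocity_rho(edges, n):
    """Garlaschelli-Loffredo reciprocity rho = (r - abar) / (1 - abar)
    for a directed graph on n labelled vertices, self-loops excluded."""
    links = {(i, j) for (i, j) in edges if i != j}
    L = len(links)
    r = sum(1 for (i, j) in links if (j, i) in links) / L
    abar = L / (n * (n - 1))  # link density
    return (r - abar) / (1 - abar)

# One fully reciprocated pair on 3 vertices: r = 1, so rho = 1.
print(reciprocity_rho({(0, 1), (1, 0)}, 3))  # 1.0
# No mutual links at all: rho equals rho_min = -abar / (1 - abar),
# here approximately -0.5.
print(reciprocity_rho({(0, 1), (1, 2)}, 3))
```

The second call shows the density dependence: the same r = 0 would give a ρ_min closer to zero in a sparser network.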
== References ==
In graph theory, a component of an undirected graph is a connected subgraph that is not part of any larger connected subgraph. The components of any graph partition its vertices into disjoint sets, and are the induced subgraphs of those sets. A graph that is itself connected has exactly one component, consisting of the whole graph. Components are sometimes called connected components.
The number of components in a given graph is an important graph invariant, and is closely related to invariants of matroids, topological spaces, and matrices. In random graphs, a frequently occurring phenomenon is the incidence of a giant component, one component that is significantly larger than the others; and of a percolation threshold, an edge probability above which a giant component exists and below which it does not.
The components of a graph can be constructed in linear time, and a special case of the problem, connected-component labeling, is a basic technique in image analysis. Dynamic connectivity algorithms maintain components as edges are inserted or deleted in a graph, in low time per change. In computational complexity theory, connected components have been used to study algorithms with limited space complexity, and sublinear time algorithms can accurately estimate the number of components.
== Definitions and examples ==
A component of a given undirected graph may be defined as a connected subgraph that is not part of any larger connected subgraph. For instance, the graph shown in the first illustration has three components. Every vertex v of a graph belongs to one of the graph's components, which may be found as the induced subgraph of the set of vertices reachable from v. Every graph is the disjoint union of its components. Additional examples include the following special cases:
In an empty graph, each vertex forms a component with one vertex and zero edges. More generally, a component of this type is formed for every isolated vertex in any graph.
In a connected graph, there is exactly one component: the whole graph.
In a forest, every component is a tree.
In a cluster graph, every component is a maximal clique. These graphs may be produced as the transitive closures of arbitrary undirected graphs, for which finding the transitive closure is an equivalent formulation of identifying the connected components.
Another definition of components involves the equivalence classes of an equivalence relation defined on the graph's vertices.
In an undirected graph, a vertex v is reachable from a vertex u if there is a path from u to v, or equivalently a walk (a path allowing repeated vertices and edges).
Reachability is an equivalence relation, since:
It is reflexive: There is a trivial path of length zero from any vertex to itself.
It is symmetric: If there is a path from u to v, the same edges in the reverse order form a path from v to u.
It is transitive: If there is a path from u to v and a path from v to w, the two paths may be concatenated together to form a walk from u to w.
The equivalence classes of this relation partition the vertices of the graph into disjoint sets, subsets of vertices that are all reachable from each other, with no additional reachable pairs outside of any of these subsets. Each vertex belongs to exactly one equivalence class. The components are then the induced subgraphs formed by each of these equivalence classes. Alternatively, some sources define components as the sets of vertices rather than as the subgraphs they induce.
Similar definitions involving equivalence classes have been used to define components for other forms of graph connectivity, including the weak components and strongly connected components of directed graphs and the biconnected components of undirected graphs.
== Number of components ==
The number of components of a given finite graph can be used to count the number of edges in its spanning forests: In a graph with n vertices and c components, every spanning forest will have exactly n − c edges. This number n − c is the matroid-theoretic rank of the graph, and the rank of its graphic matroid. The rank of the dual cographic matroid equals the circuit rank of the graph, the minimum number of edges that must be removed from the graph to break all its cycles. In a graph with m edges, n vertices and c components, the circuit rank is m − n + c.
A graph can be interpreted as a topological space in multiple ways, for instance by placing its vertices as points in general position in three-dimensional Euclidean space and representing its edges as line segments between those points. The components of a graph can be generalized through these interpretations as the topological connected components of the corresponding space; these are equivalence classes of points that cannot be separated by pairs of disjoint closed sets. Just as the number of connected components of a topological space is an important topological invariant, the zeroth Betti number, the number of components of a graph is an important graph invariant, and in topological graph theory it can be interpreted as the zeroth Betti number of the graph.
The number of components arises in other ways in graph theory as well. In algebraic graph theory it equals the multiplicity of 0 as an eigenvalue of the Laplacian matrix of a finite graph. It is also the index of the first nonzero coefficient of the chromatic polynomial of the graph, and the chromatic polynomial of the whole graph can be obtained as the product of the polynomials of its components. Numbers of components play a key role in the Tutte theorem characterizing finite graphs that have perfect matchings and the associated Tutte–Berge formula for the size of a maximum matching, and in the definition of graph toughness.
== Algorithms ==
It is straightforward to compute the components of a finite graph in linear time (in terms of the numbers of the vertices and edges of the graph) using either breadth-first search or depth-first search. In either case, a search that begins at some particular vertex v will find the entire component containing v (and no more) before returning. All components of a graph can be found by looping through its vertices, starting a new breadth-first or depth-first search whenever the loop reaches a vertex that has not already been included in a previously found component. Hopcroft & Tarjan (1973) describe essentially this algorithm, and state that it was already "well known".
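The looping search just described can be sketched in a few lines. This is an illustrative breadth-first version (names are my own, not from the source):

```python
from collections import deque

def components(adj):
    """Enumerate the components of an undirected graph given as an
    adjacency list, restarting a BFS at each not-yet-visited vertex."""
    seen = set()
    comps = []
    for start in adj:
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        comps.append(sorted(comp))
    return comps

# A path 0-1-2 plus an isolated vertex 3: two components.
adj = {0: [1], 1: [0, 2], 2: [1], 3: []}
print(components(adj))  # [[0, 1, 2], [3]]
```

Each vertex and edge is visited a constant number of times, giving the linear bound stated above; counting the returned lists also gives the quantity c used in the rank formulas of the previous section.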
Connected-component labeling, a basic technique in computer image analysis, involves the construction of a graph from the image and component analysis on the graph.
The vertices are a subset of the pixels of the image, chosen as being of interest or as likely to be part of depicted objects. Edges connect adjacent pixels, with adjacency defined either orthogonally according to the Von Neumann neighborhood, or both orthogonally and diagonally according to the Moore neighborhood. Identifying the connected components of this graph allows additional processing to find more structure in those parts of the image or identify what kind of object is depicted. Researchers have developed component-finding algorithms specialized for this type of graph, allowing it to be processed in pixel order rather than in the more scattered order that would be generated by breadth-first or depth-first searching. This can be useful in situations where sequential access to the pixels is more efficient than random access, either because the image is represented in a hierarchical way that does not permit fast random access or because sequential access produces better memory access patterns.
There are also efficient algorithms to dynamically track the components of a graph as vertices and edges are added, by using a disjoint-set data structure to keep track of the partition of the vertices into equivalence classes, replacing any two classes by their union when an edge connecting them is added. These algorithms take amortized time O(α(n)) per operation, where adding vertices and edges and determining the component in which a vertex falls are both operations, and α is a very slowly growing inverse of the very quickly growing Ackermann function. One application of this sort of incremental connectivity algorithm is in Kruskal's algorithm for minimum spanning trees, which adds edges to a graph in sorted order by length and includes an edge in the minimum spanning tree only when it connects two different components of the previously-added subgraph. When both edge insertions and edge deletions are allowed, dynamic connectivity algorithms can still maintain the same information, in amortized time O(log² n / log log n) per change and time O(log n / log log n) per connectivity query, or in near-logarithmic randomized expected time.
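For the incremental (insert-only) case, a standard disjoint-set structure suffices. The sketch below uses union by size with path halving, one of several equivalent variants that achieve the near-constant amortized bound; the class and method names are illustrative:

```python
class DSU:
    """Disjoint-set union supporting incremental connectivity queries."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False  # already in the same component
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        return True

# Kruskal-style use: an edge enters the spanning forest only if it
# merges two previously distinct components.
dsu = DSU(4)
print(dsu.union(0, 1))  # True
print(dsu.union(1, 2))  # True
print(dsu.union(0, 2))  # False: would close a cycle
```

The `union` return value is exactly the test Kruskal's algorithm needs when deciding whether to keep an edge.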
Components of graphs have been used in computational complexity theory to study the power of Turing machines that have a working memory limited to a logarithmic number of bits, with the much larger input accessible only through read access rather than being modifiable. The problems that can be solved by machines limited in this way define the complexity class L. It was unclear for many years whether connected components could be found in this model, when formalized as a decision problem of testing whether two vertices belong to the same component, and in 1982 a related complexity class, SL, was defined to include this connectivity problem and any other problem equivalent to it under logarithmic-space reductions. It was finally proven in 2008 that this connectivity problem can be solved in logarithmic space, and therefore that SL = L.
In a graph represented as an adjacency list, with random access to its vertices, it is possible to estimate the number of connected components, with constant probability of obtaining additive (absolute) error at most εn, in sublinear time O(ε^{-2} log ε^{-1}).
== In random graphs ==
In random graphs the sizes of components are given by a random variable, which, in turn, depends on the specific model of how random graphs are chosen.
In the G(n, p) version of the Erdős–Rényi–Gilbert model, a graph on n vertices is generated by choosing randomly and independently for each pair of vertices whether to include an edge connecting that pair, with probability p of including an edge and probability 1 − p of leaving those two vertices without an edge connecting them. The connectivity of this model depends on p, and there are three different ranges of p with very different behavior from each other. In the analysis below, all outcomes occur with high probability, meaning that the probability of the outcome is arbitrarily close to one for sufficiently large values of n. The analysis depends on a parameter ε, a positive constant independent of n that can be arbitrarily close to zero.
Subcritical: p < (1 − ε)/n
In this range of p, all components are simple and very small. The largest component has logarithmic size. The graph is a pseudoforest. Most of its components are trees: the number of vertices in components that have cycles grows more slowly than any unbounded function of the number of vertices. Every tree of fixed size occurs linearly many times.
Critical: p ≈ 1/n
The largest connected component has a number of vertices proportional to n^{2/3}. There may exist several other large components; however, the total number of vertices in non-tree components is again proportional to n^{2/3}.
Supercritical: p > (1 + ε)/n
There is a single giant component containing a linear number of vertices. For large values of p its size approaches the whole graph: |C_1| ≈ yn, where y is the positive solution to the equation e^{-pny} = 1 − y. The remaining components are small, with logarithmic size.
In the same model of random graphs, there will exist multiple connected components with high probability for values of p below a significantly higher threshold, p < (1 − ε)(log n)/n, and a single connected component for values above the threshold, p > (1 + ε)(log n)/n. This phenomenon is closely related to the coupon collector's problem: in order to be connected, a random graph needs enough edges for each vertex to be incident to at least one edge. More precisely, if random edges are added one by one to a graph, then with high probability the first edge whose addition connects the whole graph touches the last isolated vertex.
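The subcritical and supercritical regimes can be observed empirically with a small simulation. The sketch below is illustrative: the probabilities 0.5/n and 2/n are arbitrary picks inside the two ranges, and the helper uses a simple union-find to collect component sizes.

```python
import random

def gnp_component_sizes(n, p, rng):
    """Sample G(n, p) and return component sizes, largest first."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
    counts = {}
    for u in range(n):
        counts[find(u)] = counts.get(find(u), 0) + 1
    return sorted(counts.values(), reverse=True)

rng = random.Random(0)
n = 1000
small = gnp_component_sizes(n, 0.5 / n, rng)  # subcritical range
big = gnp_component_sizes(n, 2.0 / n, rng)    # supercritical range
print(small[0])  # largest component: only logarithmic in n
print(big[0])    # giant component: a constant fraction of n
```

For p = 2/n, the fraction y solves e^{-2y} = 1 − y, so the giant component typically contains roughly 80% of the vertices at this n.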
For different models including the random subgraphs of grid graphs, the connected components are described by percolation theory. A key question in this theory is the existence of a percolation threshold, a critical probability above which a giant component (or infinite component) exists and below which it does not.
== References ==
== External links ==
MATLAB code to find components in undirected graphs, MATLAB File Exchange.
Connected components, Steven Skiena, The Stony Brook Algorithm Repository
A hyperbolic geometric graph (HGG) or hyperbolic geometric network (HGN) is a special type of spatial network where (1) latent coordinates of nodes are sprinkled according to a probability density function into a
hyperbolic space of constant negative curvature and (2) an edge between two nodes is present if they are close according to a function of the metric (typically either a Heaviside step function resulting in deterministic connections between vertices closer than a certain threshold distance, or a decaying function of hyperbolic distance yielding the connection probability). A HGG generalizes a random geometric graph (RGG) whose embedding space is Euclidean.
== Mathematical formulation ==
Mathematically, a HGG is a graph G(V, E) with a vertex set V (cardinality N = |V|) and an edge set E constructed by considering the nodes as points placed onto a 2-dimensional hyperbolic space H²_ζ of constant negative Gaussian curvature −ζ² and cut-off radius R, i.e. the radius of the Poincaré disk, which can be visualized using a hyperboloid model.
Each point i has hyperbolic polar coordinates (r_i, θ_i) with 0 ≤ r_i ≤ R and 0 ≤ θ_i < 2π.
The hyperbolic law of cosines allows one to measure the distance d_{ij} between two points i and j:

\cosh(\zeta d_{ij}) = \cosh(\zeta r_{i})\cosh(\zeta r_{j}) - \sinh(\zeta r_{i})\sinh(\zeta r_{j})\cos\Delta.

The angle \Delta = \pi - |\pi - |\theta_{i} - \theta_{j}|| is the (smallest) angle between the two position vectors.
In the simplest case, an edge (i, j) is established if and only if the two nodes are within a certain neighborhood radius r, i.e. d_{ij} ≤ r; this corresponds to an influence threshold.
=== Connectivity decay function ===
In general, a link will be established with a probability depending on the distance d_{ij}. A connectivity decay function γ(s): ℝ⁺ → [0, 1] represents the probability of assigning an edge to a pair of nodes at distance s. In this framework, the simple case of a hard neighborhood threshold, as in random geometric graphs, is referred to as a truncation decay function.
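Two example decay functions illustrate the framework. The hard truncation is the random-geometric case from the text; the smooth Fermi-Dirac-like form is one common choice in the literature (an assumption here, not taken from this source), which approaches the truncation function as its temperature parameter goes to zero.

```python
import math

def truncation(s, r):
    """Hard threshold: connect iff distance s <= r (random-geometric case)."""
    return 1.0 if s <= r else 0.0

def fermi_dirac(s, r, T):
    """Illustrative smooth decay; tends to truncation(s, r) as T -> 0."""
    return 1.0 / (1.0 + math.exp((s - r) / T))

print(truncation(0.5, 1.0), truncation(1.5, 1.0))  # 1.0 0.0
print(round(fermi_dirac(1.0, 1.0, 0.1), 2))        # 0.5 at the threshold
```

Both map distances to [0, 1], so either can be plugged into a generator as the function γ.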
== Generating hyperbolic geometric graphs ==
Krioukov et al. describe how to generate hyperbolic geometric graphs with uniformly random node distribution (as well as generalized versions) on a disk of radius R in H²_ζ. These graphs yield a power-law distribution for the node degrees. The angular coordinate θ of each point/node is chosen uniformly at random from [0, 2π], while the radial coordinate r is sampled according to the probability density

\rho(r) = \alpha \frac{\sinh(\alpha r)}{\cosh(\alpha R) - 1}
The growth parameter α > 0 controls the distribution: for α = ζ, the distribution is uniform in H²_ζ; for smaller values the nodes are distributed more towards the center of the disk, and for bigger values more towards the border. In this model, edges between nodes u and v exist iff d_{uv} < R, or with probability γ(d_{uv}) if a more general connectivity decay function is used. The average degree is controlled by the radius R of the hyperbolic disk. It can be shown that for α/ζ > 1/2 the node degrees follow a power-law distribution with exponent γ = 1 + 2α/ζ.
The image depicts randomly generated graphs for different values of α and R in H²₁. It can be seen how α affects the distribution of the nodes and R the connectivity of the graph. The native representation, where the distance variables have their true hyperbolic values, is used for the visualization of the graph; therefore, edges are straight lines.
=== Quadratic complexity generator ===
The naive algorithm for the generation of hyperbolic geometric graphs distributes the nodes on the hyperbolic disk by sampling the angular and radial coordinates of each point at random. For every pair of nodes, an edge is then inserted with probability equal to the value of the connectivity decay function at their distance. The pseudocode looks as follows:
V = {}, E = {}
for i ← 0 to N − 1 do
    θ ← U[0, 2π]
    r ← (1/α) acosh(1 + (cosh(αR) − 1) U[0, 1])
    V = V ∪ {(r, θ)}
for every pair (u, v) ∈ V × V, u ≠ v do
    if U[0, 1] ≤ γ(d_{uv}) then
        E = E ∪ {(u, v)}
return V, E
Here N is the number of nodes to generate; the distribution of the radial coordinate according to the probability density function ρ is achieved by using inverse transform sampling. U denotes the uniform sampling of a value in the given interval. Because the algorithm checks for edges for all pairs of nodes, the runtime is quadratic. For applications where N is big, this is not viable any more and algorithms with subquadratic runtime are needed.
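The quadratic generator can be sketched directly in Python. This is an illustrative implementation under the assumption ζ = 1 (so distances use the plain hyperbolic law of cosines); the function and parameter names are my own:

```python
import math
import random

def generate_hgg(N, R, alpha, gamma, rng):
    """Quadratic-time HGG generator following the pseudocode above.
    gamma is the connectivity decay function; curvature parameter zeta = 1."""
    V = []
    for _ in range(N):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        # Inverse transform sampling of rho(r) = alpha*sinh(alpha*r)/(cosh(alpha*R)-1)
        r = math.acosh(1.0 + (math.cosh(alpha * R) - 1.0) * rng.random()) / alpha
        V.append((r, theta))
    E = set()
    for u in range(N):
        for v in range(u + 1, N):
            (r1, t1), (r2, t2) = V[u], V[v]
            delta = math.pi - abs(math.pi - abs(t1 - t2))
            d = math.acosh(max(1.0, math.cosh(r1) * math.cosh(r2)
                               - math.sinh(r1) * math.sinh(r2) * math.cos(delta)))
            if rng.random() <= gamma(d):
                E.add((u, v))
    return V, E

rng = random.Random(1)
# Truncation decay: connect exactly when d <= R.
V, E = generate_hgg(200, R=5.0, alpha=1.0,
                    gamma=lambda d: 1.0 if d <= 5.0 else 0.0, rng=rng)
```

With α = 1 = ζ, the sampled points are uniform on the hyperbolic disk, and the inner double loop makes the O(N²) cost explicit.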
=== Sub-quadratic generation ===
To avoid checking for edges between every pair of nodes, modern generators use additional data structures that partition the graph into bands. A visualization of this shows a hyperbolic graph with the boundary of the bands drawn in orange. In this case, the partitioning is done along the radial axis. Points are stored sorted by their angular coordinate in their respective band. For each point u, the limits of its hyperbolic circle of radius R can be (over-)estimated and used to only perform the edge-check for points that lie in a band that intersects the circle. Additionally, the sorting within each band can be used to further reduce the number of points to look at, by only considering points whose angular coordinate lies in a certain range around that of u (this range is also computed by over-estimating the hyperbolic circle around u).
Using this and other extensions of the algorithm, time complexities of O(n log log n + m) (where n is the number of nodes and m the number of edges) are possible with high probability.
== Findings ==
For ζ = 1 (Gaussian curvature K = −1), HGGs form an ensemble of networks for which it is possible to express the degree distribution analytically in closed form in the limit of a large number of nodes. This is worth mentioning since it is not true for many ensembles of graphs.
== Applications ==
HGGs have been suggested as a promising model for social networks, where the hyperbolicity appears through a competition between the similarity and popularity of individuals.
== References ==
The stochastic block model is a generative model for random graphs. This model tends to produce graphs containing communities, subsets of nodes characterized by being connected with one another with particular edge densities. For example, edges may be more common within communities than between communities. Its mathematical formulation was first introduced in 1983 in the field of social network analysis by Paul W. Holland et al. The stochastic block model is important in statistics, machine learning, and network science, where it serves as a useful benchmark for the task of recovering community structure in graph data.
== Definition ==
The stochastic block model takes the following parameters:
The number n of vertices;
a partition of the vertex set {1, …, n} into disjoint subsets C_1, …, C_r, called communities;
a symmetric r × r matrix P of edge probabilities.
The edge set is then sampled at random as follows: any two vertices u ∈ C_i and v ∈ C_j are connected by an edge with probability P_{ij}. An example problem is: given a graph with n vertices, where the edges are sampled as described, recover the groups C_1, …, C_r.
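The sampling step has a direct implementation. A minimal sketch (names and the dictionary-based community encoding are illustrative choices):

```python
import random

def sample_sbm(communities, P, rng):
    """Sample an undirected graph from the stochastic block model.
    communities maps each vertex to its community index;
    P[i][j] is the edge probability between communities i and j."""
    vertices = sorted(communities)
    edges = set()
    for a in range(len(vertices)):
        for b in range(a + 1, len(vertices)):
            u, v = vertices[a], vertices[b]
            if rng.random() < P[communities[u]][communities[v]]:
                edges.add((u, v))
    return edges

rng = random.Random(0)
comm = {v: 0 if v < 3 else 1 for v in range(6)}
P = [[1.0, 0.0], [0.0, 1.0]]  # extreme planted partition: p = 1, q = 0
edges = sample_sbm(comm, P, rng)
# With these degenerate probabilities the result is two disjoint triangles.
print(sorted(edges))
```

Replacing the 0/1 entries by intermediate values of p and q gives the (random) planted partition model discussed below.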
== Special cases ==
If the probability matrix is a constant, in the sense that P_{ij} = p for all i, j, then the result is the Erdős–Rényi model G(n, p). This case is degenerate (the partition into communities becomes irrelevant), but it illustrates a close relationship to the Erdős–Rényi model.
The planted partition model is the special case in which the values of the probability matrix P are a constant p on the diagonal and another constant q off the diagonal. Thus two vertices within the same community share an edge with probability p, while two vertices in different communities share an edge with probability q. Sometimes it is this restricted model that is called the stochastic block model. The case where p > q is called an assortative model, while the case p < q is called disassortative.
Returning to the general stochastic block model, a model is called strongly assortative if P_{ii} > P_{jk} whenever j ≠ k: all diagonal entries dominate all off-diagonal entries. A model is called weakly assortative if P_{ii} > P_{ij} whenever i ≠ j: each diagonal entry is only required to dominate the rest of its own row and column. Disassortative forms of this terminology exist, obtained by reversing all inequalities. For some algorithms, recovery might be easier for block models with assortative or disassortative conditions of this form.
== Typical statistical tasks ==
Much of the literature on algorithmic community detection addresses three statistical tasks: detection, partial recovery, and exact recovery.
=== Detection ===
The goal of detection algorithms is simply to determine, given a sampled graph, whether the graph has latent community structure. More precisely, a graph might be generated, with some known prior probability, from a known stochastic block model, and otherwise from a similar Erdős–Rényi model. The algorithmic task is to correctly identify which of these two underlying models generated the graph.
=== Partial recovery ===
In partial recovery, the goal is to approximately determine the latent partition into communities, in the sense of finding a partition that is correlated with the true partition significantly better than a random guess.
=== Exact recovery ===
In exact recovery, the goal is to recover the latent partition into communities exactly. The community sizes and probability matrix may be known or unknown.
== Statistical lower bounds and threshold behavior ==
Stochastic block models exhibit a sharp threshold effect reminiscent of percolation thresholds. Suppose that we allow the size n of the graph to grow, keeping the community sizes in fixed proportions. If the probability matrix remains fixed, tasks such as partial and exact recovery become feasible for all non-degenerate parameter settings. However, if we scale down the probability matrix at a suitable rate as n increases, we observe a sharp phase transition: for certain settings of the parameters, it will become possible to achieve recovery with probability tending to 1, whereas on the opposite side of the parameter threshold, the probability of recovery tends to 0 no matter what algorithm is used.
For partial recovery, the appropriate scaling is to take P_{ij} = \tilde{P}_{ij}/n for fixed \tilde{P}, resulting in graphs of constant average degree. In the case of two equal-sized communities, in the assortative planted partition model with probability matrix

P = \begin{pmatrix} \tilde{p}/n & \tilde{q}/n \\ \tilde{q}/n & \tilde{p}/n \end{pmatrix},

partial recovery is feasible with probability 1 − o(1) whenever (\tilde{p} - \tilde{q})^2 > 2(\tilde{p} + \tilde{q}), whereas any estimator fails partial recovery with probability 1 − o(1) whenever (\tilde{p} - \tilde{q})^2 < 2(\tilde{p} + \tilde{q}).
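The partial-recovery condition above is a one-line check on the scaled parameters. A trivial helper (name illustrative) makes the asymmetry of the threshold concrete:

```python
def partial_recovery_feasible(p_tilde, q_tilde):
    """Threshold condition stated above for two equal-sized communities:
    partial recovery is feasible iff (p - q)^2 > 2(p + q), where p and q
    are the scaled parameters of the constant-average-degree regime."""
    return (p_tilde - q_tilde) ** 2 > 2.0 * (p_tilde + q_tilde)

print(partial_recovery_feasible(5.0, 1.0))  # True:  16 > 12
print(partial_recovery_feasible(4.0, 2.0))  # False:  4 < 12
```

Both examples have the same average degree (p̃ + q̃)/2 = 3, showing that feasibility depends on the separation of p̃ and q̃, not on density alone.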
For exact recovery, the appropriate scaling is to take P_{ij} = \tilde{P}_{ij} \log n / n, resulting in graphs of logarithmic average degree. Here a similar threshold exists: for the assortative planted partition model with r equal-sized communities, the threshold lies at \sqrt{\tilde{p}} - \sqrt{\tilde{q}} = \sqrt{r}. In fact, the exact recovery threshold is known for the fully general stochastic block model.
== Algorithms ==
In principle, exact recovery can be solved in its feasible range using maximum likelihood, but this amounts to solving a constrained or regularized cut problem such as minimum bisection that is typically NP-complete. Hence, no known efficient algorithms will correctly compute the maximum-likelihood estimate in the worst case.
However, a wide variety of algorithms perform well in the average case, and many high-probability performance guarantees have been proven for algorithms in both the partial and exact recovery settings. Successful algorithms include spectral clustering of the vertices, semidefinite programming, forms of belief propagation, and community detection among others.
== Variants ==
Several variants of the model exist. One minor tweak allocates vertices to communities randomly, according to a categorical distribution, rather than in a fixed partition. More significant variants include the degree-corrected stochastic block model, the hierarchical stochastic block model, the geometric block model, censored block model and the mixed-membership block model.
== Topic models ==
Stochastic block models have been recognised as topic models on bipartite networks. In a network of documents and words, a stochastic block model can identify topics: groups of words with a similar meaning.
== Extensions to signed graphs ==
Signed graphs allow for both favorable and adverse relationships and serve as a common model choice for various data analysis applications, e.g., correlation clustering. The stochastic block model can be trivially extended to signed graphs by assigning both positive and negative edge weights or equivalently using a difference of adjacency matrices of two stochastic block models.
== DARPA/MIT/AWS Graph Challenge: streaming stochastic block partition ==
GraphChallenge encourages community approaches to developing new solutions for analyzing graphs and sparse data derived from social media, sensor feeds, and scientific data to enable relationships between events to be discovered as they unfold in the field. Streaming stochastic block partition is one of the challenges since 2017.
Spectral clustering has demonstrated outstanding performance compared to the original and even improved base algorithm, matching its quality of clusters while being multiple orders of magnitude faster.
== See also ==
blockmodeling
Girvan–Newman algorithm – Community detection algorithm
Lancichinetti–Fortunato–Radicchi benchmark – Algorithm for generating benchmark networks with communities
== References == | Wikipedia/Stochastic_block_model |
In the psychology of motivation, balance theory is a theory of attitude change, proposed by Fritz Heider. It conceptualizes the cognitive consistency motive as a drive toward psychological balance. The consistency motive is the urge to maintain one's values and beliefs over time. Heider proposed that "sentiment" or liking relationships are balanced if the affect valence in a system multiplies out to a positive result.
Research in 2020 provided neuroscientific evidence supporting Heider's balance theory. A study using neuroimaging techniques found distinct differences in brain activation when individuals were exposed to unbalanced versus balanced triads. These differences were observed in brain regions associated with processing cognitive dissonance, offering biological support for Heider's original psychological explanation of balance theory in social context.
Structural balance theory in social network analysis is the extension proposed by Dorwin Cartwright and Frank Harary. It was the framework for the discussion at a Dartmouth College symposium in September 1975.
== P-O-X model ==
For example: a Person (P) who likes (+) an Other (O) person will be balanced by the same valence attitude on behalf of the other. Symbolically, P(+)>O and P<(+)O result in psychological balance.
This can be extended to things or objects (X) as well, thus introducing triadic relationships. If a person P likes object X but dislikes other person O, what does P feel upon learning that person O created the object X? This is symbolized as such:
P(+)>X
P(−)>O
O(+)>X
Cognitive balance is achieved when there are three positive links or two negatives with one positive. Two positive links and one negative, as in the example above, create imbalance or cognitive dissonance.
Multiplying the signs shows that the person will perceive imbalance (a negative multiplicative product) in this relationship, and will be motivated to correct the imbalance somehow. The Person can either:
Decide that O isn't so bad after all,
Decide that X isn't as great as originally thought, or
Conclude that O couldn't really have made X.
Any of these will result in psychological balance, thus resolving the dilemma and satisfying the drive. (Person P could also avoid object X and other person O entirely, lessening the stress created by psychological imbalance.)
To predict the outcome of a situation using Heider's balance theory, one must weigh the effects of all the potential results, and the one requiring the least amount of effort will be the likely outcome.
Determining if the triad is balanced is simple math:
+ + + = +; Balanced.
− + − = +; Balanced.
− + + = −; Unbalanced.
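The sign-multiplication rule above can be stated as a one-line check. This is a sketch; representing a triad as a list of +1/−1 edge signs is an assumption of the example:

```python
from math import prod

def triad_balanced(signs):
    """A Heider triad is balanced when the product of its three
    edge signs (+1 / -1) is positive."""
    return prod(signs) > 0

print(triad_balanced([+1, +1, +1]))  # True  (+ + + = +)
print(triad_balanced([-1, +1, -1]))  # True  (- + - = +)
print(triad_balanced([-1, +1, +1]))  # False (- + + = -)
```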
== Examples ==
Balance theory is useful in examining how celebrity endorsement affects consumers' attitudes toward products. If a person likes a celebrity and perceives (due to the endorsement) that said celebrity likes a product, said person will tend to like the product more, in order to achieve psychological balance.
However, if the person already had a dislike for the product being endorsed by the celebrity, they may begin disliking the celebrity, again to achieve psychological balance.
Heider's balance theory can explain why holding the same negative attitudes toward others promotes closeness. See The enemy of my enemy is my friend.
== Signed graphs and social networks ==
Dorwin Cartwright and Frank Harary looked at Heider's triads as 3-cycles in a signed graph. The sign of a path in a graph is the product of the signs of its edges. They considered cycles in a signed graph representing a social network.
A balanced signed graph has only cycles of positive sign.
Harary proved that a balanced graph is polarized, that is, it decomposes into two entirely positive subgraphs that are joined by negative edges.
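Harary's polarization result suggests a direct algorithmic test: a signed graph is balanced exactly when its vertices can be 2-colored so that positive edges join like colors and negative edges join unlike colors. A breadth-first sketch of that test (the edge-list representation is an assumption of the example):

```python
from collections import deque

def is_balanced(n, signed_edges):
    """Harary-style balance test: try to 2-color vertices 0..n-1 so
    that positive edges join like colors and negative edges join
    unlike colors.  signed_edges is a list of (u, v, sign), sign = +1/-1."""
    adj = {v: [] for v in range(n)}
    for u, v, s in signed_edges:
        adj[u].append((v, s))
        adj[v].append((u, s))
    color = {}
    for start in range(n):
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, s in adj[u]:
                want = color[u] if s > 0 else 1 - color[u]
                if v not in color:
                    color[v] = want
                    queue.append(v)
                elif color[v] != want:
                    return False  # some cycle has negative sign
    return True

# Triangle of three mutual enemies: clusterable but NOT balanced
print(is_balanced(3, [(0, 1, -1), (1, 2, -1), (0, 2, -1)]))  # False
# Two friends sharing an enemy: balanced
print(is_balanced(3, [(0, 1, +1), (1, 2, -1), (0, 2, -1)]))  # True
```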
In the interest of realism, a weaker property was suggested by Davis:
No cycle has exactly one negative edge.
Graphs with this property may decompose into more than two entirely positive subgraphs, called clusters. The property has been called the clusterability axiom. Then balanced graphs are recovered by assuming the
Parsimony axiom: The subgraph of positive edges has at most two components.
The significance of balance theory for social dynamics was expressed by Anatol Rapoport:
The hypothesis implies roughly that attitudes of the group members will tend to change in such a way that one's friends' friends will tend to become one's friends and one's enemies' enemies also one's friends, and one's enemies' friends and one's friends' enemies will tend to become one's enemies, and moreover, that these changes tend to operate even across several removes (one's friends' friends' enemies' enemies tend to become friends by an iterative process).
Note that a triangle of three mutual enemies makes a clusterable graph but not a balanced one. Therefore, in a clusterable network one cannot conclude that "the enemy of my enemy is my friend," although this aphorism is a fact in a balanced network.
=== Criticism ===
Claude Flament expressed a limit to balance theory imposed by reconciling weak ties with relationships of stronger force such as family bonds:
One might think that a valued algebraic graph is necessary to represent psycho-social reality, if it is to take into account the degree of intensity of interpersonal relationships. But in fact it then seems hardly possible to define the balance of a graph, not for mathematical but for psychological reasons. If the relationship AB is +3, the relationship BC is –4, what should the AC relationship be in order that the triangle be balanced? The psychological hypotheses are wanting, or rather they are numerous and little justified.
At the 1975 Dartmouth College colloquium on balance theory, Bo Anderson struck at the heart of the notion:
In graph theory there exists a formal balance theory that contains theorems that are analytically true. The statement that Heider's psychological balance can be represented, in its essential aspects, by a suitable interpretation of that formal balance theory should, however, be regarded as problematical. We cannot routinely identify the positive and negative lines in the formal theory with the positive and negative "sentiment relations", and identify the formal balance notion with the psychological idea of balance or structural tension. ... It is puzzling that the fine structure of the relationships between formal and psychological balance has been given scant attention by balance theorists.
== See also ==
Information integration theory
Social balance theory
== References == | Wikipedia/Balance_theory |
The Advanced Research Projects Agency Network (ARPANET) was the first wide-area packet-switched network with distributed control and one of the first computer networks to implement the TCP/IP protocol suite. Both technologies became the technical foundation of the Internet. The ARPANET was established by the Advanced Research Projects Agency (now DARPA) of the United States Department of Defense.
Building on the ideas of J. C. R. Licklider, Bob Taylor initiated the ARPANET project in 1966 to enable resource sharing between remote computers. Taylor appointed Larry Roberts as program manager. Roberts made the key decisions about the request for proposal to build the network. He incorporated Donald Davies' concepts and designs for packet switching, and sought input from Paul Baran on dynamic routing. In 1969, ARPA awarded the contract to build the Interface Message Processors (IMPs) for the network to Bolt Beranek & Newman (BBN). The design was led by Bob Kahn who developed the first protocol for the network. Roberts engaged Leonard Kleinrock at UCLA to develop mathematical methods for analyzing the packet network technology.
The first computers were connected in 1969 and the Network Control Protocol was implemented in 1970, development of which was led by Steve Crocker at UCLA and other graduate students, including Jon Postel. The network was declared operational in 1971. Further software development enabled remote login and file transfer, which was used to provide an early form of email. The network expanded rapidly and operational control passed to the Defense Communications Agency in 1975.
Bob Kahn moved to DARPA and, together with Vint Cerf at Stanford University, formulated the Transmission Control Program for internetworking. As this work progressed, a protocol was developed by which multiple separate networks could be joined into a network of networks; this incorporated concepts pioneered in the French CYCLADES project directed by Louis Pouzin. Version 4 of TCP/IP was installed in the ARPANET for production use in January 1983 after the Department of Defense made it standard for all military computer networking.
Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In the early 1980s, the NSF funded the establishment of national supercomputing centers at several universities and provided network access and network interconnectivity with the NSFNET project in 1986. The ARPANET was formally decommissioned in 1990, after partnerships with the telecommunication and computer industry had assured private sector expansion and commercialization of an expanded worldwide network, known as the Internet.
== Inspiration ==
Historically, voice and data communications were based on methods of circuit switching, as exemplified in the traditional telephone network, wherein each telephone call is allocated a dedicated end-to-end electronic connection between the two communicating stations. The connection is established by switching systems that connected multiple intermediate call legs between these systems for the duration of the call.
The traditional model of the circuit-switched telecommunication network was challenged in the early 1960s by Paul Baran at the RAND Corporation, who had been researching systems that could sustain operation during partial destruction, such as by nuclear war. He developed the theoretical model of distributed adaptive message block switching. However, the telecommunication establishment rejected the development in favor of existing models. Donald Davies at the United Kingdom's National Physical Laboratory (NPL) independently arrived at a similar concept in 1965.
The earliest ideas for a computer network intended to allow general communications among computer users were formulated by computer scientist J. C. R. Licklider of Bolt Beranek and Newman (BBN), in April 1963, in memoranda discussing the concept of the "Intergalactic Computer Network". Those ideas encompassed many of the features of the contemporary Internet. In October 1963, Licklider was appointed head of the Behavioral Sciences and Command and Control programs at the Defense Department's Advanced Research Projects Agency (ARPA). He convinced Ivan Sutherland and Bob Taylor that this network concept was very important and merited development, although Licklider left ARPA before any contracts were assigned for development.
Sutherland and Taylor continued their interest in creating the network, in part, to allow ARPA-sponsored researchers at various corporate and academic locales to utilize computers provided by ARPA, and, in part, to quickly distribute new software and other computer science results. Taylor had three computer terminals in his office, each connected to separate computers, which ARPA was funding: one for the System Development Corporation (SDC) Q-32 in Santa Monica, one for Project Genie at the University of California, Berkeley, and another for Multics at the Massachusetts Institute of Technology. Taylor recalls the circumstance: "For each of these three terminals, I had three different sets of user commands. So, if I was talking online with someone at S.D.C., and I wanted to talk to someone I knew at Berkeley, or M.I.T., about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them. I said, 'Oh Man!', it's obvious what to do: If you have these three terminals, there ought to be one terminal that goes anywhere you want to go. That idea is the ARPANET".
Donald Davies' work caught the attention of ARPANET developers at Symposium on Operating Systems Principles in October 1967. He gave the first public presentation, having coined the term packet switching, in August 1968 and incorporated it into the NPL network in England. The NPL network and ARPANET were the first two networks in the world to implement packet switching. Roberts said the computer networks built in the 1970s were similar "in nearly all respects" to Davies' original 1965 design.
== Creation ==
In February 1966, Bob Taylor successfully lobbied ARPA's Director Charles M. Herzfeld to fund a network project. Herzfeld redirected funds in the amount of one million dollars from a ballistic missile defense program to Taylor's budget. Taylor hired Larry Roberts as a program manager in the ARPA Information Processing Techniques Office in January 1967 to work on the ARPANET. Roberts met Paul Baran in February 1967, but did not discuss networks.
Roberts asked Frank Westervelt to explore the questions of message size and contents for the network, and to write a position paper on the intercomputer communication protocol including “conventions for character and block transmission, error checking and re-transmission, and computer and user identification." In April 1967, ARPA held a design session on technical standards. The initial standards for identification and authentication of users, transmission of characters, and error checking and retransmission procedures were discussed. Roberts' proposal was that all mainframe computers would connect to one another directly. The other investigators were reluctant to dedicate these computing resources to network administration. After the design session, Wesley Clark proposed minicomputers should be used as an interface to create a message switching network. Roberts modified the ARPANET plan to incorporate Clark's suggestion and named the minicomputers Interface Message Processors (IMPs).
The plan was presented at the inaugural Symposium on Operating Systems Principles in October 1967. Donald Davies' work on packet switching and the NPL network, presented by a colleague (Roger Scantlebury), and that of Paul Baran, came to the attention of the ARPA investigators at this conference. Roberts applied Davies' concept of packet switching for the ARPANET, and sought input from Paul Baran on dynamic routing. The NPL network was using line speeds of 768 kbit/s, and the proposed line speed for the ARPANET was upgraded from 2.4 kbit/s to 50 kbit/s.
By mid-1968, Roberts and Barry Wessler wrote a final version of the IMP specification based on a Stanford Research Institute (SRI) report that ARPA commissioned to write detailed specifications describing the ARPANET communications network. Roberts gave a report to Taylor on 3 June, who approved it on 21 June. After approval by ARPA, a Request for Quotation (RFQ) was issued for 140 potential bidders. Most computer science companies regarded the ARPA proposal as outlandish, and only twelve submitted bids to build a network; of the twelve, ARPA regarded only four as top-rank contractors. At year's end, ARPA considered only two contractors and awarded the contract to build the network to BBN in January 1969.
The initial, seven-person BBN team were much aided by the technical specificity of their response to the ARPA RFQ, and thus quickly produced the first working system. The "IMP guys" were led by Frank Heart; the theoretical design of the network was led by Bob Kahn; the team included Dave Walden, Severo Ornstein, William Crowther and several others. The BBN-proposed network closely followed Roberts' ARPA plan: a network composed of small computers, the IMPs (similar to the later concept of routers), that functioned as gateways interconnecting local resources. Routing, flow control, software design and network control were developed by the BBN team. At each site, the IMPs performed store-and-forward packet switching functions and were interconnected with leased lines via telecommunication data sets (modems), with initial data rates of 50kbit/s. The host computers were connected to the IMPs via custom serial communication interfaces. The system, including the hardware and the packet switching software, was designed and installed in nine months. The BBN team continued to interact with the NPL team with meetings between them taking place in the U.S. and the U.K.
As with the NPL network, the first-generation IMPs were built by BBN using a rugged computer version of the Honeywell DDP-516 computer, configured with 24 kB of expandable magnetic-core memory, and a 16-channel Direct Multiplex Control (DMC) direct memory access unit. The DMC established custom interfaces with each of the host computers and modems. In addition to the front-panel lamps, the DDP-516 computer also featured a special set of 24 indicator lamps showing the status of the IMP communication channels. Each IMP could support up to four local hosts and could communicate with up to six remote IMPs via early Digital Signal 0 leased telephone lines. The network connected one computer in Utah with three in California. Later, the Department of Defense allowed the universities to join the network for sharing hardware and software resources.
According to Charles Herzfeld, ARPA Director (1965–1967):The ARPANET was not started to create a Command and Control System that would survive a nuclear attack, as many now claim. To build such a system was, clearly, a major military need, but it was not ARPA's mission to do this; in fact, we would have been severely criticized had we tried. Rather, the ARPANET came out of our frustration that there were only a limited number of large, powerful research computers in the country, and that many research investigators, who should have access to them, were geographically separated from them.
The ARPANET used distributed computation and incorporated frequent re-computation of routing tables (automatic routing was technically challenging at the time). These features increased the survivability of the network in the event of significant interruption. Furthermore, the ARPANET was designed to survive subordinate network losses. However, the Internet Society agrees with Herzfeld in a footnote in their online article, A Brief History of the Internet:
It was from the RAND study that the false rumor started, claiming that the ARPANET was somehow related to building a network resistant to nuclear war. This was never true of the ARPANET, but was an aspect of the earlier RAND study of secure communication. The later work on internetworking did emphasize robustness and survivability, including the capability to withstand losses of large portions of the underlying networks.
Paul Baran, the first to put forward a theoretical model for communication using packet switching, conducted the RAND study referenced above. Though the ARPANET did not exactly share Baran's project's goal, he said his work did contribute to the development of the ARPANET. Minutes taken by Elmer Shapiro of Stanford Research Institute at the ARPANET design meeting of 9–10 October 1967 indicate that a version of Baran's routing method ("hot potato") may be used, consistent with the NPL team's proposal at the Symposium on Operating System Principles in Gatlinburg.
Later, in the 1970s, ARPA did emphasize the goal of "command and control". According to Stephen J. Lukasik, who was deputy director (1967–1970) and Director of DARPA (1970–1975):
The goal was to exploit new computer technologies to meet the needs of military command and control against nuclear threats, achieve survivable control of US nuclear forces, and improve military tactical and management decision making.
== Implementation ==
The first four nodes were designated as a testbed for developing and debugging the 1822 protocol, which was a major undertaking. While they were connected electronically in 1969, network applications were not possible until the Network Control Protocol was implemented in 1970, enabling the first two host-host protocols, remote login (Telnet) and file transfer (FTP), which were specified and implemented between 1969 and 1973. The network was declared operational in 1971. Network traffic began to grow once email was established at the majority of sites by around 1973.
=== Initial four hosts ===
The initial ARPANET configuration linked UCLA, ARC, UCSB, and the University of Utah School of Computing. The first node was created at UCLA, where Leonard Kleinrock could evaluate network performance and examine his theories on message delay. The locations were selected not only to reduce leased line costs but also because each had specific expertise beneficial for this initial implementation phase:
University of California, Los Angeles (UCLA), where Kleinrock had established a Network Measurement Center (NMC), with an SDS Sigma 7 being the first computer attached to it;
The Augmentation Research Center at Stanford Research Institute (now SRI International), where Douglas Engelbart had created the new NLS system, an early hypertext system, and would run the Network Information Center (NIC), with the SDS 940 that ran NLS, named "Genie", being the first host attached;
University of California, Santa Barbara (UCSB), with the Culler-Fried Interactive Mathematics Center's IBM 360/75, running OS/MVT being the machine attached;
The University of Utah School of Computing, where Ivan Sutherland had moved, running a DEC PDP-10 operating on TENEX.
The first successful host-to-host connection on the ARPANET was made between Stanford Research Institute (SRI) and UCLA, by SRI programmer Bill Duvall and UCLA student programmer Charley Kline, at 10:30 pm PST on 29 October 1969 (6:30 UTC on 30 October 1969). Kline connected from UCLA's SDS Sigma 7 Host computer (in Boelter Hall room 3420) to the Stanford Research Institute's SDS 940 Host computer. Kline typed the command "login," but initially the SDS 940 crashed after he typed two characters. About an hour later, after Duvall adjusted parameters on the machine, Kline tried again and successfully logged in. Hence, the first two characters successfully transmitted over the ARPANET were "lo". The first permanent ARPANET link was established on 21 November 1969, between the IMP at UCLA and the IMP at the Stanford Research Institute. By 5 December 1969, the initial four-node network was established.
Elizabeth Feinler created the first Resource Handbook for ARPANET in 1969, which led to the development of the ARPANET directory. The directory, built by Feinler and a team, made it possible to navigate the ARPANET.
=== Network performance ===
In 1968, Roberts contracted with Kleinrock to measure the performance of the network and find areas for improvement. Building on his earlier work on queueing theory and optimization of packet delay in communication networks, Kleinrock specified mathematical models of the performance of packet-switched networks, which underpinned the development of the ARPANET as it expanded rapidly in the early 1970s.
=== Growth and evolution ===
Roberts engaged Howard Frank to consult on the topological design of the network. Frank made recommendations to increase throughput and reduce costs in a scaled-up network. By March 1970, the ARPANET reached the East Coast of the United States, when an IMP at BBN in Cambridge, Massachusetts was connected to the network. Thereafter, the ARPANET grew: 9 IMPs by June 1970 and 13 IMPs by December 1970, then 18 by September 1971 (when the network included 23 university and government hosts); 29 IMPs by August 1972, and 40 by September 1973. By June 1974, there were 46 IMPs, and in July 1975, the network numbered 57 IMPs. By 1981, the number was 213 host computers, with another host connecting approximately every twenty days.
Support for inter-IMP circuits of up to 230.4 kbit/s was added in 1970, although considerations of cost and IMP processing power meant this capability was not actively used.
Larry Roberts saw the ARPANET and NPL projects as complementary and sought in 1970 to connect them via a satellite link. Peter Kirstein's research group at University College London (UCL) was subsequently chosen in 1971 in place of NPL for the UK connection. In June 1973, a transatlantic satellite link connected ARPANET to the Norwegian Seismic Array (NORSAR), via the Tanum Earth Station in Sweden, and onward via a terrestrial circuit to a TIP at UCL. UCL provided a gateway for interconnection of the ARPANET with British academic networks, the first international resource sharing network, and carried out some of the earliest experimental research work on internetworking.
1971 saw the start of the use of the non-ruggedized (and therefore significantly lighter) Honeywell 316 as an IMP. It could also be configured as a Terminal Interface Processor (TIP), which provided terminal server support for up to 63 ASCII serial terminals through a multi-line controller in place of one of the hosts. The 316 featured a greater degree of integration than the 516, which made it less expensive and easier to maintain. The 316 was configured with 40 kB of core memory for a TIP. The size of core memory was later increased, to 32 kB for the IMPs, and 56 kB for TIPs, in 1973.
The ARPANET was demonstrated at the International Conference on Computer Communications in October 1972.
In 1975, BBN introduced IMP software running on the Pluribus multi-processor. These appeared in a few sites. In 1981, BBN introduced IMP software running on its own C/30 processor product.
== Networking evolution ==
=== IMP functionality ===
Because it was never a goal for the ARPANET to support IMPs from vendors other than BBN, the IMP-to-IMP protocol and message format were not standardized. However, the IMPs did nonetheless communicate amongst themselves to perform link-state routing, to do reliable forwarding of messages, and to provide remote monitoring and management functions to ARPANET's Network Control Center. Initially, each IMP had a 6-bit identifier and supported up to 4 hosts, which were identified with a 2-bit index. An ARPANET host address, therefore, consisted of both the port index on its IMP and the identifier of the IMP, which was written with either port/IMP notation or as a single byte; for example, the address of MIT-DMG (notable for hosting development of Zork) could be written as either 1/6 or 70. An upgrade in early 1976 extended the host and IMP numbering to 8-bit and 16-bit, respectively.
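The single-byte form follows from the example in the text (MIT-DMG, at port 1 on IMP 6, is written 70): the 2-bit port occupies the high bits and the 6-bit IMP identifier the low bits. A sketch of that packing, assuming exactly this layout:

```python
def to_byte(port, imp):
    """Pack a 2-bit host port and a 6-bit IMP identifier into one byte
    (the original, pre-1976 ARPANET host addressing)."""
    assert 0 <= port < 4 and 0 <= imp < 64
    return (port << 6) | imp

def from_byte(addr):
    """Split a one-byte host address back into (port, IMP)."""
    return addr >> 6, addr & 0x3F

print(to_byte(1, 6))   # 70  -> MIT-DMG, written 1/6 in port/IMP notation
print(from_byte(70))   # (1, 6)
```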
In addition to primary routing and forwarding responsibilities, the IMP ran several background programs, titled TTY, DEBUG, PARAMETER-CHANGE, DISCARD, TRACE, and STATISTICS. These were given host numbers in order to be addressed directly and provided functions independently of any connected host. For example, "TTY" allowed an on-site operator to send ARPANET packets manually via the teletype connected directly to the IMP.
=== 1822 protocol ===
The starting point for host-to-host communication on the ARPANET in 1969 was the 1822 protocol, which defined the transmission of messages to an IMP. The message format was designed to work unambiguously with a broad range of computer architectures. An 1822 message essentially consisted of a message type, a numeric host address, and a data field. To send a data message to another host, the transmitting host formatted a data message containing the destination host's address and the data message being sent, and then transmitted the message through the 1822 hardware interface. The IMP then delivered the message to its destination address, either by delivering it to a locally connected host, or by delivering it to another IMP. When the message was ultimately delivered to the destination host, the receiving IMP would transmit a Ready for Next Message (RFNM) acknowledgment to the sending host's IMP.
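The three fields described above can be sketched as a plain record. The field names and types here are illustrative only; the actual bit-level 1822 layout is not reproduced:

```python
from dataclasses import dataclass

@dataclass
class Message1822:
    """Illustrative sketch of the fields an 1822 message carried; the
    real wire format packed these into a fixed bit layout."""
    msg_type: int   # e.g. regular data message vs. control message
    host_addr: int  # numeric destination host address
    data: bytes     # payload handed to the destination host

msg = Message1822(msg_type=0, host_addr=70, data=b"LOGIN")
print(msg.host_addr)  # 70
```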
=== Network Control Protocol ===
Unlike modern Internet datagrams, the ARPANET was designed to reliably transmit 1822 messages, and to inform the host computer when it loses a message; the contemporary IP is unreliable, whereas the TCP is reliable. Nonetheless, the 1822 protocol proved inadequate for handling multiple connections among different applications residing in a host computer. This problem was addressed with the Network Control Protocol (NCP), which provided a standard method to establish reliable, flow-controlled, bidirectional communications links among different processes in different host computers. The NCP interface allowed application software to connect across the ARPANET by implementing higher-level communication protocols, an early example of the protocol layering concept later incorporated in the OSI model.
NCP was developed under the leadership of Steve Crocker, then a graduate student at UCLA. Crocker created and led the Network Working Group (NWG) which was made up of a collection of graduate students at universities and research laboratories, including Jon Postel and Vint Cerf at UCLA. They were sponsored by ARPA to carry out the development of the ARPANET and the software for the host computers that supported applications.
=== TCP/IP ===
Stephen J. Lukasik directed DARPA to focus on internetworking research in the early 1970s. Bob Kahn moved from BBN to DARPA in 1972, first as program manager for the ARPANET, under Larry Roberts, then as director of the IPTO when Roberts left to found Telenet. Kahn worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. Steve Crocker, now at DARPA, and the leaders of British and French network projects founded the International Network Working Group in 1972 and, on Crocker's recommendation, Vint Cerf, now on the faculty at Stanford University, became its Chair. This group considered how to interconnect packet switching networks with different specifications, that is, internetworking. Research led by Kahn and Cerf resulted in the formulation of the Transmission Control Program, which incorporated concepts from the French CYCLADES project directed by Louis Pouzin. Its specification was written by Cerf with Yogen Dalal and Carl Sunshine at Stanford in December 1974 (RFC 675). The following year, testing began through concurrent implementations at Stanford, BBN and University College London. At first a monolithic design, the software was redesigned as a modular protocol stack in version 3 in 1978. Version 4 was installed in the ARPANET for production use in January 1983, replacing NCP. The development of the complete Internet protocol suite by 1989, as outlined in RFC 1122 and RFC 1123, and partnerships with the telecommunication and computer industry laid the foundation for the adoption of TCP/IP as a comprehensive protocol suite as the core component of the emerging Internet.
== Operation ==
ARPA was intended to fund advanced research. The ARPANET was a research project that was communications-oriented, rather than user-oriented in design. Nonetheless, in the summer of 1975, operational control of the ARPANET passed to the Defense Communications Agency. At about this time, the first ARPANET encryption devices were deployed to support classified traffic. The ARPANET Completion Report, written in 1978 and published in 1981 jointly by BBN and DARPA, concludes that:
... it is somewhat fitting to end on the note that the ARPANET program has had a strong and direct feedback into the support and strength of computer science, from which the network, itself, sprang.
Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET).
The transatlantic connectivity with NORSAR and UCL later evolved into the SATNET. The ARPANET, SATNET and PRNET were interconnected in 1977. The DoD made TCP/IP the standard communication protocol for all military computer networking in 1980. NORSAR and University College London left the ARPANET and began using TCP/IP over SATNET in 1982. On January 1, 1983, known as flag day, TCP/IP protocols became the standard for the ARPANET, replacing the earlier Network Control Protocol.
In September 1984 work was completed on restructuring the ARPANET giving U.S. military sites their own Military Network (MILNET) for unclassified defense department communications. Both networks carried unclassified information and were connected at a small number of controlled gateways which would allow total separation in the event of an emergency. MILNET was part of the Defense Data Network (DDN). Separating the civil and military networks reduced the 113-node ARPANET by 68 nodes. After MILNET was split away, the ARPANET would continue to be used as an Internet backbone for researchers, but be slowly phased out.
== Applications ==
NCP provided a standard set of network services that could be shared by several applications running on a single host computer. This led to the evolution of application protocols that operated, more or less, independently of the underlying network service, and permitted independent advances in the underlying protocols.
The various application protocols such as TELNET for remote time-sharing access and File Transfer Protocol (FTP), the latter used to enable rudimentary electronic mail, were developed and eventually ported to run over the TCP/IP protocol suite. In the 1980s, FTP for email was replaced by the Simple Mail Transfer Protocol and, later, POP and IMAP.
Telnet was developed in 1969 beginning with RFC 15, extended in RFC 855.
The original specification for the File Transfer Protocol was written by Abhay Bhushan and published as RFC 114 on 16 April 1971. By 1973, the File Transfer Protocol (FTP) specification had been defined (RFC 354) and implemented, enabling file transfers over the ARPANET.
In 1971, Ray Tomlinson of BBN sent the first network e-mail (RFC 524, RFC 561). An ARPA study in 1973, a year after network e-mail was introduced to the ARPANET community, found that three-quarters of the traffic over the ARPANET consisted of e-mail messages. E-mail remained a very large part of the overall ARPANET traffic.
The Network Voice Protocol (NVP) specifications were defined in 1977 (RFC 741) and implemented. However, because of technical shortcomings, conference calls over the ARPANET never worked well; the contemporary Voice over Internet Protocol (packet voice) was decades away.
== Security ==
The Purdy Polynomial hash algorithm was developed for the ARPANET to protect passwords in 1971 at the request of Larry Roberts, head of ARPA at that time. It computed a polynomial of degree 2²⁴ + 17 modulo the 64-bit prime p = 2⁶⁴ − 59. The algorithm was later used by Digital Equipment Corporation (DEC) to hash passwords in the VMS operating system and is still being used for this purpose.
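The general shape of such a scheme can be sketched in Python. The coefficients and input below are placeholders, not the actual Purdy/VMS parameters; the point is that a sparse polynomial with enormous exponents can be evaluated quickly modulo a large prime, because modular exponentiation takes time logarithmic in the exponent.

```python
# Illustrative sketch only: placeholder coefficients, not the real
# Purdy/VMS parameters. Evaluates a sparse polynomial modulo the
# 64-bit prime p = 2**64 - 59 used by the algorithm.
P = 2**64 - 59

def purdy_style_hash(x, coeffs):
    """coeffs: list of (coefficient, exponent) pairs of a sparse polynomial.
    Each term uses Python's three-argument pow for fast modular
    exponentiation, so even an exponent like 2**24 + 17 is cheap."""
    return sum(c * pow(x, e, P) for c, e in coeffs) % P

# Hypothetical parameters: a leading term of degree 2**24 + 17
# plus a low-degree tail.
coeffs = [(1, 2**24 + 17), (3, 3), (7, 2), (11, 1), (13, 0)]
h = purdy_style_hash(123456789, coeffs)
print(0 <= h < P)  # True
```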
== Rules and etiquette ==
Because of its government funding, certain forms of traffic were discouraged or prohibited.
Leonard Kleinrock claims to have committed the first illegal act on the Internet, having sent a request for return of his electric razor after a meeting in England in 1973. At the time, use of the ARPANET for personal reasons was unlawful.
In 1978, against the rules of the network, Gary Thuerk of Digital Equipment Corporation (DEC) sent out the first mass email to approximately 400 potential clients via the ARPANET. He claims that this resulted in $13 million worth of sales in DEC products, and highlighted the potential of email marketing.
A 1982 handbook on computing at MIT's AI Lab stated regarding network etiquette:
It is considered illegal to use the ARPANet for anything which is not in direct support of Government business ... personal messages to other ARPANet subscribers (for example, to arrange a get-together or check and say a friendly hello) are generally not considered harmful ... Sending electronic mail over the ARPANet for commercial profit or political purposes is both anti-social and illegal. By sending such messages, you can offend many people, and it is possible to get MIT in serious trouble with the Government agencies which manage the ARPANet.
== Decommissioning ==
In 1985, the NSF funded the establishment of national supercomputing centers at several universities and provided network access and network interconnectivity with the NSFNET project in 1986. NSFNET became the Internet backbone for government agencies and universities.
The ARPANET project was formally decommissioned in 1990. The original IMPs and TIPs were phased out as the ARPANET was shut down after the introduction of the NSFNet, but some IMPs remained in service as late as July 1990.
In the wake of the decommissioning of the ARPANET on 28 February 1990, Vinton Cerf wrote the following lamentation, entitled "Requiem of the ARPANET":
It was the first, and being first, was best,
but now we lay it down to ever rest.
Now pause with me a moment, shed some tears.
For auld lang syne, for love, for years and years
of faithful service, duty done, I weep.
Lay down thy packet, now, O friend, and sleep.
-Vinton Cerf
== Legacy ==
The technological advancements and practical applications achieved through the ARPANET were instrumental in shaping modern computer networking including the Internet. Development and implementation of the concepts of packet switching, decentralized networks, and communication protocols, notably TCP/IP, laid the foundation for a global network that revolutionized communication, information sharing and collaborative research across the world.
The ARPANET was related to many other research projects, which either influenced the ARPANET design, were ancillary projects, or spun out of the ARPANET.
Senator Al Gore authored the High Performance Computing and Communication Act of 1991, commonly referred to as "The Gore Bill", after hearing the 1988 concept for a National Research Network submitted to Congress by a group chaired by Leonard Kleinrock. The bill was passed on 9 December 1991 and led to the National Information Infrastructure (NII) which Gore called the information superhighway.
The ARPANET project was honored with two IEEE Milestones, both dedicated in 2009.
== See also ==
.arpa – Internet top-level domain
Computer Networks: The Heralds of Resource Sharing – 1972 documentary film
History of the Internet
List of Internet pioneers
OGAS – Soviet internet-like project, automation of economy
Plan 55-A
Protocol Wars – Computer science debate
Telehack – ARPANET simulation
Usenet – Worldwide computer-based distributed discussion system
== References ==
== Sources ==
Evans, Claire L. (2018). Broad Band: The Untold Story of the Women Who Made the Internet. New York: Portfolio/Penguin. ISBN 978-0-7352-1175-9.
Hafner, Katie; Lyon, Matthew (1996). Where Wizards Stay Up Late: The Origins of the Internet. Simon and Schuster. ISBN 978-0-7434-6837-4.
== Further reading ==
Norberg, Arthur L.; O'Neill, Judy E. (1996). Transforming Computer Technology: Information Processing for the Pentagon, 1962–1982. Johns Hopkins University. pp. 153–196. ISBN 978-0-8018-6369-1.
A History of the ARPANET: The First Decade (PDF) (Report). Arlington, VA: Bolt, Beranek & Newman Inc. 1 April 1981. Archived from the original on 1 December 2012.
Abbate, Janet (2000). Inventing the Internet. Cambridge, MA: MIT Press. pp. 36–111. ISBN 978-0-2625-1115-5.
Banks, Michael A. (2008). On the Way to the Web: The Secret History of the Internet and Its Founders. APress/Springer Verlag. ISBN 978-1-4302-0869-3.
Salus, Peter H. (1 May 1995). Casting the Net: from ARPANET to Internet and Beyond. Addison-Wesley. ISBN 978-0-201-87674-1.
Waldrop, M. Mitchell (23 August 2001). The Dream Machine: J. C. R. Licklider and the Revolution That Made Computing Personal. New York: Viking. ISBN 978-0-670-89976-0.
"The Computer History Museum, SRI International, and BBN Celebrate the 40th Anniversary of First ARPANET Transmission". Computer History Museum. 27 October 2009.
Marill, Thomas; Roberts, Lawrence G. (1966). "Toward a cooperative network of time-shared computers". Proceedings of the November 7–10, 1966, fall joint computer conference (AFIPS '66 (Fall)). Association for Computing Machinery. pp. 425–431. doi:10.1145/1464291.1464336. S2CID 2051631. Archived from the original on 1 April 2002.
Roberts, Lawrence G. (1967). "Multiple computer networks and intercomputer communication". Proceedings of the first ACM symposium on Operating System Principles (SOSP '67). Association for Computing Machinery. pp. 3.1 – 3.6. doi:10.1145/800001.811680. S2CID 17409102. Archived from the original on 3 June 2002.
Davies, D.W.; Bartlett, K.A.; Scantlebury, R.A.; Wilkinson, P.T. (1967). "A digital communication network for computers giving rapid response at remote terminals". Proceedings of the first ACM symposium on Operating System Principles (SOSP '67). Association for Computing Machinery. pp. 2.1 – 2.17. doi:10.1145/800001.811669. S2CID 15215451.
Roberts, Lawrence G.; Wessler, Barry D. (1970). "Computer network development to achieve resource sharing". Proceedings of the May 5–7, 1970, Spring Joint Computer Conference (AFIPS '70 (Spring)). Association for Computing Machinery. pp. 543–9. doi:10.1145/1476936.1477020. S2CID 9343511.
Heart, Frank; Kahn, Robert; Ornstein, Severo; Crowther, William; Walden, David (1970). The Interface Message Processor for the ARPA Computer Network (PDF). 1970 Spring Joint Computer Conference. AFIPS Proc. Vol. 36. pp. 551–567. doi:10.1145/1476936.1477021.
Carr, Stephen; Crocker, Stephen; Cerf, Vinton (1970). Host-Host Communication Protocol in the ARPA Network. 1970 Spring Joint Computer Conference. AFIPS Proc. Vol. 36. pp. 589–598. doi:10.1145/1476936.1477024. RFC 33.
Ornstein, Severo; Heart, Frank; Crowther, William; Russell, S. B.; Rising, H. K.; Michel, A. (1972). The Terminal IMP for the ARPA Computer Network. 1972 Spring Joint Computer Conference. AFIPS Proc. Vol. 40. pp. 243–254. doi:10.1145/1478873.1478906.
McQuillan, John; Crowther, William; Cosell, Bernard; Walden, David; Heart, Frank (1972). Improvements in the Design and Performance of the ARPA Network. 1972 Fall Joint Computer Conference part II. AFIPS Proc. Vol. 41. pp. 741–754. doi:10.1145/1480083.1480096.
Feinler, E.; Postel, J. (1976). ARPANET Protocol Handbook. SRI International. OCLC 2817630. NTIS ADA027964.
Feinler, Elizabeth J.; Postel, Jonathan B. (January 1978). ARPANET Protocol Handbook. Menlo Park: Network Information Center (NIC), SRI International. ASIN B000EN742K. OCLC 7955574. NIC 7104, NTIS ADA052594.
Feinler, E.J.; Landsberden, J.M.; McGinnis, A.C. (1976). ARPANET Resource Handbook. Stanford Research Institute. OCLC 1110650114. NTIS ADA040452.
NTIS documents may be available from "National Technical Reports Library". NTIS National Technical Information Service. U.S. Department of Commerce. 2014.
Roberts, Larry (November 1978). "The Evolution of Packet Switching". Proceedings of the IEEE. 66 (11): 1307–13. doi:10.1109/PROC.1978.11141. S2CID 26876676. Archived from the original on 24 March 2016. Retrieved 3 September 2005.
Roberts, Larry (1986). "The ARPANET & Computer Networks". Proceedings of the ACM Conference on The history of personal workstations (HPW '86). Association for Computing Machinery. pp. 51–58. doi:10.1145/12178.12182. ISBN 978-0-89791-176-4. S2CID 24271168. Archived from the original on 24 March 2016.
== External links ==
"ARPANET Maps 1969 to 1977". California State University, Dominguez Hills (CSUDH). 4 January 1978. Archived from the original on 19 April 2012. Retrieved 17 May 2012.
Walden, David C. (February 2003). "Looking back at the ARPANET effort, 34 years later". Living Internet. East Sandwich, Massachusetts. Retrieved 19 March 2021.
"Images of ARPANET from 1964 onwards". The Computer History Museum. Retrieved 29 August 2004. Timeline.
"Paul Baran and the Origins of the Internet". RAND Corporation. Retrieved 3 September 2005.
Kleinrock, Leonard. "The Day the Infant Internet Uttered its First Words". UCLA. Retrieved 11 November 2004. Personal anecdote of the first message ever sent over the ARPANET
"Doug Engelbart's Role in ARPANET History". 2008. Retrieved 3 September 2009.
Waldrop, Mitch (April 2008). "DARPA and the Internet Revolution". 50 years of Bridging the Gap. DARPA. pp. 78–85. Archived from the original on 15 September 2012. Retrieved 26 August 2012.
"Robert X Cringely: A Brief History of the Internet". YouTube. Archived from the original on 20 March 2013.
"Oral history interviews". Includes conversations with multiple individuals who influenced DARPA and early computer networks.
In the mathematical discipline of graph theory, a graph labeling is the assignment of labels, traditionally represented by integers, to edges and/or vertices of a graph.
Formally, given a graph G = (V, E), a vertex labeling is a function of V to a set of labels; a graph with such a function defined is called a vertex-labeled graph. Likewise, an edge labeling is a function of E to a set of labels. In this case, the graph is called an edge-labeled graph.
When the edge labels are members of an ordered set (e.g., the real numbers), it may be called a weighted graph.
When used without qualification, the term labeled graph generally refers to a vertex-labeled graph with all labels distinct. Such a graph may equivalently be labeled by the consecutive integers { 1, …, |V| } , where |V| is the number of vertices in the graph. For many applications, the edges or vertices are given labels that are meaningful in the associated domain. For example, the edges may be assigned weights representing the "cost" of traversing between the incident vertices.
In the above definition a graph is understood to be a finite undirected simple graph. However, the notion of labeling may be applied to all extensions and generalizations of graphs. For example, in automata theory and formal language theory it is convenient to consider labeled multigraphs, i.e., a pair of vertices may be connected by several labeled edges.
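In code, vertex and edge labelings are simply mappings from the vertex or edge set to a label set; a minimal illustration (the graph and label names are arbitrary):

```python
# A vertex-labeled and edge-labeled graph as plain dictionaries:
# a labeling is just a function from V (or E) to a set of labels.
V = {1, 2, 3}
E = {(1, 2), (2, 3)}
vertex_label = {1: "start", 2: "mid", 3: "end"}   # function V -> labels
edge_weight = {(1, 2): 2.5, (2, 3): 0.7}          # ordered labels -> a weighted graph
print(edge_weight[(1, 2)])  # 2.5
```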
== History ==
Most graph labelings trace their origins to labelings presented by Alexander Rosa in his 1967 paper. Rosa identified three types of labelings, which he called α-, β-, and ρ-labelings. β-labelings were later renamed as "graceful" by Solomon Golomb, and the name has been popular since.
== Special cases ==
=== Graceful labeling ===
A graph is known as graceful if its vertices are labeled from 0 to |E|, the size of the graph, and if this vertex labeling induces an edge labeling from 1 to |E|. For any edge e, the label of e is the positive difference between the labels of the two vertices incident with e. In other words, if e is incident with vertices labeled i and j, then e will be labeled |i − j|. Thus, a graph G = (V, E) is graceful if and only if there exists an injection from V to {0, ..., |E|} that induces a bijection from E to {1, ..., |E|}.
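The definition above translates directly into a checker; a small sketch, assuming the graph is given as a vertex list plus edge pairs:

```python
def is_graceful(vertices, edges, labeling):
    """Check a graceful labeling: vertex labels are distinct values in
    {0, ..., |E|} and the induced edge labels |i - j| are exactly
    {1, ..., |E|}."""
    m = len(edges)
    labels = [labeling[v] for v in vertices]
    if len(set(labels)) != len(labels) or not all(0 <= x <= m for x in labels):
        return False
    edge_labels = {abs(labeling[u] - labeling[v]) for (u, v) in edges}
    return edge_labels == set(range(1, m + 1))

# The path on 4 vertices (3 edges) is graceful:
print(is_graceful([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)],
                  {0: 0, 1: 3, 2: 1, 3: 2}))  # True
```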
In his original paper, Rosa proved that all Eulerian graphs whose size is congruent to 1 or 2 (mod 4) are not graceful. Whether or not certain families of graphs are graceful is an area of graph theory under extensive study. Arguably, the largest unproven conjecture in graph labeling is the Ringel–Kotzig conjecture, which hypothesizes that all trees are graceful. This has been proven for all paths, caterpillars, and many other infinite families of trees. Anton Kotzig himself has called the effort to prove the conjecture a "disease".
=== Edge-graceful labeling ===
An edge-graceful labeling on a simple graph without loops or multiple edges on p vertices and q edges is a labeling of the edges by distinct integers in {1, …, q} such that the labeling on the vertices induced by labeling a vertex with the sum of the incident edges taken modulo p assigns all values from 0 to p − 1 to the vertices. A graph G is said to be "edge-graceful" if it admits an edge-graceful labeling.
Edge-graceful labelings were first introduced by Sheng-Ping Lo in 1985.
A necessary condition for a graph to be edge-graceful is "Lo's condition":
q(q + 1) ≡ p(p − 1)/2 (mod p)
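Both the definition and Lo's condition are straightforward to verify programmatically; a sketch, assuming vertices are numbered 0 to p − 1:

```python
def satisfies_lo_condition(p, q):
    """Lo's necessary condition: q(q+1) must be congruent to
    p(p-1)/2 modulo p."""
    return (q * (q + 1)) % p == (p * (p - 1) // 2) % p

def is_edge_graceful(p, edges, edge_labeling):
    """Edge labels must be distinct values in {1, ..., q}; the induced
    vertex labels (sum of incident edge labels, taken mod p) must
    cover {0, ..., p - 1}. Vertices are assumed numbered 0 .. p - 1."""
    q = len(edges)
    labels = set(edge_labeling.values())
    if len(labels) != q or not labels <= set(range(1, q + 1)):
        return False
    sums = [0] * p
    for (u, v), lab in edge_labeling.items():
        sums[u] += lab
        sums[v] += lab
    return {s % p for s in sums} == set(range(p))

# The triangle C3 (p = q = 3) is edge-graceful:
labeling = {(0, 1): 1, (1, 2): 2, (2, 0): 3}
print(satisfies_lo_condition(3, 3),
      is_edge_graceful(3, [(0, 1), (1, 2), (2, 0)], labeling))  # True True
```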
=== Harmonious labeling ===
A "harmonious labeling" on a graph G is an injection from the vertices of G to the group of integers modulo k, where k is the number of edges of G, that induces a bijection between the edges of G and the numbers modulo k by taking the edge label for an edge (x, y) to be the sum of the labels of the two vertices x, y (mod k). A "harmonious graph" is one that has a harmonious labeling. Odd cycles are harmonious, as are Petersen graphs. It is conjectured that trees are all harmonious if one vertex label is allowed to be reused. The seven-page book graph K1,7 × K2 provides an example of a graph that is not harmonious.
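A checker following the definition above can be sketched as follows (vertices must map injectively into the integers mod k, where k = |E|):

```python
def is_harmonious(vertices, edges, labeling):
    """Check a harmonious labeling: `labeling` must map the vertices
    injectively into Z_k (k = |E|), and the induced edge labels
    (label(x) + label(y)) mod k must hit every value in Z_k exactly once."""
    k = len(edges)
    vals = [labeling[v] % k for v in vertices]
    if len(set(vals)) != len(vals):
        return False  # not injective into the integers mod k
    edge_labels = [(labeling[x] + labeling[y]) % k for (x, y) in edges]
    return sorted(edge_labels) == list(range(k))

# The odd cycle C3 is harmonious:
print(is_harmonious([0, 1, 2], [(0, 1), (1, 2), (2, 0)],
                    {0: 0, 1: 1, 2: 2}))  # True
```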
=== Graph coloring ===
A graph coloring is a subclass of graph labelings. Vertex colorings assign different labels to adjacent vertices, while edge colorings assign different labels to adjacent edges.
=== Lucky labeling ===
A lucky labeling of a graph G is an assignment of positive integers to the vertices of G such that if S(v) denotes the sum of the labels on the neighbors of v, then S is a vertex coloring of G. The "lucky number" of G is the least k such that G has a lucky labeling with the integers {1, …, k}.
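A small checker for this definition, assuming the graph is given as a dict of neighbor lists:

```python
def is_lucky(adj, labeling):
    """`adj` maps each vertex to its neighbors. The labeling is lucky if
    S(v), the sum of the labels on the neighbors of v, differs between
    every pair of adjacent vertices (i.e. S is a proper coloring)."""
    S = {v: sum(labeling[u] for u in adj[v]) for v in adj}
    return all(S[v] != S[u] for v in adj for u in adj[v])

# For the path 0-1-2, labeling every vertex 1 is already lucky
# (S = 1, 2, 1), so this path's lucky number is 1:
adj = {0: [1], 1: [0, 2], 2: [1]}
print(is_lucky(adj, {0: 1, 1: 1, 2: 1}))  # True
```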
== References ==
A network bridge is a computer networking device that creates a single, aggregate network from multiple communication networks or network segments. This function is called network bridging. Bridging is distinct from routing. Routing allows multiple networks to communicate independently and yet remain separate, whereas bridging connects two separate networks as if they were a single network. In the OSI model, bridging is performed in the data link layer (layer 2). If one or more segments of the bridged network are wireless, the device is known as a wireless bridge.
The main types of network bridging technologies are simple bridging, multiport bridging, and learning or transparent bridging.
== Transparent bridging ==
Transparent bridging uses a table called the forwarding information base to control the forwarding of frames between network segments. The table starts empty and entries are added as the bridge receives frames. If a destination address entry is not found in the table, the frame is forwarded to all other ports of the bridge, flooding the frame to all segments except the one from which it was received. By means of these flooded frames, a host on the destination network will respond and a forwarding database entry will be created. Both source and destination addresses are used in this process: source addresses are recorded in entries in the table, while destination addresses are looked up in the table and matched to the proper segment to send the frame to. Digital Equipment Corporation (DEC) originally developed the technology in 1983 and introduced the LANBridge 100 that implemented it in 1986.
In the context of a two-port bridge, the forwarding information base can be seen as a filtering database. A bridge reads a frame's destination address and decides to either forward or filter. If the bridge determines that the destination host is on another segment on the network, it forwards the frame to that segment. If the destination address belongs to the same segment as the source address, the bridge filters the frame, preventing it from reaching the other network where it is not needed.
Transparent bridging can also operate over devices with more than two ports. As an example, consider a bridge connected to three hosts, A, B, and C. The bridge has three ports. A is connected to bridge port 1, B is connected to bridge port 2, C is connected to bridge port 3. A sends a frame addressed to B to the bridge. The bridge examines the source address of the frame and creates an address and port number entry for host A in its forwarding table. The bridge examines the destination address of the frame and does not find it in its forwarding table so it floods (broadcasts) it to all other ports: 2 and 3. The frame is received by hosts B and C. Host C examines the destination address and ignores the frame as it does not match with its address. Host B recognizes a destination address match and generates a response to A. On the return path, the bridge adds an address and port number entry for B to its forwarding table. The bridge already has A's address in its forwarding table so it forwards the response only to port 1. Host C or any other hosts on port 3 are not burdened with the response. Two-way communication is now possible between A and B without any further flooding to the network. Now, if A sends a frame addressed to C, the same procedure will be used, but this time the bridge will not create a new forwarding-table entry for A's address/port because it has already done so.
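The learning behavior described in this walkthrough can be sketched as a small simulation. The class and port numbering here are illustrative, not a real bridge implementation:

```python
class LearningBridge:
    """Minimal sketch of transparent bridging: learn source addresses,
    forward frames with known destinations, flood unknown ones."""
    def __init__(self, ports):
        self.ports = ports
        self.fib = {}  # forwarding information base: address -> port

    def receive(self, ingress_port, src, dst):
        """Return the list of ports the frame is sent out on."""
        self.fib[src] = ingress_port  # learn (or refresh) the source location
        if dst in self.fib:
            if self.fib[dst] == ingress_port:
                return []                 # filter: destination on same segment
            return [self.fib[dst]]        # forward to the known port
        # unknown destination: flood to all ports except the ingress one
        return [p for p in self.ports if p != ingress_port]

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.receive(1, "A", "B"))  # B unknown -> flood: [2, 3]
print(bridge.receive(2, "B", "A"))  # A learned -> forward: [1]
print(bridge.receive(1, "A", "C"))  # C unknown -> flood again: [2, 3]
```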
Bridging is called transparent when the frame format and its addressing are not changed substantially. Non-transparent bridging is required when the frame addressing schemes on the two sides of a bridge are incompatible with each other, e.g. between ARCNET with local addressing and Ethernet using IEEE MAC addresses, which requires translation. In practice, however, such incompatible networks are usually interconnected by routers rather than bridges.
== Simple bridging ==
A simple bridge connects two network segments, typically by operating transparently and deciding on a frame-by-frame basis whether or not to forward from one network to the other. A store and forward technique is typically used so, as part of forwarding, the frame integrity is verified on the source network and CSMA/CD delays are accommodated on the destination network. In contrast to repeaters which simply extend the maximum span of a segment, bridges only forward frames that are required to cross the bridge. Additionally, bridges reduce collisions by creating a separate collision domain on either side of the bridge.
== Multiport bridging ==
A multiport bridge connects multiple networks and operates transparently to decide on a frame-by-frame basis whether to forward traffic. Additionally, a multiport bridge must decide where to forward traffic. Like the simple bridge, a multiport bridge typically uses store and forward operation. The multiport bridge function serves as the basis for network switches.
== Implementation ==
The forwarding information base stored in content-addressable memory (CAM) is initially empty. For each received Ethernet frame the switch learns from the frame's source MAC address and adds this together with an interface identifier to the forwarding information base. The switch then forwards the frame to the interface found in the CAM based on the frame's destination MAC address. If the destination address is unknown the switch sends the frame out on all interfaces (except the ingress interface). This behavior is called unicast flooding.
== Forwarding ==
Once a bridge learns the addresses of its connected nodes, it forwards data link layer frames using a layer-2 forwarding method. There are four forwarding methods a bridge can use, of which the second through fourth methods were performance-increasing methods when used on switch products with the same input and output port bandwidths:
Store and forward: the switch buffers and verifies each frame before forwarding it; a frame is received in its entirety before it is forwarded.
Cut through: the switch starts forwarding after the frame's destination address is received. There is no error checking with this method. When the outgoing port is busy at the time, the switch falls back to store-and-forward operation. Also, when the egress port is running at a faster data rate than the ingress port, store-and-forward is usually used.
Fragment free: a method that attempts to retain the benefits of both store and forward and cut through. Fragment free checks the first 64 bytes of the frame, where addressing information is stored. According to Ethernet specifications, collisions should be detected during the first 64 bytes of the frame, so frame transmissions that are aborted because of a collision will not be forwarded. Error checking of the actual data in the packet is left for the end device.
Adaptive switching: a method of automatically selecting between the other three modes.
== Shortest Path Bridging ==
Shortest Path Bridging (SPB), specified in the IEEE 802.1aq standard and based on Dijkstra's algorithm, is a computer networking technology intended to simplify the creation and configuration of networks, while enabling multipath routing. It is a proposed replacement for Spanning Tree Protocol which blocks any redundant paths that could result in a switching loop. SPB allows all paths to be active with multiple equal-cost paths. SPB also increases the number of VLANs allowed on a layer-2 network.
TRILL (Transparent Interconnection of Lots of Links) is the successor to Spanning Tree Protocol, both having been created by the same person, Radia Perlman. The catalyst for TRILL was an event at Beth Israel Deaconess Medical Center which began on 13 November 2002. The concept of Rbridges [sic] was first proposed to the Institute of Electrical and Electronics Engineers in 2004; the IEEE rejected what came to be known as TRILL in 2005 and, from 2006 through 2012, devised an incompatible variation known as Shortest Path Bridging.
== See also ==
Audio Video Bridging – Specifications for synchronized, low-latency streaming
IEEE 802.1D – Standard which includes bridging, Spanning Tree Protocol and others
IEEE 802.1Q – IEEE networking standard supporting VLANs
IEEE 802.1ah-2008 – Standard for bridging over a provider's network
Promiscuous mode – Network interface controller mode that eavesdrops on messages intended for others
== References ==
This is a glossary of graph theory. Graph theory is the study of graphs, systems of nodes or vertices connected in pairs by lines or edges.
== Symbols ==
Square brackets [ ]
G[S] is the induced subgraph of a graph G for vertex subset S.
Prime symbol '
The prime symbol is often used to modify notation for graph invariants so that it applies to the line graph instead of the given graph. For instance, α(G) is the independence number of a graph; α′(G) is the matching number of the graph, which equals the independence number of its line graph. Similarly, χ(G) is the chromatic number of a graph; χ ′(G) is the chromatic index of the graph, which equals the chromatic number of its line graph.
== A ==
absorbing
An absorbing set A of a directed graph G is a set of vertices such that for any vertex v ∈ G ∖ A, there is an edge from v towards a vertex of A.
achromatic
The achromatic number of a graph is the maximum number of colors in a complete coloring.
acyclic
1. A graph is acyclic if it has no cycles. An undirected acyclic graph is the same thing as a forest. An acyclic directed graph, which is a digraph without directed cycles, is often called a directed acyclic graph, especially in computer science.
2. An acyclic coloring of an undirected graph is a proper coloring in which every two color classes induce a forest.
adjacency matrix
The adjacency matrix of a graph is a matrix whose rows and columns are both indexed by vertices of the graph, with a one in the cell for row i and column j when vertices i and j are adjacent, and a zero otherwise.
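A direct construction from the definition, where the vertex order fixes the row and column indexing:

```python
def adjacency_matrix(vertices, edges):
    """Build the 0/1 adjacency matrix of an undirected graph:
    cell (i, j) is 1 exactly when vertices i and j are adjacent."""
    index = {v: i for i, v in enumerate(vertices)}
    n = len(vertices)
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[index[u]][index[v]] = 1
        A[index[v]][index[u]] = 1  # symmetric for an undirected graph
    return A

print(adjacency_matrix(["a", "b", "c"], [("a", "b"), ("b", "c")]))
# [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
```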
adjacent
1. The relation between two vertices that are both endpoints of the same edge.
2. The relation between two distinct edges that share an end vertex.
α
For a graph G, α(G) (using the Greek letter alpha) is its independence number (see independent), and α′(G) is its matching number (see matching).
alternating
In a graph with a matching, an alternating path is a path whose edges alternate between matched and unmatched edges. An alternating cycle is, similarly, a cycle whose edges alternate between matched and unmatched edges. An augmenting path is an alternating path that starts and ends at unsaturated vertices. A larger matching can be found as the symmetric difference of the matching and the augmenting path; a matching is maximum if and only if it has no augmenting path.
antichain
In a directed acyclic graph, a subset S of vertices that are pairwise incomparable, i.e., for any two vertices x and y in S, there is no directed path from x to y or from y to x. Inspired by the notion of antichains in partially ordered sets.
anti-edge
Synonym for non-edge, a pair of non-adjacent vertices.
anti-triangle
A three-vertex independent set, the complement of a triangle.
apex
1. An apex graph is a graph in which one vertex can be removed, leaving a planar subgraph. The removed vertex is called the apex. A k-apex graph is a graph that can be made planar by the removal of k vertices.
2. Synonym for universal vertex, a vertex adjacent to all other vertices.
arborescence
Synonym for a rooted and directed tree; see tree.
arc
See edge.
arrow
An ordered pair of vertices, such as an edge in a directed graph. An arrow (x, y) has a tail x, a head y, and a direction from x to y; y is said to be the direct successor to x and x the direct predecessor to y. The arrow (y, x) is the inverted arrow of the arrow (x, y).
articulation point
A vertex in a connected graph whose removal would disconnect the graph. More generally, a vertex whose removal increases the number of components.
-ary
A k-ary tree is a rooted tree in which every internal vertex has no more than k children. A 1-ary tree is just a path. A 2-ary tree is also called a binary tree, although that term more properly refers to 2-ary trees in which the children of each node are distinguished as being left or right children (with at most one of each type). A k-ary tree is said to be complete if every internal vertex has exactly k children.
augmenting
A special type of alternating path; see alternating.
automorphism
A graph automorphism is a symmetry of a graph, an isomorphism from the graph to itself.
== B ==
bag
One of the sets of vertices in a tree decomposition.
balanced
A bipartite or multipartite graph is balanced if each two subsets of its vertex partition have sizes within one of each other.
ball
A ball (also known as a neighborhood ball or distance ball) is the set of all vertices that are at most distance r from a vertex. More formally, for a given vertex v and radius r, the ball B(v,r) consists of all vertices whose shortest path distance to v is less than or equal to r.
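The definition translates directly into a breadth-first search, sketched here in Python (the adjacency-list dictionary `adj` is an assumed representation, not from the glossary):

```python
from collections import deque

def ball(adj, v, r):
    """All vertices at shortest-path distance at most r from v, by BFS."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == r:
            continue  # neighbours of u would lie outside the ball
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return set(dist)

# Path graph 0-1-2-3-4: the ball of radius 2 around 0 is {0, 1, 2}.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
```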
bandwidth
The bandwidth of a graph G is the minimum, over all orderings of vertices of G, of the length of the longest edge (the number of steps in the ordering between its two endpoints). It is also one less than the size of the maximum clique in a proper interval completion of G, chosen to minimize the clique size.
biclique
Synonym for complete bipartite graph or complete bipartite subgraph; see complete.
biconnected
Usually a synonym for 2-vertex-connected, but sometimes includes K2 though it is not 2-connected. See connected; for biconnected components, see component.
binding number
The smallest possible ratio of the number of neighbors of a proper subset of vertices to the size of the subset.
bipartite
A bipartite graph is a graph whose vertices can be divided into two disjoint sets such that the vertices in one set are not connected to each other, but may be connected to vertices in the other set. Put another way, a bipartite graph is a graph with no odd cycles; equivalently, it is a graph that may be properly colored with two colors. Bipartite graphs are often written G = (U,V,E) where U and V are the subsets of vertices of each color. However, unless the graph is connected, it may not have a unique 2-coloring.
biregular
A biregular graph is a bipartite graph in which there are only two different vertex degrees, one for each set of the vertex bipartition.
block
1. A block of a graph G is a maximal subgraph which is either an isolated vertex, a bridge edge, or a 2-connected subgraph. If a block is 2-connected, every pair of vertices in it belong to a common cycle. Every edge of a graph belongs in exactly one block.
2. The block graph of a graph G is another graph whose vertices are the blocks of G, with an edge connecting two vertices when the corresponding blocks share an articulation point; that is, it is the intersection graph of the blocks of G. The block graph of any graph is a forest.
3. The block-cut (or block-cutpoint) graph of a graph G is a bipartite graph where one partite set consists of the cut-vertices of G, and the other has a vertex bi for each block Bi of G. When G is connected, its block-cutpoint graph is a tree.
4. A block graph (also called a clique tree if connected, and sometimes erroneously called a Husimi tree) is a graph all of whose blocks are complete graphs. A forest is a block graph; so in particular the block graph of any graph is a block graph, and every block graph may be constructed as the block graph of a graph.
bond
A minimal cut-set: a set of edges whose removal disconnects the graph, for which no proper subset has the same property.
book
1. A book, book graph, or triangular book is a complete tripartite graph K1,1,n; a collection of n triangles joined at a shared edge.
2. Another type of graph, also called a book, or a quadrilateral book, is a collection of 4-cycles joined at a shared edge; the Cartesian product of a star with an edge.
3. A book embedding is an embedding of a graph onto a topological book, a space formed by joining a collection of half-planes along a shared line. Usually, the vertices of the embedding are required to be on the line, which is called the spine of the embedding, and the edges of the embedding are required to lie within a single half-plane, one of the pages of the book.
boundary
1. In a graph embedding, a boundary walk is the subgraph containing all edges and vertices incident to a face.
bramble
A bramble is a collection of mutually touching connected subgraphs, where two subgraphs touch if they share a vertex or each includes one endpoint of an edge. The order of a bramble is the smallest size of a set of vertices that has a nonempty intersection with all of the subgraphs. The treewidth of a graph is the maximum order of any of its brambles.
branch
A path of degree-two vertices, ending at vertices whose degree is unequal to two.
branch-decomposition
A branch-decomposition of G is a hierarchical clustering of the edges of G, represented by an unrooted binary tree with its leaves labeled by the edges of G. The width of a branch-decomposition is the maximum, over edges e of this binary tree, of the number of shared vertices between the subgraphs determined by the edges of G in the two subtrees separated by e. The branchwidth of G is the minimum width of any branch-decomposition of G.
branchwidth
See branch-decomposition.
bridge
1. A bridge, isthmus, or cut edge is an edge whose removal would disconnect the graph. A bridgeless graph is one that has no bridges; equivalently, a 2-edge-connected graph.
2. A bridge of a subgraph H is a maximal connected subgraph separated from the rest of the graph by H. That is, it is a maximal subgraph that is edge-disjoint from H and in which each two vertices and edges belong to a path that is internally disjoint from H. H may be a set of vertices. A chord is a one-edge bridge.
In planarity testing, H is a cycle and a peripheral cycle is a cycle with at most one bridge; it must be a face boundary in any planar embedding of its graph.
3. A bridge of a cycle can also mean a path that connects two vertices of a cycle but is shorter than either of the paths in the cycle connecting the same two vertices. A bridged graph is a graph in which every cycle of four or more vertices has a bridge.
bridgeless
A bridgeless or isthmus-free graph is a graph that has no bridge edges (i.e., isthmi); that is, each connected component is a 2-edge-connected graph.
butterfly
1. The butterfly graph has five vertices and six edges; it is formed by two triangles that share a vertex.
2. The butterfly network is a graph used as a network architecture in distributed computing, closely related to the cube-connected cycles.
== C ==
C
Cn is an n-vertex cycle graph; see cycle.
cactus
A cactus graph, cactus tree, cactus, or Husimi tree is a connected graph in which each edge belongs to at most one cycle. Its blocks are cycles or single edges. If, in addition, each vertex belongs to at most two blocks, then it is called a Christmas cactus.
cage
A cage is a regular graph with the smallest possible order for its girth.
canonical
canonization
A canonical form of a graph is an invariant such that two graphs have equal invariants if and only if they are isomorphic. Canonical forms may also be called canonical invariants or complete invariants, and are sometimes defined only for the graphs within a particular family of graphs. Graph canonization is the process of computing a canonical form.
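For very small graphs, a canonical form can be computed by brute force over all vertex relabelings; this Python sketch (illustrative only, exponential in the number of vertices) uses the lexicographically smallest relabeled edge list as the invariant:

```python
from itertools import permutations

def canonical_form(n, edges):
    """Lexicographically smallest sorted edge list over all relabelings
    of an n-vertex undirected graph; equal iff the graphs are isomorphic."""
    edge_set = {frozenset(e) for e in edges}
    best = None
    for perm in permutations(range(n)):
        relabeled = sorted(tuple(sorted((perm[u], perm[v]))) for u, v in edge_set)
        if best is None or relabeled < best:
            best = relabeled
    return tuple(best)

# Two isomorphic 3-vertex paths get the same canonical form:
a = canonical_form(3, [(0, 1), (1, 2)])
b = canonical_form(3, [(0, 2), (2, 1)])
```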
card
A graph formed from a given graph by deleting one vertex, especially in the context of the reconstruction conjecture. See also deck, the multiset of all cards of a graph.
carving width
Carving width is a notion of graph width analogous to branchwidth, but using hierarchical clusterings of vertices instead of hierarchical clusterings of edges.
caterpillar
A caterpillar tree or caterpillar is a tree in which the internal nodes induce a path.
center
The center of a graph is the set of vertices of minimum eccentricity.
centroid
A centroid of a tree is a vertex v such that if rooted at v, no other vertex has subtree size greater than half the size of the tree.
chain
1. Synonym for walk.
2. When applying methods from algebraic topology to graphs, an element of a chain complex, namely a set of vertices or a set of edges.
Cheeger constant
See expansion.
cherry
A cherry is a path on three vertices.
χ
χ(G) (using the Greek letter chi) is the chromatic number of G and χ′(G) is its chromatic index; see chromatic and coloring.
child
In a rooted tree, a child of a vertex v is a neighbor of v along an outgoing edge, one that is directed away from the root.
chord
chordal
1. A chord of a cycle is an edge that does not belong to the cycle, for which both endpoints belong to the cycle.
2. A chordal graph is a graph in which every cycle of four or more vertices has a chord, so the only induced cycles are triangles.
3. A strongly chordal graph is a chordal graph in which every cycle of length six or more has an odd chord.
4. A chordal bipartite graph is not chordal (unless it is a forest); it is a bipartite graph in which every cycle of six or more vertices has a chord, so the only induced cycles are 4-cycles.
5. A chord of a circle is a line segment connecting two points on the circle; the intersection graph of a collection of chords is called a circle graph.
chromatic
Having to do with coloring; see color. Chromatic graph theory is the theory of graph coloring. The chromatic number χ(G) is the minimum number of colors needed in a proper coloring of G. χ′(G) is the chromatic index of G, the minimum number of colors needed in a proper edge coloring of G.
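The chromatic number of a small graph can be found by exhaustive search; this Python sketch (illustrative, exponential in the number of vertices) tries every assignment of k colors for increasing k:

```python
from itertools import product

def chromatic_number(vertices, edges):
    """Smallest k for which a proper k-coloring exists (brute force)."""
    vs = list(vertices)
    for k in range(1, len(vs) + 1):
        for colors in product(range(k), repeat=len(vs)):
            c = dict(zip(vs, colors))
            if all(c[u] != c[v] for u, v in edges):  # proper coloring?
                return k
    return 0  # only reached for an empty vertex set

# An odd cycle C5 needs 3 colors; a nonempty bipartite graph needs 2.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
```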
choosable
choosability
A graph is k-choosable if it has a list coloring whenever each vertex has a list of k available colors. The choosability of the graph is the smallest k for which it is k-choosable.
circle
A circle graph is the intersection graph of chords of a circle.
circuit
A circuit may refer to a closed trail or an element of the cycle space (an Eulerian spanning subgraph). The circuit rank of a graph is the dimension of its cycle space.
circumference
The circumference of a graph is the length of its longest simple cycle. The graph is Hamiltonian if and only if its circumference equals its order.
class
1. A class of graphs or family of graphs is a (usually infinite) collection of graphs, often defined as the graphs having some specific property. The word "class" is used rather than "set" because, unless special restrictions are made (such as restricting the vertices to be drawn from a particular set, and defining edges to be sets of two vertices) classes of graphs are usually not sets when formalized using set theory.
2. A color class of a colored graph is the set of vertices or edges having one particular color.
3. In the context of Vizing's theorem, on edge coloring simple graphs, a graph is said to be of class one if its chromatic index equals its maximum degree, and class two if its chromatic index equals one plus the maximum degree. According to Vizing's theorem, all simple graphs are either of class one or class two.
claw
A claw is a tree with one internal vertex and three leaves, or equivalently the complete bipartite graph K1,3. A claw-free graph is a graph that does not have an induced subgraph that is a claw.
clique
A clique is a set of mutually adjacent vertices (or the complete subgraph induced by that set). Sometimes a clique is defined as a maximal set of mutually adjacent vertices (or maximal complete subgraph), one that is not part of any larger such set (or subgraph). A k-clique is a clique of order k. The clique number ω(G) of a graph G is the order of its largest clique. The clique graph of a graph G is the intersection graph of the maximal cliques in G. See also biclique, a complete bipartite subgraph.
clique tree
A synonym for a block graph.
clique-width
The clique-width of a graph G is the minimum number of distinct labels needed to construct G by operations that create a labeled vertex, form the disjoint union of two labeled graphs, add an edge connecting all pairs of vertices with given labels, or relabel all vertices with a given label. The graphs of clique-width at most 2 are exactly the cographs.
closed
1. A closed neighborhood is one that includes its central vertex; see neighbourhood.
2. A closed walk is one that starts and ends at the same vertex; see walk.
3. A graph is transitively closed if it equals its own transitive closure; see transitive.
4. A graph property is closed under some operation on graphs if, whenever the argument or arguments to the operation have the property, then so does the result. For instance, hereditary properties are closed under induced subgraphs; monotone properties are closed under subgraphs; and minor-closed properties are closed under minors.
closure
1. For the transitive closure of a directed graph, see transitive.
2. A closure of a directed graph is a set of vertices that have no outgoing edges to vertices outside the closure. For instance, a sink is a one-vertex closure. The closure problem is the problem of finding a closure of minimum or maximum weight.
co-
This prefix has various meanings usually involving complement graphs. For instance, a cograph is a graph produced by operations that include complementation; a cocoloring is a coloring in which each vertex induces either an independent set (as in proper coloring) or a clique (as in a coloring of the complement).
color
coloring
1. A graph coloring is a labeling of the vertices of a graph by elements from a given set of colors, or equivalently a partition of the vertices into subsets, called "color classes", each of which is associated with one of the colors.
2. Some authors use "coloring", without qualification, to mean a proper coloring, one that assigns different colors to the endpoints of each edge. In graph coloring, the goal is to find a proper coloring that uses as few colors as possible; for instance, bipartite graphs are the graphs that have colorings with only two colors, and the four color theorem states that every planar graph can be colored with at most four colors. A graph is said to be k-colored if it has been (properly) colored with k colors, and k-colorable or k-chromatic if this is possible.
3. Many variations of coloring have been studied, including edge coloring (coloring edges so that no two edges with the same endpoint share a color), list coloring (proper coloring with each vertex restricted to a subset of the available colors), acyclic coloring (every 2-colored subgraph is acyclic), co-coloring (every color class induces an independent set or a clique), complete coloring (every two color classes share an edge), and total coloring (both edges and vertices are colored).
4. The coloring number of a graph is one plus the degeneracy. It is so called because applying a greedy coloring algorithm to a degeneracy ordering of the graph uses at most this many colors.
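The greedy coloring algorithm mentioned in item 4 can be sketched in Python (an illustrative fragment; `adj` is an assumed adjacency-list dictionary):

```python
def greedy_coloring(order, adj):
    """Color vertices in the given order, assigning each the smallest color
    unused by its already-colored neighbours. On a degeneracy ordering of a
    k-degenerate graph this uses at most k + 1 colors, the coloring number."""
    color = {}
    for v in order:
        used = {color[w] for w in adj[v] if w in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# A triangle forces 3 colors in any order:
tri = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
coloring = greedy_coloring([0, 1, 2], tri)
```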
comparability
An undirected graph is a comparability graph if its vertices are the elements of a partially ordered set and two vertices are adjacent when they are comparable in the partial order. Equivalently, a comparability graph is a graph that has a transitive orientation. Many other classes of graphs can be defined as the comparability graphs of special types of partial order.
complement
The complement graph of a simple graph G, often written G̅, is another graph on the same vertex set as G, with an edge for each two vertices that are not adjacent in G.
complete
1. A complete graph is one in which every two vertices are adjacent: all edges that could exist are present. A complete graph with n vertices is often denoted Kn. A complete bipartite graph is one in which every two vertices on opposite sides of the partition of vertices are adjacent. A complete bipartite graph with a vertices on one side of the partition and b vertices on the other side is often denoted Ka,b. The same terminology and notation has also been extended to complete multipartite graphs, graphs in which the vertices are divided into more than two subsets and every pair of vertices in different subsets are adjacent; if the numbers of vertices in the subsets are a, b, c, ... then this graph is denoted Ka, b, c, ....
2. A completion of a given graph is a supergraph that has some desired property. For instance, a chordal completion is a supergraph that is a chordal graph.
3. A complete matching is a synonym for a perfect matching; see matching.
4. A complete coloring is a proper coloring in which each pair of colors is used for the endpoints of at least one edge. Every coloring with a minimum number of colors is complete, but there may exist complete colorings with larger numbers of colors. The achromatic number of a graph is the maximum number of colors in a complete coloring.
5. A complete invariant of a graph is a synonym for a canonical form, an invariant that has different values for non-isomorphic graphs.
component
A connected component of a graph is a maximal connected subgraph. The term is also used for maximal subgraphs or subsets of a graph's vertices that have some higher order of connectivity, including biconnected components, triconnected components, and strongly connected components.
condensation
The condensation of a directed graph G is a directed acyclic graph with one vertex for each strongly connected component of G, and an edge connecting pairs of components that contain the two endpoints of at least one edge in G.
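A condensation can be computed from the strongly connected components; this Python sketch (illustrative, using Kosaraju's two-pass algorithm on an assumed adjacency-list dictionary) labels each vertex with a component id and collects the edges between components:

```python
def condensation(adj):
    """Component labels and condensation edges of a directed graph."""
    # First pass: record vertices in order of DFS finishing time.
    visited, order = set(), []
    def dfs1(u):
        visited.add(u)
        for w in adj[u]:
            if w not in visited:
                dfs1(w)
        order.append(u)
    for u in adj:
        if u not in visited:
            dfs1(u)
    # Second pass on the reversed graph, in reverse finishing order.
    radj = {u: [] for u in adj}
    for u in adj:
        for w in adj[u]:
            radj[w].append(u)
    comp = {}
    for u in reversed(order):
        if u in comp:
            continue
        cid = len(set(comp.values()))  # next unused component id
        stack = [u]
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp[x] = cid
            stack.extend(w for w in radj[x] if w not in comp)
    cedges = {(comp[u], comp[w]) for u in adj for w in adj[u] if comp[u] != comp[w]}
    return comp, cedges

# 0 <-> 1 form one strong component; the edge 1 -> 2 survives as a
# condensation edge between the two components.
g = {0: [1], 1: [0, 2], 2: []}
comp, cedges = condensation(g)
```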
cone
A graph that contains a universal vertex.
connect
Cause to be connected.
connected
A connected graph is one in which each pair of vertices forms the endpoints of a path. Higher forms of connectivity include strong connectivity in directed graphs (for each two vertices there are paths from one to the other in both directions), k-vertex-connected graphs (removing fewer than k vertices cannot disconnect the graph), and k-edge-connected graphs (removing fewer than k edges cannot disconnect the graph).
connected component
Synonym for component.
contraction
Edge contraction is an elementary operation that removes an edge from a graph while merging the two vertices that it previously joined. Vertex contraction (sometimes called vertex identification) is similar, but the two vertices are not necessarily connected by an edge. Path contraction occurs upon the set of edges in a path that contract to form a single edge between the endpoints of the path. The inverse of edge contraction is vertex splitting.
converse
The converse graph is a synonym for the transpose graph; see transpose.
core
1. A k-core is the induced subgraph formed by removing all vertices of degree less than k, and all vertices whose degree becomes less than k after earlier removals. See degeneracy.
2. A core is a graph G such that every graph homomorphism from G to itself is an isomorphism.
3. The core of a graph G is a minimal graph H such that there exist homomorphisms from G to H and vice versa. H is unique up to isomorphism. It can be represented as an induced subgraph of G, and is a core in the sense that all of its self-homomorphisms are isomorphisms.
4. In the theory of graph matchings, the core of a graph is an aspect of its Dulmage–Mendelsohn decomposition, formed as the union of all maximum matchings.
cotree
1. The complement of a spanning tree.
2. A rooted tree structure used to describe a cograph, in which each cograph vertex is a leaf of the tree, each internal node of the tree is labeled with 0 or 1, and two cograph vertices are adjacent if and only if their lowest common ancestor in the tree is labeled 1.
cover
A vertex cover is a set of vertices incident to every edge in a graph. An edge cover is a set of edges incident to every vertex in a graph. A set of subgraphs of a graph covers that graph if its union – taken vertex-wise and edge-wise – is equal to the graph.
critical
A critical graph for a given property is a graph that has the property but such that every subgraph formed by deleting a single vertex does not have the property. For instance, a factor-critical graph is one that has a perfect matching (a 1-factor) for every vertex deletion, but (because it has an odd number of vertices) has no perfect matching itself. Compare hypo-, used for graphs which do not have a property but for which every one-vertex deletion does.
cube
cubic
1. Cube graph, the eight-vertex graph of the vertices and edges of a cube.
2. Hypercube graph, a higher-dimensional generalization of the cube graph.
3. Folded cube graph, formed from a hypercube by adding a matching connecting opposite vertices.
4. Halved cube graph, the half-square of a hypercube graph.
5. Partial cube, a distance-preserving subgraph of a hypercube.
6. The cube of a graph G is the graph power G3.
7. Cubic graph, another name for a 3-regular graph, one in which each vertex has three incident edges.
8. Cube-connected cycles, a cubic graph formed by replacing each vertex of a hypercube by a cycle.
cut
cut-set
A cut is a partition of the vertices of a graph into two subsets, or the set (also known as a cut-set) of edges that span such a partition, if that set is non-empty. An edge is said to span the partition if it has endpoints in both subsets. Thus, the removal of a cut-set from a connected graph disconnects it.
cut point
See articulation point.
cut space
The cut space of a graph is a GF(2)-vector space having the cut-sets of the graph as its elements and symmetric difference of sets as its vector addition operation.
cycle
1. A cycle may be either a kind of graph or a kind of walk. As a walk it may either be a closed walk (also called a tour) or, more usually, a closed walk without repeated vertices (and consequently without repeated edges), also called a simple cycle. In the latter case it is usually regarded as a graph, i.e., the choices of first vertex and direction are usually considered unimportant; that is, cyclic permutations and reversals of the walk produce the same cycle. Important special types of cycle include Hamiltonian cycles, induced cycles, peripheral cycles, and the shortest cycle, which defines the girth of a graph. A k-cycle is a cycle of length k; for instance a 2-cycle is a digon and a 3-cycle is a triangle. A cycle graph is a graph that is itself a simple cycle; a cycle graph with n vertices is commonly denoted Cn.
2. The cycle space is a vector space generated by the simple cycles in a graph, often over the field of 2 elements but also over other fields.
== D ==
DAG
Abbreviation for directed acyclic graph, a directed graph without any directed cycles.
deck
The multiset of graphs formed from a single graph G by deleting a single vertex in all possible ways, especially in the context of the reconstruction conjecture. An edge-deck is formed in the same way by deleting a single edge in all possible ways. The graphs in a deck are also called cards. See also critical (graphs that have a property that is not held by any card) and hypo- (graphs that do not have a property that is held by all cards).
decomposition
See tree decomposition, path decomposition, or branch-decomposition.
degenerate
degeneracy
A k-degenerate graph is an undirected graph in which every induced subgraph has minimum degree at most k. The degeneracy of a graph is the smallest k for which it is k-degenerate. A degeneracy ordering is an ordering of the vertices such that each vertex has minimum degree in the induced subgraph of it and all later vertices; in a degeneracy ordering of a k-degenerate graph, every vertex has at most k later neighbours. Degeneracy is also known as the k-core number, width, and linkage, and one plus the degeneracy is also called the coloring number or Szekeres–Wilf number. k-degenerate graphs have also been called k-inductive graphs.
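A degeneracy ordering can be produced by repeatedly removing a minimum-degree vertex; this Python sketch (an illustrative heap-based variant of the standard peeling algorithm, on an assumed adjacency-list dictionary) returns the ordering together with the degeneracy:

```python
import heapq

def degeneracy_ordering(adj):
    """Remove a minimum-degree vertex until none remain. The removal order
    is a degeneracy ordering; the largest degree seen at removal time is
    the degeneracy."""
    degree = {v: len(ns) for v, ns in adj.items()}
    removed = set()
    heap = [(d, v) for v, d in degree.items()]
    heapq.heapify(heap)
    order, k = [], 0
    while heap:
        d, v = heapq.heappop(heap)
        if v in removed or d != degree[v]:
            continue  # stale heap entry from an earlier degree update
        removed.add(v)
        order.append(v)
        k = max(k, d)
        for w in adj[v]:
            if w not in removed:
                degree[w] -= 1
                heapq.heappush(heap, (degree[w], w))
    return order, k

# A path (a tree) is 1-degenerate; a triangle is 2-degenerate.
tri = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
order, k = degeneracy_ordering(tri)
```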
degree
1. The degree of a vertex in a graph is its number of incident edges. The degree of a graph G (or its maximum degree) is the maximum of the degrees of its vertices, often denoted Δ(G); the minimum degree of G is the minimum of its vertex degrees, often denoted δ(G). Degree is sometimes called valency; the degree of v in G may be denoted dG(v), d(G), or deg(v). The total degree is the sum of the degrees of all vertices; by the handshaking lemma it is an even number. The degree sequence is the collection of degrees of all vertices, in sorted order from largest to smallest. In a directed graph, one may distinguish the in-degree (number of incoming edges) and out-degree (number of outgoing edges).
2. The homomorphism degree of a graph is a synonym for its Hadwiger number, the order of the largest clique minor.
Δ, δ
Δ(G) (using the Greek letter delta) is the maximum degree of a vertex in G, and δ(G) is the minimum degree; see degree.
density
In a graph of n nodes, the density is the ratio of the number of edges of the graph to the number of edges in a complete graph on n nodes. See dense graph.
depth
The depth of a node in a rooted tree is the number of edges in the path from the root to the node. For instance, the depth of the root is 0 and the depth of any one of its adjacent nodes is 1. It is the level of a node minus one. Note, however, that some authors instead use depth as a synonym for the level of a node.
diameter
The diameter of a connected graph is the maximum length of a shortest path. That is, it is the maximum of the distances between pairs of vertices in the graph. If the graph has weights on its edges, then its weighted diameter measures path length by the sum of the edge weights along a path, while the unweighted diameter measures path length by the number of edges.
For disconnected graphs, definitions vary: the diameter may be defined as infinite, or as the largest diameter of a connected component, or it may be undefined.
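For an unweighted connected graph, the diameter is the maximum eccentricity, computable by a breadth-first search from every vertex; a Python sketch (illustrative, assuming an adjacency-list dictionary of a connected graph):

```python
from collections import deque

def diameter(adj):
    """Maximum over all vertices of the BFS eccentricity."""
    def eccentricity(v):
        dist = {v: 0}
        queue = deque([v])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        return max(dist.values())
    return max(eccentricity(v) for v in adj)

# A path on 4 vertices has diameter 3; a 4-cycle has diameter 2.
p4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
```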
diamond
The diamond graph is an undirected graph with four vertices and five edges.
diconnected
Strongly connected. (Not to be confused with disconnected.)
digon
A digon is a simple cycle of length two in a directed graph or a multigraph. Digons cannot occur in simple undirected graphs as they require repeating the same edge twice, which violates the definition of simple.
digraph
Synonym for directed graph.
dipath
See directed path.
direct predecessor
The tail of a directed edge whose head is the given vertex.
direct successor
The head of a directed edge whose tail is the given vertex.
directed
A directed graph is one in which the edges have a distinguished direction, from one vertex to another. In a mixed graph, a directed edge is again one that has a distinguished direction; directed edges may also be called arcs or arrows.
directed arc
See arrow.
directed edge
See arrow.
directed line
See arrow.
directed path
A path in which all the edges have the same direction. If a directed path leads from vertex x to vertex y, x is a predecessor of y, y is a successor of x, and y is said to be reachable from x.
direction
1. The asymmetric relation between two adjacent vertices in a graph, represented as an arrow.
2. The asymmetric relation between two vertices in a directed path.
disconnect
Cause to be disconnected.
disconnected
Not connected.
disjoint
1. Two subgraphs are edge disjoint if they share no edges, and vertex disjoint if they share no vertices.
2. The disjoint union of two or more graphs is a graph whose vertex and edge sets are the disjoint unions of the corresponding sets.
dissociation number
A subset of vertices in a graph G is called a dissociation set if it induces a subgraph with maximum degree 1. The dissociation number of G is the size of its largest dissociation set.
distance
The distance between any two vertices in a graph is the length of the shortest path having the two vertices as its endpoints.
domatic
A domatic partition of a graph is a partition of the vertices into dominating sets. The domatic number of the graph is the maximum number of dominating sets in such a partition.
dominating
A dominating set is a set of vertices that includes or is adjacent to every vertex in the graph; not to be confused with a vertex cover, a vertex set that is incident to all edges in the graph. Important special types of dominating sets include independent dominating sets (dominating sets that are also independent sets) and connected dominating sets (dominating sets that induce connected subgraphs). A single-vertex dominating set may also be called a universal vertex. The domination number of a graph is the number of vertices in the smallest dominating set.
dual
A dual graph of a plane graph G is a graph that has a vertex for each face of G.
== E ==
E
E(G) is the edge set of G; see edge set.
ear
An ear of a graph is a path whose endpoints may coincide but in which otherwise there are no repetitions of vertices or edges.
ear decomposition
An ear decomposition is a partition of the edges of a graph into a sequence of ears, each of whose endpoints (after the first one) belong to a previous ear and each of whose interior points do not belong to any previous ear. An open ear is a simple path (an ear without repeated vertices), and an open ear decomposition is an ear decomposition in which each ear after the first is open; a graph has an open ear decomposition if and only if it is biconnected. An ear is odd if it has an odd number of edges, and an odd ear decomposition is an ear decomposition in which each ear is odd; a graph has an odd ear decomposition if and only if it is factor-critical.
eccentricity
The eccentricity of a vertex is the maximum distance from it to any other vertex.
edge
An edge is (together with vertices) one of the two basic units out of which graphs are constructed. Each edge has two (or in hypergraphs, more) vertices to which it is attached, called its endpoints. Edges may be directed or undirected; undirected edges are also called lines and directed edges are also called arcs or arrows. In an undirected simple graph, an edge may be represented as the set of its vertices, and in a directed simple graph it may be represented as an ordered pair of its vertices. An edge that connects vertices x and y is sometimes written xy.
edge cut
A set of edges whose removal disconnects the graph. A one-edge cut is called a bridge, isthmus, or cut edge.
edge set
The set of edges of a given graph G, sometimes denoted by E(G).
edgeless graph
The edgeless graph or totally disconnected graph on a given set of vertices is the graph that has no edges. It is sometimes called the empty graph, but this term can also refer to a graph with no vertices.
embedding
A graph embedding is a topological representation of a graph as a subset of a topological space with each vertex represented as a point, each edge represented as a curve having the endpoints of the edge as endpoints of the curve, and no other intersections between vertices or edges. A planar graph is a graph that has such an embedding onto the Euclidean plane, and a toroidal graph is a graph that has such an embedding onto a torus. The genus of a graph is the minimum possible genus of a two-dimensional manifold onto which it can be embedded.
empty graph
1. An edgeless graph on a nonempty set of vertices.
2. The order-zero graph, a graph with no vertices and no edges.
end
An end of an infinite graph is an equivalence class of rays, where two rays are equivalent if there is a third ray that includes infinitely many vertices from both of them.
endpoint
One of the two vertices joined by a given edge, or one of the first or last vertex of a walk, trail or path. The first endpoint of a given directed edge is called the tail and the second endpoint is called the head.
enumeration
Graph enumeration is the problem of counting the graphs in a given class of graphs, as a function of their order. More generally, enumeration problems can refer either to problems of counting a certain class of combinatorial objects (such as cliques, independent sets, colorings, or spanning trees), or of algorithmically listing all such objects.
Eulerian
An Eulerian path is a walk that uses every edge of a graph exactly once. An Eulerian circuit (also called an Eulerian cycle or an Euler tour) is a closed walk that uses every edge exactly once. An Eulerian graph is a graph that has an Eulerian circuit. For an undirected graph, this means that the graph is connected and every vertex has even degree. For a directed graph, this means that the graph is strongly connected and every vertex has in-degree equal to the out-degree. In some cases, the connectivity requirement is loosened, and a graph meeting only the degree requirements is called Eulerian.
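The characterization for undirected graphs (connected, all degrees even) is easy to test directly; a Python sketch (illustrative, assuming an adjacency-list dictionary of a simple undirected graph):

```python
def is_eulerian(adj):
    """True iff the graph is connected and every vertex has even degree,
    i.e. iff an undirected graph has an Eulerian circuit."""
    # Connectivity check by depth-first search from an arbitrary vertex.
    start = next(iter(adj))
    seen, stack = set(), [start]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(adj[u])
    if seen != set(adj):
        return False
    # Every vertex must have even degree.
    return all(len(neigh) % 2 == 0 for neigh in adj.values())

# A triangle is Eulerian (all degrees 2); the path 0-1-2 is not.
tri = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
```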
even
Divisible by two; for instance, an even cycle is a cycle whose length is even.
expander
An expander graph is a graph whose edge expansion, vertex expansion, or spectral expansion is bounded away from zero.
expansion
1. The edge expansion, isoperimetric number, or Cheeger constant of a graph G is the minimum ratio, over subsets S of at most half of the vertices of G, of the number of edges leaving S to the number of vertices in S.
2. The vertex expansion, vertex isoperimetric number, or magnification of a graph G is the minimum ratio, over subsets S of at most half of the vertices of G, of the number of vertices outside but adjacent to S to the number of vertices in S.
3. The unique neighbor expansion of a graph G is the minimum ratio, over subsets S of at most half of the vertices of G, of the number of vertices outside S but adjacent to a unique vertex in S to the number of vertices in S.
4. The spectral expansion of a d-regular graph G is the spectral gap between the largest eigenvalue d of its adjacency matrix and the second-largest eigenvalue.
5. A family of graphs has bounded expansion if all its r-shallow minors have a ratio of edges to vertices bounded by a function of r, and polynomial expansion if the function of r is a polynomial.
== F ==
face
In a plane graph or graph embedding, a connected component of the subset of the plane or surface of the embedding that is disjoint from the graph. For an embedding in the plane, all but one face will be bounded; the one exceptional face that extends to infinity is called the outer (or infinite) face.
factor
A factor of a graph is a spanning subgraph: a subgraph that includes all of the vertices of the graph. The term is primarily used in the context of regular subgraphs: a k-factor is a factor that is k-regular. In particular, a 1-factor is the same thing as a perfect matching. A factor-critical graph is a graph for which deleting any one vertex produces a graph with a 1-factor.
factorization
A graph factorization is a partition of the edges of the graph into factors; a k-factorization is a partition into k-factors. For instance a 1-factorization is an edge coloring with the additional property that each vertex is incident to an edge of each color.
family
A synonym for class.
finite
A graph is finite if it has a finite number of vertices and a finite number of edges. Many sources assume that all graphs are finite without explicitly saying so. A graph is locally finite if each vertex has a finite number of incident edges. An infinite graph is a graph that is not finite: it has infinitely many vertices, infinitely many edges, or both.
first order
The first order logic of graphs is a form of logic in which variables represent vertices of a graph, and there exists a binary predicate to test whether two vertices are adjacent. To be distinguished from second order logic, in which variables can also represent sets of vertices or edges.
-flap
For a set of vertices X, an X-flap is a connected component of the induced subgraph formed by deleting X. The flap terminology is commonly used in the context of havens, functions that map small sets of vertices to their flaps. See also the bridge of a cycle, which is either a flap of the cycle vertices or a chord of the cycle.
forbidden
A forbidden graph characterization is a characterization of a family of graphs as being the graphs that do not have certain other graphs as subgraphs, induced subgraphs, or minors. If H is one of the graphs that does not occur as a subgraph, induced subgraph, or minor, then H is said to be forbidden.
forcing graph
A forcing graph is a graph H such that evaluating the subgraph density of H in the graphs of a graph sequence G(n) is sufficient to test whether that sequence is quasi-random.
forest
A forest is an undirected graph without cycles (a disjoint union of unrooted trees), or a directed graph formed as a disjoint union of rooted trees.
free edge
An edge which is not in a matching.
free vertex
1. A vertex not on a matched edge in a matching.
2. A vertex which has not been matched.
Frucht
1. Robert Frucht
2. The Frucht graph, one of the two smallest cubic graphs with no nontrivial symmetries.
3. Frucht's theorem that every finite group is the group of symmetries of a finite graph.
full
Synonym for induced.
functional graph
A functional graph is a directed graph where every vertex has out-degree one. Equivalently, a functional graph is a maximal directed pseudoforest.
== G ==
G
A variable often used to denote a graph.
genus
The genus of a graph is the minimum genus of a surface onto which it can be embedded; see embedding.
geodesic
As a noun, a geodesic is a synonym for a shortest path. When used as an adjective, it means related to shortest paths or shortest path distances.
giant
In the theory of random graphs, a giant component is a connected component that contains a constant fraction of the vertices of the graph. In standard models of random graphs, there is typically at most one giant component.
girth
The girth of a graph is the length of its shortest cycle.
graph
The fundamental object of study in graph theory, a system of vertices connected in pairs by edges. Often subdivided into directed graphs or undirected graphs according to whether the edges have an orientation or not. Mixed graphs include both types of edges.
greedy
Produced by a greedy algorithm. For instance, a greedy coloring of a graph is a coloring produced by considering the vertices in some sequence and assigning each vertex the first available color.
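A minimal sketch of greedy coloring as just described; the adjacency-dict input and the default insertion-order sequence are assumptions, not part of the definition:

```python
def greedy_coloring(adj, order=None):
    """Greedy coloring: visit vertices in the given order and assign each
    the smallest nonnegative color not already used by a colored neighbour.
    adj: dict mapping vertex -> iterable of neighbours."""
    color = {}
    for v in (order if order is not None else adj):
        used = {color[w] for w in adj[v] if w in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color
```

On the path 0–1–2, visiting vertices in order gives colors 0, 1, 0 — two colors, which is optimal for this graph; other orderings on other graphs can be far from optimal.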
Grötzsch
1. Herbert Grötzsch
2. The Grötzsch graph, the smallest triangle-free graph requiring four colors in any proper coloring.
3. Grötzsch's theorem that triangle-free planar graphs can always be colored with at most three colors.
Grundy number
The Grundy number of a graph is the maximum number of colors that a greedy coloring can use, over all orderings of its vertices (i.e. with a worst-case ordering).
== H ==
H
A variable often used to denote a graph, especially when another graph has already been denoted by G.
H-coloring
An H-coloring of a graph G (where H is also a graph) is a homomorphism from G to H.
H-free
A graph is H-free if it does not have an induced subgraph isomorphic to H, that is, if H is a forbidden induced subgraph. The H-free graphs are the family of all graphs (or, often, all finite graphs) that are H-free. For instance the triangle-free graphs are the graphs that do not have a triangle graph as a subgraph. The property of being H-free is always hereditary. A graph is H-minor-free if it does not have a minor isomorphic to H.
Hadwiger
1. Hugo Hadwiger
2. The Hadwiger number of a graph is the order of the largest complete minor of the graph. It is also called the contraction clique number or the homomorphism degree.
3. The Hadwiger conjecture is the conjecture that the Hadwiger number is never less than the chromatic number.
Hamiltonian
A Hamiltonian path or Hamiltonian cycle is a simple spanning path or simple spanning cycle: it covers all of the vertices in the graph exactly once. A graph is Hamiltonian if it contains a Hamiltonian cycle, and traceable if it contains a Hamiltonian path.
haven
A k-haven is a function that maps every set X of fewer than k vertices to one of its flaps, often satisfying additional consistency conditions. The order of a haven is the number k. Havens can be used to characterize the treewidth of finite graphs and the ends and Hadwiger numbers of infinite graphs.
height
1. The height of a node in a rooted tree is the number of edges in a longest path, going away from the root (i.e. its nodes have strictly increasing depth), that starts at that node and ends at a leaf.
2. The height of a rooted tree is the height of its root. That is, the height of a tree is the number of edges in a longest possible path, going away from the root, that starts at the root and ends at a leaf.
3. The height of a directed acyclic graph is the maximum length of a directed path in this graph.
hereditary
A hereditary property of graphs is a property that is closed under induced subgraphs: if G has a hereditary property, then so must every induced subgraph of G. Compare monotone (closed under all subgraphs) or minor-closed (closed under minors).
hexagon
A simple cycle consisting of exactly six edges and six vertices.
hole
A hole is an induced cycle of length four or more. An odd hole is a hole of odd length. An anti-hole is an induced subgraph of order four or more whose complement is a cycle; equivalently, it is a hole in the complement graph. This terminology is mainly used in the context of perfect graphs, which are characterized by the strong perfect graph theorem as being the graphs with no odd holes or odd anti-holes. The hole-free graphs are the same as the chordal graphs.
homomorphic equivalence
Two graphs are homomorphically equivalent if there exist two homomorphisms, one from each graph to the other graph.
homomorphism
1. A graph homomorphism is a mapping from the vertex set of one graph to the vertex set of another graph that maps adjacent vertices to adjacent vertices. This type of mapping between graphs is the one that is most commonly used in category-theoretic approaches to graph theory. A proper graph coloring can equivalently be described as a homomorphism to a complete graph.
2. The homomorphism degree of a graph is a synonym for its Hadwiger number, the order of the largest clique minor.
hyperarc
A directed hyperedge having a source and target set.
hyperedge
An edge in a hypergraph, having any number of endpoints, in contrast to the requirement that edges of graphs have exactly two endpoints.
hypercube
A hypercube graph is a graph formed from the vertices and edges of a geometric hypercube.
hypergraph
A hypergraph is a generalization of a graph in which each edge (called a hyperedge in this context) may have more than two endpoints.
hypo-
This prefix, in combination with a graph property, indicates a graph that does not have the property but such that every subgraph formed by deleting a single vertex does have the property. For instance, a hypohamiltonian graph is one that does not have a Hamiltonian cycle, but for which every one-vertex deletion produces a Hamiltonian subgraph. Compare critical, used for graphs which have a property but for which every one-vertex deletion does not.
== I ==
in-degree
The number of incoming edges in a directed graph; see degree.
incidence
An incidence in a graph is a vertex-edge pair such that the vertex is an endpoint of the edge.
incidence matrix
The incidence matrix of a graph is a matrix whose rows are indexed by vertices of the graph, and whose columns are indexed by edges, with a one in the cell for row i and column j when vertex i and edge j are incident, and a zero otherwise.
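A sketch of this construction for an undirected simple graph; the vertex-list and edge-list inputs are assumptions:

```python
def incidence_matrix(vertices, edges):
    """Build the (unoriented) incidence matrix as a list of rows:
    rows indexed by vertices, columns indexed by edges, with a 1 when
    the vertex is an endpoint of the edge and a 0 otherwise."""
    index = {v: i for i, v in enumerate(vertices)}
    matrix = [[0] * len(edges) for _ in vertices]
    for j, (u, w) in enumerate(edges):
        matrix[index[u]][j] = 1
        matrix[index[w]][j] = 1
    return matrix
```

Each column of the result has exactly two 1s, one per endpoint of the corresponding edge.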
incident
The relation between an edge and one of its endpoints.
incomparability
An incomparability graph is the complement of a comparability graph; see comparability.
independent
1. An independent set is a set of vertices that induces an edgeless subgraph. It may also be called a stable set or a coclique. The independence number α(G) is the size of the maximum independent set.
2. In the graphic matroid of a graph, a subset of edges is independent if the corresponding subgraph is a tree or forest. In the bicircular matroid, a subset of edges is independent if the corresponding subgraph is a pseudoforest.
indifference
An indifference graph is another name for a proper interval graph or unit interval graph; see proper.
induced
An induced subgraph or full subgraph of a graph is a subgraph formed from a subset of vertices and from all of the edges that have both endpoints in the subset. Special cases include induced paths and induced cycles, induced subgraphs that are paths or cycles.
inductive
Synonym for degenerate.
infinite
An infinite graph is one that is not finite; see finite.
internal
A vertex of a path or tree is internal if it is not a leaf; that is, if its degree is greater than one. Two paths are internally disjoint (some people call it independent) if they do not have any vertex in common, except the first and last ones.
intersection
1. The intersection of two graphs is their largest common subgraph, the graph formed by the vertices and edges that belong to both graphs.
2. An intersection graph is a graph whose vertices correspond to sets or geometric objects, with an edge between two vertices exactly when the corresponding two sets or objects have a nonempty intersection. Several classes of graphs may be defined as the intersection graphs of certain types of objects, for instance chordal graphs (intersection graphs of subtrees of a tree), circle graphs (intersection graphs of chords of a circle), interval graphs (intersection graphs of intervals of a line), line graphs (intersection graphs of the edges of a graph), and clique graphs (intersection graphs of the maximal cliques of a graph). Every graph is an intersection graph for some family of sets, and this family is called an intersection representation of the graph. The intersection number of a graph G is the minimum total number of elements in any intersection representation of G.
interval
1. An interval graph is an intersection graph of intervals of a line.
2. The interval [u, v] in a graph is the union of all shortest paths from u to v.
3. Interval thickness is a synonym for pathwidth.
invariant
A synonym of property.
inverted arrow
An arrow with an opposite direction compared to another arrow. The arrow (y, x) is the inverted arrow of the arrow (x, y).
isolated
An isolated vertex of a graph is a vertex whose degree is zero, that is, a vertex with no incident edges.
isomorphic
Two graphs are isomorphic if there is an isomorphism between them; see isomorphism.
isomorphism
A graph isomorphism is a one-to-one incidence preserving correspondence of the vertices and edges of one graph to the vertices and edges of another graph. Two graphs related in this way are said to be isomorphic.
isoperimetric
See expansion.
isthmus
Synonym for bridge, in the sense of an edge whose removal disconnects the graph.
== J ==
join
The join of two graphs is formed from their disjoint union by adding an edge from each vertex of one graph to each vertex of the other. Equivalently, it is the complement of the disjoint union of the complements.
== K ==
K
For the notation for complete graphs, complete bipartite graphs, and complete multipartite graphs, see complete.
κ
κ(G) (using the Greek letter kappa) can refer to the vertex connectivity of G or to the clique number of G.
kernel
A kernel of a directed graph is a set of vertices which is both stable and absorbing.
knot
An inescapable section of a directed graph. See knot (mathematics) and knot theory.
== L ==
L
L(G) is the line graph of G; see line.
label
1. Information associated with a vertex or edge of a graph. A labeled graph is a graph whose vertices or edges have labels. The terms vertex-labeled or edge-labeled may be used to specify which objects of a graph have labels. Graph labeling refers to several different problems of assigning labels to graphs subject to certain constraints. See also graph coloring, in which the labels are interpreted as colors.
2. In the context of graph enumeration, the vertices of a graph are said to be labeled if they are all distinguishable from each other. For instance, this can be made to be true by fixing a one-to-one correspondence between the vertices and the integers from 1 to the order of the graph. When vertices are labeled, graphs that are isomorphic to each other (but with different vertex orderings) are counted as separate objects. In contrast, when the vertices are unlabeled, graphs that are isomorphic to each other are not counted separately.
leaf
1. A leaf vertex or pendant vertex (especially in a tree) is a vertex whose degree is 1. A leaf edge or pendant edge is the edge connecting a leaf vertex to its single neighbour.
2. A leaf power of a tree is a graph whose vertices are the leaves of the tree and whose edges connect leaves whose distance in the tree is at most a given threshold.
length
In an unweighted graph, the length of a cycle, path, or walk is the number of edges it uses. In a weighted graph, it may instead be the sum of the weights of the edges that it uses. Length is used to define the shortest path, girth (shortest cycle length), and longest path between two vertices in a graph.
level
1. The depth of a node plus 1, although some sources define it instead as a synonym of depth. A node's level in a rooted tree is the number of nodes in the path from the root to the node. For instance, the root has level 1 and each of its adjacent nodes has level 2.
2. The set of all nodes having the same level or depth.
line
A synonym for an undirected edge. The line graph L(G) of a graph G is a graph with a vertex for each edge of G and an edge for each pair of edges that share an endpoint in G.
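The line-graph construction can be sketched directly from an edge list; the tuple representation of edges is an assumption:

```python
from itertools import combinations

def line_graph(edges):
    """Construct L(G) from an edge list of G: the vertices of L(G) are
    the edges of G, with an edge in L(G) between each pair of G-edges
    that share an endpoint."""
    adj = {e: set() for e in edges}
    for e, f in combinations(edges, 2):
        if set(e) & set(f):  # the two edges share an endpoint in G
            adj[e].add(f)
            adj[f].add(e)
    return adj
```

The line graph of a path on four vertices is a path on its three edges.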
linkage
A synonym for degeneracy.
list
1. An adjacency list is a computer representation of graphs for use in graph algorithms.
2. List coloring is a variation of graph coloring in which each vertex has a list of available colors.
local
A local property of a graph is a property that is determined only by the neighbourhoods of the vertices in the graph. For instance, a graph is locally finite if all of its neighborhoods are finite.
loop
A loop or self-loop is an edge both of whose endpoints are the same vertex. It forms a cycle of length 1. These are not allowed in simple graphs.
== M ==
magnification
Synonym for vertex expansion.
matching
A matching is a set of edges in which no two share any vertex. A vertex is matched or saturated if it is one of the endpoints of an edge in the matching. A perfect matching or complete matching is a matching that matches every vertex; it may also be called a 1-factor, and can only exist when the order is even. A near-perfect matching, in a graph with odd order, is one that saturates all but one vertex. A maximum matching is a matching that uses as many edges as possible; the matching number α′(G) of a graph G is the number of edges in a maximum matching. A maximal matching is a matching to which no additional edges can be added.
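The distinction between maximal and maximum shows up even in the simplest algorithm: a single greedy pass always yields a maximal matching, though not necessarily a maximum one. A sketch, with the edge-list input as an assumption:

```python
def maximal_matching(edges):
    """Greedily build a maximal matching: scan the edges in order and
    keep each one whose endpoints are both still unsaturated. The result
    is maximal (no edge can be added) but not necessarily maximum."""
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching
```

On the path 0–1–2–3, scanning edges in one order yields the perfect matching {01, 23}, but scanning the middle edge first yields only {12}, which is maximal yet smaller than maximum.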
maximal
1. A subgraph of a given graph G is maximal for a particular property if it has that property but no supergraph of it that is also a subgraph of G has the same property. That is, it is a maximal element of the subgraphs with the property. For instance, a maximal clique is a complete subgraph that cannot be expanded to a larger complete subgraph. The word "maximal" should be distinguished from "maximum": a maximum subgraph is always maximal, but not necessarily vice versa.
2. A simple graph with a given property is maximal for that property if it is not possible to add any more edges to it (keeping the vertex set unchanged) while preserving both the simplicity of the graph and the property. Thus, for instance, a maximal planar graph is a planar graph such that adding any more edges to it would create a non-planar graph.
maximum
A subgraph of a given graph G is maximum for a particular property if it is the largest subgraph (by order or size) among all subgraphs with that property. For instance, a maximum clique is any of the largest cliques in a given graph.
median
1. A median of a triple of vertices, a vertex that belongs to shortest paths between all pairs of vertices, especially in median graphs and modular graphs.
2. A median graph is a graph in which every three vertices have a unique median.
Meyniel
1. Henri Meyniel, French graph theorist.
2. A Meyniel graph is a graph in which every odd cycle of length five or more has at least two chords.
minimal
A subgraph of a given graph is minimal for a particular property if it has that property but no proper subgraph of it also has the same property. That is, it is a minimal element of the subgraphs with the property.
minimum cut
A cut whose cut-set has minimum total weight, possibly restricted to cuts that separate a designated pair of vertices; they are characterized by the max-flow min-cut theorem.
minor
A graph H is a minor of another graph G if H can be obtained by deleting edges or vertices from G and contracting edges in G. It is a shallow minor if it can be formed as a minor in such a way that the subgraphs of G that were contracted to form vertices of H all have small diameter. H is a topological minor of G if G has a subgraph that is a subdivision of H. A graph is H-minor-free if it does not have H as a minor. A family of graphs is minor-closed if it is closed under minors; the Robertson–Seymour theorem characterizes minor-closed families as having a finite set of forbidden minors.
mixed
A mixed graph is a graph that may include both directed and undirected edges.
modular
1. Modular graph, a graph in which each triple of vertices has at least one median vertex that belongs to shortest paths between all pairs of the triple.
2. Modular decomposition, a decomposition of a graph into subgraphs within which all vertices connect to the rest of the graph in the same way.
3. Modularity of a graph clustering, the difference of the number of cross-cluster edges from its expected value.
monotone
A monotone property of graphs is a property that is closed under subgraphs: if G has a monotone property, then so must every subgraph of G. Compare hereditary (closed under induced subgraphs) or minor-closed (closed under minors).
Moore graph
A Moore graph is a regular graph for which the Moore bound is met exactly. The Moore bound is an inequality relating the degree, diameter, and order of a graph, proved by Edward F. Moore. Every Moore graph is a cage.
multigraph
A multigraph is a graph that allows multiple adjacencies (and, often, self-loops); a graph that is not required to be simple.
multiple adjacency
A multiple adjacency or multiple edge is a set of more than one edge that all have the same endpoints (in the same direction, in the case of directed graphs). A graph with multiple edges is often called a multigraph.
multiplicity
The multiplicity of an edge is the number of edges in a multiple adjacency. The multiplicity of a graph is the maximum multiplicity of any of its edges.
== N ==
N
1. For the notation for open and closed neighborhoods, see neighbourhood.
2. A lower-case n is often used (especially in computer science) to denote the number of vertices in a given graph.
neighbor
neighbour
A vertex that is adjacent to a given vertex.
neighborhood
neighbourhood
The open neighbourhood (or neighborhood) of a vertex v is the subgraph induced by all vertices that are adjacent to v. The closed neighbourhood is defined in the same way but also includes v itself. The open neighborhood of v in G may be denoted NG(v) or N(v), and the closed neighborhood may be denoted NG[v] or N[v]. When the openness or closedness of a neighborhood is not specified, it is assumed to be open.
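As a small sketch, the two vertex sets can be read off an adjacency dict; returning vertex sets rather than the induced subgraphs is a simplification:

```python
def open_neighborhood(adj, v):
    """N(v): the set of vertices adjacent to v."""
    return set(adj[v])

def closed_neighborhood(adj, v):
    """N[v]: the open neighborhood of v together with v itself."""
    return set(adj[v]) | {v}
```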
network
A graph in which attributes (e.g. names) are associated with the nodes and/or edges.
node
A synonym for vertex.
non-edge
A non-edge or anti-edge is a pair of vertices that are not adjacent; the edges of the complement graph.
null graph
See empty graph.
== O ==
odd
1. An odd cycle is a cycle whose length is odd. The odd girth of a non-bipartite graph is the length of its shortest odd cycle. An odd hole is a special case of an odd cycle: one that is induced and has four or more vertices.
2. An odd vertex is a vertex whose degree is odd. By the handshaking lemma every finite undirected graph has an even number of odd vertices.
3. An odd ear is a simple path or simple cycle with an odd number of edges, used in odd ear decompositions of factor-critical graphs; see ear.
4. An odd chord is an edge connecting two vertices that are an odd distance apart in an even cycle. Odd chords are used to define strongly chordal graphs.
5. An odd graph is a special case of a Kneser graph, having one vertex for each (n − 1)-element subset of a (2n − 1)-element set, and an edge connecting two subsets when their corresponding sets are disjoint.
open
1. See neighbourhood.
2. See walk.
order
1. The order of a graph G is the number of its vertices, |V(G)|. The variable n is often used for this quantity. See also size, the number of edges.
2. A type of logic of graphs; see first order and second order.
3. An order or ordering of a graph is an arrangement of its vertices into a sequence, especially in the context of topological ordering (an order of a directed acyclic graph in which every edge goes from an earlier vertex to a later vertex in the order) and degeneracy ordering (an order in which each vertex has minimum degree in the induced subgraph of it and all later vertices).
4. For the order of a haven or bramble, see haven and bramble.
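The topological ordering in sense 3 can be computed with Kahn's algorithm, repeatedly removing a vertex with no remaining incoming edges; a minimal sketch, assuming an adjacency-dict DAG:

```python
from collections import deque

def topological_order(adj):
    """Kahn's algorithm: repeatedly output a vertex of in-degree zero.
    adj: dict mapping vertex -> list of out-neighbours in a DAG.
    Returns an order in which every edge goes from earlier to later."""
    indeg = {v: 0 for v in adj}
    for v in adj:
        for w in adj[v]:
            indeg[w] += 1
    queue = deque(v for v in adj if indeg[v] == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adj[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    if len(order) != len(adj):
        raise ValueError("graph has a directed cycle")
    return order
```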
orientation
oriented
1. An orientation of an undirected graph is an assignment of directions to its edges, making it into a directed graph. An oriented graph is one that has been assigned an orientation. So, for instance, a polytree is an oriented tree; it differs from a directed tree (an arborescence) in that there is no requirement of consistency in the directions of its edges. Other special types of orientation include tournaments, orientations of complete graphs; strong orientations, orientations that are strongly connected; acyclic orientations, orientations that are acyclic; Eulerian orientations, orientations that are Eulerian; and transitive orientations, orientations that are transitively closed.
2. Oriented graph, used by some authors as a synonym for a directed graph.
out-degree
See degree.
outer
See face.
outerplanar
An outerplanar graph is a graph that can be embedded in the plane (without crossings) so that all vertices are on the outer face of the graph.
== P ==
parent
In a rooted tree, a parent of a vertex v is a neighbor of v along the incoming edge, the one that is directed toward the root.
path
Depending on the source, a path may be either any walk, or a walk with no repeated vertices (and consequently no repeated edges), also called a simple path. Important special cases include induced paths and shortest paths.
path decomposition
A path decomposition of a graph G is a tree decomposition whose underlying tree is a path. Its width is defined in the same way as for tree decompositions, as one less than the size of the largest bag. The minimum width of any path decomposition of G is the pathwidth of G.
pathwidth
The pathwidth of a graph G is the minimum width of a path decomposition of G. It may also be defined in terms of the clique number of an interval completion of G. It is always between the bandwidth and the treewidth of G. It is also known as interval thickness, vertex separation number, or node searching number.
pendant
See leaf.
perfect
1. A perfect graph is a graph in which, in every induced subgraph, the chromatic number equals the clique number. The perfect graph theorem and strong perfect graph theorem are two theorems about perfect graphs, the former proving that their complements are also perfect and the latter proving that they are exactly the graphs with no odd holes or anti-holes.
2. A perfectly orderable graph is a graph whose vertices can be ordered in such a way that a greedy coloring algorithm with this ordering optimally colors every induced subgraph. The perfectly orderable graphs are a subclass of the perfect graphs.
3. A perfect matching is a matching that saturates every vertex; see matching.
4. A perfect 1-factorization is a partition of the edges of a graph into perfect matchings so that each two matchings form a Hamiltonian cycle.
peripheral
1. A peripheral cycle or non-separating cycle is a cycle with at most one bridge.
2. A peripheral vertex is a vertex whose eccentricity is maximum. In a tree, this must be a leaf.
Petersen
1. Julius Petersen (1839–1910), Danish graph theorist.
2. The Petersen graph, a 10-vertex 15-edge graph frequently used as a counterexample.
3. Petersen's theorem that every bridgeless cubic graph has a perfect matching.
planar
A planar graph is a graph that has an embedding onto the Euclidean plane. A plane graph is a planar graph for which a particular embedding has already been fixed. A k-planar graph is one that can be drawn in the plane with at most k crossings per edge.
polytree
A polytree is an oriented tree; equivalently, a directed acyclic graph whose underlying undirected graph is a tree.
power
1. A graph power Gk of a graph G is another graph on the same vertex set; two vertices are adjacent in Gk when they are at distance at most k in G. A leaf power is a closely related concept, derived from a power of a tree by taking the subgraph induced by the tree's leaves.
2. Power graph analysis is a method for analyzing complex networks by identifying cliques, bicliques, and stars within the network.
3. Power laws in the degree distributions of scale-free networks are a phenomenon in which the number of vertices of a given degree is proportional to a power of the degree.
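The graph power in sense 1 can be sketched with a depth-limited breadth-first search from each vertex; the adjacency-dict input is an assumption:

```python
from collections import deque

def graph_power(adj, k):
    """G^k: same vertex set as G, with an edge between two vertices
    whenever their distance in G is at most k. Distances are found by
    a BFS from each vertex, cut off at depth k."""
    power = {v: set() for v in adj}
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            v = queue.popleft()
            if dist[v] == k:
                continue  # do not expand beyond distance k
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        power[s] = {v for v in dist if v != s}
    return power
```

On the path 0–1–2–3, the square G² joins each vertex to everything within two steps.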
predecessor
A vertex coming before a given vertex in a directed path.
prime
1. A prime graph is defined from an algebraic group, with a vertex for each prime number that divides the order of the group.
2. In the theory of modular decomposition, a prime graph is a graph without any nontrivial modules.
3. In the theory of splits, cuts whose cut-set is a complete bipartite graph, a prime graph is a graph without any splits. Every quotient graph of a maximal decomposition by splits is a prime graph, a star, or a complete graph.
4. A prime graph for the Cartesian product of graphs is a connected graph that is not itself a product. Every connected graph can be uniquely factored into a Cartesian product of prime graphs.
proper
1. A proper subgraph is a subgraph that removes at least one vertex or edge relative to the whole graph; for finite graphs, proper subgraphs are never isomorphic to the whole graph, but for infinite graphs they can be.
2. A proper coloring is an assignment of colors to the vertices of a graph (a coloring) that assigns different colors to the endpoints of each edge; see color.
3. A proper interval graph or proper circular arc graph is an intersection graph of a collection of intervals or circular arcs (respectively) such that no interval or arc contains another interval or arc. Proper interval graphs are also called unit interval graphs (because they can always be represented by unit intervals) or indifference graphs.
property
A graph property is something that can be true of some graphs and false of others, and that depends only on the graph structure and not on incidental information such as labels. Graph properties may equivalently be described in terms of classes of graphs (the graphs that have a given property). More generally, a graph property may also be a function of graphs that is again independent of incidental information, such as the size, order, or degree sequence of a graph; this more general definition of a property is also called an invariant of the graph.
pseudoforest
A pseudoforest is an undirected graph in which each connected component has at most one cycle, or a directed graph in which each vertex has at most one outgoing edge.
pseudograph
A pseudograph is a graph or multigraph that allows self-loops.
== Q ==
quasi-line graph
A quasi-line graph or locally co-bipartite graph is a graph in which the open neighborhood of every vertex can be partitioned into two cliques. These graphs are always claw-free and they include as a special case the line graphs. They are used in the structure theory of claw-free graphs.
quasi-random graph sequence
A quasi-random graph sequence is a sequence of graphs that shares several properties with a sequence of random graphs generated according to the Erdős–Rényi random graph model.
quiver
A quiver is a directed multigraph, as used in category theory. The edges of a quiver are called arrows.
== R ==
radius
The radius of a graph is the minimum eccentricity of any vertex.
Ramanujan
A Ramanujan graph is a graph whose spectral expansion is as large as possible. That is, it is a d-regular graph such that the second-largest eigenvalue of its adjacency matrix is at most 2√(d − 1).
ray
A ray, in an infinite graph, is an infinite simple path with exactly one endpoint. The ends of a graph are equivalence classes of rays.
reachability
The ability to get from one vertex to another within a graph.
reachable
Able to be reached. A vertex y is said to be reachable from a vertex x if there exists a path from x to y.
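Reachability is typically tested with a graph search. The following Python sketch is illustrative only (the adjacency-list dictionary representation is an assumption for the example); it uses breadth-first search:

```python
from collections import deque

def is_reachable(adj, x, y):
    """Return True if vertex y is reachable from vertex x.

    adj maps each vertex to an iterable of its out-neighbours
    (for an undirected graph, list each edge in both directions).
    """
    seen = {x}
    queue = deque([x])
    while queue:
        v = queue.popleft()
        if v == y:
            return True
        for w in adj.get(v, ()):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False
```

By convention every vertex is reachable from itself via the empty path, which the sketch above respects.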
recognizable
In the context of the reconstruction conjecture, a graph property is recognizable if its truth can be determined from the deck of the graph. Many graph properties are known to be recognizable. If the reconstruction conjecture is true, all graph properties are recognizable.
reconstruction
The reconstruction conjecture states that each undirected graph G is uniquely determined by its deck, a multiset of graphs formed by removing one vertex from G in all possible ways. In this context, reconstruction is the formation of a graph from its deck.
rectangle
A simple cycle consisting of exactly four edges and four vertices.
regular
A graph is d-regular when all of its vertices have degree d. A regular graph is a graph that is d-regular for some d.
regular tournament
A regular tournament is a tournament where in-degree equals out-degree for all vertices.
reverse
See transpose.
root
1. A designated vertex in a graph, particularly in directed trees and rooted graphs.
2. The inverse operation to a graph power: a kth root of a graph G is another graph on the same vertex set such that two vertices are adjacent in G if and only if they have distance at most k in the root.
== S ==
saturated
See matching.
searching number
Node searching number is a synonym for pathwidth.
second order
The second order logic of graphs is a form of logic in which variables may represent vertices, edges, sets of vertices, and (sometimes) sets of edges. This logic includes predicates for testing whether a vertex and edge are incident, as well as whether a vertex or edge belongs to a set. To be distinguished from first order logic, in which variables can only represent vertices.
self-loop
Synonym for loop.
separating vertex
See articulation point.
separation number
Vertex separation number is a synonym for pathwidth.
sibling
In a rooted tree, a sibling of a vertex v is a vertex which has the same parent vertex as v.
simple
1. A simple graph is a graph without loops and without multiple adjacencies. That is, each edge connects two distinct endpoints and no two edges have the same endpoints. A simple edge is an edge that is not part of a multiple adjacency. In many cases, graphs are assumed to be simple unless specified otherwise.
2. A simple path or a simple cycle is a path or cycle that has no repeated vertices and consequently no repeated edges.
sink
A sink, in a directed graph, is a vertex with no outgoing edges (out-degree equals 0).
size
The size of a graph G is the number of its edges, |E(G)|. The variable m is often used for this quantity. See also order, the number of vertices.
small-world network
A small-world network is a graph in which most nodes are not neighbors of one another, but most nodes can be reached from every other node by a small number of hops or steps. Specifically, a small-world network is defined to be a graph where the typical distance L between two randomly chosen nodes (the number of steps required) grows proportionally to the logarithm of the number of nodes N in the network.
snark
A snark is a simple, connected, bridgeless cubic graph with chromatic index equal to 4.
source
A source, in a directed graph, is a vertex with no incoming edges (in-degree equals 0).
space
In algebraic graph theory, several vector spaces over the binary field may be associated with a graph. Each has sets of edges or vertices for its vectors, and symmetric difference of sets as its vector sum operation. The edge space is the space of all sets of edges, and the vertex space is the space of all sets of vertices. The cut space is a subspace of the edge space that has the cut-sets of the graph as its elements. The cycle space has the Eulerian spanning subgraphs as its elements.
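As an illustrative sketch (not from the article), the vector sum in these spaces over the binary field is simply the symmetric difference of sets; for example, summing two triangles that share an edge yields another element of the cycle space:

```python
def edge_sum(a, b):
    """Vector sum in the edge space over GF(2): the symmetric
    difference of two edge sets."""
    return a ^ b

# Two triangles sharing the edge (1, 2):
t1 = {(1, 2), (2, 3), (1, 3)}
t2 = {(1, 2), (2, 4), (1, 4)}
# Their sum cancels the shared edge, leaving the 4-cycle
# 3-1-4-2-3, which is again an Eulerian (even-degree) subgraph.
```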
spanner
A spanner is a (usually sparse) graph whose shortest path distances approximate those in a dense graph or other metric space. Variations include geometric spanners, graphs whose vertices are points in a geometric space; tree spanners, spanning trees of a graph whose distances approximate the graph distances, and graph spanners, sparse subgraphs of a dense graph whose distances approximate the original graph's distances. A greedy spanner is a graph spanner constructed by a greedy algorithm, generally one that considers all edges from shortest to longest and keeps the ones that are needed to preserve the distance approximation.
spanning
A subgraph is spanning when it includes all of the vertices of the given graph.
Important cases include spanning trees, spanning subgraphs that are trees, and perfect matchings, spanning subgraphs that are matchings. A spanning subgraph may also be called a factor, especially (but not only) when it is regular.
sparse
A sparse graph is one that has few edges relative to its number of vertices. In some definitions the same property should also be true for all subgraphs of the given graph.
spectral
spectrum
The spectrum of a graph is the collection of eigenvalues of its adjacency matrix. Spectral graph theory is the branch of graph theory that uses spectra to analyze graphs. See also spectral expansion.
split
1. A split graph is a graph whose vertices can be partitioned into a clique and an independent set. A related class of graphs, the double split graphs, are used in the proof of the strong perfect graph theorem.
2. A split of an arbitrary graph is a partition of its vertices into two nonempty subsets, such that the edges spanning this cut form a complete bipartite subgraph. The splits of a graph can be represented by a tree structure called its split decomposition. A split is called a strong split when it is not crossed by any other split. A split is called nontrivial when both of its sides have more than one vertex. A graph is called prime when it has no nontrivial splits.
3. Vertex splitting (sometimes called vertex cleaving) is an elementary graph operation that splits a vertex into two, where these two new vertices are adjacent to the vertices that the original vertex was adjacent to. The inverse of vertex splitting is vertex contraction.
square
1. The square of a graph G is the graph power G2; in the other direction, G is the square root of G2. The half-square of a bipartite graph is the subgraph of its square induced by one side of the bipartition.
2. A squaregraph is a planar graph that can be drawn so that all bounded faces are 4-cycles and all vertices of degree ≤ 3 belong to the outer face.
3. A square grid graph is a lattice graph defined from points in the plane with integer coordinates connected by unit-length edges.
stable
A stable set is a synonym for an independent set.
star
A star is a tree with one internal vertex; equivalently, it is a complete bipartite graph K1,n for some n ≥ 2. The special case of a star with three leaves is called a claw.
strength
The strength of a graph is the minimum ratio of the number of edges removed from the graph to components created, over all possible removals; it is analogous to toughness, based on vertex removals.
strong
1. For strong connectivity and strongly connected components of directed graphs, see connected and component. A strong orientation is an orientation that is strongly connected; see orientation.
2. For the strong perfect graph theorem, see perfect.
3. A strongly regular graph is a regular graph in which every two adjacent vertices have the same number of shared neighbours and every two non-adjacent vertices have the same number of shared neighbours.
4. A strongly chordal graph is a chordal graph in which every even cycle of length six or more has an odd chord.
5. A strongly perfect graph is a graph in which every induced subgraph has an independent set meeting all maximal cliques. The Meyniel graphs are also called "very strongly perfect graphs" because in them, every vertex belongs to such an independent set.
subforest
A subgraph of a forest.
subgraph
A subgraph of a graph G is another graph formed from a subset of the vertices and edges of G. The vertex subset must include all endpoints of the edge subset, but may also include additional vertices. A spanning subgraph is one that includes all vertices of the graph; an induced subgraph is one that includes all the edges whose endpoints belong to the vertex subset.
subtree
A subtree is a connected subgraph of a tree. Sometimes, for rooted trees, subtrees are defined to be a special type of connected subgraph, formed by all vertices and edges reachable from a chosen vertex.
successor
A vertex coming after a given vertex in a directed path.
superconcentrator
A superconcentrator is a graph with two designated and equal-sized subsets of vertices I and O, such that for every two equal-sized subsets S of I and T of O there exists a family of disjoint paths connecting every vertex in S to a vertex in T. Some sources require in addition that a superconcentrator be a directed acyclic graph, with I as its sources and O as its sinks.
supergraph
A graph formed by adding vertices, edges, or both to a given graph. If H is a subgraph of G, then G is a supergraph of H.
== T ==
theta
1. A theta graph is the union of three internally disjoint (simple) paths that have the same two distinct end vertices.
2. The theta graph of a collection of points in the Euclidean plane is constructed by constructing a system of cones surrounding each point and adding one edge per cone, to the point whose projection onto a central ray of the cone is smallest.
3. The Lovász number or Lovász theta function of a graph is a graph invariant related to the clique number and chromatic number that can be computed in polynomial time by semidefinite programming.
Thomsen graph
The Thomsen graph is a name for the complete bipartite graph K3,3.
topological
1. A topological graph is a representation of the vertices and edges of a graph by points and curves in the plane (not necessarily avoiding crossings).
2. Topological graph theory is the study of graph embeddings.
3. Topological sorting is the algorithmic problem of arranging a directed acyclic graph into a topological order, a vertex sequence such that each edge goes from an earlier vertex to a later vertex in the sequence.
totally disconnected
Synonym for edgeless.
tour
A closed trail, a walk that starts and ends at the same vertex and has no repeated edges. Euler tours are tours that use all of the graph edges; see Eulerian.
tournament
A tournament is an orientation of a complete graph; that is, it is a directed graph such that every two vertices are connected by exactly one directed edge (going in only one of the two directions between the two vertices).
traceable
A traceable graph is a graph that contains a Hamiltonian path.
trail
A walk without repeated edges.
transitive
Having to do with the transitive property. The transitive closure of a given directed graph is a graph on the same vertex set that has an edge from one vertex to another whenever the original graph has a path connecting the same two vertices. A transitive reduction of a graph is a minimal graph having the same transitive closure; directed acyclic graphs have a unique transitive reduction. A transitive orientation is an orientation of a graph that is its own transitive closure; it exists only for comparability graphs.
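The transitive closure can be computed with a Floyd–Warshall-style sketch; the Python below is illustrative (the vertex-list and edge-set representation is an assumption for the example):

```python
def transitive_closure(vertices, edges):
    """Transitive closure of a directed graph.

    Returns the set of pairs (u, v) such that the graph has a
    nonempty path from u to v, by allowing paths through each
    intermediate vertex k in turn (Floyd-Warshall style).
    """
    reach = set(edges)
    for k in vertices:
        for u in vertices:
            if (u, k) in reach:
                for v in vertices:
                    if (k, v) in reach:
                        reach.add((u, v))
    return reach
```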
transpose
The transpose graph of a given directed graph is a graph on the same vertices, with each edge reversed in direction. It may also be called the converse or reverse of the graph.
tree
1. A tree is an undirected graph that is both connected and acyclic, or a directed graph in which there exists a unique walk from one vertex (the root of the tree) to all remaining vertices.
2. A k-tree is a graph formed by gluing (k + 1)-cliques together on shared k-cliques. A tree in the ordinary sense is a 1-tree according to this definition.
tree decomposition
A tree decomposition of a graph G is a tree whose nodes are labeled with sets of vertices of G; these sets are called bags. For each vertex v, the bags that contain v must induce a subtree of the tree, and for each edge uv there must exist a bag that contains both u and v. The width of a tree decomposition is one less than the maximum number of vertices in any of its bags; the treewidth of G is the minimum width of any tree decomposition of G.
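These conditions can be checked mechanically; the following Python sketch is illustrative (it assumes the bags and tree edges are given explicitly, and that tree_edges really does form a tree):

```python
from collections import deque

def is_tree_decomposition(graph_edges, bags, tree_edges):
    """Check the defining conditions of a tree decomposition.

    bags maps each tree node to a set of graph vertices; tree_edges
    are the edges of the tree on those nodes (assumed to be a tree).
    Returns the width if the conditions hold, otherwise None.
    """
    # Condition: every graph edge must lie inside some bag.
    for u, v in graph_edges:
        if not any(u in bag and v in bag for bag in bags.values()):
            return None
    # Condition: for each vertex, the bags containing it must
    # induce a connected subtree of the tree.
    for vertex in set().union(*bags.values()):
        nodes = {n for n, bag in bags.items() if vertex in bag}
        start = next(iter(nodes))
        seen, queue = {start}, deque([start])
        while queue:
            n = queue.popleft()
            for a, b in tree_edges:
                for other in (b if a == n else None,
                              a if b == n else None):
                    if other in nodes and other not in seen:
                        seen.add(other)
                        queue.append(other)
        if seen != nodes:
            return None
    # Width: one less than the size of the largest bag.
    return max(len(bag) for bag in bags.values()) - 1
```

For example, the path graph 1-2-3 has a tree decomposition with bags {1, 2} and {2, 3}, of width 1.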
treewidth
The treewidth of a graph G is the minimum width of a tree decomposition of G. It can also be defined in terms of the clique number of a chordal completion of G, the order of a haven of G, or the order of a bramble of G.
triangle
A cycle of length three in a graph. A triangle-free graph is an undirected graph that does not have any triangle subgraphs.
trivial
A trivial graph is a graph with 0 or 1 vertices. A graph with 0 vertices is also called a null graph.
Turán
1. Pál Turán
2. A Turán graph is a balanced complete multipartite graph.
3. Turán's theorem states that Turán graphs have the maximum number of edges among all clique-free graphs of a given order.
4. Turán's brick factory problem asks for the minimum number of crossings in a drawing of a complete bipartite graph.
twin
Two vertices u,v are true twins if they have the same closed neighborhood: NG[u] = NG[v] (this implies u and v are neighbors), and they are false twins if they have the same open neighborhood: NG(u) = NG(v) (this implies u and v are not neighbors).
== U ==
unary vertex
In a rooted tree, a unary vertex is a vertex which has exactly one child vertex.
undirected
An undirected graph is a graph in which the two endpoints of each edge are not distinguished from each other. See also directed and mixed. In a mixed graph, an undirected edge is again one in which the endpoints are not distinguished from each other.
uniform
A hypergraph is k-uniform when all its edges have k endpoints, and uniform when it is k-uniform for some k. For instance, ordinary graphs are the same as 2-uniform hypergraphs.
universal
1. A universal graph is a graph that contains as subgraphs all graphs in a given family of graphs, or all graphs of a given size or order within a given family of graphs.
2. A universal vertex (also called an apex or dominating vertex) is a vertex that is adjacent to every other vertex in the graph. For instance, wheel graphs and connected threshold graphs always have a universal vertex.
3. In the logic of graphs, a vertex that is universally quantified in a formula may be called a universal vertex for that formula.
unweighted graph
A graph whose vertices and edges have not been assigned weights; the opposite of a weighted graph.
utility graph
The utility graph is a name for the complete bipartite graph K3,3.
== V ==
V
See vertex set.
valency
Synonym for degree.
vertex
A vertex (plural vertices) is (together with edges) one of the two basic units out of which graphs are constructed. Vertices of graphs are often considered to be atomic objects, with no internal structure.
vertex cut
separating set
A set of vertices whose removal disconnects the graph. A one-vertex cut is called an articulation point or cut vertex.
vertex set
The set of vertices of a given graph G, sometimes denoted by V(G).
vertices
See vertex.
Vizing
1. Vadim G. Vizing
2. Vizing's theorem that the chromatic index is at most one more than the maximum degree.
3. Vizing's conjecture on the domination number of Cartesian products of graphs.
volume
The sum of the degrees of a set of vertices.
== W ==
W
The letter W is used in notation for wheel graphs and windmill graphs. The notation is not standardized.
Wagner
1. Klaus Wagner
2. The Wagner graph, an eight-vertex Möbius ladder.
3. Wagner's theorem characterizing planar graphs by their forbidden minors.
4. Wagner's theorem characterizing the K5-minor-free graphs.
walk
A walk is a finite or infinite sequence of edges which joins a sequence of vertices. Walks are also sometimes called chains. A walk is open if its first and last vertices are distinct, and closed if they are repeated.
weakly connected
A directed graph is called weakly connected if replacing all of its directed edges with undirected edges produces a connected (undirected) graph.
weight
A numerical value, assigned as a label to a vertex or edge of a graph. The weight of a subgraph is the sum of the weights of the vertices or edges within that subgraph.
weighted graph
A graph whose vertices or edges have been assigned weights. A vertex-weighted graph has weights on its vertices and an edge-weighted graph has weights on its edges.
well-colored
A well-colored graph is a graph all of whose greedy colorings use the same number of colors.
well-covered
A well-covered graph is a graph all of whose maximal independent sets are the same size.
wheel
A wheel graph is a graph formed by adding a universal vertex to a simple cycle.
width
1. A synonym for degeneracy.
2. For other graph invariants known as width, see bandwidth, branchwidth, clique-width, pathwidth, and treewidth.
3. The width of a tree decomposition or path decomposition is one less than the maximum size of one of its bags, and may be used to define treewidth and pathwidth.
4. The width of a directed acyclic graph is the maximum cardinality of an antichain.
windmill
A windmill graph is the union of a collection of cliques, all of the same order as each other, with one shared vertex belonging to all the cliques and all other vertices and edges distinct.
== See also ==
List of graph theory topics
Gallery of named graphs
Graph algorithms
Glossary of areas of mathematics
== References ==
In networking, a node (Latin: nodus, ‘knot’) is either a redistribution point or a communication endpoint within telecommunication networks.
A physical network node is an electronic device that is attached to a network, and is capable of creating, receiving, or transmitting information over a communication channel. In data communication, a physical network node may either be data communication equipment (such as a modem, hub, bridge or switch) or data terminal equipment (such as a digital telephone handset, a printer or a host computer).
A passive distribution point such as a distribution frame or patch panel is not a node.
== Computer networks ==
In data communication, a physical network node may either be data communication equipment (DCE) such as a modem, hub, bridge or switch; or data terminal equipment (DTE) such as a digital telephone handset, a printer or a host computer.
If a network is a local area network (LAN) or wide area network (WAN), every LAN or WAN node that participates on the data link layer must have a network address, typically one for each network interface controller it possesses. Examples are computers, a DSL modem with Ethernet interface and wireless access point. Equipment, such as an Ethernet hub or modem with serial interface, that operates only below the data link layer does not require a network address.
If the network in question is the Internet or an intranet, many physical network nodes are host computers, also known as Internet nodes, identified by an IP address, and all hosts are physical network nodes. However, some data-link-layer devices such as switches, bridges and wireless access points do not have an IP host address (except sometimes for administrative purposes), and are not considered to be Internet nodes or hosts, but are considered physical network nodes and LAN nodes.
== Telecommunications ==
In the fixed telephone network, a node may be a public or private telephone exchange, a remote concentrator or a computer providing some intelligent network service. In cellular communication, switching points and databases such as the base station controller, home location register, gateway GPRS Support Node (GGSN) and serving GPRS support node (SGSN) are examples of nodes. Cellular network base stations are not considered to be nodes in this context.
In cable television systems (CATV), this term has assumed a broader context and is generally associated with a fiber optic node. This can be defined as those homes or businesses within a specific geographic area that are served from a common fiber optic receiver. A fiber optic node is generally described in terms of the number of "homes passed" that are served by that specific fiber node.
== Distributed systems ==
In a distributed system network, the nodes are clients, servers or peers. A peer may sometimes act as a client and sometimes as a server. In a peer-to-peer or overlay network, nodes that actively route data for the other networked devices as well as themselves are called supernodes.
Distributed systems may sometimes use virtual nodes so that the system is not oblivious to the heterogeneity of the nodes. This issue is addressed with special algorithms such as consistent hashing, as is the case in Amazon's Dynamo.
Within a vast computer network, the individual computers on the periphery of the network, those that do not also connect other networks, and those that often connect transiently to one or more clouds are called end nodes. Typically, within the cloud computing construct, the individual user or customer computer that connects into one well-managed cloud is called an end node. Since these computers are a part of the network yet unmanaged by the cloud's host, they present significant risks to the entire cloud. This is called the end node problem. There are several means to remedy this problem but all require instilling trust in the end node computer.
== See also ==
End system
Middlebox
Networking hardware
Terminal (telecommunication)
== References ==
Network mapping is the study of the physical connectivity of networks, e.g. the Internet. Network mapping discovers the devices on the network and their connectivity. It is not to be confused with network discovery or network enumeration, which discovers devices on the network and their characteristics such as operating system, open ports, listening network services, etc. The field of automated network mapping has taken on greater importance as networks become more dynamic and complex in nature.
== Large-scale mapping project ==
Images of some of the first attempts at a large scale map of the internet were produced by the Internet Mapping Project and appeared in Wired magazine. The maps produced by this project were based on the layer 3 or IP level connectivity of the Internet (see OSI model), but there are different aspects of internet structure that have also been mapped.
More recent efforts to map the internet have been improved by more sophisticated methods, allowing them to make faster and more sensible maps. An example of such an effort is the OPTE project, which is attempting to develop a system capable of mapping the internet in a single day.
The "Map of the Internet Project" maps over 4 billion internet locations as cubes in 3D cyberspace. Users can add URLs as cubes and re-arrange objects on the map.
In early 2011 Canadian based ISP PEER 1 Hosting created their own Map of the Internet that depicts a graph of 19,869 autonomous system nodes connected by 44,344 connections. The sizing and layout of the autonomous systems was calculated based on their eigenvector centrality, which is a measure of how central to the network each autonomous system is.
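Eigenvector centrality itself can be approximated by power iteration. The Python sketch below is purely illustrative and is not the computation PEER 1 used; iterating on A + I rather than A keeps the same eigenvectors but avoids oscillation on bipartite graphs:

```python
def eigenvector_centrality(adj, iterations=100):
    """Approximate eigenvector centrality by power iteration.

    adj maps each node to a list of its neighbours (undirected).
    Each step replaces a node's score with its own score plus the
    sum of its neighbours' scores (i.e. multiplication by A + I),
    then rescales so the largest score is 1.
    """
    score = {n: 1.0 for n in adj}
    for _ in range(iterations):
        new = {n: score[n] + sum(score[m] for m in adj[n]) for n in adj}
        norm = max(new.values())
        score = {n: v / norm for n, v in new.items()}
    return score
```

On a star graph the hub ends up with the highest centrality, matching the intuition that nodes connected to well-connected nodes score highly.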
Graph theory can be used to better understand maps of the internet and to help choose between the many ways to visualize internet maps. Some projects have attempted to incorporate geographical data into their internet maps (for example, to draw locations of routers and nodes on a map of the world), but others are only concerned with representing the more abstract structures of the internet, such as the allocation, structure, and purpose of IP space.
== Enterprise network mapping ==
Many organizations create network maps of their network system. These maps can be made manually using simple tools such as Microsoft Visio, or the mapping process can be simplified by using tools that integrate auto network discovery with network mapping, one such example being the Fabric platform. Many of the vendors from the Notable network mappers list allow users to customize the maps, include their own labels, and add un-discoverable items and background images. Sophisticated mapping is used to help visualize the network and understand relationships between end devices and the transport layers that provide service. Mostly, network scanners detect the network with all its components and deliver a list which is used for creating charts and maps using network mapping software. Items such as bottlenecks and root cause analysis can be easier to spot using these tools.
There are three main techniques used for network mapping: SNMP based approaches, active probing and route analytics.
The SNMP based approach retrieves data from router and switch MIBs in order to build the network map. The active probing approach relies on a series of traceroute-like probe packets in order to build the network map. The route analytics approach relies on information from the routing protocols to build the network map. Each of the three approaches has advantages and disadvantages in the methods that it uses.
== Internet mapping techniques ==
There are two prominent techniques used today to create Internet maps. The first works on the data plane of the Internet and is called active probing. It is used to infer Internet topology based on router adjacencies. The second works on the control plane and infers autonomous system connectivity based on BGP data. A BGP speaker sends 19-byte keep-alive messages every 60 seconds to maintain the connection.
=== Active probing ===
This technique relies on traceroute-like probing on the IP address space. These probes report back IP forwarding paths to the destination address. By combining these paths one can infer router level topology for a given POP. Active probing is advantageous in that the paths returned by probes constitute the actual forwarding path that data takes through networks. It is also more likely to find peering links between ISPs. However, active probing requires massive amounts of probes to map the entire Internet. It is more likely to infer false topologies due to load balancing routers and routers with multiple IP address aliases. Decreased global support for enhanced probing mechanisms such as source-route probing, ICMP Echo Broadcasting, and IP Address Resolution techniques leaves this type of probing in the realm of network diagnosis.
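The path-merging step can be sketched as follows (illustrative Python only; in practice alias resolution is also needed, since, as noted above, one router may answer from several IP addresses):

```python
def merge_probe_paths(paths):
    """Merge traceroute-style forwarding paths into a set of
    router-level links; each path is a sequence of router IPs.

    Links are stored as frozensets so that the same link observed
    in either direction is counted once.
    """
    links = set()
    for path in paths:
        for a, b in zip(path, path[1:]):
            links.add(frozenset((a, b)))
    return links
```

Two probe paths that share a common prefix contribute the shared links only once, so the merged edge set grows sublinearly in the number of probes.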
=== AS PATH inference ===
This technique relies on various BGP collectors who collect routing updates and tables and provide this information publicly. Each BGP entry contains a Path Vector attribute called the AS Path. This path represents an autonomous system forwarding path from a given origin for a given set of prefixes. These paths can be used to infer AS-level connectivity and in turn be used to build AS topology graphs. However, these paths do not necessarily reflect how data is actually forwarded and adjacencies between AS nodes only represent a policy relationship between them. A single AS link can in reality be several router links. It is also much harder to infer peerings between two AS nodes as these peering relationships are only propagated to an ISP's customer networks. Nevertheless, support for this type of mapping is increasing as more and more ISP's offer to peer with public route collectors such as Route-Views and RIPE. New toolsets are emerging such as Cyclops and NetViews that take advantage of a new experimental BGP collector BGPMon. NetViews can not only build topology maps in seconds but visualize topology changes moments after occurring at the actual router. Hence, routing dynamics can be visualized in real time.
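Building an AS-level adjacency set from collected AS paths can be sketched as follows (illustrative Python; the AS numbers in the example are arbitrary values from the documentation range, and AS prepending, i.e. consecutive repeats of the same AS, is collapsed first):

```python
def as_adjacencies(as_paths):
    """Infer AS-level adjacencies from BGP AS_PATH attributes.

    Each path is a list of AS numbers; consecutive ASes in a path
    are taken to be adjacent. Repeated ASes (prepending) are
    collapsed so they do not produce self-links.
    """
    links = set()
    for path in as_paths:
        collapsed = [asn for i, asn in enumerate(path)
                     if i == 0 or asn != path[i - 1]]
        for a, b in zip(collapsed, collapsed[1:]):
            links.add(frozenset((a, b)))
    return links
```

As the article cautions, each inferred link represents a policy relationship and may correspond to several physical router links.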
In comparison to the tools built on BGPMon, another tool, netTransformer, is able to discover and generate BGP peering maps either through SNMP polling or by converting MRT dumps to the GraphML file format. netTransformer also allows users to perform network diffs between any two dumps and thus to reason about how the BGP peering has evolved over the years. WhatsUp Gold, an IT monitoring tool, tracks networks, servers, applications, storage devices and virtual devices, and incorporates infrastructure management and application performance management.
== See also ==
Comparison of network diagram software
DIMES
Idea networking
Network topology
Opte Project
Webometrics
== Notes ==
== External links ==
Cheleby Internet Topology Mapping System
Center for Applied Internet Data Analysis
NetViews: Multi-level Realtime Internet Mapping
Cyclops: An AS level Observatory
DIMES Research Project
Internet Mapping Research Project
The Opte Project
A campus network, campus area network, corporate area network or CAN is a computer network made up of an interconnection of local area networks (LANs) within a limited geographical area. The networking equipment (switches, routers) and transmission media (optical fiber, copper plant, Cat5 cabling, etc.) are almost entirely owned by the campus tenant or owner: an enterprise, university, government, etc. A campus area network is larger than a local area network but smaller than a metropolitan area network (MAN) or wide area network (WAN).
== University campuses ==
College or university campus area networks often interconnect a variety of buildings, including administrative buildings, academic buildings, laboratories, university libraries, or student centers, residence halls, gymnasiums, and other outlying structures, like conference centers, technology centers, and training institutes.
Early examples include the Stanford University Network at Stanford University, Project Athena at MIT, and the Andrew Project at Carnegie Mellon University.
== Corporate campuses ==
Much like a university campus network, a corporate campus network serves to connect buildings. Examples of such are the networks at Googleplex and Microsoft's campus. Campus networks are normally interconnected with high speed Ethernet links operating over optical fiber such as gigabit Ethernet and 10 Gigabit Ethernet.
== Area range ==
The range of a CAN is typically 1 km to 5 km. If two buildings have the same domain and are connected by a network, it is still considered a CAN. Because CANs mainly serve corporate and university campuses, the links are usually high-speed.
== References ==
A backbone or core network is a part of a computer network which interconnects networks, providing a path for the exchange of information between different LANs or subnetworks. A backbone can tie together diverse networks in the same building, in different buildings in a campus environment, or over wide areas. Normally, the backbone's capacity is greater than the networks connected to it.
A large corporation that has many locations may have a backbone network that ties all of the locations together, for example, if a server cluster needs to be accessed by different departments of a company that are located at different geographical locations. The pieces of the network connections (for example: Ethernet, wireless) that bring these departments together is often mentioned as network backbone. Network congestion is often taken into consideration while designing backbones.
One example of a backbone network is the Internet backbone.
== History ==
The theory, design principles, and first instantiation of the backbone network came from the telephone core network when traffic was purely voice. The core network was the central part of a telecommunications network that provided various services to customers who were connected by the access network. One of the main functions was to route telephone calls across the PSTN.
Typically the term referred to the high capacity communication facilities that connect primary nodes. A core network provided paths for the exchange of information between different sub-networks.
In the United States, local exchange core networks were linked by several competing interexchange networks; in the rest of the world, the core network has been extended to national boundaries.
Core networks usually had a mesh topology that provided any-to-any connections among devices on the network. Many main service providers would have their own core/backbone networks that are interconnected. Some large enterprises have their own core/backbone network, which are typically connected to the public networks.
Backbone networks create links that allow long-distance transmission, usually 10 to 100 miles and in certain cases up to 150 miles. This makes backbone networks essential for long-haul wireless solutions that provide internet service, especially to remote areas.
== Functions ==
Core networks typically provided the following functionality:
Aggregation: The highest level of aggregation in a service provider network. The next level in the hierarchy under the core nodes is the distribution networks and then the edge networks. Customer-premises equipment (CPE) do not normally connect to the core networks of a large service provider.
Authentication: The function that decides whether a user requesting a service from the telecom network is authorized to do so within this network.
Call control and switching: Call control or switching functionality decides the future course of a call based on processing of the call signaling. For example, based on the "called number", the switching functionality may route the call to a subscriber within this operator's network or, with number portability now prevalent, to another operator's network.
Charging: This functionality covers the collation and processing of charging data generated by various network nodes. Two common types of charging mechanisms found in present-day networks are prepaid charging and postpaid charging. See Automatic Message Accounting.
Service invocation: The core network performs the task of service invocation for its subscribers. Service invocation may happen based on some explicit action by the user (e.g. call transfer) or implicitly (e.g. call waiting). Note, however, that service execution may or may not be a core network functionality, as third-party networks and nodes may take part in the actual service execution.
Gateways: Gateways are present in the core network to access other networks. Gateway functionality depends on the type of network it interfaces with.
Physically, one or more of these logical functionalities may simultaneously exist in a given core network node.
Besides the above-mentioned functionalities, the following also formed part of a telecommunications core network:
O&M: Network operations center and operations support systems to configure and provision the core network nodes. The number of subscribers, peak hour call rate, nature of services, and geographical preferences are some of the factors that impact the configuration. Network statistics collection, alarm monitoring and logging of the actions of various network nodes also happen in the O&M center. These statistics, alarms and traces are important tools for a network operator to monitor network health and performance and to improve them.
Subscriber database: The core network also hosts the subscriber database (e.g. HLR in GSM systems). The subscriber database is accessed by core network nodes for functions like authentication, profiling, service invocation etc.
== Distributed backbone ==
A distributed backbone is a backbone network that consists of a number of connectivity devices connected to a series of central connectivity devices, such as hubs, switches, or routers, in a hierarchy. This kind of topology allows for simple expansion and limited capital outlay for growth, because more layers of devices can be added to existing layers. In a distributed backbone network, all of the devices that access the backbone share the transmission media, as every device connected to this network is sent all transmissions placed on that network.
Distributed backbones, in all practicality, are in use by all large-scale networks. Applications in enterprise-wide scenarios confined to a single building are also practical, as certain connectivity devices can be assigned to certain floors or departments. Each floor or department possesses a LAN and a wiring closet with that workgroup's main hub or router connected to a bus-style network using backbone cabling. Another advantage of using a distributed backbone is the ability for network administrators to segregate workgroups for ease of management.
There is the possibility of single points of failure at the connectivity devices high in the hierarchy. The distributed backbone must be designed to separate network traffic circulating on each individual LAN from the backbone network traffic by using access devices such as routers and bridges.
== Collapsed backbone ==
A conventional backbone network spans distance to provide interconnectivity across multiple locations. In most cases, the backbones are the links while the switching or routing functions are done by the equipment at each location. It is a distributed architecture.
A collapsed backbone (also known as inverted backbone or backbone-in-a-box) is a type of backbone network architecture. In the case of a collapsed backbone, each location features a link back to a central location to be connected to the collapsed backbone. The collapsed backbone can be a cluster or a single switch or router. The topology and architecture of a collapsed backbone is a star or a rooted tree.
The main advantages of the collapsed backbone approach are
ease of management since the backbone is in a single location and in a single box, and
since the backbone is essentially the backplane or internal switching matrix of the box, proprietary, high-performance technology can be used.
However, the drawback of the collapsed backbone is that if the box housing the backbone is down or there are reachability problems to the central location, the entire network will fail. These problems can be minimized by using redundant backbone boxes as well as secondary/backup backbone locations.
== Parallel backbone ==
There are a few different types of backbones that are used for an enterprise-wide network. Organizations looking for a very strong and reliable backbone should choose a parallel backbone. A parallel backbone is a variation of a collapsed backbone in that it also uses a central node (connection point), but it adds duplicate connections when there is more than one router or switch: each switch and router is connected by two cables. Having more than one cable connecting each device ensures network connectivity to any area of the enterprise-wide network.
Parallel backbones are more expensive than other backbone networks because they require more cabling than the other network topologies. Although cost can be a major factor when deciding which enterprise-wide topology to use, the added expense is offset by the increased performance and fault tolerance. Most organizations use parallel backbones when there are critical devices on the network. For example, if there is important data, such as payroll, that must be accessible at all times by multiple departments, the organization should implement a parallel backbone to make sure that connectivity is never lost.
== Serial backbone ==
A serial backbone is the simplest kind of backbone network. Serial backbones consist of two or more internetworking devices connected to each other by a single cable in a daisy-chain fashion. A daisy chain is a group of connectivity devices linked together in a serial fashion. Hubs are often connected in this way to extend a network. However, hubs are not the only devices that can be connected in a serial backbone. Gateways, routers, switches and bridges more commonly form part of the backbone. The serial backbone topology could be used for enterprise-wide networks, though it is rarely implemented for that purpose.
== See also ==
Backhaul
Core router
Network service provider
== References ==
== External links ==
IPv6 Backbone Network Topology
A star network is an implementation of a spoke–hub distribution paradigm in computer networks. In a star network, every host is connected to a central hub. In its simplest form, one central hub acts as a conduit to transmit messages. The star network is one of the most common computer network topologies.
== Network ==
The hub and hosts, and the transmission lines between them, form a graph with the topology of a star. Data on a star network passes through the hub before continuing to its destination. The hub manages and controls all functions of the network. It also acts as a repeater for the data flow. In a typical network the hub can be a network switch, Ethernet hub, wireless access point or a router.
The star topology reduces the impact of a transmission line failure by independently connecting each host to the hub. Each host may thus communicate with all others by transmitting to, and receiving from, the hub. The failure of a transmission line linking any host to the hub will result in the isolation of that host from all others, but the rest of the network will be unaffected.
The star configuration is commonly used with twisted pair cable and optical fiber cable. However, it can also be used with coaxial cable as in, for example, a video router.
== Advantages and disadvantages ==
=== Advantages ===
If one node or its connection fails, it does not affect the other nodes.
Devices can be added or removed without disturbing the network.
Works well under heavy load.
Appropriate for a large network.
=== Disadvantages ===
Expensive due to the number and length of cables needed to wire each host to the central hub.
The central hub is a single point of failure for the network.
Each device needs a separate cable connection to the central hub, leading to higher cable usage.
The number of devices is limited by the capacity of the central hub.
== References ==
In the mathematical field of graph theory, the distance between two vertices in a graph is the number of edges in a shortest path (also called a graph geodesic) connecting them. This is also known as the geodesic distance or shortest-path distance. Notice that there may be more than one shortest path between two vertices. If there is no path connecting the two vertices, i.e., if they belong to different connected components, then conventionally the distance is defined as infinite.
In the case of a directed graph the distance d(u,v) between two vertices u and v is defined as the length of a shortest directed path from u to v consisting of arcs, provided at least one such path exists. Notice that, in contrast with the case of undirected graphs, d(u,v) does not necessarily coincide with d(v,u)—so it is just a quasi-metric, and it might be the case that one is defined while the other is not.
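This definition translates directly into a breadth-first search; a minimal Python sketch (the adjacency-dict representation and the helper name are illustrative, not from any particular library):

```python
from collections import deque
import math

def distance(adj, u, v):
    """Number of edges on a shortest path from u to v, found by BFS.

    adj maps each vertex to an iterable of its (out-)neighbours, so the
    same function works for directed graphs; returns math.inf when no
    path exists, i.e. u and v lie in different components.
    """
    if u == v:
        return 0
    seen = {u}
    queue = deque([(u, 0)])
    while queue:
        node, d = queue.popleft()
        for nbr in adj[node]:
            if nbr == v:
                return d + 1
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, d + 1))
    return math.inf

# A directed path a -> b -> c plus an isolated vertex d:
adj = {"a": ["b"], "b": ["c"], "c": [], "d": []}
print(distance(adj, "a", "c"))  # 2
print(distance(adj, "c", "a"))  # inf: d(u,v) need not equal d(v,u)
```

The asymmetric result on the directed example illustrates the quasi-metric behaviour described above.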
== Related concepts ==
A metric space defined over a set of points in terms of distances in a graph defined over the set is called a graph metric.
The vertex set (of an undirected graph) and the distance function form a metric space, if and only if the graph is connected.
The eccentricity ϵ(v) of a vertex v is the greatest distance between v and any other vertex; in symbols, ϵ(v) = max_{u ∈ V} d(v, u).
It can be thought of as how far a node is from the node most distant from it in the graph.
The radius r of a graph is the minimum eccentricity of any vertex or, in symbols, r = min_{v ∈ V} ϵ(v) = min_{v ∈ V} max_{u ∈ V} d(v, u).
The diameter d of a graph is the maximum eccentricity of any vertex in the graph. That is, d is the greatest distance between any pair of vertices or, alternatively, d = max_{v ∈ V} ϵ(v) = max_{v ∈ V} max_{u ∈ V} d(v, u).
To find the diameter of a graph, first find the shortest path between each pair of vertices. The greatest length of any of these paths is the diameter of the graph.
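For an unweighted connected graph, eccentricity, radius and diameter follow directly from per-vertex BFS distances; a sketch in Python (function names are illustrative):

```python
from collections import deque

def bfs_distances(adj, source):
    """Distances from source to every reachable vertex (unweighted BFS)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

def eccentricity(adj, v):
    # greatest distance from v to any other vertex
    return max(bfs_distances(adj, v).values())

def radius(adj):
    # minimum eccentricity over all vertices
    return min(eccentricity(adj, v) for v in adj)

def diameter(adj):
    # maximum eccentricity over all vertices
    return max(eccentricity(adj, v) for v in adj)

# A path graph a - b - c - d (undirected, so each edge is listed both ways):
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(radius(adj), diameter(adj))  # 2 3
```

Running BFS from every vertex gives all-pairs distances in O(V·(V+E)) time, which is exactly the "shortest path between each pair" procedure described above.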
A central vertex in a graph of radius r is one whose eccentricity is r—that is, a vertex whose distance from its furthest vertex is equal to the radius, equivalently, a vertex v such that ϵ(v) = r.
A peripheral vertex in a graph of diameter d is one whose eccentricity is d—that is, a vertex whose distance from its furthest vertex is equal to the diameter. Formally, v is peripheral if ϵ(v) = d.
A pseudo-peripheral vertex v has the property that, for any vertex u, if u is as far away from v as possible, then v is as far away from u as possible. Formally, a vertex v is pseudo-peripheral if, for each vertex u with d(u,v) = ϵ(v), it holds that ϵ(u) = ϵ(v).
A level structure of the graph, given a starting vertex, is a partition of the graph's vertices into subsets by their distances from the starting vertex.
A geodetic graph is one for which every pair of vertices has a unique shortest path connecting them. For example, all trees are geodetic.
The weighted shortest-path distance generalises the geodesic distance to weighted graphs. In this case it is assumed that the weight of an edge represents its length or, for complex networks the cost of the interaction, and the weighted shortest-path distance dW(u, v) is the minimum sum of weights across all the paths connecting u and v. See the shortest path problem for more details and algorithms.
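For non-negative weights, the weighted shortest-path distance dW(u, v) is commonly computed with Dijkstra's algorithm; a compact Python sketch (the pair-list representation and names are illustrative):

```python
import heapq
import math

def weighted_distance(adj, u, v):
    """Minimum sum of edge weights from u to v (Dijkstra, weights >= 0).

    adj maps each vertex to a list of (neighbour, weight) pairs.
    """
    dist = {u: 0}
    heap = [(0, u)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == v:
            return d
        if d > dist.get(node, math.inf):
            continue  # stale heap entry, already settled with smaller d
        for nbr, w in adj[node]:
            nd = d + w
            if nd < dist.get(nbr, math.inf):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return math.inf

adj = {
    "u": [("a", 1), ("v", 10)],
    "a": [("v", 2)],
    "v": [],
}
print(weighted_distance(adj, "u", "v"))  # 3: the two-edge path beats the direct edge
```

The example shows why minimising the sum of weights differs from minimising the number of edges: the direct edge of weight 10 loses to a two-edge path of total weight 3.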
== Algorithm for finding pseudo-peripheral vertices ==
Often peripheral sparse matrix algorithms need a starting vertex with a high eccentricity. A peripheral vertex would be perfect, but is often hard to calculate. In most circumstances a pseudo-peripheral vertex can be used. A pseudo-peripheral vertex can easily be found with the following algorithm:
Choose a vertex u.
Among all the vertices that are as far from u as possible, let v be one with minimal degree.
If ϵ(v) > ϵ(u) then set u = v and repeat with step 2, else u is a pseudo-peripheral vertex.
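The three steps above translate almost directly into code; a Python sketch for connected unweighted graphs (helper names are illustrative):

```python
from collections import deque

def bfs_distances(adj, source):
    """Distances from source to every vertex (unweighted BFS)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

def pseudo_peripheral(adj, start=None):
    """Find a pseudo-peripheral vertex of a connected graph."""
    u = start if start is not None else next(iter(adj))  # step 1: any vertex
    ecc_u = max(bfs_distances(adj, u).values())
    while True:
        dist = bfs_distances(adj, u)
        farthest = [w for w, d in dist.items() if d == ecc_u]
        # step 2: among the farthest vertices, pick one of minimal degree
        v = min(farthest, key=lambda w: len(adj[w]))
        ecc_v = max(bfs_distances(adj, v).values())
        if ecc_v > ecc_u:
            u, ecc_u = v, ecc_v   # step 3: strictly larger eccentricity, repeat
        else:
            return u              # otherwise u is pseudo-peripheral

# Path graph a - b - c: starting from the centre b, the algorithm jumps
# to an endpoint, which is (pseudo-)peripheral.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(pseudo_peripheral(adj, "b"))
```

Each round costs a couple of BFS traversals, and the eccentricity strictly increases between rounds, so the loop terminates quickly in practice.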
== See also ==
== Notes ==
Network security is an umbrella term describing the security controls, policies, processes and practices adopted to prevent, detect and monitor unauthorized access, misuse, modification, or denial of a computer network and network-accessible resources. Network security involves the authorization of access to data in a network, which is controlled by the network administrator. Users choose or are assigned an ID and password or other authenticating information that allows them access to information and programs within their authority. Network security covers a variety of computer networks, both public and private, that are used in everyday jobs: conducting transactions and communications among businesses, government agencies and individuals. Networks can be private, such as within a company, or open to public access. Network security is used in organizations, enterprises, and other types of institutions. It does as its title explains: it secures the network, and protects and oversees the operations being done on it. The most common and simple way of protecting a network resource is by assigning it a unique name and a corresponding password.
== Network security concept ==
Network security starts with authentication, commonly with a username and a password. Since this requires just one detail authenticating the user name—i.e., the password—this is sometimes termed one-factor authentication. With two-factor authentication, something the user 'has' is also used (e.g., a security token or 'dongle', an ATM card, or a mobile phone); and with three-factor authentication, something the user 'is' is also used (e.g., a fingerprint or retinal scan).
Once authenticated, a firewall enforces access policies such as what services are allowed to be accessed by the network users. Though effective in preventing unauthorized access, this component may fail to check potentially harmful content such as computer worms or Trojans being transmitted over the network. Anti-virus software or an intrusion prevention system (IPS) help detect and inhibit the action of such malware. An anomaly-based intrusion detection system may also monitor the network, much as Wireshark monitors traffic, and its output may be logged for audit purposes and for later high-level analysis. Newer systems combining unsupervised machine learning with full network traffic analysis can detect active network attackers from malicious insiders or targeted external attackers that have compromised a user machine or account.
Communication between two hosts using a network may be encrypted to maintain security and privacy.
Honeypots, essentially decoy network-accessible resources, may be deployed in a network as surveillance and early-warning tools, as the honeypots are not normally accessed for legitimate purposes. Honeypots are placed at a point in the network where they appear vulnerable and undefended, but they are actually isolated and monitored. Techniques used by attackers that attempt to compromise these decoy resources are studied during and after an attack to keep an eye on new exploitation techniques. Such analysis may be used to further tighten security of the actual network being protected by the honeypot. A honeypot can also direct an attacker's attention away from legitimate servers. A honeypot encourages attackers to spend their time and energy on the decoy server while distracting their attention from the data on the real server. Similar to a honeypot, a honeynet is a network set up with intentional vulnerabilities. Its purpose is also to invite attacks so that the attacker's methods can be studied and that information can be used to increase network security. A honeynet typically contains one or more honeypots.
Previous research on network security was mostly about using tools to secure transactions and information flow, and how well users knew about and used these tools. However, more recently, the discussion has expanded to consider information security in the broader context of the digital economy and society. This indicates that it's not just about individual users and tools; it's also about the larger culture of information security in our digital world.
== Security management ==
Security management for networks is different for all kinds of situations. A home or small office may only require basic security, while large businesses may require high-maintenance and advanced software and hardware to prevent malicious attacks from hacking and spamming. To minimize susceptibility to malicious attacks from external threats to the network, corporations often employ tools which carry out network security verifications.
Andersson and Reimers (2014) found that employees often do not see themselves as part of their organization's information security effort and often take actions that impede organizational changes.
=== Types of attack ===
Networks are subject to attacks from malicious sources. Attacks fall into two categories: "passive", when a network intruder intercepts data traveling through the network, and "active", in which an intruder initiates commands to disrupt the network's normal operation or conducts reconnaissance and lateral movement to find and gain access to assets available via the network.
Types of attacks include:
Passive
Network
Wiretapping – Third-party monitoring of electronic communications
Passive Port scanner – Application designed to probe for open ports
Idle scan – a TCP port scan performed via a spoofed idle ("zombie") host
Encryption – Process of converting plaintext to ciphertext
Traffic analysis – Process of intercepting and examining messages
Active:
Network virus (router viruses)
Eavesdropping – Act of secretly listening to the private conversation of others
Data modification
== See also ==
== References ==
== Further reading ==
Case Study: Network Clarity Archived 2016-05-27 at the Wayback Machine, SC Magazine 2014
Cisco. (2011). What is network security?. Retrieved from cisco.com Archived 2016-04-14 at the Wayback Machine
Security of the Internet (The Froehlich/Kent Encyclopedia of Telecommunications vol. 15. Marcel Dekker, New York, 1997, pp. 231–255.)
Introduction to Network Security Archived 2014-12-02 at the Wayback Machine, Matt Curtin, 1997.
Security Monitoring with Cisco Security MARS, Gary Halleen/Greg Kellogg, Cisco Press, Jul. 6, 2007. ISBN 1587052709
Self-Defending Networks: The Next Generation of Network Security, Duane DeCapite, Cisco Press, Sep. 8, 2006. ISBN 1587052539
Security Threat Mitigation and Response: Understanding CS-MARS, Dale Tesch/Greg Abelar, Cisco Press, Sep. 26, 2006. ISBN 1587052601
Securing Your Business with Cisco ASA and PIX Firewalls, Greg Abelar, Cisco Press, May 27, 2005. ISBN 1587052148
Deploying Zone-Based Firewalls, Ivan Pepelnjak, Cisco Press, Oct. 5, 2006. ISBN 1587053101
Network Security: PRIVATE Communication in a PUBLIC World, Charlie Kaufman | Radia Perlman | Mike Speciner, Prentice-Hall, 2002. ISBN 9780137155880
Network Infrastructure Security, Angus Wong and Alan Yeung, Springer, 2009. ISBN 978-1-4419-0165-1
Graph drawing is an area of mathematics and computer science combining methods from geometric graph theory and information visualization to derive two-dimensional depictions of graphs arising from applications such as social network analysis, cartography, linguistics, and bioinformatics.
A drawing of a graph or network diagram is a pictorial representation of the vertices and edges of a graph. This drawing should not be confused with the graph itself: very different layouts can correspond to the same graph. In the abstract, all that matters is which pairs of vertices are connected by edges. In the concrete, however, the arrangement of these vertices and edges within a drawing affects its understandability, usability, fabrication cost, and aesthetics. The problem gets worse if the graph changes over time by adding and deleting edges (dynamic graph drawing) and the goal is to preserve the user's mental map.
== Graphical conventions ==
Graphs are frequently drawn as node–link diagrams in which the vertices are represented as disks, boxes, or textual labels and the edges are represented as line segments, polylines, or curves in the Euclidean plane. Node–link diagrams can be traced back to the 14th-16th century works of Pseudo-Lull which were published under the name of Ramon Llull, a 13th century polymath. Pseudo-Lull drew diagrams of this type for complete graphs in order to analyze all pairwise combinations among sets of metaphysical concepts.
In the case of directed graphs, arrowheads form a commonly used graphical convention to show their orientation; however, user studies have shown that other conventions such as tapering provide this information more effectively. Upward planar drawing uses the convention that every edge is oriented from a lower vertex to a higher vertex, making arrowheads unnecessary.
Alternative conventions to node–link diagrams include adjacency representations such as circle packings, in which vertices are represented by disjoint regions in the plane and edges are represented by adjacencies between regions; intersection representations in which vertices are represented by non-disjoint geometric objects and edges are represented by their intersections; visibility representations in which vertices are represented by regions in the plane and edges are represented by regions that have an unobstructed line of sight to each other; confluent drawings, in which edges are represented as smooth curves within mathematical train tracks; fabrics, in which nodes are represented as horizontal lines and edges as vertical lines; and visualizations of the adjacency matrix of the graph.
== Quality measures ==
Many different quality measures have been defined for graph drawings, in an attempt to find objective means of evaluating their aesthetics and usability. In addition to guiding the choice between different layout methods for the same graph, some layout methods attempt to directly optimize these measures.
The crossing number of a drawing is the number of pairs of edges that cross each other. If the graph is planar, then it is often convenient to draw it without any edge intersections; that is, in this case, a graph drawing represents a graph embedding. However, nonplanar graphs frequently arise in applications, so graph drawing algorithms must generally allow for edge crossings.
The area of a drawing is the size of its smallest bounding box, relative to the closest distance between any two vertices. Drawings with smaller area are generally preferable to those with larger area, because they allow the features of the drawing to be shown at greater size and therefore more legibly. The aspect ratio of the bounding box may also be important.
Symmetry display is the problem of finding symmetry groups within a given graph, and finding a drawing that displays as much of the symmetry as possible. Some layout methods automatically lead to symmetric drawings; alternatively, some drawing methods start by finding symmetries in the input graph and using them to construct a drawing.
It is important that edges have shapes that are as simple as possible, to make it easier for the eye to follow them. In polyline drawings, the complexity of an edge may be measured by its number of bends, and many methods aim to provide drawings with few total bends or few bends per edge. Similarly for spline curves the complexity of an edge may be measured by the number of control points on the edge.
Several commonly used quality measures concern lengths of edges: it is generally desirable to minimize the total length of the edges as well as the maximum length of any edge. Additionally, it may be preferable for the lengths of edges to be uniform rather than highly varied.
Angular resolution is a measure of the sharpest angles in a graph drawing. If a graph has vertices with high degree then it necessarily will have small angular resolution, but the angular resolution can be bounded below by a function of the degree.
The slope number of a graph is the minimum number of distinct edge slopes needed in a drawing with straight line segment edges (allowing crossings). Cubic graphs have slope number at most four, but graphs of degree five may have unbounded slope number; it remains open whether the slope number of degree-4 graphs is bounded.
== Layout methods ==
There are many different graph layout strategies:
In force-based layout systems, the graph drawing software modifies an initial vertex placement by continuously moving the vertices according to a system of forces based on physical metaphors related to systems of springs or molecular mechanics. Typically, these systems combine attractive forces between adjacent vertices with repulsive forces between all pairs of vertices, in order to seek a layout in which edge lengths are small while vertices are well-separated. These systems may perform gradient descent based minimization of an energy function, or they may translate the forces directly into velocities or accelerations for the moving vertices.
Spectral layout methods use as coordinates the eigenvectors of a matrix such as the Laplacian derived from the adjacency matrix of the graph.
Orthogonal layout methods, which allow the edges of the graph to run horizontally or vertically, parallel to the coordinate axes of the layout. These methods were originally designed for VLSI and PCB layout problems but they have also been adapted for graph drawing. They typically involve a multiphase approach in which an input graph is planarized by replacing crossing points by vertices, a topological embedding of the planarized graph is found, edge orientations are chosen to minimize bends, vertices are placed consistently with these orientations, and finally a layout compaction stage reduces the area of the drawing.
Tree layout algorithms show a rooted tree-like formation, suitable for trees. Often, in a technique called "balloon layout", the children of each node in the tree are drawn on a circle surrounding the node, with the radii of these circles diminishing at lower levels in the tree so that these circles do not overlap.
Layered graph drawing methods (often called Sugiyama-style drawing) are best suited for directed acyclic graphs or graphs that are nearly acyclic, such as the graphs of dependencies between modules or functions in a software system. In these methods, the nodes of the graph are arranged into horizontal layers using methods such as the Coffman–Graham algorithm, in such a way that most edges go downwards from one layer to the next; after this step, the nodes within each layer are arranged in order to minimize crossings.
Arc diagrams, a layout style dating back to the 1960s, place vertices on a line; edges may be drawn as semicircles above or below the line, or as smooth curves linked together from multiple semicircles.
Circular layout methods place the vertices of the graph on a circle, choosing carefully the ordering of the vertices around the circle to reduce crossings and place adjacent vertices close to each other. Edges may be drawn either as chords of the circle or as arcs inside or outside of the circle. In some cases, multiple circles may be used.
Dominance drawing places vertices in such a way that one vertex is upwards, rightwards, or both of another if and only if it is reachable from the other vertex. In this way, the layout style makes the reachability relation of the graph visually apparent.
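Of the strategies above, the force-based approach is the easiest to sketch: adjacent vertices attract as if joined by springs, all pairs repel, and positions are updated iteratively. A toy Python version in the spirit of Fruchterman–Reingold (the constants, iteration count and displacement cap are arbitrary choices, not from any particular system):

```python
import math
import random

def force_layout(adj, iters=200, k=1.0, step=0.05):
    """Tiny spring-embedder sketch: returns {vertex: (x, y)} positions."""
    random.seed(0)  # deterministic initial placement
    pos = {v: (random.random(), random.random()) for v in adj}
    for _ in range(iters):
        force = {v: [0.0, 0.0] for v in adj}
        for v in adj:
            for u in adj:
                if u == v:
                    continue
                dx = pos[v][0] - pos[u][0]
                dy = pos[v][1] - pos[u][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d            # repulsion between every pair
                if u in adj[v]:
                    f -= d * d / k       # spring attraction along edges
                force[v][0] += f * dx / d
                force[v][1] += f * dy / d
        for v in adj:
            fx, fy = force[v]
            mag = math.hypot(fx, fy) or 1e-9
            move = min(mag * step, 0.1)  # cap displacement per iteration
            pos[v] = (pos[v][0] + fx / mag * move,
                      pos[v][1] + fy / mag * move)
    return pos

# Triangle a-b-c with a pendant vertex d attached to c:
adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
layout = force_layout(adj)
```

In practice the quadratic pairwise repulsion loop is usually approximated (e.g. with a quadtree) so the method scales to large graphs.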
== Application-specific graph drawings ==
Graphs and graph drawings arising in other areas of application include
Sociograms, drawings of a social network, as often offered by social network analysis software
Hasse diagrams, a type of graph drawing specialized to partial orders
Dessin d'enfants, a type of graph drawing used in algebraic geometry
State diagrams, graphical representations of finite-state machines
Computer network diagrams, depictions of the nodes and connections in a computer network
Flowcharts and drakon-charts, drawings in which the nodes represent the steps of an algorithm and the edges represent control flow between steps.
Project network, graphical depiction of the chronological order in which activities of a project are to be completed.
Data-flow diagrams, drawings in which the nodes represent the components of an information system and the edges represent the movement of information from one component to another.
Bioinformatics including phylogenetic trees, protein–protein interaction networks, and metabolic pathways.
In addition, the placement and routing steps of electronic design automation (EDA) are similar in many ways to graph drawing, as is the problem of greedy embedding in distributed computing, and the graph drawing literature includes several results borrowed from the EDA literature. However, these problems also differ in several important ways: for instance, in EDA, area minimization and signal length are more important than aesthetics, and the routing problem in EDA may have more than two terminals per net while the analogous problem in graph drawing generally only involves pairs of vertices for each edge.
== Software ==
Software, systems, and providers of systems for drawing graphs include:
BioFabric, open-source software for visualizing large networks by drawing nodes as horizontal lines.
Cytoscape, open-source software for visualizing molecular interaction networks
Gephi, open-source network analysis and visualization software
graph-tool, a free/libre Python library for analysis of graphs
Graphviz, an open-source graph drawing system from AT&T Corporation
Linkurious, a commercial network analysis and visualization software for graph databases
Mathematica, a general-purpose computation tool that includes 2D and 3D graph visualization and graph analysis tools.
Microsoft Automatic Graph Layout, open-source .NET library (formerly called GLEE) for laying out graphs
NetworkX is a Python library for studying graphs and networks.
Tulip, an open-source data visualization tool
yEd, a graph editor with graph layout functionality
PGF/TikZ 3.0 with the graphdrawing package (requires LuaTeX).
LaNet-vi, an open-source large network visualization software
== See also ==
International Symposium on Graph Drawing
List of Unified Modeling Language tools
== References ==
=== Footnotes ===
=== General references ===
=== Specialized subtopics ===
== Further reading ==
== External links ==
GraphX library for .NET Archived 2018-01-26 at the Wayback Machine: open-source WPF library for graph calculation and visualization. Supports many layout and edge routing algorithms.
Graph drawing e-print archive: including information on papers from all Graph Drawing symposia.
A biological network is a method of representing systems as complex sets of binary interactions or relations between various biological entities. In general, networks or graphs are used to capture relationships between entities or objects. A typical graphing representation consists of a set of nodes connected by edges.
== History of networks ==
As early as 1736, Leonhard Euler analyzed a real-world problem known as the Seven Bridges of Königsberg, which established the foundations of graph theory. From the 1930s to the 1950s, the study of random graphs was developed. During the mid-1990s, it was discovered that many different types of "real" networks have structural properties quite different from random networks. In the late 2000s, scale-free and small-world networks began shaping the emergence of systems biology, network biology, and network medicine. In 2014, graph-theoretical methods were used by Frank Emmert-Streib to analyze biological networks.
In the 1980s, researchers started viewing DNA or genomes as the dynamic storage of a language system with precise computable finite states represented as a finite-state machine. Recent complex systems research has also suggested some far-reaching commonality in the organization of information in problems from biology, computer science, and physics.
== Networks in biology ==
=== Protein–protein interaction networks ===
Protein–protein interaction networks (PINs) represent the physical relationships among proteins present in a cell, where proteins are nodes and their interactions are undirected edges. Due to their undirected nature, it is difficult to identify all the proteins involved in an interaction. Protein–protein interactions (PPIs) are essential to cellular processes and are also the most intensely analyzed networks in biology. PPIs can be discovered by various experimental techniques, among which the yeast two-hybrid system is commonly used for the study of binary interactions. Recently, high-throughput studies using mass spectrometry have identified large sets of protein interactions.
Many international efforts have resulted in databases that catalog experimentally determined protein–protein interactions. Some of them are the Human Protein Reference Database, the Database of Interacting Proteins, the Molecular Interaction Database (MINT), IntAct, and BioGRID. At the same time, multiple computational approaches have been proposed to predict interactions. FunCoup and STRING are examples of such databases, in which protein–protein interactions inferred from multiple lines of evidence are gathered and made available for public use.
Recent studies have indicated the conservation of molecular networks through deep evolutionary time. Moreover, it has been discovered that proteins with high degrees of connectedness are more likely to be essential for survival than proteins with lesser degrees. This observation suggests that the overall composition of the network (not simply interactions between protein pairs) is vital for an organism's overall functioning.
=== Gene regulatory networks (DNA–protein interaction networks) ===
The genome encodes thousands of genes whose products (mRNAs, proteins) are crucial to the various processes of life, such as cell differentiation, cell survival, and metabolism. Genes produce such products through a process called transcription, which is regulated by a class of proteins called transcription factors. For instance, the human genome encodes almost 1,500 DNA-binding transcription factors that regulate the expression of more than 20,000 human genes. The complete set of gene products and the interactions among them constitutes a gene regulatory network (GRN). GRNs regulate the levels of gene products within the cell and, in turn, the cellular processes.
GRNs are represented with genes and transcription factors as nodes and the relationships between them as edges. These edges are directional, representing the regulatory relationship between the two ends of the edge. For example, a directed edge from gene A to gene B indicates that A regulates the expression of B. These directional edges can represent not only the activation of gene expression but also its inhibition.
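The signed, directed edges just described can be sketched in Python with a plain dictionary; the gene names and regulatory signs below are hypothetical:

```python
# Signed, directed edges: +1 = activation, -1 = inhibition (hypothetical genes).
grn = {
    ("geneA", "geneB"): +1,  # A activates B
    ("geneA", "geneC"): -1,  # A represses C
    ("geneC", "geneB"): -1,  # C represses B
}

def regulators_of(network, target):
    """Return {regulator: sign} for every edge pointing at `target`."""
    return {src: sign for (src, tgt), sign in network.items() if tgt == target}

print(regulators_of(grn, "geneB"))  # {'geneA': 1, 'geneC': -1}
```

Because the edges are directed, `regulators_of(grn, "geneA")` is empty even though geneA regulates two targets.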
GRNs are usually constructed by utilizing the gene regulation knowledge available from databases such as Reactome and KEGG. High-throughput measurement technologies, such as microarray, RNA-Seq, ChIP-chip, and ChIP-seq, enabled the accumulation of large-scale transcriptomics data, which could help in understanding the complex gene regulation patterns.
=== Gene co-expression networks (transcript–transcript association networks) ===
Gene co-expression networks can be perceived as association networks between variables that measure transcript abundances. These networks have been used to provide a systems-biology analysis of DNA microarray data, RNA-seq data, miRNA data, etc. Weighted gene co-expression network analysis is extensively used to identify co-expression modules and intramodular hub genes. Co-expression modules may correspond to cell types or pathways, while highly connected intramodular hubs can be interpreted as representatives of their respective modules.
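A toy Python sketch of the basic idea, using made-up expression profiles and a plain Pearson correlation with a hard threshold (real pipelines such as WGCNA use soft thresholding and further steps instead):

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def coexpression_edges(profiles, threshold=0.8):
    """Connect gene pairs whose |correlation| meets the threshold."""
    genes = sorted(profiles)
    return [(g, h) for i, g in enumerate(genes) for h in genes[i + 1:]
            if abs(pearson(profiles[g], profiles[h])) >= threshold]

# Toy expression profiles (columns = samples); values are made up.
profiles = {
    "g1": [1.0, 2.0, 3.0, 4.0],
    "g2": [2.1, 3.9, 6.2, 8.0],   # tracks g1 closely
    "g3": [5.0, 1.0, 4.0, 2.0],   # unrelated
}
print(coexpression_edges(profiles))  # [('g1', 'g2')]
```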
=== Metabolic networks ===
Cells break down the food and nutrients into small molecules necessary for cellular processing through a series of biochemical reactions. These biochemical reactions are catalyzed by enzymes. The complete set of all these biochemical reactions in all the pathways represents the metabolic network. Within the metabolic network, the small molecules take the roles of nodes, and they could be either carbohydrates, lipids, or amino acids. The reactions which convert these small molecules from one form to another are represented as edges. It is possible to use network analyses to infer how selection acts on metabolic pathways.
=== Signaling networks ===
Signals are transduced within cells or between cells, forming complex signaling networks that play a key role in tissue structure. For instance, the MAPK/ERK pathway is transduced from the cell surface to the cell nucleus by a series of protein–protein interactions, phosphorylation reactions, and other events. Signaling networks typically integrate protein–protein interaction networks, gene regulatory networks, and metabolic networks. Single-cell sequencing technologies allow the extraction of intercellular signaling; an example is NicheNet, which models intercellular communication by linking ligands to target genes.
=== Neuronal networks ===
The complex interactions in the brain make it a perfect candidate to apply network theory. Neurons in the brain are deeply connected with one another, and this results in complex networks being present in the structural and functional aspects of the brain. For instance, small-world network properties have been demonstrated in connections between cortical regions of the primate brain or during swallowing in humans. This suggests that cortical areas of the brain are not directly interacting with each other, but most areas can be reached from all others through only a few interactions.
=== Food webs ===
All organisms are connected through feeding interactions. If a species eats or is eaten by another species, they are connected in an intricate food web of predator and prey interactions. The stability of these interactions has been a long-standing question in ecology. That is to say if certain individuals are removed, what happens to the network (i.e., does it collapse or adapt)? Network analysis can be used to explore food web stability and determine if certain network properties result in more stable networks. Moreover, network analysis can be used to determine how selective removals of species will influence the food web as a whole. This is especially important considering the potential species loss due to global climate change.
=== Between-species interaction networks ===
In biology, pairwise interactions have historically been the focus of intense study. With the recent advances in network science, it has become possible to scale up pairwise interactions to include individuals of many species involved in many sets of interactions to understand the structure and function of larger ecological networks. The use of network analysis can allow for both the discovery and understanding of how these complex interactions link together within the system's network, a property that has previously been overlooked. This powerful tool allows for the study of various types of interactions (from competitive to cooperative) using the same general framework. For example, plant-pollinator interactions are mutually beneficial and often involve many different species of pollinators as well as many different species of plants. These interactions are critical to plant reproduction and thus the accumulation of resources at the base of the food chain for primary consumers, yet these interaction networks are threatened by anthropogenic change. The use of network analysis can illuminate how pollination networks work and may, in turn, inform conservation efforts. Within pollination networks, nestedness (i.e., specialists interact with a subset of species that generalists interact with), redundancy (i.e., most plants are pollinated by many pollinators), and modularity play a large role in network stability. These network properties may actually work to slow the spread of disturbance effects through the system and potentially buffer the pollination network from anthropogenic changes somewhat. More generally, the structure of species interactions within an ecological network can tell us something about the diversity, richness, and robustness of the network. Researchers can even compare current constructions of species interactions networks with historical reconstructions of ancient networks to determine how networks have changed over time. 
Much research into these complex species interactions networks is highly concerned with understanding what factors (e.g., species richness, connectance, nature of the physical environment) lead to network stability.
=== Within-species interaction networks ===
Network analysis provides the ability to quantify associations between individuals, which makes it possible to infer details about the network as a whole at the species and/or population level. One of the most attractive features of the network paradigm is that it provides a single conceptual framework in which the social organization of animals at all levels (individual, dyad, group, population) and for all types of interaction (aggressive, cooperative, sexual, etc.) can be studied.
Researchers interested in ethology across many taxa, from insects to primates, are starting to incorporate network analysis into their research. Researchers interested in social insects (e.g., ants and bees) have used network analyses to better understand the division of labor, task allocation, and foraging optimization within colonies. Other researchers are interested in how specific network properties at the group and/or population level can explain individual-level behaviors. Studies have demonstrated how animal social network structure can be influenced by factors ranging from characteristics of the environment to characteristics of the individual, such as developmental experience and personality. At the level of the individual, the patterning of social connections can be an important determinant of fitness, predicting both survival and reproductive success. At the population level, network structure can influence the patterning of ecological and evolutionary processes, such as frequency-dependent selection and disease and information transmission. For instance, a study on wire-tailed manakins (a small passerine bird) found that a male's degree in the network largely predicted the ability of the male to rise in the social hierarchy (i.e., eventually obtain a territory and matings). In bottlenose dolphin groups, an individual's degree and betweenness centrality values may predict whether or not that individual will exhibit certain behaviors, like the use of side flopping and upside-down lobtailing to lead group traveling efforts; individuals with high betweenness values are more connected and can obtain more information, and thus are better suited to lead group travel and therefore tend to exhibit these signaling behaviors more than other group members.
Social network analysis can also be used to describe the social organization within a species more generally, which frequently reveals important proximate mechanisms promoting the use of certain behavioral strategies. These descriptions are frequently linked to ecological properties (e.g., resource distribution). For example, network analyses revealed subtle differences in the group dynamics of two related equid fission-fusion species, Grevy's zebra and onagers, living in variable environments; Grevy's zebras show distinct preferences in their association choices when they fission into smaller groups, whereas onagers do not. Similarly, researchers interested in primates have also utilized network analyses to compare social organizations across the diverse primate order, suggesting that using network measures (such as centrality, assortativity, modularity, and betweenness) may be useful in terms of explaining the types of social behaviors we see within certain groups and not others.
Finally, social network analysis can also reveal important fluctuations in animal behaviors across changing environments. For example, network analyses in female chacma baboons (Papio hamadryas ursinus) revealed important dynamic changes across seasons that were previously unknown; instead of creating stable, long-lasting social bonds with friends, baboons were found to exhibit more variable relationships which were dependent on short-term contingencies related to group-level dynamics as well as environmental variability. Changes in an individual's social network environment can also influence characteristics such as 'personality': for example, social spiders that huddle with bolder neighbors tend to also increase in boldness. This is a very small set of broad examples of how researchers can use network analysis to study animal behavior. Research in this area is currently expanding very rapidly, especially since animal-borne tags and computer vision can now be used to automate the collection of social associations. Social network analysis is a valuable tool for studying animal behavior across all animal species and has the potential to uncover new information about animal behavior and social ecology that was previously poorly understood.
=== DNA-DNA chromatin networks ===
Within a nucleus, DNA is constantly in motion. Perpetual actions such as genome folding and Cohesin extrusion morph the shape of a genome in real time. The spatial location of strands of chromatin relative to each other plays an important role in the activation or suppression of certain genes. DNA-DNA Chromatin Networks help biologists to understand these interactions by analyzing commonalities amongst different loci. The size of a network can vary significantly, from a few genes to several thousand and thus network analysis can provide vital support in understanding relationships among different areas of the genome. As an example, analysis of spatially similar loci within the organization in a nucleus with Genome Architecture Mapping (GAM) can be used to construct a network of loci with edges representing highly linked genomic regions.
The first graphic showcases the Hist1 region of the mm9 mouse genome, with each node representing a genomic locus. Two nodes are connected by an edge if their linkage disequilibrium is greater than the average across all 81 genomic windows. The locations of the nodes within the graphic are randomly selected, and this method of choosing edges yields a simple but rudimentary graphical representation of the relationships in the dataset. The second visual presents the same information; however, the network starts with every locus placed sequentially in a ring configuration. It then pulls nodes together, using linear interpolation, in proportion to their linkage. The figure illustrates strong connections between the central genomic windows as well as between the loci at the beginning and end of the Hist1 region.
== Modelling biological networks ==
=== Introduction ===
To draw useful information from a biological network, an understanding of the statistical and mathematical techniques for identifying relationships within a network is vital. Procedures for identifying association, communities, and centrality within the nodes of a biological network can provide insight into the relationships of whatever the nodes represent, whether genes, species, etc. The formulation of these methods transcends disciplines and relies heavily on graph theory, computer science, and bioinformatics.
=== Association ===
There are many different ways to measure the relationships of nodes when analyzing a network. In many cases, the measure used to find nodes that share similarity is specific to the application at hand. One type of measure that biologists utilize is correlation, which centers on the linear relationship between two variables. As an example, weighted gene co-expression network analysis uses Pearson correlation to analyze linked gene expression and understand genetics at a systems level. Another measure of association is linkage disequilibrium, which describes the non-random association of genetic sequences among loci on a given chromosome. An example of its use is in detecting relationships in GAM data across genomic intervals based upon detection frequencies of certain loci.
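The classical two-locus linkage disequilibrium measure D = p(AB) − p(A)p(B) can be sketched in Python on made-up haplotype data:

```python
def linkage_disequilibrium(haplotypes, allele_a, allele_b):
    """D = p(AB) - p(A)p(B) over a list of two-locus haplotypes."""
    n = len(haplotypes)
    p_ab = sum(1 for h in haplotypes if h == (allele_a, allele_b)) / n
    p_a = sum(1 for a, _ in haplotypes if a == allele_a) / n
    p_b = sum(1 for _, b in haplotypes if b == allele_b) / n
    return p_ab - p_a * p_b

# Toy sample: alleles A/a at locus 1, B/b at locus 2 (made-up counts).
sample = [("A", "B")] * 5 + [("a", "b")] * 5
print(linkage_disequilibrium(sample, "A", "B"))  # 0.25: strong association
```

With all four haplotype combinations equally frequent, D is 0, i.e., the loci assort independently.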
=== Centrality ===
The concept of centrality can be extremely useful when analyzing biological network structures. There are many different methods to measure centrality, such as betweenness, degree, eigenvector, and Katz centrality. Each centrality technique can provide different insights into the nodes of a particular network; however, they all share the aim of measuring the prominence of a node in a network.
In 2005, researchers at Harvard Medical School applied centrality measures to the yeast protein interaction network. They found that proteins exhibiting high betweenness centrality were more essential and that betweenness correlated closely with a given protein's evolutionary age.
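Two of the simpler measures, degree and eigenvector centrality, can be sketched in pure Python (the eigenvector version uses power iteration on A + I; the identity shift is a standard trick to avoid oscillation on bipartite graphs such as the toy star below):

```python
def degree_centrality(adj):
    """Fraction of the other n-1 nodes each node is connected to."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def eigenvector_centrality(adj, iters=100):
    """Power iteration on A + I, normalized so the max score is 1."""
    x = {v: 1.0 for v in adj}
    for _ in range(iters):
        # The x[v] term is the +I shift that guarantees convergence here.
        nxt = {v: x[v] + sum(x[u] for u in adj[v]) for v in adj}
        norm = max(nxt.values())
        x = {v: s / norm for v, s in nxt.items()}
    return x

# Toy star graph: the hub dominates both measures.
adj = {"hub": {"a", "b", "c"}, "a": {"hub"}, "b": {"hub"}, "c": {"hub"}}
print(degree_centrality(adj)["hub"])  # 1.0
ec = eigenvector_centrality(adj)
print(max(ec, key=ec.get))  # hub
```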
=== Communities ===
Studying the community structure of a network by subdividing groups of nodes into like regions can be an integral tool for bioinformatics when exploring data as a network. A food web of the Secaucus High School Marsh exemplifies the benefits of grouping, as the relationships between nodes are far easier to analyze with well-chosen communities. While the first graphic is hard to visualize, the second provides a better view of the pockets of highly connected feeding relationships that would be expected in a food web. Community detection remains an active area of research: scientists and graph theorists continuously discover new ways of partitioning networks, and thus a plethora of different algorithms exist for creating these relationships. Like many other tools that biologists use to understand data with network models, every algorithm can provide its own unique insight, and algorithms may vary widely in aspects such as accuracy or the time complexity of the calculation.
In 2002, a food web of marine mammals in the Chesapeake Bay was divided into communities by biologists using a community detection algorithm based on neighbors of nodes with high degree centrality. The resulting communities displayed a sizable split in pelagic and benthic organisms. Two very common community detection algorithms for biological networks are the Louvain Method and Leiden Algorithm.
The Louvain method is a greedy algorithm that attempts to maximize modularity, a measure that favors dense edges within communities and sparse edges between them. The algorithm starts with each node in its own community; nodes are then iteratively moved to whichever neighboring community yields the highest gain in modularity. Once no modularity increase can be achieved by moving nodes, a new weighted network is constructed whose nodes are the communities, with edges representing between-community edges and self-loops representing edges within a community. The process repeats until no further increase in modularity occurs. While the Louvain method provides good community detection, it has limitations. By focusing mainly on maximizing a given modularity measure, it may craft badly connected communities, degrading the model for the sake of the modularity metric; however, the Louvain method performs fairly well and is easy to understand compared to many other community detection algorithms.
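The modularity that the Louvain method maximizes can be computed directly from Newman's definition, Q = (1/2m) Σᵢⱼ [Aᵢⱼ − kᵢkⱼ/2m] δ(cᵢ, cⱼ). A small Python sketch on a toy graph of two triangles joined by a bridge shows that the "natural" partition scores higher than an arbitrary one:

```python
def modularity(adj, communities):
    """Newman modularity Q for an undirected graph (adjacency sets)
    and a node -> community label mapping."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2  # number of edges
    q = 0.0
    for i in adj:          # sum over all ordered node pairs
        for j in adj:
            if communities[i] != communities[j]:
                continue
            a_ij = 1.0 if j in adj[i] else 0.0
            q += a_ij - len(adj[i]) * len(adj[j]) / (2 * m)
    return q / (2 * m)

# Two triangles {1,2,3} and {4,5,6} joined by the bridge edge 3-4.
adj = {
    1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4},
    4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5},
}
good = {1: "L", 2: "L", 3: "L", 4: "R", 5: "R", 6: "R"}
bad = {1: "L", 2: "R", 3: "L", 4: "R", 5: "L", 6: "R"}
print(modularity(adj, good) > modularity(adj, bad))  # True
```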
The Leiden algorithm expands on the Louvain method with a number of improvements. When moving nodes between communities, only neighborhoods that have recently changed are considered, which greatly speeds up the merging of nodes. Another optimization is a refinement phase in which the algorithm randomly chooses, for a node, a community to merge with from a set of candidate communities, rather than always taking the single most modularity-increasing choice as the Louvain method does; this allows a broader exploration of possible community assignments. The Leiden algorithm, while more complex than the Louvain method, is faster, yields better community detection, and can be a valuable tool for identifying groups.
=== Network motifs ===
Network motifs, statistically significant recurring interaction patterns within a network, are a commonly used tool for understanding biological networks. A major use case is in neurophysiology, where motif analysis is commonly used to understand interconnected neuronal functions at varying scales. As an example, in 2017, researchers at Beijing Normal University analyzed highly represented two- and three-node network motifs in directed functional brain networks constructed from resting-state fMRI data to study the basic mechanisms of information flow in the brain.
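As a small illustration, the following Python sketch counts one classic three-node motif, the feed-forward loop (x→y, y→z, x→z), in a toy directed network; a real motif analysis would additionally compare the count against randomized networks to establish statistical significance:

```python
from itertools import permutations

def feed_forward_loops(edges):
    """Count feed-forward loops (x->y, y->z, and x->z) in a directed graph."""
    edge_set = set(edges)
    nodes = {n for e in edges for n in e}
    return sum(1 for x, y, z in permutations(nodes, 3)
               if (x, y) in edge_set and (y, z) in edge_set
               and (x, z) in edge_set)

# Toy directed network: one feed-forward loop a->b->c shortcut by a->c.
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
print(feed_forward_loops(edges))  # 1
```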
== See also ==
List of omics topics in biology
Biological network inference
Biostatistics
Computational biology
Systems biology
Weighted correlation network analysis
Interactome
Network medicine
Ecological network
== References ==
== Books ==
== External links ==
Networkbio.org, The site of the series of Integrative Network Biology (INB) meetings. For the 2012 event also see www.networkbio.org
Network Tools and Applications in Biology (NETTAB) workshops.
Networkbiology.org, NetworkBiology wiki site.
Linding Lab, Technical University of Denmark (DTU) studies Network Biology and Cellular Information Processing, and is also organizing the Denmark branch of the annual "Integrative Network Biology and Cancer" symposium series.
NRNB.org, The National Resource for Network Biology. A US National Institute of Health (NIH) Biomedical Technology Research Center dedicated to the study of biological networks.
Network Repository The first interactive data and network data repository with real-time visual analytics.
Animal Social Network Repository (ASNR) The first multi-taxonomic repository that collates 790 social networks from more than 45 species, including those of mammals, reptiles, fish, birds, and insects.
This article presents a timeline of events in the history of computer operating systems from 1951 to the current day. For a narrative explaining the overall developments, see the History of operating systems.
== 1950s ==
1951
LEO I 'Lyons Electronic Office' was the commercial development of the EDSAC computing platform, supported by British firm J. Lyons and Co.
1953
DYSEAC - an early machine capable of distributed computing
1955
General Motors Operating System made for IBM 701
MIT's Tape Director operating system made for UNIVAC 1103
1956
GM-NAA I/O for IBM 704, based on General Motors Operating System
1957
Atlas Supervisor (Manchester University) (Atlas computer project start)
BESYS (Bell Labs), for IBM 704, later IBM 7090 and IBM 7094
1958
University of Michigan Executive System (UMES), for IBM 704, 709, and 7090
1959
SHARE Operating System (SOS), based on GM-NAA I/O
== 1960s ==
1960
IBSYS (IBM for its 7090 and 7094)
1961
CTSS demonstration (MIT's Compatible Time-Sharing System for the IBM 7094)
MCP (Burroughs Master Control Program) for B5000
1962
Atlas Supervisor (Manchester University) (Atlas computer commissioned)
BBN Time-Sharing System
GCOS (GE's General Comprehensive Operating System, originally GECOS, General Electric Comprehensive Operating Supervisor)
1963
ADMIRAL
AN/FSQ-32, another early time-sharing system begun
CTSS becomes operational (MIT's Compatible Time-Sharing System for the IBM 7094)
JOSS, an interactive time-shared system that did not distinguish between operating system and language
Titan Supervisor, early time-sharing system begun
1964
Berkeley Timesharing System (for Scientific Data Systems' SDS 940)
Chippewa Operating System (for CDC 6600 supercomputer)
Dartmouth Time-Sharing System (Dartmouth College's DTSS for GE computers)
EXEC 8 (UNIVAC)
KDF9 Timesharing Director (English Electric) – an early, fully hardware secured, fully pre-emptive process switching, multi-programming operating system for KDF9 (originally announced in 1960)
OS/360 (IBM's primary OS for its S/360 series) (announced)
PDP-6 Monitor (DEC) descendant renamed TOPS-10 in 1970
SCOPE (CDC 3000 series)
1965
BOS/360 (IBM's Basic Operating System)
DECsys
TOS/360 (IBM's Tape Operating System)
Livermore Time Sharing System (LTSS)
Multics (MIT, GE, Bell Labs for the GE-645) (announced)
Pick operating system
SIPROS 66 (Simultaneous Processing Operating System)
THE multiprogramming system (Technische Hogeschool Eindhoven) development
TSOS (later VMOS) (RCA)
1966
DOS/360 (IBM's Disk Operating System)
GEORGE 1 & 2 for ICT 1900 series
Mod 1
Mod 2
Mod 8
MS/8 (Richard F. Lary's DEC PDP-8 system)
MSOS (Mass Storage Operating System)
OS/360 (IBM's primary OS for its S/360 series) PCP and MFT (shipped)
RAX
Remote Users of Shared Hardware (RUSH), a time-sharing system developed by Allen-Babcock for the IBM 360/50
SODA for Elwro's Odra 1204
Universal Time-Sharing System (XDS Sigma series)
1967
CP-40, predecessor to CP-67 on modified IBM System/360 Model 40
CP-67 (IBM, also known as CP/CMS)
Conversational Programming System (CPS), an IBM time-sharing system under OS/360
Michigan Terminal System (MTS) (time-sharing system for the IBM S/360-67 and successors)
ITS (MIT's Incompatible Timesharing System for the DEC PDP-6 and PDP-10)
OS/360 MVT
ORVYL (Stanford University's time-sharing system for the IBM S/360-67)
TSS/360 (IBM's Time-sharing System for the S/360-67, never officially released, canceled in 1969 and again in 1971)
WAITS (SAIL, Stanford Artificial Intelligence Laboratory, time-sharing system for DEC PDP-6 and PDP-10, later TOPS-10)
1968
Airline Control Program (ACP) (IBM)
B1 (NCR Century series)
CALL/360, an IBM time-sharing system for System/360
Real-Time Executive (RTE) – Hewlett-Packard
THE multiprogramming system (Eindhoven University of Technology) publication
TSS/8 (DEC for the PDP-8)
VP/CSS
1969
B2 (NCR Century series)
B3 (NCR Century series)
GEORGE 3 For ICL 1900 series
Multics (MIT, GE, Bell Labs for the GE-645 and later the Honeywell 6180) (opened for paying customers in October)
RC 4000 Multiprogramming System (RC)
TENEX (Bolt, Beranek and Newman for DEC systems, later TOPS-20)
Unics (later Unix) (AT&T, initially on DEC computers)
Xerox Operating System
== 1970s ==
1970
DOS-11 (PDP-11)
1971
EMAS
Kronos
RSTS-11 2A-19 (First released version; PDP-11)
RSX-15
OS/8
1972
B4 (NCR Century series)
COS-300
Data General RDOS
Edos
MUSIC/SP
OS/4
OS 1100
Operating System/Virtual Storage 1 (OS/VS1)
Operating System/Virtual Storage 2 R1 (OS/VS2 SVS)
PRIMOS (early versions written in FORTRAN IV, which lacked pointers; later versions, from around version 18, written in a PL/I dialect called PL/P)
Virtual Machine/Basic System Extensions Program Product (BSEPP or VM/SE)
Virtual Machine/System Extensions Program Product (SEPP or VM/BSE)
Virtual Machine Facility/370 (VM/370), sometimes known as VM/CMS
1973
Эльбрус-1 (Elbrus-1) – Soviet computer – created using high-level language Эль-76 (AL-76/ALGOL 68)
Alto OS
CP-V (Control Program V)
RSX-11D
RT-11
VME – implementation language S3 (ALGOL 68)
1974
ACOS-2 (NEC)
ACOS-4
ACOS-6
CP/M
DOS-11 V09-20C (Last stable release, June 1974)
Hydra – capability-based, multiprocessing OS kernel
MONECS
Multi-Programming Executive (MPE) – Hewlett-Packard
Operating System/Virtual Storage 2 R2 (MVS)
OS/7
OS/16
OS/32
Sintran III
1975
BS2000 V2.0 (First released version)
COS-350
ISIS
NOS (Control Data Corporation)
OS/3 (Univac)
VS/9 (formerly RCA's TSOS, later named VMOS)
Version 6 Unix
XVM/DOS
XVM/RSX
1976
Cambridge CAP computer – all operating system procedures written in ALGOL 68C, with some closely associated protected procedures in BCPL
Cray Operating System
DX10
FLEX
TOPS-20
TX990/TXDS
Tandem Nonstop OS v1
Thoth
1977
1BSD
AMOS
KERNAL
OASIS operating system
OS68
OS4000
RMX-80
System 88 (Exec)
System Support Program (IBM System/34 and System/36)
TRSDOS
Virtual Memory System (VMS) V1.0 (Initial commercial release, October 25)
VRX (Virtual Resource eXecutive)
VS Virtual Memory Operating System
1978
2BSD
Apple DOS
Control Program Facility (IBM System/38)
Cray Time Sharing System (CTSS)
DPCX (IBM)
DPPX (IBM)
HDOS
KSOS – secure OS design from Ford Aerospace
KVM/370 – security retro-fit of IBM VM/370
Lisp machine (CADR)
MVS/System Extensions (MVS/SE)
OS4 (Naked Mini 4)
PTDOS
TRIPOS
UCSD p-System (First released version)
1979
Atari DOS
3BSD
CP-6
Idris
MP/M
MVS/System Extensions R2 (MVS/SE2)
NLTSS
POS
Sinclair BASIC
Transaction Processing Facility (TPF) (IBM)
UCLA Secure UNIX – an early secure UNIX OS based on security kernel
UNIX/32V
DOS/VSE
Version 7 Unix
== 1980s ==
1980
86-DOS
AOS/VS (Data General)
Business Operating System
CTOS
MVS/System Product (MVS/SP) V1
NewDos/80
OS-9
RMX-86
RS-DOS
SOS
Virtual Machine/System Product (VM/SP)
Xenix
1981
Acorn MOS
Aegis SR1 (First Apollo/DOMAIN systems shipped on March 27)
CP/M-86
iMAX – OS for Intel's iAPX 432 capability machine
MCS (Multi-user Control System)
MS-DOS
PC DOS
Pilot (Xerox Star operating system)
UNOS
UTS
V
VERSAdos
VRTX
VSOS (Virtual Storage Operating System)
Xinu first release
1982
Commodore DOS
LDOS (By Logical Systems, Inc. – for the Radio Shack TRS-80 Models I, II & III)
PCOS (Olivetti M20)
pSOS
QNX
Stratus VOS
Sun UNIX (later SunOS) 0.7
Ultrix
Unix System III
VAXELN
1983
Coherent
DNIX
EOS
GNU (project start)
Lisa Office System 7/7
LOCUS – UNIX compatible, high reliability, distributed OS
MVS/System Product V2 (MVS/Extended Architecture, MVS/XA)
Novell NetWare (S-Net)
PERPOS
ProDOS
RTU (Real-Time Unix)
STOP – TCSEC A1-class, secure OS for SCOMP hardware
SunOS 1.0
VSE/System Package (VSE/SP) Version 1
1984
AMSDOS
CTIX (Unix variant)
DYNIX
Mac OS (System 1.0)
MSX-DOS
NOS/VE
PANOS
PC/IX
ROS
Sinclair QDOS
QNX
SINIX
UNICOS
Venix 2.0
Virtual Machine/Extended Architecture Migration Assistance (VM/XA MA)
1985
AmigaOS
Atari TOS
DG/UX
DOS Plus
Graphics Environment Manager
Harmony
MIPS RISC/os
Oberon – written in Oberon
SunOS 2.0
Version 8 Unix
Virtual Machine/Extended Architecture System Facility (VM/XA SF)
Windows 1.0
Windows 1.01
Xenix 2.0
1986
AIX 1.0
Cronus distributed OS
FlexOS
GEMSOS – TCSEC A1-class, secure kernel for BLACKER VPN & GTNP
GEOS
Genera 7.0
HP-UX
SunOS 3.0
TR-DOS
TRIX
Version 9 Unix
1987
Arthur (much improved version came in 1989 under the name RISC OS)
BS2000 V9.0
IRIX (3.0 is first SGI version)
MDOS
MINIX 1.0
OS/2 (1.0)
PC-MOS/386
Topaz – semi-distributed OS for DEC Firefly workstation written in Modula-2+ and garbage collected
Windows 2.0
1988
A/UX (Apple Computer)
AOS/VS II (Data General)
CP/M rebranded as DR-DOS
Flex machine – tagged, capability machine with OS and other software written in ALGOL 68RS
GS/OS
HeliOS 1.0
KeyKOS – capability-based microkernel for IBM mainframes with automated persistence of app data
LynxOS
Mac OS (System 6)
MVS/System Product V3 (MVS/Enterprise Systems Architecture, MVS/ESA)
OS/2 (1.1)
OS/400
RISC iX
SpartaDOS X
SunOS 4.0
TOPS-10 7.04 (Last stable release, July 1988)
Virtual Machine/Extended Architecture System Product (VM/XA SP)
VAX VMM – TCSEC A1-class, VMM for VAX computers (limited use before cancellation)
1989
Army Secure Operating System (ASOS) – TCSEC A1-class secure, real-time OS for Ada applications
EPOC (EPOC16)
NeXTSTEP (1.0)
OS/2 (1.2)
RISC OS (First release was to be called Arthur 2, but was renamed to RISC OS 2, and was first sold as RISC OS 2.00 in April 1989)
SCO UNIX (Release 3)
TSX-32
Version 10 Unix
Xenix 2.3.4 (Last stable release)
== 1990s ==
1990
AIX 3.0
AmigaOS 2.0
BeOS (v1)
DOS/V
Genera 8.0
iS-DOS
LOCK – TCSEC A1-class secure system with kernel and hardware support for type enforcement
MVS/ESA SP Version 4
Novell NetWare 3
OS/2 1.3
OSF/1
RTEMS
PC/GEOS
Windows 3.0
Virtual Machine/Enterprise Systems Architecture (VM/XA ESA)
VSE/Enterprise Systems Architecture (VSE/ESA) Version 1
1991
Amoeba – microkernel-based, POSIX-compliant, distributed OS
GNO/ME
Linux 0.01-0.1
Mac OS (System 7)
MINIX 1.5
PenPoint OS
RISC OS 3
SUNMOS
Trusted Xenix – rewritten & security enhanced Xenix evaluated at TCSEC B2-class
1992
386BSD 0.1
Amiga Unix 2.01 (Latest stable release)
AmigaOS 3.0
BSD/386, by BSDi and later known as BSD/OS.
LGX
OpenVMS V1.0 (First OpenVMS AXP (Alpha) specific version, November 1992)
OS/2 2.0 (First i386 32-bit based version)
Plan 9 First Edition (First public release was made available to universities)
RSTS/E 10.1 (Last stable release, September 1992)
SLS
Solaris 2.0 (Successor to SunOS 4.x; based on SVR4 instead of BSD)
Windows 3.1
1993
IBM 4690 Operating System
FreeBSD
NetBSD
Novell NetWare 4
Newton OS
Nucleus RTOS
Open Genera 1.0
OS 2200 (Unisys)
OS/2 2.1
PTS-DOS
Slackware 1.0
Spring
Windows NT 3.1 (First Windows NT kernel public release)
1994
AIX 4.0, 4.1
IBM MVS/ESA SP Version 5
NetBSD 1.0 (First multi-platform release, October 1994)
OS/2 Warp 3.0
Red Hat
RISC OS 3.5
SPIN – extensible OS written in Modula-3
1995
Digital UNIX (aka Tru64 UNIX)
OpenBSD
OS/390
Plan 9 Second Edition (Commercial second release version was made available to the general public.)
SMSQ/E
Ultrix 4.5 (Last major release)
Windows 95
1996
AIX 4.2
Debian 1.1
JN – microkernel OS for embedded, Java apps
Mac OS 7.6 (First officially-named Mac OS)
OS/2 Warp 4.0
Palm OS
RISC OS 3.6
Windows NT 4.0
Windows CE 1.0
1997
AIX 4.3
DR-WebSpyder 1.0
EPOC (EPOC32)
Inferno
Mac OS 8
MINIX 2.0
Nemesis
RISC OS 3.7
SkyOS
Windows CE 2.0
1998
DR-WebSpyder 2.0
Junos
Novell NetWare 5
RT-11 5.7 (Last stable release, October 1998)
Solaris 7 (first 64-bit Solaris release – names from this point drop "2.", otherwise would've been Solaris 2.7)
Windows 98
1999
Amiga OS 3.5 (unofficial)
AROS (Boot for the first time in Stand Alone version)
Inferno Second Edition (Last distribution (Release 2.3, c. July 1999) from Lucent's Inferno Business Unit)
Mac OS 9
OS/2 Warp 4.5
RISC OS 4
Windows 98 (2nd edition)
== 2000s ==
== 2010s ==
== 2020s ==
== See also ==
Comparison of operating systems
List of operating systems
Comparison of real-time operating systems
Timeline of DOS operating systems
Timeline of Linux distributions (Diagram 1992–2010)
== References ==
== External links ==
UNIX History – a timeline of UNIX 1969 and its descendants at present
Concise Microsoft O.S. Timeline – a color-coded concise timeline for various Microsoft operating systems (1981–present)
Bitsavers – an effort to capture, salvage, and archive historical computer software and manuals from minicomputers and mainframes of the 1950s, 1960s, 1970s, and 1980s
A brief history of operating systems
Microsoft operating system time-line | Wikipedia/Timeline_of_operating_systems |
The Watts–Strogatz model is a random graph generation model that produces graphs with small-world properties, including short average path lengths and high clustering. It was proposed by Duncan J. Watts and Steven Strogatz in their article published in 1998 in the scientific journal Nature. The model also became known as the (Watts) beta model after Watts used β to formulate it in his popular science book Six Degrees.
== Rationale for the model ==
The formal study of random graphs dates back to the work of Paul Erdős and Alfréd Rényi. The graphs they considered, now known as the classical or Erdős–Rényi (ER) graphs, offer a simple and powerful model with many applications.
However, the ER graphs do not have two important properties observed in many real-world networks:
They do not generate local clustering and triadic closures. Instead, because they have a constant, random, and independent probability of two nodes being connected, ER graphs have a low clustering coefficient.
They do not account for the formation of hubs. Formally, the degree distribution of ER graphs converges to a Poisson distribution, rather than a power law observed in many real-world, scale-free networks.
The Watts and Strogatz model was designed as the simplest possible model that addresses the first of the two limitations. It accounts for clustering while retaining the short average path lengths of the ER model. It does so by interpolating between a randomized structure close to ER graphs and a regular ring lattice. Consequently, the model is able to at least partially explain the "small-world" phenomena in a variety of networks, such as the power grid, neural network of C. elegans, networks of movie actors, or fat-metabolism communication in budding yeast.
== Algorithm ==
Given the desired number of nodes N, the mean degree K (assumed to be an even integer), and a parameter β, all satisfying 0 ≤ β ≤ 1 and N ≫ K ≫ ln N ≫ 1, the model constructs an undirected graph with N nodes and NK/2 edges in the following way:
Construct a regular ring lattice, a graph with N nodes each connected to K neighbors, K/2 on each side. That is, if the nodes are labeled 0 … N−1, there is an edge (i, j) if and only if
{\displaystyle 0<|i-j|\ \mathrm {mod} \ \left(N-1-{\frac {K}{2}}\right)\leq {\frac {K}{2}}.}
For every node i = 0, …, N−1 take every edge connecting i to its K/2 rightmost neighbors, that is every edge (i, j) such that 0 < (j − i) mod N ≤ K/2, and rewire it with probability β. Rewiring is done by replacing (i, j) with (i, k) where k is chosen uniformly at random from all possible nodes while avoiding self-loops (k ≠ i) and link duplication (there is no edge (i, k′) with k′ = k at this point in the algorithm).
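The two steps above can be sketched directly in pure Python. This is an illustrative sketch, not the authors' reference code; the function name and internal structure are assumptions:

```python
import random

def watts_strogatz(n, k, beta, rng=random):
    """Sketch of the Watts-Strogatz procedure (k assumed even, 0 <= beta <= 1)."""
    # Step 1: regular ring lattice -- node i linked to its k/2 nearest
    # neighbors on each side.
    right_edges = [(i, (i + d) % n) for i in range(n) for d in range(1, k // 2 + 1)]
    adj = {i: set() for i in range(n)}
    for i, j in right_edges:
        adj[i].add(j)
        adj[j].add(i)
    # Step 2: rewire each rightward edge (i, j) with probability beta,
    # avoiding self-loops and duplicate links.
    for i, j in right_edges:
        if rng.random() < beta:
            candidates = [x for x in range(n) if x != i and x not in adj[i]]
            if candidates:  # skip when i is already linked to every other node
                new = rng.choice(candidates)
                adj[i].discard(j)
                adj[j].discard(i)
                adj[i].add(new)
                adj[new].add(i)
    return adj
```

For β = 0 this returns the plain ring lattice; every rewiring removes one edge and adds one, so the total edge count stays NK/2, and each node keeps its K/2 rightward edges. NetworkX ships an established implementation as networkx.watts_strogatz_graph(n, k, p).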
== Properties ==
The underlying lattice structure of the model produces a locally clustered network, while the randomly rewired links dramatically reduce the average path lengths. The algorithm introduces about βNK/2 such non-lattice edges. Varying β makes it possible to interpolate between a regular lattice (β = 0) and a structure close to an Erdős–Rényi random graph G(N, p) with p = K/(N−1) at β = 1. It does not approach the actual ER model since every node will be connected to at least K/2 other nodes.
The three properties of interest are the average path length, the clustering coefficient, and the degree distribution.
=== Average path length ===
For a ring lattice, the average path length is ℓ(0) ≈ N/2K ≫ 1 and scales linearly with the system size. In the limiting case of β → 1, the graph approaches a random graph with ℓ(1) ≈ ln N / ln K, while not actually converging to it. In the intermediate region 0 < β < 1, the average path length falls very rapidly with increasing β, quickly approaching its limiting value.
=== Clustering coefficient ===
For the ring lattice the clustering coefficient is C(0) = 3(K−2)/(4(K−1)), and so tends to 3/4 as K grows, independently of the system size. In the limiting case of β → 1 the clustering coefficient is of the same order as the clustering coefficient for classical random graphs, C = K/(N−1), and is thus inversely proportional to the system size. In the intermediate region the clustering coefficient remains quite close to its value for the regular lattice, and only falls at relatively high β. This results in a region where the average path length falls rapidly, but the clustering coefficient does not, explaining the "small-world" phenomenon.
If we use the Barrat and Weigt measure for clustering C′(β), defined as the fraction between the average number of edges between the neighbors of a node and the average number of possible edges between these neighbors, or, alternatively,
{\displaystyle C'(\beta )\equiv {\frac {3\times {\text{number of triangles}}}{\text{number of connected triples}}}}
then we get
{\displaystyle C'(\beta )\sim C(0)(1-\beta )^{3}.}
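For the unrewired lattice (β = 0) the Barrat–Weigt measure can be checked by counting triangles and connected triples directly. A minimal pure-Python sketch; the helper names are illustrative, not from the literature:

```python
import math
from itertools import combinations

def ring_lattice(n, k):
    """Regular ring lattice: each node linked to k/2 neighbors on each side."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for d in range(1, k // 2 + 1):
            adj[v].add((v + d) % n)
            adj[(v + d) % n].add(v)
    return adj

def transitivity(adj):
    """Barrat-Weigt clustering: 3 x triangles / connected triples."""
    # Summing the edges among each node's neighbors counts every triangle
    # three times, once per corner -- exactly the "3 x triangles" numerator.
    corners = sum(1 for v in adj
                  for a, b in combinations(adj[v], 2) if b in adj[a])
    triples = sum(math.comb(len(adj[v]), 2) for v in adj)
    return corners / triples

print(transitivity(ring_lattice(60, 6)))  # → 0.6, matching 3(K-2)/(4(K-1)) for K=6
```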
=== Degree distribution ===
The degree distribution in the case of the ring lattice is just a Dirac delta function centered at K. The degree distribution for a large number of nodes and 0 < β < 1 can be written as
{\displaystyle P(k)\approx \sum _{n=0}^{f(k,K)}{{K/2} \choose {n}}(1-\beta )^{n}\beta ^{K/2-n}{\frac {(\beta K/2)^{k-K/2-n}}{(k-K/2-n)!}}e^{-\beta K/2},}
where k_i is the degree of the i-th node, i.e. the number of edges attached to it. Here k ≥ K/2, and f(k, K) = min(k − K/2, K/2). The shape of the degree distribution is similar to that of a random graph: it has a pronounced peak at k = K and decays exponentially for large |k − K|. The topology of the network is relatively homogeneous, meaning that all nodes are of similar degree.
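The distribution above can be evaluated numerically with the standard library; a small sketch (function name illustrative):

```python
import math

def ws_degree_pmf(k, K, beta):
    """P(k) for the Watts-Strogatz model; K even, nonzero only for k >= K/2."""
    half = K // 2
    if k < half:
        return 0.0
    total = 0.0
    # n = number of the K/2 rightward edges that survive rewiring (binomial),
    # the rest of the degree comes from edges rewired in (Poisson term).
    for n in range(min(k - half, half) + 1):
        kept = math.comb(half, n) * (1 - beta) ** n * beta ** (half - n)
        poisson = ((beta * half) ** (k - half - n) / math.factorial(k - half - n)
                   * math.exp(-beta * half))
        total += kept * poisson
    return total

K, beta = 6, 0.3
pmf = [ws_degree_pmf(k, K, beta) for k in range(60)]
print(round(sum(pmf), 6))                    # → 1.0: the distribution is normalized
print(max(range(60), key=lambda k: pmf[k]))  # → 6: pronounced peak at k = K
```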
== Limitations ==
The major limitation of the model is that it produces an unrealistic degree distribution. In contrast, real networks are often scale-free networks inhomogeneous in degree, having hubs and a scale-free degree distribution. Such networks are better described in that respect by the preferential attachment family of models, such as the Barabási–Albert (BA) model. (On the other hand, the Barabási–Albert model fails to produce the high levels of clustering seen in real networks, a shortcoming not shared by the Watts and Strogatz model. Thus, neither the Watts and Strogatz model nor the Barabási–Albert model should be viewed as fully realistic.)
The Watts and Strogatz model also implies a fixed number of nodes and thus cannot be used to model network growth.
== See also ==
Small-world networks
Erdős–Rényi (ER) model
Barabási–Albert model
Social networks
== References == | Wikipedia/Watts–Strogatz_model |
Modularity is a measure of the structure of networks or graphs which measures the strength of division of a network into modules (also called groups, clusters or communities). Networks with high modularity have dense connections between the nodes within modules but sparse connections between nodes in different modules. Modularity is often used in optimization methods for detecting community structure in networks. Biological networks, including animal brains, exhibit a high degree of modularity. However, modularity maximization is not statistically consistent, and finds communities in its own null model, i.e. fully random graphs, and therefore it cannot be used to find statistically significant community structures in empirical networks. Furthermore, it has been shown that modularity suffers a resolution limit and, therefore, it is unable to detect small communities.
== Motivation ==
Many scientifically important problems can be represented and empirically studied using networks. For example, biological and social patterns, the World Wide Web, metabolic networks, food webs, neural networks and pathological networks are real world problems that can be mathematically represented and topologically studied to reveal some unexpected structural features. Most of these networks possess a certain community structure that has substantial importance in building an understanding regarding the dynamics of the network. For instance, a closely connected social community will imply a faster rate of transmission of information or rumor among them than a loosely connected community. Thus, if a network is represented by a number of individual nodes connected by links which signify a certain degree of interaction between the nodes, communities are defined as groups of densely interconnected nodes that are only sparsely connected with the rest of the network. Hence, it may be imperative to identify the communities in networks since the communities may have quite different properties such as node degree, clustering coefficient, betweenness, centrality, etc., from that of the average network. Modularity is one such measure, which when maximized, leads to the appearance of communities in a given network.
== Definition ==
Modularity is the fraction of the edges that fall within the given groups minus the expected fraction if edges were distributed at random. The value of the modularity for unweighted and undirected graphs lies in the range [−1/2, 1]. It is positive if the number of edges within groups exceeds the number expected on the basis of chance. For a given division of the network's vertices into some modules, modularity reflects the concentration of edges within modules compared with random distribution of links between all nodes regardless of modules.
There are different methods for calculating modularity. In the most common version of the concept, the randomization of the edges is done so as to preserve the degree of each vertex. Consider a graph with n nodes and m links (edges) such that the graph can be partitioned into two communities using a membership variable s. If a node v belongs to community 1, s_v = 1, or if v belongs to community 2, s_v = −1. Let the adjacency matrix for the network be represented by A, where A_vw = 0 means there is no edge (no interaction) between nodes v and w and A_vw = 1 means there is an edge between the two. Also for simplicity we consider an undirected network, so A_vw = A_wv. (It is important to note that multiple edges may exist between two nodes, but here we assess the simplest case.)
Modularity Q is then defined as the fraction of edges that fall within group 1 or 2, minus the expected number of edges within groups 1 and 2 for a random graph with the same node degree distribution as the given network.
The expected number of edges is computed using the concept of a configuration model. The configuration model is a randomized realization of a particular network. Given a network with n nodes, where each node v has a node degree k_v, the configuration model cuts each edge into two halves, and then each half edge, called a stub, is rewired randomly with any other stub in the network, even allowing self-loops (which occur when a stub is rewired to another stub from the same node) and multiple edges between the same two nodes. Thus, even though the node degree distribution of the graph remains intact, the configuration model results in a completely random network.
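The stub-rewiring procedure can be sketched in a few lines of Python; the function name is illustrative:

```python
import random
from collections import Counter

def configuration_model_edges(degrees, rng=random):
    """Cut every edge into two stubs, then pair the stubs uniformly at random.
    Self-loops and multi-edges are allowed, exactly as in the null model."""
    stubs = [v for v, k in enumerate(degrees) for _ in range(k)]
    assert len(stubs) % 2 == 0, "the degree sum must be even"
    rng.shuffle(stubs)
    return list(zip(stubs[0::2], stubs[1::2]))

edges = configuration_model_edges([3, 3, 2, 2, 1, 1])
print(len(edges))  # → 6: half of the 12 stubs
# The degree sequence is preserved even though the wiring is random:
print(Counter(v for e in edges for v in e) == Counter({0: 3, 1: 3, 2: 2, 3: 2, 4: 1, 5: 1}))  # → True
```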
== Expected Number of Edges Between Nodes ==
Now consider two nodes v and w, with node degrees k_v and k_w respectively, from a randomly rewired network as described above. We calculate the expected number of full edges between these nodes.
Let us consider each of the k_v stubs of node v and create associated indicator variables I_i^(v,w) for them, i = 1, …, k_v, with I_i^(v,w) = 1 if the i-th stub happens to connect to one of the k_w stubs of node w in this particular random graph. If it does not, then I_i^(v,w) = 0. Since the i-th stub of node v can connect to any of the 2m − 1 remaining stubs with equal probability (where m is the number of edges in the original graph), and since there are k_w stubs it can connect to associated with node w, evidently
{\displaystyle p(I_{i}^{(v,w)}=1)=E[I_{i}^{(v,w)}]={\frac {k_{w}}{2m-1}}}
The total number of full edges J_vw between v and w is just the sum of these indicators over the k_v stubs of v, so the expected value of this quantity is
{\displaystyle E[J_{vw}]=E\left[\sum _{i=1}^{k_{v}}I_{i}^{(v,w)}\right]=\sum _{i=1}^{k_{v}}E[I_{i}^{(v,w)}]=\sum _{i=1}^{k_{v}}{\frac {k_{w}}{2m-1}}={\frac {k_{v}k_{w}}{2m-1}}}
Many texts then make the following approximations, for random networks with a large number of edges. When m is large, they drop the subtraction of 1 in the denominator above and simply use the approximate expression k_v k_w / 2m for the expected number of edges between two nodes. Additionally, in a large random network, the number of self-loops and multi-edges is vanishingly small. Ignoring self-loops and multi-edges allows one to assume that there is at most one edge between any two nodes. In that case, J_vw becomes a binary indicator variable, so its expected value is also the probability that it equals 1, which means one can approximate the probability of an edge existing between nodes v and w as k_v k_w / 2m.
== Modularity ==
Hence, the difference between the actual number of edges between nodes v and w and the expected number of edges between them is
{\displaystyle A_{vw}-{\frac {k_{v}k_{w}}{2m}}}
Summing over all node pairs gives the equation for modularity, Q.
It is important to note that Eq. 3 holds for partitioning into two communities only. Hierarchical partitioning (i.e. partitioning into two communities, then further partitioning each sub-community into two smaller sub-communities, only to maximize Q) is a possible approach to identify multiple communities in a network. Additionally, (3) can be generalized for partitioning a network into c communities.
where e_ij is the fraction of edges with one end vertex in community i and the other in community j:
{\displaystyle e_{ij}=\sum _{vw}{\frac {A_{vw}}{2m}}1_{v\in c_{i}}1_{w\in c_{j}}}
and a_i is the fraction of ends of edges that are attached to vertices in community i:
{\displaystyle a_{i}={\frac {k_{i}}{2m}}=\sum _{j}e_{ij}}
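Putting the pieces together, Q = Σ_i (e_ii − a_i²) can be computed directly from an adjacency structure. A minimal pure-Python sketch with an illustrative example graph (names and graph chosen here for demonstration):

```python
def modularity(adj, labels):
    """Q = sum_c (e_cc - a_c^2). adj maps each node to its neighbor set
    (undirected, no self-loops); labels assigns each node a community id."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2.0  # number of edges
    Q = 0.0
    for c in set(labels.values()):
        nodes = {v for v in adj if labels[v] == c}
        # ordered pairs inside c count each internal edge twice, giving 2 * edges
        inside = sum(1 for v in nodes for w in adj[v] if w in nodes)
        e_cc = inside / (2 * m)                          # fraction of edges inside c
        a_c = sum(len(adj[v]) for v in nodes) / (2 * m)  # fraction of stub ends in c
        Q += e_cc - a_c ** 2
    return Q

# two triangles bridged by a single edge (2, 3)
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = {v: set() for v in range(6)}
for i, j in edges:
    adj[i].add(j)
    adj[j].add(i)
print(round(modularity(adj, {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}), 4))  # → 0.3571
```

Grouping each triangle into its own community scores Q = 5/14 ≈ 0.357, while the trivial one-community partition scores 0, as expected.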
== Example of multiple community detection ==
We consider an undirected network with 10 nodes and 12 edges and the following adjacency matrix.
The communities in the graph are represented by the red, green and blue node clusters in Fig 1. The optimal community partitions are depicted in Fig 2.
== Matrix formulation ==
An alternative formulation of the modularity, useful particularly in spectral optimization algorithms, is as follows. Define S_vr to be 1 if vertex v belongs to group r and 0 otherwise. Then
{\displaystyle \delta (c_{v},c_{w})=\sum _{r}S_{vr}S_{wr}}
and hence
{\displaystyle Q={\frac {1}{4m}}\sum _{vw}\sum _{r}\left[A_{vw}-{\frac {k_{v}k_{w}}{2m}}\right]S_{vr}S_{wr}={\frac {1}{4m}}\mathrm {Tr} (\mathbf {S} ^{\mathrm {T} }\mathbf {BS} ),}
where S is the (non-square) matrix having elements S_vr and B is the so-called modularity matrix, which has elements
{\displaystyle B_{vw}=A_{vw}-{\frac {k_{v}k_{w}}{2m}}.}
All rows and columns of the modularity matrix sum to zero, which means that the modularity of an undivided network is also always 0.
For networks divided into just two communities, one can alternatively define s_v = ±1 to indicate the community to which node v belongs, which then leads to
{\displaystyle Q={1 \over 4m}\sum _{vw}B_{vw}s_{v}s_{w}={1 \over 4m}\mathbf {s} ^{\mathrm {T} }\mathbf {Bs} ,}
where s is the column vector with elements s_v.
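The two-community quadratic form can be evaluated directly. A sketch (illustrative names) on a small example graph of two triangles joined by one edge, an example chosen here for demonstration:

```python
def modularity_two_way(adj, s):
    """Q = (1/4m) * s^T B s, with B_vw = A_vw - k_v k_w / 2m and s_v = +/-1."""
    n = len(adj)
    k = [len(adj[v]) for v in range(n)]  # node degrees
    m = sum(k) / 2.0                     # number of edges
    Q = 0.0
    for v in range(n):
        for w in range(n):
            A_vw = 1.0 if w in adj[v] else 0.0
            Q += (A_vw - k[v] * k[w] / (2 * m)) * s[v] * s[w]
    return Q / (4 * m)

# two triangles joined by the single edge (2, 3)
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(round(modularity_two_way(adj, [1, 1, 1, -1, -1, -1]), 4))  # → 0.3571
```

Because the rows of B sum to zero, setting every s_v = +1 (an undivided network) gives Q = 0, consistent with the statement above.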
This function has the same form as the Hamiltonian of an Ising spin glass, a connection that has been exploited to create simple computer algorithms, for instance using simulated annealing, to maximize the modularity. The general form of the modularity for arbitrary numbers of communities is equivalent to a Potts spin glass and similar algorithms can be developed for this case also.
== Overfitting ==
Although the method of modularity maximization is motivated by computing a deviation from a null model, this deviation is not computed in a statistically consistent manner. Because of this, the method notoriously finds high-scoring communities in its own null model (the configuration model), which by definition cannot be statistically significant. For this reason, the method cannot be used to reliably obtain statistically significant community structure in empirical networks.
== Resolution limit ==
Modularity compares the number of edges inside a cluster with the expected number of edges that one would find in the cluster if the network were a random network with the same number of nodes and where each node keeps its degree, but edges are otherwise randomly attached. This random null model implicitly assumes that each node can get attached to any other node of the network. This assumption is however unreasonable if the network is very large, as the horizon of a node includes a small part of the network, ignoring most of it.
Moreover, this implies that the expected number of edges between two groups of nodes decreases if the size of the network increases. So, if a network is large enough, the expected number of edges between two groups of nodes in modularity's null model may be smaller than one. If this happens, a single edge between the two clusters would be interpreted by modularity as a sign of a strong correlation between the two clusters, and optimizing modularity would lead to the merging of the two clusters, independently of the clusters' features. So, even weakly interconnected complete graphs, which have the highest possible density of internal edges, and represent the best identifiable communities, would be merged by modularity optimization if the network were sufficiently large.
For this reason, optimizing modularity in large networks would fail to resolve small communities, even when they are well defined. This bias is inevitable for methods like modularity optimization, which rely on a global null model.
== Multiresolution methods ==
There are two main approaches which try to solve the resolution limit within the modularity context: the addition of a resistance r to every node, in the form of a self-loop, which increases (r>0) or decreases (r<0) the aversion of nodes to form communities; or the addition of a parameter γ>0 in front of the null-case term in the definition of modularity, which controls the relative importance between internal links of the communities and the null model. Optimizing modularity for values of these parameters in their respective appropriate ranges, it is possible to recover the whole mesoscale of the network, from the macroscale in which all nodes belong to the same community, to the microscale in which every node forms its own community, hence the name multiresolution methods. However, it has been shown that these methods have limitations when communities are very heterogeneous in size.
== Software Tools ==
Several software tools are available that can compute clusterings in graphs with good modularity:
Original implementation of the multi-level Louvain method.
The Leiden algorithm which additionally avoids unconnected communities.
The Vienna Graph Clustering (VieClus) algorithm, a parallel memetic algorithm.
== See also ==
Complex network
Community structure
Null model
Percolation theory
== References == | Wikipedia/Modularity_(networks) |
A network host is a computer or other device connected to a computer network. A host may work as a server offering information resources, services, and applications to users or other hosts on the network. Hosts are assigned at least one network address.
A computer participating in networks that use the Internet protocol suite may also be called an IP host. Specifically, computers participating in the Internet are called Internet hosts. Internet hosts and other IP hosts have one or more IP addresses assigned to their network interfaces. The addresses are configured either manually by an administrator, automatically at startup by means of the Dynamic Host Configuration Protocol (DHCP), or by stateless address autoconfiguration methods.
Network hosts that participate in applications that use the client–server model of computing are classified as server or client systems. Network hosts may also function as nodes in peer-to-peer applications, in which all nodes share and consume resources in an equipotent manner.
== Origins ==
In operating systems, the term terminal host denotes a time-sharing computer or multi-user software providing services to computer terminals, or a computer that provides services to smaller or less capable devices, such as a mainframe computer serving teletype terminals or video terminals. Other examples of this architecture include a telnet host connected to a telnet server and an xhost connected to an X Window client.
The term Internet host or just host is used in a number of Request for Comments (RFC) documents that define the Internet and its predecessor, the ARPANET. RFC 871 defines a host as a general-purpose computer system connected to a communications network for "... the purpose of achieving resource sharing amongst the participating operating systems..."
While the ARPANET was being developed, computers connected to the network were typically mainframe computer systems that could be accessed from dumb terminals connected via serial ports. Since these terminals did not host software or perform computations themselves, they were not considered hosts as they were not connected to any IP network, and were not assigned IP addresses. User computers connected to the ARPANET at a packet-switching node were considered hosts.
== Nodes, hosts, and servers ==
A network node is any device participating in a network. A host is a node that participates in user applications, either as a server, client, or both. A server is a type of host that offers resources to the other hosts. Typically a server accepts connections from clients who request a service function.
Every network host is a node, but not every network node is a host. Network infrastructure hardware, such as modems, Ethernet hubs, and network switches are not directly or actively participating in application-level functions, and do not necessarily have a network address, and are not considered to be network hosts.
== See also ==
Communication endpoint – Type of communication network node
End system – Computer connected to a network
Port (computer networking) – Communications endpoint in an operating system
Terminal (telecommunication) – Device which ends a telecommunications link
== References ==
== External links ==
R. Braden, ed. (October 1989). Requirements for Internet Hosts -- Communication Layers. doi:10.17487/RFC1122. RFC 1122. | Wikipedia/Host_(network) |
IEEE 802.20 or Mobile Broadband Wireless Access (MBWA) was a specification by the standard association of the Institute of Electrical and Electronics Engineers (IEEE) for mobile broadband networks. The main standard was published in 2008. MBWA is no longer being actively developed.
This wireless broadband technology is also known and promoted as iBurst (or HC-SDMA, High Capacity Spatial Division Multiple Access). It was originally developed by ArrayComm and optimizes the use of its bandwidth with the help of smart antennas. Kyocera is the manufacturer of iBurst devices.
== Description ==
iBurst is a mobile broadband wireless access system that was first developed by ArrayComm, and announced with partner Sony in April 2000.
It was adopted as the High Capacity – Spatial Division Multiple Access (HC-SDMA) radio interface standard (ATIS-0700004-2005) by the Alliance for Telecommunications Industry Solutions (ATIS).
The standard was prepared by ATIS’ Wireless Technology and Systems Committee's Wireless Wideband Internet Access subcommittee and accepted as an American National Standard in 2005.
HC-SDMA was announced as being considered by ISO TC204 WG16 for the continuous communications standards architecture, known as Communications, Air-interface, Long and Medium range (CALM), which ISO is developing for intelligent transport systems (ITS). ITS may include applications for public safety, network congestion management during traffic incidents, automatic toll booths, and more. An official liaison between WTSC and ISO TC204 WG16 was established for this purpose in 2005.
The HC-SDMA interface provides wide-area broadband wireless data-connectivity for fixed, portable and mobile computing devices and appliances. The protocol is designed to be implemented with smart antenna array techniques (called MIMO for multiple-input multiple-output) to substantially improve the radio frequency (RF) coverage, capacity and performance for the system.
In January 2006, the IEEE 802.20 Mobile Broadband Wireless Access Working Group adopted a technology proposal that included the use of the HC-SDMA standard for the 625 kHz Multi-Carrier time-division duplex (TDD) mode of the standard. One Canadian vendor operates at 1.8 GHz.
== Technical description ==
The HC-SDMA interface operates on a similar premise as cellular phones, with hand-offs between HC-SDMA cells providing the user with seamless wireless Internet access even when moving at the speed of a car or train.
The standard's proposed benefits:
IP roaming & handoff (at more than 1 Mbit/s)
New MAC and PHY with IP and adaptive antennas
Optimized for full mobility up to vehicular speeds of 250 km/h
Operates in Licensed Bands (below 3.5 GHz)
Uses Packet Architecture
Low Latency
Some technical details were:
Bandwidths of 5, 10, and 20 MHz.
Peak data rates of 80 Mbit/s.
Spectral efficiency above 1 bit/s/Hz using multiple-input/multiple-output (MIMO) technology.
Layered frequency hopping allocates OFDM carriers to near, middle, and far-away handsets, improving SNR (works best for SISO handsets.)
Supports low-bit rates efficiently, carrying up to 100 phone calls per MHz.
Hybrid ARQ with up to 6 transmissions and several choices for interleaving.
Basic slot period of 913 microseconds carrying 8 OFDM symbols.
One of the first standards to support both TDM (FL, RL) and separate-frequency (FL, RL) deployments.
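The figures above can be related with a small arithmetic sketch (illustrative only, not part of the standard text): spectral efficiency is simply the data rate divided by the channel bandwidth.

```python
def spectral_efficiency(peak_bit_rate: float, bandwidth_hz: float) -> float:
    # Spectral efficiency (bit/s/Hz) = data rate / channel bandwidth.
    return peak_bit_rate / bandwidth_hz

# The listed 80 Mbit/s peak rate in the widest 20 MHz channel:
print(spectral_efficiency(80e6, 20e6))  # 4.0 bit/s/Hz, well above the 1 bit/s/Hz floor
```

The same 80 Mbit/s peak in a 5 MHz channel would imply 16 bit/s/Hz, which is why the peak rate is associated with the widest bandwidth and MIMO operation.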
The protocol:
specifies base station and client device RF characteristics, including output power levels, transmit frequencies and timing error, pulse shaping, in-band and out-of band spurious emissions, receiver sensitivity and selectivity;
defines associated frame structures for the various burst types including standard uplink and downlink traffic, paging and broadcast burst types;
specifies the modulation, forward error correction, interleaving and scrambling for various burst types;
describes the various logical channels (broadcast, paging, random access, configuration and traffic channels) and their roles in establishing communication over the radio link; and
specifies procedures for error recovery and retry.
The protocol also supports Layer 3 (L3) mechanisms for creating and controlling logical connections (sessions) between client device and base including registration, stream start, power control, handover, link adaptation, and stream closure, as well as L3 mechanisms for client device authentication and secure transmission on the data links.
Currently deployed iBurst systems allow connectivity up to 2 Mbit/s for each subscriber device. Future firmware upgrades are reportedly expected to increase these speeds to 5 Mbit/s, consistent with the HC-SDMA protocol.
== History ==
The 802.20 working group was proposed in response to products using technology originally developed by ArrayComm marketed under the iBurst brand name. The Alliance for Telecommunications Industry Solutions adopted iBurst as ATIS-0700004-2005.
The Mobile Broadband Wireless Access (MBWA) Working Group was approved by IEEE Standards Board on December 11, 2002, to prepare a formal specification for a packet-based air interface designed for Internet Protocol-based services.
At its height, the group had 175 participants.
On June 8, 2006, the IEEE-SA Standards Board directed that all activities of the 802.20 Working Group be temporarily suspended until October 1, 2006.
The decision came from complaints of a lack of transparency, and that the group's chair, Jerry Upton, was favoring Qualcomm.
The unprecedented step came after other working groups had also been subject to related allegations of large companies undermining the standard process.
Intel and Motorola had filed appeals, claiming they were not given time to prepare proposals.
These claims were cited in a 2007 lawsuit filed by Broadcom against Qualcomm.
On September 15, 2006, the IEEE-SA Standards Board approved a plan to enable the working group to move towards completion and approval by reorganizing.
The chair at the November 2006 meeting was Arnold Greenspan.
On July 17, 2007, the IEEE 802 Executive Committee along with its 802.20 Oversight Committee approved a change to voting in the 802.20 working group. Instead of a vote per attending individual, each entity would have a single vote.
On June 12, 2008, the IEEE approved the base standard to be published.
Additional supporting standards included IEEE 802.20.2-2010, a protocol conformance statement, 802.20.3-2010, minimum performance characteristics, an amendment 802.20a-2010 for a Management Information Base and some corrections, and amendment 802.20b-2010 to support bridging.
The 802.20 standard was put into hibernation in March 2011 due to lack of activity.
In 2004 another wireless standard group had been formed as IEEE 802.22, for wireless regional networks using unused television station frequencies.
Trials such as those in the Netherlands by T-Mobile International in 2004 were announced as "Pre-standard 802.20". These were based on an orthogonal frequency-division multiplexing technology known as FLASH-OFDM developed by Flarion (since 2006 owned by Qualcomm).
However, other service providers soon adopted 802.16e (the mobile version of WiMAX).
In September 2008, the Association of Radio Industries and Businesses in Japan adopted the 802.20-2008 standard as ARIB STD-T97.
Kyocera markets products supporting the standard under the iBurst name. As of March 2011, Kyocera claimed 15 operators offered service in 12 countries.
== Commercial use ==
Various device options are commercially available:
Desktop modem with USB and Ethernet ports (with external power supply)
Portable USB modem (using USB power supply)
Laptop modem (PC card)
Wireless Residential Gateway
Mobile Broadband Router
iBurst was commercially available in twelve countries in 2011, including Azerbaijan, Lebanon, and the United States.
iBurst (Pty) Ltd started operation in South Africa in 2005.
iBurst Africa International provided the service in Ghana in 2007, and then later in Mozambique, Democratic Republic of the Congo and Kenya.
MoBif Wireless Broadband Sdn Bhd, started service in Malaysia in 2007, changing its name to iZZinet. The provider ceased operations in March 2011.
In Australia, Veritel and Personal Broadband Australia (a subsidiary of Commander Australia Limited) offered iBurst services; however, both have since been shut down following the rise of 3.5G and 4G mobile data services. BigAir acquired Veritel's iBurst customers in 2006 and shut down the service in 2009.
Personal Broadband Australia's iBurst service was shut down in December 2008.
iBurst South Africa officially shut down on August 31, 2017. Users were given the choice to keep their @iburst.co.za or @wbs.co.za email addresses. iBurst still kept support staff available, although this support was also expected to be shut down by the end of 2017 (no information has been given about continued support for the email addresses).
== See also ==
Broadband Wireless Access
Satellite internet
== References ==
== External links ==
"IEEE 802.20: Mobile Broadband Wireless Access (MBWA)". Official web site. Retrieved August 20, 2011.
IEEE Standard for Local and metropolitan area networks — Part 20: Air Interface for Mobile Broadband Wireless Access Systems Supporting Vehicular Mobility — Physical and Media Access Control Layer Specification (PDF). IEEE Standards Association. August 29, 2008. ISBN 978-0-7381-5766-5. Archived from the original (PDF) on February 15, 2010. {{cite book}}: |work= ignored (help)
Experiment and Simulation Results of Adaptive Antenna Array System at Base and Mobile Stations in Mobile Environment - IEICE Transactions
Kyocera website
iBurst Association
Network, networking and networked may refer to:
== Science and technology ==
Network theory, the study of graphs as a representation of relations between discrete objects
Network science, an academic field that studies complex networks
=== Mathematics ===
Networks, a graph with attributes studied in network theory
Scale-free network, a network whose degree distribution follows a power law
Small-world network, a mathematical graph in which most nodes are not neighbors, but have neighbors in common
Flow network, a directed graph where each edge has a capacity and each edge receives a flow
=== Biology ===
Biological network, any network that applies to biological systems
Ecological network, a representation of interacting species in an ecosystem
Neural network, a network or circuit of neurons
=== Technology and communication ===
Artificial neural network, a computing system inspired by animal brains
Broadcast network, radio stations, television stations, or other electronic media outlets that broadcast content from a centralized source
News network
Radio network, including both broadcast and two-way communications
Television network, used to distribute television program content
Electrical network, an interconnection of electrical components
Social networking service, an online platform that people use to build social networks
Telecommunications network, allowing communication between separated nodes
Computer network or data network, a digital telecommunications network
Network hardware: Network switch, Networking cable
Wireless network, a computer network using wireless data connections
Network (typeface), used on the transport network in the West Midlands, UK
=== Sociology and business ===
Social network, in social science research
Scientific collaboration network, a social network wherein nodes are scientists and links are co-authorships
Social group, a network of people
Network of practice, a social science concept
Business networking, the sharing of information or services between people, companies or groups
Personal networking, the practice of developing and maintaining a personal network
Supply network, a pattern of temporal and spatial processes carried out at facility nodes and over distribution links
Transport network, a network in geographic space
== Arts, entertainment and media ==
Network (1976 film), a 1976 American film
Network (2019 film), an Indian film
Network (album), a 2004 album by Saga
Network (comics), a series of Marvel Comics characters
Network (play), a 2017 play based on the 1976 film
Network (TV series), a Canadian variety television series
Network (video game), a 1980 business simulation game for the Apple II
Network, aka Taryn Haldane, a fictional character and member of the Sovereign Seven comic book series
Network, the members' newsletter of the British Sociological Association
The Network, an American new wave band
"The Network", a 1987 Matlock episode
The Network, a fictional organization in the comic strip Modesty Blaise
"Networking", a song by We Are the Physics from We Are the Physics Are OK at Music
== Organizations ==
NETWORK (Slovak party), a political party in Slovakia
Network (lobby group), an American social justice group
The Network (group of churches), an international group of evangelical churches
The Network (political party), an Italian political party (1991–1999)
The Network (professional wrestling), a professional wrestling stable
The Network 2018, an Italian political party (2011–present)
Network (Russia), allegedly an anti-government anarchist organization active in Russia in 2015–2017
== See also ==
List of university networks
Nettwerk, Nettwerk Music Group, a record label
Netzwerk (disambiguation)
Networked: The New Social Operating System, a 2012 book
A body area network (BAN), also referred to as a wireless body area network (WBAN), a body sensor network (BSN) or a medical body area network (MBAN), is a wireless network of wearable computing devices. BAN devices may be embedded inside the body as implants or pills, may be surface-mounted on the body in a fixed position, or may be accompanying devices which humans carry in different positions, such as in clothes pockets, by hand, or in various bags. Devices are becoming smaller, especially in body area networks, which include multiple small body sensor units (BSUs) and a single central unit (BCU). Despite this trend, decimeter-sized (tab and pad) smart devices still play an important role: they act as data hubs or gateways and provide a user interface for viewing and managing BAN applications on the spot. The development of WBAN technology started around 1995 with the idea of using wireless personal area network (WPAN) technologies to implement communications on, near, and around the human body. About six years later, the term "BAN" came to refer to systems where communication is entirely within, on, and in the immediate proximity of a human body. A WBAN system can use WPAN wireless technologies as gateways to reach longer ranges, connecting the wearable devices on the human body to the internet. This way, medical professionals can access patient data online, independent of the patient's location.
== Concept ==
The rapid growth in physiological sensors, low-power integrated circuits, and wireless communication has enabled a new generation of wireless sensor networks, now used for purposes such as monitoring traffic, crops, infrastructure, and health. The body area network field is an interdisciplinary area which could allow inexpensive and continuous health monitoring with real-time updates of medical records through the Internet. A number of intelligent physiological sensors can be integrated into a wearable wireless body area network, which can be used for computer-assisted rehabilitation or early detection of medical conditions. This area relies on the feasibility of implanting very small biosensors inside the human body that are comfortable and that do not impair normal activities. The implanted sensors collect various physiological signals in order to monitor the patient's health status regardless of their location, and transmit the information wirelessly to an external processing unit, which in turn relays it in real time to doctors throughout the world. If an emergency is detected, physicians can immediately inform the patient through the computer system by sending appropriate messages or alarms. Currently, the level of information provided and the energy resources capable of powering the sensors are limiting factors. While the technology is still at a primitive stage, it is being widely researched and, once adopted, is expected to be a breakthrough in healthcare, making concepts like telemedicine and mHealth a reality.
== Applications ==
Initial applications of BANs are expected to appear primarily in the healthcare domain, especially for continuous monitoring and logging vital parameters of patients with chronic diseases such as diabetes, asthma and heart attacks.
A BAN in place on a patient can alert the hospital, even before a heart attack occurs, by measuring changes in the patient's vital signs.
A BAN on a patient with diabetes could auto-inject insulin through a pump as soon as their blood-sugar level rises.
A BAN can be used to learn the underlying health state transitions and dynamics of a disease.
Other applications of this technology include sports, military, or security. Extending the technology to new areas could also assist communication by seamless exchanges of information between individuals, or between individuals and machines.
== Standards ==
The latest international standard for BANs is the IEEE 802.15.6 standard.
== Components ==
A typical BAN or BSN requires vital sign monitoring sensors, motion detectors (through accelerometers) to help identify the location of the monitored individual, and some form of communication to transmit vital sign and motion readings to medical practitioners or caregivers. A typical body area network kit will consist of sensors, a processor, a transceiver, and a battery. Physiological sensors, such as ECG and SpO2 sensors, have been developed. Other sensors, such as a blood pressure sensor, an EEG sensor, and a PDA for the BSN interface, are under development.
=== Wireless communication in the U.S. ===
The FCC has approved the allocation of 40 MHz of spectrum bandwidth for medical BAN low-power, wide-area radio links at the 2360–2400 MHz band. This will allow off-loading MBAN communication from the already saturated standard Wi-Fi spectrum to a standard band.
The 2360–2390 MHz frequency range is available on a secondary basis. The FCC will expand the existing Medical Device Radiocommunication (MedRadio) Service in Part 95 of its rules. MBAN devices using the band will operate under a 'license-by-rule' basis, which eliminates the need to apply for individual transmitter licenses. Usage of the 2360–2390 MHz frequencies is restricted to indoor operation at health-care facilities and is subject to registration and site approval by coordinators to protect aeronautical telemetry primary usage. Operation in the 2390–2400 MHz band is not subject to registration or coordination and may be used in all areas, including residential.
== Challenges ==
Problems with the use of this technology could include:
Data quality: Data generated and collected through BANs can play a key role in the patient care process. It is essential that the quality of this data is of a high standard to ensure that the decisions made are based on the best information possible
Data management: As BANs generate large volumes of data, the need to manage and maintain these datasets is of utmost importance.
Sensor validation: Pervasive sensing devices are subject to inherent communication and hardware constraints including unreliable wired/wireless network links, interference and limited power reserves. This may result in erroneous datasets being transmitted back to the end user. It is of the utmost importance especially within a healthcare domain that all sensor readings are validated. This helps to reduce false alarm generation and to identify possible weaknesses within the hardware and software design.
Data consistency: Data residing on multiple mobile devices and wireless patient notes need to be collected and analysed in a seamless fashion. Within body area networks, vital patient datasets may be fragmented over a number of nodes and across a number of networked PCs or Laptops. If a medical practitioner's mobile device does not contain all known information then the quality of patient care may degrade.
Security: Considerable effort would be required to make WBAN transmission secure and accurate. It would have to be ensured that a patient's secure data is derived only from that patient's dedicated WBAN system and is not mixed up with other patients' data. Further, the data generated from a WBAN should have secure and limited access. Although security is a high priority in most networks, little study has been done in this area for WBANs. As WBANs are resource-constrained in terms of power, memory, communication rate, and computational capability, security solutions proposed for other networks may not be applicable to WBANs. Confidentiality, authentication, integrity, and freshness of data, together with availability and secure management, are the security requirements in a WBAN. The IEEE 802.15.6 standard, the latest standard for WBANs, tried to provide security in WBAN; however, it has several security problems.
Interoperability: WBAN systems would have to ensure seamless data transfer across standards such as Bluetooth, Zigbee etc. to promote information exchange, plug and play device interaction. Further, the systems would have to be scalable, ensure efficient migration across networks and offer uninterrupted connectivity.
System devices: The sensors used in WBAN would have to be low on complexity, small in form factor, light in weight, power efficient, easy to use and reconfigurable. Further, the storage devices need to facilitate remote storage and viewing of patient data as well as access to external processing and analysis tools via the Internet.
Energy vs. accuracy: The sensors' activation policy should be determined by optimizing the trade-off between the BAN's power consumption and the probability of misclassifying the patient's health state. Higher power consumption often yields more accurate observations of the patient's health state, and vice versa.
Privacy: People might consider the WBAN technology as a potential threat to freedom if the applications go beyond "secure" medical usage. Social acceptance would be key to this technology finding a wider application.
Interference: The wireless link used for body sensors should reduce the interference and increase the coexistence of sensor node devices with other network devices available in the environment. This is especially important for large scale implementation of WBAN systems.
Cost: Today's consumers expect low cost health monitoring solutions which provide high functionality. WBAN implementations will need to be cost optimized to be appealing alternatives to health conscious consumers.
Constant monitoring: Users may require different levels of monitoring, for example those at risk of cardiac ischemia may want their WBANs to function constantly, while others at risk of falls may only need WBANs to monitor them while they are walking or moving. The level of monitoring influences the amount of energy required and the life cycle of the BAN before the energy source is depleted.
Constrained deployment: The WBAN needs to be wearable, lightweight and non intrusive. It should not alter or encumber the user's daily activities. The technology should ultimately be transparent to the user i.e., it should perform its monitoring tasks without the user realising it.
Consistent performance: The performance of the WBAN should be consistent. Sensor measurements should be accurate and calibrated, even when the WBAN is switched off and switched on again. The wireless links should be robust and work under various user environments.
== See also ==
Energy harvesting
EnOcean
Physiological Signal Based Security
== References ==
== Further reading ==
Ullah, Sana; Higgins, Henry; Braem, Bart; Latre, Benoit; Blondia, Chris; Moerman, Ingrid; Saleem, Shahnaz; Rahman, Ziaur & Kwak, Kyung (2010). "A Comprehensive Survey of Wireless Body Area Networks". Journal of Medical Systems. 36 (3): 1–30. doi:10.1007/s10916-010-9571-3. hdl:1854/LU-3234782. ISSN 0148-5598. PMID 20721685. S2CID 7988320.
== External links ==
Video of a short talk by Cardiologist Eric Topol about Wireless medicine
"Mobile Health: Concepts, Initiatives and Applications", First Book (in Portuguese) about using Wireless Technology to assist Healthcare | Wikipedia/Body_area_network |
In operating systems, memory management is the function responsible for managing the computer's primary memory.
The memory management function keeps track of the status of each memory location, either allocated or free. It determines how memory is allocated among competing processes, deciding which gets memory, when they receive it, and how much they are allowed. When memory is allocated it determines which memory locations will be assigned. It tracks when memory is freed or unallocated and updates the status.
This is distinct from application memory management, which is how a process manages the memory assigned to it by the operating system.
== Memory management techniques ==
=== Single contiguous allocation ===
Single allocation is the simplest memory management technique. All the computer's memory, usually with the exception of a small portion reserved for the operating system, is available to a single application. MS-DOS is an example of a system that allocates memory in this way. An embedded system running a single application might also use this technique.
A system using single contiguous allocation may still multitask by swapping the contents of memory to switch among users. Early versions of the MUSIC operating system used this technique.
=== Partitioned allocation ===
Partitioned allocation divides primary memory into multiple memory partitions, usually contiguous areas of memory. Each partition might contain all the information for a specific job or task. Memory management consists of allocating a partition to a job when it starts and unallocating it when the job ends.
Partitioned allocation usually requires some hardware support to prevent the jobs from interfering with one another or with the operating system. The IBM System/360 uses a lock-and-key technique. The UNIVAC 1108, PDP-6 and PDP-10, and GE-600 series use base and bounds registers to indicate the ranges of accessible memory.
Partitions may be either static, that is defined at Initial Program Load (IPL) or boot time, or by the computer operator, or dynamic, that is, automatically created for a specific job. IBM System/360 Operating System Multiprogramming with a Fixed Number of Tasks (MFT) is an example of static partitioning, and Multiprogramming with a Variable Number of Tasks (MVT) is an example of dynamic. MVT and successors use the term region to distinguish dynamic partitions from static ones in other systems.
Partitions may be relocatable with base registers, as in the UNIVAC 1108, PDP-6 and PDP-10, and GE-600 series. Relocatable partitions are able to be compacted to provide larger chunks of contiguous physical memory. Compaction moves "in-use" areas of memory to eliminate "holes" or unused areas of memory caused by process termination in order to create larger contiguous free areas.
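Compaction can be sketched as sliding every in-use partition toward low memory so that the free areas coalesce into a single contiguous hole. This is an illustrative model only (names like `compact` are invented); a real system must also update each job's base register when its partition moves.

```python
def compact(partitions):
    """Slide in-use partitions to low memory; return (new_layout, free_hole).

    partitions: list of (owner, size) in address order; owner is None for a hole.
    Returns new_layout as (owner, base, size) tuples and the single remaining
    hole as (base, size).
    """
    next_base = 0
    new_layout = []
    for owner, size in partitions:
        if owner is not None:
            # The in-use area moves down to the next free base address; with
            # relocatable partitions, only the job's base register changes.
            new_layout.append((owner, next_base, size))
            next_base += size
    total = sum(size for _, size in partitions)
    return new_layout, (next_base, total - next_base)

# Three jobs separated by two holes of 50 and 30 units:
layout = [("job1", 100), (None, 50), ("job2", 200), (None, 30), ("job3", 80)]
jobs, hole = compact(layout)
print(jobs)   # jobs packed contiguously from address 0
print(hole)   # (380, 80): one free area of 80 units starting at address 380
```

After compaction the two small holes, neither large enough for an 80-unit job on its own, have merged into one usable free area.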
Some systems allow partitions to be swapped out to secondary storage to free additional memory. Early versions of IBM's Time Sharing Option (TSO) swapped users in and out of time-sharing partitions.
=== Paged memory management ===
Paged allocation divides the computer's primary memory into fixed-size units called page frames, and the program's virtual address space into pages of the same size. The hardware memory management unit maps pages to frames. The physical memory can be allocated on a page basis while the address space appears contiguous.
Usually, with paged memory management, each job runs in its own address space. However, some single address space operating systems run all processes in one shared address space: IBM i runs all processes within a single large address space, and IBM OS/VS1 and OS/VS2 (SVS) ran all jobs in a single 16 MiB virtual address space.
Paged memory can be demand-paged when the system can move pages as required between primary and secondary memory.
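The page-to-frame translation described above can be modeled as follows (a hedged Python sketch; the 4 KiB page size and the dictionary used as a page table are assumptions for illustration):

```python
PAGE_SIZE = 4096  # bytes per page/frame (an assumed size)

def translate(page_table, virtual_addr):
    """Map a virtual address to a physical one via a simple page table."""
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table.get(page)
    if frame is None:
        # A demand-paged system would load the page from secondary memory here.
        raise KeyError("page fault: page %d not resident" % page)
    return frame * PAGE_SIZE + offset

# Pages 0 and 1 mapped to non-contiguous frames 7 and 3:
table = {0: 7, 1: 3}
print(translate(table, 4100))  # page 1, offset 4 -> frame 3 -> 12292
```

The address space appears contiguous to the program even though frames 7 and 3 are scattered in physical memory.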
=== Segmented memory management ===
Segmented memory is the only memory management technique that does not provide the user's program with a "linear and contiguous address space". Segments are areas of memory that usually correspond to a logical grouping of information such as a code procedure or a data array. Segments require hardware support in the form of a segment table, which usually contains the physical address of the segment in memory, its size, and other data such as access protection bits and status (swapped in, swapped out, etc.).
Segmentation allows better access protection than other schemes because memory references are relative to a specific segment and the hardware will not permit the application to reference memory not defined for that segment.
It is possible to implement segmentation with or without paging. Without paging support the segment is the physical unit swapped in and out of memory if required. With paging support the pages are usually the unit of swapping and segmentation only adds an additional level of security.
Addresses in a segmented system usually consist of the segment id and an offset relative to the segment base address, defined to be offset zero.
The Intel IA-32 (x86) architecture allows a process to have up to 16,383 segments of up to 4GiB each. IA-32 segments are subdivisions of the computer's linear address space, the virtual address space provided by the paging hardware.
The Multics operating system is probably the best known system implementing segmented memory. Multics segments are subdivisions of the computer's physical memory of up to 256 pages, each page being 1K 36-bit words in size, resulting in a maximum segment size of 1MiB (with 9-bit bytes, as used in Multics). A process could have up to 4046 segments.
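The segment-table lookup, limit check, and access protection described in this section can be sketched as follows (an illustrative Python model; the segment numbers, sizes, and two-entry table are invented for the example):

```python
# Each segment table entry: (physical base address, size, writable?)
segment_table = {
    0: (0x2000, 0x400, False),   # a code segment, read-only
    1: (0x8000, 0x100, True),    # a data segment
}

def translate(seg_id, offset, write=False):
    """Resolve (segment id, offset) to a physical address, enforcing protection."""
    base, size, writable = segment_table[seg_id]
    if offset >= size:
        raise MemoryError("offset beyond segment limit")
    if write and not writable:
        raise PermissionError("segment is read-only")
    return base + offset

print(hex(translate(1, 0x10, write=True)))  # 0x8010
```

The hardware refuses any reference outside the segment's limit or against its protection bits, which is why segmentation offers finer-grained access control than plain paging.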
== Rollout/rollin ==
Rollout/rollin (RO/RI) is a computer operating system memory management technique where the entire non-shared code and data of a running program is swapped out to auxiliary memory (disk or drum) to free main storage for another task. Programs may be rolled out "by demand end or...when waiting for some long event." Rollout/rollin was commonly used in time-sharing systems, where the user's "think time" was relatively long compared to the time to do the swap.
Unlike virtual storage—paging or segmentation, rollout/rollin does not require any special memory management hardware; however, unless the system has relocation hardware such as a memory map or base and bounds registers, the program must be rolled back in to its original memory locations. Rollout/rollin has been largely superseded by virtual memory.
Rollout/rollin was an optional feature of OS/360 Multiprogramming with a Variable Number of Tasks (MVT).
Rollout/rollin allows the temporary, dynamic expansion of a particular job beyond its originally specified region. When a job needs more space, rollout/rollin attempts to obtain unassigned storage for the job's use. If there is no such unassigned storage, another job is rolled out—i.e., is transferred to auxiliary storage—so that its region may be used by the first job. When released by the first job, this additional storage is again available, either (1) as unassigned storage, if that was its source, or (2) to receive the job to be transferred back into main storage (rolled in).
In OS/360, rollout/rollin was used only for batch jobs, and rollin does not occur until the jobstep borrowing the region terminates.
== See also ==
Memory overcommitment
Memory protection
x86 memory segmentation
== Notes ==
== References == | Wikipedia/Memory_management_(operating_systems) |
Internetworking is the practice of interconnecting multiple computer networks. Typically, this enables any pair of hosts in the connected networks to exchange messages irrespective of their hardware-level networking technology. The resulting system of interconnected networks is called an internetwork, or simply an internet.
The most notable example of internetworking is the Internet, a network of networks based on many underlying hardware technologies. The Internet is defined by a unified global addressing system, packet format, and routing methods provided by the Internet Protocol.
The term internetworking is a combination of the components inter (between) and networking. An earlier term for an internetwork is catenet, a short-form of (con)catenating networks.
== History ==
The first international heterogeneous resource sharing network was developed by the computer science department at University College London (UCL), which interconnected the ARPANET with early British academic networks beginning in 1973. In the ARPANET, the network elements used to connect individual networks were called gateways, but the term has been deprecated in this context because of possible confusion with functionally different devices. By 1973–74, researchers in France, the United States, and the United Kingdom had worked out an approach to internetworking in which the differences between network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability as in the ARPANET, the hosts became responsible, as demonstrated in the CYCLADES network. Researchers at Xerox PARC outlined the idea of Ethernet and the PARC Universal Packet (PUP) for internetworking. Research at the National Physical Laboratory in the United Kingdom confirmed that establishing a common host protocol would be more reliable and efficient. The ARPANET connection to UCL later evolved into SATNET. In 1977, ARPA demonstrated a three-way internetworking experiment, which linked a mobile vehicle in PRNET with nodes in the ARPANET, and, via SATNET, with nodes at UCL. The X.25 protocol, on which public data networks were based in the 1970s and 1980s, was supplemented by the X.75 protocol, which enabled internetworking.
Today the interconnecting gateways are called routers. The definition of an internetwork today includes the connection of other types of computer networks such as personal area networks.
=== Catenet ===
Catenet, a short-form of (con)catenating networks, is obsolete terminology for a system of packet-switched communication networks interconnected via gateways.
The term was coined by Louis Pouzin, who designed the CYCLADES network, in an October 1973 note circulated to the International Network Working Group, which was published in a 1974 paper "A Proposal for Interconnecting Packet Switching Networks". Pouzin was a pioneer of internetworking at a time when network meant what is now called a local area network. Catenet was the concept of linking these networks into a network of networks with specifications for compatibility of addressing and routing. The term was used in technical writing in the late 1970s and early 1980s, including in RFCs and IENs. Catenet was gradually displaced by the short-form of the term internetwork, internet (lower-case i), when the Internet Protocol spread more widely from the mid 1980s and the use of the term internet took on a broader sense and became well known in the 1990s.
== Interconnection of networks ==
Internetworking started as a way to connect disparate types of networking technology, but it became widespread through the developing need to connect two or more local area networks via some sort of wide area network.
To build an internetwork, the following are needed: a standardized scheme to address packets to any host on any participating network; a standardized protocol defining the format and handling of transmitted packets; and components interconnecting the participating networks by routing packets to their destinations based on standardized addresses.
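The addressing and routing requirements can be illustrated with a minimal longest-prefix-match forwarding sketch, the rule IP routers apply when several routes cover a destination (this uses Python's standard `ipaddress` module; the prefixes and next-hop names are hypothetical):

```python
import ipaddress

# Hypothetical forwarding table: prefix -> next hop
routes = {
    ipaddress.ip_network("10.0.0.0/8"): "gateway-a",
    ipaddress.ip_network("10.1.0.0/16"): "gateway-b",
    ipaddress.ip_network("0.0.0.0/0"): "default-gw",
}

def next_hop(dst):
    """Pick the matching route with the longest prefix, as IP routers do."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("10.1.2.3"))   # gateway-b (the /16 beats the /8)
print(next_hop("192.0.2.1"))  # default-gw
```

Because every router applies the same standardized addressing scheme, a packet can cross networks built on entirely different hardware.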
Another type of interconnection of networks often occurs within enterprises at the link layer of the networking model, i.e. at the hardware-centric layer below the level of the TCP/IP logical interfaces. Such interconnection is accomplished with network bridges and network switches. This is sometimes incorrectly termed internetworking, but the resulting system is simply a larger, single subnetwork, and no internetworking protocol, such as Internet Protocol, is required to traverse these devices. However, a single computer network may be converted into an internetwork by dividing the network into segments and logically dividing the segment traffic with routers and having an internetworking software layer that applications employ.
The Internet Protocol is designed to provide an unreliable (not guaranteed) packet service across the network. The architecture avoids intermediate network elements maintaining any state of the network. Instead, this function is assigned to the endpoints of each communication session. To transfer data reliably, applications must utilize an appropriate transport layer protocol, such as Transmission Control Protocol (TCP), which provides a reliable stream. Some applications use a simpler, connection-less transport protocol, User Datagram Protocol (UDP), for tasks which do not require reliable delivery of data or that require real-time service, such as video streaming or voice chat.
== Networking models ==
Two architectural models are commonly used to describe the protocols and methods used in internetworking. The Open System Interconnection (OSI) reference model was developed under the auspices of the International Organization for Standardization (ISO) and provides a rigorous description for layering protocol functions from the underlying hardware to the software interface concepts in user applications. Internetworking is implemented in the Network Layer (Layer 3) of the model.
The Internet Protocol Suite, also known as the TCP/IP model, was not designed to conform to the OSI model and does not refer to it in any of the normative specifications in Request for Comments and Internet standards. Despite similar appearance as a layered model, it has a much less rigorous, loosely defined architecture that concerns itself only with the aspects of the style of networking in its own historical provenance. It assumes the availability of any suitable hardware infrastructure, without discussing hardware-specific low-level interfaces, and that a host has access to this local network to which it is connected via a link layer interface.
For a period in the late 1980s and early 1990s, the network engineering community was polarized over the implementation of competing protocol suites, commonly known as the Protocol Wars. It was unclear which of the OSI model and the Internet protocol suite would result in the best and most robust computer networks.
== See also ==
History of the Internet
== References ==
== Sources ==
Moschovitis, Christos J. P. (1999). History of the Internet: A Chronology, 1843 to the Present. ABC-CLIO. ISBN 978-1-57607-118-2. | Wikipedia/Internetwork |
Enhanced Data rates for GSM Evolution (EDGE), also known as 2.75G and under various other names, is a 2G digital mobile phone technology for packet-switched data transmission. It is an extension of General Packet Radio Service (GPRS) on the GSM network and improves upon it, offering speeds close to 3G technology, hence the name 2.75G. EDGE is standardized by the 3GPP as part of the GSM family and as an upgrade to GPRS.
EDGE was deployed on GSM networks beginning in 2003 – initially by Cingular (now AT&T) in the United States. It could be readily deployed on existing GSM and GPRS cellular equipment, making it an easier upgrade for cellular companies compared to the UMTS 3G technology that required significant changes. Through the introduction of sophisticated methods of coding and transmitting data, EDGE delivers higher bit-rates per radio channel, resulting in a threefold increase in capacity and performance compared with an ordinary GSM/GPRS connection; EDGE originally offered a maximum speed of 384 kbit/s. Later, Evolved EDGE was developed as an enhanced standard providing further reduced latency and more than double the performance, with a peak bit-rate of up to 1 Mbit/s.
== Name and definition ==
Enhanced Data rates for GSM Evolution is the common full name of the EDGE standard. Other names include: Enhanced GPRS (EGPRS), IMT Single Carrier (IMT-SC), and Enhanced Data rates for Global Evolution.
Although described as "2.75G" by the 3GPP body, EDGE is part of the International Telecommunication Union (ITU)'s 3G definition. It is also recognized as part of the International Mobile Telecommunications-2000 (IMT-2000) standard for 3G.
== Technology ==
EDGE/EGPRS is implemented as a bolt-on enhancement for 2.5G GSM/GPRS networks, making it easier for existing GSM carriers to upgrade to it. EDGE is a superset to GPRS and can function on any network with GPRS deployed on it, provided the carrier implements the necessary upgrade. EDGE requires no hardware or software changes to be made in GSM core networks. EDGE-compatible transceiver units must be installed and the base station subsystem needs to be upgraded to support EDGE. If the operator already has this in place, which is often the case today, the network can be upgraded to EDGE by activating an optional software feature. Today EDGE is supported by all major chip vendors for both GSM and WCDMA/HSPA.
=== Transmission techniques ===
In addition to Gaussian minimum-shift keying (GMSK), EDGE uses higher-order PSK/8 phase-shift keying (8PSK) for the upper five of its nine modulation and coding schemes. EDGE produces a 3-bit word for every change in carrier phase. This effectively triples the gross data rate offered by GSM. EDGE, like GPRS, uses a rate adaptation algorithm that adapts the modulation and coding scheme (MCS) according to the quality of the radio channel, and thus the bit rate and robustness of data transmission. It introduces a new technology not found in GPRS, incremental redundancy, which, instead of retransmitting disturbed packets, sends more redundancy information to be combined in the receiver. This increases the probability of correct decoding.
EDGE can carry data rates up to 236.8 kbit/s (with end-to-end latency of less than 150 ms) using 4 timeslots (the theoretical maximum is 473.6 kbit/s with 8 timeslots) in packet mode. This means it can handle four times as much traffic as standard GPRS. EDGE meets the International Telecommunication Union's requirement for a 3G network and has been accepted by the ITU as part of the IMT-2000 family of 3G standards. It also enhances the circuit data mode called HSCSD, increasing the data rate of this service.
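These figures follow directly from the 59.2 kbit/s that the highest modulation and coding scheme (MCS-9) delivers per timeslot; a quick Python check:

```python
# MCS-9 delivers 59.2 kbit/s of user data per timeslot (8PSK, minimal coding overhead).
RATE_PER_SLOT_KBIT = 59.2

def edge_peak_rate(timeslots):
    """Peak EDGE user data rate for a given number of assigned timeslots."""
    return RATE_PER_SLOT_KBIT * timeslots

print(edge_peak_rate(4))  # 236.8 kbit/s, the commonly quoted 4-slot figure
print(edge_peak_rate(8))  # 473.6 kbit/s, the 8-slot theoretical maximum
```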
=== EDGE modulation and coding scheme (MCS) ===
The channel encoding process in GPRS as well as EGPRS/EDGE consists of two steps: first, a cyclic code is used to add parity bits, which are also referred to as the Block Check Sequence, followed by coding with a possibly punctured convolutional code. In GPRS, the Coding Schemes CS-1 to CS-4 specify the number of parity bits generated by the cyclic code and the puncturing rate of the convolutional code. In GPRS Coding Schemes CS-1 through CS-3, the convolutional code is of rate 1/2, i.e. each input bit is converted into two coded bits. In Coding Schemes CS-2 and CS-3, the output of the convolutional code is punctured to achieve the desired code rate. In GPRS Coding Scheme CS-4, no convolutional coding is applied.
In EGPRS/EDGE, the modulation and coding schemes MCS-1 to MCS-9 take the place of the coding schemes of GPRS, and additionally specify which modulation scheme is used, GMSK or 8PSK. MCS-1 through MCS-4 use GMSK and have performance similar (but not equal) to GPRS, while MCS-5 through MCS-9 use 8PSK. In all EGPRS modulation and coding schemes, a convolutional code of rate 1/3 is used, and puncturing is used to achieve the desired code rate. In contrast to GPRS, the Radio Link Control (RLC) and medium access control (MAC) headers and the payload data are coded separately in EGPRS. The headers are coded more robustly than the data.
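Puncturing, the mechanism both GPRS and EGPRS use to derive higher code rates from a fixed-rate mother code, amounts to deleting coded bits according to a repeating pattern before transmission. A hedged sketch (the bit values are dummies; only the rate bookkeeping is modeled, not a real convolutional encoder):

```python
from itertools import cycle

def puncture(coded_bits, pattern):
    """Drop coded bits wherever the repeating puncturing pattern holds a 0."""
    return [b for b, keep in zip(coded_bits, cycle(pattern)) if keep]

# A rate-1/3 mother code emits 3 coded bits per data bit (values here are dummies).
data_bits = 4
coded = list(range(data_bits * 3))       # 12 coded bits

# Keeping 2 of every 3 coded bits turns rate 1/3 into rate 1/2:
sent = puncture(coded, pattern=[1, 1, 0])
print(len(sent))                          # 8 bits -> 4 data bits / 8 sent = rate 1/2
```

The receiver knows the pattern, re-inserts neutral values at the punctured positions, and lets the convolutional decoder correct them like ordinary channel errors.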
== Deployment ==
The first EDGE network was deployed by Cingular (now AT&T) in the United States on June 30, 2003, initially covering Indianapolis. T-Mobile US deployed their EDGE network in September 2005. In Canada, Rogers Wireless deployed their EDGE network in 2004. In Malaysia, DiGi launched EDGE beginning in May 2004 initially only in the Klang Valley.
In Europe, TeliaSonera in Finland rolled out EDGE in April 2004. Orange began trialling EDGE in France in April 2005 before a consumer rollout later that year. Bouygues Telecom completed its national deployment of EDGE in the country in 2005, strategically focusing on EDGE which is cheaper to deploy compared to 3G networks. Telfort was the first network in the Netherlands to roll out EDGE having done so by May 2005. Orange launched the UK's first EDGE network in February 2006.
The Global Mobile Suppliers Association reported in 2008 that EDGE networks have been launched in 147 countries around the world.
== Evolved EDGE ==
Evolved EDGE, also called EDGE Evolution and 2.875G, is a bolt-on extension to the GSM mobile telephony standard, which improves on EDGE in a number of ways. Latencies are reduced by lowering the Transmission Time Interval by half (from 20 ms to 10 ms). Bit rates are increased up to 1 Mbit/s peak bandwidth and latencies reduced down to 80 ms by using dual carriers, higher symbol rates, higher-order modulation (32QAM and 16QAM instead of 8PSK), and turbo codes to improve error correction. This results in real-world downlink speeds of up to 600 kbit/s. Further, signal quality is improved by using dual antennas, improving average bit-rates and spectrum efficiency.
The main intention of increasing the existing EDGE throughput is that many operators would like to upgrade their existing infrastructure rather than invest in new network infrastructure. Mobile operators have invested billions in GSM networks, many of which are already capable of supporting EDGE data speeds up to 236.8 kbit/s. With a software upgrade and a new device compliant with Evolved EDGE (such as an Evolved EDGE smartphone), these data rates can be boosted to speeds approaching 1 Mbit/s (i.e. 98.6 kbit/s per timeslot for 32QAM). Many service providers may not invest in a completely new technology like 3G networks.
Considerable research and development happened throughout the world for this new technology. A successful trial by Nokia Siemens and "one of China's leading operators" was achieved in a live environment. However, Evolved EDGE was introduced much later than its predecessor, EDGE, coinciding with the widespread adoption of 3G technologies such as HSPA and just before the emergence of 4G networks. This timing significantly limited its relevance and practical application, as operators prioritized investment in more advanced wireless technologies like UMTS and LTE.
Moreover, these newer technologies also targeted network coverage layers on low frequencies, further diminishing the potential advantages of Evolved EDGE. Coupled with the upcoming phase-out and shutdown of 2G mobile networks, it became very unlikely that Evolved EDGE would ever see deployment on live networks. As of 2016, no commercial networks supported the Evolved EDGE standard (3GPP Rel-7).
=== Technology ===
==== Reduced latency ====
With Evolved EDGE come three major features designed to reduce latency over the air interface.
In EDGE, a single RLC data block (ranging from 23 to 148 bytes of data) is transmitted over four frames, using a single time slot. On average, this requires 20 ms for one-way transmission. Under the Reduced Transmission Time Interval (RTTI) scheme, one data block is transmitted over two frames in two timeslots, reducing the latency of the air interface to 10 ms.
In addition, Reduced Latency also implies support of Piggy-backed ACK/NACK (PAN), in which a bitmap of blocks not received is included in normal data blocks. Using the PAN field, the receiver may report missing data blocks immediately, rather than waiting to send a dedicated PAN message.
A final enhancement is RLC-non persistent mode. With EDGE, the RLC interface could operate in either acknowledged mode, or unacknowledged mode. In unacknowledged mode, there is no retransmission of missing data blocks, so a single corrupt block would cause an entire upper-layer IP packet to be lost. With non-persistent mode, an RLC data block may be retransmitted if it is less than a certain age. Once this time expires, it is considered lost, and subsequent data blocks may then be forwarded to upper layers.
==== Higher modulation schemes ====
Both uplink and downlink throughput is improved by using 16 or 32 QAM (quadrature amplitude modulation), along with turbo codes and higher symbol rates.
== Enhanced CSD ==
A lesser-known version of the EDGE standard is Enhanced Circuit Switched Data (ECSD), which is circuit switched.
== Compact-EDGE ==
A variant, so called Compact-EDGE, was developed for use in a portion of Digital AMPS network spectrum.
== Networks ==
The Global mobile Suppliers Association (GSA) states that, as of May 2013, there were 604 GSM/EDGE networks in 213 countries, from a total of 606 mobile network operator commitments in 213 countries.
== See also ==
Broadband Internet access
CDMA2000
Evolution-Data Optimized
List of device bandwidths
Mobile broadband
Spectral efficiency comparison table
UMTS
WiDEN
Wi-Fi
Comparison of mobile phone standards including LTE
Comparison of wireless data standards including WiMAX and HSPA+
== References ==
== External links ==
The Global mobile Suppliers Association
Evolved EDGE as an alternative to 3G
A technical document by the 3G Americas association
An opinion on evolved EDGE by Martin Sauter
An EDGE Evolution report by Visant Strategies | Wikipedia/Enhanced_Data_Rates_for_GSM_Evolution |
A network switch (also called switching hub, bridging hub, Ethernet switch, and, by the IEEE, MAC bridge) is networking hardware that connects devices on a computer network by using packet switching to receive and forward data to the destination device.
A network switch is a multiport network bridge that uses MAC addresses to forward data at the data link layer (layer 2) of the OSI model. Some switches can also forward data at the network layer (layer 3) by additionally incorporating routing functionality. Such switches are commonly known as layer-3 switches or multilayer switches.
Switches for Ethernet are the most common form of network switch. The first MAC Bridge was invented in 1983 by Mark Kempf, an engineer in the Networking Advanced Development group of Digital Equipment Corporation. The first two-port bridge product (LANBridge 100) was introduced by that company shortly afterward. The company subsequently produced multi-port switches for both Ethernet and FDDI, such as GigaSwitch. Digital decided to license its MAC Bridge patent on a royalty-free, non-discriminatory basis that allowed IEEE standardization. This permitted a number of other companies to produce multi-port switches, including Kalpana. Ethernet was initially a shared-access medium, but the introduction of the MAC bridge began its transformation into its most common point-to-point form without a collision domain. Switches also exist for other types of networks including Fibre Channel, Asynchronous Transfer Mode, and InfiniBand.
Unlike repeater hubs, which broadcast the same data out of each port and let the devices pick out the data addressed to them, a network switch learns the Ethernet addresses of connected devices and then only forwards data to the port connected to the device to which it is addressed.
== Overview ==
A switch is a device in a computer network that connects other devices together. Multiple data cables are plugged into a switch to enable communication between different networked devices. Switches manage the flow of data across a network by transmitting a received network packet only to the one or more devices for which the packet is intended. Each networked device connected to a switch can be identified by its network address, allowing the switch to direct the flow of traffic maximizing the security and efficiency of the network.
A switch is more intelligent than an Ethernet hub, which simply retransmits packets out of every port of the hub except the port on which the packet was received, unable to distinguish different recipients, and achieving an overall lower network efficiency.
An Ethernet switch operates at the data link layer (layer 2) of the OSI model to create a separate collision domain for each switch port. Each device connected to a switch port can transfer data to any of the other ports at any time and the transmissions will not interfere. Because broadcasts are still being forwarded to all connected devices by the switch, the newly formed network segment continues to be a broadcast domain. Switches may also operate at higher layers of the OSI model, including the network layer and above. A switch that also operates at these higher layers is known as a multilayer switch.
Segmentation involves the use of a switch to split a larger collision domain into smaller ones in order to reduce collision probability and to improve overall network throughput. In the extreme case (i.e. micro-segmentation), each device is directly connected to a switch port dedicated to the device. In contrast to an Ethernet hub, there is a separate collision domain on each switch port. This allows computers to have dedicated bandwidth on point-to-point connections to the network and also to run in full-duplex mode. Full-duplex mode has only one transmitter and one receiver per collision domain, making collisions impossible.
The network switch plays an integral role in most modern Ethernet local area networks (LANs). Mid-to-large-sized LANs contain a number of linked managed switches. Small office/home office (SOHO) applications typically use a single switch, or an all-purpose device such as a residential gateway to access small office/home broadband services such as DSL or cable Internet. In most of these cases, the end-user device contains a router and components that interface to the particular physical broadband technology.
Many switches have pluggable modules, such as Small Form-factor Pluggable (SFP) modules. These modules often contain a transceiver that connects the switch to a physical medium, such as a fiber optic cable. Alternatively, direct attach copper (DAC) cables may be used in place of modules. Pluggable modules were preceded by Medium Attachment Units connected via Attachment Unit Interfaces to switches and have evolved over time: the first modules were Gigabit interface converters, followed by XENPAK modules, SFP modules, XFP transceivers, SFP+ modules, and QSFP, QSFP-DD, and OSFP modules. Pluggable modules are also used for transmitting video in broadcast applications. With the advent of increased speeds, co-packaged optics (CPO) bring the transceivers close to the switching chip, reducing power consumption; pluggable modules then become replaceable laser light sources, and fiber optics are connected directly to the front of the switch instead of through pluggable modules. CPO is also considerably easier to adapt to water cooling.
== Role in a network ==
Switches are most commonly used as the network connection point for hosts at the edge of a network. In the hierarchical internetworking model and similar network architectures, switches are also used deeper in the network to provide connections between the switches at the edge.
In switches intended for commercial use, built-in or modular interfaces make it possible to connect different types of networks, including Ethernet, Fibre Channel, RapidIO, ATM, ITU-T G.hn and 802.11. This connectivity can be at any of the layers mentioned. While the layer-2 functionality is adequate for bandwidth-shifting within one technology, interconnecting technologies such as Ethernet and Token Ring is performed more easily at layer 3 or via routing. Devices that interconnect at the layer 3 are traditionally called routers.
Where there is a need for a great deal of analysis of network performance and security, switches may be connected between WAN routers as places for analytic modules. Some vendors provide firewall, network intrusion detection, and performance analysis modules that can plug into switch ports. Some of these functions may be on combined modules.
Through port mirroring, a switch can create a mirror image of data that can go to an external device, such as intrusion detection systems and packet sniffers.
A modern switch may implement power over Ethernet (PoE), which avoids the need for attached devices, such as a VoIP phone or wireless access point, to have a separate power supply. Since switches can have redundant power circuits connected to uninterruptible power supplies, the connected device can continue operating even when regular office power fails.
In 1989 and 1990, Kalpana introduced the first multiport Ethernet switch, its seven-port EtherSwitch.
== Bridging ==
Modern commercial switches primarily use Ethernet interfaces. The core function of an Ethernet switch is to provide multiple ports of layer-2 bridging. Layer-1 functionality is required in all switches in support of the higher layers. Many switches also perform operations at other layers. A device capable of more than bridging is known as a multilayer switch.
A layer 2 network device is a multiport device that uses hardware addresses (MAC addresses) to process and forward data at the data link layer (layer 2).
A switch operating as a network bridge may interconnect otherwise separate layer 2 networks. The bridge learns the MAC address of each connected device, storing this data in a table that maps MAC addresses to ports. This table is often implemented using high-speed content-addressable memory (CAM); some vendors refer to the MAC address table as a CAM table.
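The learn-and-forward behavior described above can be modeled in a few lines (an illustrative Python sketch, not any vendor's implementation; flooding of unknown destinations is included for completeness):

```python
class LearningSwitch:
    """Minimal model of layer-2 MAC learning and forwarding."""
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}          # MAC address -> port (the "CAM table")

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port      # learn the sender's location
        out = self.mac_table.get(dst_mac)
        if out is None or out == in_port:
            # Unknown destination: flood out of every port except the ingress.
            return [p for p in self.ports if p != in_port]
        return [out]                           # known destination: forward only there

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.receive(1, "aa:aa", "bb:bb"))   # [2, 3, 4]  flood: bb:bb not yet learned
print(sw.receive(2, "bb:bb", "aa:aa"))   # [1]        aa:aa was learned on port 1
```

After the first exchange in each direction, traffic between the two hosts no longer reaches any other port, which is the efficiency gain over a repeater hub.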
Bridges also buffer an incoming packet and adapt the transmission speed to that of the outgoing port. While there are specialized applications, such as storage area networks, where the input and output interfaces are the same bandwidth, this is not always the case in general LAN applications. In LANs, a switch used for end-user access typically concentrates lower bandwidth and uplinks into a higher bandwidth.
The Ethernet header at the start of the frame contains all the information required to make a forwarding decision, so some high-performance switches can begin forwarding the frame to the destination while still receiving the frame payload from the sender. This cut-through switching can significantly reduce latency through the switch.
Interconnects between switches may be regulated using the Spanning Tree Protocol (STP) that disables forwarding on links so that the resulting local area network is a tree without switching loops. In contrast to routers, spanning tree bridges must have topologies with only one active path between two points. Shortest path bridging and TRILL (Transparent Interconnection of Lots of Links) are layer 2 alternatives to STP which allow all paths to be active with multiple equal cost paths.
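STP's end result, a loop-free active topology, can be approximated by computing a spanning tree of the switch graph and blocking the remaining links (a simplified Python sketch using breadth-first search; real STP elects a root bridge and exchanges BPDUs between switches rather than running a centralized algorithm):

```python
from collections import deque

def spanning_tree(links, root):
    """Return the links kept active: a breadth-first tree rooted at `root`."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    active, seen, queue = [], {root}, deque([root])
    while queue:
        node = queue.popleft()
        for neighbor in adj.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                active.append((node, neighbor))
                queue.append(neighbor)
    return active

# Three switches wired in a triangle: one link must be blocked to break the loop.
links = [("A", "B"), ("B", "C"), ("C", "A")]
print(spanning_tree(links, root="A"))   # [('A', 'B'), ('A', 'C')]
```

The blocked link is not wasted: it remains on standby and is reactivated if an active link fails, which is exactly the redundancy STP is designed to exploit safely.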
== Types ==
=== Form factors ===
Switches are available in many form factors, including stand-alone, desktop units which are typically intended to be used in a home or office environment outside a wiring closet; rack-mounted switches for use in an equipment rack or an enclosure; DIN rail mounted for use in industrial environments; and small installation switches, mounted into a cable duct, floor box or communications tower, as found, for example, in fiber to the office infrastructures.
Rack-mounted switches may be stand-alone units, stackable switches or large chassis units with swappable line cards.
=== Configuration options ===
Unmanaged switches have no configuration interface or options. They are plug and play. They are typically the least expensive switches, and therefore often used in a small office/home office environment. Unmanaged switches can be desktop or rack mounted.
Managed switches have one or more methods to modify the operation of the switch. Common management methods include: a command-line interface (CLI) accessed via serial console, telnet or Secure Shell, an embedded Simple Network Management Protocol (SNMP) agent allowing management from a remote console or management station, or a web interface for management from a web browser. Two sub-classes of managed switches are smart and enterprise-managed switches.
Smart switches (aka intelligent switches) are managed switches with a limited set of management features. Likewise, web-managed switches are switches that fall into a market niche between unmanaged and managed. For a price much lower than a fully managed switch they provide a web interface (and usually no CLI access) and allow configuration of basic settings, such as VLANs, port-bandwidth and duplex.
Enterprise managed switches (aka managed switches) have a full set of management features, including CLI, SNMP agent, and web interface. They may have additional features to manipulate configurations, such as the ability to display, modify, backup and restore configurations. Compared with smart switches, enterprise switches have more features that can be customized or optimized and are generally more expensive than smart switches. Enterprise switches are typically found in networks with a larger number of switches and connections, where centralized management is a significant savings in administrative time and effort. A stackable switch is a type of enterprise-managed switch.
==== Typical management features ====
Centralized configuration management and configuration distribution
Enable and disable ports
Link bandwidth and duplex settings
Quality of service configuration and monitoring
MAC filtering and other access control list features
Configuration of Spanning Tree Protocol (STP) and Shortest Path Bridging (SPB) features
Simple Network Management Protocol (SNMP) monitoring of device and link health
Port mirroring for monitoring traffic and troubleshooting
Link aggregation configuration to set up multiple ports for the same connection to achieve higher data transfer rates and reliability
VLAN configuration and port assignments including IEEE 802.1Q tagging
NTP (Network Time Protocol) synchronization
Network access control features such as IEEE 802.1X
LLDP (Link Layer Discovery Protocol)
IGMP snooping for control of multicast traffic
== Traffic monitoring ==
It is difficult to monitor traffic that is bridged using a switch because only the sending and receiving ports can see the traffic.
Methods that are specifically designed to allow a network analyst to monitor traffic include:
Port mirroring – Because the purpose of a switch is to not forward traffic to network segments where it would be superfluous, a node attached to a switch cannot monitor traffic on other segments. Port mirroring is how this problem is addressed in switched networks: In addition to the usual behavior of forwarding frames only to ports through which they might reach their addressees, the switch forwards frames received through a given monitored port to a designated monitoring port, allowing analysis of traffic that would otherwise not be visible through the switch.
Switch monitoring (SMON) is described by RFC 2613 and is a provision for controlling facilities such as port mirroring.
RMON
sFlow
These monitoring features are rarely present on consumer-grade switches. Other monitoring methods include connecting a layer-1 hub or network tap between the monitored device and its switch port.
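The port-mirroring rule described above can be sketched as a small function (the function name and port numbers are hypothetical): frames received through the monitored port get an extra copy sent to the monitoring port, in addition to their normal egress ports.

```python
# Sketch of the port-mirroring rule: mirror frames received through a
# monitored port to a designated monitoring (analyst) port.

def egress_ports(in_port, out_ports, monitored_port, monitor_port):
    """Return the final sorted set of egress ports after mirroring."""
    out = set(out_ports)
    if in_port == monitored_port:    # traffic entered the monitored port...
        out.add(monitor_port)        # ...so a copy goes to the analyst
    return sorted(out)

# Frame arrives on mirrored port 2, normally destined for port 5;
# port 9 is the monitoring port attached to the analyzer.
print(egress_ports(2, [5], monitored_port=2, monitor_port=9))  # [5, 9]
print(egress_ports(1, [5], monitored_port=2, monitor_port=9))  # [5]
```

Commercial switches usually allow mirroring of ingress traffic, egress traffic, or both; only the ingress case is shown here.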
== See also ==
== Notes ==
== References ==
== External links ==
What to consider when buying an Ethernet switch
A spatial network (sometimes also a geometric graph) is a graph in which the vertices or edges are spatial elements associated with geometric objects, i.e., the nodes are located in a space equipped with a certain metric. The simplest mathematical realizations of a spatial network are a lattice and a random geometric graph, in which nodes are distributed uniformly at random over a two-dimensional plane and a pair of nodes are connected if their Euclidean distance is smaller than a given neighborhood radius. Transportation and mobility networks, the Internet, mobile phone networks, power grids, social and contact networks, and biological neural networks are all examples where the underlying space is relevant and where the graph's topology alone does not contain all the information. Characterizing and understanding the structure, resilience and evolution of spatial networks is crucial for many different fields ranging from urbanism to epidemiology.
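The random-geometric-graph construction just described can be sketched in a few lines; the node count and radius below are illustrative parameters, not canonical values.

```python
# Random geometric graph: n nodes uniform in the unit square, with an edge
# whenever the Euclidean distance between two nodes is below radius r.
import math
import random

def random_geometric_graph(n, r, seed=42):
    rng = random.Random(seed)                       # fixed seed for repeatability
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    edges = [(i, j)
             for i in range(n) for j in range(i + 1, n)
             if math.dist(pos[i], pos[j]) < r]
    return pos, edges

pos, edges = random_geometric_graph(n=100, r=0.15)
print(len(edges))   # number of neighborhood links for this realization
```

Larger r (or higher node density) produces a denser graph; near a critical density a giant connected component emerges, which is the link to percolation theory discussed below in the article.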
== Examples ==
An urban spatial network can be constructed by abstracting intersections as nodes and streets as links, which is referred to as a transportation network.
One might think of the 'space map' as being the negative image of the standard map, with the open space cut out of the background buildings or walls.
== Characterizing spatial networks ==
The following aspects are some of the characteristics to examine a spatial network:
Planar networks
In many applications, such as railways, roads, and other transportation networks, the network is assumed to be planar. Planar networks form an important subclass of spatial networks, but not all spatial networks are planar. Indeed, the airline passenger network is a non-planar example: many large airports in the world are connected through direct flights.
The way it is embedded in space
There are examples of networks that seem not to be "directly" embedded in space. Social networks, for instance, connect individuals through friendship relations. But in this case, space intervenes in the fact that the connection probability between two individuals usually decreases with the distance between them.
Voronoi tessellation
A spatial network can be represented by a Voronoi diagram, which is a way of dividing space into a number of regions. The dual graph for a Voronoi diagram corresponds to the Delaunay triangulation for the same set of points.
Voronoi tessellations are interesting for spatial networks in the sense that they provide a natural representation model to which one can compare a real-world network.
Mixing space and topology
Examining the topology of the nodes and edges itself is another way to characterize networks. The degree distribution of the nodes is often considered; regarding the structure of the edges, it is useful to find the minimum spanning tree, or generalizations such as the Steiner tree and the relative neighborhood graph.
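The relative neighborhood graph mentioned above links two nodes exactly when no third node is closer to both of them than they are to each other. A brute-force sketch (cubic in the number of nodes, fine for illustration; the sample points are arbitrary):

```python
# Relative neighborhood graph: edge (u, v) exists iff there is no w with
# max(d(u, w), d(v, w)) < d(u, v).
import math

def relative_neighborhood_graph(points):
    n = len(points)
    d = lambda a, b: math.dist(points[a], points[b])
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            if not any(max(d(u, w), d(v, w)) < d(u, v)
                       for w in range(n) if w not in (u, v)):
                edges.append((u, v))
    return edges

pts = [(0, 0), (1, 0), (0.5, 0.1), (3, 3)]
print(relative_neighborhood_graph(pts))   # [(0, 2), (1, 2), (1, 3)]
```

Note how node 2, sitting between nodes 0 and 1, "blocks" the direct edge (0, 1): both endpoints are closer to node 2 than to each other.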
== Probability and spatial networks ==
In the "real" world, many aspects of networks are not deterministic - randomness plays an important role. For example, new links, representing friendships, in social networks are random to some degree. It is therefore natural to model spatial networks using stochastic processes. In many cases the spatial Poisson process is used to approximate data sets of processes on spatial networks. Other stochastic aspects of interest are:
The Poisson line process
Stochastic geometry: the Erdős–Rényi graph
Percolation theory
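A homogeneous spatial Poisson process on the unit square, the standard model for random node locations mentioned above, can be sampled in two steps: draw a Poisson-distributed point count, then place that many uniform points. This sketch uses Knuth's multiplication method for the count; the intensity value is illustrative.

```python
# Sample a homogeneous spatial Poisson process on the unit square:
# the number of points is Poisson(intensity * area), positions are uniform.
import math
import random

def poisson_process(intensity, seed=1):
    rng = random.Random(seed)
    # Knuth's method for a Poisson(intensity) count (area = 1 here).
    limit, k, p = math.exp(-intensity), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    count = k - 1
    return [(rng.random(), rng.random()) for _ in range(count)]

points = poisson_process(intensity=50)
print(len(points))   # close to the intensity, 50, on average
```

For large intensities the multiplication method becomes slow and numerically delicate; libraries then switch to other samplers, but the two-step structure (count, then uniform placement) is the same.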
== Approach from the theory of space syntax ==
Another definition of spatial network derives from the theory of space syntax. It can be notoriously difficult to decide what a spatial element should be in complex spaces involving large open areas or many interconnected paths. The originators of space syntax, Bill Hillier and Julienne Hanson use axial lines and convex spaces as the spatial elements. Loosely, an axial line is the 'longest line of sight and access' through open space, and a convex space the 'maximal convex polygon' that can be drawn in open space. Each of these elements is defined by the geometry of the local boundary in different regions of the space map. Decomposition of a space map into a complete set of intersecting axial lines or overlapping convex spaces produces the axial map or overlapping convex map respectively. Algorithmic definitions of these maps exist, and this allows the mapping from an arbitrary shaped space map to a network amenable to graph mathematics to be carried out in a relatively well defined manner. Axial maps are used to analyse urban networks, where the system generally comprises linear segments, whereas convex maps are more often used to analyse building plans where space patterns are often more convexly articulated, however both convex and axial maps may be used in either situation.
Currently, there is a move within the space syntax community to integrate better with geographic information systems (GIS), and much of the software they produce interlinks with commercially available GIS systems.
== History ==
While networks and graphs have long been the subject of many studies in mathematics, physics, mathematical sociology, and computer science, spatial networks were also studied intensively during the 1970s in quantitative geography. Objects of study in geography include locations, activities and flows of individuals, but also networks evolving in time and space. Most of the important problems, such as the location of nodes of a network, the evolution of transportation networks, and their interaction with population and activity density, were addressed in these earlier studies. On the other hand, many important points remained unclear, partly because datasets of large networks and sufficient computer capabilities were lacking at that time.
Recently, spatial networks have been the subject of studies in statistics, to connect probabilities and stochastic processes with networks in the real world.
== See also ==
Hyperbolic geometric graph
Spatial network analysis software
Cascading failure
Complex network
Planar graphs
Percolation theory
Modularity (networks)
Random graphs
Topological graph theory
Small-world network
Chemical graph
Interdependent networks
== References ==
Synchronous Optical Networking (SONET) and Synchronous Digital Hierarchy (SDH) are standardized protocols that transfer multiple digital bit streams synchronously over optical fiber using lasers or highly coherent light from light-emitting diodes (LEDs). At low transmission rates, data can also be transferred via an electrical interface. The method was developed to replace the plesiochronous digital hierarchy (PDH) system for transporting large amounts of telephone calls and data traffic over the same fiber without the problems of synchronization.
SONET and SDH, which are essentially the same, were originally designed to transport circuit mode communications, e.g. DS1, DS3, from a variety of different sources. However, they were primarily designed to support real-time, uncompressed, circuit-switched voice encoded in PCM format. The primary difficulty in doing this prior to SONET/SDH was that the synchronization sources of these various circuits were different. This meant that each circuit was actually operating at a slightly different rate and with different phase. SONET/SDH allowed for the simultaneous transport of many different circuits of differing origin within a single framing protocol. SONET/SDH is not a complete communications protocol in itself, but a transport protocol (not a "transport" in the OSI Model sense).
Due to SONET/SDH's essential protocol neutrality and transport-oriented features, SONET/SDH was the choice for transporting the fixed length Asynchronous Transfer Mode (ATM) frames also known as cells. It quickly evolved mapping structures and concatenated payload containers to transport ATM connections. In other words, for ATM (and eventually other protocols such as Ethernet), the internal complex structure previously used to transport circuit-oriented connections was removed and replaced with a large and concatenated frame (such as STS-3c) into which ATM cells, IP packets, or Ethernet frames are placed.
Both SDH and SONET are widely used today: SONET in the United States and Canada, and SDH in the rest of the world. Although the SONET standards were developed before SDH, SONET is considered a variation of SDH because of SDH's greater worldwide market penetration.
SONET is subdivided into four sublayers: path, line, section, and physical.
The SDH standard was originally defined by the European Telecommunications Standards Institute (ETSI), and is formalised as International Telecommunication Union (ITU) standards G.707, G.783, G.784, and G.803. The SONET standard was defined by Telcordia and American National Standards Institute (ANSI) standard T1.105, which defines the set of transmission formats and transmission rates in the range above 51.840 Mbit/s.
== Difference from PDH ==
SDH differs from Plesiochronous Digital Hierarchy (PDH) in that the exact rates that are used to transport the data on SONET/SDH are tightly synchronized across the entire network, using atomic clocks. This synchronization system allows entire inter-country networks to operate synchronously, greatly reducing the amount of buffering required between elements in the network.
Both SONET and SDH can be used to encapsulate earlier digital transmission standards, such as the PDH standard, or they can be used to directly support either Asynchronous Transfer Mode (ATM) or so-called packet over SONET/SDH (POS) networking. Therefore, it is inaccurate to think of SDH or SONET as communications protocols in and of themselves; they are generic, all-purpose transport containers for moving both voice and data. The basic format of a SONET/SDH signal allows it to carry many different services in its virtual container (VC), because it is bandwidth-flexible.
== Protocol overview ==
SONET and SDH often use different terms to describe identical features or functions. This can cause confusion and exaggerate their differences. With a few exceptions, SDH can be thought of as a superset of SONET.
SONET is a set of transport containers that allow for delivery of a variety of protocols, including traditional telephony, ATM, Ethernet, and TCP/IP traffic. SONET therefore is not in itself a native communications protocol and should not be confused as being necessarily connection-oriented in the way that term is usually used.
The protocol is a heavily multiplexed structure, with the header interleaved between the data in a complex way. This permits the encapsulated data to have its own frame rate and be able to "float around" relative to the SDH/SONET frame structure and rate. This interleaving permits a very low latency for the encapsulated data: data passing through equipment can be delayed by at most 32 μs, compared to a frame rate of 125 μs; many competing protocols buffer the data during such transits for at least one frame or packet before sending it on. Extra padding is allowed for the multiplexed data to move within the overall framing, as the data is clocked at a different rate than the frame rate. The protocol is made more complex by the decision to permit this padding at most levels of the multiplexing structure, but it improves all-around performance.
== Basic transmission unit ==
The basic unit of framing in SDH is a STM-1 (Synchronous Transport Module, level 1), which operates at 155.520 megabits per second (Mbit/s). SONET refers to this basic unit as an STS-3c (Synchronous Transport Signal 3, concatenated). When the STS-3c is carried over OC-3, it is often colloquially referred to as OC-3c, but this is not an official designation within the SONET standard as there is no physical layer (i.e. optical) difference between an STS-3c and 3 STS-1s carried within an OC-3.
SONET offers an additional basic unit of transmission, the STS-1 (Synchronous Transport Signal 1) or OC-1, operating at 51.84 Mbit/s—exactly one third of an STM-1/STS-3c/OC-3c carrier. This speed is dictated by the bandwidth requirements for PCM-encoded telephonic voice signals: at this rate, an STS-1/OC-1 circuit can carry the bandwidth equivalent of a standard DS-3 channel, which can carry 672 64-kbit/s voice channels. In SONET, the STS-3c signal is composed of three multiplexed STS-1 signals; the STS-3c may be carried on an OC-3 signal. Some manufacturers also support the SDH equivalent of the STS-1/OC-1, known as STM-0.
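The rates quoted above follow directly from the 125 μs framing (8,000 frames per second) and the frame sizes given in the Framing subsection, as a quick check shows:

```python
# SONET/SDH line rates derived from frame size and the fixed 8000 frames/s:
# an STS-1 frame is 810 octets, an STM-1/STS-3c frame is 2,430 octets.
FRAMES_PER_SECOND = 8000          # one frame every 125 microseconds

def line_rate_mbps(octets_per_frame):
    return octets_per_frame * 8 * FRAMES_PER_SECOND / 1e6

print(line_rate_mbps(810))    # STS-1/OC-1:   51.84 Mbit/s
print(line_rate_mbps(2430))   # STM-1/STS-3c: 155.52 Mbit/s
```

The same arithmetic explains why the STS-1 rate is exactly one third of the STM-1 rate: the frames are one third the size at the same frame rate.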
=== Framing ===
In packet-oriented data transmission, such as Ethernet, a packet frame usually consists of a header and a payload. The header is transmitted first, followed by the payload (and possibly a trailer, such as a CRC). In synchronous optical networking, this is modified slightly. The header is termed the overhead, and instead of being transmitted before the payload, is interleaved with it during transmission. Part of the overhead is transmitted, then part of the payload, then the next part of the overhead, then the next part of the payload, until the entire frame has been transmitted.
In the case of an STS-1, the frame is 810 octets in size, while the STM-1/STS-3c frame is 2,430 octets in size. For STS-1, the frame is transmitted as three octets of overhead, followed by 87 octets of payload. This is repeated nine times, until 810 octets have been transmitted, taking 125 μs. In the case of an STS-3c/STM-1, which operates three times faster than an STS-1, nine octets of overhead are transmitted, followed by 261 octets of payload. This is also repeated nine times until 2,430 octets have been transmitted, also taking 125 μs. For both SONET and SDH, this is often represented by displaying the frame graphically: as a block of 90 columns and nine rows for STS-1, and 270 columns and nine rows for STM1/STS-3c. This representation aligns all the overhead columns, so the overhead appears as a contiguous block, as does the payload.
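The frame geometry just described (nine rows, each consisting of overhead columns followed by payload columns) can be checked numerically; the helper function below is illustrative, not part of any standard API.

```python
# Frame geometry: total octets per frame and octets available to the payload,
# for a frame of `rows` rows with overhead and payload columns per row.
def frame_geometry(overhead_cols, payload_cols, rows=9):
    total = rows * (overhead_cols + payload_cols)
    payload_octets = rows * payload_cols
    return total, payload_octets

print(frame_geometry(3, 87))     # STS-1:        (810, 783)
print(frame_geometry(9, 261))    # STM-1/STS-3c: (2430, 2349)
```

The 783- and 2,349-octet payload areas still contain path overhead and pointer-related bytes, so the user-data capacity discussed later is slightly smaller.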
The internal structure of the overhead and payload within the frame differs slightly between SONET and SDH, and different terms are used in the standards to describe these structures. Their standards are extremely similar in implementation, making it easy to interoperate between SDH and SONET at any given bandwidth.
In practice, the terms STS-1 and OC-1 are sometimes used interchangeably, though the OC designation refers to the signal in its optical form. It is therefore incorrect to say that an OC-3 contains 3 OC-1s: an OC-3 can be said to contain 3 STS-1s.
=== SDH frame ===
The Synchronous Transport Module, level 1 (STM-1) frame is the basic transmission format for SDH—the first level of the synchronous digital hierarchy. The STM-1 frame is transmitted in exactly 125 μs; therefore, there are 8,000 frames per second on a 155.52 Mbit/s OC-3 fiber-optic circuit. The STM-1 frame consists of overhead and pointers plus information payload. The first nine columns of each frame make up the section overhead and administrative unit pointers, and the last 261 columns make up the information payload. The pointers (H1, H2, H3 bytes) identify administrative units (AU) within the information payload. Thus, an OC-3 circuit can carry 150.336 Mbit/s of payload, after accounting for the overhead.
Carried within the information payload, which has its own frame structure of nine rows and 261 columns, are administrative units identified by pointers. Also within the administrative unit are one or more virtual containers (VCs). VCs contain path overhead and VC payload. The first column is for path overhead; it is followed by the payload container, which can itself carry other containers. Administrative units can have any phase alignment within the STM frame, and this alignment is indicated by the pointer in row four.
The section overhead (SOH) of a STM-1 signal is divided into two parts: the regenerator section overhead (RSOH) and the multiplex section overhead (MSOH). The overheads contain information from the transmission system itself, which is used for a wide range of management functions, such as monitoring transmission quality, detecting failures, managing alarms, data communication channels, service channels, etc.
The STM frame is continuous and is transmitted in a serial fashion: byte-by-byte, row-by-row.
==== Transport overhead ====
The transport overhead is used for signaling and measuring transmission error rates, and is composed as follows:
Section overhead
Called regenerator section overhead (RSOH) in SDH terminology: 27 octets containing information about the frame structure required by the terminal equipment.
Line overhead
Called multiplex section overhead (MSOH) in SDH: 45 octets containing information about error correction and Automatic Protection Switching messages (e.g., alarms and maintenance messages) as may be required within the network. The error correction is included for STM-16 and above.
Administrative unit (AU) pointer
Points to the location of the J1 byte in the payload (the first byte in the virtual container).
==== Path virtual envelope ====
Data transmitted from end to end is referred to as path data. It is composed of two components:
Payload overhead (POH)
9 octets used for end-to-end signaling and error measurement.
Payload
User data (774 bytes for STM-0/STS-1, or 2,340 octets for STM-1/STS-3c)
For STS-1, the payload is referred to as the synchronous payload envelope (SPE), which in turn has 18 stuffing bytes, leading to the STS-1 payload capacity of 756 bytes.
The STS-1 payload is designed to carry a full PDH DS3 frame. When the DS3 enters a SONET network, path overhead is added, and that SONET network element (NE) is said to be a path generator and terminator. The SONET NE is line terminating if it processes the line overhead. Note that wherever the line or path is terminated, the section is terminated also. SONET regenerators terminate the section, but not the paths or line.
An STS-1 payload can also be subdivided into seven virtual tributary groups (VTGs). Each VTG can then be subdivided into four VT1.5 signals, each of which can carry a PDH DS1 signal. A VTG may instead be subdivided into three VT2 signals, each of which can carry a PDH E1 signal. The SDH equivalent of a VTG is a TUG-2; VT1.5 is equivalent to VC-11, and VT2 is equivalent to VC-12.
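The tributary arithmetic implied by this subdivision is easy to verify: seven groups of four VT1.5s, or seven groups of three VT2s.

```python
# Virtual tributary capacity of one STS-1 payload:
# 7 virtual tributary groups (VTGs), each holding 4 VT1.5s or 3 VT2s.
VTGS_PER_STS1 = 7

print(VTGS_PER_STS1 * 4)   # up to 28 DS1 signals per STS-1 (via VT1.5)
print(VTGS_PER_STS1 * 3)   # up to 21 E1 signals per STS-1 (via VT2)
```

The 28-DS1 figure matches the DS3 hierarchy: a DS3 also carries 28 DS1s, which is what makes the STS-1 a natural container for a DS3.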
Three STS-1 signals may be multiplexed by time-division multiplexing to form the next level of the SONET hierarchy, the OC-3 (STS-3), running at 155.52 Mbit/s. The signal is multiplexed by interleaving the bytes of the three STS-1 frames to form the STS-3 frame, containing 2,430 bytes and transmitted in 125 μs.
Higher-speed circuits are formed by successively aggregating multiples of slower circuits, their speed always being immediately apparent from their designation. For example, four STS-3 or AU4 signals can be aggregated to form a 622.08 Mbit/s signal designated OC-12 or STM-4.
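Because the designation encodes the multiple of the basic 51.84 Mbit/s signal, line rates can be derived mechanically (these are gross line rates, before deducting overhead):

```python
# OC-n / STS-n gross line rate: n times the 51.84 Mbit/s STS-1 rate.
def oc_line_rate_mbps(n):
    return n * 51.84

for n in (1, 3, 12, 48, 192, 768):
    print(f"OC-{n}: {oc_line_rate_mbps(n):.2f} Mbit/s")
```

For example, OC-12 works out to 622.08 Mbit/s and OC-192 to 9,953.28 Mbit/s, matching the figures quoted elsewhere in this article; the payload capacity available to users is somewhat lower once overhead is subtracted.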
The highest rate commonly deployed is the OC-768 or STM-256 circuit, which operates at rate of just under 38.5 Gbit/s. Where fiber exhaustion is a concern, multiple SONET signals can be transported over multiple wavelengths on a single fiber pair by means of wavelength-division multiplexing, including dense wavelength-division multiplexing (DWDM) and coarse wavelength-division multiplexing (CWDM). DWDM circuits are the basis for all modern submarine communications cable systems and other long-haul circuits.
== SONET/SDH and relationship to 10 Gigabit Ethernet ==
Another type of high-speed data networking circuit is 10 Gigabit Ethernet (10GbE). The Gigabit Ethernet Alliance created two 10 Gigabit Ethernet variants: a local area variant (LAN PHY) with a line rate of 10.3125 Gbit/s, and a wide area variant (WAN PHY) with the same line rate as OC-192/STM-64 (9,953,280 kbit/s).
The WAN PHY variant encapsulates Ethernet data using a lightweight SDH/SONET frame, so as to be compatible at a low level with equipment designed to carry SDH/SONET signals, whereas the LAN PHY variant encapsulates Ethernet data using 64B/66B line coding.
However, 10 Gigabit Ethernet does not explicitly provide any interoperability at the bitstream level with other SDH/SONET systems. This differs from WDM system transponders, including both coarse and dense wavelength-division multiplexing systems (CWDM and DWDM) that currently support OC-192 SONET signals, which can normally support thin-SONET–framed 10 Gigabit Ethernet.
== SONET/SDH data rates ==
User throughput must also deduct path overhead from the payload bandwidth, but path-overhead bandwidth is variable based on the types of cross-connects built across the optical system.
Note that the data-rate progression starts at 155 Mbit/s and increases by multiples of four. The only exception is OC-24, which is standardized in ANSI T1.105, but not an SDH standard rate in ITU-T G.707. Other rates, such as OC-9, OC-18, OC-36, OC-96, and OC-1536, are defined but not commonly deployed; most are considered orphaned rates.
== Physical layer ==
The physical layer refers to the first layer in the OSI networking model. The ATM and SDH layers are the regenerator section level, digital line level, transmission path level, virtual path level, and virtual channel level. The physical layer is modeled on three major entities: transmission path, digital line and the regenerator section. The regenerator section refers to the section and photonic layers. The photonic layer is the lowest SONET layer and it is responsible for transmitting the bits to the physical medium. The section layer is responsible for generating the proper STS-N frames which are to be transmitted across the physical medium. It deals with issues such as proper framing, error monitoring, section maintenance, and orderwire.
The line layer ensures reliable transport of the payload and overhead generated by the path layer. It provides synchronization and multiplexing for multiple paths. It modifies overhead bits relating to quality control. The path layer is SONET's highest level layer. It takes data to be transmitted and transforms them into signals required by the line layer, and adds or modifies the path overhead bits for performance monitoring and protection switching.
== SONET/SDH network management protocols ==
=== Overall functionality ===
Network management systems are used to configure and monitor SDH and SONET equipment either locally or remotely.
The systems consist of three essential parts, covered later in more detail:
Software running on a network management system terminal, e.g. workstation, dumb terminal or laptop housed in an exchange/central office.
Transport of network management data between the network management system terminal and the SONET/SDH equipment, e.g. using TL1/Q3 protocols.
Transport of network management data between SDH/SONET equipment using dedicated embedded data communication channels (DCCs) within the section and line overhead.
The main functions of network management thereby include:
Network and network-element provisioning
In order to allocate bandwidth throughout a network, each network element must be configured. Although this can be done locally, through a craft interface, it is normally done through a network management system (sitting at a higher layer) that, in turn, operates through the SONET/SDH network management network.
Software upgrade
Network-element software upgrades are done mostly through the SONET/SDH management network in modern equipment.
Performance management
Network elements have a very large set of standards for performance management. The performance-management criteria allow not only monitoring the health of individual network elements, but isolating and identifying most network defects or outages. Higher-layer network monitoring and management software allows the proper filtering and troubleshooting of network-wide performance management, so that defects and outages can be quickly identified and resolved.
Consider the three parts defined above:
=== Network management system terminal ===
Local Craft interface
Local "craftspersons" (telephone network engineers) can access a SDH/SONET network element on a "craft port" and issue commands through a dumb terminal or terminal emulation program running on a laptop. This interface can also be attached to a console server, allowing for remote out-of-band management and logging.
Network management system (sitting at a higher layer)
This will often consist of software running on a workstation covering a number of SDH/SONET network elements.
=== TL1/Q3 Protocols ===
TL1
SONET equipment is often managed with the TL1 protocol. TL1 is a telecom language for managing and reconfiguring SONET network elements. The command language used by a SONET network element, such as TL1, must be carried by other management protocols, such as SNMP, CORBA, or XML.
Q3
SDH has been mainly managed using the Q3 interface protocol suite defined in ITU recommendations Q.811 and Q.812. With the convergence of SONET and SDH on switching matrix and network elements architecture, newer implementations have also offered TL1.
Most SONET NEs have a limited number of management interfaces defined:
TL1 Electrical interface
The electrical interface, often a 50-ohm coaxial cable, sends SONET TL1 commands from a local management network physically housed in the central office where the SONET network element is located. This is for local management of that network element and, possibly, remote management of other SONET network elements.
=== Dedicated embedded data communication channels (DCCs) ===
SONET and SDH have dedicated data communication channels (DCCs) within the section and line overhead for management traffic. Generally, section overhead (regenerator section in SDH) is used. According to ITU-T G.7712, there are three modes used for management:
IP-only stack, using PPP as data-link
OSI-only stack, using LAP-D as data-link
Dual (IP+OSI) stack using PPP or LAP-D with tunneling functions to communicate between stacks.
To handle all of the possible management channels and signals, most modern network elements contain a router for the network commands and underlying (data) protocols.
== Equipment ==
With advances in SONET and SDH chipsets, the traditional categories of network elements are no longer distinct. Nevertheless, as network architectures have remained relatively constant, even newer equipment (including multi-service provisioning platforms) can be examined in light of the architectures they will support. Thus, there is value in viewing new, as well as traditional, equipment in terms of the older categories.
=== Regenerator ===
Traditional regenerators terminate the section overhead, but not the line or path. Regenerators extend long-haul routes by converting an optical signal that has already traveled a long distance into electrical form and then retransmitting a regenerated high-power signal.
Since the late 1990s, regenerators have been largely replaced by optical amplifiers. Also, some of the functionality of regenerators has been absorbed by the transponders of wavelength-division multiplexing systems.
=== STS multiplexer and demultiplexer ===
STS multiplexers and demultiplexers provide the interface between an electrical tributary network and the optical network.
=== Add-drop multiplexer ===
Add-drop multiplexers (ADMs) are the most common type of network elements. Traditional ADMs were designed to support one of the network architectures, though new generation systems can often support several architectures, sometimes simultaneously. ADMs traditionally have a high-speed side (where the full line rate signal is supported), and a low-speed side, which can consist of electrical as well as optical interfaces. The low-speed side takes in low-speed signals, which are multiplexed by the network element and sent out from the high-speed side, or vice versa.
=== Digital cross connect system ===
Recent digital cross connect systems (DCSs or DXCs) support numerous high-speed signals, and allow for cross-connection of DS1s, DS3s and even STS-3s/12c and so on, from any input to any output. Advanced DCSs can support numerous subtending rings simultaneously.
== Network architectures ==
SONET and SDH have a limited number of architectures defined. These architectures allow for efficient bandwidth usage as well as protection, i.e. the ability to transmit traffic even when part of the network has failed, and are fundamental to the worldwide deployment of SONET and SDH for moving digital traffic. Every SDH/SONET connection on the optical physical layer uses two optical fibers, regardless of the transmission speed.
=== Linear Automatic Protection Switching ===
Linear Automatic Protection Switching (APS), also known as 1+1, involves four fibers: two working fibers (one in each direction), and two protection fibers. Switching is based on the line state, and may be unidirectional (with each direction switching independently), or bidirectional (where the network elements at each end negotiate so that both directions are generally carried on the same pair of fibers).
=== Unidirectional path-switched ring ===
In unidirectional path-switched rings (UPSRs), two redundant (path-level) copies of protected traffic are sent in either direction around a ring. A selector at the egress node determines which copy has the highest quality, and uses that copy, thus coping if one copy deteriorates due to a broken fiber or other failure.
UPSRs tend to sit nearer to the edge of a network, and as such are sometimes called collector rings. Because the same data is sent around the ring in both directions, the total capacity of a UPSR is equal to the line rate N of the OC-N ring. For example, in an OC-3 ring with 3 STS-1s used to transport 3 DS-3s from ingress node A to the egress node D, 100 percent of the ring bandwidth (N=3) would be consumed by nodes A and D. Any other nodes on the ring could only act as pass-through nodes. The SDH equivalent of UPSR is subnetwork connection protection (SNCP); SNCP does not impose a ring topology, but may also be used in mesh topologies.
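The capacity arithmetic above can be sketched in a few lines. This is an illustrative accounting model only (the function name and interface are invented, not part of any SONET tool): because each protected path is sent both ways around a UPSR, every STS-1 of demand consumes one STS-1 of the ring's line rate N on every span.

```python
# Sketch (not a real SONET tool): capacity accounting on a UPSR ring.
# Each protected path travels both directions around the ring, so each
# STS-1 of demand consumes capacity everywhere on the ring.

def upsr_utilization(line_rate_n, demands_sts1):
    """demands_sts1: list of STS-1 counts, one per protected path."""
    used = sum(demands_sts1)          # every demand occupies the whole ring
    if used > line_rate_n:
        raise ValueError("demands exceed ring capacity")
    return used / line_rate_n

# The OC-3 example from the text: three DS-3s (one STS-1 each) from node A
# to node D consume 100 percent of an N=3 ring, leaving nothing for others.
print(upsr_utilization(3, [1, 1, 1]))  # -> 1.0
```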
=== Bidirectional line-switched ring ===
Bidirectional line-switched ring (BLSR) comes in two varieties: two-fiber BLSR and four-fiber BLSR. BLSRs switch at the line layer. Unlike UPSR, BLSR does not send redundant copies from ingress to egress. Rather, the ring nodes adjacent to the failure reroute the traffic "the long way" around the ring on the protection fibers. BLSRs trade cost and complexity for bandwidth efficiency, as well as the ability to support "extra traffic" that can be pre-empted when a protection switching event occurs. In a four-fiber ring, either single node failures or multiple line failures can be supported, since a failure or maintenance action on one line causes the protection fiber connecting two nodes to be used rather than looping it around the ring.
BLSRs can operate within a metropolitan region or, often, will move traffic between municipalities. Because a BLSR does not send redundant copies from ingress to egress, the total bandwidth that a BLSR can support is not limited to the line rate N of the OC-N ring, and can actually be larger than N depending upon the traffic pattern on the ring.
In the best case, all traffic is between adjacent nodes. The worst case is when all traffic on the ring egresses from a single node, i.e., the BLSR is serving as a collector ring. In this case, the bandwidth that the ring can support is equal to the line rate N of the OC-N ring. This is why BLSRs are seldom, if ever, deployed in collector rings, but often deployed in inter-office rings. The SDH equivalent of BLSR is called Multiplex Section-Shared Protection Ring (MS-SPRING).
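Why a BLSR can carry more than its line rate can also be sketched. In this simplified model (names and the shortest-path routing rule are assumptions for illustration), working traffic occupies capacity only on the spans between its ingress and egress, so the ring is limited by its busiest span rather than by total demand.

```python
# Sketch: span-based capacity on a BLSR. Traffic between adjacent nodes
# loads only the span between them, so total carried traffic can exceed
# the line rate N; a collector pattern concentrates load on a few spans.

def blsr_feasible(line_rate_n, num_nodes, demands):
    """demands: list of (ingress, egress, sts1_count); nodes are numbered
    0..num_nodes-1 around the ring, and traffic takes the shorter arc."""
    span_load = [0] * num_nodes        # span i connects node i to node i+1
    for a, b, sts in demands:
        fwd = (b - a) % num_nodes
        if fwd <= num_nodes - fwd:     # route over the shorter arc
            spans = [(a + k) % num_nodes for k in range(fwd)]
        else:
            spans = [(b + k) % num_nodes for k in range(num_nodes - fwd)]
        for s in spans:
            span_load[s] += sts
    return max(span_load) <= line_rate_n

# Best case from the text: adjacent-node demands reuse no spans, so four
# full-rate STS-3 demands all fit on a four-node OC-3 (N=3) ring at once.
print(blsr_feasible(3, 4, [(0, 1, 3), (1, 2, 3), (2, 3, 3), (3, 0, 3)]))
```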
== Synchronization ==
Clock sources used for synchronization in telecommunications networks are rated by quality, commonly called a stratum. Typically, a network element uses the highest quality stratum available to it, which can be determined by monitoring the synchronization status messages (SSM) of selected clock sources.
Synchronization sources available to a network element are:
Local external timing
This is generated by an atomic cesium clock or a satellite-derived clock by a device in the same central office as the network element. The interface is often a DS1, with sync-status messages supplied by the clock and placed into the DS1 overhead.
Line-derived timing
A network element can choose (or be configured) to derive its timing from the line-level, by monitoring the S1 sync-status bytes to ensure quality.
Holdover
As a last resort, in the absence of higher quality timing, a network element can go into a holdover mode until higher-quality external timing becomes available again. In this mode, the network element uses its own timing circuits as a reference.
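The source-selection logic described above can be sketched as follows. The quality ranks and source names here are illustrative assumptions, not the actual SSM code points: the element locks to the best usable stratum and falls back to holdover when nothing usable remains.

```python
# Sketch: choosing a timing reference from monitored candidates. A lower
# rank means a better stratum; ranks and names are made up for illustration.

HOLDOVER = "internal holdover"

def select_timing_source(candidates):
    """candidates: dict of source name -> (stratum_rank, usable)."""
    usable = {name: rank for name, (rank, ok) in candidates.items() if ok}
    if not usable:
        return HOLDOVER                  # last resort: free-run on own clock
    return min(usable, key=usable.get)   # best (lowest) stratum rank wins

sources = {
    "external BITS/DS1": (1, False),     # cesium-derived feed has failed
    "line east (S1 byte)": (2, True),
    "line west (S1 byte)": (3, True),
}
print(select_timing_source(sources))     # -> line east (S1 byte)
```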
=== Timing loops ===
A timing loop occurs when network elements in a network are each deriving their timing from other network elements, without any of them being a "master" timing source. This network loop will eventually see its own timing "float away" from any external networks, causing mysterious bit errors—and ultimately, in the worst cases, massive loss of traffic. The source of these kinds of errors can be hard to diagnose. In general, a network that has been properly configured should never find itself in a timing loop, but some classes of silent failures could nevertheless cause this issue.
== Next-generation SONET/SDH ==
SONET/SDH development was originally driven by the need to transport multiple PDH signals—like DS1, E1, DS3, and E3—along with other groups of multiplexed 64 kbit/s pulse-code modulated voice traffic. The ability to transport ATM traffic was another early application. In order to support large ATM bandwidths, concatenation was developed, whereby smaller multiplexing containers (e.g., STS-1) are inversely multiplexed to build up a larger container (e.g., STS-3c) to support large data-oriented pipes.
One problem with traditional concatenation, however, is inflexibility. Depending on the data and voice traffic mix that must be carried, there can be a large amount of unused bandwidth left over, due to the fixed sizes of concatenated containers. For example, fitting a 100 Mbit/s Fast Ethernet connection inside a 155 Mbit/s STS-3c container leads to considerable waste. More important is the need for all intermediate network elements to support newly introduced concatenation sizes. This problem was overcome with the introduction of Virtual Concatenation.
Virtual concatenation (VCAT) allows for a more arbitrary assembly of lower-order multiplexing containers, building larger containers of fairly arbitrary size (e.g., 100 Mbit/s) without the need for intermediate network elements to support this particular form of concatenation. Virtual concatenation leverages the X.86 or Generic Framing Procedure (GFP) protocols in order to map payloads of arbitrary bandwidth into the virtually concatenated container.
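The efficiency gain over traditional concatenation can be made concrete with Gigabit Ethernet. The payload figures below are approximate SONET SPE capacities, and the helper function is an illustrative sketch: VCAT builds an STS-3c-7v group that is about 95% filled, while the smallest contiguous fit, an STS-48c, would be under half used.

```python
# Sketch: virtual vs. contiguous concatenation for a 1000 Mbit/s payload.
# Capacities are approximate SONET SPE payload rates.

import math

STS3C_MBPS = 149.76     # payload of one STS-3c member
STS48C_MBPS = 2396.16   # payload of an STS-48c, the smallest contiguous fit

def vcat_members(payload_mbps, member_mbps):
    """Number of members needed to carry the payload under VCAT."""
    return math.ceil(payload_mbps / member_mbps)

gbe = 1000.0
x = vcat_members(gbe, STS3C_MBPS)        # 7 members -> an STS-3c-7v group
vcat_eff = gbe / (x * STS3C_MBPS)        # ~95% of the allocated capacity
ccat_eff = gbe / STS48C_MBPS             # ~42% with a contiguous STS-48c
print(x, round(vcat_eff, 2), round(ccat_eff, 2))   # -> 7 0.95 0.42
```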
The Link Capacity Adjustment Scheme (LCAS) allows for dynamically changing the bandwidth via dynamic virtual concatenation, multiplexing containers based on the short-term bandwidth needs in the network.
The set of next-generation SONET/SDH protocols that enable Ethernet transport is referred to as Ethernet over SONET/SDH (EoS).
== End of life and retirement ==
SONET/SDH was used by internet access providers for large customers, but is no longer competitive in the supply of private circuits. Development has stagnated for the past decade (as of 2020), and both suppliers of equipment and operators of SONET/SDH networks are migrating to other technologies such as OTN and wide-area Ethernet.
British Telecom closed down its KiloStream and MegaStream products in March 2020; these were the last large-scale uses of BT's SDH network. BT has also ceased new connections to its SDH network, which indicates withdrawal of the service soon.
== See also ==
List of device bandwidths
Routing and wavelength assignment
Multiwavelength optical networking
Optical mesh network
Optical Transport Network
Remote error indication
G.709
Transmux
Internet access
== Notes ==
== References ==
== External links ==
Understanding SONET/SDH
The Queen's University of Belfast SDH/SONET Primer Archived 20 September 2005 at the Wayback Machine
SDH Pocket Handbook from Acterna/JDSU
SONET Pocket Handbook from Acterna/JDSU
The Sonet Homepage
SONET Interoperability Form (SIF)
Network Connection Speeds Reference
Next-generation SDH: the future looks bright
The Future of SONET/SDH (pdf)
Telcordia GR-253-CORE, SONET Transport Systems: Common Generic Criteria
Telcordia GR-499-CORE, Transport Systems Generic Requirements (TSGR): Common Requirements
ANSI T1.105: SONET - Basic Description including Multiplex Structure, Rates and Formats
ANSI T1.119/ATIS PP 0900119.01.2006: SONET - Operations, Administration, Maintenance, and Provisioning (OAM&P) - Communications
ITU-T recommendation G.707: Network Node Interface for the Synchronous Digital Hierarchy (SDH)
ITU-T recommendation G.783: Characteristics of synchronous digital hierarchy (SDH) equipment functional blocks
ITU-T recommendation G.803: Architecture of Transport Networks Based on the Synchronous Digital Hierarchy (SDH) | Wikipedia/Synchronous_optical_networking |
A network operating system (NOS) is a specialized operating system for a network device such as a router, switch or firewall.
Historically operating systems with networking capabilities were described as network operating systems, because they allowed personal computers (PCs) to participate in computer networks and shared file and printer access within a local area network (LAN). This description of operating systems is now largely historical, as common operating systems include a network stack to support a client–server model.
== Key Functions ==
Network Operating Systems (NOS) are responsible for managing various network activities. Key functions include creating and managing user accounts, controlling access to resources such as files and printers, and facilitating communication between devices. NOS also monitors network performance, addresses issues, and manages resources to ensure efficient and secure operation of the network.
== History ==
Packet switching networks were developed to share hardware resources, such as a mainframe computer, a printer or a large and expensive hard disk.
Historically, a network operating system was an operating system for a computer which implemented network capabilities. Operating systems with a network stack allowed personal computers to participate in a client-server architecture in which a server enables multiple clients to share resources, such as printers.
These limited client/server networks were gradually replaced by peer-to-peer networks, which used networking capabilities to share resources and files located on a variety of computers of all sizes. A peer-to-peer network sets all connected computers as equals; they all share the same abilities to use resources available on the network.
Today, distributed computing and groupware applications have become the norm, and computer operating systems include a networking stack as a matter of course. During the 1980s, the need to integrate dissimilar computers with network capabilities grew and the number of networked devices grew rapidly. Partly because it allowed for multi-vendor interoperability and could route packets globally rather than being restricted to a single building, the Internet protocol suite became almost universally adopted in network architectures. Thereafter, computer operating systems and the firmware of network devices tended to support Internet protocols.
== Network device operating systems ==
Network operating systems can be embedded in a router or hardware firewall that operates functions at the network layer (layer 3). Notable network operating systems include:
=== Proprietary network operating systems ===
Cisco IOS, a family of network operating systems used on Cisco Systems routers and network switches. (Earlier switches ran the Catalyst operating system, or CatOS)
RouterOS by MikroTik
ZyNOS, used in network devices made by ZyXEL
=== FreeBSD, NetBSD, OpenBSD, and Linux-based operating systems ===
Cisco NX-OS, IOS XE, and IOS XR; families of network operating systems used across various Cisco Systems devices, including the Cisco Nexus and Cisco ASR platforms
Junos OS; a network operating system that runs on Juniper Networks platforms
Cumulus Linux distribution, which uses the full TCP/IP stack of Linux
DD-WRT, a Linux kernel-based firmware for wireless routers and access points as well as low-cost networking device platforms such as the Linksys WRT54G
Dell Networking Operating System; DNOS9 is NetBSD based, while OS10 uses the Linux kernel
Extensible Operating System runs on switches from Arista and uses an unmodified Linux kernel
ExtremeXOS (EXOS), used in network devices made by Extreme Networks
FTOS (Force10 Operating System), the firmware family used on Force10 Ethernet switches
ONOS, an open source SDN operating system (hosted by Linux Foundation) for communications service providers that is designed for scalability, high performance and high availability.
OpenBSD, an open source operating system which includes its own implementations of BGP, RPKI, OSPF, MPLS, VXLAN, and other IETF standardized networking protocols, as well as firewall (PF) and load-balancing functionality.
OpenWrt used to route IP packets on embedded devices
pfSense, a fork of M0n0wall, which uses PF
OPNsense, a fork of pfSense
SONiC, a Linux-based network operating system developed by Microsoft
VyOS, an open source fork of the Vyatta routing package
== See also ==
Distributed operating system
FRRouting
Interruptible operating system
Network Computer Operating System
Network functions virtualization
Operating System Projects
SONiC (operating system)
== References == | Wikipedia/Network_operating_system |
These tables provide a comparison of operating systems for computer devices, listing general and technical information for a number of widely used and currently available PC or handheld (including smartphone and tablet computer) operating systems. The article "Usage share of operating systems" provides a broader and more general comparison of operating systems that includes servers, mainframes and supercomputers.
Because of the large number and variety of available Linux distributions, they are all grouped under a single entry; see comparison of Linux distributions for a detailed comparison. There is also a variety of BSD and DOS operating systems, covered in comparison of BSD operating systems and comparison of DOS operating systems.
== Nomenclature ==
The nomenclature for operating systems varies among providers and sometimes within providers.
For purposes of this article, the terms used are:
kernel
In some operating systems, the OS is split into a low level region called the kernel and higher level code that relies on the kernel. Typically the kernel implements processes but its code does not run as part of a process.
hybrid kernel
monolithic kernel
Nucleus
In some operating systems there is OS code permanently present in a contiguous region of memory addressable by unprivileged code; in IBM systems this is typically referred to as the nucleus. The nucleus typically contains both code that requires special privileges and code that can run in an unprivileged state. Typically some code in the nucleus runs in the context of a dispatching unit, e.g., address space, process, task, thread, while other code runs independent of any dispatching unit. In contemporary operating systems unprivileged applications cannot alter the nucleus.
License and pricing policies also vary among different systems. The tables below use the following terms:
BSD
BSD licenses are a family of permissive free software licenses, imposing minimal restrictions on the use and distribution of covered software.
bundled
The fee is included in the price of the hardware
bundled initially
The fee is included in the price of the hardware but upgrades require an additional fee.
GPL2
GPL3
Per user
The fee depends on the maximum number of users concurrently logged on.
MSU
The fee depends on the resources consumed by the user
MULC
Measured Usage License Charges
PSLC
Parallel Sysplex Software Pricing
== General information ==
== Technical information ==
== Security ==
== Commands ==
For POSIX compliant (or partly compliant) systems like FreeBSD, Linux, macOS or Solaris, the basic commands are the same because they are standardized.
NOTE: Linux systems may vary by distribution in which specific program, or even 'command', is invoked, via the POSIX alias function. For example, to make the DOS dir command give a directory listing with one detailed file entry per line, you could define alias dir='ls -lahF' (e.g. in a session configuration file).
== See also ==
=== Operating system comparisons ===
== References ==
== External links ==
"Operating System Technological Comparison". Retrieved May 9, 2005. | Wikipedia/Comparison_of_operating_systems |
In computer network research, network simulation is a technique whereby a software program replicates the behavior of a real network. This is achieved by calculating the interactions between the different network entities such as routers, switches, nodes, access points, links, etc. Most simulators use discrete event simulation, the modeling of systems in which state variables change at discrete points in time. The behavior of the network and the various applications and services it supports can then be observed in a test lab; various attributes of the environment can also be modified in a controlled manner to assess how the network and protocols would behave under different conditions.
== Network simulator ==
A network simulator is a software program that can predict the performance of a computer network or a wireless communication network. Since communication networks have become too complex for traditional analytical methods to provide an accurate understanding of system behavior, network simulators are used. In simulators, the computer network is modeled with devices, links, applications, etc., and the network performance is reported. Simulators come with support for the most popular technologies and networks in use today, such as 5G, Internet of Things (IoT), wireless LANs, mobile ad hoc networks, wireless sensor networks, vehicular ad hoc networks, cognitive radio networks, and LTE.
== Simulations ==
Most of the commercial simulators are GUI driven, while some network simulators are CLI driven. The network model/configuration describes the network (nodes, routers, switches, links) and the events (data transmissions, packet errors, etc.). Output results would include network-level metrics, link metrics, device metrics, etc. Further drill-down in the form of simulation trace files would also be available. Trace files log every packet and every event that occurred in the simulation and are used for analysis. Most network simulators use discrete event simulation, in which a list of pending "events" is stored, and those events are processed in order, with some events triggering future events, such as the arrival of a packet at one node triggering the arrival of that packet at a downstream node.
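The discrete event loop described above can be sketched in a few lines. The topology and link delay below are made up for illustration: a heap of pending (time, event) entries is processed in time order, and handling one event (a packet arriving at a node) schedules a future one (its arrival at the downstream node).

```python
# Minimal sketch of discrete event simulation: events are processed in
# timestamp order, and each arrival schedules the next-hop arrival.

import heapq

def simulate(path, link_delay, start=0.0):
    """Deliver one packet along `path`, returning (node, time) hops."""
    events = [(start, 0)]                 # (arrival_time, index into path)
    log = []
    while events:
        t, i = heapq.heappop(events)      # always the earliest pending event
        log.append((path[i], t))
        if i + 1 < len(path):             # arrival triggers the next arrival
            heapq.heappush(events, (t + link_delay, i + 1))
    return log

trace = simulate(["A", "R1", "R2", "B"], link_delay=1.5)
print(trace)   # -> [('A', 0.0), ('R1', 1.5), ('R2', 3.0), ('B', 4.5)]
```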
== Network emulation ==
Network emulation allows users to introduce real devices and applications into a simulated test network that alters packet flow in such a way as to mimic the behavior of a live network. Live traffic can pass through the simulator and be affected by objects within the simulation.
The typical methodology is that real packets from a live application are sent to the emulation server (where the virtual network is simulated). The real packet gets 'modulated' into a simulation packet. The simulation packet gets demodulated back into a real packet after experiencing the effects of loss, errors, delay, jitter, etc., thereby transferring these network effects into the real packet. Thus it is as if the real packet had flowed through a real network, when in reality it flowed through the simulated network.
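The modulate/demodulate idea can be sketched as applying the simulated network's impairments to a stream of real packets. The parameters below (delay, jitter, loss rate) are illustrative assumptions; a real emulator derives them from the running simulation rather than from fixed constants.

```python
# Sketch: imposing simulated network effects (delay, jitter, loss) on a
# stream of real packets. All parameters are made up for illustration.

import random

def emulate(packets, delay_ms=20.0, jitter_ms=5.0, loss_rate=0.1, seed=42):
    rng = random.Random(seed)             # seeded for a repeatable run
    delivered = []
    for pkt in packets:
        if rng.random() < loss_rate:      # packet lost in the virtual network
            continue
        latency = delay_ms + rng.uniform(-jitter_ms, jitter_ms)
        delivered.append((pkt, round(latency, 1)))
    return delivered

out = emulate([f"pkt{i}" for i in range(5)])
print(out)
```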
Emulation is widely used in the design stage for validating communication networks prior to deployment.
== List of network simulators ==
There are both free/open-source and proprietary network simulators available. Examples of notable open source network simulators / emulators include:
ns Simulator
GloMoSim
SimGrid
There are also some notable commercial network simulators.
== Uses of network simulators ==
Network simulators provide a cost-effective method for
5G, 6G coverage, capacity, throughput and latency analysis
Network R & D (more than 70% of all network research papers reference a network simulator)
Defense applications such as UHF/VHF/L-band radio-based MANET radios, dynamic TDMA MAC, PHY waveforms, etc.
IoT and VANET simulations
UAV network/drone swarm communication simulation
Machine Learning for communication networks
Education: Online courses, Lab experimentation, and R & D. Most universities use a network simulator for teaching / R & D since it is too expensive to buy hardware equipment
There are a wide variety of network simulators, ranging from the very simple to the very complex. Minimally, a network simulator must enable a user to
Model the network topology specifying the nodes on the network and the links between those nodes
Model the application flow (traffic) between the nodes
Provide network performance metrics such as throughput, latency, error rate, etc., as output
Evaluate protocol and device designs
Log radio measurements, packet and events for drill-down analyses and debugging
== See also ==
Network emulation
Traffic generation model
== References == | Wikipedia/Network_simulation |
The Internet protocol suite, commonly known as TCP/IP, is a framework for organizing the set of communication protocols used in the Internet and similar computer networks according to functional criteria. The foundational protocols in the suite are the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), and the Internet Protocol (IP). Early versions of this networking model were known as the Department of Defense (DoD) model because the research and development were funded by the United States Department of Defense through DARPA.
The Internet protocol suite provides end-to-end data communication specifying how data should be packetized, addressed, transmitted, routed, and received. This functionality is organized into four abstraction layers, which classify all related protocols according to each protocol's scope of networking. An implementation of the layers for a particular application forms a protocol stack. From lowest to highest, the layers are the link layer, containing communication methods for data that remains within a single network segment (link); the internet layer, providing internetworking between independent networks; the transport layer, handling host-to-host communication; and the application layer, providing process-to-process data exchange for applications.
The technical standards underlying the Internet protocol suite and its constituent protocols are maintained by the Internet Engineering Task Force (IETF). The Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems.
== History ==
=== Early research ===
Initially referred to as the DOD Internet Architecture Model, the Internet protocol suite has its roots in research and development sponsored by the Defense Advanced Research Projects Agency (DARPA) in the late 1960s. After DARPA initiated the pioneering ARPANET in 1969, Steve Crocker established a "Networking Working Group" which developed a host-host protocol, the Network Control Program (NCP). In the early 1970s, DARPA started work on several other data transmission technologies, including mobile packet radio, packet satellite service, local area networks, and other data networks in the public and private domains. In 1972, Bob Kahn joined the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. In the spring of 1973, Vinton Cerf joined Kahn with the goal of designing the next protocol generation for the ARPANET to enable internetworking. They drew on the experience from the ARPANET research community, the International Network Working Group, which Cerf chaired, and researchers at Xerox PARC.
By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation, in which the differences between local network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability, as in the existing ARPANET protocols, this function was delegated to the hosts. Cerf credits Louis Pouzin and Hubert Zimmermann, designers of the CYCLADES network, with important influences on this design. The new protocol was implemented as the Transmission Control Program in 1974 by Cerf, Yogen Dalal and Carl Sunshine.
Initially, the Transmission Control Program (the Internet Protocol did not then exist as a separate protocol) provided only a reliable byte stream service to its users, not datagrams. Several versions were developed through the Internet Experiment Note series. As experience with the protocol grew, collaborators recommended division of functionality into layers of distinct protocols, allowing users direct access to datagram service. Advocates included Bob Metcalfe and Yogen Dalal at Xerox PARC; Danny Cohen, who needed it for his packet voice work; and Jonathan Postel of the University of Southern California's Information Sciences Institute, who edited the Request for Comments (RFCs), the technical and strategic document series that has both documented and catalyzed Internet development. Postel stated, "We are screwing up in our design of Internet protocols by violating the principle of layering." Encapsulation of different mechanisms was intended to create an environment where the upper layers could access only what was needed from the lower layers. A monolithic design would be inflexible and lead to scalability issues. In version 4, written in 1978, Postel split the Transmission Control Program into two distinct protocols, the Internet Protocol as connectionless layer and the Transmission Control Protocol as a reliable connection-oriented service.
The design of the network included the recognition that it should provide only the functions of efficiently transmitting and routing traffic between end nodes and that all other intelligence should be located at the edge of the network, in the end nodes. This end-to-end principle was pioneered by Louis Pouzin in the CYCLADES network, based on the ideas of Donald Davies. Using this design, it became possible to connect other networks to the ARPANET that used the same principle, irrespective of other local characteristics, thereby solving Kahn's initial internetworking problem. A popular expression is that TCP/IP, the eventual product of Cerf and Kahn's work, can run over "two tin cans and a string." Years later, as a joke in 1999, the IP over Avian Carriers formal protocol specification was created and successfully tested two years later. 10 years later still, it was adapted for IPv6.
DARPA contracted with BBN Technologies, Stanford University, and the University College London to develop operational versions of the protocol on several hardware platforms. During development of the protocol the version number of the packet routing layer progressed from version 1 to version 4, the latter of which was installed in the ARPANET in 1983. It became known as Internet Protocol version 4 (IPv4) as the protocol that is still in use in the Internet, alongside its current successor, Internet Protocol version 6 (IPv6).
=== Early implementation ===
In 1975, a two-network IP communications test was performed between Stanford and University College London. In November 1977, a three-network IP test was conducted between sites in the US, the UK, and Norway. Several other IP prototypes were developed at multiple research centers between 1978 and 1983.
A computer called a router is provided with an interface to each network. It forwards network packets back and forth between them. Originally a router was called gateway, but the term was changed to avoid confusion with other types of gateways.
=== Adoption ===
In March 1982, the US Department of Defense declared TCP/IP as the standard for all military computer networking. In the same year, NORSAR/NDRE and Peter Kirstein's research group at University College London adopted the protocol. The migration of the ARPANET from NCP to TCP/IP was officially completed on flag day January 1, 1983, when the new protocols were permanently activated.
In 1985, the Internet Advisory Board (later Internet Architecture Board) held a three-day TCP/IP workshop for the computer industry, attended by 250 vendor representatives, promoting the protocol and leading to its increasing commercial use. In 1985, the first Interop conference focused on network interoperability by broader adoption of TCP/IP. The conference was founded by Dan Lynch, an early Internet activist. From the beginning, large corporations, such as IBM and DEC, attended the meeting.
IBM, AT&T and DEC were the first major corporations to adopt TCP/IP, this despite having competing proprietary protocols. In IBM, from 1984, Barry Appelman's group did TCP/IP development. They navigated the corporate politics to get a stream of TCP/IP products for various IBM systems, including MVS, VM, and OS/2. At the same time, several smaller companies, such as FTP Software and the Wollongong Group, began offering TCP/IP stacks for DOS and Microsoft Windows. The first VM/CMS TCP/IP stack came from the University of Wisconsin.
Some of the early TCP/IP stacks were written single-handedly by a few programmers. Jay Elinsky and Oleg Vishnepolsky of IBM Research wrote TCP/IP stacks for VM/CMS and OS/2, respectively. In 1984 Donald Gillies at MIT wrote a ntcp multi-connection TCP which runs atop the IP/PacketDriver layer maintained by John Romkey at MIT in 1983–84. Romkey leveraged this TCP in 1986 when FTP Software was founded. Starting in 1985, Phil Karn created a multi-connection TCP application for ham radio systems (KA9Q TCP).
The spread of TCP/IP was fueled further in June 1989, when the University of California, Berkeley agreed to place the TCP/IP code developed for BSD UNIX into the public domain. Various corporate vendors, including IBM, included this code in commercial TCP/IP software releases. For Windows 3.1, the dominant PC operating system among consumers in the first half of the 1990s, Peter Tattam's release of the Trumpet Winsock TCP/IP stack was key to bringing the Internet to home users. Trumpet Winsock allowed TCP/IP operations over a serial connection (SLIP or PPP). The typical home PC of the time had an external Hayes-compatible modem connected via an RS-232 port with an 8250 or 16550 UART which required this type of stack. Later, Microsoft would release their own TCP/IP add-on stack for Windows for Workgroups 3.11 and a native stack in Windows 95. These events helped cement TCP/IP's dominance over other protocols on Microsoft-based networks, which included IBM's Systems Network Architecture (SNA), and on other platforms such as Digital Equipment Corporation's DECnet, Open Systems Interconnection (OSI), and Xerox Network Systems (XNS).
Nonetheless, for a period in the late 1980s and early 1990s, engineers, organizations and nations were polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks.
=== Formal specification and standards ===
The technical standards underlying the Internet protocol suite and its constituent protocols have been delegated to the Internet Engineering Task Force (IETF).
The characteristic architecture of the Internet protocol suite is its broad division into operating scopes for the protocols that constitute its core functionality. The defining specifications of the suite are RFC 1122 and 1123, which broadly outline four abstraction layers (as well as related protocols): the link layer, IP layer, transport layer, and application layer, along with support protocols. These have stood the test of time, as the IETF has never modified this structure. As such a model of networking, the Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems.
== Key architectural principles ==
The end-to-end principle has evolved over time. Its original expression put the maintenance of state and overall intelligence at the edges, and assumed the Internet that connected the edges retained no state and concentrated on speed and simplicity. Real-world needs for firewalls, network address translators, web content caches and the like have forced changes in this principle.
The robustness principle states: "In general, an implementation must be conservative in its sending behavior, and liberal in its receiving behavior. That is, it must be careful to send well-formed datagrams, but must accept any datagram that it can interpret (e.g., not object to technical errors where the meaning is still clear)." The same document adds: "The second part of the principle is almost as important: software on other hosts may contain deficiencies that make it unwise to exploit legal but obscure protocol features."
Encapsulation is used to provide abstraction of protocols and services. Encapsulation is usually aligned with the division of the protocol suite into layers of general functionality. In general, an application (the highest level of the model) uses a set of protocols to send its data down the layers. The data is further encapsulated at each level.
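As an illustration of this layered wrapping (a toy sketch, not a real protocol stack — the header strings here are invented stand-ins for actual protocol headers):

```python
# Toy illustration of encapsulation: each layer prepends its own header to
# the data handed down from the layer above. Real headers carry ports,
# addresses, checksums, etc.; these strings are simplified stand-ins.

def encapsulate(payload: bytes) -> bytes:
    app = b"HDR-APP|" + payload    # application-layer framing
    seg = b"HDR-TCP|" + app        # transport layer: ports, sequence numbers
    pkt = b"HDR-IP|" + seg         # internet layer: IP addresses
    frm = b"HDR-ETH|" + pkt        # link layer: MAC addresses
    return frm

frame = encapsulate(b"hello")
# The original payload ends up nested innermost, wrapped by each header.
print(frame)  # b'HDR-ETH|HDR-IP|HDR-TCP|HDR-APP|hello'
```

On receipt, the process runs in reverse: each layer strips its own header and passes the remainder up.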
An early pair of architectural documents, RFC 1122 and 1123, titled Requirements for Internet Hosts, emphasizes architectural principles over layering. RFC 1122/23 are structured in sections referring to layers, but the documents refer to many other architectural principles, and do not emphasize layering. They loosely define a four-layer model, with the layers having names, not numbers, as follows:
The application layer is the scope within which applications, or processes, create user data and communicate this data to other applications on another or the same host. The applications make use of the services provided by the underlying lower layers, especially the transport layer which provides reliable or unreliable pipes to other processes. The communications partners are characterized by the application architecture, such as the client–server model and peer-to-peer networking. This is the layer in which all application protocols, such as SMTP, FTP, SSH, HTTP, operate. Processes are addressed via ports which essentially represent services.
The transport layer performs host-to-host communications on either the local network or remote networks separated by routers. It provides a channel for the communication needs of applications. UDP is the basic transport layer protocol, providing an unreliable connectionless datagram service. The Transmission Control Protocol provides flow-control, connection establishment, and reliable transmission of data.
The internet layer exchanges datagrams across network boundaries. It provides a uniform networking interface that hides the actual topology (layout) of the underlying network connections. It is therefore also the layer that establishes internetworking. Indeed, it defines and establishes the Internet. This layer defines the addressing and routing structures used for the TCP/IP protocol suite. The primary protocol in this scope is the Internet Protocol, which defines IP addresses. Its function in routing is to transport datagrams to the next host, functioning as an IP router, that has the connectivity to a network closer to the final data destination.
The link layer defines the networking methods within the scope of the local network link on which hosts communicate without intervening routers. This layer includes the protocols used to describe the local network topology and the interfaces needed to effect the transmission of internet layer datagrams to next-neighbor hosts.
== Link layer ==
The protocols of the link layer operate within the scope of the local network connection to which a host is attached. This regime is called the link in TCP/IP parlance and is the lowest component layer of the suite. The link includes all hosts accessible without traversing a router. The size of the link is therefore determined by the networking hardware design. In principle, TCP/IP is designed to be hardware independent and may be implemented on top of virtually any link-layer technology. This includes not only hardware implementations but also virtual link layers such as virtual private networks and networking tunnels.
The link layer is used to move packets between the internet layer interfaces of two different hosts on the same link. The processes of transmitting and receiving packets on the link can be controlled in the device driver for the network card, as well as in firmware or by specialized chipsets. These perform functions, such as framing, to prepare the internet layer packets for transmission, and finally transmit the frames to the physical layer and over a transmission medium. The TCP/IP model includes specifications for translating the network addressing methods used in the Internet Protocol to link-layer addresses, such as media access control (MAC) addresses. All other aspects below that level, however, are implicitly assumed to exist and are not explicitly defined in the TCP/IP model.
The link layer in the TCP/IP model has corresponding functions in Layer 2 of the OSI model.
== Internet layer ==
Internetworking requires sending data from the source network to the destination network. This process is called routing and is supported by host addressing and identification using the hierarchical IP addressing system. The internet layer provides an unreliable datagram transmission facility between hosts located on potentially different IP networks by forwarding datagrams to an appropriate next-hop router for further relaying to its destination. The internet layer has the responsibility of sending packets across potentially multiple networks. With this functionality, the internet layer makes possible internetworking, the interworking of different IP networks, and it essentially establishes the Internet.
The internet layer does not distinguish between the various transport layer protocols. IP carries data for a variety of different upper layer protocols. These protocols are each identified by a unique protocol number: for example, Internet Control Message Protocol (ICMP) and Internet Group Management Protocol (IGMP) are protocols 1 and 2, respectively.
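These protocol numbers are exposed as constants by common socket APIs; for example, Python's standard socket module defines them:

```python
import socket

# IANA-assigned IP protocol numbers, as exposed by the socket module.
print(socket.IPPROTO_ICMP)  # 1
print(socket.IPPROTO_IGMP)  # 2
print(socket.IPPROTO_TCP)   # 6
print(socket.IPPROTO_UDP)   # 17
```

The full registry of IP protocol numbers is maintained by IANA.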
The Internet Protocol is the principal component of the internet layer, and it defines two addressing systems to identify network hosts and to locate them on the network. The original address system of the ARPANET and its successor, the Internet, is Internet Protocol version 4 (IPv4). It uses a 32-bit IP address and is therefore capable of identifying approximately four billion hosts. This limitation was eliminated in 1998 by the standardization of Internet Protocol version 6 (IPv6) which uses 128-bit addresses. IPv6 production implementations emerged in approximately 2006.
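The address-space sizes follow directly from the address widths, as a quick computation with the standard ipaddress module shows:

```python
import ipaddress

# IPv4: 32-bit addresses -> 2**32 possible addresses.
ipv4_addresses = ipaddress.ip_network("0.0.0.0/0").num_addresses
print(ipv4_addresses)              # 4294967296, i.e. about 4.3 billion

# IPv6: 128-bit addresses -> 2**128 possible addresses.
ipv6_addresses = ipaddress.ip_network("::/0").num_addresses
print(ipv6_addresses == 2 ** 128)  # True
```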
== Transport layer ==
The transport layer establishes basic data channels that applications use for task-specific data exchange. The layer establishes host-to-host connectivity in the form of end-to-end message transfer services that are independent of the underlying network and independent of the structure of user data and the logistics of exchanging information. Connectivity at the transport layer can be categorized as either connection-oriented, implemented in TCP, or connectionless, implemented in UDP. The protocols in this layer may provide error control, segmentation, flow control, congestion control, and application addressing (port numbers).
For the purpose of providing process-specific transmission channels for applications, the layer establishes the concept of the network port. This is a numbered logical construct allocated specifically for each of the communication channels an application needs. For many types of services, these port numbers have been standardized so that client computers may address specific services of a server computer without the involvement of service discovery or directory services.
Because IP provides only a best-effort delivery, some transport-layer protocols offer reliability.
TCP is a connection-oriented protocol that addresses numerous reliability issues in providing a reliable byte stream:
data arrives in-order
data has minimal error (i.e., correctness)
duplicate data is discarded
lost or discarded packets are resent
includes traffic congestion control
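The in-order delivery and duplicate-discard items in the list above can be sketched with a toy receive buffer. This is only an illustration of the reordering idea; real TCP uses byte-oriented sequence numbers, windows, acknowledgments, and retransmission timers:

```python
# Toy receiver: delivers segments in sequence order, buffers out-of-order
# arrivals, and discards duplicates.

class ToyReceiver:
    def __init__(self):
        self.expected = 0    # next sequence number to deliver
        self.buffer = {}     # out-of-order segments awaiting delivery
        self.delivered = []

    def receive(self, seq: int, data: str) -> None:
        if seq < self.expected or seq in self.buffer:
            return           # duplicate segment: discard
        self.buffer[seq] = data
        while self.expected in self.buffer:   # deliver any in-order run
            self.delivered.append(self.buffer.pop(self.expected))
            self.expected += 1

rx = ToyReceiver()
# Segments arrive out of order, with one duplicate.
for seq, data in [(1, "b"), (0, "a"), (0, "a"), (2, "c")]:
    rx.receive(seq, data)
print(rx.delivered)  # ['a', 'b', 'c']
```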
The newer Stream Control Transmission Protocol (SCTP) is also a reliable, connection-oriented transport mechanism. It is message-stream-oriented, not byte-stream-oriented like TCP, and provides multiple streams multiplexed over a single connection. It also provides multihoming support, in which a connection end can be represented by multiple IP addresses (representing multiple physical interfaces), such that if one fails, the connection is not interrupted. It was developed initially for telephony applications (to transport SS7 over IP).
Reliability can also be achieved by running IP over a reliable data-link protocol such as the High-Level Data Link Control (HDLC).
The User Datagram Protocol (UDP) is a connectionless datagram protocol. Like IP, it is a best-effort, unreliable protocol. Reliability is addressed through error detection using a checksum algorithm. UDP is typically used for applications such as streaming media (audio, video, Voice over IP, etc.) where on-time arrival is more important than reliability, or for simple query/response applications like DNS lookups, where the overhead of setting up a reliable connection is disproportionately large. Real-time Transport Protocol (RTP) is a datagram protocol that is used over UDP and is designed for real-time data such as streaming media.
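The error detection mentioned above uses the ones'-complement Internet checksum described in RFC 1071 (for UDP and TCP the real computation also covers a pseudo-header of IP addresses, which is omitted in this minimal sketch):

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: ones'-complement of the ones'-complement
    sum of the data taken as 16-bit big-endian words."""
    if len(data) % 2:
        data += b"\x00"                  # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:                   # fold carry bits back into the low 16
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

msg = b"example payload."                # even length keeps the checksum aligned
cksum = internet_checksum(msg)
# Verification property: a message with its checksum appended sums to zero.
print(internet_checksum(msg + cksum.to_bytes(2, "big")))  # 0
```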
The applications at any given network address are distinguished by their TCP or UDP port. By convention, certain well-known ports are associated with specific applications.
The TCP/IP model's transport or host-to-host layer corresponds roughly to the fourth layer in the OSI model, also called the transport layer.
QUIC is rapidly emerging as an alternative transport protocol. Whilst it is technically carried via UDP packets, it seeks to offer enhanced transport connectivity relative to TCP. HTTP/3 works exclusively via QUIC.
== Application layer ==
The application layer includes the protocols used by most applications for providing user services or exchanging application data over the network connections established by the lower-level protocols. This may include some basic network support services such as routing protocols and host configuration. Examples of application layer protocols include the Hypertext Transfer Protocol (HTTP), the File Transfer Protocol (FTP), the Simple Mail Transfer Protocol (SMTP), and the Dynamic Host Configuration Protocol (DHCP). Data coded according to application layer protocols are encapsulated into transport layer protocol units (such as TCP streams or UDP datagrams), which in turn use lower layer protocols to effect actual data transfer.
The TCP/IP model does not consider the specifics of formatting and presenting data and does not define additional layers between the application and transport layers as in the OSI model (presentation and session layers). According to the TCP/IP model, such functions are the realm of libraries and application programming interfaces. The application layer in the TCP/IP model is often compared to a combination of the fifth (session), sixth (presentation), and seventh (application) layers of the OSI model.
Application layer protocols are often associated with particular client–server applications, and common services have well-known port numbers reserved by the Internet Assigned Numbers Authority (IANA). For example, the HyperText Transfer Protocol uses server port 80 and Telnet uses server port 23. Clients connecting to a service usually use ephemeral ports, i.e., port numbers assigned only for the duration of the transaction at random or from a specific range configured in the application.
At the application layer, the TCP/IP model distinguishes between user protocols and support protocols. Support protocols provide services to a system of network infrastructure; user protocols are used for actual user applications. For example, FTP is a user protocol and DNS is a support protocol.
Although the applications are usually aware of key qualities of the transport layer connection such as the endpoint IP addresses and port numbers, application layer protocols generally treat the transport layer (and lower) protocols as black boxes which provide a stable network connection across which to communicate. The transport layer and lower-level layers are unconcerned with the specifics of application layer protocols. Routers and switches do not typically examine the encapsulated traffic, but simply provide a conduit for it. However, some firewall and bandwidth throttling applications use deep packet inspection to interpret application data. An example is the Resource Reservation Protocol (RSVP). It is also sometimes necessary for applications affected by NAT to consider the application payload.
== Layering evolution and representations in the literature ==
The Internet protocol suite evolved through research and development funded over a period of time. In this process, the specifics of protocol components and their layering changed. In addition, parallel research and commercial interests from industry associations competed with design features. In particular, efforts in the International Organization for Standardization led to a similar goal, but with a wider scope of networking in general. Efforts to consolidate the two principal schools of layering, which were superficially similar, but diverged sharply in detail, led independent textbook authors to formulate abridging teaching tools.
The following table shows various such networking models. The number of layers varies between three and seven.
Some of the networking models are from textbooks, which are secondary sources that may conflict with the intent of RFC 1122 and other IETF primary sources.
== Comparison of TCP/IP and OSI layering ==
The three top layers in the OSI model, i.e. the application layer, the presentation layer and the session layer, are not distinguished separately in the TCP/IP model which only has an application layer above the transport layer. While some pure OSI protocol applications, such as X.400, also combined them, there is no requirement that a TCP/IP protocol stack must impose monolithic architecture above the transport layer. For example, the NFS application protocol runs over the External Data Representation (XDR) presentation protocol, which, in turn, runs over a protocol called Remote Procedure Call (RPC). RPC provides reliable record transmission, so it can safely use the best-effort UDP transport.
Different authors have interpreted the TCP/IP model differently, and disagree whether the link layer, or any aspect of the TCP/IP model, covers OSI layer 1 (physical layer) issues, or whether TCP/IP assumes a hardware layer exists below the link layer. Several authors have attempted to incorporate the OSI model's layers 1 and 2 into the TCP/IP model since these are commonly referred to in modern standards (for example, by IEEE and ITU). This often results in a model with five layers, where the link layer or network access layer is split into the OSI model's layers 1 and 2.
The IETF protocol development effort is not concerned with strict layering. Some of its protocols may not fit cleanly into the OSI model, although RFCs sometimes refer to it and often use the old OSI layer numbers. The IETF has repeatedly stated that Internet Protocol and architecture development is not intended to be OSI-compliant. RFC 3439, referring to the internet architecture, contains a section entitled: "Layering Considered Harmful".
For example, the session and presentation layers of the OSI suite are considered to be included in the application layer of the TCP/IP suite. The functionality of the session layer can be found in protocols like HTTP and SMTP and is more evident in protocols like Telnet and the Session Initiation Protocol (SIP). Session-layer functionality is also realized with the port numbering of the TCP and UDP protocols, which are included in the transport layer of the TCP/IP suite. Functions of the presentation layer are realized in the TCP/IP applications with the MIME standard in data exchange.
Another difference is in the treatment of routing protocols. The OSI routing protocol IS-IS belongs to the network layer, and does not depend on CLNS for delivering packets from one router to another, but defines its own layer-3 encapsulation. In contrast, OSPF, RIP, BGP and other routing protocols defined by the IETF are transported over IP, and, for the purpose of sending and receiving routing protocol packets, routers act as hosts. As a consequence, routing protocols are included in the application layer. Some authors, such as Tanenbaum in Computer Networks, describe routing protocols in the same layer as IP, reasoning that routing protocols inform decisions made by the forwarding process of routers.
IETF protocols can be encapsulated recursively, as demonstrated by tunnelling protocols such as Generic Routing Encapsulation (GRE). GRE uses the same mechanism that OSI uses for tunnelling at the network layer.
== Implementations ==
The Internet protocol suite does not presume any specific hardware or software environment. It only requires that hardware and a software layer exists that is capable of sending and receiving packets on a computer network. As a result, the suite has been implemented on essentially every computing platform. A minimal implementation of TCP/IP includes the following: Internet Protocol (IP), Address Resolution Protocol (ARP), Internet Control Message Protocol (ICMP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Internet Group Management Protocol (IGMP). In addition to IP, ICMP, TCP, UDP, Internet Protocol version 6 requires Neighbor Discovery Protocol (NDP), ICMPv6, and Multicast Listener Discovery (MLD) and is often accompanied by an integrated IPSec security layer.
== See also ==
BBN Report 1822, an early layered network model
Internetwork Packet Exchange
Fast Local Internet Protocol
List of automation protocols
List of information technology initialisms
List of IP protocol numbers
Lists of network protocols
List of TCP and UDP port numbers
== Notes ==
== References ==
== Bibliography ==
Douglas E. Comer (2001). Internetworking with TCP/IP – Principles, Protocols and Architecture. CET [i. e.] Computer Equipment and Trade. ISBN 86-7991-142-9.
Joseph G. Davies; Thomas F. Lee (2003). Microsoft Windows Server 2003 TCP/IP Protocols and Services. Microsoft Press. ISBN 0-7356-1291-9.
Forouzan, Behrouz A. (2003). TCP/IP Protocol Suite (2nd ed.). McGraw-Hill. ISBN 978-0-07-246060-5.
Craig Hunt (1998). TCP/IP Network Administration. O'Reilly. ISBN 1-56592-322-7.
Maufer, Thomas A. (1999). IP Fundamentals. Prentice Hall. ISBN 978-0-13-975483-8.
Ian McLean (2000). Windows 2000 TCP/IP Black Book. Coriolis Group Books. ISBN 1-57610-687-X.
Ajit Mungale (September 29, 2004). Pro .NET 1.1 Network Programming. Apress. ISBN 1-59059-345-6.
W. Richard Stevens (April 24, 1994). TCP/IP Illustrated, Volume 1: The Protocols. Addison-Wesley. ISBN 0-201-63346-9.
W. Richard Stevens; Gary R. Wright (1994). TCP/IP Illustrated, Volume 2: The Implementation. Addison-Wesley. ISBN 0-201-63354-X.
W. Richard Stevens (1996). TCP/IP Illustrated, Volume 3: TCP for Transactions, HTTP, NNTP, and the UNIX Domain Protocols. Addison-Wesley. ISBN 0-201-63495-3.
Andrew S. Tanenbaum (2003). Computer Networks. Prentice Hall PTR. ISBN 0-13-066102-3.
Clark, D. (1988). "The Design Philosophy of the DARPA Internet Protocols" (PDF). Proceedings of the Sigcomm '88 Symposium on Communications Architectures and Protocols. ACM. pp. 106–114. doi:10.1145/52324.52336. ISBN 978-0897912792. S2CID 6156615. Retrieved October 16, 2011.
Cerf, Vinton G.; Kahn, Robert E. (May 1974). "A Protocol for Packet Network Intercommunication" (PDF). IEEE Transactions on Communications. 22 (5): 637–648. doi:10.1109/TCOM.1974.1092259.
== External links ==
Internet History – Pages on Robert Kahn, Vinton Cerf, and TCP/IP (reviewed by Cerf and Kahn).
T. Socolofsky; C. Kale (January 1991). A TCP/IP Tutorial. Network Working Group. doi:10.17487/RFC1180. RFC 1180. Informational.
The Ultimate Guide to TCP/IP
The TCP/IP Guide – A comprehensive look at the protocols and the procedure and processes involved
A Study of the ARPANET TCP/IP Digest, archived from the original on December 4, 2021
A tree topology, or star-bus topology, is a hybrid network topology in which star networks are interconnected via bus networks. Tree networks are hierarchical, and each node can have an arbitrary number of child nodes.
== Regular tree networks ==
A regular tree network's topology is characterized by two parameters: the branching, d, and the number of generations, G. The total number of nodes, N, and the number of peripheral nodes, N_p, are given by

N={\frac {d^{G+1}-1}{d-1}},\quad N_{p}=d^{G}
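The closed forms above are geometric sums, which a quick sketch that builds the tree level by level can confirm:

```python
# Count nodes in a regular tree with branching d and G generations, then
# compare against the closed forms N = (d**(G+1) - 1)/(d - 1) and N_p = d**G.

def tree_counts(d: int, G: int) -> tuple:
    total, level = 1, 1          # the root is generation 0
    for _ in range(G):
        level *= d               # every node in a generation has d children
        total += level
    return total, level          # (all nodes, peripheral nodes)

d, G = 3, 4
N, Np = tree_counts(d, G)
print(N, Np)                               # 121 81
print(N == (d**(G + 1) - 1) // (d - 1))    # True
print(Np == d**G)                          # True
```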
== Random tree networks ==
Three parameters are crucial in determining the statistics of random tree networks: first, the branching probability; second, the maximum number of progeny allowed at each branching point; and third, the maximum number of generations that a tree can attain. Many studies address large tree networks; small tree networks are seldom studied.
== Tools to deal with networks ==
A group at MIT has developed a set of MATLAB functions that can help in analyzing networks. These tools can be used to study tree networks as well.
de Weck, Olivier L. "MIT Strategic Engineering Research Group (SERG), Part II". Retrieved May 1, 2018.
== References ==
A mesh network is a local area network topology in which the infrastructure nodes (i.e. bridges, switches, and other infrastructure devices) connect directly, dynamically and non-hierarchically to as many other nodes as possible and cooperate with one another to efficiently route data to and from clients.
This lack of dependency on one node allows for every node to participate in the relay of information. Mesh networks dynamically self-organize and self-configure, which can reduce installation overhead. The ability to self-configure enables dynamic distribution of workloads, particularly in the event a few nodes should fail. This in turn contributes to fault-tolerance and reduced maintenance costs.
Mesh topology may be contrasted with conventional star/tree local network topologies in which the bridges/switches are directly linked to only a small subset of other bridges/switches, and the links between these infrastructure neighbours are hierarchical. While star-and-tree topologies are very well established, highly standardized and vendor-neutral, vendors of mesh network devices have not yet all agreed on common standards, and interoperability between devices from different vendors is not yet assured.
== Basic principles ==
Mesh networks can relay messages using either a flooding or a routing technique, which makes them different from non-mesh networks. A routed message is propagated along a path by hopping from node to node until it reaches its destination. To ensure that all its paths are available, the network must allow for continuous connections and must reconfigure itself around broken paths, using self-healing algorithms such as Shortest Path Bridging and TRILL (Transparent Interconnection of Lots of Links). Self-healing allows a routing-based network to operate when a node breaks down or when a connection becomes unreliable. The network is typically quite reliable, as there is often more than one path between a source and a destination in the network. Although mostly used in wireless situations, this concept can also apply to wired networks and to software interaction.
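The routing idea described above (hopping node to node along a discovered path, and reconfiguring around broken links) can be sketched with a breadth-first search over an adjacency list; the topology here is invented for illustration, not drawn from any real network:

```python
from collections import deque

# Hop-by-hop route discovery through a small mesh; re-running the search
# after a link failure "self-heals" around the broken path.
mesh = {
    "A": {"B", "C"}, "B": {"A", "C", "D"},
    "C": {"A", "B", "D"}, "D": {"B", "C"},
}

def route(graph, src, dst):
    prev, queue = {src: None}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:              # walk predecessors to recover the path
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nb in sorted(graph[node]):
            if nb not in prev:
                prev[nb] = node
                queue.append(nb)
    return None                      # no route available

print(route(mesh, "A", "D"))         # ['A', 'B', 'D']

mesh["B"].discard("D")               # link B-D fails...
mesh["D"].discard("B")
print(route(mesh, "A", "D"))         # ['A', 'C', 'D'] -- rerouted around it
```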
A mesh network whose nodes are all connected to each other is a fully connected network. Fully connected wired networks are more secure and reliable: problems in a cable affect only the two nodes attached to it. In such networks, however, the number of cables, and therefore the cost, goes up rapidly as the number of nodes increases.
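The rapid cost growth follows from the edge count of a complete graph: n nodes need n(n−1)/2 cables.

```python
# Number of cables in a fully connected (complete) mesh of n nodes.
def cables(n: int) -> int:
    return n * (n - 1) // 2

for n in (5, 10, 50, 100):
    print(n, cables(n))
# 5 nodes need only 10 cables, but 100 nodes already need 4950:
# the cable count grows quadratically with the number of nodes.
```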
== Types ==
=== Wired mesh ===
Shortest path bridging and TRILL each allow Ethernet switches to be connected in a mesh topology and allow for all paths to be active. IP routing supports multiple paths from source to destination.
=== Wireless mesh ===
A wireless mesh network (WMN) is a network made up of radio nodes organized in a mesh topology. It can also be a form of wireless ad hoc network.
== See also ==
Category of mesh networking technologies
Bluetooth mesh networking
MENTOR routing algorithm
Optical mesh network
== References ==
== External links ==
NYU-NET3 at the Wayback Machine (archived 2015-07-08) Application of a tetrahedral structure to create a resilient partial-mesh 3-dimensional campus backbone data network
Phantom anonymous, decentralized network, isolated from the Internet
Disruption Tolerant Mesh Networks: autonomous machine controllers in mesh nodes operate despite loss of cloud connectivity.
Maximum-entropy random graph models are random graph models used to study complex networks subject to the principle of maximum entropy under a set of structural constraints, which may be global, distributional, or local.
== Overview ==
Any random graph model (at a fixed set of parameter values) results in a probability distribution on graphs, and those that are maximum entropy within the considered class of distributions have the special property of being maximally unbiased null models for network inference (e.g., biological network inference). Each model defines a family of probability distributions on the set of graphs of size n (for each n > n_0, for some finite n_0), parameterized by a collection of constraints on J observables {Q_j(G)}, j = 1, ..., J, defined for each graph G (such as a fixed expected average degree, a degree distribution of a particular form, or a specific degree sequence), enforced in the graph distribution alongside entropy maximization by the method of Lagrange multipliers. Note that in this context "maximum entropy" refers not to the entropy of a single graph, but rather to the entropy of the whole probabilistic ensemble of random graphs.
Several commonly studied random network models are in fact maximum entropy; for example, the ER graphs G(n, m) and G(n, p) (which each have one global constraint on the number of edges), as well as the configuration model (CM) and soft configuration model (SCM) (which each have n local constraints, one for each nodewise degree value). In the two pairs of models mentioned above, an important distinction is whether the constraint is sharp (i.e., satisfied by every element of the set of size-n graphs with nonzero probability in the ensemble) or soft (i.e., satisfied on average across the whole ensemble). The former (sharp) case corresponds to a microcanonical ensemble, the condition of maximum entropy yielding all graphs G satisfying Q_j(G) = q_j for all j as equiprobable; the latter (soft) case is canonical, producing an exponential random graph model (ERGM).
== Canonical ensemble of graphs (general framework) ==
Suppose we are building a random graph model consisting of a probability distribution P(G) on the set 𝒢_n of simple graphs with n vertices. The Gibbs entropy S[G] of this ensemble is given by

S[G]=-\sum _{G\in {\mathcal {G}}_{n}}\mathbb {P} (G)\log \mathbb {P} (G).
We would like the ensemble-averaged values ⟨Q_j⟩ of the observables Q_j(G) (such as average degree, average clustering, or average shortest path length) to be tunable, so we impose J "soft" constraints on the graph distribution:

\langle Q_{j}\rangle =\sum _{G\in {\mathcal {G}}_{n}}\mathbb {P} (G)Q_{j}(G)=q_{j},
where j = 1, ..., J labels the constraints. Applying the method of Lagrange multipliers to determine the distribution P(G) that maximizes S[G] while satisfying ⟨Q_j⟩ = q_j and the normalization condition ∑_{G∈𝒢_n} P(G) = 1 results in the following:
\mathbb {P} (G)={\frac {1}{Z}}\exp \left[-\sum _{j=1}^{J}\psi _{j}Q_{j}(G)\right],

where Z is a normalizing constant (the partition function) and the ψ_j are parameters (Lagrange multipliers) coupled to the correspondingly indexed graph observables, which may be tuned to yield graph samples with desired values of those properties on average; the result is an exponential family and a canonical ensemble, specifically an exponential random graph model (ERGM).
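For a concrete miniature example (a sketch, with an arbitrarily chosen multiplier ψ), take n = 3 vertices and a single observable Q(G) = number of edges. The 2³ = 8 simple graphs on 3 vertices can be enumerated by edge count, and the resulting canonical ensemble is exactly a G(n, p) model:

```python
import itertools
import math

# ERGM on 3 vertices with one observable Q(G) = edge count and one
# Lagrange multiplier psi (value chosen arbitrarily for illustration).
psi = 0.7
edges = list(itertools.combinations(range(3), 2))   # the 3 possible edges

weights = []
for k in range(len(edges) + 1):
    # math.comb(3, k) graphs have exactly k edges; each gets weight exp(-psi*k)
    weights += [math.exp(-psi * k)] * math.comb(len(edges), k)

Z = sum(weights)                  # partition function over all 8 graphs
probs = [w / Z for w in weights]
print(round(sum(probs), 10))      # 1.0 -- a valid probability distribution

# With p = exp(-psi) / (1 + exp(-psi)), this ensemble coincides with G(3, p):
p = math.exp(-psi) / (1 + math.exp(-psi))
print(abs(probs[0] - (1 - p) ** 3) < 1e-12)  # True (empty-graph probability)
```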
== The Erdős–Rényi model ==
In the canonical framework above, constraints were imposed on ensemble-averaged quantities ⟨Q_j⟩. Although these properties will on average take on values specifiable by appropriate setting of the parameters ψ_j, each specific instance G may have Q_j(G) ≠ q_j, which may be undesirable. Instead, we may impose a much stricter condition: every graph with nonzero probability must satisfy Q_j(G) = q_j exactly. Under these "sharp" constraints, the maximum-entropy distribution is determined. We exemplify this with the Erdős–Rényi model G(n, m).
The sharp constraint in
G
(
n
,
m
)
{\displaystyle G(n,m)}
is that of a fixed number of edges
m
{\displaystyle m}
, that is
|
E
(
G
)
|
=
m
{\displaystyle |\operatorname {E} (G)|=m}
, for all graphs
{\displaystyle G}
drawn from the ensemble (instantiated with a probability denoted
{\displaystyle \mathbb {P} _{n,m}(G)}
). This restricts the sample space from
{\displaystyle {\mathcal {G}}_{n}}
(all graphs on
{\displaystyle n}
vertices) to the subset
{\displaystyle {\mathcal {G}}_{n,m}=\{g\in {\mathcal {G}}_{n};|\operatorname {E} (g)|=m\}\subset {\mathcal {G}}_{n}}
. This is in direct analogy to the microcanonical ensemble in classical statistical mechanics, wherein the system is restricted to a thin manifold in the phase space of all states of a particular energy value.
Upon restricting our sample space to
{\displaystyle {\mathcal {G}}_{n,m}}
, we have no external constraints (besides normalization) to satisfy, and thus we'll select
{\displaystyle \mathbb {P} _{n,m}(G)}
to maximize
{\displaystyle S[G]}
without making use of Lagrange multipliers. It is well known that the entropy-maximizing distribution in the absence of external constraints is the uniform distribution over the sample space (see maximum entropy probability distribution), from which we obtain:
{\displaystyle \mathbb {P} _{n,m}(G)={\frac {1}{|{\mathcal {G}}_{n,m}|}}={\binom {\binom {n}{2}}{m}}^{-1},}
where the last expression in terms of binomial coefficients is the number of ways to place
{\displaystyle m}
edges among
{\displaystyle {\binom {n}{2}}}
possible edges, and thus is the cardinality of
{\displaystyle {\mathcal {G}}_{n,m}}.
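The counting argument can be checked by brute force for small n and m. The following sketch (illustrative only) enumerates the microcanonical sample space directly and compares its cardinality with the binomial-coefficient formula above.

```python
from itertools import combinations
from math import comb

def microcanonical_count(n, m):
    """Count graphs on n labelled vertices with exactly m edges by enumeration."""
    possible = list(combinations(range(n), 2))   # all C(n, 2) possible edges
    return sum(1 for _ in combinations(possible, m))

n, m = 5, 4
count = microcanonical_count(n, m)
assert count == comb(comb(n, 2), m)   # |G_{n,m}| = C(C(n,2), m)
p_uniform = 1 / count                 # P_{n,m}(G), the same for every valid G
```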
== Generalizations ==
A variety of maximum-entropy ensembles have been studied on generalizations of simple graphs. These include, for example, ensembles of simplicial complexes and weighted random graphs with a given expected degree sequence.
== See also ==
Principle of maximum entropy
Maximum entropy probability distribution
Method of Lagrange multipliers
Null model
Random graph
Exponential random graph model
Canonical ensemble
Microcanonical ensemble
== References == | Wikipedia/Maximum-entropy_random_graph_model |
The NPL network, or NPL Data Communications Network, was a local area computer network operated by the National Physical Laboratory (NPL) in London that pioneered the concept of packet switching.
Based on designs conceived by Donald Davies in 1965, development work began in 1966. Construction began in 1968 and elements of the first version of the network, the Mark I, became operational in early 1969 then fully operational in January 1970. The Mark II version operated from 1973 until 1986. The NPL network was the first computer network to implement packet switching and the first to use high-speed links. Its original design, along with the innovations implemented in the ARPANET and the CYCLADES network, laid down the technical foundations of the modern Internet.
== Origins ==
In 1965, Donald Davies, who was later appointed head of the NPL Division of Computer Science, proposed a commercial national data network in the United Kingdom based on packet switching in Proposal for the Development of a National Communications Service for On-line Data Processing. The following year, he refined his ideas in Proposal for a Digital Communication Network. The design was the first to describe the concept of an "interface computer", today known as a router.
A written version of the proposal entitled A digital communications network for computers giving rapid response at remote terminals was presented by Roger Scantlebury at the Symposium on Operating Systems Principles in 1967. The design involved transmitting signals (packets) across a network with a hierarchical structure. It was proposed that "local networks" be constructed with interface computers, which had responsibility for multiplexing among a number of user systems (time-sharing computers and other users) and for communicating with the "high level network". The latter would be constructed with "switching nodes" connected together with megabit-rate circuits (T1 links, which run at a 1.544 Mbit/s line rate). In Scantlebury's report following the conference, he noted: "It would appear that the ideas in the NPL paper at the moment are more advanced than any proposed in the USA".
=== Packet switching ===
The first theoretical foundation of packet switching was the work of Paul Baran, at RAND, in which data was transmitted in small chunks and routed independently by a method similar to store-and-forward techniques between intermediate networking nodes. Davies independently arrived at the same model in 1965 and named it packet switching. He chose the term "packet" after consulting with an NPL linguist because it was capable of being translated into languages other than English without compromise. In July 1968, NPL put on a demonstration of real and simulated networks at an event organised by the Real Time Club at the Royal Festival Hall in London. Davies gave the first public presentation of packet switching on 5 August 1968 at the IFIP Congress in Edinburgh.
Davies' original ideas influenced other research around the world. Larry Roberts incorporated these concepts into the design for the ARPANET. The NPL network initially proposed a line speed of 768 kbit/s. Influenced by this, the planned line speed for ARPANET was upgraded from 2.4 kbit/s to 50 kbit/s and a similar packet format adopted. Louis Pouzin's CYCLADES project in France was also influenced by Davies' work. These networks laid down the technical foundations of the modern Internet.
== Implementation and further research ==
=== Network development ===
Beginning in late 1966, Davies tasked Derek Barber, his deputy, with establishing a team to build a local-area network to serve the needs of NPL and prove the feasibility of packet switching. The team consisted of:
Data communications and team leader: Roger Scantlebury
Software: Peter Wilkinson (lead), John Laws, Carol Walsh, Keith Wilkinson (no relation) and Rex Haymes.
Hardware: Keith Bartlett (lead), Les Pink, Patrick Woodroffe, Brian Aldous, Peter Carter, Peter Neale and a few others.
The team worked through 1967 to produce design concepts for a wide-area network and a local-area network to demonstrate the technology. Construction of the local-area network began in 1968 using a Honeywell 516 node. The NPL team liaised with Honeywell in the adaptation of the DDP516 input/output controller, and, the following year, the ARPANET chose the same computer to serve as Interface Message Processors (IMPs).
Elements of the first version of the network, the Mark I NPL Network, became operational in early 1969 (before the ARPANET installed its first node), and the network was fully operational in January 1970. The local-area NPL network, followed by the wide-area ARPANET in the United States, were the first two computer networks to implement packet switching. The NPL network was also the first computer network to use high-speed links.
The NPL network was later interconnected with other networks, including the Post Office Experimental Packet Switched Service (EPSS) and the European Informatics Network (EIN) in 1976.
In 1976, 12 computers and 75 terminal devices were attached. The following year there were roughly 30 computers, 30 peripherals and 100 VDU terminals all able to interact through the NPL Network. The network remained in operation until 1986.
=== Protocol development ===
The first use of the term protocol in a modern data-communications context occurs in a memorandum entitled A Protocol for Use in the NPL Data Communications Network written by Roger Scantlebury and Keith Bartlett in April 1967. A further publication by Bartlett in 1968 introduced the concept of an alternating bit protocol (later used by the ARPANET and the EIN) and described the need for three levels of data transmission, roughly corresponding to the lower levels of the seven-layer OSI model that emerged a decade later.
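As a toy illustration of the alternating bit idea mentioned above (a sketch, not the NPL or ARPANET implementation), the following Python model retransmits each frame, tagged with a one-bit sequence number, until a matching acknowledgement arrives. Lost frames and lost ACKs are simulated with random drops, and the one-bit number lets the receiver discard duplicates:

```python
import random

def abp_transfer(messages, loss_rate=0.3, rng=random.Random(42)):
    """Toy alternating-bit protocol: the sender retransmits a frame until an
    ACK carrying the same one-bit sequence number arrives."""
    delivered, bit, expected = [], 0, 0
    for msg in messages:
        while True:
            if rng.random() < loss_rate:      # frame lost in transit
                continue                       # timeout -> retransmit
            if bit == expected:                # receiver accepts only the
                delivered.append(msg)          # sequence bit it expects,
                expected ^= 1                  # so duplicates are dropped
            if rng.random() < loss_rate:      # ACK lost -> retransmit
                continue
            break                              # ACK(bit) received
        bit ^= 1                               # alternate for the next frame
    return delivered

assert abp_transfer(["a", "b", "c"]) == ["a", "b", "c"]
```

Despite arbitrary frame and ACK losses, each message is delivered exactly once and in order, which is the essential guarantee of the alternating bit scheme.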
The Mark II version, which operated from 1973, used such a "layered" protocol architecture.
The NPL team also introduced the idea of protocol verification. Protocol verification was discussed in the November 1978 special edition of the Proceedings of the IEEE on packet switching.
=== Simulation studies ===
The NPL team also carried out simulation work on the performance of wide-area packet networks, studying datagrams and network congestion. This work was carried out to investigate networks of a size capable of providing data communications facilities to most of the U.K.
Davies proposed an adaptive method of congestion control that he called isarithmic.
=== Internetworking ===
The NPL network was a testbed for internetworking research throughout the 1970s. Davies, Scantlebury and Barber were active members of the International Network Working Group (INWG) formed in 1972. Vint Cerf and Bob Kahn acknowledged Davies and Scantlebury in their 1974 paper A Protocol for Packet Network Intercommunication, which DARPA developed into the Internet protocol suite used in the modern Internet.
Barber was appointed director of the European COST 11 project and played a leading part in the European Informatics Network (EIN). Scantlebury led the UK technical contribution, reporting directly to Donald Davies. The EIN protocol helped to launch the INWG and X.25 protocols. INWG proposed an international end-to-end protocol in 1975/6, although this was not widely adopted. Barber became the chair of INWG in 1976. He proposed and implemented a mail protocol for EIN.
NPL investigated the "basic dilemma" involved in internetworking; that is, a common host protocol would require restructuring existing networks if they were not designed to use the same protocol. NPL connected with the European Informatics Network by translating between two different host protocols, while the NPL connection to the Post Office Experimental Packet Switched Service used a common host protocol in both networks. This work confirmed that establishing a common host protocol would be more reliable and efficient.
Davies and Barber published Communication networks for computers in 1973 and Computer networks and their protocols in 1979. They spoke at the Data Communications Symposium in 1975 about the "battle for access standards" between datagrams and virtual circuits, with Barber saying the "lack of standard access interfaces for emerging public packet-switched communication networks is creating 'some kind of monster' for users". For a long period of time, the network engineering community was polarized over the implementation of competing protocol suites, commonly known as the Protocol Wars. It was unclear which type of protocol would result in the best and most robust computer networks.
=== Email ===
Derek Barber proposed an electronic mail protocol in 1979 in INWG 192 and implemented it on the EIN. This was referenced by Jon Postel in his early work on Internet email, published in the Internet Experiment Note series.
=== Network security ===
Davies' later research at NPL focused on data security for computer networks.
== Legacy ==
The concepts of packet switching, high-speed routers, layered communication protocols, hierarchical computer networks, and the essence of the end-to-end principle that were researched and developed at the NPL became fundamental to data communication in modern computer networks including the Internet.
Beyond NPL, and the designs of Paul Baran at RAND, DARPA was the most important institutional force, creating the ARPANET, the first wide-area packet-switched network, to which many other network designs at the time were compared or replicated. The ARPANET's routing, flow control, software design and network control were developed independently by the IMP team working for Bolt Beranek & Newman. The CYCLADES network designed by Louis Pouzin at the IRIA in France built on the work of Donald Davies and pioneered important improvements to the ARPANET design.
Moreover, in the view of some, the research and development of internetworking, and TCP/IP in particular (which was sponsored by DARPA), marks the true beginnings of the Internet. The adoption of TCP/IP and the early governance of the Internet were also fostered by DARPA.
NPL sponsors a gallery, opened in 2009, about the "Technology of the Internet" at The National Museum of Computing at Bletchley Park.
== See also ==
Coloured Book protocols
History of the Internet
Internet in the United Kingdom
JANET
UK Post Office Telecommunications and later British Telecommunications
Packet Switch Stream
International Packet Switched Service
Telecommunications in the United Kingdom
== References ==
== Further reading ==
Abbate, Janet (2000), Inventing the Internet, MIT Press, ISBN 9780262511155
Campbell-Kelly, Martin (1987). "Data Communications at the National Physical Laboratory (1965-1975)". IEEE Annals of the History of Computing. 9 (3): 221–247. doi:10.1109/MAHC.1987.10023. S2CID 8172150.
Hafner, Katie; Lyon, Matthew (1996). Where wizards stay up late : the origins of the Internet. New York : Simon & Schuster. ISBN 978-0-684-81201-4.
=== Primary sources ===
Davies, D. W. (10 November 1965), Remote On-line Data Processing and Its Communication Needs, Private papers.
Davies, D. W. (16 November 1965), Further Speculations on Data Transmission, Private papers.
Davies, D. W. (15 December 1965), Proposal for the Development of a National Communications Service for OnLine Data Processing, Private papers.
Davies, D. W. (June 1966), Proposal for a Digital Communication Network (PDF), Private papers.
Davies, D.W. (February 1967), A Store-and-Forward Communication Network for Real-Time Computers and their Peripherals. PO Colloquium on Message Switching.
Scantlebury, R. A.; K. A. Bartlett (February 1967). An NPL Data Communications Network Based on the Plessey XL12 Computer. Private papers.
Scantlebury, R. A.; Bartlett, K. A. (April 1967), A Protocol for Use in the NPL Data Communications Network, Private papers.
Davies, D.W. (July 1967) Some Design Aspects of a Communication Network for Rapid-Response Computers. Computer Technology Conference.
Davies, D. W.; Bartlett, K. A.; Scantlebury, R. A.; Wilkinson, P. T. (October 1967). A digital communications network for computers giving rapid response at remote terminals. ACM Symposium on Operating Systems Principles.
Scantlebury, R. A.; Wilkinson, P.T. (1971). The design of a switching system to allow remote access to computer services by other computers and terminal devices. Proceedings of the 2nd Symposium on Problems in the Optimization of Data Communications Systems. pp. 160–167.
Barber, D. L. A. (1972). Winkler, S (ed.). "The European computer network project". Computer Communications: Impacts and Implications. Washington, D.C.: 192–200.
Scantlebury, R. A.; Wilkinson, P.T. (1974). The National Physical Laboratory Data Communications Network. Proceedings of the 2nd ICCC 74. pp. 223–228.
== External links ==
"Publications and Conference Papers - Data Communications at the National Physical Laboratory". Jisc Archives Hub.
NPL Data Communications Network NPL video, 1970s
Government loses way in computer networks New Scientist, 1975
The Story of Packet Switching Interview with Roger Scantlebury, Peter Wilkinson, Keith Bartlett, and Brian Aldous, 2011
The birth of the Internet in the UK Google video featuring Roger Scantlebury, Peter Wilkinson, Peter Kirstein and Vint Cerf, 2013 | Wikipedia/NPL_network |
An application service provider (ASP) is a business providing application software generally through the Web. ASPs that specialize in a particular application (such as a medical billing program) may be referred to as providing software as a service.
== The ASP model ==
The application software resides on the vendor's system and is accessed by users through a communication protocol. Alternatively, the vendor may provide special purpose client software. Client software may interface with these systems through an application programming interface.
ASP characteristics include:
ASP hosts the application
ASP owns, operates and maintains the servers that support the application
ASP delivers the application to customers via the Internet or a thin client
ASP may bill on a per-use basis (on-demand outsourcing), a monthly/annual fee, or a per-labor hour basis
The advantages to this approach include:
Application costs are scaled over multiple customers
ASP may provide more application experience than the customer's staff
ASP may provide application customization for the customer
Application's version is likely to be kept up to date
Experts manage the application for performance
Experts research the application for new features
The disadvantages include:
The customer must rely on the ASP for a critical business function, including security and performance
The customer may have to accept the application as provided
The customer may have to adapt to possible application changes
Integration with other applications may be problematic
== See also ==
Application server
Business service provider
Communication as a service
Hosted service provider
Multitenancy
Outsourcing
Service level agreement
Utility computing
Web application
== References == | Wikipedia/Application_service_provider |
A network address is an identifier for a node or host on a telecommunications network. Network addresses are designed to be unique identifiers across the network, although some networks allow for local, private addresses, or locally administered addresses that may not be unique. Special network addresses are allocated as broadcast or multicast addresses. These too are not unique.
In some cases, network hosts may have more than one network address. For example, each network interface controller may be uniquely identified. Further, because protocols are frequently layered, more than one protocol's network address can occur in any particular network interface or node and more than one type of network address may be used in any one network.
Network addresses can be flat addresses which contain no information about the node's location in the network (such as a MAC address), or may contain structure or hierarchical information for the routing (such as an IP address).
== Examples ==
Examples of network addresses include:
Telephone number, in the public switched telephone network
IP address in IP networks including the Internet
IPX address, in NetWare
X.25 or X.21 address, in a circuit switched data network
MAC address, in Ethernet and other related IEEE 802 network technologies
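The flat-versus-hierarchical distinction drawn above can be illustrated with Python's standard ipaddress module; the MAC handling below is a plain string parse, shown only as a sketch:

```python
import ipaddress

# Hierarchical: an IP address embeds routing structure (network prefix + host).
ip = ipaddress.ip_interface("192.0.2.42/24")
assert str(ip.network) == "192.0.2.0/24"      # routable network prefix
assert ip.ip in ip.network                     # the host lies inside its network

# Flat: a MAC address carries no location information. Only the OUI
# (first three octets) identifies the hardware vendor, not a place in
# the network topology.
mac = "00:1a:2b:3c:4d:5e"
octets = mac.split(":")
assert len(octets) == 6 and all(len(o) == 2 for o in octets)
oui = ":".join(octets[:3])                     # vendor identifier, not a route
```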
== References ==
== External links ==
Media related to Network addressing at Wikimedia Commons | Wikipedia/Network_address |
A local area network (LAN) is a computer network that interconnects computers within a limited area such as a residence, campus, or building, and has its network equipment and interconnects locally managed. LANs facilitate the distribution of data and sharing network devices, such as printers.
The LAN contrasts with the wide area network (WAN), which not only covers a larger geographic distance but also generally involves leased telecommunication circuits or Internet links. An even greater contrast is the Internet, which is a system of globally connected business and personal computers.
Ethernet and Wi-Fi are the two most common technologies used for local area networks; historical network technologies include ARCNET, Token Ring, and LocalTalk.
== Cabling ==
Most wired network infrastructures utilize Category 5 or Category 6 twisted pair cabling with RJ45 compatible terminations. This medium provides physical connectivity between the Ethernet interfaces present on a large number of IP-aware devices. Depending on the grade of cable and quality of installation, speeds of up to 10 Mbit/s, 100 Mbit/s, 1 Gbit/s, or 10 Gbit/s are supported.
== Wireless LAN ==
In a wireless LAN, users have unrestricted movement within the coverage area. Wireless networks have become popular in residences and small businesses because of their ease of installation, convenience, and flexibility. Most wireless LANs consist of devices containing wireless radio technology that conforms to 802.11 standards as certified by the IEEE. Most wireless-capable residential devices operate at both the 2.4 GHz and 5 GHz frequencies and fall within the 802.11n or 802.11ac standards. Some older home networking devices operate exclusively at a frequency of 2.4 GHz under 802.11b and 802.11g, or 5 GHz under 802.11a. Some newer devices operate at the aforementioned frequencies in addition to 6 GHz under Wi-Fi 6E. Wi-Fi is a marketing and compliance certification for IEEE 802.11 technologies. The Wi-Fi Alliance has tested compliant products, and certifies them for interoperability. The technology may be integrated into smartphones, tablet computers and laptops. Guests are often offered Internet access via a hotspot service.
== Infrastructure and technicals ==
Simple LANs in office or school buildings generally consist of cabling and one or more network switches; a switch is used to allow devices on a LAN to talk to one another via Ethernet. A switch can be connected to a router, cable modem, or ADSL modem for Internet access. LANs at residential homes usually tend to have a single router and often may include a wireless repeater. A LAN can include a wide variety of other network devices such as firewalls, load balancers, and network intrusion detection. A wireless access point is required for connecting wireless devices to a network; when a router includes this device, it is referred to as a wireless router.
Advanced LANs are characterized by their use of redundant links with switches using the Spanning Tree Protocol to prevent loops, their ability to manage differing traffic types via quality of service (QoS), and their ability to segregate traffic with VLANs. A network bridge binds two different LANs or LAN segments to each other, often in order to grant a wired-only device access to a wireless network medium.
Network topology describes the layout of interconnections between devices and network segments. At the data link layer and physical layer, a wide variety of LAN topologies have been used, including ring, bus, mesh and star. The star topology is the most common in contemporary times. Wireless LAN (WLAN) also has its own topologies: the independent basic service set (IBSS, an ad hoc network), in which each node connects directly to the others (also standardized as Wi-Fi Direct), and the basic service set (BSS, an infrastructure network that uses a wireless access point).
=== Network layer configuration ===
DHCP is used to assign internal IP addresses to members of a local area network. A DHCP server typically runs on the router, with end devices as its clients. All DHCP clients request configuration settings using the DHCP protocol in order to acquire their IP address, a default route and one or more DNS server addresses. Once the client applies these settings, it is able to communicate on that network.
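As an illustrative sketch only (the class and method names are invented, and real DHCP involves a four-step DISCOVER/OFFER/REQUEST/ACK exchange plus lease timers), a DHCP-style allocator can be modelled as a pool of free host addresses plus per-client settings:

```python
import ipaddress

class LeasePool:
    """Toy DHCP-style allocator: hands out free host addresses from a subnet."""
    def __init__(self, cidr, router, dns):
        net = ipaddress.ip_network(cidr)
        # Every usable host address except the router's own.
        self.free = [h for h in net.hosts() if h != ipaddress.ip_address(router)]
        self.router, self.dns, self.leases = router, dns, {}

    def request(self, mac):
        if mac not in self.leases:                 # renewals keep their address
            self.leases[mac] = self.free.pop(0)
        return {"ip": str(self.leases[mac]),       # assigned address
                "router": self.router,             # default route
                "dns": self.dns}                   # name server addresses

pool = LeasePool("192.168.1.0/24", router="192.168.1.1", dns=["192.168.1.1"])
cfg = pool.request("aa:bb:cc:dd:ee:01")
assert cfg["ip"] == "192.168.1.2"     # first free host after the router
assert pool.request("aa:bb:cc:dd:ee:01")["ip"] == "192.168.1.2"  # stable renewal
```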
=== Protocols ===
At the higher network layers, protocols such as NetBIOS, IPX/SPX, AppleTalk and others were once common, but the Internet protocol suite (TCP/IP) has prevailed as the standard of choice for almost all local area networks today.
=== Connection to other LANs ===
LANs can maintain connections with other LANs via leased lines, leased services, or across the Internet using virtual private network technologies. Depending on how the connections are established and secured, and the distance involved, such linked LANs may also be classified as a metropolitan area network (MAN) or a wide area network (WAN).
=== Connection to the Internet ===
Local area networks may be connected to the Internet (a type of WAN) via fixed-line means (such as a DSL/ADSL modem) or alternatively using a cellular or satellite modem. These would additionally make use of telephone wires such as VDSL and VDSL2, coaxial cables, or fiber to the home for running fiber-optic cables directly into a house or office building, or alternatively a cellular modem or satellite dish in the latter non-fixed cases. With Internet access, the Internet service provider (ISP) would grant a single WAN-facing IP address to the network. A router is configured with the provider's IP address on the WAN interface, which is shared among all devices in the LAN by network address translation.
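The address sharing described above can be sketched as a toy port-translation (NAPT) table; the names and port range below are illustrative, not a real router implementation:

```python
import itertools

class Nat:
    """Toy NAPT: rewrites (lan_ip, lan_port) to (wan_ip, wan_port) and back."""
    def __init__(self, wan_ip):
        self.wan_ip = wan_ip
        self.ports = itertools.count(40000)   # pool of external ports
        self.out, self.back = {}, {}

    def outbound(self, lan_ip, lan_port):
        key = (lan_ip, lan_port)
        if key not in self.out:               # reuse the mapping per flow
            wan_port = next(self.ports)
            self.out[key] = wan_port
            self.back[wan_port] = key
        return (self.wan_ip, self.out[key])

    def inbound(self, wan_port):
        return self.back[wan_port]            # route the reply to the right host

nat = Nat("203.0.113.7")
src = nat.outbound("192.168.1.10", 51515)
assert src == ("203.0.113.7", 40000)          # all hosts share one WAN address
assert nat.inbound(40000) == ("192.168.1.10", 51515)
```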
A gateway establishes physical and data link layer connectivity to a WAN over a service provider's native telecommunications infrastructure. Such devices typically contain a cable, DSL, or optical modem bound to a network interface controller for Ethernet. Home and small business class routers are often incorporated into these devices for additional convenience, and they often also have an integrated wireless access point and a 4-port Ethernet switch.
The ITU-T G.hn and IEEE powerline standards, which provide high-speed (up to 1 Gbit/s) local area networking over existing home wiring, are examples of home networking technology designed specifically for IPTV delivery.
== History and development of LAN ==
=== Early installations ===
The increasing demand and usage of computers in universities and research labs in the late 1960s generated the need to provide high-speed interconnections between computer systems. A 1970 report from the Lawrence Radiation Laboratory detailing the growth of their "Octopus" network gave a good indication of the situation.
A number of experimental and early commercial LAN technologies were developed in the 1970s. Ethernet was developed at Xerox PARC between 1973 and 1974. The Cambridge Ring was developed at Cambridge University starting in 1974. ARCNET was developed by Datapoint Corporation in 1976 and announced in 1977. It had the first commercial installation in December 1977 at Chase Manhattan Bank in New York. In 1979, the electronic voting system for the European Parliament was the first installation of a LAN connecting hundreds (420) of microprocessor-controlled voting terminals to a polling/selecting central unit via a multidrop bus with master/slave arbitration. It used 10 kilometers of simple unshielded twisted pair category 3 cable—the same cable used for telephone systems—installed inside the benches of the European Parliament hemicycles in Strasbourg and Luxembourg.
The development and proliferation of personal computers using the CP/M operating system in the late 1970s, and later DOS-based systems starting in 1981, meant that many sites grew to dozens or even hundreds of computers. The initial driving force for networking was to share storage and printers, both of which were expensive at the time. There was much enthusiasm for the concept, and for several years, from about 1983 onward, computer industry pundits habitually declared the coming year to be, "The year of the LAN".
=== Competing standards ===
In practice, the concept was marred by the proliferation of incompatible physical layer and network protocol implementations, and a plethora of methods of sharing resources. Typically, each vendor would have its own type of network card, cabling, protocol, and network operating system. A solution appeared with the advent of Novell NetWare which provided even-handed support for dozens of competing card and cable types, and a much more sophisticated operating system than most of its competitors.
Of the competitors to NetWare, only Banyan Vines had comparable technical strengths, but Banyan never gained a secure base. 3Com produced 3+Share and Microsoft produced MS-Net. These then formed the basis for collaboration between Microsoft and 3Com to create a simple network operating system, LAN Manager, and its cousin, IBM's LAN Server. None of these enjoyed any lasting success; NetWare dominated the personal computer LAN business from soon after its introduction in 1983 until the mid-1990s, when Microsoft introduced Windows NT.
In 1983, TCP/IP was first shown capable of supporting actual defense department applications on a Defense Communication Agency LAN testbed located at Reston, Virginia. The TCP/IP-based LAN successfully supported Telnet, FTP, and a Defense Department teleconferencing application. This demonstrated the feasibility of employing TCP/IP LANs to interconnect Worldwide Military Command and Control System (WWMCCS) computers at command centers throughout the United States. However, WWMCCS was superseded by the Global Command and Control System (GCCS) before that could happen.
During the same period, Unix workstations were using TCP/IP networking. Although the workstation market segment is now much reduced, the technologies developed in the area continue to be influential on the Internet and in all forms of networking—and the TCP/IP protocol has replaced IPX, AppleTalk, NBF, and other protocols used by the early PC LANs.
Econet was Acorn Computers's low-cost local area network system, intended for use by schools and small businesses. It was first developed for the Acorn Atom and Acorn System 2/3/4 computers in 1981.
=== Further development ===
In the 1980s, several token ring network implementations for LANs were developed. IBM released its own implementation of Token Ring in 1985; it ran at 4 Mbit/s. IBM claimed that their token ring systems were superior to Ethernet, especially under load, but these claims were debated; while the slow but inexpensive AppleTalk was popular for Macs, in 1987 InfoWorld said, "No LAN has stood out as the clear leader, even in the IBM world". IBM's implementation of token ring was the basis of the IEEE 802.5 standard. A 16 Mbit/s version of Token Ring was standardized by the 802.5 working group in 1989. IBM had market dominance over Token Ring; in 1990, for example, IBM equipment was the most widely used for Token Ring networks.
Fiber Distributed Data Interface (FDDI), a LAN standard, was considered an attractive campus backbone network technology in the early to mid 1990s since existing Ethernet networks only offered 10 Mbit/s data rates and Token Ring networks only offered 4 Mbit/s or 16 Mbit/s rates. Thus it was a relatively high-speed choice of that era, with speeds such as 100 Mbit/s.
By 1994, vendors included Cisco Systems, National Semiconductor, Network Peripherals, SysKonnect (acquired by Marvell Technology Group), and 3Com. FDDI installations have largely been replaced by Ethernet deployments.
== See also ==
Asynchronous Transfer Mode
Chaosnet
LAN messenger
LAN party
Network interface controller
== References ==
== External links ==
Media related to Local area networks (LAN) at Wikimedia Commons | Wikipedia/Local_area_network |
A near-me area network is a logical grouping of communication devices that are in close physical proximity to each other, but not necessarily connected to the same communication network infrastructure. Thus, two smartphones connected via different mobile carriers may form a near-me area network.
Near-me area network applications focus on communications among devices within a certain proximity to each other, but don't generally concern themselves with the devices' exact locations.
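A minimal sketch of this proximity grouping (illustrative only; positions are flattened to planar coordinates in metres, whereas real services would use GPS fixes and geodesic distance):

```python
from math import hypot

def near_me(devices, me, radius):
    """Group devices within `radius` of `me`, regardless of carrier or
    network infrastructure. Positions are planar (x, y) coordinates in
    metres -- a simplification of real geolocation."""
    (x0, y0) = devices[me]
    return sorted(d for d, (x, y) in devices.items()
                  if d != me and hypot(x - x0, y - y0) <= radius)

devices = {                      # device -> position; the carrier is irrelevant
    "alice": (0.0, 0.0),
    "bob": (30.0, 40.0),         # 50 m away from alice
    "carol": (300.0, 400.0),     # 500 m away from alice
}
assert near_me(devices, "alice", 100.0) == ["bob"]
```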
== Background ==
The Internet consists of different types of communication networks. Common types include local area networks (LAN), metropolitan area networks (MAN), and wide area networks (WAN). Local area networks have the coverage of a small geographic area, such as a school, residence, building, or company. Metropolitan area networks cover a larger area, such as a city or state. Wide area networks provide communication in a broad geographic area covering national and international locations. Personal area networks (PANs) are wireless LANs with a very short range (up to a few meters), enabling computer devices (such as PDAs and printers) to communicate with other nearby devices and computers.
The concept of near-me area networks has become relevant with the increasing popularity of location-sensitive (GPS-enabled) mobile devices, including iPhone and Android smartphones. Some services are meaningful only to a group of people in close proximity.
== Examples ==
Ben is going to the ABC supermarket to buy three bottles of red wine. The supermarket offers a 30 percent discount on the purchase of six bottles, so he sends a message to other customers to see if they would like to buy the other three bottles of wine.
Susan bought a movie ticket 15 minutes ago, but she now feels dizzy and can't watch the film. She sends out messages to people around the cinema to see if anyone will purchase her ticket at 40 percent off.
In a theme park, guests would like to know each ride's queue status to manage their waiting time. So, they take a photo of the queue they are in and share it with other guests through a network application.
Ann works at Causeway Bay and would like to find someone to have lunch with. She checks her friend list to see who is closest to her at this moment and invites that friend to join her.
Carol just lost her son in the street, so she sends out his picture, which is stored in her mobile device, to passers-by to see if they can find him.
== See also ==
Location-based service
== References == | Wikipedia/Near-me_area_network |
In telecommunications and computer networking, a network packet is a formatted unit of data carried by a packet-switched network. A packet consists of control information and user data; the latter is also known as the payload. Control information provides data for delivering the payload (e.g., source and destination network addresses, error detection codes, or sequencing information). Typically, control information is found in packet headers and trailers.
In packet switching, the bandwidth of the transmission medium is shared between multiple communication sessions, in contrast to circuit switching, in which circuits are preallocated for the duration of one session and data is typically transmitted as a continuous bit stream.
== Terminology ==
In the seven-layer OSI model of computer networking, packet strictly refers to a protocol data unit at layer 3, the network layer. A data unit at layer 2, the data link layer, is a frame. In layer 4, the transport layer, the data units are segments and datagrams. Thus, in the example of TCP/IP communication over Ethernet, a TCP segment is carried in one or more IP packets, which are each carried in one or more Ethernet frames.
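The nesting of data units across layers can be sketched as follows; the header layouts here are simplified placeholders for illustration, not real protocol formats:

```python
# Sketch of PDU encapsulation: a TCP segment (layer 4) is carried inside
# an IP packet (layer 3), which is carried inside an Ethernet frame
# (layer 2). Headers are simplified stand-ins, not real wire formats.

def tcp_segment(payload: bytes, src_port: int, dst_port: int) -> bytes:
    header = src_port.to_bytes(2, "big") + dst_port.to_bytes(2, "big")
    return header + payload

def ip_packet(segment: bytes, src: str, dst: str) -> bytes:
    header = bytes(map(int, src.split("."))) + bytes(map(int, dst.split(".")))
    return header + segment

def ethernet_frame(packet: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    return dst_mac + src_mac + packet

seg = tcp_segment(b"hello", src_port=49152, dst_port=80)
pkt = ip_packet(seg, "192.0.2.1", "198.51.100.7")
frame = ethernet_frame(pkt, b"\xaa" * 6, b"\xbb" * 6)
# The frame carries the packet, which carries the segment:
assert frame.endswith(seg)
```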
== Architecture ==
The basis of the packet concept is the postal letter: the header is like the envelope, the payload is the content inside the envelope, and the footer is like the sender's signature at the bottom.
Network design can achieve two major results by using packets: error detection and multiple host addressing.
== Framing ==
Communications protocols use various conventions for distinguishing the elements of a packet and for formatting the user data. For example, in Point-to-Point Protocol, the packet is formatted in 8-bit bytes, and special characters are used to delimit elements. Other protocols, like Ethernet, establish the start of the header and data elements by their location relative to the start of the packet. Some protocols format the information at a bit level instead of a byte level.
== Contents ==
A packet may contain any of the following components:
Addresses
The routing of network packets requires two network addresses, the source address of the sending host, and the destination address of the receiving host.
Error detection and correction
Error detection and correction is performed at various layers in the protocol stack. Network packets may contain a checksum, parity bits or cyclic redundancy checks to detect errors that occur during transmission.
At the transmitter, the calculation is performed before the packet is sent. When received at the destination, the checksum is recalculated, and compared with the one in the packet. If discrepancies are found, the packet may be corrected or discarded. Any packet loss due to these discards is dealt with by the network protocol.
In some cases, modifications of the network packet may be necessary while routing, in which cases checksums are recalculated.
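As a rough sketch of the sender/receiver check described above, here using a CRC-32 appended as a trailer (an illustrative choice; real protocols place their checksums in various header and trailer fields):

```python
import zlib

def make_packet(payload: bytes) -> bytes:
    # Transmitter: compute the CRC over the payload and append it.
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "big")

def check_packet(packet: bytes) -> bool:
    # Receiver: recalculate the CRC and compare with the one in the packet.
    payload, trailer = packet[:-4], packet[-4:]
    return zlib.crc32(payload) == int.from_bytes(trailer, "big")

pkt = make_packet(b"user data")
assert check_packet(pkt)            # intact packet passes the check
corrupted = b"X" + pkt[1:]          # first byte flipped in transit
assert not check_packet(corrupted)  # discrepancy found: discard
```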
Hop limit
Under fault conditions, packets can end up traversing a closed circuit. If nothing was done, eventually the number of packets circulating would build up until the network was congested to the point of failure. Time to live is a field that is decreased by one each time a packet goes through a network hop. If the field reaches zero, routing has failed, and the packet is discarded.
Ethernet packets have no time-to-live field and so are subject to broadcast storms in the presence of a switching loop.
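A minimal sketch of the hop-limit mechanism, with packets modelled as plain dictionaries (an illustrative simplification, not a real router implementation):

```python
def forward(packet):
    """One router hop: decrement the TTL, discard the packet at zero."""
    ttl = packet["ttl"] - 1
    if ttl <= 0:
        return None  # routing has failed; the packet is discarded
    return {**packet, "ttl": ttl}

# A packet caught in a routing loop is dropped once its TTL runs out,
# rather than circulating forever.
packet = {"dst": "203.0.113.9", "ttl": 4}
hops = 0
while packet is not None:
    packet = forward(packet)
    hops += 1
assert hops == 4
```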
Length
There may be a field to identify the overall packet length. However, in some types of networks, the length is implied by the duration of the transmission.
Protocol identifier
It is often desirable to carry multiple communication protocols on a network. A protocol identifier field specifies a packet's protocol and allows the protocol stack to process many types of packets.
Priority
Some networks implement quality of service which can prioritize some types of packets above others. This field indicates which packet queue should be used; a high-priority queue is emptied more quickly than lower-priority queues at points in the network where congestion is occurring.
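A sketch of such a priority queue using a binary heap; the traffic classes and numeric priorities below are illustrative assumptions (lower number = higher priority):

```python
import heapq

# Sketch of a priority-aware output port: packets from the highest-priority
# class are dequeued first when the link is congested.
queue = []
seq = 0  # tie-breaker preserving FIFO order within a priority class

def enqueue(priority: int, payload: str) -> None:
    global seq
    heapq.heappush(queue, (priority, seq, payload))
    seq += 1

enqueue(2, "bulk transfer")
enqueue(0, "voice")
enqueue(1, "video")
order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
assert order == ["voice", "video", "bulk transfer"]
```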
Payload
In general, the payload is the data that is carried on behalf of an application. It is usually of variable length, up to a maximum that is set by the network protocol and sometimes the equipment on the route. When necessary, some networks can break a larger packet into smaller packets.
== Examples ==
=== Internet protocol ===
IP packets are composed of a header and payload. The header consists of fixed and optional fields. The payload appears immediately after the header. An IP packet has no trailer. However, an IP packet is often carried as the payload inside an Ethernet frame, which has its own header and trailer.
Per the end-to-end principle, IP networks do not provide guarantees of delivery, non-duplication, or in-order delivery of packets. However, it is common practice to layer a reliable transport protocol such as Transmission Control Protocol on top of the packet service to provide such protection.
=== NASA Deep Space Network ===
The Consultative Committee for Space Data Systems (CCSDS) packet telemetry standard defines the protocol used for the transmission of spacecraft instrument data over the deep-space channel. Under this standard, an image or other data sent from a spacecraft instrument is transmitted using one or more packets.
=== MPEG packetized stream ===
Packetized elementary stream (PES) is a specification associated with the MPEG-2 standard that allows an elementary stream to be divided into packets. The elementary stream is packetized by encapsulating sequential data bytes from the elementary stream between PES packet headers.
A typical method of transmitting elementary stream data from a video or audio encoder is to first create PES packets from the elementary stream data and then to encapsulate these PES packets inside MPEG transport stream (TS) packets or an MPEG program stream (PS). The TS packets can then be transmitted using broadcasting techniques, such as those used in ATSC and DVB.
=== NICAM ===
In order to provide mono compatibility, the NICAM signal is transmitted on a subcarrier alongside the sound carrier. This means that the regular FM or AM mono sound carrier is left alone for reception by monaural receivers. The NICAM packet (except for the header) is scrambled with a nine-bit pseudo-random bit-generator before transmission. Making the NICAM bitstream look more like white noise is important because this reduces signal patterning on adjacent TV channels.
== See also ==
== References == | Wikipedia/Network_packet |
In the context of network theory, a complex network is a graph (network) with non-trivial topological features—features that do not occur in simple networks such as lattices or random graphs but often occur in networks representing real systems. The study of complex networks is a young and active area of scientific research (since 2000) inspired largely by empirical findings of real-world networks such as computer networks, biological networks, technological networks, brain networks, climate networks and social networks.
== Definition ==
Most social, biological, and technological networks display substantial non-trivial topological features, with patterns of connection between their elements that are neither purely regular nor purely random. Such features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure, and hierarchical structure. In the case of directed networks these features also include reciprocity, triad significance profile and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features. The most complex structures can be realized by networks with a medium number of interactions. This corresponds to the fact that the maximum information content (entropy) is obtained for medium probabilities.
Two well-known and much studied classes of complex networks are scale-free networks and small-world networks, whose discovery and definition are canonical case-studies in the field. Both are characterized by specific structural features—power-law degree distributions for the former and short path lengths and high clustering for the latter. However, as the study of complex networks has continued to grow in importance and popularity, many other aspects of network structures have attracted attention as well.
The field continues to develop at a brisk pace, and has brought together researchers from many areas including mathematics, physics, electric power systems, biology, climate, computer science, sociology, epidemiology, and others. Ideas and tools from network science and engineering have been applied to the analysis of metabolic and genetic regulatory networks; the study of ecosystem stability and robustness; clinical science; the modeling and design of scalable communication networks such as the generation and visualization of complex wireless networks; and a broad range of other practical issues. Network science is the topic of many conferences in a variety of different fields, and has been the subject of numerous books both for the lay person and for the expert.
== Scale-free networks ==
A network is called scale-free if its degree distribution, i.e., the probability that a node selected uniformly at random has a certain number of links (degree), follows a mathematical function called a power law. The power law implies that the degree distribution of these networks has no characteristic scale. In contrast, networks with a single well-defined scale are somewhat similar to a lattice in that every node has (roughly) the same degree. Examples of networks with a single scale include the Erdős–Rényi (ER) random graph, random regular graphs, regular lattices, and hypercubes. Some models of growing networks that produce scale-invariant degree distributions are the Barabási–Albert model and the fitness model. In a network with a scale-free degree distribution, some vertices have a degree that is orders of magnitude larger than the average - these vertices are often called "hubs", although this language is misleading as, by definition, there is no inherent threshold above which a node can be viewed as a hub. If there were such a threshold, the network would not be scale-free.
Interest in scale-free networks began in the late 1990s with the reporting of discoveries of power-law degree distributions in real-world networks such as the World Wide Web, the network of Autonomous Systems (ASs), some networks of Internet routers, protein interaction networks, email networks, etc. Most of these reported "power laws" fail when challenged with rigorous statistical testing, but the more general idea of heavy-tailed degree distributions, which many of these networks do genuinely exhibit (before finite-size effects occur), is very different from what one would expect if edges existed independently and at random (i.e., if they followed a Poisson distribution). There are many different ways to build a network with a power-law degree distribution. The Yule process is a canonical generative process for power laws and has been known since 1925. However, it is known by many other names due to its frequent reinvention, e.g., the Gibrat principle by Herbert A. Simon, the Matthew effect, cumulative advantage, and preferential attachment by Barabási and Albert for power-law degree distributions. Recently, hyperbolic geometric graphs have been suggested as yet another way of constructing scale-free networks.
Some networks with a power-law degree distribution (and specific other types of structure) can be highly resistant to the random deletion of vertices, i.e., the vast majority of vertices remain connected together in a giant component. Such networks can also be quite sensitive to targeted attacks aimed at fracturing the network quickly. When the graph is uniformly random except for the degree distribution, these critical vertices are the ones with the highest degree, and have thus been implicated in the spread of disease (natural and artificial) in social and communication networks, and in the spread of fads (both of which are modeled by a percolation or branching process). While random graphs (ER) have an average distance of order log N between nodes, where N is the number of nodes, scale-free graphs can have a distance of order log log N.
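A minimal sketch of preferential attachment, one flavour of the generative processes mentioned above (here the Barabási–Albert model with a single link per new node):

```python
import random

def preferential_attachment(n: int, seed: int = 0) -> list:
    """Grow a tree by preferential attachment (Barabási–Albert, m=1):
    each new node links to an existing node with probability proportional
    to that node's current degree. Returns the degree of every node."""
    rng = random.Random(seed)
    # Listing each endpoint once per incident edge makes a uniform draw
    # from this list equivalent to degree-proportional sampling.
    endpoints = [0, 1]          # start from a single edge 0--1
    degree = [1, 1]
    for new in range(2, n):
        target = rng.choice(endpoints)
        endpoints += [new, target]
        degree.append(1)
        degree[target] += 1
    return degree

deg = preferential_attachment(5000)
# Heavy tail: the best-connected node (a "hub") has a degree far above
# the average, which is just under 2 in a tree.
assert max(deg) > 10 * (sum(deg) / len(deg))
```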
== Small-world networks ==
A network is called a small-world network by analogy with the small-world phenomenon (popularly known as six degrees of separation). The small world hypothesis, which was first described by the Hungarian writer Frigyes Karinthy in 1929, and tested experimentally by Stanley Milgram (1967), is the idea that two arbitrary people are connected by only six degrees of separation, i.e. the diameter of the corresponding graph of social connections is not much larger than six. In 1998, Duncan J. Watts and Steven Strogatz published the first small-world network model, which through a single parameter smoothly interpolates between a random graph and a lattice. Their model demonstrated that with the addition of only a small number of long-range links, a regular graph, in which the diameter is proportional to the size of the network, can be transformed into a "small world" in which the average number of edges between any two vertices is very small (mathematically, it should grow as the logarithm of the size of the network), while the clustering coefficient stays large. It is known that a wide variety of abstract graphs exhibit the small-world property, e.g., random graphs and scale-free networks. Further, real world networks such as the World Wide Web and the metabolic network also exhibit this property.
In the scientific literature on networks, there is some ambiguity associated with the term "small world". In addition to referring to the size of the diameter of the network, it can also refer to the co-occurrence of a small diameter and a high clustering coefficient. The clustering coefficient is a metric that represents the density of triangles in the network. For instance, sparse random graphs have a vanishingly small clustering coefficient while real world networks often have a coefficient significantly larger. Scientists point to this difference as suggesting that edges are correlated in real world networks. Approaches have been developed to generate network models that exhibit high correlations, while preserving the desired degree distribution and small-world properties. These approaches can be used to generate analytically solvable toy models for research into these systems.
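A rough sketch of the Watts–Strogatz construction described above; the parameters and the single-source path-length estimate are illustrative simplifications:

```python
import random

def watts_strogatz(n: int, k: int, p: float, seed: int = 0) -> dict:
    """Watts-Strogatz sketch: a ring lattice in which each node links to
    its k nearest neighbours, with each edge rewired with probability p."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for j in range(1, k // 2 + 1):
            w = (v + j) % n
            if rng.random() < p:  # rewire to a random non-neighbour
                w = rng.randrange(n)
                while w == v or w in adj[v]:
                    w = rng.randrange(n)
            adj[v].add(w)
            adj[w].add(v)
    return adj

def avg_path_length(adj: dict) -> float:
    # Mean shortest-path distance from node 0 via breadth-first search
    # (a cheap proxy for the full average over all node pairs).
    dist = {0: 0}
    frontier = [0]
    while frontier:
        nxt = []
        for v in frontier:
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    nxt.append(w)
        frontier = nxt
    return sum(dist.values()) / len(dist)

lattice = watts_strogatz(600, 6, 0.0)       # pure ring lattice
small_world = watts_strogatz(600, 6, 0.1)   # a few long-range shortcuts
# A small fraction of rewired edges collapses the average distance while
# the local structure stays mostly intact.
assert avg_path_length(small_world) < avg_path_length(lattice) / 3
```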
== Spatial networks ==
Many real networks are embedded in space. Examples include transportation and other infrastructure networks, as well as brain networks. Several models for spatial networks have been developed.
== See also ==
== Books ==
B. S. Manoj, Abhishek Chakraborty, and Rahul Singh, Complex Networks: A Networking and Signal Processing Perspective, Pearson, New York, USA, February 2018. ISBN 978-0-13-478699-5
S.N. Dorogovtsev and J.F.F. Mendes, Evolution of Networks: From biological networks to the Internet and WWW, Oxford University Press, 2003, ISBN 0-19-851590-1
Duncan J. Watts, Six Degrees: The Science of a Connected Age, W. W. Norton & Company, 2003, ISBN 0-393-04142-5
Duncan J. Watts, Small Worlds: The Dynamics of Networks between Order and Randomness, Princeton University Press, 2003, ISBN 0-691-11704-7
Albert-László Barabási, Linked: How Everything is Connected to Everything Else, 2004, ISBN 0-452-28439-2
Alain Barrat, Marc Barthelemy, Alessandro Vespignani, Dynamical processes on complex networks, Cambridge University Press, 2008, ISBN 978-0-521-87950-7
Stefan Bornholdt (editor) and Heinz Georg Schuster (editor), Handbook of Graphs and Networks: From the Genome to the Internet, 2003, ISBN 3-527-40336-1
Guido Caldarelli, Scale-Free Networks, Oxford University Press, 2007, ISBN 978-0-19-921151-7
Guido Caldarelli, Michele Catanzaro, Networks: A Very Short Introduction Oxford University Press, 2012, ISBN 978-0-19-958807-7
E. Estrada, "The Structure of Complex Networks: Theory and Applications", Oxford University Press, 2011, ISBN 978-0-199-59175-6
Mark Newman, Networks: An Introduction, Oxford University Press, 2010, ISBN 978-0-19-920665-0
Mark Newman, Albert-László Barabási, and Duncan J. Watts, The Structure and Dynamics of Networks, Princeton University Press, Princeton, 2006, ISBN 978-0-691-11357-9
R. Pastor-Satorras and A. Vespignani, Evolution and Structure of the Internet: A statistical physics approach, Cambridge University Press, 2004, ISBN 0-521-82698-5
T. Lewis, Network Science, Wiley, 2009
Niloy Ganguly (editor), Andreas Deutsch (editor) and Animesh Mukherjee (editor), Dynamics On and Of Complex Networks Applications to Biology, Computer Science, and the Social Sciences, 2009, ISBN 978-0-8176-4750-6
Vito Latora, Vincenzo Nicosia, Giovanni Russo, Complex Networks: Principles, Methods and Applications, Cambridge University Press, 2017, ISBN 978-1-107-10318-4
== References ==
D. J. Watts and S. H. Strogatz (1998). "Collective dynamics of 'small-world' networks". Nature. 393 (6684): 440–442. Bibcode:1998Natur.393..440W. doi:10.1038/30918. PMID 9623998. S2CID 4429113.
S. H. Strogatz (2001). "Exploring Complex Networks". Nature. 410 (6825): 268–276. Bibcode:2001Natur.410..268S. doi:10.1038/35065725. PMID 11258382.
R. Albert and A.-L. Barabási (2002). "Statistical mechanics of complex networks". Reviews of Modern Physics. 74 (1): 47–97. arXiv:cond-mat/0106096. Bibcode:2002RvMP...74...47A. doi:10.1103/RevModPhys.74.47. S2CID 60545.
S. N. Dorogovtsev and J.F.F. Mendes (2002). "Evolution of Networks". Adv. Phys. 51 (4): 1079–1187. arXiv:cond-mat/0106144. Bibcode:2002AdPhy..51.1079D. doi:10.1080/00018730110112519. S2CID 429546.
M. E. J. Newman, The structure and function of complex networks, SIAM Review 45, 167–256 (2003)
S. N. Dorogovtsev, A. V. Goltsev, and J. F. F. Mendes, Critical phenomena in complex networks, Rev. Mod. Phys. 80, 1275, (2008)
G. Caldarelli, R. Marchetti, L. Pietronero, The fractal properties of Internet, Europhysics Letters 52, 386 (2000). arXiv:cond-mat/0009178. doi:10.1209/epl/i2000-00450-8
A. E. Motter (2004). "Cascade control and defense in complex networks". Phys. Rev. Lett. 93 (9): 098701. arXiv:cond-mat/0401074. Bibcode:2004PhRvL..93i8701M. doi:10.1103/PhysRevLett.93.098701. PMID 15447153. S2CID 4856492.
J. Lehnert, Controlling Synchronization Patterns in Complex Networks, Springer, 2016
Dolev, Shlomi; Elovici, Yuval; Puzis, Rami (2010), "Routing betweenness centrality", J. ACM, 57 (4): 25:1–25:27, doi:10.1145/1734213.1734219, S2CID 15662473 | Wikipedia/Complex_network |
A metropolitan area network (MAN) is a computer network that interconnects users with computer resources in a geographic region of the size of a metropolitan area. The term MAN is applied to the interconnection of local area networks (LANs) in a city into a single larger network which may then also offer efficient connection to a wide area network. The term is also used to describe the interconnection of several LANs in a metropolitan area through the use of point-to-point connections between them.
== History ==
By 1999, local area networks (LANs) were well established and providing data communication in buildings and offices. For the interconnection of LANs within a city, businesses relied primarily on the public switched telephone network. But while the telephone network was able to support the packet-based exchange of data that the various LAN protocols implemented, the bandwidth of the telephone network was already under heavy demand from circuit-switched voice, and the telephone exchanges were ill-designed to cope with the traffic spikes that LANs tended to produce.
To interconnect local area networks more effectively, it was suggested that office buildings be connected using single-mode optical fiber lines, which were by that time widely used in long-haul telephone trunks. Such dark fibre links were in some cases already installed on customer premises, and telephone companies started to offer dark fibre within their subscriber packages. Fibre optic metropolitan area networks were operated by telephone companies as private networks for their customers and did not necessarily have full integration with the public wide area network (WAN) through gateways.
Besides the larger companies that connected their offices across metropolitan areas, universities and research institutions also adopted dark fibre as their metropolitan area network backbone. In West Berlin, the BERCOM project built up a multifunctional broadband communications system to connect the mainframe computers housed by publicly funded universities and research institutions in the city. The BERCOM MAN project could progress at speed because the Deutsche Bundespost had already installed hundreds of miles of fibre optic cable in West Berlin. Like other metropolitan dark fibre networks at the time, the dark fibre network in West Berlin had a star topology with a hub somewhere in the city centre. The backbone of the dedicated BERCOM MAN for universities and research institutions was an optical fibre double ring that used a high-speed slotted-ring protocol developed by the GMD Research Centre for Innovative Computer Systems and Telephony. The BERCOM MAN backbone could thus support two times 280 Mbit/s data transfer.
The productive use of dense wavelength-division multiplexing (DWDM) provided another impetus for the development of metropolitan area networks in the 2000s. Long-haul DWDM, with ranges of up to 3000+ km, had been developed so that companies that stored large amounts of data on different sites could exchange data or establish mirrors of their file servers. With the use of DWDM on the existing fibre optic MANs of carriers, companies no longer needed to connect their LANs with a dedicated fibre optic link. With DWDM, companies could build dedicated MANs using the existing dark fibre network of a provider in a city; MANs thus became cheaper to build and maintain. The DWDM platforms provided by dark fibre providers in cities allow a single fibre pair to be divided into 32 wavelengths, each of which could support between 10 Mbit/s and 10 Gbit/s. Thus companies that paid for a MAN to connect different office sites within a city could increase the bandwidth of their MAN backbone as part of their subscription. DWDM platforms also alleviated the need for protocol conversion to connect LANs in a city, because any protocol and any traffic type could be transmitted using DWDM, effectively giving companies wishing to establish a MAN a free choice of protocol.
Metro Ethernet uses a fibre optic ring as a Gigabit Ethernet MAN backbone within a larger city. The ring topology is implemented using the Internet Protocol (IP) so that data can be rerouted if a link is congested or fails. In the US, Sprint was an early adopter of fibre optic rings that routed IP packets on the MAN backbone. Between 2002 and 2003, Sprint built three MAN rings to cover San Francisco, Oakland and San Jose, and in turn connected these three metro rings with a further two rings. The Sprint metro rings routed voice and data, were connected to several local telecom exchange points and totalled 189 miles of fibre optic cable. The metro rings also connected many cities that went on to become part of the Silicon Valley tech hub, such as Fremont, Milpitas, Mountain View, Palo Alto, Redwood City, San Bruno, San Carlos, Santa Clara and Sunnyvale.
The metro Ethernet rings that did not route IP traffic instead used one of the various proprietary Spanning Tree Protocol implementations, so that each MAN ring had a root bridge. Because layer 2 switching cannot operate if there is a loop in the network, the protocols that support L2 MAN rings all need to block redundant links and thus block part of the ring. Encapsulation protocols, such as Multiprotocol Label Switching (MPLS), were also deployed to address the drawbacks of operating L2 metro Ethernet rings.
Metro Ethernet was effectively the extension of Ethernet protocols beyond the local area network (LAN) and the ensuing investment in Ethernet led to the deployment of carrier Ethernet, where Ethernet protocols are used in wide area networks (WANs). The efforts of the Metro Ethernet Forum (MEF) in defining best practice and standards for metropolitan area networks thus also defined carrier Ethernet. While the IEEE tried to standardise the emerging Ethernet-based proprietary protocols, industry forums such as the MEF filled the gap and in January 2013 launched a certification for network equipment that can be configured to meet Carrier Ethernet 2.0 specifications.
== Metropolitan Internet exchange points ==
Internet exchange points (IXs) have historically been important for connecting MANs to the national or global Internet. The Boston Metropolitan Exchange Point (Boston MXP) enabled metro Ethernet providers, such as HarvardNet, to exchange data with national carriers, such as the Sprint Corporation and AT&T. Exchange points also serve as low-latency links between campus area networks; thus the Massachusetts Institute of Technology and Boston University could exchange data, voice and video using the Boston MXP. Further examples of metropolitan Internet exchanges in the USA that were operational as of 2002 include the Anchorage Metropolitan Access Point (AMAP), the Seattle Internet Exchange (SIX), the Dallas-Fort Worth Metropolitan Access Point (DFMAP) and the Denver Internet Exchange (IX-Denver). Verizon put into operation three regional metropolitan exchanges to interconnect MANs and give them access to the Internet: MAE-West serves the MANs of San Jose, Los Angeles and California; MAE-East interconnects the MANs of New York City, Washington, D.C., and Miami; and MAE-Central interconnects the MANs of Dallas, Texas, and Illinois.
In larger cities several local providers may have built a dark fibre MAN backbone. In London, the metro Ethernet rings of several providers make up the London MAN infrastructure. Like other MANs, the London MAN primarily serves the needs of its urban customers, who typically need a high number of connections with low bandwidth, a fast transit to other MAN providers, as well as high bandwidth access to national and international long-haul providers. Within the MAN of larger cities, metropolitan exchange points now play a vital role. The London Internet Exchange (LINX) had by 2005 built up several exchange points across the Greater London region.
Cities that host one of the international Internet exchanges have become a preferred location for companies and data centres. The Amsterdam Internet Exchange (AMS-IX) is the world's second-largest Internet exchange and has attracted companies to Amsterdam that are dependent on high-speed internet access. The Amsterdam metropolitan area network has benefited too from high-speed Internet access. Similarly, Frankfurt has become a magnet for data centres of international companies because it hosts the non-profit DE-CIX, the largest Internet exchange in the world. The business model of the metro DE-CIX is to reduce the transit cost for local carriers by keeping data in the metropolitan area or region, while at the same time allowing long-haul low-latency peering globally with other major MANs.
== See also ==
Community network
E-government
Municipal wireless network
Smart city
Wireless community network
== References == | Wikipedia/Metropolitan_area_network |
Network congestion in data networking and queueing theory is the reduced quality of service that occurs when a network node or link is carrying more data than it can handle. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of congestion is that an incremental increase in offered load leads either only to a small increase or even a decrease in network throughput.
Network protocols that use aggressive retransmissions to compensate for packet loss due to congestion can increase congestion, even after the initial load has been reduced to a level that would not normally have induced network congestion. Such networks exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse.
Networks use congestion control and congestion avoidance techniques to try to avoid collapse. These include: exponential backoff in protocols such as CSMA/CA in 802.11 and the similar CSMA/CD in the original Ethernet, window reduction in TCP, and fair queueing in devices such as routers and network switches. Other techniques that address congestion include priority schemes which transmit some packets with higher priority ahead of others and the explicit allocation of network resources to specific flows through the use of admission control.
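As an illustration of one of these techniques, a sketch of truncated binary exponential backoff in the style of classic Ethernet CSMA/CD (the cap of 10 doublings follows the classic Ethernet convention):

```python
import random

def backoff_slots(attempt: int, rng: random.Random, cap: int = 10) -> int:
    """Truncated binary exponential backoff: after the n-th collision,
    wait a random number of slot times chosen uniformly from
    [0, 2**min(n, cap) - 1]."""
    return rng.randrange(2 ** min(attempt, cap))

rng = random.Random(42)
# The waiting window doubles after each successive collision, spreading
# retransmissions out in time and easing load on the shared medium.
for attempt in range(1, 16):
    assert 0 <= backoff_slots(attempt, rng) < 2 ** min(attempt, 10)
```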
== Network capacity ==
Network resources are limited, including router processing time and link throughput. Resource contention may occur on networks in several common circumstances. A wireless LAN is easily filled by a single personal computer. Even on fast computer networks, the backbone can easily be congested by a few servers and client PCs. Denial-of-service attacks by botnets are capable of filling even the largest Internet backbone network links, generating large-scale network congestion. In telephone networks, a mass call event can overwhelm digital telephone circuits, in what can otherwise be defined as a denial-of-service attack.
== Congestive collapse ==
Congestive collapse (or congestion collapse) is the condition in which congestion prevents or limits useful communication. Congestion collapse generally occurs at choke points in the network, where incoming traffic exceeds outgoing bandwidth. Connection points between a local area network and a wide area network are common choke points. When a network is in this condition, it settles into a stable state where traffic demand is high but little useful throughput is available, during which packet delay and loss occur and quality of service is extremely poor.
Congestive collapse was identified as a possible problem by 1984. It was first observed on the early Internet in October 1986, when the NSFNET phase-I backbone dropped three orders of magnitude from its capacity of 32 kbit/s to 40 bit/s, which continued until end nodes started implementing Van Jacobson and Sally Floyd's congestion control between 1987 and 1988. When more packets were sent than could be handled by intermediate routers, the intermediate routers discarded many packets, expecting the endpoints of the network to retransmit the information. However, early TCP implementations had poor retransmission behavior. When this packet loss occurred, the endpoints sent extra packets that repeated the information lost, doubling the incoming rate.
== Congestion control ==
Congestion control modulates traffic entry into a telecommunications network in order to avoid congestive collapse resulting from oversubscription. This is typically accomplished by reducing the rate of packets. Whereas congestion control prevents senders from overwhelming the network, flow control prevents the sender from overwhelming the receiver.
=== Theory of congestion control ===
The theory of congestion control was pioneered by Frank Kelly, who applied microeconomic theory and convex optimization theory to describe how individuals controlling their own rates can interact to achieve an optimal network-wide rate allocation. Examples of optimal rate allocation are max-min fair allocation and Kelly's suggestion of proportionally fair allocation, although many others are possible.
Let x_i be the rate of flow i, c_l be the capacity of link l, and r_li be 1 if flow i uses link l and 0 otherwise. Let x, c and R be the corresponding vectors and matrix. Let U(x) be an increasing, strictly concave function, called the utility, which measures how much benefit a user obtains by transmitting at rate x. The optimal rate allocation then satisfies

{\displaystyle \max \limits _{x}\sum _{i}U(x_{i})} such that {\displaystyle Rx\leq c}
The Lagrange dual of this problem decouples so that each flow sets its own rate, based only on a price signaled by the network. Each link capacity imposes a constraint, which gives rise to a Lagrange multiplier, p_l. The sum of these multipliers,

{\displaystyle y_{i}=\sum _{l}p_{l}r_{li},}

is the price to which the flow responds.
Congestion control then becomes a distributed optimization algorithm. Many current congestion control algorithms can be modeled in this framework, with p_l being either the loss probability or the queueing delay at link l. A major weakness is that it assigns the same price to all flows, while sliding window flow control causes burstiness that causes different flows to observe different loss or delay at a given link.
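The dual decomposition above can be sketched numerically. The topology, capacities and step size below are illustrative assumptions; with U = log, each flow's utility-maximizing response to its path price y_i is x_i = 1/y_i, which yields Kelly's proportionally fair allocation:

```python
# Dual-decomposition sketch of the rate-allocation problem above.
# Two links, three flows; topology and capacities are made up for illustration.
c = [1.0, 2.0]                      # link capacities c_l
R = [[1, 1, 0],                     # r_li = 1 if flow i uses link l
     [1, 0, 1]]
p = [1.0, 1.0]                      # link prices (Lagrange multipliers)
step = 0.01

for _ in range(20000):
    # price seen by each flow: y_i = sum_l p_l * r_li
    y = [sum(p[l] * R[l][i] for l in range(2)) for i in range(3)]
    x = [1.0 / yi for yi in y]      # flow's best response for U = log
    # gradient step: raise p_l when link l is over capacity, lower otherwise
    load = [sum(R[l][i] * x[i] for i in range(3)) for l in range(2)]
    p = [max(p[l] + step * (load[l] - c[l]), 1e-6) for l in range(2)]

print(x)  # both link constraints end up tight at the optimum
```

Each link updates its own price from local load, and each flow reacts only to the total price of its path, so no central coordinator is needed, which is exactly why this framework models distributed congestion control.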
=== Classification of congestion control algorithms ===
Among the ways to classify congestion control algorithms are:
By type and amount of feedback received from the network: Loss; delay; single-bit or multi-bit explicit signals
By incremental deployability: Only sender needs modification; sender and receiver need modification; only router needs modification; sender, receiver and routers need modification.
By performance aspect: high bandwidth-delay product networks; lossy links; fairness; advantage to short flows; variable-rate links
By fairness criterion: Max-min fairness; proportionally fair; controlled delay
== Mitigation ==
Mechanisms have been invented to prevent network congestion or to deal with a network collapse:
Network scheduler – active queue management which reorders or selectively drops network packets in the presence of congestion
Explicit Congestion Notification – an extension to IP and TCP communications protocols that adds a flow control mechanism
TCP congestion control – various implementations of efforts to deal with network congestion
The correct endpoint behavior is usually to repeat dropped information, but progressively slow the repetition rate. Provided all endpoints do this, the congestion lifts and the network resumes normal behavior. Other strategies such as slow start ensure that new connections do not overwhelm the router before congestion detection initiates.
Common router congestion avoidance mechanisms include fair queuing and other scheduling algorithms, and random early detection where packets are randomly dropped as congestion is detected. This proactively triggers the endpoints to slow transmission before congestion collapse occurs.
Some end-to-end protocols are designed to behave well under congested conditions; TCP is a well-known example. The first TCP implementations to handle congestion were described in 1984, but Van Jacobson's inclusion of an open source solution in the Berkeley Software Distribution UNIX ("BSD") in 1988 first provided good behavior.
UDP does not control congestion. Protocols built atop UDP must handle congestion independently. Protocols that transmit at a fixed rate, independent of congestion, can be problematic. Real-time streaming protocols, including many Voice over IP protocols, have this property. Thus, special measures, such as quality of service, must be taken to keep packets from being dropped in the presence of congestion.
=== Practical network congestion avoidance ===
Connection-oriented protocols, such as the widely used TCP protocol, watch for packet loss or queuing delay to adjust their transmission rate. Various network congestion avoidance processes support different trade-offs.
=== TCP/IP congestion avoidance ===
The TCP congestion avoidance algorithm is the primary basis for congestion control on the Internet.
Problems occur when concurrent TCP flows experience tail-drops, especially when bufferbloat is present. This delayed packet loss interferes with TCP's automatic congestion avoidance. All flows that experience this packet loss begin a TCP retrain at the same moment – this is called TCP global synchronization.
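The window behaviour underlying TCP congestion avoidance can be sketched as additive-increase/multiplicative-decrease (AIMD). This is a simplified model; real TCP adds slow start, fast retransmit and other mechanisms:

```python
def aimd_update(cwnd, loss_detected, add=1.0, mult=0.5):
    """AIMD core of TCP congestion avoidance: grow the congestion
    window by one segment per round trip, halve it on packet loss."""
    return cwnd * mult if loss_detected else cwnd + add
```

When a tail-drop event hits many flows at once, all of their windows are halved in the same round trip, which is the global synchronization effect described above.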
=== Active queue management ===
Active queue management (AQM) is the reordering or dropping of network packets inside a transmit buffer that is associated with a network interface controller (NIC). This task is performed by the network scheduler.
==== Random early detection ====
One solution is to use random early detection (RED) on the network equipment's egress queue. On networking hardware ports with more than one egress queue, weighted random early detection (WRED) can be used.
RED signals the TCP sender and receiver indirectly by dropping some packets, e.g. when the average queue length exceeds a threshold (e.g. 50% of capacity), and drops linearly or cubically more packets, up to e.g. 100%, as the queue fills further.
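A minimal sketch of the RED drop decision follows. The thresholds and maximum drop probability are illustrative parameters, and real implementations apply this to an exponentially weighted moving average of the queue length rather than the instantaneous value:

```python
import random

def red_should_drop(avg_queue, min_th=5.0, max_th=15.0, max_p=0.1):
    """RED drop decision: never drop below min_th, drop with linearly
    increasing probability between min_th and max_th, always drop above."""
    if avg_queue < min_th:
        return False
    if avg_queue >= max_th:
        return True
    drop_p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < drop_p
```

Because drops start early and probabilistically, only a few randomly chosen flows back off at a time, avoiding the synchronized window collapse caused by tail drop.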
==== Robust random early detection ====
The robust random early detection (RRED) algorithm was proposed to improve the TCP throughput against denial-of-service (DoS) attacks, particularly low-rate denial-of-service (LDoS) attacks. Experiments confirmed that RED-like algorithms were vulnerable under LDoS attacks due to the oscillating TCP queue size caused by the attacks.
==== Flow-based WRED ====
Some network equipment is equipped with ports that can follow and measure each flow and are thereby able to identify flows that consume too much bandwidth according to some quality of service policy. A policy could then divide the bandwidth among all flows by some criteria.
==== Explicit Congestion Notification ====
Another approach is to use Explicit Congestion Notification (ECN). ECN is used only when two hosts signal that they want to use it. With this method, a protocol bit is used to signal explicit congestion. This is better than the indirect congestion notification that the RED/WRED algorithms signal through packet loss, but it requires support by both hosts.
When a router receives a packet marked as ECN-capable and the router anticipates congestion, it sets the ECN flag, notifying the sender of congestion. The sender should respond by decreasing its transmission bandwidth, e.g., by decreasing its sending rate by reducing the TCP window size or by other means.
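The marking step can be sketched with the two ECN bits of the IP header (codepoints as defined in RFC 3168); the helper function is an illustration, not a real router implementation:

```python
# ECN codepoints carried in the two low-order bits of the IP traffic-class field
NOT_ECT = 0b00   # transport is not ECN-capable
ECT_1   = 0b01   # ECN-capable transport
ECT_0   = 0b10   # ECN-capable transport
CE      = 0b11   # congestion experienced (set by a router)

def router_mark(ecn_bits, congestion_anticipated):
    """Instead of dropping, a router may mark ECN-capable packets with CE
    when it anticipates congestion; non-ECN packets are left unchanged
    (in practice they remain subject to normal drop policies)."""
    if congestion_anticipated and ecn_bits in (ECT_0, ECT_1):
        return CE
    return ecn_bits
```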
The L4S protocol is an enhanced version of ECN which allows senders to collaborate with network devices to control congestion.
==== TCP window shaping ====
Congestion avoidance can be achieved efficiently by reducing traffic. When an application requests a large file, graphic or web page, it usually advertises a window of between 32K and 64K. This results in the server sending a full window of data (assuming the file is larger than the window). When many applications simultaneously request downloads, this data can create a congestion point at an upstream provider. By reducing the window advertisement, the remote servers send less data, thus reducing the congestion.
==== Backward ECN ====
Backward ECN (BECN) is another proposed congestion notification mechanism. It uses ICMP source quench messages as an IP signaling mechanism to implement a basic ECN mechanism for IP networks, keeping congestion notifications at the IP level and requiring no negotiation between network endpoints. Effective congestion notifications can be propagated to transport layer protocols, such as TCP and UDP, for the appropriate adjustments.
== Side effects of congestive collapse avoidance ==
=== Radio links ===
The protocols that avoid congestive collapse generally assume that data loss is caused by congestion. On wired networks, errors during transmission are rare. WiFi, 3G and other networks with a radio layer are susceptible to data loss due to interference and may experience poor throughput in some cases. The TCP connections running over a radio-based physical layer see the data loss and tend to erroneously believe that congestion is occurring.
=== Short-lived connections ===
The slow-start protocol performs badly for short connections. Older web browsers created many short-lived connections and opened and closed the connection for each file. This kept most connections in the slow start mode. Initial performance can be poor, and many connections never get out of the slow-start regime, significantly increasing latency. To avoid this problem, modern browsers either open multiple connections simultaneously or reuse one connection for all files requested from a particular server.
== Admission control ==
Admission control is any system that requires devices to receive permission before establishing new network connections. If the new connection risks creating congestion, permission can be denied. Examples include Contention-Free Transmission Opportunities (CFTXOPs) in the ITU-T G.hn standard for home networking over legacy wiring, Resource Reservation Protocol for IP networks and Stream Reservation Protocol for Ethernet.
== See also ==
Bandwidth management – Capacity control on a communications network
Cascading failure – Systemic risk of failure
Choke exchange – Telephone exchange designed to handle many simultaneous call attempts
Erlang (unit) – Load measure in telecommunications
Sorcerer's Apprentice syndrome – Network protocol flaw in the original versions of TFTP
Teletraffic engineering – Application of traffic engineering theory to telecommunications
Thrashing – Constant exchange between memory and storage
Traffic shaping – Communication bandwidth management technique
Reliability (computer networking) – Protocol acknowledgement capability
== References ==
== External links ==
Floyd, S. and K. Fall, Promoting the Use of End-to-End Congestion Control in the Internet (IEEE/ACM Transactions on Networking, August 1999)
Sally Floyd, On the Evolution of End-to-end Congestion Control in the Internet: An Idiosyncratic View (IMA Workshop on Scaling Phenomena in Communication Networks, October 1999) (pdf format)
Linktionary term: Queuing Archived 2003-03-08 at the Wayback Machine
Pierre-Francois Quet, Sriram Chellappan, Arjan Durresi, Mukundan Sridharan, Hitay Ozbay, Raj Jain, "Guidelines for optimizing Multi-Level ECN, using fluid flow based TCP model"
Sally Floyd, Ratul Mahajan, David Wetherall: RED-PD: RED with Preferential Dropping Archived 2003-04-02 at the Wayback Machine
A Generic Simple RED Simulator for educational purposes by Mehmet Suzen
Approaches to Congestion Control in Packet Networks
Papers in Congestion Control
Random Early Detection Homepage
Explicit Congestion Notification Homepage
TFRC Homepage
AIMD-FC Homepage
Recent Publications in low-rate denial-of-service (DoS) attacks
Hierarchical network models are iterative algorithms for creating networks which are able to reproduce the unique properties of the scale-free topology and the high clustering of the nodes at the same time. These characteristics are widely observed in nature, from biology to language to some social networks.
== Concept ==
The hierarchical network model is part of the scale-free model family, sharing its main property of having proportionally more hubs among the nodes than random generation would produce. However, it differs significantly from similar models (Barabási–Albert, Watts–Strogatz) in the distribution of the nodes' clustering coefficients: whereas other models predict a clustering coefficient that is constant as a function of a node's degree, in hierarchical models nodes with more links are expected to have a lower clustering coefficient. Moreover, while the Barabási–Albert model predicts an average clustering coefficient that decreases as the number of nodes increases, in hierarchical models there is no relationship between the size of the network and its average clustering coefficient.
The development of hierarchical network models was mainly motivated by the failure of the other scale-free models in incorporating the scale-free topology and high clustering into one single model. Since several real-life networks (metabolic networks, the protein interaction network, the World Wide Web or some social networks) exhibit such properties, different hierarchical topologies were introduced in order to account for these various characteristics.
== Algorithm ==
Hierarchical network models are usually derived in an iterative way by replicating the initial cluster of the network according to a certain rule. For instance, consider an initial network of five fully interconnected nodes (N=5). As a next step, create four replicas of this cluster and connect the peripheral nodes of each replica to the central node of the original cluster (N=25). This step can be repeated indefinitely, so after any k steps the number of nodes in the system is N = 5^(k+1).
Several different ways of creating hierarchical systems have been proposed in the literature. These systems generally differ in the structure of the initial cluster as well as in the degree of expansion, which is often referred to as the replication factor of the model.
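The five-node construction above can be sketched directly; the generalization to an arbitrary M-node seed cluster is an assumption following the replication-factor discussion:

```python
def node_count(k, M=5):
    """Number of nodes after k replication steps of an M-node fully
    interconnected seed cluster: N = M**(k+1)."""
    return M ** (k + 1)

print([node_count(k) for k in range(3)])  # [5, 25, 125]
```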
== Properties ==
=== Degree distribution ===
Being part of the scale-free model family, the degree distribution of the hierarchical network model follows a power law, meaning that a randomly selected node in the network has k edges with probability

{\displaystyle P(k)\sim ck^{-\gamma }}

where c is a constant and γ is the degree exponent. In most real-world networks exhibiting scale-free properties γ lies in the interval [2,3].
As a specific result for hierarchical models, it has been shown that the degree exponent of the distribution function can be calculated as

{\displaystyle \gamma =1+{\frac {\ln M}{\ln(M-1)}}}

where M represents the replication factor of the model.
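This formula can be evaluated directly; for the five-node example above (M = 5) it gives γ ≈ 2.16, inside the [2,3] interval observed in most real-world scale-free networks:

```python
import math

def degree_exponent(M):
    """Degree exponent of a hierarchical model with replication factor M:
    gamma = 1 + ln(M) / ln(M - 1)."""
    return 1 + math.log(M) / math.log(M - 1)

print(round(degree_exponent(5), 2))  # 2.16
```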
=== Clustering coefficient ===
In contrast to other network models (Erdős–Rényi, Barabási–Albert, Watts–Strogatz), where the clustering coefficient is independent of the degree of a specific node, in hierarchical networks the clustering coefficient can be expressed as a function of the degree in the following way:

{\displaystyle C(k)\sim k^{-\beta }}

It has been analytically shown that in deterministic scale-free networks the exponent β takes the value of 1.
== Examples ==
=== Actor network ===
Based on the actor database available at www.IMDb.com the network is defined by Hollywood actors who are connected to each other if they both appeared in the same movie, resulting in a data set of 392,340 nodes and 15,347,957 edges. As earlier studies have shown, this network exhibits scale-free properties at least for high values of k. Moreover, the clustering coefficients seem to follow the required scaling law with the parameter -1 providing evidence for the hierarchical topology of the network. Intuitively, one-performance actors have by definition a clustering coefficient of one while actors starring in several movies are highly unlikely to work with the same crew which in general results in a decreasing clustering coefficient as the number of co-stars grows.
=== Language network ===
Words can be regarded as a network if one specifies the linkage criteria between them. Defining links as appearance as a synonym in the Merriam-Webster dictionary, a semantic web of 182,853 nodes with 317,658 edges was constructed. As it turned out, the obtained network of words indeed follows a power law in its degree distribution, while the distribution of the clustering coefficient indicates that the underlying web follows a hierarchical structure with γ = 3.25 and β = 1.
=== Network of webpages ===
By mapping the www.nd.edu domain, a network of 325,729 nodes and 1,497,135 edges was obtained whose degree distribution followed a power law with γ_out = 2.45 and γ_in = 2.1 for the out- and in-degrees, respectively. The evidence for the scaling-law distribution of the clustering coefficients is significantly weaker than in the previous cases, although there is a clearly visible declining pattern in the distribution of C(k), indicating that the more links a domain has, the less interconnected the linked/linking web pages are.
=== Domain network ===
The domain network, i.e. the internet at the autonomous system (AS) level where administrative domains are said to be connected if there is a router which connects them, was found to comprise 65,520 nodes and 24,412 links between them and to exhibit the properties of a scale-free network. The sample distribution of the clustering coefficients was fitted by the scaling function C(k) ~ k^(−0.75), whose exponent is (in absolute terms) somewhat smaller than the theoretical parameter for deterministic scale-free networks.
== References ==
Intergalactic Computer Network or Galactic Network (IGCN) was a computer networking concept similar to today's Internet.
J.C.R. Licklider, the first director of the Information Processing Techniques Office (IPTO) at The Pentagon's ARPA, used the term in the early 1960s to refer to a networking system he "imagined as an electronic commons open to all, 'the main and essential medium of informational interaction for governments, institutions, corporations, and individuals.'" An office memorandum he sent to his colleagues in 1963 was addressed to "Members and Affiliates of the Intergalactic Computer Network". As head of IPTO from 1962 to 1964, "Licklider initiated three of the most important developments in information technology: the creation of computer science departments at several major universities, time-sharing, and networking."
Licklider first learned about time-sharing from Christopher Strachey at the inaugural UNESCO Information Processing Conference in Paris in 1959.
By the late 1960s, his promotion of the concept had inspired a primitive version of his vision called ARPANET. ARPANET expanded into a network of networks in the 1970s that became the Internet.
== See also ==
History of the Internet
== References ==
== Further reading ==
In economics, a network effect (also called network externality or demand-side economies of scale) is the phenomenon by which the value or utility a user derives from a good or service depends on the number of users of compatible products. Network effects are typically positive feedback systems, resulting in users deriving more and more value from a product as more users join the same network. The adoption of a product by an additional user can be broken into two effects: an increase in the value to all other users (total effect) and also the enhancement of other non-users' motivation for using the product (marginal effect).
Network effects can be direct or indirect. Direct network effects arise when a given user's utility increases with the number of other users of the same product or technology, meaning that adoption of a product by different users is complementary. This effect is separate from effects related to price, such as a benefit to existing users resulting from price decreases as more users join. Direct network effects can be seen with social networking services, including Twitter, Facebook, Airbnb, Uber, and LinkedIn; telecommunications devices like the telephone; and instant messaging services such as MSN, AIM or QQ. Indirect (or cross-group) network effects arise when there are "at least two different customer groups that are interdependent, and the utility of at least one group grows as the other group(s) grow". For example, hardware may become more valuable to consumers with the growth of compatible software.
Network effects are commonly mistaken for economies of scale, which describe decreasing average production costs in relation to the total volume of units produced. Economies of scale are a common phenomenon in traditional industries such as manufacturing, whereas network effects are most prevalent in new economy industries, particularly information and communication technologies. Network effects are the demand-side counterpart of economies of scale, as they function by increasing a customer's willingness to pay rather than by decreasing the supplier's average cost.
Upon reaching critical mass, a bandwagon effect can result. As the network continues to become more valuable with each new adopter, more people are incentivised to adopt, resulting in a positive feedback loop. Multiple equilibria and a market monopoly are two key potential outcomes in markets that exhibit network effects. Consumer expectations are key in determining which outcomes will result.
== Origins ==
Network effects were a central theme in the arguments of Theodore Vail, the first post-patent president of Bell Telephone, in gaining a monopoly on US telephone services. In 1908, when he presented the concept in Bell's annual report, there were over 4,000 local and regional telephone exchanges, most of which were eventually merged into the Bell System.
Network effects were popularized by Robert Metcalfe, stated as Metcalfe's law. Metcalfe was one of the co-inventors of Ethernet and a co-founder of the company 3Com. In selling the product, Metcalfe argued that customers needed Ethernet cards to grow above a certain critical mass if they were to reap the benefits of their network. According to Metcalfe, the rationale behind the sale of networking cards was that the cost of the network was directly proportional to the number of cards installed, but the value of the network was proportional to the square of the number of users. This was expressed algebraically as having a cost of N and a value of N^2. While the actual numbers behind this proposition were never firm, the concept allowed customers to share access to expensive resources like disk drives and printers, send e-mail, and eventually access the Internet.
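Metcalfe's argument can be sketched numerically; the proportionality constants below are illustrative assumptions, not values from the source:

```python
def network_value(n, k=0.01):
    """Metcalfe's law: value grows as the square of the number of users
    (k is a made-up proportionality constant for illustration)."""
    return k * n ** 2

def network_cost(n, cost_per_card=1.0):
    """Cost grows linearly with the number of cards installed."""
    return cost_per_card * n

# with these constants, value overtakes cost once n exceeds 100 users
breakeven = next(n for n in range(1, 1000) if network_value(n) > network_cost(n))
print(breakeven)  # 101
```

The quadratic-versus-linear gap is the whole sales pitch: below the crossover the network costs more than it is worth, above it the value pulls away rapidly.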
The economic theory of the network effect was advanced significantly between 1985 and 1995 by researchers Michael L. Katz, Carl Shapiro, Joseph Farrell, and Garth Saloner. Author, high-tech entrepreneur Rod Beckstrom presented a mathematical model for describing networks that are in a state of positive network effect at BlackHat and Defcon in 2009 and also presented the inverse network effect with an economic model for defining it as well. Because of the positive feedback often associated with the network effect, system dynamics can be used as a modelling method to describe the phenomena. Word of mouth and the Bass diffusion model are also potentially applicable. The next major advance occurred between 2000 and 2003 when researchers Geoffrey G Parker, Marshall Van Alstyne, Jean-Charles Rochet and Jean Tirole independently developed the two-sided market literature showing how network externalities that cross distinct groups can lead to free pricing for one of those groups.
=== Evidence and consequences ===
While the diversity of sources is in decline, there is a countervailing force of continually increasing functionality with new services, products and applications, such as music streaming services (Spotify), file sharing programs (Dropbox) and messaging platforms (Messenger, WhatsApp and Snapchat). Another major finding was the dramatic increase in the infant mortality rate of websites, with the dominant players in each functional niche, once established, guarding their turf more staunchly than ever.
On the other hand, growing network effect does not always bring proportional increase in returns. Whether additional users bring more value depends on the commoditization of supply, the type of incremental user and the nature of substitutes. For example, social networks can hit an inflection point, after which additional users do not bring more value. This could be attributed to the fact that as more people join the network, its users are less willing to share personal content and the site becomes more focused on news and public content.
== Economics ==
Network economics refers to business economics that benefit from the network effect. This is when the value of a good or service increases when others buy the same good or service. Examples are websites such as eBay or iVillage, where the community comes together and shares thoughts to help the website become a better business organization.
In sustainability, network economics refers to multiple professionals (architects, designers, or related businesses) all working together to develop sustainable products and technologies. The more companies are involved in environmentally friendly production, the easier and cheaper it becomes to produce new sustainable products. For instance, if no one produces sustainable products, it is difficult and expensive to design a sustainable house with custom materials and technology. But due to network economics, the more industries are involved in creating such products, the easier it is to design an environmentally sustainable building.
Another benefit of network economics in a certain field is improvement that results from competition and networking within an industry.
== Adoption and competition ==
=== Critical mass ===
In the early phases of a network technology, incentives to adopt the new technology are low. After a certain number of people have adopted the technology, network effects become significant enough that adoption becomes a dominant strategy. This point is called critical mass. At the critical mass point, the value obtained from the good or service is greater than or equal to the price paid for the good or service.
When a product reaches critical mass, network effects will drive subsequent growth until a stable balance is reached. Therefore, a key business concern must then be how to attract users prior to reaching critical mass. Critical mass is closely related to consumer expectations, which will be affected by the price and quality of products or services, the company's reputation and the growth path of the network. Thus, one way is to rely on extrinsic motivation, such as a payment, a fee waiver, or a request for friends to sign up. A more natural strategy is to build a system that has enough value without network effects, at least to early adopters. Then, as the number of users increases, the system becomes even more valuable and is able to attract a wider user base.
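The role of critical mass can be sketched with a toy adoption model; the dynamics and numbers are illustrative assumptions, not from the source. Per-user value grows with the installed base, and the user base grows only once that value exceeds the price:

```python
def simulate_adoption(seed_users, price=1.0, pop=1000, steps=50):
    """Toy positive-feedback dynamics: non-users join when the network's
    per-user value (here n/100) exceeds the price, users leave otherwise."""
    users = seed_users
    for _ in range(steps):
        value = users / 100           # per-user value grows with adoption
        if value >= price:
            users = min(pop, users + int(0.2 * (pop - users)))
        else:
            users = max(0, users - int(0.1 * users))
    return users

# below the critical mass of 100 users the network withers;
# above it, positive feedback drives near-universal adoption
print(simulate_adoption(50), simulate_adoption(150))
```

The same product, at the same price, ends up in completely different equilibria depending only on the starting user base, which is why seeding adoption past critical mass matters so much.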
=== Limits to growth ===
Network growth is generally not infinite, and tends to plateau when it reaches market saturation (all customers have already joined) or diminishing returns make acquisition of the last few customers too costly.
Networks can also stop growing or collapse if they do not have enough capacity to handle growth. For example, an overloaded phone network may have so many customers that it becomes congested, leading to busy signals, the inability to get a dial tone, and poor customer support. This creates a risk that customers will defect to a rival network because of the inadequate capacity of the existing system. After this point, each additional user decreases the value obtained by every other user.
Peer-to-peer (P2P) systems are networks designed to distribute load among their user pool. This theoretically allows P2P networks to scale indefinitely. The P2P based telephony service Skype benefits from this effect and its growth is limited primarily by market saturation.
=== Market tipping ===
Network effects give rise to the potential outcome of market tipping, defined as "the tendency of one system to pull away from its rivals in popularity once it has gained an initial edge". Tipping results in a market in which only one good or service dominates and competition is stifled, and can result in a monopoly. This is because network effects tend to incentivise users to coordinate their adoption of a single product. Therefore, tipping can result in a natural form of market concentration in markets that display network effects. However, the presence of network effects does not necessarily imply that a market will tip; the following additional conditions must be met:
The utility derived by users from network effects must exceed the utility they derive from differentiation
Users must have high costs of multihoming (i.e. adopting more than one competing network)
Users must have high switching costs
If any of these three conditions are not satisfied, the market may fail to tip and multiple products with significant market shares may coexist. One such example is the U.S. instant messaging market, which remained an oligopoly despite significant network effects. This can be attributed to the low multi-homing and switching costs faced by users.
Market tipping does not imply permanent success in a given market. Competition can be reintroduced into the market due to shocks such as the development of new technologies. Additionally, if the price is raised above customers' willingness to pay, this may reverse market tipping.
=== Multiple equilibria and expectations ===
Network effects often result in multiple potential market equilibrium outcomes. The key determinant of which equilibrium will manifest is the expectations of the market participants, which are self-fulfilling. Because users are incentivised to coordinate their adoption, users will tend to adopt the product that they expect to draw the largest number of users. These expectations may be shaped by path dependence, such as a perceived first-mover advantage, which can result in lock-in. The most commonly cited example of path dependence is the QWERTY keyboard, which owes its ubiquity to its establishment of an early lead in the keyboard-layout industry and high switching costs, rather than any inherent advantage over competitors. Other key influences on adoption expectations can be reputational (e.g. a firm that has previously produced high-quality products may be favoured over a new firm).
Markets with network effects may result in inefficient equilibrium outcomes. With simultaneous adoption, users may fail to coordinate towards a single agreed-upon product, resulting in splintering among different networks, or may coordinate to lock-in to a different product than the one that is best for them.
== Technology lifecycle ==
If some existing technology or company whose benefits are largely based on network effects starts to lose market share against a challenger such as a disruptive technology or open standards based competition, the benefits of network effects will reduce for the incumbent, and increase for the challenger. In this model, a tipping point is eventually reached at which the network effects of the challenger dominate those of the former incumbent, and the incumbent is forced into an accelerating decline, whilst the challenger takes over the incumbent's former position.
Sony's Betamax and Victor Company of Japan (JVC)'s video home system (VHS) can both be used for video cassette recorders (VCR), but the two technologies are not compatible. Therefore, the VCR that is suitable for one type of cassette cannot fit in another. VHS's technology gradually surpassed Betamax in the competition. In the end, Betamax lost its original market share and was replaced by VHS.
== Negative network externalities ==
Negative network externalities, in the mathematical sense, are those that have a negative effect compared to normal (positive) network effects. Just as positive network externalities (network effects) cause positive feedback and exponential growth, negative network externalities are also driven by positive feedback, resulting in exponential decay. Negative network effects must not be confused with negative feedback: negative feedback describes the forces that pull a system towards equilibrium and are responsible for stability.
Negative network externalities have four characteristics: more login retries, longer query times, longer download times, and more download attempts. Congestion occurs when the efficiency of a network decreases as more people use it, reducing the value to people already using it. Traffic congestion that overloads a freeway and network congestion on connections with limited bandwidth both display negative network externalities.
Braess's paradox suggests that adding paths through a network can have a negative effect on performance of the network.
== Interoperability ==
Interoperability has the effect of making the network bigger and thus increases the external value of the network to consumers. Interoperability achieves this primarily by increasing potential connections and secondarily by attracting new participants to the network. Other benefits of interoperability include reduced uncertainty, reduced lock-in, commoditization and competition based on price.
Interoperability can be achieved through standardization or other cooperation. Companies involved in fostering interoperability face a tension between cooperating with their competitors to grow the potential market for products and competing for market share.
== Compatibility and incompatibility ==
Product compatibility is closely related to network externalities in competition between companies; it refers to two systems that can operate together without modification. Compatible products better match customer needs, allowing customers to enjoy all the benefits of the network without having to purchase products from the same company. However, compatibility has drawbacks as well: it intensifies competition between companies and causes users who had already purchased products to lose their advantages, while proprietary networks may raise barriers to entry into the industry. Compared to large companies with better reputation or strength, weaker companies or small networks are more inclined to choose compatible products.
Compatibility is also conducive to increasing a company's market share. For example, the Windows system is known for its operating compatibility, thereby satisfying consumers' demand for a diverse range of applications. As the supplier of Windows systems, Microsoft benefits from indirect network effects, which drive the growth of the company's market share.
Incompatibility is the opposite of compatibility. Incompatibility of products aggravates market segmentation and reduces efficiency, harming consumer interests while intensifying competition. The result of competition between incompatible networks depends on the sequence of adoption and the early preferences of the adopters; history is important in determining the market share of companies. Since the installed base directly brings more network profit and raises consumers' expectations, it has a positive impact on the smooth realization of subsequent network effects.
== Open versus closed standards ==
In communication and information technologies, open standards and interfaces are often developed through the participation of multiple companies and are usually perceived to provide mutual benefit. But, in cases in which the relevant communication protocols or interfaces are closed standards, the network effect can give the company controlling those standards monopoly power. The Microsoft corporation is widely seen by computer professionals as maintaining its monopoly through these means. One observed method Microsoft uses to put the network effect to its advantage is called Embrace, extend and extinguish.
Mirabilis was an Israeli start-up which pioneered instant messaging (IM) and was bought by America Online. By giving away their ICQ product for free and preventing interoperability between their client software and other products, they were able to temporarily dominate the market for instant messaging. IM technology spread from home use to the workplace thanks to its fast processing speed and simplified workflow. Because of the network effect, new IM users gained much more value by choosing to use the Mirabilis system (and joining its large network of users) than they would by using a competing system. As was typical for that era, the company never made any attempt to generate profits from its dominant position before selling the company.
== Network effect as a competitive advantage ==
Network effect can significantly influence the competitive landscape of an industry. According to Michael E. Porter, strong network effect might decrease the threat of new entrants, which is one of the five major competitive forces that act on an industry. Persistent barriers to entry into a market may help incumbent companies to fend off competition and keep or increase their market share, while maintaining profitability and return on capital.
These attractive characteristics are one of the reasons that allowed platform companies like Amazon, Google or Facebook to grow rapidly and create shareholder value. On the other hand, network effect can result in high concentration of power in an industry, or even a monopoly. This often leads to increased scrutiny from regulators that try to restore healthy competition, as is often the case with large technology companies.
== Examples ==
=== Telephone ===
Network effects are the incremental benefit gained by each existing user for each new user that joins a network. An example of a direct network effect is the telephone. Originally, when only a small number of people owned telephones, the value each one provided was minimal. Not only did other people need to own a telephone for it to be useful, but each telephone also had to be connected to the network through the user's home. As technology advanced, it became more affordable for people to own a telephone, which created more value and utility due to the increase in users. Eventually, increased usage through exponential growth led to the telephone being used by almost every household, adding more value to the network for all users. Without the network effect and technological advances, the telephone would have nowhere near the amount of value or utility it has today.
=== Financial exchanges ===
Transactions in the financial field may feature a network effect. As the number of sellers and buyers on an exchange with symmetric information increases, liquidity increases and transaction costs decrease. This then attracts a larger number of buyers and sellers to the exchange.
The network advantage of financial exchanges is apparent in the difficulty that startup exchanges have in dislodging a dominant exchange. For example, the Chicago Board of Trade has retained overwhelming dominance of trading in US Treasury bond futures despite the startup of Eurex US trading of identical futures contracts. Similarly, the Chicago Mercantile Exchange has maintained dominance in trading of Eurodollar interest rate futures despite a challenge from Euronext.Liffe.
=== Cryptocurrencies and blockchains ===
Cryptocurrencies such as Bitcoin and smart contract blockchains such as Ethereum also exhibit network effects.
Smart contract blockchains can produce network effects through the social network of individuals that use a blockchain for securing their transactions. Public infrastructure networks such as Ethereum can facilitate meaningful collaboration between entities that do not explicitly trust one another, incentivizing growth in the network. However, as of 2019, such networks grew more slowly due to missing requirements such as privacy and scalability.
=== Software ===
Widely used computer software benefits from powerful network effects. Software purchases are easily influenced by the opinions of others, so the customer base of a piece of software is key to realizing a positive network effect. Although a customer's motivation for choosing software relates to the product itself, media interaction and word-of-mouth recommendations from existing customers can still increase the likelihood of the software being adopted by customers who have not yet purchased it, thereby producing network effects.
In 2007 Apple released the iPhone, followed by the App Store. Most iPhone apps rely heavily on the existence of strong network effects, which enables software to grow in popularity very quickly and spread to a large userbase with very limited marketing. The freemium business model has evolved to take advantage of these network effects by releasing a free version that does not limit adoption by any users, then charging for premium features as the primary source of revenue. Furthermore, some software companies launch free trial versions to attract buyers and reduce their uncertainty. The duration of the free trial is related to the network effect: the more positive feedback the company receives, the shorter the free trial will be.
Software companies (for example Adobe or Autodesk) often give significant discounts to students. By doing so, they intentionally stimulate the network effect - as more students learn to use a particular piece of software, it becomes more viable for companies and employers to use it as well. And the more employers require a given skill, the higher the benefit that employees will receive from learning it. This creates a self-reinforcing cycle, further strengthening the network effect.
=== Web sites ===
Many web sites benefit from a network effect. One example is web marketplaces and exchanges. For example, eBay would not be a particularly useful site if auctions were not competitive. As the number of users grows on eBay, auctions grow more competitive, pushing up the prices of bids on items. This makes it more worthwhile to sell on eBay and brings more sellers onto eBay, which, in turn, drives prices down again due to increased supply. Increased supply brings even more buyers to eBay. Essentially, as the number of users of eBay grows, prices fall and supply increases, and more and more people find the site to be useful.
Network effects were used as justification in business models by some of the dot-com companies in the late 1990s. These firms operated under the belief that when a new market comes into being which contains strong network effects, firms should care more about growing their market share than about becoming profitable. The justification was that market share would determine which firm could set technical and marketing standards, giving these companies a first-mover advantage.
Social networking websites are good examples. The more people register onto a social networking website, the more useful the website is to its registrants.
Google uses the network effect in its advertising business with its Google AdSense service. AdSense places ads on many small sites, such as blogs, using Google technology to determine which ads are relevant to which blogs. Thus, the service appears to aim to serve as an exchange (or ad network) for matching many advertisers with many small sites. In general, the more blogs AdSense can reach, the more advertisers it will attract, making it the most attractive option for more blogs.
By contrast, the value of a news site is primarily proportional to the quality of the articles, not to the number of other people using the site. Similarly, the first generation of search engines experienced little network effect, as the value of the site was based on the value of the search results. This allowed Google to win users away from Yahoo! without much trouble, once users believed that Google's search results were superior. Some commentators mistook the value of the Yahoo! brand (which does increase as more people know of it) for a network effect protecting its advertising business.
=== Rail gauge ===
There are strong network effects in the initial choice of rail gauge, and in gauge conversion decisions. Even when placing isolated rails not connected to any other lines, track layers usually choose a standard rail gauge so they can use off-the-shelf rolling stock. Although a few manufacturers make rolling stock that can adjust to different rail gauges, most manufacturers make rolling stock that only works with one of the standard rail gauges. This even applies to urban rail systems where historically tramways and to a lesser extent metros would come in a wide array of different gauges, nowadays virtually all new networks are built to a handful of gauges and overwhelmingly standard gauge.
=== Credit cards ===
The widespread adoption of credit cards is closely related to network effects. The credit card, one of the payment methods in the current economy, originated in 1949. Early research on the circulation of credit cards at the retail level found that credit card interest rates were not affected by macroeconomic interest rates and remained almost unchanged. Later, credit cards gradually entered the network level due to changes in policy priorities and became a popular payment method in the 1980s. Different levels of credit card use benefit from two types of network effects. The adoption of credit cards involves external network effects: once cards have become a payment method, each additional person using the same credit card increases its value to everyone else who uses it. In addition, the credit card system at the network level can be seen as a two-sided market. On one hand, the number of cardholders attracts merchants to accept credit cards as a payment method; on the other hand, an increasing number of merchants attracts more new cardholders. In other words, wider acceptance of credit cards among merchants increases their value, which in turn increases the cards' value to cardholders and the number of users. Moreover, credit card services also display a network effect between merchant discounts and credit accessibility: when increased credit accessibility yields greater sales, merchants are willing to accept larger discount fees charged by credit card issuers.
Visa has become a leader in the electronic payment industry through the network effect of credit cards as its competitive advantage. By 2016, Visa's credit card market share had risen from a quarter to as much as half over four years. Visa benefits from the network effect: every additional Visa cardholder makes the card more attractive to merchants, and merchants in turn attract more new cardholders through the brand. In other words, the popularity and convenience of Visa in the electronic payment market lead more people and merchants to choose Visa, which greatly increases Visa's value.
== See also ==
== References ==
== Further reading ==
Chen, Andrew (2021). The Cold Start Problem: How to Start and Scale Network Effects. Harper Business. ISBN 978-0062969743.
== External links ==
Coordination and Lock-In: Competition with Switching Costs and Network Effects, Joseph Farrell and Paul Klemperer.
Network Externalities (Effects), S. J. Liebowitz, Stephen E. Margolis.
An Overview of Network Effects, Arun Sundararajan.
The Economics of Networks, Nicholas Economides.
Network Economics: An Introduction by Anna Nagurney of the Isenberg School of Management at University of Massachusetts Amherst
Supply chain network economics by Anna Nagurney
In graph theory, a random geometric graph (RGG) is the mathematically simplest spatial network, namely an undirected graph constructed by randomly placing N nodes in some metric space (according to a specified probability distribution) and connecting two nodes by a link if and only if their distance is in a given range, e.g. smaller than a certain neighborhood radius, r.
Random geometric graphs resemble real human social networks in a number of ways. For instance, they spontaneously demonstrate community structure - clusters of nodes with high modularity. Other random graph generation algorithms, such as those generated using the Erdős–Rényi model or Barabási–Albert (BA) model do not create this type of structure. Additionally, random geometric graphs display degree assortativity according to their spatial dimension: "popular" nodes (those with many links) are particularly likely to be linked to other popular nodes.
Percolation theory on the random geometric graph (the study of its global connectivity) is sometimes called the Gilbert disk model after the work of Edgar Gilbert, who introduced these graphs and percolation in them in a 1961 paper. A real-world application of RGGs is the modeling of ad hoc networks. Furthermore they are used to perform benchmarks for graph algorithms.
== Definition ==
In the following, let G = (V, E) denote an undirected graph with a set of vertices V and a set of edges E ⊆ V × V. The set sizes are denoted by |V| = n and |E| = m. Additionally, if not noted otherwise, the metric space [0,1)^d with the Euclidean distance is considered, i.e. for any points $x, y \in [0,1)^d$ the Euclidean distance of x and y is defined as $d(x,y) = \|x - y\|_2 = \sqrt{\sum_{i=1}^{d} (x_i - y_i)^2}$.
A random geometric graph (RGG) is an undirected geometric graph with nodes randomly sampled from the uniform distribution on the underlying space [0,1)^d. Two vertices p, q ∈ V are connected if, and only if, their distance is less than a previously specified parameter r ∈ (0,1), excluding any loops. Thus, the parameters r and n fully characterize a RGG.
== Algorithms ==
=== Naive algorithm ===
The naive approach is to calculate the distance of every vertex to every other vertex. As there are $\frac{n(n-1)}{2}$ possible connections that are checked, the time complexity of the naive algorithm is $\Theta(n^2)$. The samples are generated by using a random number generator (RNG) on $[0,1)^d$. Practically, one can implement this using d random number generators on $[0,1)$, one RNG for every dimension.
==== Pseudocode ====
V := generateSamples(n)  // Generates n samples in the unit cube.
for each p ∈ V do
    for each q ∈ V \ {p} do
        if distance(p, q) ≤ r then
            addConnection(p, q)  // Add the edge (p, q) to the edge data structure.
        end if
    end for
end for
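The pseudocode above can be sketched as a short, self-contained Python function; the names here are illustrative, and edges are stored once per pair rather than twice as in the pseudocode:

```python
import itertools
import math
import random

def naive_rgg(n, r, d=2, seed=None):
    """Generate a random geometric graph by checking all O(n^2) pairs."""
    rng = random.Random(seed)
    # n samples from the unit cube [0,1)^d, one RNG draw per dimension.
    pos = [tuple(rng.random() for _ in range(d)) for _ in range(n)]
    edges = set()
    for i, j in itertools.combinations(range(n), 2):
        if math.dist(pos[i], pos[j]) <= r:
            edges.add((i, j))  # undirected edge, stored once
    return pos, edges
```

A quick sanity check: with r larger than the diameter of the cube every pair is connected, and with r = 0 no edges form.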
As this algorithm is not scalable (every vertex needs information of every other vertex), Holtgrewe et al. and Funke et al. have introduced new algorithms for this problem.
=== Distributed algorithms ===
==== Holtgrewe et al. ====
This algorithm, which was proposed by Holtgrewe et al., was the first distributed RGG generator algorithm for dimension 2. It partitions the unit square into equal sized cells with side length of at least $r$. For a given number $P = p^2$ of processors, each processor is assigned $\frac{k}{p} \times \frac{k}{p}$ cells, where $k = \lfloor 1/r \rfloor$. For simplicity, $P$ is assumed to be a square number, but this can be generalized to any number of processors. Each processor then generates $\frac{n}{P}$ vertices, which are then distributed to their respective owners. Then the vertices are sorted by the cell number they fall into, for example with Quicksort. Next, each processor sends its adjacent processors the information about the vertices in the border cells, such that each processing unit can calculate the edges in its partition independently of the other units. The expected running time is $O(\frac{n}{P} \log \frac{n}{P})$. An upper bound for the communication cost of this algorithm is given by $T_{\text{all-to-all}}(n/P, P) + T_{\text{all-to-all}}(1, P) + T_{\text{point-to-point}}(n/(k \cdot P) + 2)$, where $T_{\text{all-to-all}}(l, c)$ denotes the time for an all-to-all communication with messages of length l bits to c communication partners and $T_{\text{point-to-point}}(l)$ is the time taken for a point-to-point communication for a message of length l bits.
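The cell partitioning that this generator relies on can be illustrated on a single machine: with cells of side length at least r, the endpoints of any edge lie in the same or in adjacent cells, so only neighbouring cells need to be compared. The following sequential Python sketch (names illustrative, not from the paper) applies this idea in d = 2:

```python
import itertools
import math
import random
from collections import defaultdict

def grid_rgg(n, r, seed=None):
    """RGG on the unit square using a cell grid instead of all-pairs checks."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    k = max(1, int(1.0 / r))  # k x k cells, each of side 1/k >= r
    cells = defaultdict(list)
    for idx, (x, y) in enumerate(pts):
        cells[(min(int(x * k), k - 1), min(int(y * k), k - 1))].append(idx)
    edges = set()
    for (cx, cy), members in cells.items():
        # Compare a cell's points only against the 3x3 block of neighbouring
        # cells; farther cells cannot contain partners within distance r.
        for dx, dy in itertools.product((-1, 0, 1), repeat=2):
            for i in members:
                for j in cells.get((cx + dx, cy + dy), ()):
                    if i < j and math.dist(pts[i], pts[j]) <= r:
                        edges.add((i, j))
    return pts, edges
```

The output is identical to the naive all-pairs scan, but for small r each point is only compared against the few points in its nine surrounding cells.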
Since this algorithm is not communication free, Funke et al. proposed a scalable distributed RGG generator for higher dimensions, which works without any communication between the processing units.
==== Funke et al. ====
The approach used in this algorithm is similar to the approach in Holtgrewe: partition the unit cube into equal sized chunks with side length of at least r. So in d = 2 these will be squares, in d = 3 cubes. As at most $\lfloor 1/r \rfloor$ chunks fit per dimension, the number of chunks is capped at $\lfloor 1/r \rfloor^d$. As before, each processor is assigned $\frac{\lfloor 1/r \rfloor^d}{P}$ chunks, for which it generates the vertices. To achieve a communication free process, each processor then generates the same vertices in the adjacent chunks by exploiting pseudorandomization of seeded hash functions. This way, each processor calculates the same vertices and there is no need for exchanging vertex information.
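The communication-free trick can be sketched as follows: each processor derives a chunk's vertices deterministically from a seed based on the chunk's grid coordinates, so two processors regenerate identical vertices for a shared border chunk without exchanging messages. This is an illustrative sketch, not the authors' exact hashing scheme:

```python
import random

def vertices_in_chunk(cell, k, n_per_chunk, d, global_seed=0):
    """Deterministically generate the vertices of one chunk.

    cell: tuple of d grid coordinates, each in range(k); the chunk side is 1/k.
    Any processor calling this with the same arguments gets the same points.
    """
    seed = global_seed
    for c in cell:  # fold the grid coordinates into one integer seed
        seed = seed * 1_000_003 + c + 1
    rng = random.Random(seed)
    side = 1.0 / k
    origin = [c * side for c in cell]
    return [tuple(origin[i] + rng.random() * side for i in range(d))
            for _ in range(n_per_chunk)]
```

Calling the function twice with the same chunk coordinates yields identical vertices, which is exactly what lets adjacent processors compute border edges independently.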
For dimension 3, Funke et al. showed that the expected running time is $O(\frac{m+n}{P} + \log P)$, without any cost for communication between processing units.
== Properties ==
=== Isolated vertices and connectivity ===
The probability that a single vertex is isolated in a RGG is $(1 - \pi r^2)^{n-1}$. Let $X$ be the random variable counting how many vertices are isolated. Then the expected value of $X$ is $E(X) = n(1 - \pi r^2)^{n-1} = n e^{-\pi r^2 n} - O(r^4 n)$. The term $\mu = n e^{-\pi r^2 n}$ provides information about the connectivity of the RGG. For $\mu \to 0$, the RGG is asymptotically almost surely connected. For $\mu \to \infty$, the RGG is asymptotically almost surely disconnected. And for $\mu = \Theta(1)$, the RGG has a giant component that covers more than $\frac{n}{2}$ vertices and $X$ is Poisson distributed with parameter $\mu$. It follows that if $\mu = \Theta(1)$, the probability that the RGG is connected is $P[X = 0] \sim e^{-\mu}$ and the probability that the RGG is not connected is $P[X > 0] \sim 1 - e^{-\mu}$.
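The prediction μ = n·e^(−πr²n) can be checked empirically. Note that the closed form ignores boundary effects of the unit square (vertices near the border have a clipped neighbourhood), so a simulated count will sit somewhat above the prediction; a Monte Carlo sketch:

```python
import math
import random

def mean_isolated(n, r, trials=100, seed=0):
    """Monte Carlo estimate of the expected number of isolated vertices."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pts = [(rng.random(), rng.random()) for _ in range(n)]
        for i in range(n):
            if all(math.dist(pts[i], pts[j]) > r
                   for j in range(n) if j != i):
                total += 1
    return total / trials

def mu(n, r):
    """The asymptotic prediction n * exp(-pi * r**2 * n)."""
    return n * math.exp(-math.pi * r * r * n)
```

Comparing `mean_isolated(n, r)` with `mu(n, r)` for moderate n shows the two agree up to the boundary correction.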
For any $l_p$-norm ($1 \leq p \leq \infty$) and for any number of dimensions $d > 2$, a RGG possesses a sharp threshold of connectivity at $r \sim \left(\frac{\ln n}{\alpha_{p,d}\, n}\right)^{1/d}$ with constant $\alpha_{p,d}$. In the special case of a two-dimensional space and the Euclidean norm ($d = 2$ and $p = 2$) this yields $r \sim \sqrt{\frac{\ln n}{\pi n}}$.
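For d = 2 the threshold is easy to evaluate numerically; a one-line sketch:

```python
import math

def connectivity_threshold_2d(n):
    """Sharp connectivity threshold r ~ sqrt(ln(n) / (pi * n)) for d = 2."""
    return math.sqrt(math.log(n) / (math.pi * n))
```

For n = 1000 this gives r ≈ 0.047, i.e. roughly a thousand nodes in the unit square need a neighbourhood radius of about 5% of the side length to be asymptotically almost surely connected.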
=== Hamiltonicity ===
It has been shown that in the two-dimensional case, the threshold $r \sim \sqrt{\frac{\ln n}{\pi n}}$ also provides information about the existence of a Hamiltonian cycle (Hamiltonian path). For any $\epsilon > 0$, if $r \sim \sqrt{\frac{\ln n}{(\pi + \epsilon) n}}$, then the RGG asymptotically almost surely has no Hamiltonian cycle, and if $r \sim \sqrt{\frac{\ln n}{(\pi - \epsilon) n}}$ for any $\epsilon > 0$, then the RGG asymptotically almost surely has a Hamiltonian cycle.
=== Clustering coefficient ===
The clustering coefficient of RGGs only depends on the dimension d of the underlying space [0,1)^d. The clustering coefficient is $C_d = 1 - H_d(1)$ for even $d$ and $C_d = \frac{3}{2} - H_d(\frac{1}{2})$ for odd $d$, where
$H_d(x) = \frac{1}{\sqrt{\pi}} \sum_{i=x}^{d/2} \frac{\Gamma(i)}{\Gamma(i + \frac{1}{2})} \left(\frac{3}{4}\right)^{i + \frac{1}{2}}$
For large $d$, this simplifies to $C_d \sim 3 \sqrt{\frac{2}{\pi d}} \left(\frac{3}{4}\right)^{\frac{d+1}{2}}$.
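The formula can be evaluated numerically; the sum runs in integer steps from x up to d/2 (so it starts at 1/2 for odd d). A sketch, checked against the known low-dimensional values C₁ = 3/4 and C₃ = 15/32:

```python
import math

def H(d, x):
    """H_d(x): sum in integer steps from i = x up to d/2."""
    total, i = 0.0, x
    while i <= d / 2 + 1e-9:
        total += math.gamma(i) / math.gamma(i + 0.5) * 0.75 ** (i + 0.5)
        i += 1.0
    return total / math.sqrt(math.pi)

def clustering_coefficient(d):
    """C_d = 1 - H_d(1) for even d, 3/2 - H_d(1/2) for odd d."""
    if d % 2 == 0:
        return 1.0 - H(d, 1.0)
    return 1.5 - H(d, 0.5)
```

Evaluating for growing d shows the coefficient decaying towards zero, matching the large-d asymptotic above.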
== Generalized random geometric graphs ==
In 1988, Bernard Waxman generalised the standard RGG by introducing a probabilistic connection function as opposed to the deterministic one suggested by Gilbert. The example introduced by Waxman was a stretched exponential where two nodes $i$ and $j$ connect with probability given by $H_{ij} = \beta e^{-r_{ij}/r_0}$, where $r_{ij}$ is the Euclidean separation and $\beta$, $r_0$ are parameters determined by the system. This type of RGG with probabilistic connection function is often referred to as a soft random geometric graph, which now has two sources of randomness: the location of nodes (vertices) and the formation of links (edges). This connection function has been generalized further in the literature to $H_{ij} = \beta e^{-(r_{ij}/r_0)^\eta}$, which is often used to study wireless networks without interference. The parameter $\eta$ represents how the signal decays with distance: $\eta = 2$ is free space, $\eta > 2$ models a more cluttered environment like a town ($\eta = 6$ models cities like New York), whilst $\eta < 2$ models highly reflective environments. We notice that $\eta = 1$ gives the Waxman model, whilst as $\eta \to \infty$ and $\beta = 1$ we recover the standard RGG. Intuitively, these connection functions model how the probability of a link being made decays with distance.
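A soft RGG sampler is a small change to the hard-disk generator: instead of a deterministic distance test, each pair links with probability β·exp(−(r_ij/r₀)^η). A minimal Python sketch (names illustrative):

```python
import itertools
import math
import random

def soft_rgg(n, beta, r0, eta, seed=None):
    """Soft random geometric graph on the unit square."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    edges = []
    for i, j in itertools.combinations(range(n), 2):
        r_ij = math.dist(pts[i], pts[j])
        # probabilistic connection function H_ij = beta * exp(-(r_ij/r0)^eta)
        if rng.random() < beta * math.exp(-((r_ij / r0) ** eta)):
            edges.append((i, j))
    return pts, edges
```

Large η with β = 1 recovers the hard RGG: the connection probability approaches 1 inside radius r₀ and 0 outside it.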
=== Overview of some results for Soft RGG ===
In the high density limit for a network with exponential connection function, the number of isolated nodes is Poisson distributed, and the resulting network contains a unique giant component and isolated nodes only. Therefore, by ensuring there are no isolated nodes, in the dense regime the network is asymptotically almost surely fully connected, similar to the results shown for the disk model. Often the properties of these networks, such as betweenness centrality and connectivity, are studied in the limit as the density tends to infinity, which often means border effects become negligible. However, in real life, where networks are finite (although they can still be extremely dense), border effects will impact full connectivity; in fact, it has been shown that full connectivity, with an exponential connection function, is greatly impacted by boundary effects, as nodes near the corner or face of a domain are less likely to connect than those in the bulk. As a result, full connectivity can be expressed as a sum of the contributions from the bulk and the geometry's boundaries. A more general analysis of the connection functions in wireless networks has shown that the probability of full connectivity can be well approximated by a few moments of the connection function and the region's geometry.
== References ==
The dependency network approach provides a system level analysis of the activity and topology of directed networks. The approach extracts causal topological relations between the network's nodes (when the network structure is analyzed), and provides an important step towards inference of causal activity relations between the network nodes (when analyzing the network activity). This methodology was originally introduced for the study of financial data and has since been extended and applied to other systems, such as the immune system, semantic networks, and functional brain networks.
In the case of network activity, the analysis is based on partial correlations. In simple words, the partial (or residual) correlation is a measure of the effect (or contribution) of a given node, say j, on the correlations between another pair of nodes, say i and k. Using this concept, the dependency of one node on another node is calculated for the entire network. This results in a directed weighted adjacency matrix of a fully connected network. Once the adjacency matrix has been constructed, different algorithms can be used to construct the network, such as a threshold network, Minimal Spanning Tree (MST), Planar Maximally Filtered Graph (PMFG), and others.
== Importance ==
The partial correlation based dependency network is a class of correlation network, capable of uncovering hidden relationships between its nodes.
This original methodology was first presented at the end of 2010, published in PLoS ONE. The authors quantitatively uncovered hidden information about the underlying structure of the U.S. stock market, information that was not present in the standard correlation networks. One of the main results of this work is that for the investigated time period (2001–2003), the structure of the network was dominated by companies belonging to the financial sector, which are the hubs in the dependency network. Thus, they were able for the first time to quantitatively show the dependency relationships between the different economic sectors. Following this work, the dependency network methodology has been applied to the study of the immune system, semantic networks, and functional brain networks.
== Overview ==
To be more specific, the partial correlation of the pair (i, k) given j, is the correlation between them after proper subtraction of the correlations between i and j and between k and j. Defined this way, the difference between the correlations and the partial correlations provides a measure of the influence of node j on the correlation. Therefore, we define the influence of node j on node i, or the dependency of node i on node j − D(i,j), to be the sum of the influence of node j on the correlations of node i with all other nodes.
In the case of network topology, the analysis is based on the effect of node deletion on the shortest paths between the network nodes. More specifically, we define the influence of node j on each pair of nodes (i,k) to be the inverse of the topological distance between these nodes in the presence of j minus the inverse distance between them in the absence of node j. Then we define the influence of node j on node i, or the dependency of node i on node j − D(i,j), to be the sum of the influence of node j on the distances between node i with all other nodes k.
== The activity dependency networks ==
=== The node-node correlations ===
The node-node correlations can be calculated by Pearson’s formula:
{\displaystyle C_{i,j}={\frac {\left\langle (X_{i}(n)-\mu _{i})(X_{j}(n)-\mu _{j})\right\rangle }{\sigma _{i}\sigma _{j}}}}
where {\displaystyle X_{i}(n)} and {\displaystyle X_{j}(n)} are the activities of nodes i and j of subject n, μ stands for the average, and σ for the standard deviation (STD) of the dynamics profiles of nodes i and j. Note that the node-node correlations (or, for simplicity, the node correlations) for all pairs of nodes define a symmetric correlation matrix whose {\displaystyle (i,j)} element is the correlation between nodes i and j.
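The correlation matrix above can be sketched in a few lines of Python. This is a minimal illustration, not code from the original work; the array layout (rows as nodes, columns as observations) and all names are assumptions:

```python
import numpy as np

def node_correlations(X):
    """Pearson correlation matrix C for activity data X.

    X is an (N, T) array: N nodes, each observed over T samples
    (e.g. subjects n = 1..T). C[i, j] is the correlation between
    the activity profiles of nodes i and j.
    """
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=1, keepdims=True)      # per-node mean
    sigma = X.std(axis=1, keepdims=True)    # per-node STD
    Z = (X - mu) / sigma                    # standardized activity
    return Z @ Z.T / X.shape[1]             # <(Xi - mu_i)(Xj - mu_j)> / (s_i s_j)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 200))               # toy data: 5 nodes, 200 samples
C = node_correlations(X)
assert np.allclose(C, C.T) and np.allclose(np.diag(C), 1.0)
```

The result agrees with `np.corrcoef(X)`, since the degrees-of-freedom convention cancels in the correlation ratio.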
=== Partial correlations ===
Next we use the resulting node correlations to compute the partial correlations. The first-order partial correlation coefficient is a statistical measure indicating how a third variable affects the correlation between two other variables. The partial correlation between nodes i and k with respect to a third node j, {\displaystyle PC(i,k\mid j)}, is defined as:
{\displaystyle PC(i,k\mid j)={\frac {C(i,k)-C(i,j)C(k,j)}{\sqrt {[1-C^{2}(i,j)][1-C^{2}(k,j)]}}}}
where {\displaystyle C(i,j),C(i,k)} and {\displaystyle C(j,k)} are the node correlations defined above.
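The formula translates directly into code. The following sketch (names and test data are illustrative assumptions) also shows the expected behavior: partialling out a common driver z reduces the apparent correlation between x and y:

```python
import numpy as np

def partial_correlation(C, i, k, j):
    """First-order partial correlation PC(i,k|j) from a correlation matrix C."""
    num = C[i, k] - C[i, j] * C[k, j]
    den = np.sqrt((1 - C[i, j] ** 2) * (1 - C[k, j] ** 2))
    return num / den

rng = np.random.default_rng(1)
z = rng.normal(size=1000)             # common driver
x = z + 0.1 * rng.normal(size=1000)   # x and y correlate mainly through z
y = z + 0.1 * rng.normal(size=1000)
C = np.corrcoef([x, y, z])
pc = partial_correlation(C, 0, 1, 2)  # correlation of x, y given z
assert pc < C[0, 1]                   # near zero once z is accounted for
```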
=== The correlation influence and correlation dependency ===
The relative effect of the correlations {\displaystyle C(i,j)} and {\displaystyle C(j,k)} of node j on the correlation C(i,k) is given by:
{\displaystyle d(i,k\mid j)\equiv C(i,k)-PC(i,k\mid j)}
This avoids the trivial case where node j appears to strongly affect the correlation {\displaystyle C(i,k)} mainly because {\displaystyle C(i,j),C(i,k)} and {\displaystyle C(j,k)} all have small values. We note that this quantity can be viewed either as the correlation dependency of C(i,k) on node j (the term used here) or as the correlation influence of node j on the correlation C(i,k).
=== Node activity dependencies ===
Next, we define the total influence of node j on node i, or the dependency D(i,j) of node i on node j to be:
{\displaystyle D(i,j)={\frac {1}{N-1}}\sum _{k\neq j}^{N-1}d(i,k\mid j)}
As defined, D(i,j) is a measure of the average influence of node j on the correlations C(i,k) over all nodes k not equal to j. The node activity dependencies define a dependency matrix D whose (i,j) element is the dependency of node i on node j. It is important to note that while the correlation matrix C is symmetric, the dependency matrix D is nonsymmetric:
{\displaystyle D(i,j)\neq D(j,i)}
since the influence of node j on node i is not equal to the influence of node i on node j. For this reason, some of the methods used in the analysis of the correlation matrix (e.g., PCA) have to be replaced or are less efficient. Yet there are other methods, such as the ones used here, that can properly account for the non-symmetric nature of the dependency matrix.
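Putting the pieces together, a direct (unoptimized) sketch of the activity dependency matrix follows; this is an illustrative implementation under the assumption that the k = i term may be skipped, since d(i,i|j) is identically zero:

```python
import numpy as np

def dependency_matrix(C):
    """Dependency matrix D from a correlation matrix C.

    d(i,k|j) = C(i,k) - PC(i,k|j) is the correlation influence of
    node j, and D(i,j) averages it over k (the k = i term vanishes,
    so it is skipped without changing the sum).
    """
    N = C.shape[0]
    D = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            total = 0.0
            for k in range(N):
                if k == i or k == j:
                    continue
                pc = (C[i, k] - C[i, j] * C[k, j]) / np.sqrt(
                    (1 - C[i, j] ** 2) * (1 - C[k, j] ** 2))
                total += C[i, k] - pc
            D[i, j] = total / (N - 1)
    return D

rng = np.random.default_rng(2)
X = rng.normal(size=(6, 300))
C = np.corrcoef(X)
D = dependency_matrix(C)     # generally D != D.T
```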
== The structure dependency networks ==
The path influence and distance dependency: the relative effect of node j on the directed path between nodes i and k, {\displaystyle DP(i\rightarrow k\mid j)}, where the shortest topological path assigns each segment a distance of 1, is given by:
{\displaystyle DP(i\rightarrow k\mid j)\equiv {\frac {1}{td(i\rightarrow k\mid j^{+})}}-{\frac {1}{td(i\rightarrow k\mid j^{-})}}}
where {\displaystyle td(i\rightarrow k\mid j^{+})} and {\displaystyle td(i\rightarrow k\mid j^{-})} are the shortest directed topological paths from node i to node k in the presence and in the absence of node j, respectively.
=== Node structural dependencies ===
Next, we define the total influence of node j on node i, or the dependency D(i,j) of node i on node j to be:
{\displaystyle D(i,j)={\frac {1}{N-1}}\sum _{k=1}^{N-1}DP(i\rightarrow k\mid j)}
As defined, D(i,j) is a measure of the average influence of node j on the directed paths from node i to all other nodes k. The node structural dependencies define a dependency matrix D whose (i,j) element is the dependency of node i on node j, or the influence of node j on node i. It is important to note that the dependency matrix D is nonsymmetrical –
{\displaystyle D(i,j)\neq D(j,i)}
since the influence of node j on node i is not equal to the influence of node i on node j.
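The structural dependency can be sketched with plain breadth-first search on a directed adjacency list. This is an illustrative implementation (graph representation and names are assumptions); unreachable pairs contribute 1/∞ = 0:

```python
from collections import deque

def shortest_len(adj, src, dst, removed=None):
    """BFS shortest directed path length; inf if unreachable."""
    if src == dst:
        return 0
    seen, q = {src}, deque([(src, 0)])
    while q:
        u, d = q.popleft()
        for v in adj.get(u, ()):
            if v == removed or v in seen:
                continue
            if v == dst:
                return d + 1
            seen.add(v)
            q.append((v, d + 1))
    return float("inf")

def structural_dependency(adj, nodes):
    """D[i, j]: average effect of deleting j on i's shortest paths."""
    D = {}
    for i in nodes:
        for j in nodes:
            if i == j:
                continue
            total = 0.0
            for k in nodes:
                if k in (i, j):
                    continue
                total += (1.0 / shortest_len(adj, i, k)
                          - 1.0 / shortest_len(adj, i, k, removed=j))
            D[i, j] = total / (len(nodes) - 1)
    return D

# Star topology: every path from "a" to the others runs through hub "h".
nodes = ["a", "b", "c", "h"]
adj = {"a": ["h"], "b": ["h"], "c": ["h"], "h": ["a", "b", "c"]}
D = structural_dependency(adj, nodes)
assert D["a", "h"] > D["a", "b"]   # a depends on the hub, not on b
```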
== Visualization of the dependency network ==
The dependency matrix is a weighted adjacency matrix representing a fully connected network. Different algorithms can be applied to filter the fully connected network to obtain the most meaningful information, such as a threshold approach or various pruning algorithms. A widely used method to construct an informative sub-graph of a complete network is the Minimum Spanning Tree (MST). Another informative sub-graph, which retains more information (in comparison to the MST), is the Planar Maximally Filtered Graph (PMFG), which is used here. Both methods are based on hierarchical clustering, and the resulting sub-graphs include all N nodes of the network, with edges representing the most relevant associations. The MST sub-graph contains
{\displaystyle (N-1)} edges with no loops, while the PMFG sub-graph contains {\displaystyle 3(N-2)} edges.
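As a minimal sketch of the MST-style filter (not the PMFG, and not the hierarchical-clustering variant of the original work), one can keep the N−1 strongest links with Kruskal's algorithm; treating larger weights as stronger associations is an assumption of this example:

```python
def mst_edges(weights):
    """Kruskal spanning tree over a dense symmetric weight matrix.

    Sorts edges by descending weight (a maximum spanning tree) and
    keeps the N-1 strongest edges that connect all nodes without loops.
    """
    n = len(weights)
    parent = list(range(n))

    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    edges = sorted(((weights[i][j], i, j)
                    for i in range(n) for j in range(i + 1, n)),
                   reverse=True)
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                  # adding (i, j) creates no loop
            parent[ri] = rj
            tree.append((i, j, w))
            if len(tree) == n - 1:
                break
    return tree

W = [[0, .9, .1, .2],
     [.9, 0, .8, .3],
     [.1, .8, 0, .7],
     [.2, .3, .7, 0]]
tree = mst_edges(W)
assert len(tree) == len(W) - 1        # N-1 edges, no loops
```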
== See also ==
Semantic lexicon
Dependency network (graphical model)
== References ==
In telecommunications, packet switching is a method of grouping data into short messages in fixed format, i.e. packets, that are transmitted over a digital network. Packets consist of a header and a payload. Data in the header is used by networking hardware to direct the packet to its destination, where the payload is extracted and used by an operating system, application software, or higher layer protocols. Packet switching is the primary basis for data communications in computer networks worldwide.
During the early 1960s, American engineer Paul Baran developed a concept he called distributed adaptive message block switching, with the goal of providing a fault-tolerant, efficient routing method for telecommunication messages as part of a research program at the RAND Corporation, funded by the United States Department of Defense. His ideas contradicted then-established principles of pre-allocation of network bandwidth, exemplified by the development of telecommunications in the Bell System. The new concept found little resonance among network implementers until the independent work of Welsh computer scientist Donald Davies at the National Physical Laboratory in 1965. Davies coined the term packet switching and inspired numerous packet switching networks in the decade following, including the incorporation of the concept into the design of the ARPANET in the United States and the CYCLADES network in France. The ARPANET and CYCLADES were the primary precursor networks of the modern Internet.
== Concept ==
A simple definition of packet switching is:
The routing and transferring of data by means of addressed packets so that a channel is occupied during the transmission of the packet only, and upon completion of the transmission the channel is made available for the transfer of other traffic.
Packet switching allows delivery of variable bit rate data streams, realized as sequences of short messages in fixed format, i.e. packets, over a computer network which allocates transmission resources as needed using statistical multiplexing or dynamic bandwidth allocation techniques. As they traverse networking hardware, such as switches and routers, packets are received, buffered, queued, and retransmitted (stored and forwarded), resulting in variable latency and throughput depending on the link capacity and the traffic load on the network. Packets are normally forwarded by intermediate network nodes asynchronously using first-in, first-out buffering, but may be forwarded according to some scheduling discipline for fair queuing, traffic shaping, or for differentiated or guaranteed quality of service, such as weighted fair queuing or leaky bucket. Packet-based communication may be implemented with or without intermediate forwarding nodes (switches and routers). In case of a shared physical medium (such as radio or 10BASE5), the packets may be delivered according to a multiple access scheme.
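The store-and-forward, first-in first-out behavior described above can be sketched in a few lines of Python. This is a toy model, not any real router's implementation; the class and field names are assumptions:

```python
from collections import deque

class Node:
    """A forwarding node with a FIFO buffer (store and forward)."""
    def __init__(self, name):
        self.name = name
        self.routes = {}              # destination -> next-hop Node
        self.buffer = deque()

    def receive(self, packet):
        self.buffer.append(packet)            # store

    def forward_one(self):
        if not self.buffer:
            return None
        packet = self.buffer.popleft()        # first in, first out
        if packet["dest"] == self.name:
            return packet                     # delivered locally
        self.routes[packet["dest"]].receive(packet)   # forward to next hop
        return None

# A simple chain A -> B -> C: the packet is buffered at each hop.
a, b, c = Node("A"), Node("B"), Node("C")
a.routes["C"] = b
b.routes["C"] = c
a.receive({"dest": "C", "payload": "hi"})
a.forward_one()
b.forward_one()
delivered = c.forward_one()
assert delivered["payload"] == "hi"
```

Variable latency falls out naturally: a packet waits behind whatever is already queued at each hop.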
Packet switching contrasts with another principal networking paradigm, circuit switching, a method which pre-allocates dedicated network bandwidth specifically for each communication session, each having a constant bit rate and latency between nodes. In cases of billable services, such as cellular communication services, circuit switching is characterized by a fee per unit of connection time, even when no data is transferred, while packet switching may be characterized by a fee per unit of information transmitted, such as characters, packets, or messages.
A packet switch has four components: input ports, output ports, routing processor, and switching fabric.
== History ==
=== Invention and development ===
The concept of switching small blocks of data was first explored independently by Paul Baran at the RAND Corporation during the early 1960s in the US and Donald Davies at the National Physical Laboratory (NPL) in the UK in 1965.
In the late 1950s, the US Air Force established a wide area network for the Semi-Automatic Ground Environment (SAGE) radar defense system. Recognizing vulnerabilities in this network, the Air Force sought a system that might survive a nuclear attack to enable a response, thus diminishing the attractiveness of the first strike advantage by enemies (see Mutual assured destruction). In the early 1960s, Baran invented the concept of distributed adaptive message block switching in support of the Air Force initiative. The concept was first presented to the Air Force in the summer of 1961 as briefing B-265, later published as RAND report P-2626 in 1962, and finally in report RM 3420 in 1964. The reports describe a general architecture for a large-scale, distributed, survivable communications network. The proposal was composed of three key ideas: use of a decentralized network with multiple paths between any two points; dividing user messages into message blocks; and delivery of these messages by store and forward switching. Baran's network design was focused on digital communication of voice messages using switches that were low-cost electronics.
Christopher Strachey, who became Oxford University's first Professor of Computation, filed a patent application in the United Kingdom for time-sharing in February 1959. In June that year, he gave a paper "Time Sharing in Large Fast Computers" at the UNESCO Information Processing Conference in Paris where he passed the concept on to J. C. R. Licklider. Licklider (along with John McCarthy) was instrumental in the development of time-sharing. After conversations with Licklider about time-sharing with remote computers in 1965, Davies independently invented a similar data communication concept, using short messages in fixed format with high data transmission rates to achieve rapid communications. He went on to develop a more advanced design for a hierarchical, high-speed computer network including interface computers and communication protocols. He coined the term packet switching, and proposed building a commercial nationwide data network in the UK. He gave a talk on the proposal in 1966, after which a person from the Ministry of Defence (MoD) told him about Baran's work.
Roger Scantlebury, a member of Davies' team, presented their work (and referenced that of Baran) at the October 1967 Symposium on Operating Systems Principles (SOSP). At the conference, Scantlebury proposed packet switching for use in the ARPANET and persuaded Larry Roberts that the economics were favorable compared to message switching. Davies had chosen some of the same parameters for his original network design as did Baran, such as a packet size of 1024 bits. To deal with packet permutations (due to dynamically updated route preferences) and datagram losses (unavoidable when fast sources send to slow destinations), he assumed that "all users of the network will provide themselves with some kind of error control", thus inventing what came to be known as the end-to-end principle. Davies proposed that a local-area network should be built at the laboratory to serve the needs of NPL and prove the feasibility of packet switching. After a pilot experiment in early 1969, the NPL Data Communications Network began service in 1970. Davies was invited to Japan to give a series of lectures on packet switching. The NPL team carried out simulation work on datagrams and congestion in networks on a scale to provide data communication across the United Kingdom.
Larry Roberts made the key decisions in the request for proposal to build the ARPANET. Roberts met Baran in February 1967, but did not discuss networks. He asked Frank Westervelt to explore the questions of message size and contents for the network, and to write a position paper on the intercomputer communication protocol including “conventions for character and block transmission, error checking and re-transmission, and computer and user identification.” Roberts revised his initial design, which was to connect the host computers directly, to incorporate Wesley Clark's idea to use Interface Message Processors (IMPs) to create a message switching network, which he presented at SOSP. Roberts was known for making decisions quickly. Immediately after SOSP, he incorporated Davies' and Baran's concepts and designs for packet switching to enable the data communications on the network.
A contemporary of Roberts' from MIT, Leonard Kleinrock had researched the application of queueing theory in the field of message switching for his doctoral dissertation in 1961–62 and published it as a book in 1964. Davies, in his 1966 paper on packet switching, applied Kleinrock's techniques to show that "there is an ample margin between the estimated performance of the [packet-switched] system and the stated requirement" in terms of a satisfactory response time for a human user. This addressed a key question about the viability of computer networking. Larry Roberts brought Kleinrock into the ARPANET project informally in early 1967. Roberts and Taylor recognized that the issue of response time was important, but did not apply Kleinrock's methods to assess this and based their design on a store-and-forward system that was not intended for real-time computing. After SOSP, and after Roberts' direction to use packet switching, Kleinrock sought input from Baran and proposed to retain Baran and RAND as advisors. The ARPANET working group assigned Kleinrock responsibility to prepare a report on software for the IMP. In 1968, Roberts awarded Kleinrock a contract to establish a Network Measurement Center (NMC) at UCLA to measure and model the performance of packet switching in the ARPANET.
Bolt Beranek & Newman (BBN) won the contract to build the network. Designed principally by Bob Kahn, it was the first wide-area packet-switched network with distributed control. The BBN "IMP Guys" independently developed significant aspects of the network's internal operation, including the routing algorithm, flow control, software design, and network control. The UCLA NMC and the BBN team also investigated network congestion. The Network Working Group, led by Steve Crocker, a graduate student of Kleinrock's at UCLA, developed the host-to-host protocol, the Network Control Program, which was approved by Barry Wessler for ARPA, after he ordered certain more exotic elements to be dropped. In 1970, Kleinrock extended his earlier analytic work on message switching to packet switching in the ARPANET. His work influenced the development of the ARPANET and packet-switched networks generally.
The ARPANET was demonstrated at the International Conference on Computer Communication (ICCC) in Washington in October 1972. However, fundamental questions about the design of packet-switched networks remained.
Roberts presented the idea of packet switching to communication industry professionals in the early 1970s. Before the ARPANET was operating, they argued that the router buffers would quickly run out. After the ARPANET was operating, they argued that packet switching would never be economic without government subsidy. Baran had faced the same rejection and thus had failed to convince the military to construct a packet switching network in the 1960s.
The CYCLADES network was designed by Louis Pouzin in the early 1970s to study internetworking. It was the first to implement the end-to-end principle of Davies and to make the host computers responsible for the reliable delivery of data on a packet-switched network, rather than this being a service of the network itself. His team was thus the first to tackle the highly complex problem of providing user applications with a reliable virtual circuit service while using a best-effort service, an early contribution to what would become the Transmission Control Protocol (TCP).
Bob Metcalfe and others at Xerox PARC outlined the idea of Ethernet and the PARC Universal Packet (PUP) for internetworking.
In May 1974, Vint Cerf and Bob Kahn described the Transmission Control Program, an internetworking protocol for sharing resources using packet-switching among the nodes. The specifications of the TCP were then published in RFC 675 (Specification of Internet Transmission Control Program), written by Vint Cerf, Yogen Dalal and Carl Sunshine in December 1974.
The X.25 protocol, developed by Rémi Després and others, was built on the concept of virtual circuits. In the mid-late 1970s and early 1980s, national and international public data networks emerged using X.25 which was developed with participation from France, the UK, Japan, USA and Canada. It was complemented with X.75 to enable internetworking.
Packet switching was shown to be optimal in the Huffman coding sense in 1978.
In the late 1970s, the monolithic Transmission Control Program was layered as the Transmission Control Protocol (TCP), atop the Internet Protocol (IP). Many Internet pioneers developed this into the Internet protocol suite and the associated Internet architecture and governance that emerged in the 1980s.
For a period in the 1980s and early 1990s, the network engineering community was polarized over the implementation of competing protocol suites, commonly known as the Protocol Wars. It was unclear which of the Internet protocol suite and the OSI model would result in the best and most robust computer networks.
Leonard Kleinrock carried out theoretical work at UCLA during the 1970s analyzing throughput and delay in the ARPANET. His theoretical work on hierarchical routing with student Farouk Kamoun became critical to the operation of the Internet. Kleinrock published hundreds of research papers, which ultimately launched a new field of research on the theory and application of queuing theory to computer networks.
Complementary metal–oxide–semiconductor (CMOS) VLSI (very-large-scale integration) technology led to the development of high-speed broadband packet switching during the 1980s–1990s.
=== The "paternity dispute" ===
Roberts claimed in later years that, by the time of the October 1967 SOSP, he already had the concept of packet switching in mind (although not yet named and not written down in his paper published at the conference, which a number of sources describe as "vague"), and that this originated with his old colleague, Kleinrock, who had written about such concepts in his Ph.D. research in 1961-2. In 1997, along with seven other Internet pioneers, Roberts and Kleinrock co-wrote "Brief History of the Internet" published by the Internet Society. In it, Kleinrock is described as having "published the first paper on packet switching theory in July 1961 and the first book on the subject in 1964". Many sources about the history of the Internet began to reflect these claims as uncontroversial facts. This became the subject of what Katie Hafner called a "paternity dispute" in The New York Times in 2001.
The disagreement about Kleinrock's contribution to packet switching dates back to a version of the above claim made on Kleinrock's profile on the UCLA Computer Science department website sometime in the 1990s. Here, he was referred to as the "Inventor of the Internet Technology". The webpage's depictions of Kleinrock's achievements provoked anger among some early Internet pioneers. The dispute over priority became a public issue after Donald Davies posthumously published a paper in 2001 in which he denied that Kleinrock's work was related to packet switching. Davies also described ARPANET project manager Larry Roberts as supporting Kleinrock, referring to Roberts' writings online and Kleinrock's UCLA webpage profile as "very misleading". Walter Isaacson wrote that Kleinrock's claims "led to an outcry among many of the other Internet pioneers, who publicly attacked Kleinrock and said that his brief mention of breaking messages into smaller pieces did not come close to being a proposal for packet switching".
Davies' paper reignited a previous dispute over who deserves credit for getting the ARPANET online between engineers at Bolt, Beranek, and Newman (BBN) who had been involved in building and designing the ARPANET IMP on the one side, and ARPA-related researchers on the other. This earlier dispute is exemplified by BBN's Will Crowther, who in a 1990 oral history described Paul Baran's packet switching design (which he called hot-potato routing), as "crazy" and non-sensical, despite the ARPA team having advocated for it. The reignited debate caused other former BBN employees to make their concerns known, including Alex McKenzie, who followed Davies in disputing that Kleinrock's work was related to packet switching, stating "... there is nothing in the entire 1964 book that suggests, analyzes, or alludes to the idea of packetization".
Former IPTO director Bob Taylor also joined the debate, stating that "authors who have interviewed dozens of Arpanet pioneers know very well that the Kleinrock-Roberts claims are not believed". Walter Isaacson notes that "until the mid-1990s Kleinrock had credited [Baran and Davies] with coming up with the idea of packet switching".
A subsequent version of Kleinrock's biography webpage was copyrighted in 2009 by Kleinrock. He was called on to defend his position over subsequent decades. In 2023, he acknowledged that his published work in the early 1960s was about message switching and claimed he was thinking about packet switching. Primary sources and historians recognize Baran and Davies for independently inventing the concept of digital packet switching used in modern computer networking including the ARPANET and the Internet.
Kleinrock has received many awards for his ground-breaking applied mathematical research on packet switching, carried out in the 1970s, which was an extension of his pioneering work in the early 1960s on the optimization of message delays in communication networks. However, Kleinrock's claims that his work in the early 1960s originated the concept of packet switching and that his work was a source of the packet switching concepts used in the ARPANET have affected sources on the topic, which has created methodological challenges in the historiography of the Internet. Historian Andrew L. Russell said "'Internet history' also suffers from a ... methodological, problem: it tends to be too close to its sources. Many Internet pioneers are alive, active, and eager to shape the histories that describe their accomplishments. Many museums and historians are equally eager to interview the pioneers and to publicize their stories".
== Connectionless and connection-oriented modes ==
Packet switching may be classified into connectionless packet switching, also known as datagram switching, and connection-oriented packet switching, also known as virtual circuit switching. Examples of connectionless systems are Ethernet, IP, and the User Datagram Protocol (UDP). Connection-oriented systems include X.25, Frame Relay, Multiprotocol Label Switching (MPLS), and TCP.
In connectionless mode each packet is labeled with a destination address, source address, and port numbers. It may also be labeled with the sequence number of the packet. This information eliminates the need for a pre-established path to help the packet find its way to its destination, but means that more information is needed in the packet header, which is therefore larger. The packets are routed individually, sometimes taking different paths resulting in out-of-order delivery. At the destination, the original message may be reassembled in the correct order, based on the packet sequence numbers. Thus a virtual circuit carrying a byte stream is provided to the application by a transport layer protocol, although the network only provides a connectionless network layer service.
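Reassembly from sequence numbers, as described above, can be illustrated with a short sketch (the packet representation is an assumption of this example):

```python
def reassemble(packets):
    """Rebuild the original message from packets that may have
    arrived out of order, using the sequence number in each header."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in ordered)

arrived = [                         # out-of-order delivery
    {"seq": 2, "payload": b" world"},
    {"seq": 1, "payload": b"hello"},
]
assert reassemble(arrived) == b"hello world"
```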
Connection-oriented transmission requires a setup phase to establish the parameters of communication before any packet is transferred. The signaling protocols used for setup allow the application to specify its requirements and discover link parameters. Acceptable values for service parameters may be negotiated. The packets transferred may include a connection identifier rather than address information and the packet header can be smaller, as it only needs to contain this code and any information, such as length, timestamp, or sequence number, which is different for different packets. In this case, address information is only transferred to each node during the connection setup phase, when the route to the destination is discovered and an entry is added to the switching table in each network node through which the connection passes. When a connection identifier is used, routing a packet requires the node to look up the connection identifier in a table.
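A minimal sketch of the per-node switching table follows. It is a simplified model (names and packet fields are assumptions), showing how data packets carry only a short connection identifier that each node looks up and rewrites, in the spirit of X.25 or MPLS label swapping:

```python
class VirtualCircuitSwitch:
    """Per-node switching table: incoming id -> (output port, outgoing id)."""
    def __init__(self):
        self.table = {}

    def setup(self, in_id, out_port, out_id):
        # Entry added once, during the connection setup phase.
        self.table[in_id] = (out_port, out_id)

    def forward(self, packet):
        # Data packets carry only a connection id, not a full address.
        out_port, out_id = self.table[packet["conn"]]
        packet["conn"] = out_id          # swap the label for the next hop
        return out_port, packet

sw = VirtualCircuitSwitch()
sw.setup(in_id=5, out_port=2, out_id=9)
port, pkt = sw.forward({"conn": 5, "payload": b"data"})
assert (port, pkt["conn"]) == (2, 9)
```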
Connection-oriented transport layer protocols such as TCP provide a connection-oriented service by using an underlying connectionless network. In this case, the end-to-end principle dictates that the end nodes, not the network itself, are responsible for the connection-oriented behavior.
== Packet switching in networks ==
In telecommunication networks, packet switching is used to optimize the usage of channel capacity and increase robustness. Compared to circuit switching, packet switching is highly dynamic, allocating channel capacity based on usage instead of explicit reservations. This can reduce wasted capacity caused by underutilized reservations at the cost of removing bandwidth guarantees. In practice, congestion control is generally used in IP networks to dynamically negotiate capacity between connections. Packet switching may also increase the robustness of networks in the face of failures. If a node fails, connections do not need to be interrupted, as packets may be routed around the failure.
Packet switching is used in the Internet and most local area networks. The Internet is implemented by the Internet Protocol Suite using a variety of link layer technologies. For example, Ethernet and Frame Relay are common. Newer mobile phone technologies (e.g., GSM, LTE) also use packet switching. Packet switching is associated with connectionless networking because, in these systems, no connection agreement needs to be established between communicating parties prior to exchanging data.
X.25, the international CCITT standard of 1976, is a notable use of packet switching in that it provides to users a service of flow-controlled virtual circuits. These virtual circuits reliably carry variable-length packets with data order preservation. DATAPAC in Canada was the first public network to support X.25, followed by TRANSPAC in France.
Asynchronous Transfer Mode (ATM) is another virtual circuit technology. It differs from X.25 in that it uses small fixed-length packets (cells), and that the network imposes no flow control to users.
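Segmentation into fixed-length cells can be sketched as follows; ATM cells do carry a 48-byte payload (plus a 5-byte header), but the zero-padding of the final cell here is a simplification of the real ATM adaptation layers:

```python
CELL_PAYLOAD = 48   # ATM cells carry a fixed 48-byte payload (plus 5-byte header)

def segment(data):
    """Split a variable-length message into fixed-size cell payloads,
    padding the last cell with zero bytes."""
    cells = []
    for off in range(0, len(data), CELL_PAYLOAD):
        chunk = data[off:off + CELL_PAYLOAD]
        cells.append(chunk.ljust(CELL_PAYLOAD, b"\x00"))
    return cells

cells = segment(b"x" * 100)                       # 100 bytes -> 48 + 48 + 4
assert len(cells) == 3 and all(len(c) == 48 for c in cells)
```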
Technologies such as MPLS and the Resource Reservation Protocol (RSVP) create virtual circuits on top of datagram networks. MPLS and its predecessors, as well as ATM, have been called "fast packet" technologies. MPLS, indeed, has been called "ATM without cells". Virtual circuits are especially useful in building robust failover mechanisms and allocating bandwidth for delay-sensitive applications.
== Packet-switched networks ==
Donald Davies' work in the late 1960s on data communications and computer network design became well known in the United States, Europe and Japan and was the "cornerstone" that inspired numerous packet switching networks in the decade following.
The history of packet-switched networks can be divided into three overlapping eras: early networks before the introduction of X.25; the X.25 era when many postal, telephone, and telegraph (PTT) companies provided public data networks with X.25 interfaces; and the Internet era which initially competed with the OSI model.
=== Early networks ===
Research into packet switching at the National Physical Laboratory (NPL) began with a proposal for a wide-area network in 1965, and a local-area network in 1966. ARPANET funding was secured in 1966 by Bob Taylor, and planning began in 1967 when he hired Larry Roberts. The NPL network followed by the ARPANET became operational in 1969, the first two networks to use packet switching. Larry Roberts said many of the packet switching networks built in the 1970s were similar "in nearly all respects" to Donald Davies' original 1965 design. The ARPANET and Louis Pouzin's CYCLADES were the primary precursor networks of the modern Internet. CYCLADES, unlike ARPANET, was explicitly designed to research internetworking.
Before the introduction of X.25 in 1976, about twenty different network technologies had been developed. Two fundamental differences involved the division of functions and tasks between the hosts at the edge of the network and the network core. In the datagram system, operating according to the end-to-end principle, the hosts have the responsibility to ensure orderly delivery of packets. In the virtual call system, the network guarantees sequenced delivery of data to the host. This results in a simpler host interface but complicates the network. The X.25 protocol suite uses this network type.
==== AppleTalk ====
AppleTalk is a proprietary suite of networking protocols developed by Apple in 1985 for Apple Macintosh computers. It was the primary protocol used by Apple devices through the 1980s and 1990s. AppleTalk included features that allowed local area networks to be established ad hoc without the requirement for a centralized router or server. The AppleTalk system automatically assigned addresses, updated the distributed namespace, and configured any required inter-network routing. It was a plug-n-play system.
AppleTalk implementations were also released for the IBM PC and compatibles, and the Apple IIGS. AppleTalk support was available in most networked printers, especially laser printers, some file servers and routers.
The protocol was designed to be simple, autoconfiguring, and not require servers or other specialized services to work. These benefits also created drawbacks, as AppleTalk tended not to use bandwidth efficiently. AppleTalk support was terminated in 2009.
==== ARPANET ====
The ARPANET was a progenitor network of the Internet and one of the first networks, along with ARPA's SATNET, to run the TCP/IP suite using packet switching technologies.
==== BNRNET ====
BNRNET was a network which Bell-Northern Research developed for internal use. It initially had only one host but was designed to support many hosts. BNR later made major contributions to the CCITT X.25 project.
==== Cambridge Ring ====
The Cambridge Ring was an experimental ring network developed at the Computer Laboratory, University of Cambridge. It operated from 1974 until the 1980s.
==== CompuServe ====
CompuServe developed its own packet switching network, implemented on DEC PDP-11 minicomputers acting as network nodes that were installed throughout the US (and later, in other countries) and interconnected. Over time, the CompuServe network evolved into a complicated multi-tiered network incorporating ATM, Frame Relay, IP and X.25 technologies.
==== CYCLADES ====
The CYCLADES packet switching network was a French research network designed and directed by Louis Pouzin. First demonstrated in 1973, it was developed to explore alternatives to the early ARPANET design and to support network research generally. It was the first network to use the end-to-end principle and make the hosts responsible for reliable delivery of data, rather than the network itself. Concepts of this network influenced later ARPANET architecture.
==== DECnet ====
DECnet is a suite of network protocols created by Digital Equipment Corporation, originally released in 1975 in order to connect two PDP-11 minicomputers. It evolved into one of the first peer-to-peer network architectures, thus transforming DEC into a networking powerhouse in the 1980s. Initially built with three layers, it later (1982) evolved into a seven-layer OSI-compliant networking protocol. The DECnet protocols were designed entirely by Digital Equipment Corporation. However, DECnet Phase II (and later) were open standards with published specifications, and several implementations were developed outside DEC, including one for Linux.
==== DDX-1 ====
DDX-1 was an experimental network from Nippon PTT. It mixed circuit switching and packet switching. It was succeeded by DDX-2.
==== EIN ====
The European Informatics Network (EIN), originally called COST 11, was a project beginning in 1971 to link networks in Britain, France, Italy, Switzerland and Euratom. Six other European countries also participated in the research on network protocols. Derek Barber directed the project, and Roger Scantlebury led the UK technical contribution; both were from NPL. The contract for its implementation was awarded to an Anglo-French consortium led by the UK systems house Logica and the French company SESA, and managed by Andrew Karney. Work began in 1973 and the network became operational in 1976, with nodes linking the NPL network and CYCLADES. Barber proposed and implemented a mail protocol for EIN. The transport protocol of the EIN helped to launch the INWG and X.25 protocols. EIN was replaced by Euronet in 1979.
==== EPSS ====
The Experimental Packet Switched Service (EPSS) was an experiment of the UK Post Office Telecommunications. It was the first public data network in the UK when it began operating in 1976. Ferranti supplied the hardware and software. The handling of link control messages (acknowledgements and flow control) was different from that of most other networks.
==== GEIS ====
As General Electric Information Services (GEIS), General Electric was a major international provider of information services. The company originally designed a telephone network to serve as its internal (albeit continent-wide) voice telephone network.
In 1965, at the instigation of Warner Sinback, a data network based on this voice-phone network was designed to connect GE's four computer sales and service centers (Schenectady, New York, Chicago, and Phoenix) to facilitate a computer time-sharing service.
After going international some years later, GEIS created a network data center near Cleveland, Ohio. Very little has been published about the internal details of their network. The design was hierarchical with redundant communication links.
==== IPSANET ====
IPSANET was a semi-private network constructed by I. P. Sharp Associates to serve their time-sharing customers. It became operational in May 1976.
==== IPX/SPX ====
The Internetwork Packet Exchange (IPX) and Sequenced Packet Exchange (SPX) are Novell networking protocols from the 1980s, derived from Xerox Network Systems' IDP and SPP protocols, respectively, which date back to the 1970s. IPX/SPX was used primarily on networks using the Novell NetWare operating systems.
==== Merit Network ====
Merit Network, an independent nonprofit organization governed by Michigan's public universities, was formed in 1966 as the Michigan Educational Research Information Triad to explore computer networking between three of Michigan's public universities as a means to help the state's educational and economic development. With initial support from the State of Michigan and the National Science Foundation (NSF), the packet-switched network was first demonstrated in December 1971 when an interactive host-to-host connection was made between the IBM mainframe systems at the University of Michigan in Ann Arbor and Wayne State University in Detroit. In October 1972, connections to the CDC mainframe at Michigan State University in East Lansing completed the triad. Over the next several years, in addition to host-to-host interactive connections, the network was enhanced to support terminal-to-host connections, host-to-host batch connections (remote job submission, remote printing, batch file transfer), interactive file transfer, gateways to the Tymnet and Telenet public data networks, X.25 host attachments, gateways to X.25 data networks, Ethernet attached hosts, and eventually TCP/IP; additionally, public universities in Michigan joined the network. All of this set the stage for Merit's role in the NSFNET project starting in the mid-1980s.
==== NPL ====
Donald Davies of the National Physical Laboratory (United Kingdom) designed and proposed a national commercial data network based on packet switching in 1965. The proposal was not taken up nationally but the following year, he designed a local network using "interface computers", today known as routers, to serve the needs of NPL and prove the feasibility of packet switching.
By 1968 Davies had begun building the NPL network to meet the needs of the multidisciplinary laboratory and prove the technology under operational conditions. In 1969 the NPL network, followed by the ARPANET, became the first two networks to use packet switching. By 1976, 12 computers and 75 terminal devices were attached, and more were added until the network was replaced in 1986. The NPL network was the first to use high-speed links.
==== Octopus ====
Octopus was a local network at Lawrence Livermore National Laboratory. It connected sundry hosts at the lab to interactive terminals and various computer peripherals including a bulk storage system.
==== Philips Research ====
Philips Research Laboratories in Redhill, Surrey developed a packet switching network for internal use. It was a datagram network with a single switching node.
==== PUP ====
PARC Universal Packet (PUP or Pup) was one of the two earliest internetworking protocol suites; it was created by researchers at Xerox PARC in the mid-1970s. The entire suite provided routing and packet delivery, as well as higher level functions such as a reliable byte stream, along with numerous applications. Further developments led to Xerox Network Systems (XNS).
==== RCP ====
RCP was an experimental network created by the French PTT. It was used to gain experience with packet switching technology before the specification of the TRANSPAC public network was frozen. RCP was a virtual-circuit network in contrast to CYCLADES which was based on datagrams. RCP emphasised terminal-to-host and terminal-to-terminal connection; CYCLADES was concerned with host-to-host communication. RCP influenced the X.25 specification, which was deployed on TRANSPAC and other public data networks.
==== RETD ====
Red Especial de Transmisión de Datos (RETD) was a network developed by Compañía Telefónica Nacional de España. It became operational in 1972 and thus was the first public packet-switched network.
==== SCANNET ====
The experimental packet-switched Nordic telecommunication network SCANNET was implemented in Nordic technical libraries in the 1970s, and it carried the first Nordic electronic journal, Extemplo. These libraries were also among the first in universities to accommodate microcomputers for public use in the early 1980s.
==== SITA HLN ====
SITA is a consortium of airlines. Its High Level Network (HLN) became operational in 1969. Although organised to act like a packet-switching network, it still used message switching. As with many non-academic networks, very little has been published about it.
==== SRCnet/SERCnet ====
A number of computer facilities serving the Science Research Council (SRC) community in the United Kingdom were developed beginning in the early 1970s. Each had its own star network (ULCC London, UMRCC Manchester, Rutherford Appleton Laboratory). There were also regional networks centred on Bristol (on which work was initiated in the late 1960s), followed in the mid-to-late 1970s by Edinburgh, the Midlands and Newcastle. These groups of institutions shared resources to provide better computing facilities than could be afforded individually. The networks were each based on one manufacturer's standards and were mutually incompatible and overlapping. In 1981, the SRC was renamed the Science and Engineering Research Council (SERC). In the early 1980s a standardisation and interconnection effort started, hosted on an expansion of the SERCnet research network and based on the Coloured Book protocols, later evolving into JANET.
==== Systems Network Architecture ====
Systems Network Architecture (SNA) is IBM's proprietary networking architecture created in 1974. An IBM customer could acquire hardware and software from IBM and lease private lines from a common carrier to construct a private network.
==== Telenet ====
Telenet was the first FCC-licensed public data network in the United States. Telenet was incorporated in 1973 and started operations in 1975. It was founded by Bolt Beranek & Newman with Larry Roberts as CEO as a means of making packet switching technology public. Telenet initially used a proprietary virtual circuit host interface, but changed it to X.25, and the terminal interface to X.29, after their standardization in the CCITT. It went public in 1979 and was then sold to GTE.
==== Tymnet ====
Tymnet was an international data communications network headquartered in San Jose, CA. In 1969, it began installing a network based on minicomputers to connect timesharing terminals to its central computers. The network used store-and-forward and voice-grade lines. Routing was not distributed; rather, it was established by a central supervisor on a call-by-call basis.
=== X.25 era ===
There were two kinds of X.25 networks. Some such as DATAPAC and TRANSPAC were initially implemented with an X.25 external interface. Some older networks such as TELENET and TYMNET were modified to provide an X.25 host interface in addition to older host connection schemes. DATAPAC was developed by Bell-Northern Research which was a joint venture of Bell Canada (a common carrier) and Northern Telecom (a telecommunications equipment supplier). Northern Telecom sold several DATAPAC clones to foreign PTTs including the Deutsche Bundespost. X.75 and X.121 allowed the interconnection of national X.25 networks.
==== AUSTPAC ====
AUSTPAC was an Australian public X.25 network operated by Telstra. Established by Telstra's predecessor Telecom Australia in the early 1980s, AUSTPAC was Australia's first public packet-switched data network. It supported applications such as on-line betting, financial applications (the Australian Tax Office made use of AUSTPAC), and remote terminal access to academic institutions, some of which maintained their connections to AUSTPAC until the mid-to-late 1990s. Access was via a dial-up terminal to a PAD, or by linking a permanent X.25 node to the network.
==== ConnNet ====
ConnNet was a network operated by the Southern New England Telephone Company serving the state of Connecticut. Launched on March 11, 1985, it was the first local public packet-switched network in the United States.
==== Datanet 1 ====
Datanet 1 was the public switched data network operated by the Dutch PTT Telecom (now known as KPN). Although strictly speaking Datanet 1 referred only to the network and the users connected via leased lines (using the X.121 DNIC 2041), the name also referred to the public PAD service Telepad (using the DNIC 2049). Because the main Videotex service used the network and modified PAD devices as infrastructure, the name Datanet 1 was used for these services as well.
==== DATAPAC ====
DATAPAC was the first operational X.25 network (1976). It covered major Canadian cities and was eventually extended to smaller centers.
==== Datex-P ====
Deutsche Bundespost operated the Datex-P national network in Germany. The technology was acquired from Northern Telecom.
==== Eirpac ====
Eirpac is the Irish public switched data network supporting X.25 and X.28. It was launched in 1984, replacing Euronet. Eirpac is run by Eircom.
==== Euronet ====
Nine member states of the European Economic Community contracted with Logica and the French company SESA to set up a joint venture in 1975 to undertake the Euronet development, using X.25 protocols to form virtual circuits. It was to replace EIN and established a network in 1979 linking a number of European countries until 1984 when the network was handed over to national PTTs.
==== HIPA-NET ====
Hitachi designed a private network system for sale as a turnkey package to multi-national organizations. In addition to providing X.25 packet switching, message switching software was also included. Messages were buffered at the nodes adjacent to the sending and receiving terminals. Switched virtual calls were not supported, but through the use of logical ports an originating terminal could have a menu of pre-defined destination terminals.
==== Iberpac ====
Iberpac is the Spanish public packet-switched network, providing X.25 services. It was based on RETD which was operational since 1972. Iberpac was run by Telefonica.
==== IPSS ====
In 1978, X.25 provided the first international and commercial packet-switching network, the International Packet Switched Service (IPSS).
==== JANET ====
JANET was the UK academic and research network, linking all universities, higher education establishments, and publicly funded research laboratories following its launch in 1984. The X.25 network, which used the Coloured Book protocols, was based mainly on GEC 4000 series switches, and ran X.25 links at up to 8 Mbit/s in its final phase before being converted to an IP-based network in 1991. The JANET network grew out of the 1970s SRCnet, later called SERCnet.
==== PSS ====
Packet Switch Stream (PSS) was the Post Office Telecommunications (later to become British Telecom) national X.25 network with a DNIC of 2342. British Telecom renamed PSS Global Network Service (GNS), but the PSS name has remained better known. PSS also included public dial-up PAD access, and various InterStream gateways to other services such as Telex.
==== REXPAC ====
REXPAC was the nationwide experimental packet switching data network in Brazil, developed by the research and development center of Telebrás, the state-owned public telecommunications provider.
==== SITA Data Transport Network ====
SITA is a consortium of airlines. Its Data Transport Network adopted X.25 in 1981, becoming the world's most extensive packet-switching network. As with many non-academic networks, very little has been published about it.
==== TRANSPAC ====
TRANSPAC was the national X.25 network in France. It was developed locally at about the same time as DATAPAC in Canada. The development was done by the French PTT and influenced by its preceding experimental network, RCP. It began operation in 1978, and served commercial users and, after Minitel began, consumers.
==== Tymnet ====
Tymnet utilized virtual call packet switched technology including X.25, SNA/SDLC, BSC and ASCII interfaces to connect host computers (servers) at thousands of large companies, educational institutions, and government agencies. Users typically connected via dial-up connections or dedicated asynchronous serial connections. The business consisted of a large public network that supported dial-up users and a private network business that allowed government agencies and large companies (mostly banks and airlines) to build their own dedicated networks. The private networks were often connected via gateways to the public network to reach locations not on the private network. Tymnet was also connected to dozens of other public networks in the U.S. and internationally via X.25/X.75 gateways.
==== UNINETT ====
UNINETT was a wide-area Norwegian packet-switched network established through a joint effort between Norwegian universities, research institutions and the Norwegian Telecommunication administration. The original network was based on X.25; Internet protocols were adopted later.
==== VENUS-P ====
VENUS-P was an international X.25 network that operated from April 1982 through March 2006. At its subscription peak in 1999, VENUS-P connected 207 networks in 87 countries.
==== XNS ====
Xerox Network Systems (XNS) was a protocol suite promulgated by Xerox, which provided routing and packet delivery, as well as higher level functions such as a reliable stream, and remote procedure calls. It was developed from PARC Universal Packet (PUP).
=== Internet era ===
When Internet connectivity was made available to anyone who could pay for an Internet service provider subscription, the distinctions between national networks blurred. The user no longer saw network identifiers such as the DNIC. Some older technologies such as circuit switching have resurfaced with new names such as fast packet switching. Researchers have created some experimental networks to complement the existing Internet.
==== CSNET ====
The Computer Science Network (CSNET) was a computer network funded by the NSF that began operation in 1981. Its purpose was to extend networking benefits for computer science departments at academic and research institutions that could not be directly connected to ARPANET due to funding or authorization limitations. It played a significant role in spreading awareness of, and access to, national networking and was a major milestone on the path to the development of the global Internet.
==== Internet2 ====
Internet2 is a not-for-profit United States computer networking consortium led by members from the research and education communities, industry, and government. The Internet2 community, in partnership with Qwest, built the first Internet2 Network, called Abilene, in 1998 and was a prime investor in the National LambdaRail (NLR) project. In 2006, Internet2 announced a partnership with Level 3 Communications to launch a brand new nationwide network, boosting its capacity from 10 to 100 Gbit/s. In October 2007, Internet2 officially retired Abilene and now refers to its new, higher capacity network as the Internet2 Network.
==== NSFNET ====
The National Science Foundation Network (NSFNET) was a program of coordinated, evolving projects sponsored by the NSF beginning in 1985 to promote advanced research and education networking in the United States. NSFNET was also the name given to several nationwide backbone networks, operating at speeds of 56 kbit/s, 1.5 Mbit/s (T1), and 45 Mbit/s (T3), that were constructed to support NSF's networking initiatives from 1985 to 1995. Initially created to link researchers to the nation's NSF-funded supercomputing centers, through further public funding and private industry partnerships it developed into a major part of the Internet backbone.
==== NSFNET regional networks ====
In addition to the five NSF supercomputer centers, NSFNET provided connectivity to eleven regional networks and through these networks to many smaller regional and campus networks in the United States. The NSFNET regional networks were:
BARRNet, the Bay Area Regional Research Network in Palo Alto, California;
CERFnet, California Education and Research Federation Network in San Diego, California, serving California and Nevada;
CICNet, the Committee on Institutional Cooperation Network via the Merit Network in Ann Arbor, Michigan and later as part of the T3 upgrade via Argonne National Laboratory outside of Chicago, serving the Big Ten Universities and the University of Chicago in Illinois, Indiana, Michigan, Minnesota, Ohio, and Wisconsin;
Merit/MichNet in Ann Arbor, Michigan serving Michigan, formed in 1966, still in operation as of 2023;
MIDnet in Lincoln, Nebraska serving Arkansas, Iowa, Kansas, Missouri, Nebraska, Oklahoma, and South Dakota;
NEARNET, the New England Academic and Research Network in Cambridge, Massachusetts, added as part of the upgrade to T3, serving Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont, established in late 1988, operated by BBN under contract to MIT, BBN assumed responsibility for NEARNET on 1 July 1993;
NorthWestNet in Seattle, Washington, serving Alaska, Idaho, Montana, North Dakota, Oregon, and Washington, founded in 1987;
NYSERNet, New York State Education and Research Network in Ithaca, New York;
JVNCNet, the John von Neumann National Supercomputer Center Network in Princeton, New Jersey, serving Delaware and New Jersey;
SESQUINET, the Sesquicentennial Network in Houston, Texas, founded during the 150th anniversary of the State of Texas;
SURAnet, the Southeastern Universities Research Association network in College Park, Maryland and later as part of the T3 upgrade in Atlanta, Georgia serving Alabama, Florida, Georgia, Kentucky, Louisiana, Maryland, Mississippi, North Carolina, South Carolina, Tennessee, Virginia, and West Virginia, sold to BBN in 1994; and
Westnet in Salt Lake City, Utah and Boulder, Colorado, serving Arizona, Colorado, New Mexico, Utah, and Wyoming.
==== National LambdaRail ====
The National LambdaRail (NLR) was launched in September 2003. It was a 12,000-mile high-speed national computer network, owned and operated by the US research and education community, that ran over fiber-optic lines. It was the first transcontinental 10 Gigabit Ethernet network, operating with an aggregate capacity of up to 1.6 Tbit/s and a 40 Gbit/s bitrate. NLR ceased operations in March 2014.
==== TransPAC2 and TransPAC3 ====
TransPAC2 is a high-speed international Internet service connecting research and education networks in the Asia-Pacific region to those in the US. TransPAC3 is part of the NSF's International Research Network Connections (IRNC) program.
==== Very high-speed Backbone Network Service (vBNS) ====
The Very high-speed Backbone Network Service (vBNS) came on line in April 1995 as part of an NSF-sponsored project to provide high-speed interconnection between NSF-sponsored supercomputing centers and select access points in the United States. The network was engineered and operated by MCI Telecommunications under a cooperative agreement with the NSF. By 1998, the vBNS had grown to connect more than 100 universities and research and engineering institutions via 12 national points of presence with DS-3 (45 Mbit/s), OC-3c (155 Mbit/s), and OC-12 (622 Mbit/s) links on an all OC-12 backbone, a substantial engineering feat for that time. The vBNS installed one of the first ever production OC-48 (2.5 Gbit/s) IP links in February 1999 and went on to upgrade the entire backbone to OC-48.
In June 1999 MCI WorldCom introduced vBNS+ which allowed attachments to the vBNS network by organizations that were not approved by or receiving support from NSF. After the expiration of the NSF agreement, the vBNS largely transitioned to providing service to the government. Most universities and research centers migrated to the Internet2 educational backbone. In January 2006, when MCI and Verizon merged, vBNS+ became a service of Verizon Business.
== See also ==
Multi-bearer network
Optical burst switching
Packet radio
Transmission delay
Virtual private network
== References ==
=== Bibliography ===
Abbate, Janet (2000). Inventing the Internet. MIT Press. ISBN 9780262511155.
Gillies, James; Cailliau, Robert (2000). How the Web was born : the story of the World Wide Web. Oxford: Oxford University Press. ISBN 0-19-286207-3. OCLC 43377073.
Hafner, Katie (1996). Where Wizards Stay Up Late. Simon and Schuster. pp. 52–67. ISBN 9780684832678.
Norberg, Arthur; O'Neill, Judy E. (2000). Transforming Computer Technology: Information Processing for the Pentagon, 1962-1982. Johns Hopkins University. ISBN 978-0801863691.
Moschovitis, Christos J. P. (1999). History of the Internet: A Chronology, 1843 to the Present. ABC-CLIO. ISBN 978-1-57607-118-2.
Lawrence Roberts, The Evolution of Packet Switching (Proceedings of the IEEE, November, 1978)
=== Primary sources ===
Paul Baran et al., On Distributed Communications, Volumes I-XI Archived 2011-03-29 at the Wayback Machine (RAND Corporation Research Documents, August, 1964)
Paul Baran, On Distributed Communications: I Introduction to Distributed Communications Network (RAND Memorandum RM-3420-PR. August 1964)
Paul Baran, On Distributed Communications Networks, (IEEE Transactions on Communications Systems, Vol. CS-12 No. 1, pp. 1–9, March 1964)
D. W. Davies, K. A. Bartlett, R. A. Scantlebury, and P. T. Wilkinson, A digital communications network for computers giving rapid response at remote terminals (ACM Symposium on Operating Systems Principles. October 1967)
R. A. Scantlebury, P. T. Wilkinson, and K. A. Bartlett, The design of a message switching Centre for a digital communication network (IFIP 1968)
== Further reading ==
Pelkey, James L.; Russell, Andrew L.; Robbins, Loring G. (2022). Circuits, Packets, and Protocols: Entrepreneurs and Computer Communications, 1968-1988. Morgan & Claypool. ISBN 978-1-4503-9729-2.
Russell, Andrew L. (2014). Open Standards and the Digital Age: History, Ideology, and Networks. Cambridge University Press. ISBN 978-1-139-91661-5.
== External links ==
Wilkinson, Peter (Summer 2020), "Packet Switching and the NPL Network", Computer Resurrection: The Journal of the Computer Conservation Society (90), ISSN 0958-7403
Oral history interview with Paul Baran. Charles Babbage Institute University of Minnesota, Minneapolis. Baran describes his working environment at RAND, as well as his initial interest in survivable communications, and the evolution, writing and distribution of his eleven-volume work, "On Distributed Communications". Baran discusses his interaction with the group at ARPA who were responsible for the later development of the ARPANET.
NPL Data Communications Network NPL video, 1970s
Packet Switching History and Design, site reviewed by Baran, Roberts, and Kleinrock
Paul Baran and the Origins of the Internet
20+ articles on packet switching in the 1970s, archived from the original on 2009-08-01
"An Introduction to Packet Switched Networks". Phrack. May 3, 1988. Archived from the original on 2023-12-04.
In computer networking, network traffic control is the process of managing, controlling, or reducing network traffic, particularly Internet bandwidth, e.g. by the network scheduler. It is used by network administrators to reduce congestion, latency, and packet loss. This is part of bandwidth management. To use these tools effectively, it is necessary to measure the network traffic to determine the causes of network congestion and attack those problems specifically.
Network traffic control is an important subject in datacenters as it is necessary for efficient use of datacenter network bandwidth and for maintaining service level agreements.
== Traffic shaping ==
Traffic shaping is the retiming (delaying) of packets (or frames) until they meet specified bandwidth and/or burstiness limits. Since the queues that implement such delays are nearly always finite, and excess traffic is nearly always dropped (discarded) once those queues fill, traffic shaping nearly always implies traffic policing as well.
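Shapers are commonly described with a token-bucket model. The following Python sketch is an illustration only, with the simplifying assumption that each packet costs one token (real shapers usually meter bytes); it computes when each packet in a burst would be released:

```python
def shape(arrivals, rate, burst):
    """Token-bucket traffic shaper: tokens accrue at `rate` per second
    up to `burst`; each packet consumes one token. A packet that finds
    the bucket empty is delayed (not dropped) until a token accrues.
    `arrivals` is a sorted list of arrival times; returns departure times."""
    tokens, clock, out = float(burst), 0.0, []
    for t in arrivals:
        # Advance the bucket to the later of the arrival time and the
        # time through which tokens have already been accounted.
        now = max(t, clock)
        tokens = min(burst, tokens + (now - clock) * rate)
        if tokens < 1:                   # bucket empty: delay the packet
            now += (1 - tokens) / rate
            tokens = 1.0
        tokens -= 1
        clock = now
        out.append(now)
    return out

# A burst of 3 back-to-back packets against rate = 1 token/s, burst = 1:
print(shape([0.0, 0.0, 0.0], rate=1.0, burst=1))  # → [0.0, 1.0, 2.0]
```

The second and third packets are delayed until tokens accrue rather than discarded, which is exactly the behaviour that distinguishes shaping from policing.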
== Traffic policing ==
Traffic policing is the dropping (discarding) or reduction in priority (demoting) of packets (or frames) that exceed some specified bandwidth and/or burstiness limit.
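A policer can use the same token-bucket arithmetic as a shaper but makes an immediate pass/drop decision instead of delaying traffic. A sketch under the same illustrative one-token-per-packet assumption:

```python
def police(arrivals, rate, burst):
    """Token-bucket traffic policer: conforming packets pass; packets
    exceeding the bandwidth/burstiness contract are dropped (a real
    policer might instead demote them, e.g. by remarking priority)."""
    tokens, last, verdicts = float(burst), 0.0, []
    for t in arrivals:
        tokens = min(burst, tokens + (t - last) * rate)
        last = t
        if tokens >= 1:
            tokens -= 1
            verdicts.append("pass")
        else:
            verdicts.append("drop")  # excess traffic discarded, never delayed
    return verdicts

# The same 3-packet burst, rate = 1 token/s, burst = 1: only the first conforms.
print(police([0.0, 0.0, 0.0], rate=1.0, burst=1))  # → ['pass', 'drop', 'drop']
```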
== References ==
The study of interdependent networks is a subfield of network science dealing with phenomena caused by the interactions between complex networks. Though there may be a wide variety of interactions between networks, dependency focuses on the scenario in which the nodes in one network require support from nodes in another network.
== Motivation for the model ==
In nature, networks rarely appear in isolation. They are typically elements in larger systems and can have non-trivial effects on one another. For example, infrastructure networks exhibit interdependency to a large degree. The power stations which form the nodes of the power grid require fuel delivered via a network of roads or pipes and are also controlled via the nodes of a communications network. Though the transportation network does not depend on the power network to function, the communications network does. Thus the deactivation of a critical number of nodes in either the power network or the communications network can lead to a series of cascading failures across the system with potentially catastrophic repercussions. If the two networks were treated in isolation, this important feedback effect would not be seen and predictions of network robustness would be greatly overestimated.
== Dependency links ==
Links in a standard network represent connectivity, providing information about how one node can be reached from another. Dependency links represent a need for support from one node to another. This relationship is often, though not necessarily, mutual and thus the links can be directed or undirected. Crucially, a node loses its ability to function as soon as the node it depends on ceases to function, while it may not be so severely affected by losing a node it is connected to.
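This loss-of-function rule can be sketched as a fixed-point iteration over directed dependency links; the node names and the single-support assumption below are illustrative:

```python
def propagate(failed, depends_on):
    """Nodes lose function as soon as a node they depend on fails.
    `depends_on` maps each node to the single node supporting it
    (a directed dependency link); returns all non-functional nodes."""
    dead = set(failed)
    changed = True
    while changed:                      # iterate until no new failures appear
        changed = False
        for node, support in depends_on.items():
            if node not in dead and support in dead:
                dead.add(node)          # support lost: node stops functioning
                changed = True
    return dead

# Power node p1 supports comm node c1, which supports p2, which supports c2:
deps = {"c1": "p1", "p2": "c1", "c2": "p2"}
print(sorted(propagate({"p1"}, deps)))  # → ['c1', 'c2', 'p1', 'p2']
```

A single initial failure takes down the whole chain, which is the mechanism behind the cascading failures described above.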
== Comparison to many-particle systems in physics ==
In statistical physics, phase transitions can only appear in many-particle systems. Though phase transitions are well known in network science, in single networks they are second-order only. With the introduction of internetwork dependency, first-order transitions emerge. This is a new phenomenon, and one with profound implications for systems engineering. Whereas system dissolution takes place after steady (if steep) degradation for second-order transitions, the existence of a first-order transition implies that the system can go from a relatively healthy state to complete collapse with no advance warning.
== Examples ==
Infrastructure networks. The network of power stations depends on instructions from the communications network, which requires power itself. Another example is the interdependence between electric and natural gas systems.
Transportation networks. The networks of airports and seaports are interdependent in that in a given city, the ability of that city's airport to function is dependent upon resources obtained from the seaport or vice versa.
Protein networks. A biological process regulated by a number of proteins is often represented as a network. Since the same proteins participate in different processes, the networks are interdependent.
Ecological networks. Food webs constructed from species which depend on one another are interdependent when the same species participates in different webs.
Climate networks. Spatial measurements of different climatological variables define a network. The networks defined by different sets of variables are interdependent.
== See also ==
2003 Italy blackout – Power outage in Italy and Switzerland
Cascading failure – Systemic risk of failure
Percolation theory – Mathematical theory on behavior of connected clusters in a random graph
== References ==
Network topology is the arrangement of the elements (links, nodes, etc.) of a communication network. Network topology can be used to define or describe the arrangement of various types of telecommunication networks, including command and control radio networks, industrial fieldbusses and computer networks.
Network topology is the topological structure of a network and may be depicted physically or logically. It is an application of graph theory wherein communicating devices are modeled as nodes and the connections between the devices are modeled as links or lines between the nodes. Physical topology is the placement of the various components of a network (e.g., device location and cable installation), while logical topology illustrates how data flows within a network. Distances between nodes, physical interconnections, transmission rates, or signal types may differ between two different networks, yet their logical topologies may be identical. A network's physical topology is a particular concern of the physical layer of the OSI model.
Examples of network topologies are found in local area networks (LAN), a common computer network installation. Any given node in the LAN has one or more physical links to other devices in the network; graphically mapping these links results in a geometric shape that can be used to describe the physical topology of the network. A wide variety of physical topologies have been used in LANs, including ring, bus, mesh and star. Conversely, mapping the data flow between the components determines the logical topology of the network. In comparison, Controller Area Networks, common in vehicles, are primarily distributed control system networks of one or more controllers interconnected with sensors and actuators over, invariably, a physical bus topology.
== Topologies ==
Two basic categories of network topologies exist, physical topologies and logical topologies.
The transmission medium layout used to link devices is the physical topology of the network. For conductive or fiber-optic media, this refers to the layout of cabling, the locations of nodes, and the links between the nodes and the cabling. The physical topology of a network is determined by the capabilities of the network access devices and media, the level of control or fault tolerance desired, and the cost associated with cabling or telecommunication circuits.
In contrast, logical topology is the way that the signals act on the network media, or the way that the data passes through the network from one device to the next without regard to the physical interconnection of the devices. A network's logical topology is not necessarily the same as its physical topology. For example, the original twisted pair Ethernet using repeater hubs was a logical bus topology carried on a physical star topology. Token Ring is a logical ring topology, but is wired as a physical star from the media access unit. Physically, Avionics Full-Duplex Switched Ethernet (AFDX) can be a cascaded star topology of multiple dual redundant Ethernet switches; however, the AFDX virtual links are modeled as time-switched single-transmitter bus connections, thus following the safety model of a single-transmitter bus topology previously used in aircraft. Logical topologies are often closely associated with media access control methods and protocols. Some networks are able to dynamically change their logical topology through configuration changes to their routers and switches.
== Links ==
The transmission media (often referred to in the literature as the physical media) used to link devices to form a computer network include electrical cables (Ethernet, HomePNA, power line communication, G.hn), optical fiber (fiber-optic communication), and radio waves (wireless networking). In the OSI model, these are defined at layers 1 and 2 — the physical layer and the data link layer.
A widely adopted family of transmission media used in local area network (LAN) technology is collectively known as Ethernet. The media and protocol standards that enable communication between networked devices over Ethernet are defined by IEEE 802.3. Ethernet transmits data over both copper and fiber cables. Wireless LAN standards (e.g. those defined by IEEE 802.11) use radio waves, or others use infrared signals as a transmission medium. Power line communication uses a building's power cabling to transmit data.
=== Wired technologies ===
The following wired technologies are ordered, roughly, from slowest to fastest transmission speed.
Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area networks. The cables consist of copper or aluminum wire surrounded by an insulating layer (typically a flexible material with a high dielectric constant), which itself is surrounded by a conductive layer. The insulation between the conductors helps maintain the characteristic impedance of the cable which can help improve its performance. Transmission speed ranges from 200 million bits per second to more than 500 million bits per second.
ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed (up to 1 Gigabit/s) local area network.
Signal traces on printed circuit boards are common for board-level serial communication, particularly between certain types of integrated circuits, a common example being SPI.
Ribbon cable (untwisted and possibly unshielded) has been a cost-effective medium for serial protocols, especially within metallic enclosures or rolled within copper braid or foil, over short distances, or at lower data rates. Several serial network protocols (RS-232, RS-422, RS-485, CAN, GPIB, SCSI, etc.) can be deployed without shielded or twisted-pair cabling, that is, with flat or ribbon cable, or a hybrid flat and twisted ribbon cable, provided EMC, length, and bandwidth constraints permit.
Twisted pair wire is the most widely used medium for all telecommunication. Twisted-pair cabling consists of copper wires that are twisted into pairs. Ordinary telephone wires consist of two insulated copper wires twisted into pairs. Computer network cabling (wired Ethernet as defined by IEEE 802.3) consists of 4 pairs of copper cabling that can be utilized for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 million bits per second to 10 billion bits per second. Twisted pair cabling comes in two forms: unshielded twisted pair (UTP) and shielded twisted pair (STP). Each form comes in several category ratings, designed for use in various scenarios.
An optical fiber is a glass fiber. It carries pulses of light that represent data. Some advantages of optical fibers over metal wires are very low transmission loss and immunity from electrical interference. Optical fibers can simultaneously carry multiple wavelengths of light, which greatly increases the rate that data can be sent, and helps enable data rates of up to trillions of bits per second. Optic fibers can be used for long runs of cable carrying very high data rates, and are used for undersea communications cables to interconnect continents.
Price is a main factor distinguishing wired and wireless technology options in a business. Wireless options command a price premium that can make wired computers, printers, and other devices the more economical choice. Before deciding to purchase hard-wired technology products, a review of their restrictions and limitations is necessary. Business and employee needs may override any cost considerations.
=== Wireless technologies ===
Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes. Terrestrial microwaves are in the low gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 50 km (30 mi) apart.
Communications satellites – Satellites communicate via microwave radio waves, which are not deflected by the Earth's atmosphere. The satellites are stationed in space, typically in geostationary orbit 35,786 km (22,236 mi) above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.
Cellular and PCS systems use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area has a low-power transmitter or radio relay antenna device to relay calls from one area to the next area.
Radio and spread spectrum technologies – Wireless local area networks use a high-frequency radio technology similar to digital cellular and a low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi.
Free-space optical communication uses visible or invisible light for communications. In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices.
=== Exotic technologies ===
There have been various attempts at transporting data over exotic media:
IP over Avian Carriers was a humorous April fool's Request for Comments, issued as RFC 1149. It was implemented in real life in 2001.
Extending the Internet to interplanetary dimensions via radio waves, the Interplanetary Internet.
Both cases have a large round-trip delay time, which gives slow two-way communication, but does not prevent sending large amounts of information.
== Nodes ==
Network nodes are the points of connection of the transmission medium to transmitters and receivers of the electrical, optical, or radio signals carried in the medium. Nodes may be associated with a computer, but certain types may have only a microcontroller at a node or possibly no programmable device at all. In the simplest of serial arrangements, one RS-232 transmitter can be connected by a pair of wires to one receiver, forming two nodes on one link, or a Point-to-Point topology. Some protocols permit a single node to only either transmit or receive (e.g., ARINC 429). Other protocols have nodes that can both transmit and receive into a single channel (e.g., CAN can have many transceivers connected to a single bus). While the conventional system building blocks of a computer network include network interface controllers (NICs), repeaters, hubs, bridges, switches, routers, modems, gateways, and firewalls, most address network concerns beyond the physical network topology and may be represented as single nodes on a particular physical network topology.
=== Network interfaces ===
A network interface controller (NIC) is computer hardware that provides a computer with the ability to access the transmission media, and has the ability to process low-level network information. For example, the NIC may have a connector for accepting a cable, or an aerial for wireless transmission and reception, and the associated circuitry.
The NIC responds to traffic addressed to a network address for either the NIC or the computer as a whole.
In Ethernet networks, each network interface controller has a unique Media Access Control (MAC) address—usually stored in the controller's permanent memory. To avoid address conflicts between network devices, the Institute of Electrical and Electronics Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an Ethernet MAC address is six octets. The three most significant octets are reserved to identify NIC manufacturers. These manufacturers, using only their assigned prefixes, uniquely assign the three least-significant octets of every Ethernet interface they produce.
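The OUI/NIC split described above can be illustrated with a short helper. The function name and formatting below are illustrative, not part of any standard API.

```python
def split_mac(mac: str) -> tuple[str, str]:
    """Split a 48-bit Ethernet MAC address into the IEEE-assigned OUI
    (three most significant octets) and the vendor-assigned
    NIC-specific part (three least significant octets)."""
    octets = mac.lower().replace("-", ":").split(":")
    if len(octets) != 6 or any(len(o) != 2 for o in octets):
        raise ValueError(f"not a valid MAC address: {mac!r}")
    int("".join(octets), 16)  # sanity check: all octets are hex
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, nic = split_mac("00-1B-63-84-45-E6")
print(oui)  # 00:1b:63  (manufacturer prefix)
print(nic)  # 84:45:e6  (per-interface suffix)
```

A vendor-lookup tool would map the first value against the IEEE registration database; the second is unique only within that vendor's prefix.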
=== Repeaters and hubs ===
A repeater is an electronic device that receives a network signal, cleans it of unnecessary noise and regenerates it. The signal may be reformed or retransmitted at a higher power level, to the other side of an obstruction possibly using a different transmission medium, so that the signal can cover longer distances without degradation. Commercial repeaters have extended RS-232 segments from 15 meters to over a kilometer. In most twisted pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters. With fiber optics, repeaters can be tens or even hundreds of kilometers apart.
Repeaters work within the physical layer of the OSI model, that is, there is no end-to-end change in the physical protocol across the repeater, or repeater pair, even if a different physical layer may be used between the ends of the repeater, or repeater pair. Repeaters require a small amount of time to regenerate the signal. This can cause a propagation delay that affects network performance and may affect proper function. As a result, many network architectures limit the number of repeaters that can be used in a row, e.g., the Ethernet 5-4-3 rule.
A repeater with multiple ports is known as a hub: an Ethernet hub in Ethernet networks, a USB hub in USB networks.
USB networks use hubs to form tiered-star topologies.
Ethernet hubs and repeaters in LANs have been mostly obsoleted by modern switches.
=== Bridges ===
A network bridge connects and filters traffic between two network segments at the data link layer (layer 2) of the OSI model to form a single network. This breaks the network's collision domain but maintains a unified broadcast domain. Network segmentation breaks down a large, congested network into an aggregation of smaller, more efficient networks.
Bridges come in three basic types:
Local bridges: Directly connect LANs
Remote bridges: Can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, largely have been replaced with routers.
Wireless bridges: Can be used to join LANs or connect remote devices to LANs.
=== Switches ===
A network switch is a device that forwards and filters OSI layer 2 datagrams (frames) between ports based on the destination MAC address in each frame.
A switch is distinct from a hub in that it only forwards the frames to the physical ports involved in the communication rather than all ports connected. It can be thought of as a multi-port bridge. It learns to associate physical ports to MAC addresses by examining the source addresses of received frames. If an unknown destination is targeted, the switch broadcasts to all ports but the source. Switches normally have numerous ports, facilitating a star topology for devices, and cascading additional switches.
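The learn-and-flood behavior described above can be modeled in a few lines. This is a simplified sketch (no aging of table entries, no VLANs); the class and method names are invented for illustration.

```python
class LearningSwitch:
    """Minimal model of a transparent learning switch: associate source
    MAC addresses with ports, flood frames for unknown destinations."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}            # MAC address -> port

    def forward(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port      # learn where src lives
        out = self.mac_table.get(dst_mac)
        if out is None:
            # unknown destination: flood all ports except the ingress port
            return sorted(self.ports - {in_port})
        return [out]

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.forward(1, "aa:aa", "bb:bb"))  # [2, 3, 4]  flooded: bb:bb unknown
print(sw.forward(2, "bb:bb", "aa:aa"))  # [1]        aa:aa was learned on port 1
print(sw.forward(1, "aa:aa", "bb:bb"))  # [2]        bb:bb now known on port 2
```

After the first round trip both stations are in the table, and subsequent frames travel only on the two ports involved, which is exactly the property that distinguishes a switch from a hub.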
Multi-layer switches are capable of routing based on layer 3 addressing or additional logical levels. The term switch is often used loosely to include devices such as routers and bridges, as well as devices that may distribute traffic based on load or based on application content (e.g., a Web URL identifier).
=== Routers ===
A router is an internetworking device that forwards packets between networks by processing the routing information included in the packet or datagram (Internet protocol information from layer 3). The routing information is often processed in conjunction with the routing table (or forwarding table). A router uses its routing table to determine where to forward packets. A destination in a routing table can include a black hole because data can go into it, however, no further processing is done for said data, i.e. the packets are dropped.
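The table-driven forwarding decision can be sketched with Python's standard `ipaddress` module. The routing table below is hypothetical; real routers use far faster trie-based lookups, but the rule is the same: the most specific (longest) matching prefix wins.

```python
import ipaddress

# Hypothetical routing table: destination prefix -> outgoing interface
routes = {
    ipaddress.ip_network("0.0.0.0/0"):   "wan0",  # default route
    ipaddress.ip_network("10.0.0.0/8"):  "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
}

def lookup(dst: str) -> str:
    """Forward to the interface of the longest matching prefix."""
    addr = ipaddress.ip_address(dst)
    matching = [net for net in routes if addr in net]
    best = max(matching, key=lambda net: net.prefixlen)  # longest prefix wins
    return routes[best]

print(lookup("10.1.2.3"))       # eth1 (matches /8 and /16; /16 is longer)
print(lookup("10.200.0.9"))     # eth0
print(lookup("93.184.216.34"))  # wan0 (only the default route matches)
```

A black-hole route would simply map a prefix to a drop action instead of an interface.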
=== Modems ===
Modems (MOdulator-DEModulator) are used to connect network nodes via wire not originally designed for digital network traffic, or for wireless. To do this, one or more carrier signals are modulated by the digital signal to produce an analog signal that can be tailored to give the required properties for transmission. Modems are commonly used for telephone lines, using digital subscriber line technology.
=== Firewalls ===
A firewall is a network device for controlling network security and access rules. Firewalls are typically configured to reject access requests from unrecognized sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase in cyber attacks.
== Classification ==
The study of network topology recognizes eight basic topologies: point-to-point, bus, star, ring or circular, mesh, tree, hybrid, or daisy chain.
=== Point-to-point ===
The simplest topology with a dedicated link between two endpoints. Easiest to understand, of the variations of point-to-point topology, is a point-to-point communication channel that appears, to the user, to be permanently associated with the two endpoints. A child's tin can telephone is one example of a physical dedicated channel.
Using circuit-switching or packet-switching technologies, a point-to-point circuit can be set up dynamically and dropped when no longer needed. Switched point-to-point topologies are the basic model of conventional telephony.
The value of a permanent point-to-point network is unimpeded communications between the two endpoints. The value of an on-demand point-to-point connection is proportional to the number of potential pairs of subscribers and has been expressed as Metcalfe's Law.
=== Daisy chain ===
Daisy chaining is accomplished by connecting each computer in series to the next. If a message is intended for a computer partway down the line, each system bounces it along in sequence until it reaches the destination. A daisy-chained network can take two basic forms: linear and ring.
A linear topology puts a two-way link between one computer and the next. However, this was expensive in the early days of computing, since each computer (except for the ones at each end) required two receivers and two transmitters.
By connecting the computers at each end of the chain, a ring topology can be formed. When a node sends a message, the message is processed by each computer in the ring. An advantage of the ring is that the number of transmitters and receivers can be cut in half. Since a message will eventually loop all of the way around, transmission does not need to go both directions. Alternatively, the ring can be used to improve fault tolerance. If the ring breaks at a particular link then the transmission can be sent via the reverse path thereby ensuring that all nodes are always connected in the case of a single failure.
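The reverse-path recovery described above can be sketched as a small routing function. This is an illustrative model (nodes numbered 0..n−1, clockwise as the normal direction, at most one broken link), not any particular ring protocol.

```python
def ring_route(n, src, dst, broken_link=None):
    """Route a message around an n-node ring, normally clockwise;
    if the clockwise path would cross the single broken link
    (u, (u + 1) % n), fall back to the counter-clockwise path."""
    path, node = [src], src
    while node != dst:
        nxt = (node + 1) % n
        if broken_link == (node, nxt):
            # retry in the reverse direction from the source
            path, node = [src], src
            while node != dst:
                node = (node - 1) % n
                path.append(node)
            return path
        node = nxt
        path.append(node)
    return path

print(ring_route(6, 0, 3))                      # [0, 1, 2, 3]
print(ring_route(6, 0, 3, broken_link=(1, 2)))  # [0, 5, 4, 3]
```

With the link between nodes 1 and 2 down, the message still reaches node 3 by travelling the other way around, which is why a single failure does not partition a ring that supports reverse transmission.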
=== Bus ===
In local area networks using bus topology, each node is connected by interface connectors to a single central cable. This is the 'bus', also referred to as the backbone, or trunk – all data transmission between nodes in the network is transmitted over this common transmission medium and is able to be received by all nodes in the network simultaneously.
A signal containing the address of the intended receiving machine travels from a source machine in both directions to all machines connected to the bus until it finds the intended recipient, which then accepts the data. If the machine address does not match the intended address for the data, the data portion of the signal is ignored. Since the bus topology consists of only one wire it is less expensive to implement than other topologies, but the savings are offset by the higher cost of managing the network. Additionally, since the network is dependent on the single cable, it can be the single point of failure of the network. In this topology data being transferred may be accessed by any node.
==== Linear bus ====
In a linear bus network, all of the nodes of the network are connected to a common transmission medium which has just two endpoints. When the electrical signal reaches the end of the bus, the signal is reflected back down the line, causing unwanted interference. To prevent this, the two endpoints of the bus are normally terminated with a device called a terminator.
==== Distributed bus ====
In a distributed bus network, all of the nodes of the network are connected to a common transmission medium with more than two endpoints, created by adding branches to the main section of the transmission medium – the physical distributed bus topology functions in exactly the same fashion as the physical linear bus topology because all nodes share a common transmission medium.
=== Star ===
In star topology (also called hub-and-spoke), every peripheral node (computer workstation or any other peripheral) is connected to a central node called a hub or switch. The hub is the server and the peripherals are the clients. The network does not necessarily have to resemble a star to be classified as a star network, but all of the peripheral nodes on the network must be connected to one central hub. All traffic that traverses the network passes through the central hub, which acts as a signal repeater.
The star topology is considered the easiest topology to design and implement. One advantage of the star topology is the simplicity of adding additional nodes. The primary disadvantage of the star topology is that the hub represents a single point of failure. Also, since all peripheral communication must flow through the central hub, the aggregate central bandwidth forms a network bottleneck for large clusters.
==== Extended star ====
The extended star network topology extends a physical star topology by one or more repeaters between the central node and the peripheral (or 'spoke') nodes. The repeaters are used to extend the maximum transmission distance of the physical layer, the point-to-point distance between the central node and the peripheral nodes. Repeaters allow greater transmission distance, further than would be possible using just the transmitting power of the central node. The use of repeaters can also overcome limitations from the standard upon which the physical layer is based.
A physical extended star topology in which repeaters are replaced with hubs or switches is a type of hybrid network topology and is referred to as a physical hierarchical star topology, although some texts make no distinction between the two topologies.
A physical hierarchical star topology can also be referred to as a tier-star topology. This topology differs from a tree topology in the way star networks are connected together. A tier-star topology uses a central node, while a tree topology uses a central bus and can also be referred to as a star-bus network.
==== Distributed star ====
A distributed star is a network topology that is composed of individual networks that are based upon the physical star topology connected in a linear fashion – i.e., 'daisy-chained' – with no central or top level connection point (e.g., two or more 'stacked' hubs, along with their associated star connected nodes or 'spokes').
=== Ring ===
A ring topology is a daisy chain in a closed loop. Data travels around the ring in one direction. When one node sends data to another, the data passes through each intermediate node on the ring until it reaches its destination. The intermediate nodes repeat (retransmit) the data to keep the signal strong. Every node is a peer; there is no hierarchical relationship of clients and servers. If one node is unable to retransmit data, it severs communication between the nodes before and after it in the ring.
Advantages:
When the load on the network increases, its performance is better than bus topology.
There is no need of network server to control the connectivity between workstations.
Disadvantages:
Aggregate network bandwidth is bottlenecked by the weakest link between two nodes.
=== Mesh ===
The value of a fully meshed network grows exponentially with the number of subscribers, assuming that communicating groups of any size, from any two endpoints up to and including all the endpoints, can form; this is approximated by Reed's Law.
==== Fully connected network ====
In a fully connected network, all nodes are interconnected. (In graph theory this is called a complete graph.) The simplest fully connected network is a two-node network. A fully connected network does not need to use packet switching or broadcasting. However, since the number of connections grows quadratically with the number of nodes:
c = n(n − 1)/2
This makes it impractical for large networks. On the other hand, the failure of a single node or link in this topology does not affect the other nodes in the network.
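The quadratic growth of the link count c = n(n − 1)/2 is easy to tabulate, which makes clear why full meshes are reserved for small or highly critical networks:

```python
def full_mesh_links(n: int) -> int:
    """Dedicated links needed to connect every pair of n nodes."""
    return n * (n - 1) // 2

for n in (2, 5, 10, 50, 1000):
    print(f"{n:>5} nodes -> {full_mesh_links(n):>7} links")
# 2 -> 1, 5 -> 10, 10 -> 45, 50 -> 1225, 1000 -> 499500
```

Doubling the node count roughly quadruples the cabling, so partial meshes are the practical compromise at scale.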
==== Partially connected network ====
In a partially connected network, certain nodes are connected to exactly one other node; but some nodes are connected to two or more other nodes with a point-to-point link. This makes it possible to make use of some of the redundancy of mesh topology that is physically fully connected, without the expense and complexity required for a connection between every node in the network.
=== Hybrid ===
Hybrid topology is also known as hybrid network. Hybrid networks combine two or more topologies in such a way that the resulting network does not exhibit one of the standard topologies (e.g., bus, star, ring, etc.). For example, a tree network (or star-bus network) is a hybrid topology in which star networks are interconnected via bus networks. However, a tree network connected to another tree network is still topologically a tree network, not a distinct network type. A hybrid topology is always produced when two different basic network topologies are connected.
A star-ring network consists of two or more ring networks connected using a multistation access unit (MAU) as a centralized hub.
Snowflake topology is meshed at the core, but tree shaped at the edges.
Two other hybrid network types are hybrid mesh and hierarchical star.
== Centralization ==
The star topology reduces the probability of a network failure by connecting all of the peripheral nodes (computers, etc.) to a central node. When the physical star topology is applied to a logical bus network such as Ethernet, this central node (traditionally a hub) rebroadcasts all transmissions received from any peripheral node to all peripheral nodes on the network, sometimes including the originating node. All peripheral nodes may thus communicate with all others by transmitting to, and receiving from, the central node only. The failure of a transmission line linking any peripheral node to the central node will result in the isolation of that peripheral node from all others, but the remaining peripheral nodes will be unaffected. However, the disadvantage is that the failure of the central node will cause the failure of all of the peripheral nodes.
If the central node is passive, the originating node must be able to tolerate the reception of an echo of its own transmission, delayed by the two-way round trip transmission time (i.e. to and from the central node) plus any delay generated in the central node. An active star network has an active central node that usually has the means to prevent echo-related problems.
A tree topology (a.k.a. hierarchical topology) can be viewed as a collection of star networks arranged in a hierarchy. This tree structure has individual peripheral nodes (e.g. leaves) which are required to transmit to and receive from one other node only and are not required to act as repeaters or regenerators. Unlike the star network, the functionality of the central node may be distributed.
As in the conventional star network, individual nodes may thus still be isolated from the network by a single-point failure of a transmission path to the node. If a link connecting a leaf fails, that leaf is isolated; if a connection to a non-leaf node fails, an entire section of the network becomes isolated from the rest.
To alleviate the amount of network traffic that comes from broadcasting all signals to all nodes, more advanced central nodes were developed that are able to keep track of the identities of the nodes that are connected to the network. These network switches will learn the layout of the network by listening on each port during normal data transmission, examining the data packets and recording the address/identifier of each connected node and which port it is connected to in a lookup table held in memory. This lookup table then allows future transmissions to be forwarded to the intended destination only.
Daisy chain topology is a way of connecting network nodes in a linear or ring structure. It is used to transmit messages from one node to the next until they reach the destination node.
A daisy chain network can have two types: linear and ring. A linear daisy chain network is like an electrical series, where the first and last nodes are not connected. A ring daisy chain network is where the first and last nodes are connected, forming a loop.
== Decentralization ==
In a partially connected mesh topology, there are at least two nodes with two or more paths between them to provide redundant paths in case the link providing one of the paths fails. Decentralization is often used to compensate for the single-point-failure disadvantage that is present when using a single device as a central node (e.g., in star and tree networks). A special kind of mesh, limiting the number of hops between two nodes, is a hypercube. The number of arbitrary forks in mesh networks makes them more difficult to design and implement, but their decentralized nature makes them very useful.
This is similar in some ways to a grid network, where a linear or ring topology is used to connect systems in multiple directions. A multidimensional ring has a toroidal topology, for instance.
A fully connected network, complete topology, or full mesh topology is a network topology in which there is a direct link between all pairs of nodes. In a fully connected network with n nodes, there are
n(n − 1)/2
direct links. Networks designed with this topology are usually very expensive to set up, but provide a high degree of reliability due to the multiple paths for data that are provided by the large number of redundant links between nodes. This topology is mostly seen in military applications.
== See also ==
== References ==
== External links ==
Tetrahedron Core Network: Application of a tetrahedral structure to create a resilient partial-mesh 3-dimensional campus backbone data network
A home network or home area network (HAN) is a type of computer network, specifically a type of local area network (LAN), that facilitates communication among devices within the close vicinity of a home. Devices capable of participating in this network, for example, smart devices such as network printers and handheld mobile computers, often gain enhanced emergent capabilities through their ability to interact. These additional capabilities can be used to increase the quality of life inside the home in a variety of ways, such as automation of repetitive tasks, increased personal productivity, enhanced home security, and easier access to entertainment. Unlike a regular LAN, which is centralized and uses IP technologies, a home network may also make use of direct peer-to-peer methods as well as non-IP protocols such as Bluetooth.
== Infrastructure devices ==
Certain devices in a home network are primarily concerned with enabling or supporting the communications of the kinds of end devices residents more directly interact with. Unlike their data center counterparts, these networking devices are compact and passively cooled, aiming to be as hands-off and non-obtrusive as possible.
A gateway establishes physical and data link layer connectivity to a WAN provided by a service provider. Home routers provided by internet service providers (ISP) usually have the modem integrated within the unit. It is effectively a client of the external DHCP servers owned by the ISP.
A router establishes network layer connectivity between a wide area network (WAN) and the local area network of the residence. For IPv4 networking, the device may also perform the function of network address translation establishing a private network with a set of independent addresses for the network. These devices often contain an integrated wireless access point and a multi-port Ethernet LAN switch.
A wireless access point provides connectivity within the home network for mobile devices and many other types using the Wi-Fi standard. When a router includes this service, it is referred to as a wireless router, which is predominantly the case.
A network switch permits the connection of multiple wired Ethernet devices to the home network. While the needs of most home networks are satisfied with wireless connectivity, some devices require wired connection. Such devices, for example IP cameras and IP phones, are sometimes powered via their network cable with power over Ethernet (PoE).
A network bridge binds two different network interfaces to each other, often in order to grant a wired-only device access to a wireless network medium.
Controllers for home automation or smart home hubs act as a controller for light bulbs, smart plugs, and security devices.
== Connectivity and protocols ==
Home networks may use either wired or wireless connectivity methods that are found and standardized on local area networks or personal area networks. One of the most common ways of creating a home network is by using wireless radio signal technology standardized as IEEE 802.11. Most wireless-capable residential devices operate at a frequency of 2.4 GHz under 802.11b and 802.11g or 5 GHz under 802.11a. Some home networking devices operate in both radio bands and fall within the 802.11n or 802.11ac standards. Wi-Fi is a marketing and compliance certification for IEEE 802.11 technologies. The Wi-Fi Alliance tests compliant products and certifies them for interoperability.
Low power, close range communication based on IEEE 802.15 standards has a strong presence in homes. Bluetooth continues to be the technology of choice for most wireless accessories such as keyboards, mice, headsets, and game controllers. These connections are often established in a transient, ad-hoc manner and are not thought of as permanent residents of a home network. A "low-rate" version of the original WPAN protocol was used as the basis of Zigbee.
== Endpoint devices and services ==
Home networks may consist of a variety of devices and services. Personal computers such as desktops and mobile computers like tablets and smartphones are commonly used on home networks to communicate with other devices. A network attached storage (NAS) device may be part of the network, for general storage or backup purposes. A print server can be used to share any directly connected printers with other computers on the network.
Smart speakers may be used on a network for streaming media. DLNA is a common protocol used for interoperability between networked media-centric devices in the home, allowing devices like stereo systems on the network to access the music library from a PC on the same network, for example. Using an additional Internet connection, TVs for instance may stream online video content, while video game consoles can use online multiplayer.
Traditionally, data-centric equipment such as computers and media players have been the primary tenants of a home network. However, due to the lowering cost of computing and the ubiquity of smartphone usage, many traditionally non-networked home equipment categories now include new variants capable of control or remote monitoring through an app on a smartphone. Newer startups and established home equipment manufacturers alike have begun to offer these products as part of a "Smart" or "Intelligent" or "Connected Home" portfolio. Examples of such may include "connected" light bulbs (see also Li-Fi), home security alarms and smoke detectors. These often run over the Internet so that they can be accessed remotely.
Individuals may opt to subscribe to managed cloud computing services that provide such services instead of maintaining similar facilities within their home network. In such situations, local services along with the devices maintaining them are replaced by those in an external data center and made accessible to the home-dweller's computing devices via a WAN Internet connection.
== Network management ==
Apple devices aim to make networking as hidden and automatic as possible, utilizing a zero-configuration networking protocol called Bonjour embedded within their otherwise proprietary line of software and hardware products.
Microsoft offers simple access control features built into their Windows operating system. HomeGroup is a feature that allows shared disk, printer, and scanner access among all computers and users (typically family members) in a home, in a similar fashion as in a small office workgroup, e.g., by means of distributed peer-to-peer networking (without a central server). Additionally, a home server may be added for increased functionality. The Windows HomeGroup feature was introduced with Microsoft Windows 7 in order to simplify file sharing in residences. All users, except guest accounts, may access any shared library on any computer that is connected to the home group. Passwords are not required from the family members during logon. Instead, secure file sharing is possible by means of a temporary password that is used when adding a computer to the HomeGroup.
== See also ==
Access control
Computer security software
Data backup
Encryption
Firewall (computing)
Home automation
Home server
Indoor positioning system (IPS)
Matter
Network security
Smart, connected products
Software update
Virtual assistant
== References ==
== External links ==
WikiBooks:Transferring Data between Standard Dial-Up Modems
Home Net WG of the IETF
A scientific collaboration network is a social network whose nodes are scientists and whose links are co-authorships, as co-authorship is one of the most well-documented forms of scientific collaboration. It is an undirected, scale-free network in which the degree distribution follows a power law with an exponential cutoff – most authors are sparsely connected while a few authors are intensively connected. The network has an assortative nature – hubs tend to link to other hubs and low-degree nodes tend to link to low-degree nodes. Assortativity is not structural, meaning that it is not a consequence of the degree distribution, but is generated by some process that governs the network's evolution.
== Study by Mark Newman ==
A detailed reconstruction of an actual collaboration was made by Mark Newman. He analyzed the collaboration networks through several large databases in the fields of biology and medicine, physics and computer science in a five-year window (1995-1999). The results showed that these networks form small worlds, in which randomly chosen pairs of scientists are typically separated by only a short path of intermediate acquaintances. They also suggest that the networks are highly clustered, i.e. two scientists are much more likely to have collaborated if they have a third common collaborator than are two scientists chosen randomly from the community.
== Prototype of evolving networks ==
Barabasi et al. studied the collaboration networks in mathematics and neuroscience over an 8-year period (1991–1998) to understand the topological and dynamical laws governing complex networks. They viewed the collaboration network as a prototype of evolving networks, as it expands by the addition of new nodes (authors) and new links (papers co-authored). The results indicated that the network is scale-free and that its evolution is governed by preferential attachment. Moreover, the authors concluded that most quantities used to characterize the network are time dependent. For example, the average degree (the network's interconnectedness) increases over time. Furthermore, the study showed that the node separation decreases over time; however, this trend is believed to be an artifact of the incomplete database and may be the opposite in the full system.
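The preferential-attachment growth described above can be sketched in a few lines of Python. This is a simplified Barabási–Albert-style process for illustration, not the authors' actual analysis; every new node (author) links to existing nodes with probability proportional to their current degree:

```python
import random

def grow_network(n, m=2, seed=0):
    """Grow a network node by node; each new node links to m existing
    nodes chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    degrees = {0: 1, 1: 1}          # start from a single link between two nodes
    stubs = [0, 1]                  # node list with multiplicity = degree,
                                    # so rng.choice(stubs) is degree-proportional
    for new in range(2, n):
        targets = set()
        while len(targets) < min(m, len(degrees)):
            targets.add(rng.choice(stubs))
        degrees[new] = 0
        for t in targets:
            degrees[new] += 1
            degrees[t] += 1
            stubs += [new, t]
    return degrees

deg = grow_network(500)
# Early, repeatedly-chosen nodes accumulate far more links than the
# typical node: the heavy-tailed (scale-free) degree distribution.
```

Because edges attach preferentially, a handful of "hub" authors end up with degrees far above the average, matching the sparse-majority/connected-few picture described in the article.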
== References ==
Capacity management's goal is to ensure that information technology resources are sufficient to meet upcoming business requirements cost-effectively. One common interpretation of capacity management is described in the ITIL framework. ITIL version 3 views capacity management as comprising three sub-processes: business capacity management, service capacity management, and component capacity management.
As the usage of IT services changes and functionality evolves, the amount of central processing unit (CPU), memory and storage resources allocated to a physical or virtual server also changes. If there are spikes in demand for, for example, processing power at a particular time of the day, capacity management proposes analyzing what is happening at that time and making changes to maximize the existing IT infrastructure; for example, tuning the application, or moving a batch cycle to a quieter period. This capacity planning identifies any potential capacity-related issues likely to arise, and justifies any necessary investment decisions - for example, the server requirements to accommodate future IT resource demand, or a data center consolidation.
These activities are intended to optimize performance and efficiency, and to plan for and justify financial investments. Capacity management is concerned with:
Monitoring the performance and throughput or load on a server, server farm, or property
Performance analysis of measurement data, including analysis of the impact of new releases on capacity
Performance tuning of activities to ensure the most efficient use of existing infrastructure
Understanding the demands on the service and future plans for workload growth (or shrinkage)
Influences on demand for computing resources
Capacity planning of storage, computer hardware, software and connection infrastructure resources required over some future period of time.
Capacity management interacts with the discipline of Performance Engineering, both during the requirements and design activities of building a system, and when using performance monitoring.
== Factors affecting network performance ==
Not all networks are the same. As data is broken into component parts (often known as frames, packets, or segments) for transmission, several factors can affect their delivery.
Delay: It can take a long time for a packet to be delivered across intervening networks. In reliable protocols where a receiver acknowledges delivery of each chunk of data, it is possible to measure this as round-trip time.
Jitter: This is the variability of delay. Low jitter is desirable, as it ensures a steady stream of packets being delivered. If delay varies by more than 200 ms, buffers may be starved and have no data to process.
Reception Order: Some real-time protocols like voice and video require packets to arrive in the correct sequence order to be processed. If packets arrive out-of-order or out-of-sequence, they may have to be dropped because they cannot be inserted into the stream that has already been played.
Packet loss: In some cases, intermediate devices in a network will lose packets. This may be due to errors, to overloading of the intermediate network, or to the intentional discarding of traffic in order to enforce a particular service level.
Retransmission: When packets are lost in a reliable network, they are retransmitted. This incurs two delays: First, the delay from re-sending the data; and second, the delay resulting from waiting until the data is received in the correct order before forwarding it up the protocol stack.
Throughput: The amount of traffic a network can carry is measured as throughput, usually in terms such as kilobits per second. Throughput is analogous to the number of lanes on a highway, whereas latency is analogous to its speed limit.
These factors, and others (such as the performance of the network signaling on the end nodes, compression, encryption, concurrency, and so on) all affect the effective performance of a network. In some cases, the network may not work at all; in others, it may be slow or unusable. And because applications run over these networks, application performance suffers. Various intelligent solutions are available to ensure that traffic over the network is effectively managed to optimize performance for all users. See Traffic Shaping
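Several of the factors above can be estimated from basic measurements. The sketch below uses hypothetical round-trip-time samples (the numbers are illustrative, not from any real network) to compute average delay, a simple jitter estimate, and a packet loss rate:

```python
# Hypothetical round-trip-time samples, one per received probe packet.
rtt_ms = [42.0, 45.5, 41.8, 80.2, 43.1]
sent, received = 6, len(rtt_ms)     # assume one probe was lost in transit

# Average delay across the received samples.
avg_delay = sum(rtt_ms) / len(rtt_ms)

# A simple jitter estimate: mean absolute difference between
# consecutive delay samples (low values mean a steady stream).
diffs = [abs(b - a) for a, b in zip(rtt_ms, rtt_ms[1:])]
jitter = sum(diffs) / len(diffs)

# Packet loss as the fraction of probes that never arrived.
loss_rate = 1 - received / sent
```

The single 80.2 ms outlier dominates the jitter figure even though the average delay stays moderate, which is why real-time applications care about jitter separately from delay.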
== The performance management discipline ==
Network performance management (NPM) consists of measuring, modeling, planning, and optimizing networks to ensure that they carry traffic with the speed, reliability, and capacity that is appropriate for the nature of the application and the cost constraints of the organization.
Different applications warrant different blends of capacity, latency, and reliability. For example:
Streaming video or voice can be unreliable (brief moments of static) but needs to have very low latency so that lags don't occur
Bulk file transfer or e-mail must be reliable and have high capacity, but doesn't need to be instantaneous
Instant messaging doesn't consume much bandwidth, but should be fast and reliable
== Network performance management tasks and classes of tools ==
Network performance management is a core component of the FCAPS ISO telecommunications framework (the 'P' stands for Performance in this acronym). It enables network engineers to prepare proactively for degradations in their IT infrastructure and ultimately helps protect the end-user experience.
Network managers perform many tasks; these include performance measurement, forensic analysis, capacity planning, and load-testing or load generation. They also work closely with application developers and IT departments who rely on them to deliver underlying network services.
For performance measurement, operators typically measure the performance of their networks at different levels. They either use per-port metrics (how much traffic on port 80 flowed between a client and a server, and how long did it take) or rely on end-user metrics (how fast did the login page load for Bob).
Per-port metrics are collected using flow-based monitoring and protocols such as NetFlow (now standardized as IPFIX) or RMON.
End-user metrics are collected through web logs, synthetic monitoring, or real user monitoring. An example is ART (application response time) which provides end to end statistics that measure Quality of Experience.
For forensic analysis, operators often rely on sniffers that break down the transactions by their protocols and can locate problems such as retransmissions or protocol negotiations.
For capacity planning, modeling tools such as Aria Networks, OPNET, PacketTrap, NetSim, NetFlow and sFlow Analyzer, or NetQoS that project the impact of new applications or increased usage are invaluable. According to Gartner, through 2018 more than 30% of enterprises will use capacity management tools for their critical IT infrastructures, up from less than 5% in 2014. These capacity management tools help infrastructure and operations management teams plan and optimize IT infrastructures and tools, and balance the use of external and cloud computing service providers.
For load generation that helps to understand the breaking point, operators may use software or appliances that generate scripted traffic. Some hosted service providers also offer pay-as-you-go traffic generation for sites that face the public Internet.
=== Next generation NPM tools ===
Next-generation NPM tools are those that improve network management by automating the collection of network data, including capacity issues, and automatically interpreting it. Terry Slattery, editor at NoJitter.com, compares three such tools, VMware's vRealize Network Insight, PathSolutions TotalView, and Kemp Flowmon, in the article The Future of Network Performance Management, June 10, 2021.
== The future of NPM ==
The future of network management is a radically expanding area of development, according to Terry Slattery on June 10, 2021: "We're starting to see more analytics of network data at levels that weren’t possible 10-15 years ago, due to limitations that no longer exist in computing, memory, storage, and algorithms. New approaches to network management promise to help us detect and resolve network problems... It’s certainly an interesting and evolving field."
== See also ==
Application performance management
Capacity planning
IT operations analytics
ITIL
Network monitoring
Network planning and design
Performance analysis
Performance tuning
== References ==
In computer networking, a port is a communication endpoint. At the software level within an operating system, a port is a logical construct that identifies a specific process or a type of network service. A port is uniquely identified by a number, the port number, associated with the combination of a transport protocol and the network IP address. Port numbers are 16-bit unsigned integers.
The most common transport protocols that use port numbers are the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). The port completes the destination and origination addresses of a message within a host to point to an operating system process. Specific port numbers are reserved to identify specific services so that an arriving packet can be easily forwarded to a running application. For this purpose, port numbers lower than 1024 identify the historically most commonly used services and are called the well-known port numbers. Higher-numbered ports are available for general use by applications and are known as ephemeral ports.
Ports provide a multiplexing service for multiple services or multiple communication sessions at one network address. In the client–server model of application architecture, multiple simultaneous communication sessions may be initiated for the same service.
== Port number ==
For TCP and UDP, a port number is a 16-bit unsigned integer, thus ranging from 0 to 65535. For TCP, port number 0 is reserved and cannot be used, while for UDP, the source port is optional and a value of zero means no port. A process associates its input or output channels via an internet socket, which is a type of file descriptor, associated with a transport protocol, a network address such as an IP address, and a port number. This is known as binding. A socket is used by a process to send and receive data via the network. The operating system's networking software has the task of transmitting outgoing data from all application ports onto the network, and forwarding arriving network packets to processes by matching the packet's IP address and port number to a socket. For TCP, only one process may bind to a specific IP address and port combination. Common application failures, sometimes called port conflicts, occur when multiple programs attempt to use the same port number on the same IP address with the same protocol.
Applications implementing common services often use specifically reserved well-known port numbers for receiving service requests from clients. This process is known as listening, and involves the receipt of a request on the well-known port potentially establishing a one-to-one server-client dialog, using this listening port. Other clients may simultaneously connect to the same listening port; this works because a TCP connection is identified by a tuple consisting of the local address, the local port, the remote address, and the remote port. The well-known ports are defined by convention overseen by the Internet Assigned Numbers Authority (IANA). In many operating systems special privileges are required for applications to bind to these ports because these are often deemed critical to the operation of IP networks. Conversely, the client end of a connection typically uses a high port number allocated for short-term use, therefore called an ephemeral port.
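The binding, listening, and ephemeral-port behavior described above can be illustrated with Python's standard socket module. The loopback address and OS-assigned port here are arbitrary choices for demonstration:

```python
import socket

# A server binds to a specific (address, port) and listens for clients.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))       # port 0 asks the OS to pick a free port
server.listen()
host, port = server.getsockname()   # the (address, port) the socket is bound to

# A client connecting to that listening port is automatically assigned
# an ephemeral source port by its own operating system.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
conn, (peer_host, peer_port) = server.accept()

# The TCP connection is identified by the tuple
# (local address, local port, remote address, remote port),
# which is why many clients can share one listening port.
print("client connected from ephemeral port", peer_port)

conn.close()
client.close()
server.close()
```

Attempting to bind a second socket to the same (address, port) with the same protocol raises an error, which is the "port conflict" failure mentioned above.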
=== Common port numbers ===
IANA is responsible for the global coordination of the DNS root, IP addressing, and other protocol resources. This includes the registration of commonly used TCP and UDP port numbers for well-known internet services.
The port numbers are divided into three ranges: the well-known ports, the registered ports, and the dynamic or private ports.
The well-known ports (also known as system ports) are those numbered from 0 through 1023. The requirements for new assignments in this range are stricter than for other registrations.
The registered ports are those from 1024 through 49151. IANA maintains the official list of well-known and registered ranges.
The dynamic or private ports are those from 49152 through 65535. One common use for this range is for ephemeral ports.
== Network behavior ==
Transport-layer protocols, such as the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP), transfer data using protocol data units (PDUs). For TCP, the PDU is a segment, and for UDP it is a datagram. Both protocols use a header field for indicating the source and destination port numbers. The port numbers are encoded in the transport protocol packet header, and they can be readily interpreted not only by the sending and receiving hosts but also by other components of the networking infrastructure. In particular, firewalls are commonly configured to differentiate between packets based on their source or destination port numbers. Port forwarding is an example application of this.
== Port scanning ==
The practice of attempting to connect to a range of ports in sequence on a single host is commonly known as port scanning. This is usually associated either with malicious cracking attempts or with network administrators looking for possible vulnerabilities to help prevent such attacks. Port connection attempts are frequently monitored and logged by hosts. The technique of port knocking uses a series of port connections (knocks) from a client computer to enable a server connection.
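A minimal TCP connect scanner illustrates the practice. The sketch below deliberately scans a listener the script itself opens on the loopback interface, since probing hosts you do not administer may be logged or prohibited:

```python
import socket

def scan_ports(host, ports, timeout=0.3):
    """Try a TCP connection to each port; return the ones that accept."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means success
                open_ports.append(port)
    return open_ports

# Demonstrate against a listener we control rather than a real host.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))                   # OS assigns a free port
listener.listen()
known_open = listener.getsockname()[1]

found = scan_ports("127.0.0.1", [known_open])
listener.close()
```

Real scanners iterate over a range such as `range(1, 1025)`; each connection attempt is exactly the kind of event that host logging and intrusion detection systems record.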
== Examples ==
An example of the use of ports is the delivery of email. A server used for sending and receiving email generally needs two services. The first service is used to transport email to and from other servers. This is accomplished with the Simple Mail Transfer Protocol (SMTP). A standard SMTP service application listens on TCP port 25 for incoming requests. The second service is usually either the Post Office Protocol (POP) or the Internet Message Access Protocol (IMAP) which is used by email client applications on users' personal computers to fetch email messages from the server. The POP service listens on TCP port number 110. Both services may be running on the same host computer, in which case the port number distinguishes the service that was requested by a remote computer, be it a user's computer or another mail server.
While the listening port number of a server is well defined (IANA calls these the well-known ports), the client's port number is often chosen from the dynamic port range (see below). In some applications, the clients and the server each use specific port numbers assigned by the IANA. A good example of this is DHCP in which the client always uses UDP port 68 and the server always uses UDP port 67.
== Use in URLs ==
Port numbers are a component in web or other uniform resource locators (URLs), but are omitted in most cases. By default, HTTP uses port 80 and HTTPS uses port 443, but a URL like http://www.example.com:8080/path/ specifies that the web browser connects to port 8080 of the HTTP server, instead of the default value.
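This default-port behavior can be inspected with Python's standard urllib.parse module: the parsed port is only present when it was written out in the URL, and clients fall back to the scheme's default otherwise.

```python
from urllib.parse import urlsplit

explicit = urlsplit("http://www.example.com:8080/path/")
implicit = urlsplit("https://www.example.com/path/")

# The port attribute is an integer only when the URL spells it out...
print(explicit.port)    # 8080
print(implicit.port)    # None

# ...otherwise the client applies the scheme's default port.
DEFAULT_PORTS = {"http": 80, "https": 443}
port = implicit.port or DEFAULT_PORTS[implicit.scheme]
```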
== History ==
The concept of port numbers was established by the early developers of the ARPANET in informal cooperation of software authors and system administrators. The term port number was not yet in use. It was preceded by the use of the term socket number in the early development stages of the network. A socket number for a remote host was a 40-bit quantity. The first 32 bits were similar to today's IPv4 address, but at the time the most-significant 8 bits were the host number. The least-significant portion of the socket number (bits 33 through 40) was an entity called Another Eightbit Number, abbreviated AEN. Today, network socket refers to a related but distinct concept, namely the internal address of an endpoint used only within the node.
On March 26, 1972, Vint Cerf and Jon Postel called for documenting the then-current usages and establishing a socket number catalog in RFC 322. Network administrators were asked to submit a note or place a phone call, "describing the function and socket numbers of network service programs at each HOST". This catalog was subsequently published as RFC 433 in December 1972 and included a list of hosts and their port numbers and the corresponding function used at each host in the network. This first registry function served primarily as documentation of usage and indicated that port number usage was conflicting between some hosts for "useful public services". The document promised a resolution of the conflicts based on a standard that Postel had published in May 1972 in RFC 349, in which he first proposed official assignments of port numbers to network services and suggested a dedicated administrative function, which he called a czar, to maintain a registry.
The 256 values of the AEN were divided into the following ranges:
The Telnet service received the first official assignment of the value 1.
In detail, the first set of assignments was:
In the early ARPANET, the AEN was also called a socket name, and was used with the Initial Connection Protocol (ICP), a component of the Network Control Protocol (NCP). NCP was the forerunner of the modern Internet protocols. Today the terminology service name is still closely connected with port numbers, the former being text strings used in some network functions to represent a numerical port number.
== See also ==
List of TCP and UDP port numbers
== References ==
Network traffic or data traffic is the amount of data moving across a network at a given point of time. Network data in computer networks is mostly encapsulated in network packets, which provide the load in the network. Network traffic is the main component for network traffic measurement, network traffic control and simulation.
Network traffic control - managing, prioritizing, controlling or reducing the network traffic
Network traffic measurement - measuring the amount and type of traffic on a particular network
Network traffic simulation - to measure the efficiency of a communications network
Traffic generation model - a stochastic model of the traffic flows or data sources in a communication network.
Proper analysis of network traffic provides the organization with network security as a benefit - an unusual amount of traffic in a network is a possible sign of an attack. Network traffic reports provide valuable insights into preventing such attacks.
== Traffic volume ==
Traffic volume is a measure of the total work done by a resource or facility, normally over 24 hours, and is measured in units of erlang-hours. It is defined as the product of the average traffic intensity and the time period of the study.
Traffic volume = Traffic intensity × time
A traffic volume of one erlang-hour can be caused by two circuits being occupied continuously for half an hour or by a circuit being half occupied (0.5 erlang) for a period of two hours. Telecommunication operators are vitally interested in traffic volume, as it directly dictates their revenue.
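The two scenarios above are a direct application of the formula (volume = intensity × time):

```python
def traffic_volume(intensity_erlangs, hours):
    """Traffic volume in erlang-hours = average intensity x observation time."""
    return intensity_erlangs * hours

# Two circuits continuously occupied for half an hour:
v1 = traffic_volume(2.0, 0.5)   # 1.0 erlang-hour

# One circuit half occupied (0.5 erlang) for two hours:
v2 = traffic_volume(0.5, 2.0)   # 1.0 erlang-hour
```

Both occupancy patterns yield the same one erlang-hour of traffic volume, which is exactly why operators bill on volume rather than on any particular pattern of circuit use.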
== References ==
In applied mathematics, the soft configuration model (SCM) is a random graph model subject to the principle of maximum entropy under constraints on the expectation of the degree sequence of sampled graphs. Whereas the configuration model (CM) uniformly samples random graphs of a specific degree sequence, the SCM only retains the specified degree sequence on average over all network realizations; in this sense the SCM has very relaxed constraints relative to those of the CM ("soft" rather than "sharp" constraints). The SCM for graphs of size {\displaystyle n} has a nonzero probability of sampling any graph of size {\displaystyle n}, whereas the CM is restricted to only graphs having precisely the prescribed connectivity structure.
== Model formulation ==
The SCM is a statistical ensemble of random graphs {\displaystyle G} having {\displaystyle n} vertices ({\displaystyle n=|V(G)|}) labeled {\displaystyle \{v_{j}\}_{j=1}^{n}=V(G)}, producing a probability distribution on {\displaystyle {\mathcal {G}}_{n}} (the set of graphs of size {\displaystyle n}). Imposed on the ensemble are {\displaystyle n} constraints, namely that the ensemble average of the degree {\displaystyle k_{j}} of vertex {\displaystyle v_{j}} is equal to a designated value {\displaystyle {\widehat {k}}_{j}}, for all {\displaystyle v_{j}\in V(G)}. The model is fully parameterized by its size {\displaystyle n} and expected degree sequence {\displaystyle \{{\widehat {k}}_{j}\}_{j=1}^{n}}. These constraints are both local (one constraint associated with each vertex) and soft (constraints on the ensemble average of certain observable quantities), and thus yield a canonical ensemble with an extensive number of constraints. The conditions {\displaystyle \langle k_{j}\rangle ={\widehat {k}}_{j}} are imposed on the ensemble by the method of Lagrange multipliers (see Maximum-entropy random graph model).
== Derivation of the probability distribution ==
The probability {\displaystyle \mathbb {P} _{\text{SCM}}(G)} of the SCM producing a graph {\displaystyle G} is determined by maximizing the Gibbs entropy {\displaystyle S[G]} subject to the constraints {\displaystyle \langle k_{j}\rangle ={\widehat {k}}_{j},\ j=1,\ldots ,n} and the normalization {\displaystyle \sum _{G\in {\mathcal {G}}_{n}}\mathbb {P} _{\text{SCM}}(G)=1}. This amounts to optimizing the multi-constraint Lagrange function below:
{\displaystyle {\begin{aligned}&{\mathcal {L}}\left(\alpha ,\{\psi _{j}\}_{j=1}^{n}\right)\\[6pt]={}&-\sum _{G\in {\mathcal {G}}_{n}}\mathbb {P} _{\text{SCM}}(G)\log \mathbb {P} _{\text{SCM}}(G)+\alpha \left(1-\sum _{G\in {\mathcal {G}}_{n}}\mathbb {P} _{\text{SCM}}(G)\right)+\sum _{j=1}^{n}\psi _{j}\left({\widehat {k}}_{j}-\sum _{G\in {\mathcal {G}}_{n}}\mathbb {P} _{\text{SCM}}(G)k_{j}(G)\right),\end{aligned}}}
where {\displaystyle \alpha } and {\displaystyle \{\psi _{j}\}_{j=1}^{n}} are the {\displaystyle n+1} multipliers to be fixed by the {\displaystyle n+1} constraints (normalization and the expected degree sequence). Setting to zero the derivative of the above with respect to {\displaystyle \mathbb {P} _{\text{SCM}}(G)} for an arbitrary {\displaystyle G\in {\mathcal {G}}_{n}} yields
{\displaystyle 0={\frac {\partial {\mathcal {L}}\left(\alpha ,\{\psi _{j}\}_{j=1}^{n}\right)}{\partial \mathbb {P} _{\text{SCM}}(G)}}=-\log \mathbb {P} _{\text{SCM}}(G)-1-\alpha -\sum _{j=1}^{n}\psi _{j}k_{j}(G)\ \Rightarrow \ \mathbb {P} _{\text{SCM}}(G)={\frac {1}{Z}}\exp \left[-\sum _{j=1}^{n}\psi _{j}k_{j}(G)\right],}
the constant
{\displaystyle Z:=e^{\alpha +1}=\sum _{G\in {\mathcal {G}}_{n}}\exp \left[-\sum _{j=1}^{n}\psi _{j}k_{j}(G)\right]=\prod _{1\leq i<j\leq n}\left(1+e^{-(\psi _{i}+\psi _{j})}\right)}
being the partition function normalizing the distribution; the above exponential expression applies to all
G
∈
G
n
{\displaystyle G\in {\mathcal {G}}_{n}}
, and thus is the probability distribution. Hence we have an exponential family parameterized by
{
ψ
j
}
j
=
1
n
{\displaystyle \{\psi _{j}\}_{j=1}^{n}}
, which are related to the expected degree sequence
{
k
^
j
}
j
=
1
n
{\displaystyle \{{\widehat {k}}_{j}\}_{j=1}^{n}}
by the following equivalent expressions:
{\displaystyle \langle k_{q}\rangle =\sum _{G\in {\mathcal {G}}_{n}}k_{q}(G)\mathbb {P} _{\text{SCM}}(G)=-{\frac {\partial \log Z}{\partial \psi _{q}}}=\sum _{j\neq q}{\frac {1}{e^{\psi _{q}+\psi _{j}}+1}}={\widehat {k}}_{q},\ q=1,\ldots ,n.}
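Numerically, the multipliers {\displaystyle \{\psi _{j}\}_{j=1}^{n}} can be obtained from a target degree sequence by iterating the last identity as a fixed-point map. Below is a minimal sketch; the solver, its starting point, and the function names are illustrative choices, not part of the model's definition:

```python
import numpy as np

def solve_scm(k_hat, tol=1e-10, max_iter=10000):
    """Solve for the Lagrange multipliers psi_j of the soft configuration
    model so that expected degrees match k_hat.

    Uses the substitution x_j = exp(-psi_j), under which the connection
    probability becomes p_ij = x_i x_j / (1 + x_i x_j), and iterates the
    fixed-point map x_q <- k_hat_q / sum_{j != q} x_j / (1 + x_q x_j).
    """
    k_hat = np.asarray(k_hat, dtype=float)
    x = k_hat / np.sqrt(k_hat.sum())       # Chung-Lu-style starting point
    for _ in range(max_iter):
        denom = 1.0 + np.outer(x, x)
        # row sums of x_j / (1 + x_q x_j), excluding the j == q term
        s = (x[None, :] / denom).sum(axis=1) - x / (1.0 + x * x)
        x_new = k_hat / s
        if np.max(np.abs(x_new - x)) < tol:
            x = x_new
            break
        x = x_new
    return -np.log(x)                      # psi_j = -log x_j

# check: the resulting connection probabilities reproduce k_hat on average
k_hat = np.array([3.0, 2.0, 2.0, 1.5, 1.5])
psi = solve_scm(k_hat)
p = 1.0 / (np.exp(psi[:, None] + psi[None, :]) + 1.0)
np.fill_diagonal(p, 0.0)                   # no self-loops
print(np.round(p.sum(axis=1), 6))          # ≈ [3. 2. 2. 1.5 1.5]
```

The Fermi-Dirac form of p_ij guarantees every entry stays in [0, 1], which is precisely what distinguishes the SCM from the Chung-Lu approximation.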
== References ==
Internetworking is the practice of interconnecting multiple computer networks. Typically, this enables any pair of hosts in the connected networks to exchange messages irrespective of their hardware-level networking technology. The resulting system of interconnected networks is called an internetwork, or simply an internet.
The most notable example of internetworking is the Internet, a network of networks based on many underlying hardware technologies. The Internet is defined by a unified global addressing system, packet format, and routing methods provided by the Internet Protocol.
The term internetworking is a combination of the components inter (between) and networking. An earlier term for an internetwork is catenet, a short-form of (con)catenating networks.
== History ==
The first international heterogeneous resource sharing network was developed by the computer science department at University College London (UCL), who interconnected the ARPANET with early British academic networks beginning in 1973. In the ARPANET, the network elements used to connect individual networks were called gateways, but the term has been deprecated in this context, because of possible confusion with functionally different devices. By 1973–74, researchers in France, the United States, and the United Kingdom had worked out an approach to internetworking where the differences between network protocols were hidden by using a common internetwork protocol, and instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible, as demonstrated in the CYCLADES network. Researchers at Xerox PARC outlined the idea of Ethernet and the PARC Universal Packet (PUP) for internetworking. Research at the National Physical Laboratory in the United Kingdom confirmed establishing a common host protocol would be more reliable and efficient. The ARPANET connection to UCL later evolved into SATNET. In 1977, ARPA demonstrated a three-way internetworking experiment, which linked a mobile vehicle in PRNET with nodes in the ARPANET, and, via SATNET, to nodes at UCL. The X.25 protocol, on which public data networks were based in the 1970s and 1980s, was supplemented by the X.75 protocol which enabled internetworking.
Today the interconnecting gateways are called routers. The definition of an internetwork today includes the connection of other types of computer networks such as personal area networks.
=== Catenet ===
Catenet, a short-form of (con)catenating networks, is obsolete terminology for a system of packet-switched communication networks interconnected via gateways.
The term was coined by Louis Pouzin, who designed the CYCLADES network, in an October 1973 note circulated to the International Network Working Group, which was published in a 1974 paper "A Proposal for Interconnecting Packet Switching Networks". Pouzin was a pioneer of internetworking at a time when network meant what is now called a local area network. Catenet was the concept of linking these networks into a network of networks with specifications for compatibility of addressing and routing. The term was used in technical writing in the late 1970s and early 1980s, including in RFCs and IENs. Catenet was gradually displaced by the short-form of the term internetwork, internet (lower-case i), when the Internet Protocol spread more widely from the mid 1980s and the use of the term internet took on a broader sense and became well known in the 1990s.
== Interconnection of networks ==
Internetworking, a combination of the components inter (between) and networking, started as a way to connect disparate types of networking technology, but it became widespread through the developing need to connect two or more local area networks via some sort of wide area network.
To build an internetwork, the following are needed: a standardized scheme to address packets to any host on any participating network; a standardized protocol defining the format and handling of transmitted packets; and components interconnecting the participating networks by routing packets to their destinations based on standardized addresses.
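The routing component can be sketched with Python's standard ipaddress module; the prefixes and next-hop names below are purely illustrative, not from any real network:

```python
import ipaddress

# A toy forwarding table: each entry maps a destination prefix to a next hop.
table = [
    (ipaddress.ip_network("10.0.0.0/8"), "router-A"),
    (ipaddress.ip_network("10.1.0.0/16"), "router-B"),
    (ipaddress.ip_network("0.0.0.0/0"), "default-gateway"),
]

def next_hop(dst):
    """Longest-prefix match: pick the most specific matching route."""
    dst = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in table if dst in net]
    net, hop = max(matches, key=lambda m: m[0].prefixlen)
    return hop

print(next_hop("10.1.2.3"))   # → router-B (the /16 beats the /8)
print(next_hop("192.0.2.7"))  # → default-gateway
```

Longest-prefix match is what makes a standardized, hierarchical address scheme usable: a router needs only a table of prefixes, not an entry per host.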
Another type of interconnection of networks often occurs within enterprises at the link layer of the networking model, i.e. at the hardware-centric layer below the level of the TCP/IP logical interfaces. Such interconnection is accomplished with network bridges and network switches. This is sometimes incorrectly termed internetworking, but the resulting system is simply a larger, single subnetwork, and no internetworking protocol, such as Internet Protocol, is required to traverse these devices. However, a single computer network may be converted into an internetwork by dividing the network into segments and logically dividing the segment traffic with routers and having an internetworking software layer that applications employ.
The Internet Protocol is designed to provide an unreliable (not guaranteed) packet service across the network. The architecture avoids intermediate network elements maintaining any state of the network. Instead, this function is assigned to the endpoints of each communication session. To transfer data reliably, applications must utilize an appropriate transport layer protocol, such as Transmission Control Protocol (TCP), which provides a reliable stream. Some applications use a simpler, connection-less transport protocol, User Datagram Protocol (UDP), for tasks which do not require reliable delivery of data or that require real-time service, such as video streaming or voice chat.
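The difference is visible at the socket API level. A minimal sketch using UDP on the loopback interface: there is no handshake and no acknowledgement, and delivery only happens to be dependable here because loopback does not drop packets:

```python
import socket

# Receiver: bind a UDP socket to an ephemeral port on loopback.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
addr = recv.getsockname()

# Sender: fire-and-forget datagram -- no connection setup, no delivery report.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello", addr)

data, peer = recv.recvfrom(1024)
print(data)                          # b'hello'
recv.close()
send.close()
```

A TCP version of the same exchange would add connection setup, sequencing, and retransmission, which is exactly the reliability layer the text describes being moved to the endpoints.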
== Networking models ==
Two architectural models are commonly used to describe the protocols and methods used in internetworking. The Open System Interconnection (OSI) reference model was developed under the auspices of the International Organization for Standardization (ISO) and provides a rigorous description for layering protocol functions from the underlying hardware to the software interface concepts in user applications. Internetworking is implemented in the Network Layer (Layer 3) of the model.
The Internet Protocol Suite, also known as the TCP/IP model, was not designed to conform to the OSI model and does not refer to it in any of the normative specifications in Request for Comments and Internet standards. Despite similar appearance as a layered model, it has a much less rigorous, loosely defined architecture that concerns itself only with the aspects of the style of networking in its own historical provenance. It assumes the availability of any suitable hardware infrastructure, without discussing hardware-specific low-level interfaces, and that a host has access to this local network to which it is connected via a link layer interface.
For a period in the late 1980s and early 1990s, the network engineering community was polarized over the implementation of competing protocol suites, commonly known as the Protocol Wars. It was unclear which of the OSI model and the Internet protocol suite would result in the best and most robust computer networks.
== See also ==
History of the Internet
== References ==
== Sources ==
Moschovitis, Christos J. P. (1999). History of the Internet: A Chronology, 1843 to the Present. ABC-CLIO. ISBN 978-1-57607-118-2.
A campus network, campus area network, corporate area network or CAN is a computer network made up of an interconnection of local area networks (LANs) within a limited geographical area. The networking equipment (switches, routers) and transmission media (optical fiber, copper plant, Cat5 cabling, etc.) are almost entirely owned by the campus tenant / owner: an enterprise, university, government, etc. A campus area network is larger than a local area network but smaller than a metropolitan area network (MAN) or wide area network (WAN).
== University campuses ==
College or university campus area networks often interconnect a variety of buildings, including administrative buildings, academic buildings, laboratories, university libraries, student centers, residence halls, gymnasiums, and other outlying structures, such as conference centers, technology centers, and training institutes.
Early examples include the Stanford University Network at Stanford University, Project Athena at MIT, and the Andrew Project at Carnegie Mellon University.
== Corporate campuses ==
Much like a university campus network, a corporate campus network serves to connect buildings. Examples of such are the networks at Googleplex and Microsoft's campus. Campus networks are normally interconnected with high speed Ethernet links operating over optical fiber such as gigabit Ethernet and 10 Gigabit Ethernet.
== Area range ==
A CAN typically spans a range of 1 km to 5 km. Two buildings within the same administrative domain that are connected by a network are still considered a CAN. Because CANs mainly serve corporate and university campuses, their links are usually high speed.
== References ==
Integrated Digital Enhanced Network (iDEN) is a mobile telecommunications technology, developed by Motorola, which provides its users the benefits of a trunked radio and a cellular telephone. It was called the first mobile social network by many technology industry analysts. iDEN places more users in a given spectral space, compared to analog cellular and two-way radio systems, by using speech compression and time-division multiple access (TDMA).
== History ==
The iDEN project originally began as MIRS (Motorola Integrated Radio System) in early 1991. The project was a software lab experiment focused on the utilization of discontiguous spectrum for GSM wireless. GSM systems typically require 24 contiguous voice channels, but the original MIRS software platform dynamically selected fragmented channels in the radio frequency (RF) spectrum in such a way that a GSM telecom switch could commence a phone call the same as it would in the contiguous channel scenario.
== Operating frequencies ==
iDEN is designed and licensed to operate on individual frequencies that may not be contiguous. iDEN operates on 25 kHz channels, but only occupies 20 kHz in order to provide interference protection via guard bands. By comparison, TDMA Cellular (Digital AMPS) is licensed in blocks of 30 kHz channels, but each emission occupies 40 kHz, and is capable of serving the same number of subscribers per channel as iDEN. iDEN uses frequency-division duplexing to transmit and receive signals separately, with transmit and receive bands separated by 39 MHz, 45 MHz, or 48 MHz depending on the frequency band being used.
iDEN supports either three or six interconnect users (phone users) per channel, and six dispatch users (push-to-talk users) per channel, using time-division multiple access. The transmit and receive time slots assigned to each user are deliberately offset in time so that a single user never needs to transmit and receive at the same time. This eliminates the need for a duplexer at the mobile end, since time-division duplexing of RF section usage can be performed.
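The effect of offsetting transmit and receive slots can be illustrated with a toy scheme; the six-slot count matches the dispatch case above, but the specific numbering and half-frame offset below are assumptions for illustration, not iDEN's actual frame layout:

```python
# Illustrative only: a generic time-division-duplex slot assignment, not
# iDEN's real frame structure.
SLOTS = 6          # six dispatch users per channel
OFFSET = 3         # receive slot lags the transmit slot by half a frame

def slots_for(user):
    """Assign each user a transmit and a receive slot that never coincide."""
    tx = user % SLOTS
    rx = (tx + OFFSET) % SLOTS
    return tx, rx

for u in range(SLOTS):
    tx, rx = slots_for(u)
    assert tx != rx    # no user ever transmits and receives simultaneously
    print(f"user {u}: TX slot {tx}, RX slot {rx}")
```

Because no handset ever transmits and receives in the same instant, the RF front end can be time-shared, which is the duplexer saving the text describes.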
== Hardware ==
The first commercial iDEN handset was Motorola's L3000, released in 1994. Lingo, which stands for Link People on the Go, was used as a logo for its earlier handsets. Most modern iDEN handsets use SIM cards, similar to, but incompatible with, GSM handsets' SIM cards. Early iDEN models such as the i1000plus stored all subscriber information inside the handset itself, requiring the data to be downloaded and transferred should the subscriber want to switch handsets. Newer handsets using SIM technology make upgrading or changing handsets as easy as swapping the SIM card. Four different SIM card types exist. "Endeavor" SIMs are used only with the i2000, without data. "Condor" SIMs are used with the two-digit models (the i95cl, for example) and have less memory than the SIMs of the three-digit models (i730, i860). "Falcon" SIMs are used in the three-digit phones (i530, i710), which will also read the smaller SIM for backward compatibility, although some advanced features, such as extra contact information, are not supported by the older SIM cards. There is also the "Falcon 128" SIM, identical to the original "Falcon" but doubled in memory size, which is used in newer three-digit phones (i560, i930).
The interconnect-side of the iDEN network uses GSM signalling for call set-up and mobility management, with the Abis protocol stack modified to support iDEN's additional features. Motorola has named this modified stack 'Mobis'.
Each base site requires precise timing and location information to synchronize data across the network. To obtain and maintain this information, each base site uses GPS satellites to receive a precise timing reference.
== WiDEN ==
Wideband Integrated Digital Enhanced Network, or WiDEN, is a software upgrade developed by Motorola and partners for its iDEN enhanced specialized mobile radio (or ESMR) wireless telephony protocol. WiDEN allows compatible subscriber units to communicate across four 25 kHz channels combined, for up to 100 kbit/s of bandwidth. The protocol is generally considered a 2.5G wireless cellular technology.
=== History ===
iDEN, the platform which WiDEN upgrades, and the protocol on which it is based, was originally introduced by Motorola in 1993, and launched as a commercial network by Nextel in the United States in September 1996.
WiDEN was originally anticipated to be a major stepping stone for United States wireless telephone provider Nextel Communications and its affiliate, Nextel Partners. However, beginning with the December 2004 announcement of the Sprint Nextel merger, Nextel's iDEN network was abandoned in favor of Sprint's CDMA network. WiDEN was deactivated on the NEXTEL National Network in October 2005 when rebanding efforts in the 800 MHz band began in an effort to utilize those data channels as a way to handle more cellular phone call traffic on the NEXTEL iDEN network. The original Nextel iDEN network was finally decommissioned by Sprint on June 30, 2013 and the spectrum refarmed for use in the Sprint LTE network.
=== Subscriber Units ===
The first WiDEN-compatible device to be released was the Motorola iM240 PC card, which allows raw data speeds of up to 60 kbit/s. The first WiDEN-compatible telephones were the Motorola i850 and i760, released in mid-summer 2005; a subsequent i850/i760 software upgrade enabled WiDEN on both of these phones. The commercial launch of WiDEN came with the release of the Motorola i870 on 31 October 2005; however, most people never got to experience the WiDEN capability in their handsets. WiDEN is also offered in the i930/i920 smartphone, but Sprint shipped these units with WiDEN service disabled; many in the cellular forum communities found ways to activate it using Motorola's own RSS software. WiDEN was available in most places on Nextel's National Network but, as stated above, is no longer enabled on the Sprint-controlled towers. After the Sprint Nextel merger, the company determined that because Sprint's CDMA network was already 3G and moving to EVDO (broadband speeds), and then EVDO Rev A, it would be redundant to keep upgrading the iDEN data network. WiDEN is considered a 2.5G technology.
== Operators ==
Countries which had iDEN networks included United States of America, Canada, Mexico, Brazil, Colombia, Argentina, Peru, Chile, Puerto Rico, Costa Rica, India, Korea, Jordan, Guam, China, Philippines, Thailand, Israel, Singapore, Saudi Arabia, El Salvador, and Guatemala.
Sprint Nextel provided iDEN service across the United States until its iDEN network was decommissioned for additional LTE network capacity on 30 June 2013.
SouthernLINC Wireless provided iDEN service across the United States until its iDEN network was decommissioned for additional LTE network capacity on 1 April 2019.
Telus provided iDEN service under the Mike brand across most of Canada until its iDEN network was decommissioned on 29 January 2016.
Nextel Brazil provided iDEN service in Brazil until its iDEN network was decommissioned on 31 March 2018.
Nextel Argentina also provided iDEN service until decommissioning on 30 June 2019.
Colombian Avantel deactivated its iDEN service at the end of 2021.
Nextel Mexico provided iDEN service in Mexico until its iDEN network was decommissioned in 2017. The Mexican Nextel assets were purchased by AT&T in 2015 along with Iusacell the same year to form the nucleus and the revival of AT&T Mexico. AT&T gradually transitioned users from the previous CDMA and iDEN networks to AWS 3G and 4G LTE networks beginning in 2017.
== Capitalization and pronunciation ==
Motorola originally referred to the platform as wiDEN, choosing to capitalize only the letters representing "Digital Enhanced Network," as it had with iDEN. However, subsequent promotion from Motorola and Nextel has indicated that the preferred capitalization is WiDEN.
The term has been pronounced, commonly, as a close combination to the words "why" and "den", or simply as the word "widen". The former is closer to the original pronunciation of iDEN, as "eye" and "den".
== See also ==
List of device bandwidths
Motorola iDEN phone models
Push to Talk over Cellular
Radio Service Software
Trunked radio system
== References ==
== External links ==
List of Urban ID codes (2011) Archived 2016-03-08 at the Wayback Machine
A nanonetwork or nanoscale network is a set of interconnected nanomachines (devices a few hundred nanometers or a few micrometers at most in size) which are able to perform only very simple tasks such as computing, data storing, sensing and actuation. Nanonetworks are expected to expand the capabilities of single nanomachines both in terms of complexity and range of operation by allowing them to coordinate, share and fuse information. Nanonetworks enable new applications of nanotechnology in the biomedical field, environmental research, military technology and industrial and consumer goods applications. Nanoscale communication is defined in IEEE P1906.1.
== Communication approaches ==
Classical communication paradigms need to be revised for the nanoscale. The two main alternatives for communication in the nanoscale are based either on electromagnetic communication or on molecular communication.
=== Electromagnetic ===
This is defined as the transmission and reception of electromagnetic radiation from components based on novel nanomaterials. Recent advancements in carbon and molecular electronics have opened the door to a new generation of electronic nanoscale components such as nanobatteries, nanoscale energy harvesting systems, nano-memories, logical circuitry in the nanoscale and even nano-antennas. From a communication perspective, the unique properties observed in nanomaterials will decide on the specific bandwidths for emission of electromagnetic radiation, the time lag of the emission, or the magnitude of the emitted power for a given input energy, amongst others.
For the time being, two main alternatives for electromagnetic communication in the nanoscale have been envisioned. First, it has been experimentally demonstrated that it is possible to receive and demodulate an electromagnetic wave by means of a nanoradio, i.e., an electromechanically resonating carbon nanotube which is able to decode an amplitude or frequency modulated wave. Second, graphene-based nano-antennas have been analyzed as potential electromagnetic radiators in the terahertz band.
=== Molecular ===
Molecular communication is defined as the transmission and reception of information by means of molecules. The different molecular communication techniques can be classified according to the type of molecule propagation into walkway-based, flow-based or diffusion-based communication.
In walkway-based molecular communication, the molecules propagate through pre-defined pathways by using carrier substances, such as molecular motors. This type of molecular communication can also be achieved by using E. coli bacteria as carriers, exploiting their chemotaxis.
In flow-based molecular communication, the molecules propagate through diffusion in a fluidic medium whose flow and turbulence are guided and predictable. The hormonal communication through blood streams inside the human body is an example of this type of propagation. The flow-based propagation can also be realized by using carrier entities whose motion can be constrained on the average along specific paths, despite showing a random component. A good example of this case is given by pheromonal long range molecular communications.
In diffusion-based molecular communication, the molecules propagate through spontaneous diffusion in a fluidic medium. In this case, the molecules can be subject solely to the laws of diffusion or can also be affected by non-predictable turbulence present in the fluidic medium. Pheromonal communication, when pheromones are released into a fluidic medium, such as air or water, is an example of diffusion-based architecture. Other examples of this kind of transport include calcium signaling among cells, as well as quorum sensing among bacteria.
Based on the macroscopic theory of ideal (free) diffusion, the impulse response of a unicast molecular communication channel was derived in work that identified that such a channel experiences temporal spreading. This temporal spreading has a deep impact on the performance of the system, for example by creating intersymbol interference (ISI) at the receiving nanomachine. Two methods have been proposed for detecting the concentration-encoded molecular signal: sampling-based detection (SD) and energy-based detection (ED). While the SD approach is based on the concentration amplitude of only one sample taken at a suitable time instant during the symbol duration, the ED approach is based on the total accumulated number of molecules received during the entire symbol duration. To reduce the impact of ISI, a controlled pulse-width based molecular communication scheme has been analysed. Subsequent work showed that it is possible to realize multilevel amplitude modulation based on ideal diffusion. Comprehensive studies of pulse-based binary and sinusoidal, concentration-encoded molecular communication systems have also been undertaken.
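The temporal spreading follows directly from the Green's function of 3-D free diffusion, c(r, t) = Q (4πDt)^(−3/2) exp(−r²/4Dt), which for a receiver at distance r peaks at t = r²/(6D) and then decays with a long tail. A minimal sketch (the parameter values are illustrative, roughly a small molecule diffusing in water):

```python
import math

def concentration(Q, D, r, t):
    """Concentration at distance r and time t after an impulsive release of
    Q molecules at the origin (3-D free-diffusion Green's function)."""
    return Q / (4 * math.pi * D * t) ** 1.5 * math.exp(-r * r / (4 * D * t))

# Illustrative numbers: D ~ 1e-9 m^2/s, receiver 10 micrometres away.
Q, D, r = 1e4, 1e-9, 10e-6
t_peak = r * r / (6 * D)     # time of maximum concentration: r^2 / (6 D)
print(f"peak at t = {t_peak:.3g} s")

# samples before and after the peak show the pulse's temporal spreading
for t in (0.5 * t_peak, t_peak, 5 * t_peak):
    print(f"t = {t:.3g} s -> c = {concentration(Q, D, r, t):.3g}")
```

The slow decay after t_peak is what causes one symbol's molecules to linger into the next symbol interval, i.e. the ISI discussed above.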
== See also ==
IEEE P1906.1 Recommended Practice for Nanoscale and Molecular Communication Framework
== References ==
== External links ==
IEEE Communications Society Best Readings in Nanoscale Communication Networks
Stack Exchange Page for Q&A on NanoNetworking
Nanoscale Networking in Industry
Instructions to join P1906.1 Working Group
MONACO Project – Broadband Wireless Networking Laboratory at Georgia Tech, Atlanta, Georgia, US
GRANET Project – Broadband Wireless Networking Laboratory at Georgia Tech, Atlanta, Georgia, US
NaNoNetworking Center in Catalunya at Universitat Politècnica de Catalunya, Barcelona, Catalunya, Spain
Molecular communication research at York University, Toronto, Canada
Research on Molecular Communication at University of Ottawa, Ottawa, Canada
Intelligence Networking Lab. at Yonsei University, Korea
Wiki on Molecular Communication at University of California, Irvine, California, US
Home page of the IEEE Communications Society Emerging Technical Subcommittee on Nanoscale, Molecular, and Quantum Networking.
P1906.1 – Recommended Practice for Nanoscale and Molecular Communication Framework
IEEE 802.15 Terahertz Interest Group
Nano Communication Networks (Elsevier) Journal
A simulation tool for nanoscale biological networks – Elsevier presentation
NanoNetworking Research Group (NRG) at Boğaziçi University, Istanbul, Turkey
A cellular network or mobile network is a telecommunications network where the link to and from end nodes is wireless and the network is distributed over land areas called cells, each served by at least one fixed-location transceiver (such as a base station). These base stations provide the cell with the network coverage which can be used for transmission of voice, data, and other types of content via radio waves. Each cell's coverage area is determined by factors such as the power of the transceiver, the terrain, and the frequency band being used. A cell typically uses a different set of frequencies from neighboring cells, to avoid interference and provide guaranteed service quality within each cell.
When joined together, these cells provide radio coverage over a wide geographic area. This enables numerous devices, including mobile phones, tablets, laptops equipped with mobile broadband modems, and wearable devices such as smartwatches, to communicate with each other and with fixed transceivers and telephones anywhere in the network, via base stations, even if some of the devices are moving through more than one cell during transmission. The design of cellular networks allows for seamless handover, enabling uninterrupted communication when a device moves from one cell to another.
Modern cellular networks utilize advanced technologies such as Multiple Input Multiple Output (MIMO), beamforming, and small cells to enhance network capacity and efficiency.
Cellular networks offer a number of desirable features:
More capacity than a single large transmitter, since the same frequency can be used for multiple links as long as they are in different cells
Mobile devices use less power than a single transmitter or satellite since the cell towers are closer
Larger coverage area than a single terrestrial transmitter, since additional cell towers can be added indefinitely and are not limited by the horizon
Capability of utilizing higher frequency signals (and thus more available bandwidth / faster data rates) that are not able to propagate at long distances
With data compression and multiplexing, several video (including digital video) and audio channels may travel through a higher frequency signal on a single wideband carrier
Major telecommunications providers have deployed voice and data cellular networks over most of the inhabited land area of Earth. This allows mobile phones and other devices to be connected to the public switched telephone network and public Internet access. In addition to traditional voice and data services, cellular networks now support Internet of Things (IoT) applications, connecting devices such as smart meters, vehicles, and industrial sensors.
The evolution of cellular networks from 1G to 5G has progressively introduced faster speeds, lower latency, and support for a larger number of devices, enabling advanced applications in fields such as healthcare, transportation, and smart cities.
Private cellular networks can be used for research or for large organizations and fleets, such as dispatch for local public safety agencies or a taxicab company, as well as for local wireless communications in enterprise and industrial settings such as factories, warehouses, mines, power plants, substations, oil and gas facilities and ports.
== Concept ==
In a cellular radio system, a land area to be supplied with radio service is divided into cells in a pattern dependent on terrain and reception characteristics. These cell patterns roughly take the form of regular shapes, such as hexagons, squares, or circles, although hexagonal cells are conventional. Each of these cells is assigned multiple frequencies (f1–f6) which have corresponding radio base stations. The group of frequencies can be reused in other cells, provided that the same frequencies are not reused in adjacent cells, which would cause co-channel interference.
The increased capacity in a cellular network, compared with a network with a single transmitter, comes from the mobile communication switching system developed by Amos Joel of Bell Labs that permitted multiple callers in a given area to use the same frequency by switching calls to the nearest available cellular tower having that frequency available. This strategy is viable because a given radio frequency can be reused in a different area for an unrelated transmission. In contrast, a single transmitter can only handle one transmission for a given frequency. Inevitably, there is some level of interference from the signal from the other cells which use the same frequency. Consequently, there must be at least one cell gap between cells which reuse the same frequency in a standard frequency-division multiple access (FDMA) system.
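The cell-gap rule can be quantified with the standard hexagonal-geometry result: cells reusing the same frequency are separated by a distance D = R√(3N), where R is the cell radius and N the cluster size. A sketch (this is the textbook idealization, not a property of any specific deployment):

```python
import math

def reuse_distance(R, N):
    """Co-channel reuse distance for hexagonal cells of radius R and
    cluster size N: D = R * sqrt(3 * N)."""
    return R * math.sqrt(3 * N)

# Valid cluster sizes satisfy N = i^2 + i*j + j^2 for non-negative integers.
valid_N = sorted({i * i + i * j + j * j for i in range(5) for j in range(5)} - {0})
print(valid_N[:6])                       # [1, 3, 4, 7, 9, 12]

for N in (3, 7, 12):
    print(f"N={N}: D = {reuse_distance(2.0, N):.2f} km for R = 2 km")
```

Larger clusters push co-channel cells farther apart (less interference) but leave fewer channels per cell, which is the capacity-versus-interference trade-off the text describes.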
Consider the case of a taxi company, where each radio has a manually operated channel selector knob to tune to different frequencies. As drivers move around, they change from channel to channel. The drivers are aware of which frequency approximately covers some area. When they do not receive a signal from the transmitter, they try other channels until finding one that works. The taxi drivers only speak one at a time when invited by the base station operator. This is a form of time-division multiple access (TDMA).
== History ==
The idea to establish a standard cellular phone network was first proposed on December 11, 1947. This proposal was put forward by Douglas H. Ring, a Bell Labs engineer, in an internal memo suggesting the development of a cellular telephone system by AT&T.
The first commercial cellular network, the 1G generation, was launched in Japan by Nippon Telegraph and Telephone (NTT) in 1979, initially in the metropolitan area of Tokyo. However, NTT did not initially commercialize the system; the early launch was motivated by an effort to understand a practical cellular system rather than by an interest to profit from it. In 1981, the Nordic Mobile Telephone system was created as the first network to cover an entire country. The network was released in 1981 in Sweden and Norway, then in early 1982 in Finland and Denmark. Televerket, a state-owned corporation responsible for telecommunications in Sweden, launched the system.
In September 1981, Jan Stenbeck, a financier and businessman, launched Comvik, a new Swedish telecommunications company. Comvik was the first European telecommunications firm to challenge the state's telephone monopoly on the industry. According to some sources, Comvik was the first to launch a commercial automatic cellular system before Televerket launched its own in October 1981. However, at the time of the new network’s release, the Swedish Post and Telecom Authority threatened to shut down the system after claiming that the company had used an unlicensed automatic gear that could interfere with its own networks. In December 1981, Sweden awarded Comvik with a license to operate its own automatic cellular network in the spirit of market competition.
The Bell System had developed cellular technology since 1947, and had cellular networks in operation in Chicago, Illinois, and Dallas, Texas, prior to 1979; however, regulatory battles delayed AT&T's deployment of cellular service to 1983, when its Regional Holding Company Illinois Bell first provided cellular service.
First-generation cellular network technology continued to expand its reach to the rest of the world. In 1990, Millicom Inc., a telecommunications service provider, strategically partnered with Comvik’s international cellular operations to become Millicom International Cellular SA. The company went on to establish a 1G systems foothold in Ghana, Africa under the brand name Mobitel. In 2006, the company’s Ghana operations were renamed to Tigo.
The wireless revolution began in the early 1990s, leading to the transition from analog to digital networks. The MOSFET, invented at Bell Labs between 1955 and 1960, was adapted for cellular networks by the early 1990s, with the wide adoption of power MOSFET, LDMOS (RF amplifier), and RF CMOS (RF circuit) devices leading to the development and proliferation of digital wireless mobile networks.
The first commercial digital cellular network, the 2G generation, was launched in 1991. This sparked competition in the sector as the new operators challenged the incumbent 1G analog network operators.
== Cell signal encoding ==
To distinguish signals from several different transmitters, a number of channel access methods have been developed, including frequency-division multiple access (FDMA, used by analog and D-AMPS systems), time-division multiple access (TDMA, used by GSM) and code-division multiple access (CDMA, first used for PCS, and the basis of 3G).
With FDMA, the transmitting and receiving frequencies used by different users in each cell are different from each other. Each cellular call was assigned a pair of frequencies (one for base to mobile, the other for mobile to base) to provide full-duplex operation. The original AMPS systems had 666 channel pairs, 333 each for the CLEC "A" system and ILEC "B" system. The number of channels was expanded to 416 pairs per carrier, but ultimately the number of RF channels limited the number of calls that a cell site could handle. FDMA is a familiar technology to telephone companies, which used frequency-division multiplexing to add channels to their point-to-point wireline plants before time-division multiplexing rendered FDM obsolete.
With TDMA, the transmitting and receiving time slots used by different users in each cell are different from each other. TDMA typically uses digital signaling to store and forward bursts of voice data that are fit into time slices for transmission, and expanded at the receiving end to produce a somewhat normal-sounding voice. TDMA must introduce latency (time delay) into the audio signal. As long as the latency is short enough that the delayed audio is not heard as an echo, it is not problematic. TDMA is a familiar technology for telephone companies, which used time-division multiplexing to add channels to their point-to-point wireline plants before packet switching rendered TDM obsolete.
The principle of CDMA is based on spread spectrum technology developed for military use during World War II and improved during the Cold War into direct-sequence spread spectrum that was used for early CDMA cellular systems and Wi-Fi. DSSS allows multiple simultaneous phone conversations to take place on a single wideband RF channel, without needing to channelize them in time or frequency. Although more sophisticated than older multiple access schemes (and unfamiliar to legacy telephone companies because it was not developed by Bell Labs), CDMA has scaled well to become the basis for 3G cellular radio systems.
Other available methods of multiplexing, such as MIMO, a more sophisticated version of antenna diversity, combined with active beamforming, provide much greater spatial multiplexing ability compared to original AMPS cells, which typically addressed only one to three unique spaces. Massive MIMO deployment allows much greater channel reuse, thus increasing the number of subscribers per cell site, greater data throughput per user, or some combination thereof. Quadrature amplitude modulation (QAM) modems offer an increasing number of bits per symbol, allowing more users per megahertz of bandwidth (and decibels of SNR), greater data throughput per user, or some combination thereof.
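As a concrete illustration of the QAM point above, the number of bits carried by one symbol grows logarithmically with the constellation size. A minimal sketch (the function name and example constellation sizes are illustrative, not from the source):

```python
import math

def qam_bits_per_symbol(m: int) -> int:
    """Bits carried by one symbol of an M-ary QAM constellation
    (m must be a power of two)."""
    if m <= 0 or m & (m - 1):
        raise ValueError("constellation size must be a power of two")
    return int(math.log2(m))

# Each fourfold increase in constellation size adds two bits per symbol,
# at the cost of roughly 6 dB more required SNR per step.
for m in (16, 64, 256, 1024):
    print(m, qam_bits_per_symbol(m))
```

This is why higher-order QAM trades SNR headroom for throughput: 256-QAM carries twice the bits per symbol of 16-QAM, but only on links clean enough to support it.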
== Frequency reuse ==
The key characteristic of a cellular network is the ability to reuse frequencies to increase both coverage and capacity. As described above, adjacent cells must use different frequencies; however, there is no problem with two cells sufficiently far apart operating on the same frequency, provided the masts and cellular network users' equipment do not transmit with too much power.
The elements that determine frequency reuse are the reuse distance and the reuse factor. The reuse distance, D, is calculated as

D = R√(3N),
where R is the cell radius and N is the number of cells per cluster. Cells may vary in radius from 1 to 30 kilometres (0.62 to 18.64 mi). The boundaries of the cells can also overlap between adjacent cells and large cells can be divided into smaller cells.
The frequency reuse factor is the rate at which the same frequency can be used in the network. It is 1/K (or K according to some books) where K is the number of cells which cannot use the same frequencies for transmission. Common values for the frequency reuse factor are 1/3, 1/4, 1/7, 1/9 and 1/12 (or 3, 4, 7, 9 and 12, depending on notation).
In case of N sector antennas on the same base station site, each with different direction, the base station site can serve N different sectors. N is typically 3. A reuse pattern of N/K denotes a further division in frequency among N sector antennas per site. Some current and historical reuse patterns are 3/7 (North American AMPS), 6/4 (Motorola NAMPS), and 3/4 (GSM).
If the total available bandwidth is B, each cell can only use a number of frequency channels corresponding to a bandwidth of B/K, and each sector can use a bandwidth of B/NK.
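The two relationships above, the reuse distance D = R√(3N) and the per-sector bandwidth B/(NK), can be sketched numerically. All figures below are hypothetical examples, not values from the text:

```python
import math

def reuse_distance(cell_radius_km: float, cells_per_cluster: int) -> float:
    """Co-channel reuse distance D = R * sqrt(3N)."""
    return cell_radius_km * math.sqrt(3 * cells_per_cluster)

def sector_bandwidth(total_bw_mhz: float, sectors: int, reuse_k: int) -> float:
    """Bandwidth available to a single sector, B / (N * K)."""
    return total_bw_mhz / (sectors * reuse_k)

# Hypothetical figures: 2 km cells in a 7-cell cluster, 25 MHz of total
# bandwidth shared by 3 sectors per site under a 3/7 reuse pattern.
print(round(reuse_distance(2.0, 7), 2))        # ≈ 9.17 km between co-channel cells
print(round(sector_bandwidth(25.0, 3, 7), 2))  # ≈ 1.19 MHz per sector
```

The example makes the trade-off concrete: a larger cluster size N pushes co-channel cells further apart (less interference) but leaves each sector a smaller slice of the total bandwidth.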
Code-division multiple access-based systems use a wider frequency band to achieve the same rate of transmission as FDMA, but this is compensated for by the ability to use a frequency reuse factor of 1, for example using a reuse pattern of 1/1. In other words, adjacent base station sites use the same frequencies, and the different base stations and users are separated by codes rather than frequencies. While N is shown as 1 in this example, that does not mean the CDMA cell has only one sector, but rather that the entire cell bandwidth is also available to each sector individually.
More recently, orthogonal frequency-division multiple access based systems such as LTE have been deployed with a frequency reuse of 1. Since such systems do not spread the signal across the frequency band, inter-cell radio resource management is important to coordinate resource allocation between different cell sites and to limit inter-cell interference. Various means of inter-cell interference coordination (ICIC) are already defined in the standard. Coordinated scheduling, multi-site MIMO and multi-site beamforming are other examples of inter-cell radio resource management that might be standardized in the future.
== Directional antennas ==
Cell towers frequently use a directional signal to improve reception in higher-traffic areas. In the United States, the Federal Communications Commission (FCC) limits omnidirectional cell tower signals to 100 watts of power. If the tower has directional antennas, the FCC allows the cell operator to emit up to 500 watts of effective radiated power (ERP).
Although the original cell towers were located at the centers of the cells and created an even, omnidirectional signal, a cellular map can be redrawn with the cellular telephone towers located at the corners of the hexagons where three cells converge. Each tower has three sets of directional antennas aimed in three different directions, with 120 degrees for each cell (totaling 360 degrees), receiving and transmitting into three different cells at different frequencies. This provides a minimum of three channels, and three towers for each cell, and greatly increases the chances of receiving a usable signal from at least one direction.
The numbers in the illustration are channel numbers, which repeat every 3 cells. Large cells can be subdivided into smaller cells for high volume areas.
Cell phone companies also use this directional signal to improve reception along highways and inside buildings like stadiums and arenas.
== Broadcast messages and paging ==
Practically every cellular system has some kind of broadcast mechanism. This can be used directly for distributing information to multiple mobiles. Commonly, for example in mobile telephony systems, the most important use of broadcast information is to set up channels for one-to-one communication between the mobile transceiver and the base station. This is called paging. The three different paging procedures generally adopted are sequential, parallel and selective paging.
The details of the process of paging vary somewhat from network to network, but normally the network knows a limited number of cells in which the phone is located (this group of cells is called a Location Area in the GSM or UMTS system, or a Routing Area if a data packet session is involved; in LTE, cells are grouped into Tracking Areas). Paging takes place by sending the broadcast message to all of those cells. Paging messages can also be used for information transfer. This happens in pagers, in CDMA systems for sending SMS messages, and in the UMTS system, where it allows for low downlink latency in packet-based connections.
In LTE/4G, the Paging procedure is initiated by the MME when data packets need to be delivered to the UE.
Paging types supported by the MME are:
Basic.
SGs_CS and SGs_PS.
QCI_1 through QCI_9.
== Movement from cell to cell and handing over ==
In a primitive taxi system, when the taxi moved away from a first tower and closer to a second tower, the taxi driver manually switched from one frequency to another as needed. If communication was interrupted due to a loss of a signal, the taxi driver asked the base station operator to repeat the message on a different frequency.
In a cellular system, as the distributed mobile transceivers move from cell to cell during an ongoing continuous communication, switching from one cell frequency to a different cell frequency is done electronically without interruption and without a base station operator or manual switching. This is called the handover or handoff. Typically, a new channel is automatically selected for the mobile unit on the new base station which will serve it. The mobile unit then automatically switches from the current channel to the new channel and communication continues.
The exact details of the mobile system's move from one base station to the other vary considerably from system to system (see the example below for how a mobile phone network manages handover).
== Mobile phone network ==
The most common example of a cellular network is a mobile phone (cell phone) network. A mobile phone is a portable telephone which receives or makes calls through a cell site (base station) or transmitting tower. Radio waves are used to transfer signals to and from the cell phone.
Modern mobile phone networks use cells because radio frequencies are a limited, shared resource. Cell-sites and handsets change frequency under computer control and use low power transmitters so that the usually limited number of radio frequencies can be simultaneously used by many callers with less interference.
A cellular network is used by the mobile phone operator to achieve both coverage and capacity for their subscribers. Large geographic areas are split into smaller cells to avoid line-of-sight signal loss and to support a large number of active phones in that area. All of the cell sites are connected to telephone exchanges (or switches), which in turn connect to the public telephone network.
In cities, each cell site may have a range of up to approximately 1⁄2 mile (0.80 km), while in rural areas, the range could be as much as 5 miles (8.0 km). It is possible that in clear open areas, a user may receive signals from a cell site 25 miles (40 km) away. In rural areas with low-band coverage and tall towers, basic voice and messaging service may reach 50 miles (80 km), with limitations on bandwidth and number of simultaneous calls.
Since almost all mobile phones use cellular technology, including GSM, CDMA, and AMPS (analog), the term "cell phone" is in some regions, notably the US, used interchangeably with "mobile phone". However, satellite phones are mobile phones that do not communicate directly with a ground-based cellular tower but may do so indirectly by way of a satellite.
There are a number of different digital cellular technologies, including: Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), cdmaOne, CDMA2000, Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution (EDGE), Universal Mobile Telecommunications System (UMTS), Digital Enhanced Cordless Telecommunications (DECT), Digital AMPS (IS-136/TDMA), and Integrated Digital Enhanced Network (iDEN). The transition from existing analog to the digital standard followed a very different path in Europe and the US. As a consequence, multiple digital standards surfaced in the US, while Europe and many countries converged towards the GSM standard.
=== Structure of the mobile phone cellular network ===
A simple view of the cellular mobile-radio network consists of the following:
A network of radio base stations forming the base station subsystem.
The core circuit switched network for handling voice calls and text
A packet switched network for handling mobile data
The public switched telephone network to connect subscribers to the wider telephony network
This network is the foundation of the GSM system network. There are many functions performed by this network in order to make sure customers get the desired service, including mobility management, registration, call set-up, and handover.
Any phone connects to the network via an RBS (Radio Base Station) at a corner of the corresponding cell which in turn connects to the Mobile switching center (MSC). The MSC provides a connection to the public switched telephone network (PSTN). The link from a phone to the RBS is called an uplink while the other way is termed downlink.
Radio channels effectively use the transmission medium through the use of the following multiplexing and access schemes: frequency-division multiple access (FDMA), time-division multiple access (TDMA), code-division multiple access (CDMA), and space-division multiple access (SDMA).
=== Small cells ===
Small cells, which have a smaller coverage area than base stations, are categorised as follows:
Microcell – less than 2 kilometres
Picocell – less than 200 metres
Femtocell – around 10 metres
Attocell – 1–4 metres
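The size categories above can be expressed as a simple lookup. The exact boundary handling (e.g. at 4 m and 10 m) is an arbitrary choice here, since the text gives only approximate ranges:

```python
def classify_cell(radius_m: float) -> str:
    """Map a coverage radius in metres to the small-cell categories above.
    Boundary values are one arbitrary reading of the approximate ranges."""
    if radius_m <= 4:
        return "attocell"
    if radius_m <= 10:
        return "femtocell"
    if radius_m < 200:
        return "picocell"
    if radius_m < 2000:
        return "microcell"
    return "macrocell"

print([classify_cell(r) for r in (2, 8, 150, 1500, 5000)])
# → ['attocell', 'femtocell', 'picocell', 'microcell', 'macrocell']
```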
=== Cellular handover in mobile phone networks ===
As the phone user moves from one cell area to another cell while a call is in progress, the mobile station will search for a new channel to attach to in order not to drop the call. Once a new channel is found, the network will command the mobile unit to switch to the new channel and at the same time switch the call onto the new channel.
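The channel-selection step described above can be sketched as follows. The hysteresis margin is a common refinement not mentioned in the text, added here to show why the decision is usually not based on raw signal strength alone:

```python
def pick_handover_target(serving_dbm, neighbours, hysteresis_db=3.0):
    """Return the neighbour cell to hand over to, or None to stay put.

    neighbours maps cell id -> measured signal strength in dBm. A neighbour
    qualifies only if it beats the serving cell by a hysteresis margin,
    which avoids rapid ping-ponging at the cell boundary.
    """
    if not neighbours:
        return None
    best = max(neighbours, key=neighbours.get)
    if neighbours[best] > serving_dbm + hysteresis_db:
        return best
    return None
```

For example, with a serving cell at −95 dBm and a 3 dB margin, a neighbour at −90 dBm triggers a handover, while one at −94 dBm does not.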
With CDMA, multiple CDMA handsets share a specific radio channel. The signals are separated by using a pseudonoise code (PN code) that is specific to each phone. As the user moves from one cell to another, the handset sets up radio links with multiple cell sites (or sectors of the same site) simultaneously. This is known as "soft handoff" because, unlike with traditional cellular technology, there is no one defined point where the phone switches to the new cell.
In IS-95 inter-frequency handovers and older analog systems such as NMT it will typically be impossible to test the target channel directly while communicating. In this case, other techniques have to be used such as pilot beacons in IS-95. This means that there is almost always a brief break in the communication while searching for the new channel followed by the risk of an unexpected return to the old channel.
If there is no ongoing communication or the communication can be interrupted, it is possible for the mobile unit to spontaneously move from one cell to another and then notify the base station with the strongest signal.
=== Cellular frequency choice in mobile phone networks ===
The effect of frequency on cell coverage means that different frequencies serve better for different uses. Low frequencies, such as 450 MHz NMT, serve very well for countryside coverage. GSM 900 (900 MHz) is suitable for light urban coverage. GSM 1800 (1.8 GHz) starts to be limited by structural walls. UMTS, at 2.1 GHz is quite similar in coverage to GSM 1800.
Higher frequencies are a disadvantage when it comes to coverage, but a decided advantage when it comes to capacity. Picocells, covering e.g. one floor of a building, become possible, and the same frequency can be used for cells which are practically neighbors.
Cell service area may also vary due to interference from transmitting systems, both within and around that cell. This is true especially in CDMA-based systems. The receiver requires a certain signal-to-noise ratio, and the transmitter should not send with too high a transmission power, so as not to cause interference with other transmitters. As the receiver moves away from the transmitter, the power received decreases, so the power control algorithm of the transmitter increases the power it transmits to restore the level of received power. As the interference (noise) rises above the received power from the transmitter, and the power of the transmitter cannot be increased any more, the signal becomes corrupted and eventually unusable. In CDMA-based systems, the effect of interference from other mobile transmitters in the same cell on coverage area is very marked and has a special name, cell breathing.
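The power control behaviour described above can be sketched as one step of a closed loop; the step size and power cap below are hypothetical values, not taken from any specific standard:

```python
def power_control_step(tx_dbm, rx_dbm, target_dbm, step_db=1.0, max_dbm=24.0):
    """One iteration of closed-loop power control: nudge the transmit power
    up or down by a fixed step toward the target received level. Once the
    cap max_dbm is reached, rising interference can no longer be overcome
    and the link degrades (the mechanism behind cell breathing)."""
    if rx_dbm < target_dbm:
        return min(tx_dbm + step_db, max_dbm)  # power saturates at the cap
    return tx_dbm - step_db
```

Run repeatedly, the loop raises power as the mobile moves away from the base station and lowers it again when the received level overshoots the target.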
One can see examples of cell coverage by studying some of the coverage maps provided by real operators on their web sites or by looking at independently crowdsourced maps such as Opensignal or CellMapper. In certain cases they may mark the site of the transmitter; in others, it can be calculated by working out the point of strongest coverage.
A cellular repeater is used to extend cell coverage into larger areas. They range from wideband repeaters for consumer use in homes and offices to smart or digital repeaters for industrial needs.
=== Cell size ===
The following table shows the dependency of the coverage area of one cell on the frequency of a CDMA2000 network:
== See also ==
Lists and technical information:
Mobile technologies
2G networks (the first digital networks; 1G and 0G were analog):
GSM
Circuit Switched Data (CSD)
GPRS
EDGE(IMT-SC)
Evolved EDGE
Digital AMPS
Cellular Digital Packet Data (CDPD)
cdmaOne (IS-95)
Personal Handy-phone System (PHS)
Personal Digital Cellular
3G networks:
UMTS
W-CDMA (air interface)
TD-CDMA (air interface)
TD-SCDMA (air interface)
HSPA
HSDPA
HSPA+
CDMA2000
OFDMA (air interface)
EVDO
SVDO
4G networks:
IMT Advanced
LTE (TD-LTE)
LTE Advanced
LTE Advanced Pro
WiMAX
WiMAX-Advanced (WirelessMAN-Advanced)
Ultra Mobile Broadband (never commercialized)
MBWA (IEEE 802.20, Mobile Broadband Wireless Access, HC-SDMA, iBurst, has been shut down)
5G networks:
5G NR
5G-Advanced
Starting with EVDO the following techniques can also be used to improve performance:
MIMO, SDMA and Beamforming
Cellular frequencies
CDMA frequency bands
GSM frequency bands
UMTS frequency bands
LTE frequency bands
5G NR frequency bands
Deployed networks by technology
List of UMTS networks
List of CDMA2000 networks
List of LTE networks
List of deployed WiMAX networks
List of 5G NR networks
Deployed networks by country (including technology and frequencies)
List of mobile network operators of Europe
List of mobile network operators of the Americas
List of mobile network operators of the Asia Pacific region
List of mobile network operators of the Middle East and Africa
List of mobile network operators (summary)
Mobile country code - code, frequency, and technology for each operator in each country
Comparison of mobile phone standards
List of mobile phone brands by country (manufacturers)
Equipment:
Cellular repeater
Cellular router
Professional mobile radio (PMR)
OpenBTS
Remote radio head
Baseband unit
Radio access network
Mobile cell sites
Other:
Antenna diversity
Cellular traffic
MIMO (multiple-input and multiple-output)
Mobile edge computing
Mobile phone radiation and health
Network simulation
Personal Communications Service
Radio resource management (RRM)
Routing in cellular networks
Signal strength
Title 47 of the Code of Federal Regulations
== References ==
== Further reading ==
P. Key, D. Smith. Teletraffic Engineering in a competitive world. Elsevier Science B.V., Amsterdam Netherlands, 1999. ISBN 978-0444502681. Chapter 1 (Plenary) and 3 (mobile).
William C. Y. Lee, Mobile Cellular Telecommunications Systems (1989), McGraw-Hill. ISBN 978-0-071-00790-0.
== External links ==
Raciti, Robert C. (July 1995). "CELLULAR TECHNOLOGY". Nova Southeastern University. Archived from the original on 15 July 2013. Retrieved 2 April 2012.
A History of Cellular Networks
What are cellular networks? 1G to 6G Features & Evolution
Technical Details with Call Flow about LTE Paging Procedure.
A nanonetwork or nanoscale network is a set of interconnected nanomachines (devices a few hundred nanometers or a few micrometers at most in size) which are able to perform only very simple tasks such as computing, data storing, sensing and actuation. Nanonetworks are expected to expand the capabilities of single nanomachines both in terms of complexity and range of operation by allowing them to coordinate, share and fuse information. Nanonetworks enable new applications of nanotechnology in the biomedical field, environmental research, military technology and industrial and consumer goods applications. Nanoscale communication is defined in IEEE P1906.1.
== Communication approaches ==
Classical communication paradigms need to be revised for the nanoscale. The two main alternatives for communication in the nanoscale are based either on electromagnetic communication or on molecular communication.
=== Electromagnetic ===
This is defined as the transmission and reception of electromagnetic radiation from components based on novel nanomaterials. Recent advancements in carbon and molecular electronics have opened the door to a new generation of electronic nanoscale components such as nanobatteries, nanoscale energy harvesting systems, nano-memories, logical circuitry in the nanoscale and even nano-antennas. From a communication perspective, the unique properties observed in nanomaterials will determine the specific bandwidths for emission of electromagnetic radiation, the time lag of the emission, and the magnitude of the emitted power for a given input energy, among others.
For the time being, two main alternatives for electromagnetic communication in the nanoscale have been envisioned. First, it has been experimentally demonstrated that it is possible to receive and demodulate an electromagnetic wave by means of a nanoradio, i.e., an electromechanically resonating carbon nanotube which is able to decode an amplitude- or frequency-modulated wave. Second, graphene-based nano-antennas have been analyzed as potential electromagnetic radiators in the terahertz band.
=== Molecular ===
Molecular communication is defined as the transmission and reception of information by means of molecules. The different molecular communication techniques can be classified according to the type of molecule propagation into walkway-based, flow-based or diffusion-based communication.
In walkway-based molecular communication, the molecules propagate through pre-defined pathways by using carrier substances, such as molecular motors. This type of molecular communication can also be achieved by using E. coli bacteria as carriers, guided by chemotaxis.
In flow-based molecular communication, the molecules propagate through diffusion in a fluidic medium whose flow and turbulence are guided and predictable. The hormonal communication through blood streams inside the human body is an example of this type of propagation. The flow-based propagation can also be realized by using carrier entities whose motion can be constrained on the average along specific paths, despite showing a random component. A good example of this case is given by pheromonal long range molecular communications.
In diffusion-based molecular communication, the molecules propagate through spontaneous diffusion in a fluidic medium. In this case, the molecules can be subject solely to the laws of diffusion or can also be affected by non-predictable turbulence present in the fluidic medium. Pheromonal communication, when pheromones are released into a fluidic medium, such as air or water, is an example of diffusion-based architecture. Other examples of this kind of transport include calcium signaling among cells, as well as quorum sensing among bacteria.
Based on the macroscopic theory of ideal (free) diffusion, the impulse response of a unicast molecular communication channel was reported in a paper which identified that an ideal diffusion-based molecular communication channel experiences temporal spreading. Such temporal spreading has a deep impact on the performance of the system, for example by creating intersymbol interference (ISI) at the receiving nanomachine. Two detection methods, sampling-based detection (SD) and energy-based detection (ED), have been proposed to detect the concentration-encoded molecular signal. While the SD approach is based on the concentration amplitude of only one sample taken at a suitable time instant during the symbol duration, the ED approach is based on the total accumulated number of molecules received during the entire symbol duration. In order to reduce the impact of ISI, a controlled pulse-width based molecular communication scheme has been analysed. It has also been shown that it is possible to realize multilevel amplitude modulation based on ideal diffusion. Pulse-based binary and sine-based, concentration-encoded molecular communication systems have also been studied comprehensively.
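For an impulsive release of Q molecules in three dimensions, the ideal free-diffusion impulse response is c(r,t) = Q/(4πDt)^{3/2} · exp(−r²/4Dt), which peaks at t = r²/(6D) and then decays with a long tail, the temporal spreading that produces ISI. A sketch with hypothetical physical constants (the diffusion coefficient and distance below are made-up example values):

```python
import math

def diffusion_concentration(q, d, r, t):
    """Impulse response of an ideal free-diffusion channel in 3-D:
    concentration at distance r (m) and time t (s) after q molecules
    are released at the origin. d is the diffusion coefficient (m^2/s)."""
    return q / (4 * math.pi * d * t) ** 1.5 * math.exp(-r ** 2 / (4 * d * t))

# Hypothetical values: d = 1e-10 m^2/s, receiver at r = 1 micrometre.
d, r = 1e-10, 1e-6
t_peak = r ** 2 / (6 * d)  # time at which the received pulse peaks
peak = diffusion_concentration(1.0, d, r, t_peak)
# The slow decay after t_peak is what smears one symbol into the next (ISI).
print(t_peak)
```

Sampling-based detection corresponds to reading this curve at one instant (ideally near t_peak), while energy-based detection integrates it over the whole symbol duration.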
== See also ==
IEEE P1906.1 Recommended Practice for Nanoscale and Molecular Communication Framework
== References ==
== External links ==
IEEE Communications Society Best Readings in Nanoscale Communication Networks
Stack Exchange Page for Q&A on NanoNetworking
Nanoscale Networking in Industry
Instructions to join P1906.1 Working Group
MONACO Project – Broadband Wireless Networking Laboratory at Georgia Tech, Atlanta, Georgia, US
GRANET Project – Broadband Wireless Networking Laboratory at Georgia Tech, Atlanta, Georgia, US
NaNoNetworking Center in Catalunya at Universitat Politècnica de Catalunya, Barcelona, Catalunya, Spain
Molecular communication research at York University, Toronto, Canada
Research on Molecular Communication at University of Ottawa, Ottawa, Canada
Intelligence Networking Lab. at Yonsei University, Korea
Wiki on Molecular Communication at University of California, Irvine, California, US
Home page of the IEEE Communications Society Emerging Technical Subcommittee on Nanoscale, Molecular, and Quantum Networking.
P1906.1 – Recommended Practice for Nanoscale and Molecular Communication Framework
IEEE 802.15 Terahertz Interest Group
Nano Communication Networks (Elsevier) Journal
A simulation tool for nanoscale biological networks – Elsevier presentation
NanoNetworking Research Group (NRG) at Boğaziçi University, Istanbul, Turkey
A storage area network (SAN) or storage network is a computer network which provides access to consolidated, block-level data storage. SANs are primarily used to access data storage devices, such as disk arrays and tape libraries from servers so that the devices appear to the operating system as direct-attached storage. A SAN typically is a dedicated network of storage devices not accessible through the local area network (LAN).
Although a SAN provides only block-level access, file systems built on top of SANs do provide file-level access and are known as shared-disk file systems.
Newer SAN configurations enable hybrid SAN and allow traditional block storage that appears as local storage but also object storage for web services through APIs.
== Storage architectures ==
Storage area networks (SANs) are sometimes referred to as the network behind the servers and historically developed out of a centralized data storage model, but with its own data network. A SAN is, at its simplest, a dedicated network for data storage. In addition to storing data, SANs allow for the automatic backup of data, and the monitoring of the storage as well as the backup process. A SAN is a combination of hardware and software. It grew out of data-centric mainframe architectures, where clients in a network can connect to several servers that store different types of data. To scale storage capacities as the volumes of data grew, direct-attached storage (DAS) was developed, where disk arrays or just a bunch of disks (JBODs) were attached to servers. In this architecture, storage devices can be added to increase storage capacity. However, the server through which the storage devices are accessed is a single point of failure, and a large part of the LAN network bandwidth is used for accessing, storing and backing up data. To solve the single point of failure issue, a direct-attached shared storage architecture was implemented, where several servers could access the same storage device.
DAS was the first network storage system and is still widely used where data storage requirements are not very high. Out of it developed the network-attached storage (NAS) architecture, where one or more dedicated file servers or storage devices are made available in a LAN. The transfer of data, particularly for backup, therefore still takes place over the existing LAN. If more than a terabyte of data was stored at any one time, LAN bandwidth became a bottleneck. Therefore, SANs were developed, where a dedicated storage network was attached to the LAN, and terabytes of data are transferred over a dedicated high-speed, high-bandwidth network. Within the SAN, storage devices are interconnected. Transfer of data between storage devices, such as for backup, happens behind the servers and is meant to be transparent. In a NAS architecture data is transferred using the TCP and IP protocols over Ethernet. Distinct protocols were developed for SANs, such as Fibre Channel, iSCSI and InfiniBand. Therefore, SANs often have their own network and storage devices, which have to be bought, installed, and configured. This makes SANs inherently more expensive than NAS architectures.
== Components ==
SANs have their own networking devices, such as SAN switches. To access the SAN, so-called SAN servers are used, which in turn connect to SAN host adapters. Within the SAN, a range of data storage devices may be interconnected, such as SAN-capable disk arrays, JBODs and tape libraries.
=== Host layer ===
Servers that allow access to the SAN and its storage devices are said to form the host layer of the SAN. Such servers have host adapters, which are cards that attach to slots on the server motherboard (usually PCI slots) and run with a corresponding firmware and device driver. Through the host adapters the operating system of the server can communicate with the storage devices in the SAN.
In Fibre Channel deployments, a cable connects to the host adapter through the gigabit interface converter (GBIC). GBICs are also used on switches and storage devices within the SAN, and they convert digital bits into light impulses that can then be transmitted over the Fibre Channel cables. Conversely, the GBIC converts incoming light impulses back into digital bits. The predecessor of the GBIC was called the gigabit link module (GLM).
=== Fabric layer ===
The fabric layer consists of SAN networking devices that include SAN switches, routers, protocol bridges, gateway devices, and cables. SAN network devices move data within the SAN, or between an initiator, such as an HBA port of a server, and a target, such as the port of a storage device.
When SANs were first built, hubs were the only devices that were Fibre Channel capable, but Fibre Channel switches were developed and hubs are now rarely found in SANs. Switches have the advantage over hubs that they allow all attached devices to communicate simultaneously, as a switch provides a dedicated link to connect all its ports with one another.: 34 When SANs were first built, Fibre Channel had to be implemented over copper cables; these days multimode optical fibre cables are used in SANs.: 40
SANs are usually built with redundancy, so SAN switches are connected with redundant links. SAN switches connect the servers with the storage devices and are typically non-blocking, allowing transmission of data across all attached wires at the same time.: 29 For redundancy purposes, SAN switches are set up in a meshed topology. A single SAN switch can have as few as 8 ports and up to 32 ports with modular extensions.: 35 So-called director-class switches can have as many as 128 ports.: 36
In switched SANs, the Fibre Channel switched fabric protocol FC-SW-6 is used, under which every device in the SAN has a hardcoded World Wide Name (WWN) address in the host bus adapter (HBA). When a device is connected to the SAN, its WWN is registered in the SAN switch name server.: 47 In place of a WWN, or worldwide port name (WWPN), SAN Fibre Channel storage device vendors may also hardcode a worldwide node name (WWNN). The ports of storage devices often have a WWN starting with 5, while the bus adapters of servers start with 10 or 21.: 47
=== Storage layer ===
The serialized Small Computer Systems Interface (SCSI) protocol is often used on top of the Fibre Channel switched fabric protocol in servers and SAN storage devices. The Internet Small Computer Systems Interface (iSCSI) over Ethernet and the InfiniBand protocols may also be found implemented in SANs, but are often bridged into the Fibre Channel SAN. However, InfiniBand and iSCSI storage devices, in particular disk arrays, are available.: 47–48
The various storage devices in a SAN are said to form the storage layer. It can include a variety of hard disk and magnetic tape devices that store data. In SANs, disk arrays are combined using RAID, which makes many hard disks look and perform like one big storage device.: 48 Every storage device, or even partition on that storage device, has a logical unit number (LUN) assigned to it. This is a unique number within the SAN. Every node in the SAN, be it a server or another storage device, can access the storage by referencing the LUN. The LUNs allow for the storage capacity of a SAN to be segmented and for the implementation of access controls. A particular server, or a group of servers, may, for example, only be given access to a particular part of the SAN storage layer, in the form of LUNs. When a storage device receives a request to read or write data, it will check its access list to establish whether the node, identified by its LUN, is allowed to access the storage area, also identified by a LUN.: 148–149 LUN masking is a technique whereby the host bus adapter and the SAN software of a server restrict the LUNs for which commands are accepted; LUNs that should never be accessed by the server are masked.: 354 Another method to restrict server access to particular SAN storage devices is fabric-based access control, or zoning, which is enforced by the SAN networking devices and servers. Under zoning, server access is restricted to storage devices that are in a particular SAN zone.
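The LUN-based access check described above can be sketched as a simple lookup. The table contents and WWPN values below are purely illustrative, not any vendor's format:

```python
# Hypothetical LUN masking table: LUN -> set of initiator WWPNs permitted
# to send read/write commands to it (illustrative values only).
ACCESS_LIST = {
    0x01: {"10:00:00:05:1e:7a:7a:00"},                             # server A only
    0x02: {"10:00:00:05:1e:7a:7a:00", "21:00:00:24:ff:4c:a2:42"},  # servers A and B
}

def may_access(initiator_wwpn, lun):
    """Return True if the initiator is allowed to address this LUN."""
    return initiator_wwpn in ACCESS_LIST.get(lun, set())
```

A storage device performing this check on each incoming command is the essence of LUN-level access control; LUN masking applies the same idea on the server side.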
== Network protocols ==
A mapping layer to other protocols is used to form a network:
ATA over Ethernet (AoE), mapping of AT Attachment (ATA) over Ethernet
Fibre Channel Protocol (FCP), a mapping of SCSI over Fibre Channel
Fibre Channel over Ethernet (FCoE)
ESCON over Fibre Channel (FICON), used by mainframe computers
HyperSCSI, mapping of SCSI over Ethernet
iFCP or SANoIP, mapping of FCP over IP
iSCSI, mapping of SCSI over TCP/IP
iSCSI Extensions for RDMA (iSER), mapping of iSCSI over InfiniBand
Network block device, mapping device node requests on UNIX-like systems over stream sockets like TCP/IP
SCSI RDMA Protocol (SRP), another SCSI implementation for remote direct memory access (RDMA) transports
Storage networks may also be built using Serial Attached SCSI (SAS) and Serial ATA (SATA) technologies. SAS evolved from SCSI direct-attached storage. SATA evolved from Parallel ATA direct-attached storage. SAS and SATA devices can be networked using SAS expanders.
== Software ==
The Storage Networking Industry Association (SNIA) defines a SAN as "a network whose primary purpose is the transfer of data between computer systems and storage elements". But a SAN does not just consist of a communication infrastructure, it also has a software management layer. This software organizes the servers, storage devices, and the network so that data can be transferred and stored. Because a SAN does not use direct attached storage (DAS), the storage devices in the SAN are not owned and managed by a server.: 11 A SAN allows a server to access a large data storage capacity and this storage capacity may also be accessible by other servers.: 12 Moreover, SAN software must ensure that data is directly moved between storage devices within the SAN, with minimal server intervention.: 13
SAN management software is installed on one or more servers, and management clients on the storage devices. Two approaches have developed in SAN management software: in-band and out-of-band management. In-band means that management data between server and storage devices is transmitted on the same network as the storage data, while out-of-band means that management data is transmitted over dedicated links.: 174 SAN management software will collect management data from all storage devices in the storage layer. This includes info on read and write failures, storage capacity bottlenecks and failure of storage devices. SAN management software may integrate with the Simple Network Management Protocol (SNMP).: 176
In 1999 the Common Information Model (CIM), an open standard, was introduced for managing storage devices and to provide interoperability. The web-based version of CIM is called Web-Based Enterprise Management (WBEM) and defines SAN storage device objects and process transactions. Use of these protocols involves a CIM object manager (CIMOM) to manage objects and interactions, and allows for the central management of SAN storage devices. Basic device management for SANs can also be achieved through the Storage Management Interface Specification (SMI-S), where CIM objects and processes are registered in a directory. Software applications and subsystems can then draw on this directory.: 177 Management software applications are also available to configure SAN storage devices, allowing, for example, the configuration of zones and LUNs.: 178
Ultimately SAN networking and storage devices are available from many vendors and every SAN vendor has its own management and configuration software. Common management in SANs that include devices from different vendors is only possible if vendors make the application programming interface (API) for their devices available to other vendors. In such cases, upper-level SAN management software can manage the SAN devices from other vendors.: 180
== Filesystems support ==
In a SAN, data is transferred, stored and accessed on a block level. As such, a SAN does not provide data file abstraction, only block-level storage and operations. Server operating systems maintain their own file systems on their own dedicated, non-shared LUNs on the SAN, as though they were local to themselves. If multiple systems were simply to attempt to share a LUN, they would interfere with each other and quickly corrupt the data. Any planned sharing of data on different computers within a LUN requires software. File systems have been developed to work with SAN software to provide file-level access. These are known as shared-disk file systems.
== In media and entertainment ==
Video editing systems require very high data transfer rates and very low latency. SANs in media and entertainment are often referred to as serverless due to the nature of the configuration which places the video workflow (ingest, editing, playout) desktop clients directly on the SAN rather than attaching to servers. Control of data flow is managed by a distributed file system. Per-node bandwidth usage control, sometimes referred to as quality of service (QoS), is especially important in video editing as it ensures fair and prioritized bandwidth usage across the network.
== Quality of service ==
SAN Storage QoS enables the desired storage performance to be calculated and maintained for network customers accessing the device. Some factors that affect SAN QoS are:
Bandwidth – The rate of data throughput available on the system.
Latency – The time delay for a read/write operation to execute.
Queue depth – The number of outstanding operations waiting to execute to the underlying disks (traditional or solid-state drives).
Alternatively, over-provisioning can be used to provide additional capacity to compensate for peak network traffic loads. However, where network loads are not predictable, over-provisioning can eventually cause all bandwidth to be fully consumed and latency to increase significantly resulting in SAN performance degradation.
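The three QoS factors are related: by Little's law, the sustained operation rate a device can deliver is roughly its queue depth divided by its per-operation latency. A small illustrative calculation (a rule-of-thumb estimate, not a vendor formula):

```python
def estimated_iops(queue_depth, latency_seconds):
    # Little's law: operations in flight = operation rate x time per operation,
    # so the sustainable rate is approximately queue_depth / latency.
    return queue_depth / latency_seconds

# For example, 32 outstanding operations at 2 ms each sustain about 16,000 IOPS.
```

This shows why latency increases hurt so badly: at a fixed queue depth, doubling latency halves the achievable throughput.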
== Storage virtualization ==
Storage virtualization is the process of abstracting logical storage from physical storage. The physical storage resources are aggregated into storage pools, from which the logical storage is created. It presents to the user a logical space for data storage and transparently handles the process of mapping it to the physical location, a concept called location transparency. This is implemented in modern disk arrays, often using vendor-proprietary technology. However, the goal of storage virtualization is to group multiple disk arrays from different vendors, scattered over a network, into a single storage device. The single storage device can then be managed uniformly.
== See also ==
List of networked storage hardware platforms
List of storage area network management systems
Massive array of idle disks (MAID)
Storage hypervisor
Storage resource management (SRM)
Converged storage
== References ==
== External links ==
What Is a Storage Area Network (SAN)?
Introduction to Storage Area Networks, an exhaustive introduction to SANs, IBM Redbook
SAN vs. DAS: A Cost Analysis of Storage in the Enterprise at the Wayback Machine (archived 2011-10-30)
SAS and SATA, solid-state storage lower data center power consumption Archived 18 October 2010 at the Wayback Machine
SAN NAS Videos
The Address Resolution Protocol (ARP) is a communication protocol for discovering the link layer address, such as a MAC address, associated with an internet layer address, typically an IPv4 address. The protocol, part of the Internet protocol suite, was defined in 1982 by RFC 826, which is Internet Standard STD 37.
ARP enables a host to send an IPv4 packet to another node in the local network by providing a protocol to get the MAC address associated with an IP address. The host broadcasts a request containing the node's IP address, and the node with that IP address replies with its MAC address.
ARP has been implemented with many combinations of network and data link layer technologies, such as IPv4, Chaosnet, DECnet and Xerox PARC Universal Packet (PUP) using IEEE 802 standards, FDDI, X.25, Frame Relay and Asynchronous Transfer Mode (ATM).
In Internet Protocol Version 6 (IPv6) networks, the functionality of ARP is provided by the Neighbor Discovery Protocol (NDP).
== Operating scope ==
The Address Resolution Protocol is a request-response protocol. Its messages are directly encapsulated by a link layer protocol. It is communicated within the boundaries of a single subnetwork and is never routed.
== Packet structure ==
The Address Resolution Protocol uses a simple message format containing one address resolution request or response. The packets are carried at the data link layer of the underlying network as raw payload. In the case of Ethernet, a 0x0806 EtherType value is used to identify ARP frames.
The size of the ARP message depends on the link layer and network layer address sizes. The message header specifies the types of network in use at each layer as well as the size of addresses of each. The message header is completed with the operation code for request (1) and reply (2). The payload of the packet consists of four addresses, the hardware and protocol address of the sender and receiver hosts.
The principal packet structure of ARP packets is shown in the following table which illustrates the case of IPv4 networks running on Ethernet. In this scenario, the packet has 48-bit fields for the sender hardware address (SHA) and target hardware address (THA), and 32-bit fields for the corresponding sender and target protocol addresses (SPA and TPA). The ARP packet size in this case is 28 bytes.
Hardware Type (HTYPE): 16 bits
This field specifies the network link protocol type. In this example, a value of 1 indicates Ethernet.
Protocol Type (PTYPE): 16 bits
This field specifies the internetwork protocol for which the ARP request is intended. For IPv4, this has the value 0x0800. The permitted PTYPE values share a numbering space with those for EtherType.
Hardware Length (HLEN): 8 bits
Length (in octets) of a hardware address. For Ethernet, the address length is 6.
Protocol Length (PLEN): 8 bits
Length (in octets) of internetwork addresses. The internetwork protocol is specified in PTYPE. In this example: IPv4 address length is 4.
Operation (OPER): 16 bits
Specifies the operation that the sender is performing: 1 for request, 2 for reply.
Sender Hardware Address (SHA): 48 bits
Media address of the sender. In an ARP request this field is used to indicate the address of the host sending the request. In an ARP reply this field is used to indicate the address of the host that the request was looking for.
Sender protocol address (SPA): 32 bits
Internetwork address of the sender.
Target hardware address (THA): 48 bits
Media address of the intended receiver. In an ARP request this field is ignored. In an ARP reply this field is used to indicate the address of the host that originated the ARP request.
Target protocol address (TPA): 32 bits
Internetwork address of the intended receiver.
ARP parameter values have been standardized and are maintained by the Internet Assigned Numbers Authority (IANA).
The EtherType for ARP is 0x0806. This appears in the Ethernet frame header when the payload is an ARP packet and is not to be confused with PTYPE, which appears within this encapsulated ARP packet.
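The 28-byte layout described above can be assembled directly. The following sketch packs an ARP request for IPv4 over Ethernet, with the header fields taken from the table (HTYPE 1, PTYPE 0x0800, HLEN 6, PLEN 4, OPER 1); the addresses used are illustrative:

```python
import struct

def build_arp_request(sha, spa, tpa):
    """Build a 28-byte ARP request. sha is 6 bytes; spa and tpa are 4 bytes each."""
    # HTYPE=1 (Ethernet), PTYPE=0x0800 (IPv4), HLEN=6, PLEN=4, OPER=1 (request)
    header = struct.pack('!HHBBH', 1, 0x0800, 6, 4, 1)
    tha = b'\x00' * 6  # target hardware address is ignored in a request
    return header + sha + spa + tha + tpa

pkt = build_arp_request(b'\xaa\xbb\xcc\xdd\xee\xff',   # sender MAC (illustrative)
                        bytes([192, 168, 0, 1]),       # sender IPv4
                        bytes([192, 168, 0, 55]))      # target IPv4 being resolved
```

On the wire, this payload would sit inside an Ethernet frame whose EtherType is 0x0806, as noted above.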
== Layering ==
ARP's placement within the Internet protocol suite and the OSI model may be a matter of confusion or even of dispute. RFC 826 places it into the Link Layer and characterizes it as a tool to inquire about the "higher level layer", such as the Internet layer. RFC 1122 also discusses ARP in its link layer section.
Richard Stevens places ARP in OSI's data link layer, while newer editions associate it with the network layer or introduce an intermediate OSI layer 2.5.
== Example ==
Two computers, A and B, are connected to the same local area network with no intervening gateway or router. A has a packet to send to IP address 192.168.0.55 which happens to be the address of B.
Before sending the packet to B, A broadcasts an ARP request message – addressed with the broadcast MAC address FF:FF:FF:FF:FF:FF and requesting a response from the node with IP address 192.168.0.55. All nodes of the network receive the message, but only B replies since it has the requested IP address. B responds with an ARP response message containing its MAC address, which A receives. A then sends the data packet on the link addressed with B's MAC address.
Typically, network nodes maintain a lookup cache that associates IP and MAC addresses. In this example, if A had the lookup cached, then it would not need to broadcast the ARP request. Also, when B received the request, it could cache the lookup to A, so that if B needs to send a packet to A later, it does not need to use ARP to look up its MAC address. Finally, when A receives the ARP response, it can cache the lookup for future messages addressed to the same IP address.
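The caching behaviour in this example can be sketched as a dictionary keyed by IP address. This is a toy model, not an operating-system implementation (real caches also expire entries and update on received requests):

```python
class ArpCache:
    def __init__(self):
        self.table = {}  # IP address -> MAC address

    def resolve(self, ip, broadcast_request):
        """Return the MAC for ip, broadcasting an ARP request only on a cache miss."""
        if ip not in self.table:
            self.table[ip] = broadcast_request(ip)
        return self.table[ip]

# Node A resolving B's address; the lambda stands in for the actual broadcast.
cache = ArpCache()
mac = cache.resolve("192.168.0.55", lambda ip: "00:eb:24:b2:05:ac")
# A second resolve for the same IP is answered from the cache, with no broadcast.
```

The second call to `resolve` never invokes the broadcast callback, which is exactly the saving the cache provides.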
== ARP probe ==
An ARP probe in IPv4 is an ARP request constructed with the SHA of the probing host, an SPA of all 0s, a THA of all 0s, and a TPA set to the IPv4 address being probed for. If some host on the network regards the IPv4 address (in the TPA) as its own, it will reply to the probe (via the SHA of the probing host) thus informing the probing host of the address conflict. If instead there is no host which regards the IPv4 address as its own, then there will be no reply. When several such probes have been sent, with slight delays, and none receive replies, it can reasonably be expected that no conflict exists. As the original probe packet contains neither a valid SHA/SPA nor a valid THA/TPA pair, there is no risk of any host using the packet to update its cache with problematic data. Before beginning to use an IPv4 address (whether received from manual configuration, DHCP, or some other means), a host implementing this specification must test to see if the address is already in use, by broadcasting ARP probe packets.
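Following the field settings above, a probe differs from an ordinary request only in its address fields: SPA and THA are all zeros, and TPA carries the candidate address. A minimal sketch (the probing MAC shown is illustrative):

```python
import struct

def build_arp_probe(probing_mac, candidate_ip):
    """ARP probe: an ARP request with SPA all zeros, so no cache can be polluted."""
    header = struct.pack('!HHBBH', 1, 0x0800, 6, 4, 1)  # an ordinary request (OPER=1)
    spa = b'\x00' * 4   # all-zero sender protocol address
    tha = b'\x00' * 6   # all-zero target hardware address
    return header + probing_mac + spa + tha + candidate_ip
```

Because neither SHA/SPA nor THA/TPA forms a valid pair, receiving hosts cannot use the probe to update their caches, matching the property described above.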
== ARP announcements ==
ARP may also be used as a simple announcement protocol. This is useful for updating other hosts' mappings of a hardware address when the sender's IP address or MAC address changes. Such an announcement, also called a gratuitous ARP (GARP) message, is usually broadcast as an ARP request containing the SPA in the target field (TPA=SPA), with THA set to zero. An alternative way is to broadcast an ARP reply with the sender's SHA and SPA duplicated in the target fields (TPA=SPA, THA=SHA).
The ARP request and ARP reply announcements are both standards-based methods,: §4.6 but the ARP request method is preferred.: §3 Some devices may be configured for the use of either of these two types of announcements.
An ARP announcement is not intended to solicit a reply; instead, it updates any cached entries in the ARP tables of other hosts that receive the packet. The operation code in the announcement may be either request or reply; the ARP standard specifies that the opcode is only processed after the ARP table has been updated from the address fields.: §4.6 : §4.4.1
Many operating systems issue an ARP announcement during startup. This helps to resolve problems that would otherwise occur if, for example, a network card was recently changed (changing the IP-address-to-MAC-address mapping) and other hosts still have the old mapping in their ARP caches.
ARP announcements are also used by some network interfaces to provide load balancing for incoming traffic. In a team of network cards, it is used to announce a different MAC address within the team that should receive incoming packets.
ARP announcements can be used in the Zeroconf protocol to allow automatic assignment of a link-local address to an interface where no other IP address configuration is available. The announcements are used to ensure an address chosen by a host is not in use by other hosts on the network link.
This function can be dangerous from a cybersecurity viewpoint: an attacker can make the other hosts on its subnet save in their ARP caches (ARP spoofing) an entry in which the attacker's MAC is associated, for instance, with the IP of the default gateway, thus allowing the attacker to intercept all traffic to external networks.
== ARP mediation ==
ARP mediation refers to the process of resolving Layer-2 addresses through a virtual private wire service (VPWS) when different resolution protocols are used on the connected circuits, e.g., Ethernet on one end and Frame Relay on the other. In IPv4, each provider edge (PE) device discovers the IP address of the locally attached customer edge (CE) device and distributes that IP address to the corresponding remote PE device. Then each PE device responds to local ARP requests using the IP address of the remote CE device and the hardware address of the local PE device. In IPv6, each PE device discovers the IP address of both local and remote CE devices and then intercepts local Neighbor Discovery (ND) and Inverse Neighbor Discovery (IND) packets and forwards them to the remote PE device.
== Inverse ARP and Reverse ARP ==
Inverse Address Resolution Protocol (Inverse ARP or InARP) is used to obtain network layer addresses (for example, IP addresses) of other nodes from data link layer (Layer 2) addresses. Since ARP translates layer-3 addresses to layer-2 addresses, InARP may be described as its inverse. In addition, InARP is implemented as a protocol extension to ARP: it uses the same packet format as ARP, but different operation codes.
InARP is primarily used in Frame Relay (DLCI) and ATM networks, in which layer-2 addresses of virtual circuits are sometimes obtained from layer-2 signaling, and the corresponding layer-3 addresses must be available before those virtual circuits can be used.
The Reverse Address Resolution Protocol (Reverse ARP or RARP), like InARP, translates layer-2 addresses to layer-3 addresses. However, in InARP the requesting station queries the layer-3 address of another node, whereas RARP is used to obtain the layer-3 address of the requesting station itself for address configuration purposes. RARP is obsolete; it was replaced by BOOTP, which was later superseded by the Dynamic Host Configuration Protocol (DHCP).
== ARP spoofing and proxy ARP ==
Because ARP does not provide methods for authenticating ARP replies on a network, ARP replies can come from systems other than the one with the required Layer 2 address. An ARP proxy is a system that answers the ARP request on behalf of another system for which it will forward traffic, normally as a part of the network's design, such as for a dialup internet service. By contrast, in ARP spoofing the answering system, or spoofer, replies to a request for another system's address with the aim of intercepting data bound for that system. A malicious user may use ARP spoofing to perform a man-in-the-middle or denial-of-service attack on other users on the network. Various software exists to both detect and perform ARP spoofing attacks, though ARP itself does not provide any methods of protection from such attacks.
== Alternatives ==
IPv6 uses the Neighbor Discovery Protocol and its extensions such as Secure Neighbor Discovery, rather than ARP.
Computers can maintain lists of known addresses, rather than using an active protocol. In this model, each computer maintains a database of the mapping of Layer 3 addresses (e.g., IP addresses) to Layer 2 addresses (e.g., Ethernet MAC addresses). This data is maintained primarily by interpreting ARP packets from the local network link. Thus, it is often called the ARP cache. Since at least the 1980s, networked computers have a utility called arp for interrogating or manipulating this database.
Historically, other methods were used to maintain the mapping between addresses, such as static configuration files, or centrally maintained lists.
== ARP stuffing ==
Embedded systems such as networked cameras and networked power distribution devices, which lack a user interface, can use so-called ARP stuffing to make an initial network connection, although this is a misnomer, as ARP is not involved.
ARP stuffing is accomplished as follows:
The user's computer has an IP address stuffed manually into its address table (normally with the arp command with the MAC address taken from a label on the device)
The computer sends special packets to the device, typically a ping packet with a non-default size.
The device then adopts this IP address
The user then communicates with it by telnet or web protocols to complete the configuration.
Such devices typically have a method to disable this process once the device is operating normally, as the capability can make it vulnerable to attack.
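The steps above can be sketched as the pair of commands a user would run. The Linux command syntax (`arp -s`, `ping -s`) is standard, but the specific MAC address, IP address, and payload size here are illustrative values of the kind printed on a device label or given in its manual:

```python
def arp_stuffing_commands(ip, mac, payload_size=408):
    """Return the two ARP-stuffing steps as argv lists (Linux command syntax)."""
    return [
        ["arp", "-s", ip, mac],                            # step 1: static ARP entry
        ["ping", "-c", "4", "-s", str(payload_size), ip],  # step 2: non-default-size ping
    ]
```

Running these (with appropriate privileges) would let the device observe traffic addressed to its MAC, adopt the IP address, and become reachable for telnet- or web-based configuration.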
== Standards documents ==
RFC 826 – An Ethernet Address Resolution Protocol, Internet Standard 37.
RFC 903 – A Reverse Address Resolution Protocol, Internet Standard 38.
RFC 2390 – Inverse Address Resolution Protocol, Draft Standard.
RFC 5227 – IPv4 Address Conflict Detection, Proposed Standard.
== See also ==
Arping – Software utility for discovering and probing hosts on a computer network
Arptables – Network administrator's tool
Arpwatch – Computer networking software tool
Bonjour Sleep Proxy – Open source component of zero configuration networking
Cisco HDLC – Extension to the High-Level Data Link Control (HDLC) network protocol
== References ==
== External links ==
"ARP Sequence Diagram (pdf)" (PDF). Archived from the original (PDF) on 2021-03-01.
Gratuitous ARP
Information and sample capture from Wireshark
ARP-SK ARP traffic generation tools
An application-level gateway (ALG, also known as application-layer gateway, application gateway, application proxy, or application-level proxy) is a security component that augments a firewall or NAT employed in a computer network. It allows customized NAT traversal filters to be plugged into the gateway to support address and port translation for certain application layer "control/data" protocols such as FTP, BitTorrent, SIP, RTSP, and file transfer in IM applications. In order for these protocols to work through NAT or a firewall, either the application has to know about an address/port number combination that allows incoming packets, or the NAT has to monitor the control traffic and open up port mappings (firewall pinholes) dynamically as required. Legitimate application data can thus be passed through the security checks of the firewall or NAT that would have otherwise restricted the traffic for not meeting its limited filter criteria.
== Functions ==
An ALG may offer the following functions:
allowing client applications to use dynamic ephemeral TCP/UDP ports to communicate with the known ports used by the server applications, even though a firewall configuration may allow only a limited number of known ports. In the absence of an ALG, either the ports would get blocked or the network administrator would need to explicitly open up a large number of ports in the firewall — rendering the network vulnerable to attacks on those ports.
converting the network layer address information found inside an application payload between the addresses acceptable by the hosts on either side of the firewall/NAT. This aspect introduces the term 'gateway' for an ALG.
recognizing application-specific commands and offering granular security controls over them
synchronizing between multiple streams/sessions of data between two hosts exchanging data. For example, an FTP application may use separate connections for passing control commands and for exchanging data between the client and a remote server. During large file transfers, the control connection may remain idle. An ALG can prevent the control connection getting timed out by network devices before the lengthy file transfer completes.
Deep packet inspection of all the packets handled by ALGs over a given network makes this functionality possible. An ALG understands the protocol used by the specific applications that it supports.
For instance, for Session Initiation Protocol (SIP) Back-to-Back User agent (B2BUA), an ALG can allow firewall traversal with SIP. If the firewall has its SIP traffic terminated on an ALG then the responsibility for permitting SIP sessions passes to the ALG instead of the firewall. An ALG can solve another major SIP headache: NAT traversal. Basically a NAT with a built-in ALG can rewrite information within the SIP messages and can hold address bindings until the session terminates. A SIP ALG will also handle SDP in the body of SIP messages (which is used ubiquitously in VoIP to set up media endpoints), since SDP also contains literal IP addresses and ports that must be translated.
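The SDP rewriting a SIP ALG performs can be illustrated with a toy transformation of the `c=` (connection) and `o=` (origin) lines. Real ALGs operate on complete SIP messages and also patch ports and Via/Contact headers, so this sketch only shows the core address substitution; the addresses are illustrative:

```python
def rewrite_sdp_address(sdp, private_ip, public_ip):
    """Replace a literal private address in SDP c= and o= lines with the public one."""
    out = []
    for line in sdp.splitlines():
        if line.startswith(("c=", "o=")) and private_ip in line:
            line = line.replace(private_ip, public_ip)
        out.append(line)
    return "\r\n".join(out)

sdp = ("v=0\r\n"
       "o=alice 2890844526 2890844526 IN IP4 10.0.0.5\r\n"
       "c=IN IP4 10.0.0.5\r\n"
       "m=audio 49170 RTP/AVP 0")
rewritten = rewrite_sdp_address(sdp, "10.0.0.5", "203.0.113.7")
```

A NAT with a built-in ALG applies such a rewrite while also holding the corresponding address and port bindings open until the session terminates.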
It is common for SIP ALG on some equipment to interfere with other technologies that try to solve the same problem, and various providers recommend turning it off.
An ALG is very similar to a proxy server, as it sits between the client and the real server, facilitating the exchange. By industry convention, an ALG does its job without the application being configured to use it, by intercepting the messages transparently. A proxy, on the other hand, usually needs to be configured in the client application. The client is then explicitly aware of the proxy and connects to it, rather than to the real server.
== Microsoft Windows ==
The Application Layer Gateway service in Microsoft Windows provides support for third-party plugins that allow network protocols to pass through the Windows Firewall and work behind it and Internet Connection Sharing. ALG plugins can open ports and change data that is embedded in packets, such as ports and IP addresses. Windows Server 2003 also includes an ALG FTP plugin. The ALG FTP plugin is designed to support active FTP sessions through the NAT engine in Windows. To do this, the ALG FTP plugin redirects all traffic that passes through the NAT and that is destined for port 21 (FTP control port) to a private listening port in the 3000–5000 range on the Microsoft loopback adapter. The ALG FTP plugin then monitors/updates traffic on the FTP control channel so that the FTP plugin can plumb port mappings through the NAT for the FTP data channels.
== Linux ==
The Linux kernel's Netfilter framework, which implements NAT in Linux, has features and modules for several NAT ALGs:
Amanda protocol
FTP
IRC
SIP
TFTP
IPsec
H.323
PPTP
L2TP
== See also ==
Session border controller
== References ==
== External links ==
DNS Application Level Gateway (DNS_ALG)
The Bianconi–Barabási model is a model in network science that explains the growth of complex evolving networks. This model can explain that nodes with different characteristics acquire links at different rates. It predicts that a node's growth depends on its fitness and can calculate the degree distribution. The Bianconi–Barabási model is named after its inventors Ginestra Bianconi and Albert-László Barabási. This model is a variant of the Barabási–Albert model. The model can be mapped to a Bose gas and this mapping can predict a topological phase transition between a "rich-get-richer" phase and a "winner-takes-all" phase.
== Concepts ==
The Barabási–Albert (BA) model uses two concepts: growth and preferential attachment. Here, growth indicates the increase in the number of nodes in the network with time, and preferential attachment means that more connected nodes receive more links. The Bianconi–Barabási model, on top of these two concepts, uses another new concept called the fitness. This model makes use of an analogy with evolutionary models. It assigns an intrinsic fitness value to each node, which embodies all the properties other than the degree. The higher the fitness, the higher the probability of attracting new edges. Fitness can be defined as the ability to attract new links – "a quantitative measure of a node's ability to stay in front of the competition".
While the Barabási–Albert (BA) model explains the "first mover advantage" phenomenon, the Bianconi–Barabási model explains how latecomers also can win. In a network where fitness is an attribute, a node with higher fitness will acquire links at a higher rate than less fit nodes. This model explains that age is not the best predictor of a node's success, rather latecomers also have the chance to attract links to become a hub.
The Bianconi–Barabási model can reproduce the degree correlations of the Internet Autonomous Systems. This model can also show condensation phase transitions in the evolution of complex networks. The BB model can predict the topological properties of the Internet.
== Algorithm ==
The fitness network begins with a fixed number of interconnected nodes. They have different fitness, which can be described with a fitness parameter ηj chosen from a fitness distribution ρ(η).
=== Growth ===
The assumption here is that a node's fitness is independent of time and fixed. A new node j with m links and a fitness ηj is added at each time step.
=== Preferential attachment ===
The probability Πi that a new node connects one of its links to an existing node i in the network depends on the degree ki and on the fitness ηi of node i, such that

{\displaystyle \Pi _{i}={\frac {\eta _{i}k_{i}}{\sum _{j}\eta _{j}k_{j}}}.}
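The growth and preferential-attachment rules above can be turned into a short simulation. Below is a minimal Python sketch; the uniform fitness distribution, the network size, and m are illustrative choices, not part of the model's definition:

```python
import random

def bianconi_barabasi(n, m, seed=None):
    """Grow a Bianconi-Barabasi network to n nodes; each new node brings m links.

    Each node gets a fitness eta drawn uniformly from (0, 1) (an illustrative
    choice of rho(eta)); a new link attaches to node i with probability
    proportional to eta_i * k_i.
    """
    rng = random.Random(seed)
    # Start from a small clique of m+1 nodes so every node has degree m.
    eta = [rng.random() for _ in range(m + 1)]
    degree = [m] * (m + 1)
    for new in range(m + 1, n):
        eta.append(rng.random())
        weights = [eta[i] * degree[i] for i in range(new)]
        targets = set()
        while len(targets) < m:  # m distinct neighbours per new node
            targets.add(rng.choices(range(new), weights=weights)[0])
        degree.append(m)
        for t in targets:
            degree[t] += 1
    return eta, degree

eta, degree = bianconi_barabasi(300, 2, seed=42)
```

Fitter nodes accumulate links faster than older but less fit ones, which is the "latecomers can win" behaviour described above.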
Each node's evolution with time can be predicted using continuum theory. If each new node brings m links, then the degree of node i changes at the rate:

{\displaystyle {\frac {\partial k_{i}}{\partial t}}=m{\frac {\eta _{i}k_{i}}{\sum _{j}\eta _{j}k_{j}}}}
Assuming the evolution of ki follows a power law with a fitness exponent β(ηi),

{\displaystyle k(t,t_{i},\eta _{i})=m\left({\frac {t}{t_{i}}}\right)^{\beta (\eta _{i})},}

where ti is the time at which node i entered the network.
Here,

{\displaystyle \beta (\eta )={\frac {\eta }{C}}{\text{ and }}C=\int \rho (\eta ){\frac {\eta }{1-\beta (\eta )}}\,d\eta .}
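For a concrete case, take ρ(η) uniform on [0, 1]. Substituting β(η) = η/C into the self-consistency condition gives 1 = ∫₀¹ η/(C − η) dη, i.e. C ln(C/(C − 1)) = 2, which can be solved numerically. A sketch (the bisection bracket is an assumption):

```python
import math

def self_consistency(C):
    # For rho(eta) uniform on [0, 1]:  integral_0^1 eta/(C - eta) d(eta) - 1
    # evaluates in closed form to  C*ln(C/(C-1)) - 2.
    return C * math.log(C / (C - 1.0)) - 2.0

# Bisection: self_consistency > 0 just above C = 1 and < 0 for large C.
lo, hi = 1.0001, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if self_consistency(mid) > 0:
        lo = mid
    else:
        hi = mid
C_star = 0.5 * (lo + hi)
print(C_star)  # ~1.255, so beta(eta) = eta/C* stays below 1 for all eta in [0, 1]
```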
== Properties ==
=== Equal fitnesses ===
If all fitnesses are equal in a fitness network, the Bianconi–Barabási model reduces to the Barabási–Albert model; when the degree is not considered, the model reduces to the fitness model (network theory).
When fitnesses are equal, the probability Πi that the new node is connected to node i with degree ki is

{\displaystyle \Pi _{i}={\frac {k_{i}}{\sum _{j}k_{j}}}.}
=== Degree distribution ===
The degree distribution of the Bianconi–Barabási model depends on the fitness distribution ρ(η). Two scenarios can occur, depending on the probability distribution. If the fitness distribution has a finite domain, then the degree distribution will have a power law, just like the BA model. In the second case, if the fitness distribution has an infinite domain, then the node with the highest fitness value will attract a large number of nodes and show a winner-takes-all scenario.
=== Measuring node fitnesses from empirical network data ===
There are various statistical methods to measure node fitnesses ηi in the Bianconi–Barabási model from real-world network data. From the measurement, one can investigate the fitness distribution ρ(η) or compare the Bianconi–Barabási model with various competing network models in that particular network.
=== Variations of the Bianconi–Barabási model ===
The Bianconi–Barabási model has been extended to weighted networks displaying linear and superlinear scaling of the strength with the degree of the nodes as observed in real network data. This weighted model can lead to condensation of the weights of the network when few links acquire a finite fraction of the weight of the entire network.
Recently it has been shown that the Bianconi–Barabási model can be interpreted as a limit case of the model for emergent hyperbolic network geometry called Network Geometry with Flavor.
The Bianconi–Barabási model can be also modified to study static networks where the number of nodes is fixed.
== Bose–Einstein condensation ==
Bose–Einstein condensation in networks is a phase transition observed in complex networks that can be described by the Bianconi–Barabási model. This phase transition predicts a "winner-takes-all" phenomenon in complex networks and can be mathematically mapped to the mathematical model explaining Bose–Einstein condensation in physics.
=== Background ===
In physics, a Bose–Einstein condensate is a state of matter that occurs in certain gases at very low temperatures. Any elementary particle, atom, or molecule, can be classified as one of two types: a boson or a fermion. For example, an electron is a fermion, while a photon or a helium atom is a boson. In quantum mechanics, the energy of a (bound) particle is limited to a set of discrete values, called energy levels. An important characteristic of a fermion is that it obeys the Pauli exclusion principle, which states that no two fermions may occupy the same state. Bosons, on the other hand, do not obey the exclusion principle, and any number can exist in the same state. As a result, at very low energies (or temperatures), a great majority of the bosons in a Bose gas can be crowded into the lowest energy state, creating a Bose–Einstein condensate.
Bose and Einstein have established that the statistical properties of a Bose gas are governed by the Bose–Einstein statistics. In Bose–Einstein statistics, any number of identical bosons can be in the same state. In particular, given an energy state ε, the number of non-interacting bosons in thermal equilibrium at temperature T = 1/β is given by the Bose occupation number
{\displaystyle n(\varepsilon )={\frac {1}{e^{\beta (\varepsilon -\mu )}-1}}}
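As a quick numerical check of the occupation number (the parameter values below are arbitrary illustrations):

```python
import math

def bose_occupation(eps, mu, beta):
    """Expected number of bosons in a state of energy eps (requires eps > mu)."""
    return 1.0 / (math.exp(beta * (eps - mu)) - 1.0)

# At beta = 1 and mu = 0, a state with eps = 1 holds 1/(e - 1) particles on average.
print(bose_occupation(1.0, 0.0, 1.0))  # ≈ 0.582
```

Lower-energy states are more heavily occupied, which is what drives condensation into the ground state at low temperature.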
where the constant μ is determined by an equation describing the conservation of the number of particles
{\displaystyle N=\int d\varepsilon \,g(\varepsilon )\,n(\varepsilon )}
with g(ε) being the density of states of the system.
This last equation may lack a solution at low enough temperatures when g(ε) → 0 for ε → 0. In this case a critical temperature Tc is found such that for T < Tc the system is in a Bose-Einstein condensed phase and a finite fraction of the bosons are in the ground state.
The density of states g(ε) depends on the dimensionality of the space. In particular
{\displaystyle g(\varepsilon )\sim \varepsilon ^{\frac {d-2}{2}}}
therefore g(ε) → 0 for ε → 0 only in dimensions d > 2. Therefore, a Bose-Einstein condensation of an ideal Bose gas can only occur for dimensions d > 2.
=== The concept ===
The evolution of many complex systems, including the World Wide Web, business, and citation networks, is encoded in the dynamic web describing the interactions between the system's constituents. The evolution of these networks is captured by the Bianconi–Barabási model, which includes two main characteristics of growing networks: their constant growth by the addition of new nodes and links, and the heterogeneous ability of each node to acquire new links, described by the node fitness. Therefore, the model is also known as the fitness model.
Despite their irreversible and nonequilibrium nature, these networks follow the Bose statistics and can be mapped to a Bose gas.
In this mapping, each node is mapped to an energy state determined by its fitness and each new link attached to a given node is mapped to a Bose particle occupying the corresponding energy state. This mapping predicts that the Bianconi–Barabási model can undergo a topological phase transition in correspondence to the Bose–Einstein condensation of the Bose gas. This phase transition is therefore called Bose-Einstein condensation in complex networks.
Consequently, addressing the dynamical properties of these nonequilibrium systems within the framework of equilibrium quantum gases predicts that the "first-mover-advantage," "fit-get-rich (FGR)," and "winner-takes-all" phenomena observed in competitive systems are thermodynamically distinct phases of the underlying evolving networks.
=== The mathematical mapping of the network evolution to the Bose gas ===
Starting from the Bianconi-Barabási model, the mapping of a Bose gas to a network can be done by assigning an energy εi to each node, determined by its fitness through the relation
{\displaystyle \varepsilon _{i}=-{\frac {1}{\beta }}\ln {\eta _{i}}}
where β = 1/T. In particular, when β = 0 all the nodes have equal fitness, while for β ≫ 1 nodes with different "energy" have very different fitnesses. We assume that the network evolves through a modified preferential attachment mechanism. At each time step a new node i with energy εi drawn from a probability distribution p(ε) enters the network and attaches a new link to a node j chosen with probability:
{\displaystyle \Pi _{j}={\frac {e^{-\beta \varepsilon _{j}}k_{j}}{\sum _{r}e^{-\beta \varepsilon _{r}}k_{r}}}.}
In the mapping to a Bose gas, we assign to every new link linked by preferential attachment to node j a particle in the energy state εj.
The continuum theory predicts that the rate at which links accumulate on node i with "energy" εi is given by
{\displaystyle {\frac {\partial k_{i}(\varepsilon _{i},t,t_{i})}{\partial t}}=m{\frac {e^{-\beta \varepsilon _{i}}k_{i}(\varepsilon _{i},t,t_{i})}{Z_{t}}}}
where ki(εi, t, ti) indicates the number of links attached to node i, which was added to the network at time step ti, and Zt is the partition function, defined as:
{\displaystyle Z_{t}=\sum _{i}e^{-\beta \varepsilon _{i}}k_{i}(\varepsilon _{i},t,t_{i}).}
The solution of this differential equation is:
{\displaystyle k_{i}(\varepsilon _{i},t,t_{i})=m\left({\frac {t}{t_{i}}}\right)^{f(\varepsilon _{i})}}
where the dynamic exponent f(ε) satisfies

{\displaystyle f(\varepsilon )=e^{-\beta (\varepsilon -\mu )},}

and μ plays the role of the chemical potential, satisfying the equation

{\displaystyle \int d\varepsilon \,p(\varepsilon ){\frac {1}{e^{\beta (\varepsilon -\mu )}-1}}=1}
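To see when the chemical-potential equation loses its solution, one can evaluate its left-hand side at μ = 0 (its maximal value) for a concrete distribution. The sketch below assumes p(ε) = 2ε on [0, 1], an illustrative density satisfying p(ε) → 0 for ε → 0; the brackets for the critical β were found by inspection and are not a general result:

```python
import math

def max_occupancy(beta, n=20_000):
    """Midpoint-rule integral of p(eps)/(exp(beta*eps) - 1) for p(eps) = 2*eps
    on [0, 1], i.e. the chemical-potential equation's left-hand side at mu = 0.
    The integrand tends to 2/beta as eps -> 0, so the singularity is removable."""
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        eps = (k + 0.5) * h
        total += 2.0 * eps / (math.exp(beta * eps) - 1.0) * h
    return total

# Condensation sets in when even mu = 0 cannot account for all the links:
# max_occupancy(beta) < 1.  Bisect for the critical beta in an assumed bracket.
beta_lo, beta_hi = 1.0, 2.0
for _ in range(40):
    mid = 0.5 * (beta_lo + beta_hi)
    if max_occupancy(mid) > 1.0:
        beta_lo = mid      # still in the fit-get-rich phase
    else:
        beta_hi = mid      # condensed phase
beta_c = 0.5 * (beta_lo + beta_hi)
```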
where p(ε) is the probability that a node has "energy" ε and "fitness" η = e−βε. In the limit, t → ∞, the occupation number, giving the number of links linked to nodes with "energy" ε, follows the familiar Bose statistics
{\displaystyle n(\varepsilon )={\frac {1}{e^{\beta (\varepsilon -\mu )}-1}}.}
The definition of the constant μ in the network models is surprisingly similar to the definition of the chemical potential in a Bose gas. In particular, for probabilities p(ε) such that p(ε) → 0 for ε → 0, at high enough values of β we have a condensation phase transition in the network model. When this occurs, one node, the one with the highest fitness, acquires a finite fraction of all the links. The Bose–Einstein condensation in complex networks is, therefore, a topological phase transition after which the network has a star-like dominant structure.
=== Bose–Einstein phase transition in complex networks ===
The mapping of a Bose gas predicts the existence of two distinct phases as a function of the energy distribution. In the fit-get-rich phase, describing the case of uniform fitness, the fitter nodes acquire edges at a higher rate than older but less fit nodes. In the end the fittest node will have the most edges, but the richest node is not the absolute winner, since its share of the edges (i.e. the ratio of its edges to the total number of edges in the system) reduces to zero in the limit of large system sizes (Fig.2(b)). The unexpected outcome of this mapping is the possibility of Bose–Einstein condensation for T < TBE, when the fittest node acquires a finite fraction of the edges and maintains this share of edges over time (Fig.2(c)).
A representative fitness distribution ρ(η) that leads to condensation is given by

{\displaystyle \rho (\eta )=(\lambda +1)(1-\eta )^{\lambda },}

where λ = 1.
However, the existence of the Bose–Einstein condensation or of the fit-get-rich phase does not depend on the temperature or β of the system; it depends only on the functional form of the fitness distribution ρ(η) of the system. In the end, β falls out of all topologically important quantities. In fact, it can be shown that Bose–Einstein condensation exists in the fitness model even without the mapping to a Bose gas. A similar gelation can be seen in models with superlinear preferential attachment; however, it is not clear whether this is an accident or whether a deeper connection lies between this and the fitness model.
== See also ==
Barabási–Albert model
== References ==
== External links ==
Networks: A Very Short Introduction
Advance Network Dynamics | Wikipedia/Bianconi–Barabási_model |
Mathematical models can project how infectious diseases progress to show the likely outcome of an epidemic (including in plants) and help inform public health and plant health interventions. Models use basic assumptions or collected statistics along with mathematics to find parameters for various infectious diseases and use those parameters to calculate the effects of different interventions, like mass vaccination programs. The modelling can help decide which intervention(s) to avoid and which to trial, or can predict future growth patterns, etc.
== History ==
The modelling of infectious diseases is a tool that has been used to study the mechanisms by which diseases spread, to predict the future course of an outbreak and to evaluate strategies to control an epidemic.
The first scientist who systematically tried to quantify causes of death was John Graunt in his book Natural and Political Observations made upon the Bills of Mortality, in 1662. The bills he studied were listings of numbers and causes of deaths published weekly. Graunt's analysis of causes of death is considered the beginning of the "theory of competing risks" which according to Daley and Gani is "a theory that is now well established among modern epidemiologists".
The earliest account of mathematical modelling of spread of disease was carried out in 1760 by Daniel Bernoulli. Trained as a physician, Bernoulli created a mathematical model to defend the practice of inoculating against smallpox. The calculations from this model showed that universal inoculation against smallpox would increase the life expectancy from 26 years 7 months to 29 years 9 months. Daniel Bernoulli's work preceded the modern understanding of germ theory.
In the early 20th century, William Hamer and Ronald Ross applied the law of mass action to explain epidemic behaviour.
The 1920s saw the emergence of compartmental models. The Kermack–McKendrick epidemic model (1927) and the Reed–Frost epidemic model (1928) both describe the relationship between susceptible, infected and immune individuals in a population. The Kermack–McKendrick epidemic model was successful in predicting the behavior of outbreaks very similar to that observed in many recorded epidemics.
Recently, agent-based models (ABMs) have been used in place of simpler compartmental models. For example, epidemiological ABMs have been used to inform public health (nonpharmaceutical) interventions against the spread of SARS-CoV-2. Despite their complexity and the high computational power they require, epidemiological ABMs have been criticized for simplifying and unrealistic assumptions. Still, they can be useful in informing decisions regarding mitigation and suppression measures in cases when ABMs are accurately calibrated.
== Assumptions ==
Models are only as good as the assumptions on which they are based. If a model makes predictions that are out of line with observed results and the mathematics is correct, the initial assumptions must change to make the model useful.
Rectangular and stationary age distribution, i.e., everybody in the population lives to age L and then dies, and for each age (up to L) there is the same number of people in the population. This is often well-justified for developed countries where there is a low infant mortality and much of the population lives to the life expectancy.
Homogeneous mixing of the population, i.e., individuals of the population under scrutiny assort and make contact at random and do not mix mostly in a smaller subgroup. This assumption is rarely justified because social structure is widespread. For example, most people in London only make contact with other Londoners. Further, within London then there are smaller subgroups, such as the Turkish community or teenagers (just to give two examples), who mix with each other more than people outside their group. However, homogeneous mixing is a standard assumption to make the mathematics tractable.
== Types of epidemic models ==
=== Stochastic ===
"Stochastic" means being or having a random variable. A stochastic model is a tool for estimating probability distributions of potential outcomes by allowing for random variation in one or more inputs over time. Stochastic models depend on the chance variations in risk of exposure, disease and other illness dynamics. Statistical agent-level disease dissemination in small or large populations can be determined by stochastic methods.
=== Deterministic ===
When dealing with large populations, as in the case of tuberculosis, deterministic or compartmental mathematical models are often used. In a deterministic model, individuals in the population are assigned to different subgroups or compartments, each representing a specific stage of the epidemic.
The transition rates from one class to another are mathematically expressed as derivatives, hence the model is formulated using differential equations. While building such models, it must be assumed that the population size in a compartment is differentiable with respect to time and that the epidemic process is deterministic. In other words, the changes in population of a compartment can be calculated using only the history that was used to develop the model.
=== Kinetic and mean-field ===
Formally, these models belong to the class of deterministic models; however, they incorporate heterogeneous social features into the dynamics, such as individuals' levels of sociality, opinion, wealth, geographic location, which profoundly influence disease propagation. These models are typically represented by partial differential equations, in contrast to classical models described as systems of ordinary differential equations. Following the derivation principles of kinetic theory, they provide a more rigorous description of epidemic dynamics by starting from agent-based interactions.
== Sub-exponential growth ==
A common explanation for the growth of epidemics holds that 1 person infects 2, those 2 infect 4, and so on, with the number of infected doubling every generation.
It is analogous to a game of tag where 1 person tags 2, those 2 tag 4 others who've never been tagged, and so on. As this game progresses it becomes increasingly frenetic as the tagged run past the previously tagged to hunt down those who have never been tagged.
Thus this model of an epidemic leads to a curve that grows exponentially until it crashes to zero as the whole population has been infected: that is, with no herd immunity and none of the peak and gradual decline seen in reality.
== Epidemic models on networks ==
Epidemics can be modeled as diseases spreading over networks of contact between people. Such a network can be represented mathematically with a graph and is called the contact network. Every node in a contact network is a representation of an individual and each link (edge) between a pair of nodes represents the contact between them. Links in the contact network may be used to transmit the disease between individuals, and each disease has its own dynamics on top of its contact network. The combination of disease dynamics under the influence of interventions, if any, on a contact network may be modeled with another network, known as a transmission network. In a transmission network, all the links are responsible for transmitting the disease. If such a network is locally tree-like, meaning that any local neighborhood in it takes the form of a tree, then the basic reproduction number can be written in terms of the average excess degree of the transmission network such that:
{\displaystyle R_{0}={\frac {\langle k^{2}\rangle }{\langle k\rangle }}-1,}
where ⟨k⟩ is the mean degree (average degree) of the network and ⟨k²⟩ is the second moment of the transmission network degree distribution. It is, however, not always straightforward to find the transmission network out of the contact network and the disease dynamics. For example, if a contact network can be approximated with an Erdős–Rényi graph with a Poissonian degree distribution, and the disease spreading parameters are as defined in the example above, such that
β is the transmission rate per person and the disease has a mean infectious period of 1/γ, then the basic reproduction number is

{\displaystyle R_{0}={\dfrac {\beta }{\gamma }}{\langle k\rangle }}

since ⟨k²⟩ − ⟨k⟩² = ⟨k⟩ for a Poisson distribution.
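The excess-degree formula for R0 can be evaluated directly from any degree sequence. A minimal sketch (the example degree lists are arbitrary illustrations):

```python
def r0_from_degrees(degrees):
    """Basic reproduction number on a locally tree-like transmission network:
    R0 = <k^2>/<k> - 1 (the mean excess degree)."""
    n = len(degrees)
    k1 = sum(degrees) / n                   # <k>
    k2 = sum(d * d for d in degrees) / n    # <k^2>
    return k2 / k1 - 1.0

# Degree heterogeneity raises R0: same mean degree, different variance.
print(r0_from_degrees([2, 2, 2, 2]))   # regular network: R0 = 1.0
print(r0_from_degrees([1, 1, 1, 5]))   # same <k> = 2, heavier tail: R0 = 2.5
```

This illustrates why hubs matter: holding the average number of contacts fixed, a broader degree distribution increases R0.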
== Reproduction number ==
The basic reproduction number (denoted by R0) is a measure of how transferable a disease is. It is the average number of people that a single infectious person will infect over the course of their infection. This quantity determines whether the infection will spread exponentially, die out, or remain constant: if R0 > 1, then each person on average infects more than one other person so the disease will spread; if R0 < 1, then each person infects fewer than one person on average so the disease will die out; and if R0 = 1, then each person will infect on average exactly one other person, so the disease will become endemic: it will move throughout the population but not increase or decrease.
== Endemic steady state ==
An infectious disease is said to be endemic when it can be sustained in a population without the need for external inputs. This means that, on average, each infected person is infecting exactly one other person (any more and the number of people infected will grow exponentially and there will be an epidemic, any less and the disease will die out). In mathematical terms, that is:
{\displaystyle R_{0}S=1.}
The basic reproduction number (R0) of the disease, assuming everyone is susceptible, multiplied by the proportion of the population that is actually susceptible (S) must be one (since those who are not susceptible do not feature in our calculations as they cannot contract the disease). Notice that this relation means that for a disease to be in the endemic steady state, the higher the basic reproduction number, the lower the proportion of the population susceptible must be, and vice versa. This expression has limitations concerning the susceptibility proportion: for example, R0 = 0.5 would imply S = 2, but this proportion exceeds the population size.
Assume the rectangular stationary age distribution and let also the ages of infection have the same distribution for each birth year. Let the average age of infection be A, for instance when individuals younger than A are susceptible and those older than A are immune (or infectious). Then it can be shown by an easy argument that the proportion of the population that is susceptible is given by:
{\displaystyle S={\frac {A}{L}}.}
We reiterate that L is the age at which in this model every individual is assumed to die. But the mathematical definition of the endemic steady state can be rearranged to give:
{\displaystyle S={\frac {1}{R_{0}}}.}
Therefore, due to the transitive property:
{\displaystyle {\frac {1}{R_{0}}}={\frac {A}{L}}\Rightarrow R_{0}={\frac {L}{A}}.}
This provides a simple way to estimate the parameter R0 using easily available data.
For a population with an exponential age distribution,
{\displaystyle R_{0}=1+{\frac {L}{A}}.}
This allows one to estimate the basic reproduction number of a disease given A and L in either type of population distribution.
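Both estimates are one-line computations. A sketch with illustrative numbers (not measured values for any particular disease):

```python
def r0_rectangular(A, L):
    """Rectangular age distribution (everyone lives to exactly L): R0 = L / A."""
    return L / A

def r0_exponential(A, L):
    """Exponential age distribution: R0 = 1 + L / A."""
    return 1.0 + L / A

# e.g. life expectancy L = 70 years, average age of infection A = 7 years:
print(r0_rectangular(7, 70))   # 10.0
print(r0_exponential(7, 70))   # 11.0
```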
== Compartmental models in epidemiology ==
Compartmental models are formulated as Markov chains. A classic compartmental model in epidemiology is the SIR model, which may be used as a simple model for modelling epidemics. Multiple other types of compartmental models are also employed.
=== The SIR model ===
In 1927, W. O. Kermack and A. G. McKendrick created a model in which they considered a fixed population with only three compartments: susceptible, S(t); infected, I(t); and recovered, R(t). The compartments used for this model consist of three classes:

S(t) represents the individuals of the population susceptible to the disease.
I(t) denotes the individuals of the population who have been infected with the disease and are capable of spreading the disease to those in the susceptible category.
R(t) is the compartment used for the individuals of the population who have been infected and then removed from the disease, either due to immunization or due to death. Those in this category are not able to be infected again or to transmit the infection to others.
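The standard rate equations for these three compartments (with transmission rate β and recovery rate γ, not written out explicitly in this section) can be integrated numerically. A minimal forward-Euler sketch with illustrative parameter values:

```python
def sir(beta, gamma, s0, i0, r0=0.0, dt=0.01, t_max=100.0):
    """Integrate the standard SIR equations with forward Euler:
    dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I."""
    s, i, r = float(s0), float(i0), float(r0)
    n = s + i + r
    history = [(0.0, s, i, r)]
    t = 0.0
    while t < t_max:
        new_inf = beta * s * i / n * dt
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        t += dt
        history.append((t, s, i, r))
    return history

# R0 = beta/gamma = 3 > 1, so an epidemic peaks and then burns out.
hist = sir(beta=0.3, gamma=0.1, s0=999, i0=1)
```

The total population S + I + R stays constant, and with R0 = 3 the infected curve rises to a peak near 30% of the population before declining, the peak-and-decline shape absent from the naive doubling model above.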
=== Other compartmental models ===
There are many modifications of the SIR model, including those that include births and deaths, where upon recovery there is no immunity (SIS model), where immunity lasts only for a short period of time (SIRS), where there is a latent period of the disease where the person is not infectious (SEIS and SEIR), and where infants can be born with immunity (MSIR).
== Infectious disease dynamics ==
Mathematical models need to integrate the increasing volume of data being generated on host-pathogen interactions. Many theoretical studies of the population dynamics, structure and evolution of infectious diseases of plants and animals, including humans, are concerned with this problem.
Research topics include:
antigenic shift
epidemiological networks
evolution and spread of resistance
immuno-epidemiology
intra-host dynamics
Pandemic
pathogen population genetics
persistence of pathogens within hosts
phylodynamics
role and identification of infection reservoirs
role of host genetic factors
spatial epidemiology
statistical and mathematical tools and innovations
Strain (biology) structure and interactions
transmission, spread and control of infection
virulence
== Mathematics of mass vaccination ==
If the proportion of the population that is immune exceeds the herd immunity level for the disease, then the disease can no longer persist in the population and its transmission dies out. Thus, a disease can be eliminated from a population if enough individuals are immune due to either vaccination or recovery from prior exposure to disease. Examples include the eradication of smallpox, with the last wild case in 1977, and the certification of the eradication of indigenous transmission of 2 of the 3 types of wild poliovirus (type 2 in 2015, after the last reported case in 1999, and type 3 in 2019, after the last reported case in 2012).
The herd immunity level will be denoted q. Recall that, for a stable state:
{\displaystyle R_{0}\cdot S=1.}
In turn,
{\displaystyle R_{0}={\frac {N}{S}}={\frac {\mu N\operatorname {E} (T_{L})}{\mu N\operatorname {E} [\min(T_{L},T_{S})]}}={\frac {\operatorname {E} (T_{L})}{\operatorname {E} [\min(T_{L},T_{S})]}},}
which is approximately:
{\displaystyle {\frac {\operatorname {E} (T_{L})}{\operatorname {E} (T_{S})}}=1+{\frac {\lambda }{\mu }}={\frac {\beta N}{v}}.}
S will be (1 − q), since q is the proportion of the population that is immune and q + S must equal one (since in this simplified model, everyone is either susceptible or immune). Then:
{\displaystyle {\begin{aligned}&R_{0}\cdot (1-q)=1,\\[6pt]&1-q={\frac {1}{R_{0}}},\\[6pt]&q=1-{\frac {1}{R_{0}}}.\end{aligned}}}
Remember that this is the threshold level. Transmission will die out only if the proportion of immune individuals exceeds this level due to a mass vaccination programme.
We have just calculated the critical immunization threshold (denoted qc). It is the minimum proportion of the population that must be immunized at birth (or close to birth) in order for the infection to die out in the population.
{\displaystyle q_{c}=1-{\frac {1}{R_{0}}}.}
Because the fraction of the final size of the population p that is never infected can be defined as:
{\displaystyle \lim _{t\to \infty }S(t)=e^{-\int _{0}^{\infty }\lambda (t)\,dt}=1-p.}
Hence,
{\displaystyle p=1-e^{-\int _{0}^{\infty }\beta I(t)\,dt}=1-e^{-R_{0}p}.}
Solving for R0, we obtain:

{\displaystyle R_{0}={\frac {-\ln(1-p)}{p}}.}
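The final-size relation p = 1 − e^(−R0·p) has no closed-form solution for p, but for R0 > 1 a simple fixed-point iteration converges quickly. A sketch:

```python
import math

def final_size(R0, tol=1e-12):
    """Fraction of the population eventually infected, from p = 1 - exp(-R0*p)."""
    p = 0.5  # any starting guess in (0, 1] works for R0 > 1
    while True:
        p_next = 1.0 - math.exp(-R0 * p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next

p = final_size(2.0)
print(p)                        # ≈ 0.797 for R0 = 2
print(-math.log(1.0 - p) / p)   # recovers R0 ≈ 2 via the inverse relation above
```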
=== When mass vaccination cannot exceed the herd immunity ===
If the vaccine used is insufficiently effective or the required coverage cannot be reached, the program may fail to exceed qc. Such a program will protect vaccinated individuals from disease, but may change the dynamics of transmission.
Suppose that a proportion of the population q (where q < qc) is immunised at birth against an infection with R0 > 1. The vaccination programme changes R0 to Rq where
{\displaystyle R_{q}=R_{0}(1-q).}
This change occurs simply because there are now fewer susceptibles in the population who can be infected. Rq is simply R0 minus those that would normally be infected but that cannot be now since they are immune.
As a consequence of this lower basic reproduction number, the average age of infection A will also change to some new value Aq in those who have been left unvaccinated.
Recall the relation that linked R0, A and L. Assuming that life expectancy has not changed, now:
{\displaystyle R_{q}={\frac {L}{A_{q}}},\qquad A_{q}={\frac {L}{R_{q}}}={\frac {L}{R_{0}(1-q)}}.}
But R0 = L/A so:
{\displaystyle A_{q}={\frac {L}{(L/A)(1-q)}}={\frac {AL}{L(1-q)}}={\frac {A}{1-q}}.}
Thus, the vaccination program may raise the average age of infection, and unvaccinated individuals will experience a reduced force of infection due to the presence of the vaccinated group. For a disease that leads to greater clinical severity in older populations, the unvaccinated proportion of the population may experience the disease relatively later in life than would occur in the absence of vaccine.
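The rise in the average age of infection under partial coverage is a one-line computation. A sketch with illustrative numbers:

```python
def average_age_after_vaccination(A, q):
    """A_q = A / (1 - q): partial coverage q below the critical threshold
    raises the average age of infection among the unvaccinated."""
    if not 0.0 <= q < 1.0:
        raise ValueError("coverage q must be in [0, 1)")
    return A / (1.0 - q)

# e.g. pre-vaccination average age of infection 5 years, 50% coverage:
print(average_age_after_vaccination(5.0, 0.5))   # 10.0
```

For a disease whose severity rises with age, this shift is exactly the concern raised in the paragraph above.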
=== When mass vaccination exceeds the herd immunity ===
If a vaccination program causes the proportion of immune individuals in a population to exceed the critical threshold for a significant length of time, transmission of the infectious disease in that population will stop. If elimination occurs everywhere at the same time, then this can lead to eradication.
Elimination
Interruption of endemic transmission of an infectious disease, which occurs if each infected individual infects less than one other, is achieved by maintaining vaccination coverage to keep the proportion of immune individuals above the critical immunization threshold.
Eradication
Elimination everywhere at the same time such that the infectious agent dies out (for example, smallpox and rinderpest).
== Reliability ==
Models have the advantage of examining multiple outcomes simultaneously, rather than making a single forecast. Models have shown varying degrees of reliability in past pandemics, such as SARS, SARS-CoV-2, swine flu, MERS and Ebola.
== See also ==
== References ==
== Sources ==
Barabási AL (2016). Network Science. Cambridge University Press. ISBN 978-1-107-07626-6.
Brauer F, Castillo-Chavez C (2012). Mathematical Models in Population Biology and Epidemiology. Texts in Applied Mathematics. Vol. 40. doi:10.1007/978-1-4614-1686-9. ISBN 978-1-4614-1685-2.
Daley DJ, Gani JM (1999). Epidemic Modelling: An Introduction. Cambridge University Press. ISBN 978-0-521-01467-0.
Hamer WH (1929). Epidemiology, Old and New. Macmillan. hdl:2027/mdp.39015006657475. OCLC 609575950.
Ross R (1910). The Prevention of Malaria. Dutton. hdl:2027/uc2.ark:/13960/t02z1ds0q. OCLC 610268760.
== Further reading ==
== External links ==
Software
Model-Builder: Interactive (GUI-based) software to build, simulate, and analyze ODE models.
GLEaMviz Simulator: Enables simulation of emerging infectious diseases spreading across the world.
STEM: Open source framework for Epidemiological Modeling available through the Eclipse Foundation.
R package surveillance: Temporal and Spatio-Temporal Modeling and Monitoring of Epidemic Phenomena | Wikipedia/Epidemic_model |
A virtual private network (VPN) is a network architecture for virtually extending a private network (i.e. any computer network which is not the public Internet) across one or multiple other networks which are either untrusted (as they are not controlled by the entity aiming to implement the VPN) or need to be isolated (thus making the lower network invisible or not directly usable).
A VPN can extend access to a private network to users who do not have direct access to it, such as an office network allowing secure access from off-site over the Internet. This is achieved by creating a link between computing devices and computer networks by the use of network tunneling protocols.
A VPN can be made secure to use on top of an insecure communication medium (such as the public Internet) by choosing a tunneling protocol that implements encryption. Compared with dedicated communication lines, this kind of VPN implementation offers reduced costs and greater flexibility for remote workers.
The term VPN is also used to refer to VPN services which sell access to their own private networks for internet access by connecting their customers using VPN tunneling protocols.
== Motivation ==
The goal of a virtual private network is to allow network hosts to exchange network messages across another network to access private content, as if they were part of the same network. This is done in a way that makes crossing the intermediate network transparent to network applications. Users of a network connectivity service may consider such an intermediate network to be untrusted, since it is controlled by a third-party, and might prefer a VPN implemented via protocols that protect the privacy of their communication.
In the case of a Provider-provisioned VPN, the goal is not to protect against untrusted networks, but to isolate parts of the provider's own network infrastructure in virtual segments, in ways that make the contents of each segment private with respect to the others. This situation makes many other tunneling protocols suitable for building PPVPNs, even with weak or no security features (like in VLAN).
== Operation ==
How a VPN works depends on which technologies and protocols the VPN is built upon. A tunneling protocol is used to transfer the network messages from one side to the other. The goal is to take network messages from applications on one side of the tunnel and replay them on the other side. Applications do not need to be modified to let their messages pass through the VPN, because the virtual network or link is made available to the OS.
Applications that do implement tunneling or proxying features for themselves without making such features available as a network interface, are not to be considered VPN implementations but may achieve the same or similar end-user goal of exchanging private contents with a remote network.
== Topology ==
Virtual private networks configurations can be classified depending on the purpose of the virtual extension, which makes different tunneling strategies appropriate for different topologies:
Remote access
A host-to-network configuration is analogous to joining one or more computers to a network to which they cannot be directly connected. This type of extension provides that computer access to local area network of a remote site, or any wider enterprise networks, such as an intranet. Each computer is in charge of activating its own tunnel towards the network it wants to join. The joined network is only aware of a single remote host for each tunnel. This may be employed for remote workers, or to enable people accessing their private home or company resources without exposing them on the public Internet. Remote access tunnels can be either on-demand or always-on. Because the remote host location is usually unknown to the central network until the former tries to reach it, proper implementations of this configuration require the remote host to initiate the communication towards the central network it is accessing.
Site-to-site
A site-to-site configuration connects two networks. This configuration expands a network across geographically disparate locations. Tunneling is only done between gateway devices located at each network location. These devices then make the tunnel available to other local network hosts that aim to reach any host on the other side. This is useful to keep sites connected to each other in a stable manner, like office networks to their headquarters or datacenter. In this case, any side may be configured to initiate the communication as long as it knows how to reach the other.
In the context of site-to-site configurations, the terms intranet and extranet are used to describe two different use cases. An intranet site-to-site VPN describes a configuration where the sites connected by the VPN belong to the same organization, whereas an extranet site-to-site VPN joins sites belonging to multiple organizations.
Typically, individuals interact with remote access VPNs, whereas businesses tend to make use of site-to-site connections for business-to-business, cloud computing, and branch office scenarios. However, these technologies are not mutually exclusive and, in a significantly complex business network, may be combined.
Apart from the general topology configuration, a VPN may also be characterized by:
the tunneling protocol used to tunnel the traffic,
the tunnel's termination point location, e.g., on the customer edge or network-provider edge,
the security features provided,
the OSI layer they present to the connecting network, such as Layer 2 link/circuit or Layer 3 network connectivity,
the number of simultaneous allowed tunnels,
the relationship between the actor implementing the VPN and the network infrastructure provider, and whether the former trusts the medium of the latter or not
A variety of VPN techniques exist to adapt to the above characteristics, each providing different network tunneling capabilities and different security model coverage or interpretation.
== Native and third-party support ==
Operating system vendors and developers typically offer native support for a selection of VPN protocols. These are subject to change over the years, as some have proven to be insecure with respect to modern requirements and expectations, and others have emerged.
=== Support in consumer operating systems ===
Desktop, smartphone and other end-user device operating systems usually support configuring remote access VPNs from their graphical or command-line tools. However, due to the variety of often non-standard VPN protocols, there exist many third-party applications that implement additional protocols not yet, or no longer, natively supported by the OS.
For instance, Android lacked native IPsec IKEv2 support until version 11, and users needed to install third-party apps in order to connect that kind of VPN. Conversely, Windows does not natively support plain IPsec IKEv1 remote access native VPN configuration (commonly used by Cisco and Fritz!Box VPN solutions).
=== Support in network devices ===
Network appliances, such as firewalls, often include VPN gateway functionality for either remote access or site-to-site configurations. Their administration interfaces often facilitate setting up virtual private networks with a selection of supported protocols. In some cases, like in the open source operating systems devoted to firewalls and network devices (like OpenWrt, IPFire, PfSense or OPNsense) it is possible to add support for additional VPN protocols by installing missing software components or third-party apps.
Commercial appliances with VPN features based on proprietary hardware/software platforms usually support a consistent VPN protocol across their products but do not open up for customizations outside the use cases they intended to implement. This is often the case for appliances that rely on hardware acceleration of VPNs to provide higher throughput or support a larger number of simultaneously connected users.
== Security mechanisms ==
Whenever a VPN is intended to virtually extend a private network over a third-party untrusted medium, it is desirable that the chosen protocols match the following security model:
confidentiality to prevent disclosure of private information or data sniffing, such that even if the network traffic is sniffed at the packet level (see network sniffer or deep packet inspection), an attacker would see only encrypted data, not the raw data
message integrity to detect and reject any instance of tampering with transmitted messages; data packets are protected by a message authentication code (MAC), so that a message which has been altered or tampered with is rejected because the MAC does not match the altered data packet
VPNs are not intended to make connecting users anonymous or unidentifiable from the untrusted medium network provider's perspective. If the VPN makes use of protocols that do provide those confidentiality features, their usage can increase user privacy by making the untrusted medium owner unable to access the private data exchanged across the VPN.
=== Authentication ===
In order to prevent unauthorized users from accessing the VPN, most protocols can be implemented in ways that also enable authentication of connecting parties. This protects the confidentiality, integrity and availability of the joined remote network.
Tunnel endpoints can be authenticated in various ways during the VPN access initiation. Authentication can happen immediately on VPN initiation (e.g. by simple whitelisting of the endpoint IP address), or much later, after actual tunnels are already active (e.g. with a web captive portal).
Remote-access VPNs, which are typically user-initiated, may use passwords, biometrics, two-factor authentication, or other cryptographic methods. People initiating this kind of VPN from unknown arbitrary network locations are also called "road-warriors". In such cases, it is not possible to use originating network properties (e.g. IP addresses) as secure authentication factors, and stronger methods are needed.
Site-to-site VPNs often use passwords (pre-shared keys) or digital certificates. Depending on the VPN protocol, they may store the key to allow the VPN tunnel to establish automatically, without intervention from the administrator.
== Protocols ==
A virtual private network is based on a tunneling protocol, and may be possibly combined with other network or application protocols providing extra capabilities and different security model coverage.
Internet Protocol Security (IPsec) was initially developed by the Internet Engineering Task Force (IETF) for IPv6, and was required in all standards-compliant implementations of IPv6 before RFC 6434 made it only a recommendation. This standards-based security protocol is also widely used with IPv4. Its design meets most security goals: availability, integrity, and confidentiality. IPsec uses encryption, encapsulating an IP packet inside an IPsec packet. De-encapsulation happens at the end of the tunnel, where the original IP packet is decrypted and forwarded to its intended destination. IPsec tunnels are set up by the Internet Key Exchange (IKE) protocol. IPsec tunnels made with IKE version 1 (also known as IKEv1 tunnels, or often just "IPsec tunnels") can be used alone to provide a VPN, but have often been combined with the Layer 2 Tunneling Protocol (L2TP). This combination made it possible to reuse existing L2TP-related implementations for more flexible authentication features (e.g. Xauth), desirable for remote-access configurations. IKE version 2, which was created by Microsoft and Cisco, can be used alone to provide IPsec VPN functionality. Its primary advantages are native support for authenticating via the Extensible Authentication Protocol (EAP) and that the tunnel can be seamlessly restored when the IP address of the associated host changes, which is typical of a roaming mobile device, whether on 3G or 4G LTE networks. IPsec is also often supported by network hardware accelerators, which makes IPsec VPNs desirable for low-power scenarios, like always-on remote access VPN configurations.
Transport Layer Security (SSL/TLS) can tunnel an entire network's traffic (as it does in the OpenVPN project and SoftEther VPN project) or secure an individual connection. A number of vendors provide remote-access VPN capabilities through TLS. A VPN based on TLS can connect from locations where the usual TLS web navigation (HTTPS) is supported, without special extra configuration.
Datagram Transport Layer Security (DTLS) – used in Cisco AnyConnect VPN and in OpenConnect VPN to solve the issues TLS has with tunneling over TCP (SSL/TLS are TCP-based, and tunneling TCP over TCP can lead to big delays and connection aborts).
Microsoft Point-to-Point Encryption (MPPE) works with the Point-to-Point Tunneling Protocol and in several compatible implementations on other platforms.
Microsoft Secure Socket Tunneling Protocol (SSTP) tunnels Point-to-Point Protocol (PPP) or Layer 2 Tunneling Protocol traffic through an SSL/TLS channel (SSTP was introduced in Windows Server 2008 and in Windows Vista Service Pack 1).
Multi Path Virtual Private Network (MPVPN). Ragula Systems Development Company owns the registered trademark "MPVPN".
Secure Shell (SSH) VPN – OpenSSH offers VPN tunneling (distinct from port forwarding) to secure remote connections to a network, inter-network links, and remote systems. The OpenSSH server provides a limited number of concurrent tunnels. The VPN feature itself does not support personal authentication. SSH is more often used to remotely connect to machines or networks rather than to implement a site-to-site VPN.
WireGuard is a free and open-source VPN protocol designed for simplicity and performance. In 2020, WireGuard support was added to both the Linux and Android kernels, opening it up to adoption by VPN providers. By default, WireGuard utilizes the Curve25519 protocol for key exchange and ChaCha20-Poly1305 for encryption and message authentication, but also includes the ability to pre-share a symmetric key between the client and server.
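As an illustration of how compact such a protocol's configuration can be, a minimal WireGuard client configuration might look as follows. The keys, addresses, and endpoint below are placeholder values, not working settings:

```ini
[Interface]
# Placeholder private key of this peer (normally generated with `wg genkey`)
PrivateKey = <client-private-key>
Address = 10.0.0.2/32

[Peer]
# Placeholder public key of the remote VPN server
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Route all IPv4 traffic through the tunnel
AllowedIPs = 0.0.0.0/0
# Keep NAT mappings alive with a packet every 25 seconds
PersistentKeepalive = 25
```

The AllowedIPs setting doubles as both a routing rule (which destinations use the tunnel) and an ingress filter (which source addresses are accepted from the peer).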
OpenVPN is a free and open-source VPN protocol based on the TLS protocol. It supports perfect forward-secrecy, and most modern secure cipher suites, like AES, Serpent, TwoFish, etc. It is currently being developed and updated by OpenVPN Inc., a non-profit providing secure VPN technologies.
Crypto IP Encapsulation (CIPE) is a free and open-source VPN implementation for tunneling IPv4 packets over UDP via encapsulation. CIPE was developed for Linux operating systems by Olaf Titz, with a Windows port implemented by Damion K. Wilson. Development for CIPE ended in 2002.
== Trusted delivery networks ==
Trusted VPNs do not use cryptographic tunneling; instead, they rely on the security of a single provider's network to protect the traffic.
Multiprotocol Label Switching (MPLS) often overlays VPNs, often with quality-of-service control over a trusted delivery network.
L2TP, a standards-based replacement for, and a compromise taking the good features from, two proprietary VPN protocols: Cisco's Layer 2 Forwarding (L2F) (obsolete as of 2009) and Microsoft's Point-to-Point Tunneling Protocol (PPTP).
From a security standpoint, a VPN must either trust the underlying delivery network or enforce security with a mechanism in the VPN itself. Unless the trusted delivery network runs among physically secure sites only, both trusted and secure models need an authentication mechanism for users to gain access to the VPN.
== Mobile environments ==
Mobile virtual private networks are used in settings where an endpoint of the VPN is not fixed to a single IP address, but instead roams across various networks such as data networks from cellular carriers or between multiple Wi-Fi access points without dropping the secure VPN session or losing application sessions. Mobile VPNs are widely used in public safety where they give law-enforcement officers access to applications such as computer-assisted dispatch and criminal databases, and in other organizations with similar requirements such as field service management and healthcare.
== Networking limitations ==
A limitation of traditional VPNs is that they are point-to-point connections and do not tend to support broadcast domains; therefore, communication, software, and networking, which are based on layer 2 and broadcast packets, such as NetBIOS used in Windows networking, may not be fully supported as on a local area network. Variants on VPN such as Virtual Private LAN Service (VPLS) and layer 2 tunneling protocols are designed to overcome this limitation.
== See also ==
== References ==
== Further reading ==
Kelly, Sean (August 2001). "Necessity is the mother of VPN invention". Communication News: 26–28. ISSN 0010-3632. Archived from the original on 17 December 2001. | Wikipedia/Virtual_private_network |
Network delay is a design and performance characteristic of a telecommunications network. It specifies the latency for a bit of data to travel across the network from one communication endpoint to another. It is typically measured in multiples or fractions of a second. Delay may differ slightly, depending on the location of the specific pair of communicating endpoints. Engineers usually report both the maximum and average delay, and they divide the delay into several parts:
Processing delay – time it takes a router to process the packet header
Queuing delay – time the packet spends in routing queues
Transmission delay – time it takes to push the packet's bits onto the link
Propagation delay – time for a signal to propagate through the media
A certain minimum level of delay is experienced by signals due to the time it takes to transmit a packet serially through a link. This delay is extended by more variable levels of delay due to network congestion. IP network delays can range from a few milliseconds to several hundred milliseconds.
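The four delay components simply add along each hop. A minimal sketch of the per-hop arithmetic follows; the link rate, packet size, distance, and processing time are assumed example values:

```python
# Per-hop nodal delay = processing + queuing + transmission + propagation.
# All input values below are illustrative assumptions.
packet_bits = 1500 * 8          # 1500-byte packet
link_rate = 100e6               # 100 Mbit/s link
distance_m = 1000e3             # 1000 km of fibre
signal_speed = 2e8              # roughly 2/3 the speed of light in fibre, m/s
processing_s = 20e-6            # router header-processing time
queuing_s = 0.0                 # empty queue in this example

transmission_s = packet_bits / link_rate    # time to push bits onto the link
propagation_s = distance_m / signal_speed   # time for the signal to travel
total_s = processing_s + queuing_s + transmission_s + propagation_s
print(round(total_s * 1e3, 3), "ms")        # 5.14 ms
```

Over this long link, propagation delay (5 ms) dominates transmission delay (0.12 ms); on a short LAN hop the balance reverses.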
== See also ==
Age of Information
End-to-end delay
Lag (video games)
Latency (engineering)
Minimum-Pairs Protocol
Round-trip delay
== References ==
== External links ==
Impact of Delay in Voice over IP Services (PDF), retrieved 2018-10-31
Internet Delay Space Study at Rice University (PDF), retrieved 2018-10-31 | Wikipedia/Network_delay |
A public data network (PDN) is a network established and operated by a telecommunications administration, or a recognized private operating agency, for the specific purpose of providing data transmission services for the public.
The first public packet switching networks were RETD in Spain (1972), the experimental RCP network in France (1972) and Telenet in the United States (1975). "Public data network" was the common name given to the collection of X.25 providers, the first of which were Telenet in the U.S. and DATAPAC in Canada (both in 1976), and Transpac in France (in 1978). The International Packet Switched Service (IPSS) was the first commercial and international packet-switched network (1978). The networks were interconnected with gateways using X.75. These combined networks had large global coverage during the 1980s and into the 1990s. The networks later provided the infrastructure for the early Internet.
== Description ==
In communications, a PDN is a circuit- or packet-switched network that is available to the public and that can transmit data in digital form. A PDN provider is a company that provides access to a PDN and that provides any of X.25, Frame Relay, or cell relay (ATM) services. Access to a PDN generally includes a guaranteed bandwidth, known as the committed information rate (CIR). Costs for the access depend on the guaranteed rate. PDN providers differ in how they charge for temporary increases in required bandwidth (known as surges). Some use the amount of overrun; others use the surge duration.
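The two surge-billing approaches mentioned above can be contrasted with a small sketch. The CIR, traffic samples, and billing models are hypothetical illustrations, not any provider's actual tariff:

```python
# Contrast two hypothetical ways a PDN provider might bill traffic
# above the committed information rate (CIR). All numbers are illustrative.
CIR = 64                                              # committed rate, kbit/s
samples = [(60, 64), (30, 128), (60, 64), (10, 256)]  # (duration_s, rate_kbps)

# Model 1: charge by the amount of overrun (kilobits sent above the CIR).
overrun_kbit = sum(t * max(r - CIR, 0) for t, r in samples)

# Model 2: charge by the surge duration (seconds spent above the CIR).
surge_seconds = sum(t for t, r in samples if r > CIR)

print(overrun_kbit, surge_seconds)   # 3840 40
```

Note the two models rank customers differently: a short, intense burst costs more under the overrun model, while a long, mild overage costs more under the duration model.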
=== Public switched data network ===
A public switched data network (PSDN) is a network for providing data services via a system of multiple wide area networks, similar in concept to the public switched telephone network (PSTN). A PSDN may use a variety of switching technologies, including packet switching, circuit switching, and message switching. A packet-switched PSDN may also be called a packet-switched data network.
Originally the term PSDN referred only to Packet Switch Stream (PSS), an X.25-based packet-switched network in the United Kingdom, mostly used to provide leased-line connections between local area networks and the Internet using permanent virtual circuits (PVCs). Today, the term may refer not only to Frame Relay and Asynchronous Transfer Mode (ATM), both providing PVCs, but also to Internet Protocol (IP), GPRS, and other packet-switching techniques.
Whilst there are several technologies that are superficially similar to the PSDN, such as Integrated Services Digital Network (ISDN) and the digital subscriber line (DSL) technologies, they are not examples of it. ISDN utilizes the PSTN circuit-switched network, and DSL uses point-to-point circuit switching communications overlaid on the PSTN local loop (copper wires), usually utilized for access to a packet-switched broadband IP network.
=== Public data transmission service ===
A public data transmission service is a data transmission service that is established and operated by a telecommunication administration, or a recognized private operating agency, and uses a public data network. A public data transmission service may include Circuit Switched Data, packet-switched, and leased line data transmission.
== History ==
Public packet switching networks came into operation in the 1970s. The first were RETD in Spain, in 1972; the experimental RCP in France, also in 1972; Telenet in the United States, which began operation with proprietary protocols in 1975; EIN in the EEC in 1976; and EPSS in the United Kingdom in 1976 (in development since 1969).
Telenet adopted X.25 protocols shortly after they were published in 1976 while DATAPAC in Canada was the first public data network specifically designed for X.25, also in 1976. Many other PDNs adopted X.25 when they came into operation, including Transpac in France in 1978, Euronet in the EEC in 1979, Packet Switch Stream in the United Kingdom in 1980, and AUSTPAC in Australia in 1982. Iberpac in Spain adopted X.25 in the 1980s. Tymnet and CompuServe in the United States also adopted X.25.
The International Packet Switched Service (IPSS) was the first commercial and international packet-switched network. It was a collaboration between British and American telecom companies that became operational in 1978.
The SITA Data Transport Network for airlines adopted X.25 in 1981, becoming the world's most extensive packet-switching network.
The networks were interconnected with gateways using X.75. These combined networks had large global coverage during the 1980s and into the 1990s.
Over time, other packet-switching technologies, including Frame Relay (FR) and Asynchronous Transfer Mode (ATM) gradually replaced X.25.
Many of these networks later adopted TCP/IP and provided the infrastructure for the early Internet.
== See also ==
History of the Internet
International Network Working Group
National research and education network
Protocol Wars
OSI model
X.25 § History
== References ==
This article incorporates public domain material from Federal Standard 1037C. General Services Administration. Archived from the original on 2022-01-22.
== Sources ==
Schatt, Stan (1991). Linking LANs: A Micro Manager's Guide. McGraw-Hill. ISBN 0-8306-3755-9. | Wikipedia/Public_data_network |
Data rate and data transfer rate can refer to several related and overlapping concepts in communications networks:
== Achieved rate ==
Bit rate, the number of bits that are conveyed or processed per unit of time
Data signaling rate or gross bit rate, a bit rate that includes protocol overhead
Symbol rate or baud rate, the number of symbol changes, waveform changes, or signaling events across the transmission medium per unit of time
Data-rate units, measures of the bit rate or baud rate of a link
Data transfer rate (disk drive), a data rate specific to disk drive operations
Throughput, the rate of successful message delivery, or level of bandwidth consumption
Transfers per second
== Capacity ==
Bandwidth (computing), the maximum rate of data transfer across a given path
Channel capacity, an information-theoretic upper bound on the rate at which data can be reliably transmitted, given noise on a channel | Wikipedia/Data_transfer_rate |
A network bridge is a computer networking device that creates a single, aggregate network from multiple communication networks or network segments. This function is called network bridging. Bridging is distinct from routing. Routing allows multiple networks to communicate independently and yet remain separate, whereas bridging connects two separate networks as if they were a single network. In the OSI model, bridging is performed in the data link layer (layer 2). If one or more segments of the bridged network are wireless, the device is known as a wireless bridge.
The main types of network bridging technologies are simple bridging, multiport bridging, and learning or transparent bridging.
== Transparent bridging ==
Transparent bridging uses a table called the forwarding information base to control the forwarding of frames between network segments. The table starts empty and entries are added as the bridge receives frames. If a destination address entry is not found in the table, the frame is forwarded to all other ports of the bridge, flooding the frame to all segments except the one from which it was received. By means of these flooded frames, a host on the destination network will respond and a forwarding database entry will be created. Both source and destination addresses are used in this process: source addresses are recorded in entries in the table, while destination addresses are looked up in the table and matched to the proper segment to send the frame to. Digital Equipment Corporation (DEC) originally developed the technology in 1983 and introduced the LANBridge 100 that implemented it in 1986.
In the context of a two-port bridge, the forwarding information base can be seen as a filtering database. A bridge reads a frame's destination address and decides to either forward or filter. If the bridge determines that the destination host is on another segment on the network, it forwards the frame to that segment. If the destination address belongs to the same segment as the source address, the bridge filters the frame, preventing it from reaching the other network where it is not needed.
Transparent bridging can also operate over devices with more than two ports. As an example, consider a bridge connected to three hosts, A, B, and C. The bridge has three ports. A is connected to bridge port 1, B is connected to bridge port 2, C is connected to bridge port 3. A sends a frame addressed to B to the bridge. The bridge examines the source address of the frame and creates an address and port number entry for host A in its forwarding table. The bridge examines the destination address of the frame and does not find it in its forwarding table so it floods (broadcasts) it to all other ports: 2 and 3. The frame is received by hosts B and C. Host C examines the destination address and ignores the frame as it does not match with its address. Host B recognizes a destination address match and generates a response to A. On the return path, the bridge adds an address and port number entry for B to its forwarding table. The bridge already has A's address in its forwarding table so it forwards the response only to port 1. Host C or any other hosts on port 3 are not burdened with the response. Two-way communication is now possible between A and B without any further flooding to the network. Now, if A sends a frame addressed to C, the same procedure will be used, but this time the bridge will not create a new forwarding-table entry for A's address/port because it has already done so.
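The walkthrough above can be sketched as a tiny learning-bridge simulation. The host names and port numbers follow the example; the code is an illustration of the learn-filter-flood logic, not a real bridge implementation:

```python
# Minimal transparent-bridge simulation following the A/B/C example above.
class Bridge:
    def __init__(self, ports):
        self.ports = ports          # list of port numbers
        self.fib = {}               # forwarding table: address -> port

    def receive(self, src, dst, in_port):
        """Handle a frame; return the list of ports it is sent out on."""
        self.fib[src] = in_port     # learn the source address
        if dst in self.fib:
            out = self.fib[dst]
            # Filter if the destination is on the ingress segment.
            return [] if out == in_port else [out]
        # Unknown destination: flood to all ports except the ingress one.
        return [p for p in self.ports if p != in_port]

bridge = Bridge([1, 2, 3])          # A on port 1, B on port 2, C on port 3
print(bridge.receive("A", "B", 1))  # B unknown, flooded: [2, 3]
print(bridge.receive("B", "A", 2))  # A already learned: [1]
print(bridge.receive("A", "B", 1))  # B now learned: [2]
```

After the first exchange, frames between A and B follow the forwarding table directly and no longer burden host C's segment.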
Bridging is called transparent when the frame format and its addressing aren't changed substantially. Non-transparent bridging is required especially when the frame addressing schemes on both sides of a bridge are not compatible with each other, e.g. between ARCNET with local addressing and Ethernet using IEEE MAC addresses, requiring translation. However, most often such incompatible networks are routed in between, not bridged.
== Simple bridging ==
A simple bridge connects two network segments, typically by operating transparently and deciding on a frame-by-frame basis whether or not to forward from one network to the other. A store and forward technique is typically used so, as part of forwarding, the frame integrity is verified on the source network and CSMA/CD delays are accommodated on the destination network. In contrast to repeaters which simply extend the maximum span of a segment, bridges only forward frames that are required to cross the bridge. Additionally, bridges reduce collisions by creating a separate collision domain on either side of the bridge.
== Multiport bridging ==
A multiport bridge connects multiple networks and operates transparently to decide on a frame-by-frame basis whether to forward traffic. Additionally, a multiport bridge must decide where to forward traffic. Like the simple bridge, a multiport bridge typically uses store and forward operation. The multiport bridge function serves as the basis for network switches.
== Implementation ==
The forwarding information base stored in content-addressable memory (CAM) is initially empty. For each received Ethernet frame the switch learns from the frame's source MAC address and adds this together with an interface identifier to the forwarding information base. The switch then forwards the frame to the interface found in the CAM based on the frame's destination MAC address. If the destination address is unknown the switch sends the frame out on all interfaces (except the ingress interface). This behavior is called unicast flooding.
== Forwarding ==
Once a bridge learns the addresses of its connected nodes, it forwards data link layer frames using a layer-2 forwarding method. There are four forwarding methods a bridge can use; the second through fourth increase performance when used on switch products with the same input and output port bandwidths:
Store and forward: the switch buffers and verifies each frame before forwarding it; a frame is received in its entirety before it is forwarded.
Cut through: the switch starts forwarding after the frame's destination address is received. There is no error checking with this method. When the outgoing port is busy at the time, the switch falls back to store-and-forward operation. Also, when the egress port is running at a faster data rate than the ingress port, store-and-forward is usually used.
Fragment free: a method that attempts to retain the benefits of both store and forward and cut through. Fragment free checks the first 64 bytes of the frame, where addressing information is stored. According to Ethernet specifications, collisions should be detected during the first 64 bytes of the frame, so frame transmissions that are aborted because of a collision will not be forwarded. Error checking of the actual data in the packet is left for the end device.
Adaptive switching: a method of automatically selecting between the other three modes.
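One way an adaptive switch might select among the other three modes can be sketched as follows. The decision inputs, thresholds, and mode names here are assumptions for illustration, not taken from any vendor implementation.

```python
# Hypothetical sketch of adaptive switching: pick a forwarding mode from
# simple link measurements (all thresholds are made-up assumptions).

def choose_mode(egress_busy, rate_mismatch, crc_error_rate, runt_rate):
    # Buffering the whole frame is unavoidable when the egress port is busy
    # or runs at a different data rate than the ingress port.
    if egress_busy or rate_mismatch:
        return "store and forward"
    # Too many corrupted frames: verify every frame before forwarding.
    if crc_error_rate > 0.01:
        return "store and forward"
    # Many collision fragments: wait for the first 64 bytes (fragment free).
    if runt_rate > 0.01:
        return "fragment free"
    # Clean link: forward as soon as the destination address is read.
    return "cut through"

mode = choose_mode(False, False, 0.0, 0.0)
```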
== Shortest Path Bridging ==
Shortest Path Bridging (SPB), specified in the IEEE 802.1aq standard and based on Dijkstra's algorithm, is a computer networking technology intended to simplify the creation and configuration of networks, while enabling multipath routing. It is a proposed replacement for Spanning Tree Protocol which blocks any redundant paths that could result in a switching loop. SPB allows all paths to be active with multiple equal-cost paths. SPB also increases the number of VLANs allowed on a layer-2 network.
TRILL (Transparent Interconnection of Lots of Links) is the successor to Spanning Tree Protocol, both having been created by the same person, Radia Perlman. The catalyst for TRILL was an event at Beth Israel Deaconess Medical Center which began on 13 November 2002. The concept of RBridges was first proposed to the Institute of Electrical and Electronics Engineers in 2004, which in 2005 rejected what came to be known as TRILL, and in the years 2006 through 2012 devised an incompatible variation known as Shortest Path Bridging.
== See also ==
Audio Video Bridging – Specifications for synchronized, low-latency streaming
IEEE 802.1D – Standard which includes bridging, Spanning Tree Protocol and others
IEEE 802.1Q – IEEE networking standard supporting VLANs
IEEE 802.1ah-2008 – Standard for bridging over a provider's network
Promiscuous mode – Network interface controller mode that eavesdrops on messages intended for others
== References ==
A frame is a digital data transmission unit in computer networking and telecommunications. In packet switched systems, a frame is a simple container for a single network packet. In other telecommunications systems, a frame is a repeating structure supporting time-division multiplexing.
A frame typically includes frame synchronization features consisting of a sequence of bits or symbols that indicate to the receiver the beginning and end of the payload data within the stream of symbols or bits it receives. If a receiver is connected to the system during frame transmission, it ignores the data until it detects a new frame synchronization sequence.
== Packet switching ==
In the OSI model of computer networking, a frame is the protocol data unit at the data link layer. Frames are the result of the final layer of encapsulation before the data is transmitted over the physical layer. A frame is "the unit of transmission in a link layer protocol, and consists of a link layer header followed by a packet." Each frame is separated from the next by an interframe gap. A frame is a series of bits generally composed of frame synchronization bits, the packet payload, and a frame check sequence. Examples are Ethernet frames, Point-to-Point Protocol (PPP) frames, Fibre Channel frames, and V.42 modem frames.
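The frame structure just described — synchronization marker, packet payload, frame check sequence — can be sketched as follows. This is a minimal illustration; the one-byte 0x7E flag is an assumption (borrowed from HDLC-style framing), not the layout of any one protocol.

```python
import zlib

SYNC = b"\x7e"  # illustrative flag byte; an assumption, not a standard value

def build_frame(payload: bytes) -> bytes:
    """Frame = sync byte + payload + 4-byte CRC-32 frame check sequence."""
    fcs = zlib.crc32(payload).to_bytes(4, "big")
    return SYNC + payload + fcs

def check_frame(frame: bytes) -> bytes:
    """Verify the frame check sequence and return the payload."""
    payload, fcs = frame[1:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != fcs:
        raise ValueError("frame check sequence mismatch")
    return payload

frame = build_frame(b"network packet")
```

A receiver that recomputes the check sequence over the payload detects corruption anywhere in the frame body.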
Often, frames of several different sizes are nested inside each other. For example, when using Point-to-Point Protocol (PPP) over asynchronous serial communication, the eight bits of each individual byte are framed by start and stop bits, the payload data bytes in a network packet are framed by the header and footer, and several packets can be framed with frame boundary octets.
== Time-division multiplex ==
In telecommunications, specifically in time-division multiplex (TDM) and time-division multiple access (TDMA) variants, a frame is a cyclically repeated data block that consists of a fixed number of time slots, one for each logical TDM channel or TDMA transmitter. In this context, a frame is typically an entity at the physical layer. TDM application examples are SONET/SDH and the ISDN circuit-switched B-channel, while TDMA examples are Circuit Switched Data used in early cellular voice services. The frame is also an entity for time-division duplex, where the mobile terminal may transmit during some time slots and receive during others.
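The cyclically repeated frame of fixed time slots can be sketched as follows; the channel contents and one-byte slot size are made-up example data.

```python
# Sketch of a TDM frame: each repeated frame carries one fixed time slot
# per logical channel, interleaving the channels onto one link.

def tdm_frames(channels, slot_size=1):
    """Interleave per-channel byte streams into a sequence of frames."""
    n_frames = min(len(c) for c in channels) // slot_size
    return [
        b"".join(c[i * slot_size:(i + 1) * slot_size] for c in channels)
        for i in range(n_frames)
    ]

# Three logical channels share one link, one slot each per frame.
frames = tdm_frames([b"AAAA", b"BBBB", b"CCCC"])
```

Each frame carries exactly one slot from each channel, so a receiver recovers a channel by reading the same slot position in every frame.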
== See also ==
Application-layer framing
Datagram
Jumbo frame
Multiplex techniques
Overhead bit
== References ==
An overlay network is a logical computer network that is layered on top of a physical network. The concept of overlay networking is distinct from the traditional model of OSI layered networks, and almost always assumes that the underlay network is an IP network of some kind.
Some examples of overlay networking technologies are VXLAN, BGP VPNs, and IP-over-IP technologies such as GRE, IPsec tunnels, and SD-WAN.
== Structure ==
Nodes in an overlay network can be thought of as being connected by virtual or logical links, each of which corresponds to a path, perhaps through many physical links, in the underlying network. For example, distributed systems such as peer-to-peer networks are overlay networks because their nodes form networks over existing network connections.
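The correspondence between one virtual link and a multi-hop physical path can be sketched as follows. The topology is illustrative: two overlay endpoints A and B, joined in the underlay through routers R1 and R2 (all names are assumptions).

```python
from collections import deque

# Sketch: a single overlay (virtual) link A--B rides on a multi-hop path
# through the underlying physical network.

underlay = {"A": ["R1"], "R1": ["A", "R2"], "R2": ["R1", "B"], "B": ["R2"]}

def underlay_path(src, dst):
    """BFS over the physical topology: the path an overlay link maps onto."""
    prev, queue = {src: None}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for neighbor in underlay[node]:
            if neighbor not in prev:
                prev[neighbor] = node
                queue.append(neighbor)

path = underlay_path("A", "B")  # the physical path under one virtual link
```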
The Internet was originally built as an overlay upon the telephone network, while today (through the advent of VoIP), the telephone network is increasingly turning into an overlay network built on top of the Internet.
=== Attributes ===
Overlay networks have a certain set of attributes, including separation of logical addressing, security and quality of service. Other optional attributes include resiliency/recovery, encryption and bandwidth control.
== Uses ==
=== Telcos ===
Many telcos use overlay networks to provide services over their physical infrastructure. In networks that connect physically diverse sites (wide area networks, WANs), one common overlay network technology is BGP VPNs. These VPNs are provided as a service to enterprises to connect their own sites and applications. The advantage of these kinds of overlay networks is that the telecom operator does not need to manage addressing or other enterprise-specific network attributes.
Within data centers, VXLAN was more commonly used; however, due to its complexity and the need to stitch Layer 2 VXLAN-based overlay networks to Layer 3 IP/BGP networks, it has become more common to use BGP within data centers to provide Layer 2 connectivity between virtual machines or Kubernetes clusters.
=== Enterprise networks ===
Enterprise private networks were first overlaid on telecommunication networks such as Frame Relay and Asynchronous Transfer Mode packet switching infrastructures, but migration from these (now legacy) infrastructures to IP-based MPLS networks and virtual private networks began around 2001–2002 and is now complete, with very few remaining Frame Relay or ATM networks.
From an enterprise point of view, while an overlay VPN service configured by the operator might fulfill basic connectivity requirements, it lacks flexibility. For example, combining services from competing operators, or running an enterprise service over an Internet connection and securing it, is impossible with standard VPN technologies; hence the proliferation of SD-WAN overlay networks, which allow enterprises to connect sites and users regardless of the type of network access they have.
=== Over the Internet ===
The Internet is the basis for more overlaid networks that can be constructed in order to permit routing of messages to destinations not specified by an IP address. For example, distributed hash tables can be used to route messages to a node having a specific logical address, whose IP address is not known in advance.
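Routing by logical address rather than IP address can be sketched with a consistent-hashing ring, the scheme underlying many DHTs. The node names, key, and 16-bit ring size are assumptions for illustration.

```python
import hashlib

# Sketch of DHT-style routing: a key is hashed onto a ring and handled by
# the first node at or past its position, so a message reaches the right
# node without its IP address being known in advance.

def ring_hash(value: str, bits: int = 16) -> int:
    return int(hashlib.sha256(value.encode()).hexdigest(), 16) % (1 << bits)

def responsible_node(key: str, node_ids: dict) -> str:
    k = ring_hash(key)
    ring = sorted(node_ids.items(), key=lambda item: item[1])
    for name, node_id in ring:
        if node_id >= k:          # first node clockwise from the key
            return name
    return ring[0][0]             # wrap around to the start of the ring

nodes = {name: ring_hash(name) for name in ("node-a", "node-b", "node-c")}
owner = responsible_node("some-message-key", nodes)
```

Because every participant hashes keys the same way, any node can compute which peer is responsible for a key and route the message toward it.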
==== Quality of Service ====
Guaranteeing bandwidth through marking traffic has multiple solutions, including IntServ and DiffServ. IntServ requires per-flow tracking and consequently causes scaling issues in routing platforms; it has not been widely deployed. DiffServ has been widely deployed by many operators as a method to differentiate traffic types. While DiffServ itself provides no guarantee of throughput, it does allow the network operator to decide which traffic is higher priority, and hence will be forwarded first in congestion situations.
Overlay networks implement a much finer granularity of quality of service, allowing enterprise users to decide, on a per-application and per-user or per-site basis, which traffic should be prioritised.
=== Ease of Deployment ===
Overlay networks can be incrementally deployed at end-user sites or on hosts running the overlay protocol software, without cooperation from ISPs. The overlay has no control over how packets are routed in the underlying network between two overlay nodes, but it can control, for example, the sequence of overlay nodes a message traverses before reaching its destination.
== Advantages ==
=== Resilience ===
The objective of resilience in telecommunications networks is to enable automated recovery during failure events in order to maintain a desired service level or availability. As telecommunications networks are built in a layered fashion, resilience can be provided in the physical, optical, IP or session/application layers. Each layer relies on the resilience features of the layer below it. Overlay IP networks in the form of SD-WAN services therefore rely on the physical, optical and underlying IP services they are transported over. Application-layer overlays depend on all the layers below them. The advantage of overlays is that they are more flexible and programmable than traditional network infrastructure, which outweighs the disadvantages of additional latency, complexity and bandwidth overheads.
==== Application Layer Resilience Approaches ====
Resilient Overlay Networks (RON) are architectures that allow distributed Internet applications to detect and recover from disconnection or interference. This application-layer overlay improves on current wide-area routing protocols, which can take several minutes to recover from such failures. The RON nodes monitor the Internet paths among themselves and determine whether to reroute packets directly over the Internet or over other RON nodes, optimizing application-specific metrics.
The Resilient Overlay Network has a relatively simple conceptual design. RON nodes are deployed at various locations on the Internet. These nodes form an application-layer overlay that cooperates in routing packets. Each RON node monitors the quality of the Internet paths between itself and the others and uses this information to accurately and automatically select paths for each packet, thus reducing the amount of time required to recover from poor quality of service.
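The path-selection idea can be sketched as follows: a node compares the measured direct path against one-hop detours through other overlay nodes and picks whichever is currently best. The latency figures (milliseconds) and node names are made-up example data, not RON measurements.

```python
# Sketch of RON-style path selection over measured per-path latencies.

latency = {
    ("src", "dst"): 180.0,                        # degraded direct path
    ("src", "ron1"): 40.0, ("ron1", "dst"): 45.0,
    ("src", "ron2"): 90.0, ("ron2", "dst"): 120.0,
}

def best_path(src, dst, relays):
    """Compare the direct path against one-relay detours; return the best."""
    options = {(src, dst): latency[(src, dst)]}
    for relay in relays:
        options[(src, relay, dst)] = latency[(src, relay)] + latency[(relay, dst)]
    return min(options, key=options.get)

route = best_path("src", "dst", ["ron1", "ron2"])  # detour via ron1: 85 ms
```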
=== Multicast ===
Overlay multicast is also known as End System or Peer-to-Peer Multicast. High bandwidth multi-source multicast among widely distributed nodes is a critical capability for a wide range of applications, including audio and video conferencing, multi-party games and content distribution. Throughout the last decade, a number of research projects have explored the use of multicast as an efficient and scalable mechanism to support such group communication applications. Multicast decouples the size of the receiver set from the amount of state kept at any single node and potentially avoids redundant communication in the network.
The limited deployment of IP Multicast, a best-effort network layer multicast protocol, has led to considerable interest in alternate approaches that are implemented at the application layer, using only end-systems. In an overlay or end-system multicast approach, participating peers organize themselves into an overlay topology for data delivery. Each edge in this topology corresponds to a unicast path between two end-systems or peers in the underlying internet. All multicast-related functionality is implemented at the peers instead of at routers, and the goal of the multicast protocol is to construct and maintain an efficient overlay for data transmission.
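The end-system approach above can be sketched as a fanout-bounded delivery tree: the source unicasts to a few peers, which re-unicast to others, so routers keep no multicast state. The fanout of 2 and the peer names are assumptions for illustration.

```python
# Sketch of overlay (end-system) multicast: build a simple delivery tree
# whose edges are ordinary unicast paths between peers.

def build_delivery(source, peers, fanout=2):
    """Return {parent: [children]} for a fanout-bounded overlay tree."""
    tree, frontier, remaining = {}, [source], list(peers)
    while remaining:
        parent = frontier.pop(0)
        children = remaining[:fanout]
        del remaining[:fanout]
        tree[parent] = children
        frontier.extend(children)
    return tree

tree = build_delivery("S", ["p1", "p2", "p3", "p4", "p5"])
# Every peer receives the data exactly once over ordinary unicast links.
```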
== Disadvantages ==
No knowledge of the real network topology: the overlay is subject to the routing inefficiencies of the underlying network and may route traffic over sub-optimal paths
Possible increased latency compared to non-overlay services
Duplicate packets at certain points.
Additional encapsulation overhead, meaning lower total network capacity due to multiple payload encapsulation
== List of overlay network protocols ==
Overlay network protocols based on TCP/IP include:
Distributed hash tables (DHTs) based on the Chord protocol
JXTA
XMPP: the routing of messages based on an endpoint Jabber ID (example: nodeId_or_userId@domainId/resourceId) instead of by an IP address
Many peer-to-peer protocols including Gnutella, Gnutella2, Freenet, I2P and Tor.
PUCC
Solipsis: a France Télécom system for a massively shared virtual world
Overlay network protocols based on UDP/IP include:
Distributed hash tables (DHTs) based on Kademlia algorithm, such as KAD, etc.
Real Time Media Flow Protocol – Adobe Flash
== See also ==
Darknet
Mesh network
Computer network
Peercasting
Virtual Private Network
== References ==
== External links ==
List of overlay network implementations, July 2003
Resilient Overlay Networks
Overcast: reliable multicasting with an overlay network
OverQoS: An overlay based architecture for enhancing Internet QoS
End System Multicast
Routing metrics are configuration values used by a router to make routing decisions. A metric is typically one of many fields in a routing table. Router metrics help the router choose the best route among multiple feasible routes to a destination. The route will go in the direction of the gateway with the lowest metric.
A router metric is typically based on information such as path length, bandwidth, load, hop count, path cost, delay, maximum transmission unit (MTU), reliability and communications cost.
== Examples ==
A metric can include:
measuring link utilization (using SNMP)
number of hops (hop count)
speed of the path
packet loss (router congestion/conditions)
network delay
path reliability
path bandwidth
throughput [SNMP - query routers]
load
maximum transmission unit (MTU)
administrator configured value
In EIGRP, the metric is represented by an integer from 0 to 4,294,967,295 (the maximum value of an unsigned 32-bit integer). In Microsoft Windows XP routing, it ranges from 1 to 9999.
A metric can be considered as:
additive - the total cost of a path is the sum of the costs of individual links along the path,
concave - the total cost of a path is the minimum of the costs of individual links along the path,
multiplicative - the total cost of a path is the product of the costs of individual links along the path.
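The three composition rules above can be sketched directly; the per-link costs are made-up example values.

```python
from functools import reduce

# Sketch of the three ways a path cost composes from its link costs.

def path_cost(link_costs, rule):
    if rule == "additive":         # e.g. delay or hop count
        return sum(link_costs)
    if rule == "concave":          # e.g. bandwidth: the bottleneck link
        return min(link_costs)
    if rule == "multiplicative":   # e.g. per-link reliability probabilities
        return reduce(lambda a, b: a * b, link_costs)
    raise ValueError(f"unknown rule: {rule}")

costs = [10, 4, 7]                 # made-up costs for a three-link path
```

For the same three links, the additive cost is the sum, the concave cost is the minimum (bottleneck), and the multiplicative cost is the product.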
== Service level metrics ==
Router metrics are metrics used by a router to make routing decisions. It is typically one of many fields in a routing table.
Router metrics can contain any number of values that help the router determine the best route among multiple routes to a destination. A router metric is typically based on information like path length, bandwidth, load, hop count, path cost, delay, MTU, reliability and communications cost.
== See also ==
Administrative distance, indicates the source of routing table entry and is used in preference to metrics for routing decisions
== References ==
== External links ==
Survey of routing metrics
A telephone network is a telecommunications network that connects telephones, which allows telephone calls between two or more parties, as well as newer features such as fax and internet. The idea was revolutionized in the 1920s, as more and more people purchased telephones and used them to communicate news, ideas, and personal information. During the 1990s, it was further revolutionized by the advent of computers and other sophisticated communication devices, and with the use of dial-up internet.
There are a number of different types of telephone network:
A landline network where the telephones must be directly wired into a single telephone exchange. This is known as the public switched telephone network or PSTN.
A wireless network where the telephones are mobile and can move around anywhere within the coverage area.
A private network where a closed group of telephones are connected primarily to each other and use a gateway to reach the outside world. This is usually used inside companies and call centres and is called a private branch exchange (PBX).
Integrated Services Digital Network (ISDN)
Public telephone operators (PTOs) own and build networks of the first two types and provide services to the public under license from the national government. Virtual Network Operators (VNOs) lease capacity wholesale from the PTOs and sell on telephony service to the public directly.
== See also ==
Telephone service (disambiguation)
== References ==
In graph theory, a clique is a subset of vertices of an undirected graph such that every two distinct vertices in the clique are adjacent. That is, a clique of a graph G is an induced subgraph of G that is complete. Cliques are one of the basic concepts of graph theory and are used in many other mathematical problems and constructions on graphs. Cliques have also been studied in computer science: the task of finding whether there is a clique of a given size in a graph (the clique problem) is NP-complete, but despite this hardness result, many algorithms for finding cliques have been studied.
Although the study of complete subgraphs goes back at least to the graph-theoretic reformulation of Ramsey theory by Erdős & Szekeres (1935), the term clique comes from Luce & Perry (1949), who used complete subgraphs in social networks to model cliques of people; that is, groups of people all of whom know each other. Cliques have many other applications in the sciences and particularly in bioinformatics.
== Definitions ==
A clique, C, in an undirected graph G = (V, E) is a subset of the vertices, C ⊆ V, such that every two distinct vertices are adjacent. This is equivalent to the condition that the induced subgraph of G induced by C is a complete graph. In some cases, the term clique may also refer to the subgraph directly.
A maximal clique is a clique that cannot be extended by including one more adjacent vertex, that is, a clique which does not exist exclusively within the vertex set of a larger clique. Some authors define cliques in a way that requires them to be maximal, and use other terminology for complete subgraphs that are not maximal.
A maximum clique of a graph, G, is a clique, such that there is no clique with more vertices. Moreover, the clique number ω(G) of a graph G is the number of vertices in a maximum clique in G.
The intersection number of G is the smallest number of cliques that together cover all edges of G.
The clique cover number of a graph G is the smallest number of cliques of G whose union covers the set of vertices V of the graph.
A maximum clique transversal of a graph is a subset of vertices with the property that each maximum clique of the graph contains at least one vertex in the subset.
The opposite of a clique is an independent set, in the sense that every clique corresponds to an independent set in the complement graph. The clique cover problem concerns finding as few cliques as possible that include every vertex in the graph.
A related concept is a biclique, a complete bipartite subgraph. The bipartite dimension of a graph is the minimum number of bicliques needed to cover all the edges of the graph.
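The basic definitions above can be checked by brute force on a small example; the graph below (a triangle with a pendant vertex) is an illustration, and the exhaustive search is practical only for tiny graphs, since finding large cliques is NP-complete in general.

```python
from itertools import combinations

def is_clique(adj, vertices):
    """True when every two distinct vertices in the set are adjacent."""
    return all(v in adj[u] for u, v in combinations(vertices, 2))

def clique_number(adj):
    """omega(G): the number of vertices in a maximum clique."""
    nodes = list(adj)
    for size in range(len(nodes), 0, -1):
        if any(is_clique(adj, c) for c in combinations(nodes, size)):
            return size
    return 0

# A triangle {1, 2, 3} with a pendant vertex 4 attached to vertex 3.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
```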
== Mathematics ==
Mathematical results concerning cliques include the following.
Turán's theorem gives a lower bound on the size of a clique in dense graphs. If a graph has sufficiently many edges, it must contain a large clique. For instance, every graph with n vertices and more than ⌊n/2⌋ · ⌈n/2⌉ edges must contain a three-vertex clique.
Ramsey's theorem states that every graph or its complement graph contains a clique with at least a logarithmic number of vertices.
According to a result of Moon & Moser (1965), a graph with 3n vertices can have at most 3^n maximal cliques. The graphs meeting this bound are the Moon–Moser graphs K_{3,3,...}, a special case of the Turán graphs arising as the extremal cases in Turán's theorem.
Hadwiger's conjecture, still unproven, relates the size of the largest clique minor in a graph (its Hadwiger number) to its chromatic number.
The Erdős–Faber–Lovász conjecture relates graph coloring to cliques.
The Erdős–Hajnal conjecture states that families of graphs defined by forbidden graph characterization have either large cliques or large cocliques.
Several important classes of graphs may be defined or characterized by their cliques:
A cluster graph is a graph whose connected components are cliques.
A block graph is a graph whose biconnected components are cliques.
A chordal graph is a graph whose vertices can be ordered into a perfect elimination ordering, an ordering such that the neighbors of each vertex v that come later than v in the ordering form a clique.
A cograph is a graph all of whose induced subgraphs have the property that any maximal clique intersects any maximal independent set in a single vertex.
An interval graph is a graph whose maximal cliques can be ordered in such a way that, for each vertex v, the cliques containing v are consecutive in the ordering.
A line graph is a graph whose edges can be covered by edge-disjoint cliques in such a way that each vertex belongs to exactly two of the cliques in the cover.
A perfect graph is a graph in which the clique number equals the chromatic number in every induced subgraph.
A split graph is a graph in which some clique contains at least one endpoint of every edge.
A triangle-free graph is a graph that has no cliques other than its vertices and edges.
Additionally, many other mathematical constructions involve cliques in graphs. Among them,
The clique complex of a graph G is an abstract simplicial complex X(G) with a simplex for every clique in G
A simplex graph is an undirected graph κ(G) with a vertex for every clique in a graph G and an edge connecting two cliques that differ by a single vertex. It is an example of median graph, and is associated with a median algebra on the cliques of a graph: the median m(A,B,C) of three cliques A, B, and C is the clique whose vertices belong to at least two of the cliques A, B, and C.
The clique-sum is a method for combining two graphs by merging them along a shared clique.
Clique-width is a notion of the complexity of a graph in terms of the minimum number of distinct vertex labels needed to build up the graph from disjoint unions, relabeling operations, and operations that connect all pairs of vertices with given labels. The graphs with clique-width one are exactly the disjoint unions of cliques.
The intersection number of a graph is the minimum number of cliques needed to cover all the graph's edges.
The clique graph of a graph is the intersection graph of its maximal cliques.
Closely related concepts to complete subgraphs are subdivisions of complete graphs and complete graph minors. In particular, Kuratowski's theorem and Wagner's theorem characterize planar graphs by forbidden complete and complete bipartite subdivisions and minors, respectively.
== Computer science ==
In computer science, the clique problem is the computational problem of finding a maximum clique, or all cliques, in a given graph. It is NP-complete, one of Karp's 21 NP-complete problems. It is also fixed-parameter intractable, and hard to approximate. Nevertheless, many algorithms for computing cliques have been developed, either running in exponential time (such as the Bron–Kerbosch algorithm) or specialized to graph families such as planar graphs or perfect graphs for which the problem can be solved in polynomial time.
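The Bron–Kerbosch algorithm mentioned above can be sketched in its simplest (unpivoted) form, which enumerates all maximal cliques; it runs in exponential time in the worst case, and the example graph is an illustration.

```python
# Minimal sketch of the unpivoted Bron-Kerbosch algorithm:
# r = current clique, p = candidate vertices, x = already-processed vertices.

def bron_kerbosch(adj, r=frozenset(), p=None, x=frozenset()):
    if p is None:
        p = frozenset(adj)
    if not p and not x:
        yield set(r)                      # r cannot be extended: maximal
        return
    for v in list(p):
        # Recurse with v added; candidates restricted to v's neighbors.
        yield from bron_kerbosch(adj, r | {v}, p & adj[v], x & adj[v])
        p, x = p - {v}, x | {v}           # v is done; never revisit it

adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
cliques = sorted(sorted(c) for c in bron_kerbosch(adj))
```

On this graph the algorithm reports exactly the two maximal cliques {1, 2, 3} and {3, 4}.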
== Applications ==
The word "clique", in its graph-theoretic usage, arose from the work of Luce & Perry (1949), who used complete subgraphs to model cliques (groups of people who all know each other) in social networks. The same definition was used by Festinger (1949) in an article using less technical terms. Both works deal with uncovering cliques in a social network using matrices. For continued efforts to model social cliques graph-theoretically, see e.g. Alba (1973), Peay (1974), and Doreian & Woodard (1994).
Many different problems from bioinformatics have been modeled using cliques. For instance, Ben-Dor, Shamir & Yakhini (1999) model the problem of clustering gene expression data as one of finding the minimum number of changes needed to transform a graph describing the data into a graph formed as the disjoint union of cliques; Tanay, Sharan & Shamir (2002) discuss a similar biclustering problem for expression data in which the clusters are required to be cliques. Sugihara (1984) uses cliques to model ecological niches in food webs. Day & Sankoff (1986) describe the problem of inferring evolutionary trees as one of finding maximum cliques in a graph that has as its vertices characteristics of the species, where two vertices share an edge if there exists a perfect phylogeny combining those two characters. Samudrala & Moult (1998) model protein structure prediction as a problem of finding cliques in a graph whose vertices represent positions of subunits of the protein. And by searching for cliques in a protein–protein interaction network, Spirin & Mirny (2003) found clusters of proteins that interact closely with each other and have few interactions with proteins outside the cluster. Power graph analysis is a method for simplifying complex biological networks by finding cliques and related structures in these networks.
In electrical engineering, Prihar (1956) uses cliques to analyze communications networks, and Paull & Unger (1959) use them to design efficient circuits for computing partially specified Boolean functions. Cliques have also been used in automatic test pattern generation: a large clique in an incompatibility graph of possible faults provides a lower bound on the size of a test set. Cong & Smith (1993) describe an application of cliques in finding a hierarchical partition of an electronic circuit into smaller subunits.
In chemistry, Rhodes et al. (2003) use cliques to describe chemicals in a chemical database that have a high degree of similarity with a target structure. Kuhl, Crippen & Friesen (1983) use cliques to model the positions in which two chemicals will bind to each other.
== See also ==
Clique game
== Notes ==
== References ==
== External links ==
Weisstein, Eric W., "Clique", MathWorld
Weisstein, Eric W., "Clique Number", MathWorld
In the mathematical field of graph theory, the Erdős–Rényi model refers to one of two closely related models for generating random graphs or the evolution of a random network. These models are named after Hungarian mathematicians Paul Erdős and Alfréd Rényi, who introduced one of the models in 1959. Edgar Gilbert introduced the other model contemporaneously with and independently of Erdős and Rényi. In the model of Erdős and Rényi, all graphs on a fixed vertex set with a fixed number of edges are equally likely. In the model introduced by Gilbert, also called the Erdős–Rényi–Gilbert model, each edge has a fixed probability of being present or absent, independently of the other edges. These models can be used in the probabilistic method to prove the existence of graphs satisfying various properties, or to provide a rigorous definition of what it means for a property to hold for almost all graphs.
== Definition ==
There are two closely related variants of the Erdős–Rényi random graph model.
In the G(n, M) model, a graph is chosen uniformly at random from the collection of all graphs which have n nodes and M edges. The nodes are considered to be labeled, meaning that graphs obtained from each other by permuting the vertices are considered to be distinct. For example, in the G(3, 2) model, there are three two-edge graphs on three labeled vertices (one for each choice of the middle vertex in a two-edge path), and each of these three graphs is included with probability 1/3.
In the G(n, p) model, a graph is constructed by connecting labeled nodes randomly. Each edge is included in the graph with probability p, independently from every other edge. Equivalently, the probability of generating each particular graph that has n nodes and M edges is

p^M (1 − p)^((n choose 2) − M).

The parameter p in this model can be thought of as a weighting function; as p increases from 0 to 1, the model becomes more and more likely to include graphs with more edges and less and less likely to include graphs with fewer edges. In particular, the case p = 1/2 corresponds to the case where all 2^(n choose 2) graphs on n vertices are chosen with equal probability.
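Both sampling procedures can be sketched directly; the graph sizes and random seed below are arbitrary choices for illustration.

```python
import random
from itertools import combinations

def gnp(n, p, rng):
    """G(n, p): include each of the (n choose 2) edges independently."""
    return [e for e in combinations(range(n), 2) if rng.random() < p]

def gnm(n, m, rng):
    """G(n, M): a uniformly random graph with n labeled nodes and M edges."""
    return rng.sample(list(combinations(range(n), 2)), m)

rng = random.Random(0)
g1 = gnp(6, 0.5, rng)   # edge count is random, around (6 choose 2) * 0.5
g2 = gnm(6, 7, rng)     # edge count is exactly 7
```

The difference between the models is visible in the code: G(n, M) fixes the edge count, while G(n, p) fixes only the per-edge probability.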
The behavior of random graphs is often studied in the case where n, the number of vertices, tends to infinity. Although p and M can be fixed in this case, they can also be functions depending on n. For example, the statement that almost every graph in G(n, 2 ln(n)/n) is connected means that, as n tends to infinity, the probability that a graph on n vertices with edge probability 2 ln(n)/n is connected tends to 1.
== Comparison between the two models ==
The expected number of edges in G(n, p) is (n choose 2) p, with a standard deviation asymptotic to s(n) = n √(p(1 − p)). Therefore, a rough heuristic is that if some property of G(n, M) with M = (n choose 2) p does not significantly change in behavior when M is changed by up to s(n), then G(n, p) should share that behavior.
This is formalized in a result of Łuczak. Suppose that P is a graph property such that for every sequence M = M(n) with |M − (n choose 2) p| = O(s(n)), the probability that a graph sampled from G(n, M) has property P tends to a as n → ∞. Then the probability that G(n, p) has property P also tends to a.
Implications in the other direction are less reliable, but a partial converse (also shown by Łuczak) is known when P is monotone with respect to the subgraph ordering (meaning that if A is a subgraph of B and B satisfies P, then A will satisfy P as well). Let ε(n) ≫ s(n)/n³, and suppose that a monotone property P is true of both G(n, p − ε) and G(n, p + ε) with a probability tending to the same constant a as n → ∞. Then the probability that G(n, (n choose 2) p) has property P also tends to a.
For example, both directions of equivalency hold if P is the property of being connected, or if P is the property of containing a Hamiltonian cycle. However, properties that are not monotone (e.g. the property of having an even number of edges) or that change too rapidly (e.g. the property of having at least \tfrac{1}{2}\tbinom{n}{2} edges) may behave differently in the two models.
In practice, the G(n, p) model is the one more commonly used today, in part due to the ease of analysis allowed by the independence of the edges.
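The heuristic relating the two models can be checked empirically. The following is a minimal standard-library sketch that samples G(n, p) by flipping an independent coin for each of the \tbinom{n}{2} possible edges and compares the average edge count to \tbinom{n}{2}p; the particular n, p, seed, and trial count are arbitrary choices made for speed, not values from the text.

```python
import random
from math import comb

def gnp_edge_count(n, p, rng):
    """Sample G(n, p): each of the C(n, 2) possible edges is included
    independently with probability p; return the number of edges."""
    return sum(1 for i in range(n) for j in range(i + 1, n) if rng.random() < p)

rng = random.Random(0)
n, p, trials = 100, 0.1, 200
counts = [gnp_edge_count(n, p, rng) for _ in range(trials)]
mean = sum(counts) / trials
expected = comb(n, 2) * p  # C(100, 2) * 0.1 = 495 expected edges
print(expected, round(mean, 1))
```

The sample mean settles near \tbinom{n}{2}p, while individual samples fluctuate on the scale of s(n), which is why properties insensitive to O(s(n)) changes in M behave alike in the two models.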
== Properties of G(n, p) ==
With the notation above, a graph in G(n, p) has on average \tbinom{n}{2}p edges. The distribution of the degree of any particular vertex is binomial:
P(\deg(v) = k) = \binom{n-1}{k} p^{k} (1-p)^{n-1-k},
where n is the total number of vertices in the graph. Since

P(\deg(v) = k) → (np)^{k} e^{-np} / k!  as n → ∞ and np = constant,

this distribution is Poisson for large n and np = const.
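The binomial-to-Poisson convergence is easy to verify numerically. The sketch below compares the two probability mass functions for n = 1000 with np = 5 held fixed; the specific parameters are illustrative choices, not values from the text.

```python
from math import comb, exp, factorial

def binom_deg_pmf(k, n, p):
    """P(deg(v) = k) in G(n, p): binomial over the n - 1 potential neighbors."""
    return comb(n - 1, k) * p**k * (1 - p) ** (n - 1 - k)

def poisson_pmf(k, mean):
    """The Poisson limit reached as n -> infinity with np held constant."""
    return mean**k * exp(-mean) / factorial(k)

n, p = 1000, 0.005  # np = 5 stays fixed
for k in range(8):
    print(k, round(binom_deg_pmf(k, n, p), 4), round(poisson_pmf(k, n * p), 4))
```

Already at n = 1000 the two pmfs agree to within about 10^-3 at every k, illustrating why the Poisson approximation is used for sparse random graphs.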
In a 1960 paper, Erdős and Rényi described the behavior of G(n, p) very precisely for various values of p. Their results included the following:
If np < 1, then a graph in G(n, p) will almost surely have no connected components of size larger than O(log(n)).
If np = 1, then a graph in G(n, p) will almost surely have a largest component whose size is of order n^{2/3}.
If np → c > 1, where c is a constant, then a graph in G(n, p) will almost surely have a unique giant component containing a positive fraction of the vertices. No other component will contain more than O(log(n)) vertices.
If p < (1 − ε) ln(n)/n, then a graph in G(n, p) will almost surely contain isolated vertices, and thus be disconnected.
If p > (1 + ε) ln(n)/n, then a graph in G(n, p) will almost surely be connected.
Thus ln(n)/n is a sharp threshold for the connectedness of G(n, p).
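The sharpness of the connectivity threshold shows up clearly even at modest n. The following sketch samples graphs at edge probabilities well below and well above ln(n)/n and checks connectivity by graph search; the graph size, multipliers, seed, and trial count are arbitrary choices made for speed.

```python
import random
from math import log

def is_connected(n, p, rng):
    """Sample G(n, p) and test connectivity by a search from vertex 0."""
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

rng = random.Random(1)
n, trials = 300, 20
t = log(n) / n  # the sharp threshold ln(n)/n
frac_low = sum(is_connected(n, 0.5 * t, rng) for _ in range(trials)) / trials
frac_high = sum(is_connected(n, 2.0 * t, rng) for _ in range(trials)) / trials
print(frac_low, frac_high)
```

Below the threshold nearly every sample contains isolated vertices and is disconnected; above it nearly every sample is connected.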
Further properties of the graph can be described almost precisely as n tends to infinity. For example, there is a k(n) (approximately equal to 2log2(n)) such that the largest clique in G(n, 0.5) has almost surely either size k(n) or k(n) + 1.
Thus, even though finding the size of the largest clique in a graph is NP-complete, the size of the largest clique in a "typical" graph (according to this model) is very well understood.
Edge-dual graphs of Erdős–Rényi graphs are graphs with nearly the same degree distribution, but with degree correlations and a significantly higher clustering coefficient.
== Relation to percolation ==
In percolation theory one examines a finite or infinite graph and removes edges (or links) randomly. Thus the Erdős–Rényi process is in fact unweighted link percolation on the complete graph. (One refers to percolation in which nodes and/or links are removed with heterogeneous weights as weighted percolation.) As percolation theory has much of its roots in physics, much of the research was done on lattices in Euclidean spaces. The transition at np = 1 from giant component to small component has analogs for these graphs, but for lattices the transition point is difficult to determine. Physicists often refer to the study of the complete graph as a mean-field theory. Thus the Erdős–Rényi process is the mean-field case of percolation.
Some significant work was also done on percolation on random graphs. From a physicist's point of view this would still be a mean-field model, so the justification of the research is often formulated in terms of the robustness of the graph, viewed as a communication network. Consider a random graph of n ≫ 1 nodes with an average degree ⟨k⟩. Remove a fraction 1 − p′ of the nodes at random, leaving only a fraction p′ of the network. There exists a critical percolation threshold p′_c = 1/⟨k⟩ below which the network becomes fragmented, while above p′_c a giant connected component of order n exists. The relative size of the giant component, P∞, is given by

P∞ = p′[1 − exp(−⟨k⟩P∞)].
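The giant-component equation is a fixed-point relation in P∞, so it can be solved numerically by simple iteration. A minimal sketch (the starting point, iteration count, and example parameters are arbitrary illustrative choices):

```python
from math import exp

def giant_component_fraction(k_avg, p_prime, iters=200):
    """Iterate P = p' * (1 - exp(-<k> * P)) to its fixed point, a numerical
    solution of the mean-field percolation equation for the giant component."""
    P = 1.0  # start from the fully-connected guess and relax downward
    for _ in range(iters):
        P = p_prime * (1.0 - exp(-k_avg * P))
    return P

# With <k> = 4 the threshold is p'_c = 1/<k> = 0.25:
print(giant_component_fraction(4.0, 0.20))  # below p'_c: only P = 0 survives
print(giant_component_fraction(4.0, 0.60))  # above p'_c: a positive fraction
```

Below p′_c the iteration collapses to zero (the network is fragmented); above it the iteration converges to a positive relative size for the giant component.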
== Caveats ==
Both of the major assumptions of the G(n, p) model (that edges are independent and that each edge is equally likely) may be inappropriate for modeling certain real-life phenomena. Erdős–Rényi graphs have low clustering, unlike many social networks. Some modeling alternatives include the Barabási–Albert model and the Watts–Strogatz model. These alternative models are not percolation processes, but instead represent a growth and a rewiring model, respectively. Another alternative family of random graph models, capable of reproducing many real-life phenomena, is that of exponential random graph models.
== History ==
The G(n, p) model was first introduced by Edgar Gilbert in a 1959 paper studying the connectivity threshold mentioned above. The G(n, M) model was introduced by Erdős and Rényi in their 1959 paper. As with Gilbert, their first investigations were as to the connectivity of G(n, M), with the more detailed analysis following in 1960.
== Continuum limit representation of critical G(n, p) ==
A continuum limit of the graph was obtained when p is of order 1/n. Specifically, consider the sequence of graphs G_n := G(n, 1/n + λn^{−4/3}) for λ ∈ ℝ. The limit object can be constructed as follows:
First, generate a diffusion W^λ(t) := W(t) + λt − t²/2, where W is a standard Brownian motion.
From this process, we define the reflected process R^λ(t) := W^λ(t) − inf_{s∈[0,t]} W^λ(s). This process can be seen as containing many successive excursions (not quite Brownian excursions). Because the drift of W^λ is dominated by −t²/2, these excursions become shorter and shorter as t → +∞. In particular, they can be sorted in order of decreasing length: we can partition ℝ into intervals (C_i)_{i∈ℕ} of decreasing lengths such that R^λ restricted to C_i is a Brownian excursion for any i ∈ ℕ.
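The drift-plus-reflection construction can be illustrated with a crude Euler discretization of W^λ and its running minimum. This is only a visual sketch (the step size, horizon, and seed are arbitrary choices), not a statement about the continuum object itself.

```python
import random
from math import sqrt

def reflected_process(lam, T=10.0, dt=0.001, seed=0):
    """Simulate W^lam(t) = W(t) + lam*t - t^2/2 on a grid, and its
    reflection at the running minimum:
    R^lam(t) = W^lam(t) - inf_{s <= t} W^lam(s)."""
    rng = random.Random(seed)
    w, t, run_min, R = 0.0, 0.0, 0.0, []
    while t < T:
        w += rng.gauss(0.0, sqrt(dt))   # Brownian increment over dt
        t += dt
        x = w + lam * t - t * t / 2.0   # add the parabolic drift
        run_min = min(run_min, x)       # running infimum of W^lam
        R.append(x - run_min)           # reflected value, always >= 0
    return R

R = reflected_process(1.0)
print(len(R), min(R))
```

Plotting R shows the successive excursions above zero; because the −t²/2 drift eventually dominates, the late excursions are visibly shorter than the early ones.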
Now, consider an excursion (e(s))_{s∈[0,1]}. Construct a random graph as follows:
Construct a real tree T_e (see Brownian tree).
Consider a Poisson point process Ξ on [0,1] × ℝ₊ with unit intensity. To each point (x, s) ∈ Ξ such that x ≤ e(s) correspond an underlying internal node and a leaf of the tree T_e. Identifying the two vertices, the tree T_e becomes a graph Γ_e.
Applying this procedure, one obtains a sequence of random infinite graphs of decreasing sizes: (Γ_i)_{i∈ℕ}. The theorem states that this sequence of graphs corresponds in a certain sense to the limit object of G_n as n → +∞.
== See also ==
Rado graph – Infinite graph containing all countable graphs, the graph formed by extending the G(n, p) model to graphs with a countably infinite number of vertices. Unlike in the finite case, the result of this infinite process is (with probability 1) the same graph, up to isomorphism.
Dual-phase evolution – Process that drives self-organization within complex adaptive systems; describes ways in which properties associated with the Erdős–Rényi model contribute to the emergence of order in systems.
Exponential random graph models – statistical models for network analysis; these describe a general probability distribution of graphs on n nodes given a set of network statistics and various parameters associated with them.
Stochastic block model – Concept in network science, a generalization of the Erdős–Rényi model for graphs with latent community structure
Watts–Strogatz model – Method of generating random small-world graphs
Barabási–Albert model – Scale-free network generation algorithm
== References ==
== Literature ==
West, Douglas B. (2001). Introduction to Graph Theory (2nd ed.). Prentice Hall. ISBN 0-13-014400-2.
Newman, M. E. J. (2010). Networks: An Introduction. Oxford.
== External links ==
Video: Erdős–Rényi Random Graph
An Internet area network (IAN) is a concept for a communications network that connects voice and data endpoints within a cloud environment over IP, replacing an existing local area network (LAN), wide area network (WAN) or the public switched telephone network (PSTN).
== Overview ==
An IAN securely connects endpoints through the public Internet to communicate and exchange information and data without being tied to a physical location.
The IAN eliminates a geographic profile for the entire network because the applications and communications services have become virtualized. Endpoints need to be connected only over a broadband connection across the Internet. Unlike IAN, LAN interconnects computers in a limited area, such as a home, a school, a computer laboratory, or an office building. The WAN also differs from the IAN because it is a network that covers a broad area, such as any telecommunications network that links across metropolitan, regional, or national boundaries, using private or public network transports.
Hosted in the cloud by a managed services provider, an IAN platform offers users secure access to information from anywhere, anytime, via an Internet connection. Users can access telephony, voicemail, email, and fax services from any connected endpoint. The hosted model reduces IT and communications expenses for businesses, protects against data loss and disaster downtime, and realizes a greater return on their invested resources through increased employee productivity and reduced telecom costs.
== History ==
The IAN is rooted in the rise of cloud computing. The underlying concept dates back to the 1950s, when large-scale mainframes became available in academia and corporations, accessible via thin clients and terminal computers. Because it was costly to buy a mainframe, it became essential to find ways to get the greatest return on the investment, allowing multiple users to share both the physical access to the computer from multiple terminals as well as to share the CPU time, eliminating periods of inactivity, which became known in the industry as time-sharing.
The increasing demand and use of computers in universities and research labs in the late 1960s generated the need for high-speed interconnections between computer systems. A 1970 report from the Lawrence Radiation Laboratory detailing the growth of their "Octopus" network gave a good indication of the situation.
As computers became more prevalent, scientists and technologists explored ways to make large-scale computing power available to more users through time-sharing, experimenting with algorithms to provide the optimal use of the infrastructure, platform, and applications with prioritized access to the CPU and efficiency for the end users.
John McCarthy opined in the 1960s that "computation may someday be organized as a public utility." Almost all the modern-day characteristics of cloud computing (elastic provision, provided as a utility, online, illusion of infinite supply), the comparison to the electricity industry, and the use of public, private, government, and community forms were thoroughly explored in Douglas Parkhill's 1966 book, The Challenge of the Computer Utility. Other scholars have shown that cloud computing's roots go back to the 1950s, when scientist Herb Grosch (the author of Grosch's law) postulated that the entire world would operate on dumb terminals powered by about 15 large data centers. Due to the expense of these powerful computers, many corporations and other entities could avail themselves of computing capability through time-sharing, and several organizations, such as GE's GEISCO, IBM subsidiary The Service Bureau Corporation (SBC, founded in 1957), Tymshare (founded in 1966), National CSS (founded in 1967 and bought by Dun & Bradstreet in 1979), Dial Data (bought by Tymshare in 1968), and Bolt, Beranek, and Newman (BBN) marketed time-sharing as a commercial venture.
The development of the Internet from being document-centric via semantic data towards more and more services was described as a "Dynamic Web." This contribution focused on the need for better meta-data to explain implementation details and conceptual details of model-based applications.
In the 1990s, telecommunications companies that previously offered primarily dedicated point-to-point data circuits began offering virtual private network (VPN) services with comparable quality of service but at a much lower cost. By switching traffic to balance utilization as they saw fit, they were able to optimize their overall network usage. The cloud symbol was used to denote the demarcation point between the provider's responsibility and the users' responsibility. Cloud computing extends this boundary to cover servers and the network infrastructure.
After the dot-com bubble, Amazon played a crucial role in the development of cloud computing by modernizing their data centers, which, like most computer networks, were using as little as 10% of their capacity at any time to leave room for occasional spikes. Having found that the new cloud architecture resulted in significant internal efficiency improvements whereby small, fast-moving "two-pizza teams" (teams small enough to be fed with two pizzas) could add new features faster and more efficiently, Amazon initiated a new product development effort to provide cloud computing to external customers and launched Amazon Web Services (AWS) on a utility computing basis in 2006.
In early 2008, Eucalyptus became the first open-source, AWS API-compatible platform for deploying private clouds. In early 2008, OpenNebula, enhanced in the RESERVOIR European Commission-funded project, became the first open-source software for deploying private and hybrid clouds and for the federation of clouds. In the same year, efforts were focused on providing Quality of Service guarantees (as required by real-time interactive applications) to cloud-based infrastructures in the IRMOS European Commission-funded project's framework, resulting in a real-time cloud environment. By mid-2008, Gartner saw an opportunity for cloud computing "to shape the relationship among consumers of IT services, those who use IT services and those who sell them" and observed that "organizations are switching from company-owned hardware and software assets to per-use service-based models" so that the "projected shift to computing... will result in dramatic growth in IT products in some areas and significant reductions in other areas."
In 2011, RESERVOIR was established in Europe to create open-source technologies that allow cloud providers to build an advanced cloud by balancing workloads, lowering costs, and moving workloads across geographic locations through a federation of clouds. Also, in 2011, IBM announced that the Smarter Computing framework would support a Smarter Planet. Cloud computing is a critical piece among the various components of the Smarter Computing foundation.
Today, the ubiquitous availability of high-capacity networks, low-cost computers and storage devices, and the widespread adoption of hardware virtualization, service-oriented architecture, and autonomic and utility computing have led to tremendous growth in cloud computing. Virtual worlds and peer-to-peer architectures have paved the way for the concept of an IAN.
iAreaNet was founded in 1999 by CEO James DeCrescenzo as a company called Internet Area Network, devoted to providing offsite data storage and disaster prevention before the cloud existed in widely deployed commercial form. It pioneered the idea of an IAN. Since then, it has strengthened operations and made significant investments in developing a robust infrastructure to provide businesses with an array of technology solutions, including the patent-pending iAreaOffice, which commercializes the concept of an IAN by eliminating the need for a traditional LAN, WAN, or telephone system for business communications.
== See also ==
Cloud-computing comparison
Cloud database
Cloud storage
Cloud collaboration
VPN
== Notes ==
== References ==
Winkleman, Roy. “Networking Handbook.” Florida Center for Instructional Technology College of Education. 2009-2013. http://fcit.usf.edu/network/chap1/chap1.htm
iAreaNetwork Vision Statement. https://web.archive.org/web/20130408032637/http://iareanet.com/about-the-cloud-company.html
Martínez-Mateo, J., Munoz-Hernandez, S. and Pérez-Rey, D. “A Discussion of Thin Client Technology for Computer Labs.” University of Madrid. May 2010. https://www.researchgate.net/publication/45917654_A_Discussion_of_Thin_Client_Technology_for_Computer_Labs
McCarthy, John. “Reminiscences on the History of Time Sharing.” Stanford University. 1983 Winter or Spring. https://web.archive.org/web/20071020032705/http://www-formal.stanford.edu/jmc/history/timesharing/timesharing.html
Mendicino, Samuel. Computer Networks. 1972. pp 95–100. http://rogerdmoore.ca/PS/OCTOA/OCTO.html Archived 2013-10-20 at the Wayback Machine
Garfinkle, Simson. “The Cloud Imperative.” MIT Technology Review. Oct. 3, 2011. http://www.technologyreview.com/news/425623/the-cloud-imperative/
https://www.amazon.com/Challenge-Computer-Utility-Douglas-Parkhill/dp/0201057204
Deboosere, L., De Wachter, J., Simoens, P., De Turck, F., Dhoedt, B., and Demeester, P. “Thin Client Computing Solutions in Low- and High-Motion Scenarios.” Third International Conference on Networking and Services (ICNS), 2007.
Gardner, W. David. “Author Of Grosch's Law Going Strong At 87.” InformationWeek. April 12, 2005. http://www.informationweek.com/author-of-groschs-law-going-strong-at-87/160701576 Archived 2012-10-23 at the Wayback Machine
“A History of the Dynamic Web.” Pingdom. Dec. 7. 2007. http://royal.pingdom.com/2007/12/07/a-history-of-the-dynamic-web/ Archived 2018-01-20 at the Wayback Machine
“Virtual Private Networks: Managing Telecom’s Golden Horde.” Billing World. May 1, 1999. http://www.billingworld.com/articles/1999/05/virtual-private-networks-managing-telecom-s-golde.aspx
Anders, George. “Inside Amazon's Idea Machine: How Bezos Decodes The Customer.” Forbes. April 2012
Arrington, Michael. “Interview with Jeff Bezos On Amazon Web Services.” TechCrunch, Nov. 14, 2006. https://techcrunch.com/2006/11/14/interview-with-jeff-bezos-on-amazon-web-services/
OpenNebula Website http://www.opennebula.org/start
IRMOS Website http://www.irmosproject.eu/ Archived 2018-10-10 at the Wayback Machine
Plummer, Daryl. “Cloud Computing Confusion Leads to Opportunity.” Gartner Inc. June 2008
RESERVOIR Website http://www.reservoir-fp7.eu/
IBM Smarter Planet Home Page. http://www.ibm.com/smarter-computing/us/en/analytics-infrastructure/
Naone, Erica. “Peer to Peer Virtual Worlds.” MIT Technology Review. April 16, 2008. http://www.technologyreview.com/news/409912/peer-to-peer-virtual-worlds/
== External links ==
https://web.archive.org/web/20130408032637/http://iareanet.com/about-the-cloud-company.html
A ring network is a network topology in which each node connects to exactly two other nodes, forming a single continuous pathway for signals through each node – a ring. Data travels from node to node, with each node along the way handling every packet.
Rings can be unidirectional, with all traffic travelling either clockwise or anticlockwise around the ring, or bidirectional (as in SONET/SDH). Because a unidirectional ring topology provides only one pathway between any two nodes, unidirectional ring networks may be disrupted by the failure of a single link. A node failure or cable break might isolate every node attached to the ring. In response, some ring networks add a "counter-rotating ring" (C-Ring) to form a redundant topology: in the event of a break, data are wrapped back onto the complementary ring before reaching the end of the cable, maintaining a path to every node along the resulting C-Ring. Such "dual ring" networks include the ITU-T's PSTN telephony systems network Signalling System No. 7 (SS7), Spatial Reuse Protocol, Fiber Distributed Data Interface (FDDI), Resilient Packet Ring, and Ethernet Ring Protection Switching. IEEE 802.5 networks – also known as IBM Token Ring networks – avoid the weakness of a ring topology altogether: they actually use a star topology at the physical layer and a media access unit (MAU) to imitate a ring at the datalink layer. Ring networks are used by ISPs to provide data backhaul services, connecting the ISP's facilities such as central offices/headends together.
All Signalling System No. 7 (SS7), and some SONET/SDH rings have two sets of bidirectional links between nodes. This allows maintenance or failures at multiple points of the ring usually without loss of the primary traffic on the outer ring by switching the traffic onto the inner ring past the failure points.
== Advantages ==
Very orderly network where every device has access to the token and the opportunity to transmit
Performs better than a bus topology under heavy network load
Does not require a central node to manage the connectivity between the computers
Due to the point-to-point line configuration of devices with a device on either side (each device is connected to its immediate neighbor), it is quite easy to install and reconfigure since adding or removing a device requires moving just two connections.
Point-to-point line configuration makes it easy to identify and isolate faults.
Ring Protection reconfiguration for line faults of bidirectional rings can be very fast, as switching happens at a high level, and thus the traffic does not require individual rerouting.
Ring topology helps mitigate collisions in a network.
== Disadvantages ==
One malfunctioning workstation can create problems for the entire network. This can be solved by using a dual ring or a switch that closes off the break.
Moving, adding and changing the devices can affect the network
Communication delay is directly proportional to number of nodes in the network
Bandwidth is shared on all links between devices
More difficult to reconfigure than a star: adding or removing a node requires shutting down and reconfiguring the ring
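The delay disadvantage listed above can be made concrete: on a unidirectional ring, a packet from node s to node d traverses (d − s) mod n links, so the average delay between distinct node pairs is n/2 hops and grows linearly with the number of nodes. A small illustrative sketch:

```python
def ring_hops(src, dst, n):
    """Hops from src to dst on a unidirectional ring of n nodes,
    where traffic travels in one direction only."""
    return (dst - src) % n

def average_hops(n):
    """Mean hop count over all ordered pairs of distinct nodes: n / 2."""
    pairs = [(s, d) for s in range(n) for d in range(n) if s != d]
    return sum(ring_hops(s, d, n) for s, d in pairs) / len(pairs)

print(average_hops(8))   # 4.0
print(average_hops(16))  # 8.0
```

Doubling the node count doubles the average hop count, which is the "communication delay is directly proportional to the number of nodes" disadvantage in miniature.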
== Access protocols ==
Rings can be used to carry circuits or packets or a combination of both. SDH rings carry circuits. Circuits are set up with out-of-band signalling protocols, whereas packets are usually carried via a Medium Access Control Protocol (MAC).
The purpose of media access control is to determine which station transmits when. As in any MAC protocol, the aims are to resolve contention and provide fairness. There are three main classes of media access protocol for ring networks: slotted, token and register insertion.
The slotted ring treats the latency of the ring network as a large shift register that permanently rotates. It is formatted into so-called slots of fixed size. A slot is either full or empty, as indicated by control flags in the head of the slot. A station that wishes to transmit waits for an empty slot and puts data in. Other stations can copy out the data and may free the slot, or it may circulate back to the source who frees it. An advantage of source-release, if the sender is banned from immediately re-using it, is that all other stations get the chance to use it first, hence avoiding bandwidth hogging. The pre-eminent example of the slotted ring is the Cambridge Ring.
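The slot mechanics described above can be sketched as a toy simulation. This is a deliberately simplified illustration with made-up parameters (station count, slot count), not a model of the Cambridge Ring or any specific protocol: empty slots rotate past the stations, a station with a pending frame claims the first empty slot, the destination copies the data, and the source frees the slot when it returns (source-release).

```python
from collections import deque

def slotted_ring(frames, n_stations=4, n_slots=2, max_ticks=100):
    """Toy slotted-ring MAC with source-release. `frames` maps a station
    index to a list of destination stations it wants to send to."""
    slots = [None] * n_slots  # None = empty slot, else (src, dst)
    pending = {s: deque(frames.get(s, [])) for s in range(n_stations)}
    delivered = []
    for tick in range(max_ticks):
        for i in range(n_slots):
            station = (tick + i) % n_stations  # station slot i is passing now
            slot = slots[i]
            if slot is not None:
                src, dst = slot
                if station == dst:
                    delivered.append((src, dst))  # destination copies the data
                if station == src:
                    slots[i] = None               # source frees its own slot
            elif pending[station]:
                # station claims the empty slot for its next frame
                slots[i] = (station, pending[station].popleft())
    return delivered

print(slotted_ring({0: [2], 1: [3]}))
```

Because a slot stays full until it circulates back to its source, a sender cannot immediately reclaim it, which is the fairness property noted above.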
== Misconceptions ==
"Token Ring is an example of a ring topology." 802.5 (Token Ring) networks do not use a ring topology at layer 1. Token Ring networks are technologies developed by IBM typically used in local area networks. Token Ring (802.5) networks imitate a ring at layer 2 but use a physical star at layer 1.
"Rings prevent collisions." The term "ring" refers only to the layout of the cables. It is true that there are no collisions on an IBM Token Ring, but this is because of the layer 2 media access control method, not the physical topology (which again is a star, not a ring). Token passing, not rings, prevents collisions.
"Token passing happens on rings." Token passing is a way of managing access to the cable, implemented at the MAC sublayer of layer 2. Ring topology is the cable layout at layer one. It is possible to do token passing on a bus (802.4) a star (802.5) or a ring (FDDI). Token passing is not restricted to rings.
== References ==
Optical networking is a means of communication that uses signals encoded in light to transmit information in various types of telecommunications networks. These include limited range local-area networks (LAN) or wide area networks (WANs), which cross metropolitan and regional areas as well as long-distance national, international and transoceanic networks. It is a form of optical communication that relies on optical amplifiers, lasers or LEDs and wavelength-division multiplexing (WDM) to transmit large quantities of data, generally across fiber-optic cables. Because it is capable of achieving extremely high bandwidth, it is an enabling technology for the Internet and telecommunication networks that transmit the vast majority of all human and machine-to-machine information.
== Types ==
=== Fiber-optic networks ===
The most common fiber-optic networks are communication networks, mesh networks or ring networks commonly used in metropolitan, regional, national and international systems. Another variant of fiber-optic networks is the passive optical network, which uses unpowered optical splitters to link one fiber to multiple premises for last mile applications.
=== Free-space optical networks ===
Free-space optical networks use many of the same principles as a fiber-optic network but transmit their signals across open space without the use of fiber. Several planned satellite constellations such as SpaceX's Starlink intended for global internet provisioning will use wireless laser communication to establish optical mesh networks between satellites in outer space. Airborne optical networks between high-altitude platforms are planned as part of Google's Project Loon and Facebook Aquila with the same technology.
Free-space optical networks can also be used to set up temporary terrestrial networks e.g. to link LANs on a campus.
== Components ==
Components of a fiber-optical networking system include:
Fiber. Multi-mode or single-mode.
Laser or LED light source.
Multiplexer/demultiplexer, also called mux/demux, filter, or prism. These can include Optical Add/Drop Multiplexer (OADM) and Reconfigurable Optical Add/Drop Multiplexer (ROADM).
Optical switch, to direct light between ports without an optical-electrical-optical conversion
Optical splitter, to send a signal down different fiber paths.
Circulator, to tie in other components, such as an OADM.
Optical amplifier.
Wave division multiplexer.
== Transmission medium ==
At its inception, the telecommunications network relied on copper to carry information. But the bandwidth of copper is limited by its physical characteristics—as the frequency of the signal increases to carry more data, more of the signal's energy is lost as heat. Additionally, electrical signals can interfere with each other when the wires are spaced too close together, a problem known as crosstalk. In 1940, the first communication system relied on coaxial cable that operated at 3 MHz and could carry 300 telephone conversations or one television channel. By 1975, the most advanced coaxial system had a bit rate of 274 Mbit/s, but such high-frequency systems require a repeater approximately every kilometer to strengthen the signal, making such a network expensive to operate.
It was clear that light waves could have much higher bit rates without crosstalk. In 1957, Gordon Gould first described the design of the optical amplifier and the laser that was demonstrated in 1960 by Theodore Maiman. The laser is a source for light waves, but a medium was needed to carry the light through a network. In 1960, glass fibers were in use to transmit light into the body for medical imaging, but they had high optical loss—light was absorbed as it passed through the glass at a rate of 1 decibel per meter, a phenomenon known as attenuation. In 1964, Charles Kao showed that to transmit data for long distances, a glass fiber would need loss no greater than 20 dB per kilometer. A breakthrough came in 1970, when Donald B. Keck, Robert D. Maurer, and Peter C. Schultz of Corning Incorporated designed a glass fiber, made of fused silica, with a loss of only 16 dB/km. Their fiber was able to carry 65,000 times more information than copper.
The first fiber-optic system for live telephone traffic was in 1977 in Long Beach, Calif., by General Telephone and Electronics, with a data rate of 6 Mbit/s. Early systems used infrared light at a wavelength of 800 nm, and could transmit at up to 45 Mbit/s with repeaters approximately 10 km apart. By the early 1980s, lasers and detectors that operated at 1300 nm, where the optical loss is 1 dB/km, had been introduced. By 1987, they were operating at 1.7 Gbit/s with repeater spacing of about 50 km.
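Because fiber loss is quoted in decibels per kilometer, the usable span for a given optical power budget follows directly from the logarithmic loss formula P_out = P_in · 10^(−loss_dB/10). The sketch below uses the loss figures quoted above; the 30 dB power budget is a hypothetical value chosen for illustration.

```python
def output_power_mw(input_mw, loss_db_per_km, km):
    """Received optical power after km of fiber: dB loss accumulates
    linearly with distance, power falls off as 10^(-dB/10)."""
    return input_mw * 10 ** (-(loss_db_per_km * km) / 10)

def max_span_km(loss_db_per_km, budget_db=30.0):
    """Distance at which a (hypothetical) 30 dB link budget is exhausted."""
    return budget_db / loss_db_per_km

print(max_span_km(1000))  # 1960s glass at ~1 dB/m (1000 dB/km): 0.03 km
print(max_span_km(16))    # Corning's 1970 fiber at 16 dB/km: 1.875 km
print(output_power_mw(1.0, 1.0, 10))  # 1 dB/km fiber at 1300 nm over 10 km
```

At 1 dB/meter, early glass exhausted the budget in meters, which is why Kao's 20 dB/km target and Corning's 16 dB/km fiber were the breakthroughs that made long spans feasible.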
== Optical amplification ==
The capacity of fiber optic networks has increased in part due to improvements in components, such as optical amplifiers and optical filters that can separate light waves into frequencies with less than 50 GHz difference, fitting more channels into a fiber. The erbium-doped optical amplifier (EDFA) was developed by David Payne at the University of Southampton in 1986 using atoms of the rare earth erbium that are distributed through a length of optical fiber. A pump laser excites the atoms, which emit light, thus boosting the optical signal. As the paradigm shift in network design proceeded, a broad range of amplifiers emerged because most optical communication systems used optical fiber amplifiers. Erbium-doped amplifiers were the most commonly used means of supporting dense wavelength division multiplexing systems. In fact, EDFAs were so prevalent that, as WDM became the technology of choice in the optical networks, the erbium amplifier became "the optical amplifier of choice for WDM applications." Today, EDFAs and hybrid optical amplifiers are considered the most important components of wave division multiplexing systems and networks.
== Wavelength division multiplexing ==
Using optical amplifiers, the capacity of fibers to carry information was dramatically increased with the introduction of wavelength-division multiplexing (WDM) in the early 1990s. AT&T's Bell Labs developed a WDM process in which a prism splits light into different wavelengths, which could travel through a fiber simultaneously. The peak wavelength of each beam is spaced far enough apart that the beams are distinguishable from one another, creating multiple channels within a single fiber. The earliest WDM systems had only two or four channels—AT&T, for example, deployed an oceanic 4-channel long-haul system in 1995. The erbium-doped amplifiers on which they depend, however, did not amplify signals uniformly across their spectral gain region. During signal regeneration, slight discrepancies in various frequencies introduced an intolerable level of noise, making WDM with greater than 4 channels impractical for high-capacity fiber communications.
To address this limitation, Optelecom, Inc. and General Instruments Corp. developed components to increase fiber bandwidth with far more channels. David Huber, an engineer and Optelecom's head of Light Optics, and Kevin Kimberlin co-founded Ciena Corp in 1992 to design and commercialize optical telecommunications systems, the objective being an expansion in the capacity of cable systems to 50,000 channels. Ciena developed the dual-stage optical amplifier capable of transmitting data at uniform gain on multiple wavelengths, and with that, in June 1996, introduced the first commercial dense WDM system. That 16-channel system, with a total capacity of 40 Gbit/s, was deployed on the Sprint network, the world's largest carrier of internet traffic at the time. This first application of all-optical amplification in public networks was seen by analysts as a harbinger of a permanent change in network design, for which Sprint and Ciena would receive much of the credit. Advanced optical communication experts cite the introduction of WDM as the real start of optical networking.
== Capacity ==
The density of light paths from WDM was the key to the massive expansion of fiber optic capacity that enabled the growth of the Internet in the 1990s. Since the 1990s, the channel count and capacity of dense WDM systems has increased substantially, with commercial systems able to transmit close to 1 Tbit/s of traffic at 100 Gbit/s on each wavelength. In 2010, researchers at AT&T reported an experimental system with 640 channels operating at 107 Gbit/s, for a total transmission of 64 Tbit/s. In 2018, Telstra of Australia deployed a live system that enables the transmission of 30.4 Tbit/s per fiber pair over 61.5 GHz spectrum, equal to 1.2 million 4K Ultra HD videos being streamed simultaneously. As a result of this ability to transport large traffic volumes, WDM has become the common basis of nearly every global communication network and thus, a foundation of the Internet today. Demand for bandwidth is driven primarily by Internet Protocol (IP) traffic from video services, telemedicine, social networking, mobile phone use and cloud-based computing. At the same time, machine-to-machine, IoT and scientific community traffic require support for the large-scale exchange of data files. According to the Cisco Visual Networking Index, global IP traffic will be more than 150,700 Gbits per second in 2022. Of that, video content will equal 82% of all IP traffic, all transmitted by optical networking.
== Standards and protocols ==
Synchronous optical networking (SONET) and synchronous digital hierarchy (SDH) have evolved as the most commonly used protocols for optical networks. The optical transport network (OTN) protocol was developed by the International Telecommunication Union as a successor and allows interoperability across the network as described by Recommendation G.709. Both protocols allow for delivery of a variety of protocols such as Asynchronous Transfer Mode (ATM), Ethernet, TCP/IP and others.
== References ==
A network segment is a portion of a computer network. The nature and extent of a segment depends on the nature of the network and the device or devices used to interconnect end stations.
== Ethernet ==
According to the defining IEEE 802.3 standards for Ethernet, a network segment is an electrical connection between networked devices using a shared medium. In the original 10BASE5 and 10BASE2 Ethernet varieties, a segment would therefore correspond to a single coax cable and all devices tapped into it. At this point in the evolution of Ethernet, multiple network segments could be connected with repeaters (in accordance with the 5-4-3 rule for 10 Mbit Ethernet) to form a larger collision domain.
With twisted-pair Ethernet, electrical segments can be joined using repeaters or repeater hubs as can other varieties of Ethernet. This corresponds to the extent of an OSI layer 1 network and is equivalent to the collision domain. The 5-4-3 rule applies to this collision domain.
Using switches or bridges, multiple layer-1 segments can be combined to a common layer-2 segment, i.e. all nodes can communicate with each other through MAC addressing or broadcasts. A layer-2 segment is equivalent to a broadcast domain. Traffic within a layer-2 segment can be separated into virtually distinct partitions by using VLANs. Each VLAN forms its own logical layer-2 segment.
== IP ==
A layer-3 segment in an IP network is called a subnetwork, formed by all nodes sharing the same network prefix as defined by their IP addresses and the network mask. Communication between layer-3 subnets requires a router. Hosts on a subnet communicate directly using the layer-2 segment that connects them. Most often a subnetwork corresponds exactly with the underlying layer-2 segment but it is also possible to run multiple subnets on a single layer-2 segment.
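The prefix test that defines a layer-3 subnet can be illustrated with Python's standard ipaddress module; the addresses below are example values drawn from the RFC 5737 documentation ranges, not from any real network:

```python
import ipaddress

# A /24 subnet and three hosts (example/documentation addresses).
net = ipaddress.ip_network("192.0.2.0/24")
host_a = ipaddress.ip_address("192.0.2.10")
host_b = ipaddress.ip_address("192.0.2.200")
host_c = ipaddress.ip_address("198.51.100.5")

# Hosts sharing the network prefix are on the same subnet and can talk
# directly over the layer-2 segment that connects them.
same_subnet = host_a in net and host_b in net
# A host outside the prefix needs a router to reach them.
needs_router = host_c not in net
```

The `in` test applies the network mask to each address and compares prefixes, which is exactly the membership criterion described above.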
== References ==
Compartmental models are a mathematical framework used to simulate how populations move between different states or "compartments." While widely applied in various fields, they have become particularly fundamental to the mathematical modelling of infectious diseases. In these models, the population is divided into compartments labeled with shorthand notation – most commonly S, I, and R, representing Susceptible, Infectious, and Recovered individuals. The sequence of letters typically indicates the flow patterns between compartments; for example, an SEIS model represents progression from susceptible to exposed to infectious and then back to susceptible again.
These models originated in the early 20th century through pioneering epidemiological work by several mathematicians. Key developments include Hamer's work in 1906, Ross's contributions in 1916, collaborative work by Ross and Hudson in 1917, the seminal Kermack and McKendrick model in 1927, and Kendall's work in 1956. The historically significant Reed–Frost model, though often overlooked, also substantially influenced modern epidemiological modeling approaches.
Most implementations of compartmental models use ordinary differential equations (ODEs), providing deterministic results that are mathematically tractable. However, they can also be formulated within stochastic frameworks that incorporate randomness, offering more realistic representations of population dynamics at the cost of greater analytical complexity.
Epidemiologists and public health officials use these models for several critical purposes: analyzing disease transmission dynamics, projecting the total number of infections and recoveries over time, estimating key epidemiological parameters such as the basic reproduction number (R₀) or effective reproduction number (Rt), evaluating potential impacts of different public health interventions before implementation, and informing evidence-based policy decisions during disease outbreaks. Beyond infectious disease modeling, the approach has been adapted for applications in population ecology, pharmacokinetics, chemical kinetics, and other fields requiring the study of transitions between defined states.
== SIR model ==
The SIR model is one of the simplest compartmental models, and many models are derivatives of this basic form. The model consists of three compartments:
S: The number of susceptible individuals. When a susceptible and an infectious individual come into "infectious contact", the susceptible individual contracts the disease and transitions to the infectious compartment.
I: The number of infectious individuals. These are individuals who have been infected and are capable of infecting susceptible individuals.
R: The number of removed (and immune) or deceased individuals. These are individuals who have been infected and have either recovered from the disease and entered the removed compartment, or died. It is assumed that the number of deaths is negligible with respect to the total population. This compartment may also be called "recovered" or "resistant".
This model is reasonably predictive for infectious diseases that are transmitted from human to human, and where recovery confers lasting resistance, such as measles, mumps, and rubella.
These variables (S, I, and R) represent the number of people in each compartment at a particular time. To represent that the number of susceptible, infectious, and removed individuals may vary over time (even if the total population size remains constant), we make the precise numbers a function of t (time): S(t), I(t), and R(t). For a specific disease in a specific population, these functions may be worked out in order to predict possible outbreaks and bring them under control. Note that in the SIR model, $R(0)$ and $R_0$ are different quantities – the former describes the number of recovered at t = 0, whereas the latter describes the ratio of the frequency of contacts to the frequency of recovery.
As implied by the variable function of t, the model is dynamic in that the numbers in each compartment may fluctuate over time. The importance of this dynamic aspect is most obvious in an endemic disease with a short infectious period, such as measles in the UK prior to the introduction of a vaccine in 1968. Such diseases tend to occur in cycles of outbreaks due to the variation in number of susceptibles (S(t)) over time. During an epidemic, the number of susceptible individuals falls rapidly as more of them are infected and thus enter the infectious and removed compartments. The disease cannot break out again until the number of susceptibles has built back up, e.g. as a result of offspring being born into the susceptible compartment.
Each member of the population typically progresses from susceptible to infectious to recovered. This can be shown as a flow diagram in which the boxes represent the different compartments and the arrows the transition between compartments (see diagram).
=== Transition rates ===
For the full specification of the model, the arrows should be labeled with the transition rates between compartments. Between S and I, the transition rate is assumed to be $d(S/N)/dt = -\beta SI/N^2$, where $N$ is the total population, $\beta$ is the average number of contacts per person per time, multiplied by the probability of disease transmission in a contact between a susceptible and an infectious subject, and $SI/N^2$ is the fraction of all possible contacts that involves an infectious and susceptible individual. (This is mathematically similar to the law of mass action in chemistry, in which random collisions between molecules result in a chemical reaction and the fractional rate is proportional to the concentration of the two reactants.)
Between I and R, the transition rate is assumed to be proportional to the number of infectious individuals, which is $\gamma I$. If an individual is infectious for an average time period $D$, then $\gamma = 1/D$. This is also equivalent to the assumption that the length of time spent by an individual in the infectious state is a random variable with an exponential distribution. The "classical" SIR model may be modified by using more complex and realistic distributions for the I-R transition rate (e.g. the Erlang distribution).
For the special case in which there is no removal from the infectious compartment ($\gamma = 0$), the SIR model reduces to a very simple SI model, which has a logistic solution, in which every individual eventually becomes infected.
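As a sketch of this special case, the SI model $dI/dt = \beta I (N - I)/N$ can be integrated numerically and compared against its closed-form logistic solution $I(t) = N I_0 e^{\beta t} / (N - I_0 + I_0 e^{\beta t})$; the parameter values below are illustrative only:

```python
import math

def si_model(beta, N, I0, dt, steps):
    """Forward-Euler integration of the SI model dI/dt = beta*I*(N-I)/N."""
    I = I0
    trajectory = [I]
    for _ in range(steps):
        I += beta * I * (N - I) / N * dt
        trajectory.append(I)
    return trajectory

def si_logistic(beta, N, I0, t):
    """Closed-form logistic solution of the SI model."""
    return N * I0 * math.exp(beta * t) / (N - I0 + I0 * math.exp(beta * t))

# Illustrative parameters (not fitted to any real outbreak).
beta, N, I0, dt = 0.5, 1000.0, 1.0, 0.001
steps = 50_000  # integrate to t = 50
numeric = si_model(beta, N, I0, dt, steps)
exact = si_logistic(beta, N, I0, steps * dt)
# Late in the epidemic, essentially everyone has become infected.
```

The numeric trajectory should track the logistic curve up to the Euler discretization error, and both saturate at $N$.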
=== The SIR model without birth and death ===
The dynamics of an epidemic, for example, the flu, are often much faster than the dynamics of birth and death; therefore, birth and death are often omitted in simple compartmental models. The SIR system without so-called vital dynamics (birth and death, sometimes called demography) described above can be expressed by the following system of ordinary differential equations:

$$\left\{\begin{aligned}&\frac{dS}{dt} = -\frac{\beta}{N} I S,\\&\frac{dI}{dt} = \frac{\beta}{N} I S - \gamma I,\\&\frac{dR}{dt} = \gamma I,\end{aligned}\right.$$
where $S$ is the stock of susceptible population, $I$ is the stock of infected, $R$ is the stock of removed population (either by death or recovery), and $N$ is the sum of these three, each measured in number of people. $\beta$ is the infection rate constant, in units of people infected per day per infected person, and $\gamma$ is the recovery rate constant, in units of the fraction of a person recovered per day per infected person, when time is in days.
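A minimal numerical sketch of this system, using a plain forward-Euler loop (library-free; the parameter values are illustrative assumptions, not fitted to any disease):

```python
def sir_step(S, I, R, beta, gamma, N, dt):
    """One forward-Euler step of the SIR equations without vital dynamics."""
    dS = -beta * I * S / N * dt
    dI = (beta * I * S / N - gamma * I) * dt
    dR = gamma * I * dt
    return S + dS, I + dI, R + dR

def simulate_sir(S0, I0, R0_, beta, gamma, dt=0.01, days=160):
    """Integrate the SIR system and return the (S, I, R) trajectory."""
    N = S0 + I0 + R0_
    S, I, R = S0, I0, R0_
    history = [(S, I, R)]
    for _ in range(int(days / dt)):
        S, I, R = sir_step(S, I, R, beta, gamma, N, dt)
        history.append((S, I, R))
    return history

# Illustrative parameters: beta = 0.3/day, gamma = 0.1/day (so R_0 = 3).
history = simulate_sir(S0=999.0, I0=1.0, R0_=0.0, beta=0.3, gamma=0.1)
peak_I = max(I for _, I, _ in history)
```

Note that the three increments sum to zero at every step, so the total population $N$ is conserved, mirroring the analytic property derived below.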
This model was for the first time proposed by William Ogilvy Kermack and Anderson Gray McKendrick as a special case of what we now call Kermack–McKendrick theory, and followed work McKendrick had done with Ronald Ross.
This system is non-linear; however, it is possible to derive its analytic solution in implicit form. Firstly note that from:

$$\frac{dS}{dt} + \frac{dI}{dt} + \frac{dR}{dt} = 0,$$
it follows that:

$$S(t) + I(t) + R(t) = \text{constant} = N,$$
expressing in mathematical terms the constancy of population $N$. Note that the above relationship implies that one need only study the equation for two of the three variables.
Secondly, we note that the dynamics of the infectious class depends on the following ratio:

$$R_0 = \frac{\beta}{\gamma},$$
the so-called basic reproduction number (also called basic reproduction ratio). This ratio is derived as the expected number of new infections (these new infections are sometimes called secondary infections) from a single infection in a population where all subjects are susceptible. This idea can probably be more readily seen if we say that the typical time between contacts is $T_c = \beta^{-1}$, and the typical time until removal is $T_r = \gamma^{-1}$. From here it follows that, on average, the number of contacts by an infectious individual with others before the infectious individual has been removed is $T_r/T_c$.
By dividing the first differential equation by the third, separating the variables and integrating we get

$$S(t) = S(0)\, e^{-R_0 (R(t) - R(0))/N},$$

where $S(0)$ and $R(0)$ are the initial numbers of, respectively, susceptible and removed subjects.
Writing $s_0 = S(0)/N$ for the initial proportion of susceptible individuals, and $s_\infty = S(\infty)/N$ and $r_\infty = R(\infty)/N$ for the proportions of susceptible and removed individuals respectively in the limit $t \to \infty$, one has

$$s_\infty = 1 - r_\infty = s_0 e^{-R_0 (r_\infty - r_0)}$$

(note that the infectious compartment empties in this limit).
This transcendental equation has a solution in terms of the Lambert W function, namely

$$s_\infty = 1 - r_\infty = -R_0^{-1}\, W(-s_0 R_0 e^{-R_0(1 - r_0)}).$$
This shows that at the end of an epidemic that conforms to the simple assumptions of the SIR model, unless $s_0 = 0$, not all individuals of the population have been removed, so some must remain susceptible. A driving force leading to the end of an epidemic is a decline in the number of infectious individuals. The epidemic does not typically end because of a complete lack of susceptible individuals.
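The final-size relation above can be solved numerically; the sketch below uses fixed-point iteration on $s_\infty = s_0 e^{-R_0(r_\infty - r_0)}$ with $r_0 = 0$ rather than an explicit Lambert W implementation, and the parameter values are illustrative assumptions:

```python
import math

def final_susceptible_fraction(R0, s0, iterations=200):
    """Solve s_inf = s0 * exp(-R0 * (1 - s_inf)) by fixed-point iteration.

    Starting from s = 0 converges to the epidemiologically relevant root
    (the one below 1/R0) when R0 > 1, since the map is a contraction there.
    """
    s = 0.0
    for _ in range(iterations):
        s = s0 * math.exp(-R0 * (1.0 - s))
    return s

# Illustrative: R0 = 2 with an almost fully susceptible population.
s_inf = final_susceptible_fraction(R0=2.0, s0=0.999)
# Roughly a fifth of the population escapes infection; the rest is removed.
```

This confirms numerically that $s_\infty > 0$: the epidemic ends while susceptibles remain.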
Both the basic reproduction number and the initial susceptibility are extremely important. In fact, upon rewriting the equation for infectious individuals as follows:

$$\frac{dI}{dt} = \left(R_0 \frac{S}{N} - 1\right) \gamma I,$$

it yields that if:

$$R_0 \cdot S(0) > N,$$

then:

$$\frac{dI}{dt}(0) > 0,$$

i.e., there will be a proper epidemic outbreak with an increase of the number of the infectious (which can reach a considerable fraction of the population). On the contrary, if

$$R_0 \cdot S(0) < N,$$

then

$$\frac{dI}{dt}(0) < 0,$$

i.e., independently from the initial size of the susceptible population the disease can never cause a proper epidemic outbreak.
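A tiny sketch of this threshold condition, evaluating the sign of $dI/dt$ at $t=0$ (all values are illustrative):

```python
def initial_growth_rate(R0, S0, N, gamma, I0):
    """dI/dt at t = 0, from dI/dt = (R0*S/N - 1)*gamma*I."""
    return (R0 * S0 / N - 1.0) * gamma * I0

# With R0*S(0) > N the infectious count initially grows...
growing = initial_growth_rate(R0=3.0, S0=990.0, N=1000.0, gamma=0.1, I0=10.0)
# ...and with R0*S(0) < N it immediately declines.
declining = initial_growth_rate(R0=0.8, S0=990.0, N=1000.0, gamma=0.1, I0=10.0)
```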
==== The force of infection ====
Note that in the above model the function:

$$F = \beta I,$$

models the transition rate from the compartment of susceptible individuals to the compartment of infectious individuals, so that it is called the force of infection. However, for large classes of communicable diseases it is more realistic to consider a force of infection that does not depend on the absolute number of infectious subjects, but on their fraction (with respect to the total constant population $N$):

$$F = \beta \frac{I}{N}.$$
Capasso and, afterwards, other authors have proposed nonlinear forces of infection to model more realistically the contagion process.
==== Exact analytical solutions to the SIR model ====
In 2014, Harko and coauthors derived an exact so-called analytical solution (involving an integral that can only be calculated numerically) to the SIR model. In the case without vital dynamics setup, for $\mathcal{S}(u) = S(t)$, etc., it corresponds to the following time parametrization

$$\mathcal{S}(u) = S(0)\, u$$
$$\mathcal{I}(u) = N - \mathcal{R}(u) - \mathcal{S}(u)$$
$$\mathcal{R}(u) = R(0) - \rho \ln(u)$$
for

$$t = \frac{N}{\beta} \int_u^1 \frac{du^*}{u^* \mathcal{I}(u^*)}, \quad \rho = \frac{\gamma N}{\beta},$$
with initial conditions

$$(\mathcal{S}(1), \mathcal{I}(1), \mathcal{R}(1)) = (S(0),\, N - R(0) - S(0),\, R(0)), \quad u_T < u < 1,$$
where $u_T$ satisfies $\mathcal{I}(u_T) = 0$. By the transcendental equation for $R_\infty$ above, it follows that $u_T = e^{-(R_\infty - R(0))/\rho}$ ($= S_\infty / S(0)$, if $S(0) \neq 0$) and $I_\infty = 0$.
An equivalent so-called analytical solution (involving an integral that can only be calculated numerically) found by Miller yields

$$\begin{aligned}S(t) &= S(0) e^{-\xi(t)}\\ I(t) &= N - S(t) - R(t)\\ R(t) &= R(0) + \rho \xi(t)\\ \xi(t) &= \frac{\beta}{N} \int_0^t I(t^*) \, dt^*\end{aligned}$$
Here $\xi(t)$ can be interpreted as the expected number of transmissions an individual has received by time $t$. The two solutions are related by $e^{-\xi(t)} = u$.
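Miller's relation $S(t) = S(0)e^{-\xi(t)}$ can be checked numerically: the sketch below integrates the SIR equations with forward Euler while accumulating $\xi(t)$, then compares the two expressions for $S$ (parameters are illustrative; agreement is only up to discretization error):

```python
import math

def sir_with_xi(S0, I0, R0_, beta, gamma, dt, steps):
    """Euler-integrate SIR together with the cumulative quantity xi(t)."""
    N = S0 + I0 + R0_
    S, I, R, xi = S0, I0, R0_, 0.0
    for _ in range(steps):
        dS = -beta * I * S / N
        dI = beta * I * S / N - gamma * I
        dR = gamma * I
        xi += beta * I / N * dt   # xi(t) = (beta/N) * integral of I dt
        S, I, R = S + dS * dt, I + dI * dt, R + dR * dt
    return S, I, R, xi

S0_, I0_ = 999.0, 1.0
S, I, R, xi = sir_with_xi(S0_, I0_, 0.0, beta=0.3, gamma=0.1, dt=0.001, steps=60_000)
# S(t) and S(0)*exp(-xi(t)) should agree up to O(dt) Euler error,
# and R(t) - R(0) should equal rho * xi(t) with rho = gamma*N/beta.
```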
Effectively the same result can be found in the original work by Kermack and McKendrick.
These solutions may be easily understood by noting that all of the terms on the right-hand sides of the original differential equations are proportional to $I$. The equations may thus be divided through by $I$, and the time rescaled so that the differential operator on the left-hand side becomes simply $d/d\tau$, where $d\tau = I\, dt$, i.e. $\tau = \int I\, dt$. The differential equations are now all linear, and the third equation, of the form $dR/d\tau = \text{const.}$, shows that $\tau$ and $R$ (and $\xi$ above) are simply linearly related.
A highly accurate analytic approximant of the SIR model, as well as exact analytic expressions for the final values $S_\infty$, $I_\infty$, and $R_\infty$, were provided by Kröger and Schlickeiser, so that there is no need to perform a numerical integration to solve the SIR model, to obtain its parameters from existing data, or to predict the future dynamics of an epidemic modeled by the SIR model. The approximant involves the Lambert W function, which is available in standard computational software such as Microsoft Excel, MATLAB, and Mathematica.
While Kendall considered the so-called all-time SIR model, where the initial conditions $S(0)$, $I(0)$, and $R(0)$ are coupled through the above relations, Kermack and McKendrick proposed to study the more general semi-time case, for which $S(0)$ and $I(0)$ are both arbitrary. This latter version, denoted as the semi-time SIR model, makes predictions only for future times $t > 0$. An analytic approximant and exact expressions for the final values are available for the semi-time SIR model as well.
==== Numerical solutions to the SIR model with approximations ====
Numerical solutions to the SIR model can be found in the literature. An example is using the model to analyze COVID-19 spreading data. Three reproduction numbers can be pulled out from the data analyzed with numerical approximation:

the basic reproduction number: $R_0 = \beta_0 / \gamma_0$
the real-time reproduction number: $R_t = \beta_t / \gamma_t$
and the real-time effective reproduction number: $R_e = \beta_t S / (\gamma_t N)$
$R_0$ represents the reproduction rate at the beginning of the spreading, when the whole population is assumed susceptible. For example, if $\beta_0 = 0.4\,\mathrm{day}^{-1}$ and $\gamma_0 = 0.2\,\mathrm{day}^{-1}$, one infectious person on average infects 0.4 susceptible people per day and recovers in $1/0.2 = 5$ days. By the time this person has recovered, two people infected directly by this person are still infectious, so $R_0 = 2$, i.e. the number of infectious people doubles in one cycle of 5 days. Data simulated by the model with $R_0 = 2$, or real data fitted to it, will show the number of infectious people doubling faster than every 5 days, because the newly infected people are themselves infecting others. From the SIR model, we can tell that $\beta$ is determined by the nature of the disease and is also a function of the frequency of interaction between infectious people $I$ and susceptible people $S$, as well as the intensity and duration of those interactions (how close they interact, for how long, and whether or not they both wear masks); thus, it changes over time when the average behavior of the carriers and susceptible people changes. The model uses $SI$ to represent these factors, but this is referenced to the initial stage, when no action has been taken to prevent the spread and the whole population is susceptible; thus, all changes are absorbed by the change of $\beta$.
$\gamma$ is usually more stable over time, assuming that when an infectious person shows symptoms, they will seek medical attention or self-isolate. So if we find $R_t$ changing, most probably the behaviors of people in the community have changed from their normal patterns before the outbreak, or the disease has mutated to a new form. Costly mass detection and isolation of susceptible close contacts can reduce $1/\gamma$, but their efficiency is under debate. This debate centers largely on the uncertainty in the number of days, for an infected person, between becoming infectious or detectable (whichever comes first) and showing symptoms. If a person becomes infectious only after symptoms show up, or detection only works for people with symptoms, then these prevention methods are not necessary, and self-isolation and/or medical attention is the best way to cut the $1/\gamma$ values. The typical onset of the COVID-19 infectious period is on the order of one day from symptoms showing up, making mass detection with a typical frequency of a few days useless.
$R_t$ does not tell us whether the spreading will speed up or slow down in the later stages, when the fraction of susceptible people in the community has dropped significantly after recovery or vaccination. $R_e$ corrects for this dilution effect by multiplying by the fraction of the susceptible population over the total population. It corrects the effective/transmissible interaction between an infectious person and the rest of the community, since many of the contacts are immune in the middle to late stages of the disease spreading. Thus, when $R_e > 1$, we will see an exponential-like outbreak; when $R_e = 1$, a steady state is reached and the number of infectious people does not change over time; and when $R_e < 1$, the disease decays and fades away over time.
Using the differential equations of the SIR model and converting them to numerical discrete forms, one can set up recursive equations and calculate the S, I, and R populations for any given initial conditions, but errors accumulate over long calculation times from the reference point; sometimes a convergence test is needed to estimate the errors. Given a set of initial conditions and disease-spreading data, one can also fit the data with the SIR model and pull out the three reproduction numbers, for which the errors are usually negligible due to the short time step from the reference point. Any point in time can be used as the initial condition to predict the future after it using this numerical model, under the assumption of time-evolved parameters such as the population, $R_t$, and $\gamma$. However, away from this reference point, errors will accumulate over time, so a convergence test is needed to find an optimal time step for more accurate results.
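A sketch of such a discrete recursion, with a crude convergence test that halves the time step and compares the results (all values are illustrative assumptions):

```python
def sir_recursive(S0, I0, R0_, beta, gamma, dt, t_end):
    """Discrete recursive (forward-difference) form of the SIR equations."""
    N = S0 + I0 + R0_
    S, I, R = S0, I0, R0_
    for _ in range(int(t_end / dt)):
        new_infections = beta * S * I / N * dt
        new_recoveries = gamma * I * dt
        S -= new_infections
        I += new_infections - new_recoveries
        R += new_recoveries
    return S, I, R

# Illustrative parameters; compare two time steps as a convergence check.
args = dict(S0=999.0, I0=1.0, R0_=0.0, beta=0.3, gamma=0.1, t_end=100.0)
coarse = sir_recursive(dt=0.1, **args)
fine = sir_recursive(dt=0.05, **args)
# If the two runs disagree noticeably, the time step is too large.
```

Because each step moves the same `new_infections` and `new_recoveries` amounts between compartments, the total population is conserved exactly, independent of the time step.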
Among these three reproduction numbers, $R_0$ is very useful for judging the control pressure, e.g., a large value means the disease will spread very fast and will be very difficult to control. $R_t$ is most useful in predicting future trends; for example, if we know that social interactions have been reduced 50% in frequency from before the outbreak and that the interaction intensities among people are the same, then we can set $R_t = 0.5 R_0$. If social distancing and masks add another 50% cut in infection efficiency, we can set $R_t = 0.25 R_0$.
$R_e$ correlates with the waves of the spreading: whenever $R_e > 1$, the spreading accelerates, and when $R_e < 1$, the spreading slows down, so it is useful for predicting short-term trends. Also, it can be used to directly calculate the threshold population of vaccination/immunization for the herd immunity stage by setting $R_t = R_0$ and $R_e = 1$, i.e. $S = N/R_0$.
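As a sketch of that threshold calculation: setting $R_e = 1$ with $R_t = R_0$ gives $S = N/R_0$, so the fraction that must be immune is $1 - 1/R_0$ (the $R_0$ values below are illustrative only):

```python
def herd_immunity_threshold(R0, N):
    """Susceptible population at which R_e = 1 when R_t = R_0,
    and the corresponding fraction that must be immune."""
    S_threshold = N / R0
    immune_fraction = 1.0 - 1.0 / R0
    return S_threshold, immune_fraction

# Illustrative: for R0 = 2, half the population must be immune.
S_thr, frac = herd_immunity_threshold(R0=2.0, N=1_000_000)
```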
=== The SIR model with vital dynamics and constant population ===
Consider a population characterized by a death rate $\mu$ and birth rate $\Lambda$, and where a communicable disease is spreading. The model with mass-action transmission is:

$$\begin{aligned}\frac{dS}{dt} &= \Lambda - \mu S - \frac{\beta I S}{N}\\ \frac{dI}{dt} &= \frac{\beta I S}{N} - \gamma I - \mu I\\ \frac{dR}{dt} &= \gamma I - \mu R\end{aligned}$$
for which the disease-free equilibrium (DFE) is:

$$\left(S(t), I(t), R(t)\right) = \left(\frac{\Lambda}{\mu}, 0, 0\right).$$
In this case, we can derive a basic reproduction number:

$$R_0 = \frac{\beta}{\mu + \gamma},$$
which has threshold properties. In fact, independently from biologically meaningful initial values, one can show that:

$$R_0 \leq 1 \Rightarrow \lim_{t \to \infty} (S(t), I(t), R(t)) = \textrm{DFE} = \left(\frac{\Lambda}{\mu}, 0, 0\right)$$

$$R_0 > 1,\ I(0) > 0 \Rightarrow \lim_{t \to \infty} (S(t), I(t), R(t)) = \textrm{EE} = \left(\frac{\gamma + \mu}{\beta}, \frac{\mu}{\beta}\left(R_0 - 1\right), \frac{\gamma}{\beta}\left(R_0 - 1\right)\right).$$
The point EE is called the Endemic Equilibrium (the disease is not totally eradicated and remains in the population). With heuristic arguments, one may show that $R_0$ may be read as the average number of infections caused by a single infectious subject in a wholly susceptible population. The above relationship then biologically means that if this number is less than or equal to one the disease goes extinct, whereas if this number is greater than one the disease will remain permanently endemic in the population.
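A numerical sketch of this threshold behavior: the vital-dynamics model is integrated with forward Euler on population fractions (so $N = 1$ and $\Lambda = \mu$, under which the endemic equilibrium takes the fractional form stated above; all parameter values are illustrative):

```python
def sir_vital(beta, gamma, mu, S0, I0, R0_, dt, t_end):
    """Euler integration of SIR with births (Lambda = mu) and deaths,
    with the state expressed as population fractions (S + I + R = 1)."""
    S, I, R = S0, I0, R0_
    for _ in range(int(t_end / dt)):
        dS = mu - mu * S - beta * I * S
        dI = beta * I * S - gamma * I - mu * I
        dR = gamma * I - mu * R
        S, I, R = S + dS * dt, I + dI * dt, R + dR * dt
    return S, I, R

beta, gamma, mu = 0.5, 0.1, 0.02
R0 = beta / (gamma + mu)  # > 1, so the endemic equilibrium should be reached
S, I, R = sir_vital(beta, gamma, mu, S0=0.99, I0=0.01, R0_=0.0, dt=0.01, t_end=3000.0)
# Expected limit: ((gamma+mu)/beta, mu/beta*(R0-1), gamma/beta*(R0-1))
```

Running long enough, the damped oscillations settle onto the endemic equilibrium, in line with the limit stated above for $R_0 > 1$.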
=== The SIR model ===
In 1927, W. O. Kermack and A. G. McKendrick created a model in which they considered a fixed population with only three compartments: susceptible, $S(t)$; infected, $I(t)$; and recovered, $R(t)$. The compartments used for this model consist of three classes:

$S(t)$ is used to represent the individuals not yet infected with the disease at time t, or those susceptible to the disease of the population.
$I(t)$ denotes the individuals of the population who have been infected with the disease and are capable of spreading the disease to those in the susceptible category.
$R(t)$ is the compartment used for the individuals of the population who have been infected and then removed from the disease, either due to immunization or due to death. Those in this category are not able to be infected again or to transmit the infection to others.
The flow of this model may be considered as follows:

$$\mathcal{S} \rightarrow \mathcal{I} \rightarrow \mathcal{R}$$
Using a fixed population, $N = S(t) + I(t) + R(t)$, in the three functions ensures that the value $N$ remains constant within the simulation, if a simulation is used to solve the SIR model. Alternatively, the analytic approximant can be used without performing a simulation. The model is started with values of $S(t=0)$, $I(t=0)$ and $R(t=0)$. These are the number of people in the susceptible, infected and removed categories at time equals zero. If the SIR model is assumed to hold at all times, these initial conditions are not independent. Subsequently, the flow model updates the three variables for every time point with set values for $\beta$ and $\gamma$. The simulation first updates the infected from the susceptible, and then the removed category is updated from the infected category for the next time point (t=1). This describes the flow of persons between the three categories. During an epidemic, the susceptible category is not shifted with this model; $\beta$ changes over the course of the epidemic, and so does $\gamma$. These variables determine the length of the epidemic and would have to be updated with each cycle.
$${\frac {dS}{dt}}=-\beta SI$$
$${\frac {dI}{dt}}=\beta SI-\gamma I$$
$${\frac {dR}{dt}}=\gamma I$$
Several assumptions were made in the formulation of these equations: First, an individual in the population must be considered as having an equal probability as every other individual of contracting the disease, with a transmission probability $a$ and a contact rate $b$ (the number of people an individual makes contact with per unit time). Then, let $\beta $ be the product of $a$ and $b$; this is the transmission probability times the contact rate. An infected individual makes contact with $b$ persons per unit time, of whom only a fraction $S/N$ are susceptible. Thus, every infective can infect $abS=\beta S$ susceptible persons, and therefore the whole number of susceptibles infected by infectives per unit time is $\beta SI$. For the second and third equations, consider the population leaving the susceptible class as equal to the number entering the infected class. However, a number equal to the fraction $\gamma $ (which represents the mean recovery/death rate, $1/\gamma $ being the mean infective period) of infectives leave this class per unit time to enter the removed class. These processes, which occur simultaneously, are referred to as the law of mass action, the widely accepted idea that the rate of contact between two groups in a population is proportional to the size of each of the groups concerned. Finally, it is assumed that the rate of infection and recovery is much faster than the time scale of births and deaths, so these factors are ignored in this model.
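The three differential equations above can be integrated numerically with a simple forward-Euler scheme. The sketch below is not part of the original model description; it uses population fractions (so N = 1) and illustrative values for β and γ:

```python
# Forward-Euler sketch of the SIR equations dS/dt = -beta*S*I,
# dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I, using population
# fractions (N = 1). beta and gamma are illustrative values only.
def simulate_sir(S0, I0, R0, beta, gamma, dt=0.1, steps=2000):
    S, I, R = float(S0), float(I0), float(R0)
    history = [(S, I, R)]
    for _ in range(steps):
        dS = -beta * S * I              # susceptibles becoming infected
        dI = beta * S * I - gamma * I   # new infections minus removals
        dR = gamma * I                  # recoveries/removals
        S, I, R = S + dS * dt, I + dI * dt, R + dR * dt
        history.append((S, I, R))
    return history

hist = simulate_sir(S0=0.99, I0=0.01, R0=0.0, beta=0.5, gamma=0.1)
S_end, I_end, R_end = hist[-1]
```

Since dS/dt + dI/dt + dR/dt = 0, the sum S + I + R stays at 1 throughout the run, which is a convenient sanity check on any implementation.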
=== Steady-state solutions ===
The only steady-state solution to the classic SIR model, as defined by the differential equations above, is I = 0; S and R can then take any values. The model can be changed, while retaining three compartments, to give a steady-state endemic solution by adding some input to the S compartment.
For example, one may postulate that the expected duration of susceptibility will be $\operatorname {E} [\min(T_{L}\mid T_{S})]$, where $T_{L}$ reflects the time alive (life expectancy) and $T_{S}$ reflects the time in the susceptible state before becoming infected. This can be simplified to:
$$\operatorname {E} [\min(T_{L}\mid T_{S})]=\int _{0}^{\infty }e^{-(\mu +\delta )x}\,dx={\frac {1}{\mu +\delta }},$$
such that the number of susceptible persons is the number entering the susceptible compartment, $\mu N$, times the duration of susceptibility:
$$S={\frac {\mu N}{\mu +\lambda }}.$$
Analogously, the steady-state number of infected persons is the number entering the infected state from the susceptible state (number susceptible times rate of infection $\lambda ={\tfrac {\beta I}{N}}$) times the duration of infectiousness ${\tfrac {1}{\mu +v}}$:
$$I={\frac {\mu N}{\mu +\lambda }}\lambda {\frac {1}{\mu +v}}.$$
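The two steady-state expressions are coupled through λ = βI/N, so the endemic equilibrium can be found by fixed-point iteration. A minimal sketch, with illustrative parameter values not taken from the text:

```python
# Fixed-point iteration for the steady-state formulas above:
# S = mu*N/(mu + lam), I = S*lam/(mu + v), with the self-consistency
# condition lam = beta*I/N. All parameter values are illustrative.
N, mu, beta, v = 1000.0, 0.02, 0.5, 0.1   # v is the removal rate

lam = 0.1                    # initial guess for the force of infection
for _ in range(1000):
    S = mu * N / (mu + lam)  # steady-state susceptibles
    I = S * lam / (mu + v)   # steady-state infecteds
    lam = beta * I / N       # updated force of infection
```

At the fixed point, S/N = (μ + v)/β, i.e. the susceptible fraction settles at the inverse of the basic reproduction number β/(μ + v).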
=== Other compartmental models ===
There are many modifications of the SIR model, including those that include births and deaths, where upon recovery there is no immunity (SIS model), where immunity lasts only for a short period of time (SIRS), where there is a latent period of the disease where the person is not infectious (SEIS and SEIR), and where infants can be born with immunity (MSIR). Compartmental models can also be used to model multiple risk groups, and even the interaction of multiple pathogens.
== Variations on the basic SIR model ==
=== SIS model ===
Some infections, for example, those from the common cold and influenza, do not confer any long-lasting immunity. Such infections may give temporary resistance but do not give long-term immunity upon recovery from infection, and individuals become susceptible again.
We have the model:
$${\begin{aligned}{\frac {dS}{dt}}&=-{\frac {\beta SI}{N}}+\gamma I\\[6pt]{\frac {dI}{dt}}&={\frac {\beta SI}{N}}-\gamma I\end{aligned}}$$
Note that, denoting with N the total population, it holds that:
$${\frac {dS}{dt}}+{\frac {dI}{dt}}=0\Rightarrow S(t)+I(t)=N.$$
It follows that:
$${\frac {dI}{dt}}=(\beta -\gamma )I-{\frac {\beta }{N}}I^{2},$$
i.e. the dynamics of the infectious class is ruled by a logistic function, so that $\forall I(0)>0$:
$${\begin{aligned}&{\frac {\beta }{\gamma }}\leq 1\Rightarrow \lim _{t\to +\infty }I(t)=0,\\[6pt]&{\frac {\beta }{\gamma }}>1\Rightarrow \lim _{t\to +\infty }I(t)=\left(1-{\frac {\gamma }{\beta }}\right)N.\end{aligned}}$$
It is possible to find an analytical solution to this model (by making the transformation of variables $I=y^{-1}$ and substituting this into the mean-field equations), valid when the basic reproduction number is greater than unity. The solution is given as
$$I(t)={\frac {I_{\infty }}{1+Ve^{-\chi t}}},$$
where $I_{\infty }=(1-\gamma /\beta )N$ is the endemic infectious population, $\chi =\beta -\gamma $, and $V=I_{\infty }/I_{0}-1$. As the system is assumed to be closed, the susceptible population is then $S(t)=N-I(t)$.
Whenever the integer nature of the number of agents is evident (populations with fewer than tens of thousands of individuals), inherent fluctuations in the disease spreading process caused by discrete agents result in uncertainties. In this scenario, the evolution of the disease predicted by compartmental equations deviates significantly from the observed results. These uncertainties may even cause the epidemic to end earlier than predicted by the compartmental equations.
As a special case, one obtains the usual logistic function by assuming $\gamma =0$. This can also be considered in the SIR model with $R=0$, i.e. no removal takes place; this is the SI model. The differential equation system, using $S=N-I$, thus reduces to:
$${\frac {dI}{dt}}\propto I\cdot (N-I).$$
In the long run, in the SI model, all individuals will become infected.
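The closed-form SIS solution above is easy to evaluate directly. A small sketch with illustrative parameters (β/γ = 4, so the epidemic is supercritical):

```python
import math

# Evaluate the analytical SIS solution I(t) = I_inf / (1 + V*exp(-chi*t))
# given above. Parameter values are illustrative.
N, beta, gamma = 1000.0, 0.4, 0.1
I0 = 1.0

I_inf = (1.0 - gamma / beta) * N   # endemic infectious population
chi = beta - gamma                 # logistic growth rate
V = I_inf / I0 - 1.0

def I(t):
    return I_inf / (1.0 + V * math.exp(-chi * t))
```

I(0) reproduces the initial condition exactly, and I(t) → I_inf = (1 − γ/β)N as t → ∞, matching the limit derived above.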
=== SIRD model ===
The Susceptible-Infectious-Recovered-Deceased model differentiates between Recovered (meaning specifically individuals who have survived the disease and are now immune) and Deceased. The SIRD model has semi-analytical solutions based on the four-parts method. This model uses the following system of differential equations:
$${\begin{aligned}&{\frac {dS}{dt}}=-{\frac {\beta IS}{N}},\\[6pt]&{\frac {dI}{dt}}={\frac {\beta IS}{N}}-\gamma I-\mu I,\\[6pt]&{\frac {dR}{dt}}=\gamma I,\\[6pt]&{\frac {dD}{dt}}=\mu I,\end{aligned}}$$
where $\beta ,\gamma ,\mu $ are the rates of infection, recovery, and mortality, respectively.
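The SIRD system can be integrated with a simple forward-Euler scheme. This is an illustrative sketch (it is not the semi-analytical four-parts method mentioned above), with made-up parameter values:

```python
# Forward-Euler integration of the SIRD equations above.
# beta, gamma, mu are illustrative infection/recovery/mortality rates.
def simulate_sird(N, I0, beta, gamma, mu, dt=0.05, steps=4000):
    S, I, R, D = N - I0, I0, 0.0, 0.0
    for _ in range(steps):
        dS = -beta * I * S / N
        dI = beta * I * S / N - gamma * I - mu * I
        dR = gamma * I                  # recoveries
        dD = mu * I                     # disease deaths
        S, I, R, D = S + dS * dt, I + dI * dt, R + dR * dt, D + dD * dt
    return S, I, R, D

S, I, R, D = simulate_sird(N=1000.0, I0=1.0, beta=0.5, gamma=0.1, mu=0.02)
```

Because dR/dt and dD/dt are both proportional to I, the ratio D/R equals μ/γ at all times, a useful consistency check on an implementation.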
=== SIRV model ===
The Susceptible-Infectious-Recovered-Vaccinated model is an extended SIR model that accounts for vaccination of the susceptible population. This model uses the following system of differential equations:
$${\begin{aligned}&{\frac {dS}{dt}}=-{\frac {\beta (t)IS}{N}}-v(t)S,\\[6pt]&{\frac {dI}{dt}}={\frac {\beta (t)IS}{N}}-\gamma (t)I,\\[6pt]&{\frac {dR}{dt}}=\gamma (t)I,\\[6pt]&{\frac {dV}{dt}}=v(t)S,\end{aligned}}$$
where $\beta ,\gamma ,v$ are the rates of infection, recovery, and vaccination, respectively. For the semi-time initial conditions $S(0)=(1-\eta )N$, $I(0)=\eta N$, $R(0)=V(0)=0$ and constant ratios $k=\gamma (t)/\beta (t)$ and $b=v(t)/\beta (t)$, the model has been solved approximately. The occurrence of a pandemic outbreak requires $k+b<1-2\eta $, and there is a critical reduced vaccination rate $b_{c}$ beyond which the steady-state size $S_{\infty }$ of the susceptible compartment remains relatively close to $S(0)$. Arbitrary initial conditions satisfying $S(0)+I(0)+R(0)+V(0)=N$ can be mapped to the solved special case with $R(0)=V(0)=0$.
The numerical solution of this model to calculate the real-time reproduction number $R_{t}$ of COVID-19 can be carried out based on information from the different populations in a community. Numerical solution is a commonly used method to analyze complicated kinetic networks when the analytical solution is difficult to obtain or is limited by requirements such as boundary conditions or special parameters. It uses recursive equations to calculate the next step by converting the numerical integration into a Riemann sum over discrete time steps, e.g., using yesterday's principal and interest rate to calculate today's interest, which assumes the interest rate is fixed during the day. The calculation contains projected errors if analytical corrections on the numerical step size are not included; e.g., when the interest rate of annual collection is simplified to 12 times the monthly rate, a projected error is introduced. The calculated results therefore carry accumulative errors when the time step is far from the reference point, and a convergence test is needed to estimate the error. However, this error is usually acceptable for data fitting: when fitting a set of data with a close time step, the error is relatively small because the reference point is nearby, compared to when predicting a long period of time after a reference point. Once the real-time $R_{t}$ is extracted, one can compare it to the basic reproduction number $R_{0}$. Before vaccination, the ratio $R_{t}/R_{0}$ gives the policy maker and the general public a measure of the efficiency of social mitigation activities such as social distancing and face masking. Under massive vaccination, the goal of disease control is to reduce the effective reproduction number $R_{e}={\frac {R_{t}S}{N}}<1$, where $S$ is the number of susceptible people at the time and $N$ is the total population. When $R_{e}<1$, the spreading decays and daily infected cases go down.
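The recursive (forward-Euler) scheme described above can be sketched for the SIRV model with constant rates, tracking an effective reproduction number R_e = (β/γ)·S/N at each step. All parameter values are illustrative:

```python
# Discrete-time sketch of the SIRV model, tracking the effective
# reproduction number R_e = (beta/gamma)*S/N. Parameters illustrative.
N = 1_000_000.0
beta, gamma, v = 0.3, 0.1, 0.005   # infection, recovery, vaccination rates
S, I, R, V = N - 100.0, 100.0, 0.0, 0.0
dt = 0.1

Re_history = []
for _ in range(5000):              # 500 time units
    dS = -beta * I * S / N - v * S
    dI = beta * I * S / N - gamma * I
    dR = gamma * I
    dV = v * S
    S, I, R, V = S + dS * dt, I + dI * dt, R + dR * dt, V + dV * dt
    Re_history.append((beta / gamma) * S / N)
```

R_e starts near β/γ = 3 and falls below 1 as infection and vaccination deplete the susceptible pool, at which point daily new cases decline, which is the criterion discussed above.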
=== SIRVD model ===
The susceptible-infected-recovered-vaccinated-deceased (SIRVD) epidemic compartment model extends the SIR model to include the effects of vaccination campaigns and time-dependent fatality rates on epidemic outbreaks. It encompasses the SIR, SIRV, SIRD, and SI models as special cases, with individual time-dependent rates governing transitions between different fractions. This model uses the following system of differential equations for the population fractions
$S,I,R,V,D$:
$${\begin{aligned}&{\frac {dS}{dt}}=-a(t)SI-v(t)S,\\[6pt]&{\frac {dI}{dt}}=a(t)SI-\mu (t)I-\psi (t)I,\\[6pt]&{\frac {dR}{dt}}=\mu (t)I,\\[6pt]&{\frac {dV}{dt}}=v(t)S,\\[6pt]&{\frac {dD}{dt}}=\psi (t)I\end{aligned}}$$
where $a(t),v(t),\mu (t),\psi (t)$ are the infection, vaccination, recovery, and fatality rates, respectively. For the semi-time initial conditions $S(0)=1-\eta $, $I(0)=\eta $, $R(0)=V(0)=D(0)=0$ and constant ratios $k=\mu (t)/a(t)$, $b=v(t)/a(t)$, and $q=\psi (t)/a(t)$, the model has been solved approximately, and exactly for some special cases, irrespective of the functional form of $a(t)$. This is achieved by rewriting the above SIRVD model equations in the equivalent, but reduced, form
$${\begin{aligned}&{\frac {dS}{d\tau }}=-SI-b(\tau )S,\\[6pt]&{\frac {dI}{d\tau }}=SI-[k(\tau )+q(\tau )]I,\\[6pt]&{\frac {dR}{d\tau }}=k(\tau )I,\\[6pt]&{\frac {dV}{d\tau }}=b(\tau )S,\\[6pt]&{\frac {dD}{d\tau }}=q(\tau )I\end{aligned}}$$
where $\tau (t)=\int _{0}^{t}a(\xi )\,d\xi $ is a reduced, dimensionless time. The temporal dependence of the infected fraction $I(\tau )$ and of the rate of new infections $j(\tau )=S(\tau )I(\tau )$ differs when the effects of vaccinations are considered and when the real-time dependences of the fatality and recovery rates diverge. These differences have been highlighted for stationary ratios and gradually decreasing fatality rates. The case of stationary ratios allows one to construct a diagnostics method to extract analytically all SIRVD model parameters from measured COVID-19 data of a completed pandemic wave.
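When the ratios k, b, q are constant, the reduced SIRVD equations in the dimensionless time τ can be integrated directly. A forward-Euler sketch with illustrative values (the deceased fraction accumulates as q·I, mirroring dD/dt = ψ(t)I in the unreduced system):

```python
# Forward-Euler sketch of the reduced SIRVD system in dimensionless
# time tau, with constant ratios k (recovery), b (vaccination) and
# q (fatality). Fractions are used, so S+I+R+V+D stays equal to 1.
k, b, q = 0.3, 0.05, 0.02
eta = 1e-4                          # initial infected fraction
S, I, R, V, D = 1.0 - eta, eta, 0.0, 0.0, 0.0
dtau = 0.01

for _ in range(5000):               # tau from 0 to 50
    dS = -S * I - b * S
    dI = S * I - (k + q) * I
    dR = k * I
    dV = b * S
    dD = q * I                      # deaths, mirroring dD/dt = psi*I
    S, I, R, V, D = (S + dS * dtau, I + dI * dtau, R + dR * dtau,
                     V + dV * dtau, D + dD * dtau)
```

As in the SIRD case, D/R stays equal to q/k at all times, which is a quick check that the bookkeeping between compartments is consistent.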
=== SIRVB model ===
The SIRVB model adds a breakthrough-infection pathway to the SIRV model. The kinetic equations become:
$${\begin{aligned}&{\frac {dS}{dt}}=-a(t)SI-v(t)S+b(t)[\mu (t)I+v(t)S],\\[6pt]&{\frac {dI}{dt}}=a(t)SI-\mu (t)I,\\[6pt]&{\frac {dR}{dt}}=[1-b(t)]\mu (t)I,\\[6pt]&{\frac {dV}{dt}}=[1-b(t)]v(t)S,\end{aligned}}$$
where the infection rate $a(t)$ can be written as $\beta (t)/N$, the recovery rate $\mu (t)$ can be simplified to a constant $\gamma $, $v(t)$ is the vaccination rate, and $b(t)$ is the breakthrough ratio, i.e. the fraction (< 1) of immune people who are susceptible to reinfection.
=== MSIR model ===
For many infections, including measles, babies are not born into the susceptible compartment but are immune to the disease for the first few months of life due to protection from maternal antibodies (passed across the placenta and additionally through colostrum). This is called passive immunity. This added detail can be shown by including an M class (for maternally derived immunity) at the beginning of the model.
To indicate this mathematically, an additional compartment is added, M(t). This results in the following differential equations:
$${\begin{aligned}{\frac {dM}{dt}}&=\Lambda -\delta M-\mu M\\[8pt]{\frac {dS}{dt}}&=\delta M-{\frac {\beta SI}{N}}-\mu S\\[8pt]{\frac {dI}{dt}}&={\frac {\beta SI}{N}}-\gamma I-\mu I\\[8pt]{\frac {dR}{dt}}&=\gamma I-\mu R\end{aligned}}$$
=== Carrier state ===
Some people who have had an infectious disease such as tuberculosis never completely recover and continue to carry the infection, whilst not suffering the disease themselves. They may then move back into the infectious compartment and suffer symptoms (as in tuberculosis) or they may continue to infect others in their carrier state, while not suffering symptoms. The most famous example of this is probably Mary Mallon, who infected 22 people with typhoid fever. The carrier compartment is labelled C.
=== SEIR model ===
For many important infections, there is a significant latency period during which individuals have been infected but are not yet infectious themselves. During this period the individual is in compartment E (for exposed).
Assuming that the latency period is a random variable with exponential distribution with parameter $a$ (i.e. the average latency period is $a^{-1}$), and also assuming the presence of vital dynamics with birth rate $\Lambda $ equal to death rate $N\mu $ (so that the total number $N$ is constant), we have the model:
$${\begin{aligned}{\frac {dS}{dt}}&=\mu N-\mu S-{\frac {\beta IS}{N}}\\[8pt]{\frac {dE}{dt}}&={\frac {\beta IS}{N}}-(\mu +a)E\\[8pt]{\frac {dI}{dt}}&=aE-(\gamma +\mu )I\\[8pt]{\frac {dR}{dt}}&=\gamma I-\mu R.\end{aligned}}$$
We have $S+E+I+R=N$, but this is only constant because of the simplifying assumption that birth and death rates are equal; in general $N$ is a variable.
For this model, the basic reproduction number is:
$$R_{0}={\frac {a}{\mu +a}}{\frac {\beta }{\mu +\gamma }}.$$
Similarly to the SIR model, in this case we have a Disease-Free Equilibrium (N, 0, 0, 0) and an Endemic Equilibrium EE, and one can show that, independently of biologically meaningful initial conditions
$$\left(S(0),E(0),I(0),R(0)\right)\in \left\{(S,E,I,R)\in [0,N]^{4}:S\geq 0,E\geq 0,I\geq 0,R\geq 0,S+E+I+R=N\right\},$$
it holds that:
$$R_{0}\leq 1\Rightarrow \lim _{t\to +\infty }\left(S(t),E(t),I(t),R(t)\right)=DFE=(N,0,0,0),$$
$$R_{0}>1,\;I(0)>0\Rightarrow \lim _{t\to +\infty }\left(S(t),E(t),I(t),R(t)\right)=EE.$$
In case of a periodically varying contact rate $\beta (t)$, the condition for the global attractiveness of the DFE is that the following linear system with periodic coefficients:
$${\begin{aligned}{\frac {dE_{1}}{dt}}&=\beta (t)I_{1}-(\mu +a)E_{1}\\[8pt]{\frac {dI_{1}}{dt}}&=aE_{1}-(\gamma +\mu )I_{1}\end{aligned}}$$
is stable (i.e. it has its Floquet eigenvalues inside the unit circle in the complex plane).
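The SEIR system with vital dynamics can be integrated with a forward-Euler sketch that also evaluates the closed-form R₀ above; all parameter values are illustrative:

```python
# Forward-Euler sketch of the SEIR model with vital dynamics.
# beta, a, gamma, mu are illustrative values only.
N = 10000.0
beta, a, gamma, mu = 0.4, 0.2, 0.1, 0.001
S, E, I, R = N - 10.0, 0.0, 10.0, 0.0
dt = 0.1

R0 = (a / (mu + a)) * (beta / (mu + gamma))  # basic reproduction number

for _ in range(20000):                       # 2000 time units
    dS = mu * N - mu * S - beta * I * S / N
    dE = beta * I * S / N - (mu + a) * E
    dI = a * E - (gamma + mu) * I
    dR = gamma * I - mu * R
    S, E, I, R = S + dS * dt, E + dE * dt, I + dI * dt, R + dR * dt
```

With these values R₀ ≈ 3.9 > 1, so the trajectory approaches the endemic equilibrium rather than the disease-free one, in line with the threshold result above.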
=== SEIS model ===
The SEIS model is like the SEIR model (above) except that no immunity is acquired at the end.
S → E → I → S
In this model an infection does not leave any immunity, so individuals that have recovered return to being susceptible, moving back into the S(t) compartment. The following differential equations describe this model:
$${\begin{aligned}{\frac {dS}{dt}}&=\Lambda -{\frac {\beta SI}{N}}-\mu S+\gamma I\\[6pt]{\frac {dE}{dt}}&={\frac {\beta SI}{N}}-(\varepsilon +\mu )E\\[6pt]{\frac {dI}{dt}}&=\varepsilon E-(\gamma +\mu )I\end{aligned}}$$
=== MSEIR model ===
For a disease with the factors of both passive immunity and a latency period, there is the MSEIR model.
M → S → E → I → R
$${\begin{aligned}{\frac {dM}{dt}}&=\Lambda -\delta M-\mu M\\[6pt]{\frac {dS}{dt}}&=\delta M-{\frac {\beta SI}{N}}-\mu S\\[6pt]{\frac {dE}{dt}}&={\frac {\beta SI}{N}}-(\varepsilon +\mu )E\\[6pt]{\frac {dI}{dt}}&=\varepsilon E-(\gamma +\mu )I\\[6pt]{\frac {dR}{dt}}&=\gamma I-\mu R\end{aligned}}$$
=== MSEIRS model ===
An MSEIRS model is similar to the MSEIR, but the immunity in the R class would be temporary, so that individuals would regain their susceptibility when the temporary immunity ended.
M → S → E → I → R → S
=== Variable contact rates ===
It is well known that the probability of getting a disease is not constant in time. As a pandemic progresses, reactions to the pandemic may change the contact rates which are assumed constant in the simpler models. Counter-measures such as masks, social distancing, and lockdown will alter the contact rate in a way to reduce the speed of the pandemic.
In addition, some diseases are seasonal, such as the common cold viruses, which are more prevalent during winter. With childhood diseases, such as measles, mumps, and rubella, there is a strong correlation with the school calendar, so that during the school holidays the probability of getting such a disease dramatically decreases. As a consequence, for many classes of diseases, one should consider a force of infection with a periodically ('seasonally') varying contact rate
$$F=\beta (t){\frac {I}{N}},\quad \beta (t+T)=\beta (t),$$
with period T equal to one year.
Thus, our model becomes
$${\begin{aligned}{\frac {dS}{dt}}&=\mu N-\mu S-\beta (t){\frac {I}{N}}S\\[8pt]{\frac {dI}{dt}}&=\beta (t){\frac {I}{N}}S-(\gamma +\mu )I\end{aligned}}$$
(the dynamics of the recovered class easily follows from $R=N-S-I$), i.e. a nonlinear set of differential equations with periodically varying parameters. It is well known that this class of dynamical systems may undergo very interesting and complex phenomena of nonlinear parametric resonance. It is easy to see that if:
$${\frac {1}{T}}\int _{0}^{T}{\frac {\beta (t)}{\mu +\gamma }}\,dt<1\Rightarrow \lim _{t\to +\infty }(S(t),I(t))=DFE=(N,0),$$
whereas if the integral is greater than one the disease will not die out and there may be such resonances. For example, considering the periodically varying contact rate as the 'input' of the system one has that the output is a periodic function whose period is a multiple of the period of the input.
This has helped to explain the poly-annual (typically biennial) epidemic outbreaks of some infectious diseases as an interplay between the period of the contact-rate oscillations and the pseudo-period of the damped oscillations near the endemic equilibrium. Remarkably, in some cases, the behavior may also be quasi-periodic or even chaotic.
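A seasonally forced SIR run can be sketched with β(t) = β₀(1 + ε cos(2πt/T)); both the forcing form and all parameter values here are illustrative assumptions:

```python
import math

# Forward-Euler sketch of the SIR model with a periodic ('seasonal')
# contact rate beta(t) = beta0*(1 + eps*cos(2*pi*t/T)). Time is in
# years; population fractions are used. All values are illustrative.
N, mu, gamma = 1.0, 0.02, 26.0      # gamma ~ two-week infectious period
beta0, eps, T = 100.0, 0.1, 1.0

S, I = 0.5, 1e-3
dt = 1e-4
t = 0.0
for _ in range(200_000):            # 20 years
    beta = beta0 * (1.0 + eps * math.cos(2.0 * math.pi * t / T))
    dS = mu * N - mu * S - beta * I * S / N
    dI = beta * I * S / N - (gamma + mu) * I
    S, I = S + dS * dt, I + dI * dt
    t += dt
R = N - S - I                       # recovered follows from conservation
```

Here the time-averaged β₀/(μ + γ) ≈ 3.8 exceeds one, so the disease does not die out; plotting I(t) would show the forced oscillations discussed above.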
=== SIR model with diffusion ===
Spatiotemporal compartmental models describe not the total number, but the density of susceptible/infective/recovered persons. Consequently, they also allow one to model the distribution of infected persons in space. In most cases, this is done by combining the SIR model with a diffusion equation
$${\begin{aligned}&\partial _{t}S=D_{S}\nabla ^{2}S-{\frac {\beta IS}{N}},\\[6pt]&\partial _{t}I=D_{I}\nabla ^{2}I+{\frac {\beta IS}{N}}-\gamma I,\\[6pt]&\partial _{t}R=D_{R}\nabla ^{2}R+\gamma I,\end{aligned}}$$
where $D_{S}$, $D_{I}$ and $D_{R}$ are diffusion constants. Thereby, one obtains a reaction-diffusion equation. (Note that, for dimensional reasons, the parameter $\beta $ has to be changed compared to the simple SIR model.) Early models of this type have been used to model the spread of the Black Death in Europe. Extensions of this model have been used to incorporate, e.g., effects of nonpharmaceutical interventions such as social distancing.
=== Interacting Subpopulation SEIR Model ===
As social contacts, disease severity and lethality, as well as the efficacy of prophylactic measures may differ substantially between interacting subpopulations, e.g., the elderly versus the young, separate SEIR models for each subgroup may be used that are mutually connected through interaction links. Such Interacting Subpopulation SEIR models have been used for modeling the COVID-19 pandemic at continent scale to develop personalized, accelerated, subpopulation-targeted vaccination strategies that promise a shortening of the pandemic and a reduction of case and death counts in the setting of limited access to vaccines during a wave of virus Variants of Concern.
=== SIR Model on Networks ===
The SIR model has been studied on networks of various kinds in order to model a more realistic form of connection than the homogeneous mixing condition which is usually required. A simple model for epidemics on networks, in which an individual has a probability p of being infected by each of their infected neighbors in a given time step, leads to results similar to giant component formation on Erdős–Rényi random graphs.
A stochastic compartment model with a transmission pathway via vectors has been developed recently, in which a multiple-random-walkers approach is implemented to investigate the spreading dynamics in random graphs of the Watts–Strogatz and the Barabási–Albert type, mimicking human mobility patterns in complex real-world environments such as cities, streets, and transportation networks. This model captures the class of vector-transmitted infectious diseases such as dengue and malaria (transmission by mosquitoes), pestilence (transmission by fleas), and others.
=== SIRSS model - combination of SIR with modelling of social stress ===
Dynamics of epidemics depend on how people's behavior changes in time. For example, at the beginning of the epidemic, people are ignorant and careless; then, after the outbreak of the epidemic and the alarm, they begin to comply with the various restrictions and the spreading of the epidemic may decline. Over time, some people get tired of or frustrated by the restrictions and stop following them (exhaustion), especially if the number of new cases drops. After resting for some time, they can follow the restrictions again. But during this pause the second wave can come and become even stronger than the first one. Social dynamics should therefore be considered: the social physics models of social stress complement the classical epidemic models.
The simplest SIR-social stress (SIRSS) model is organised as follows. The susceptible individuals (S) can be split into three subgroups by type of behavior: ignorant or unaware of the epidemic (Sign), rationally resistant (Sres), and exhausted (Sexh), who do not react to external stimuli (this is a sort of refractory period). In other words: S(t) = Sign(t) + Sres(t) + Sexh(t). Symbolically, the social stress model can be presented by the "reaction scheme" (where I denotes the infected individuals):
Sign + 2I → Sres + 2I – mobilization reaction (the autocatalytic form here means that the transition rate is proportional to the square of the infected fraction I);
Sres → Sexh – exhaustion process due to fatigue from anti-epidemic restrictions;
Sexh → Sign – slow relaxation to the initial state (end of the refractory period).
The main SIR epidemic reaction
S... + I → 2I
has different reaction rate constants $\beta $ for Sign, Sres, and Sexh. Presumably, $\beta $ is lower for Sres than for Sign and Sexh.
The differences between countries are concentrated in two kinetic constants: the rate of mobilization and the rate of exhaustion, calculated for the COVID-19 epidemic in 13 countries. These constants can be extracted for all countries by fitting the SIRSS model to publicly available data.
=== KdV-SIR equation ===
Based on the classical SIR model, a Korteweg-de Vries (KdV)–SIR equation and its analytical solution have been proposed to illustrate the fundamental dynamics of an epidemic wave, the dependence of solutions on parameters, and the dependence of predictability horizons on various types of solutions. The KdV-SIR equation is written as follows:
$${\frac {d^{2}I}{dt^{2}}}-\sigma _{o}^{2}I+{\frac {3}{2}}{\frac {\sigma _{o}^{2}}{I_{max}}}I^{2}=0.$$
Here, $\sigma _{o}=\gamma (R_{o}-1)$, $R_{o}={\frac {\beta }{\gamma }}{\frac {S_{o}}{N}}$, and $I_{max}={\frac {S_{o}}{2}}{\frac {(R_{o}-1)^{2}}{R_{o}^{2}}}$.
$S_{o}$ indicates the initial value of the state variable $S$. The parameters $\sigma _{o}$ (sigma-naught) and $R_{o}$ (R-naught) are the time-independent relative growth rate and basic reproduction number, respectively. $I_{max}$ represents the maximum of the state variable $I$ (the number of infected persons). The KdV-SIR equation shares the same form as the Korteweg–De Vries equation in the traveling-wave coordinate. An analytical solution to the KdV-SIR equation is written as follows:
$$I=I_{max}\,\operatorname{sech}^{2}\left({\frac {\sigma _{o}}{2}}t\right),$$
which represents a solitary wave solution.
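The solitary-wave solution can be evaluated directly from the definitions above; the parameter values in this sketch are illustrative:

```python
import math

# Evaluate the analytical KdV-SIR solution I(t) = I_max*sech(sigma0*t/2)**2
# using the definitions of sigma0, R_o and I_max above. Values illustrative.
gamma, Ro, So = 0.1, 2.5, 1000.0
sigma0 = gamma * (Ro - 1.0)                      # relative growth rate
I_max = (So / 2.0) * (Ro - 1.0) ** 2 / Ro ** 2   # peak number of infected

def I(t):
    sech = 1.0 / math.cosh(sigma0 * t / 2.0)
    return I_max * sech ** 2
```

The profile is symmetric about t = 0, where it attains its maximum I_max, i.e. the epidemic peak of the underlying SIR dynamics.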
== Heterogeneous (structured, Bayesian) model ==
Modeling a full population of possibly millions of people using two constants $\beta $ and $\gamma $ seems far-fetched; each individual has personal characteristics that influence the propagation: immunity status, contact habits and so on. So it is interesting to know what happens if, for instance, $\beta $ and $\gamma $ are not two constants but random variables (a pair for each individual). This procedure has several names: "heterogeneous model", "structuration" (see also below for age-structured models) or the "Bayesian" view. Surprising results emerge; for instance, it has been proved that the number of infected at the peak of a heterogeneous epidemic is smaller than in the deterministic epidemic having the same average $\beta $; the same holds true for the total epidemic size $S(0)-S(\infty )$ and for other models, e.g. SEIR.
== Modelling vaccination ==
The SIR model can be modified to model vaccination. Typically these introduce an additional compartment to the SIR model, {\displaystyle V}, for vaccinated individuals. Below are some examples.
=== Vaccinating newborns ===
In the presence of a communicable disease, one of the main tasks is to eradicate it via prevention measures and, if possible, via the establishment of a mass vaccination program. Consider a disease for which newborns are vaccinated (with a vaccine giving lifelong immunity) at a rate {\displaystyle P\in (0,1)}:
{\displaystyle {\begin{aligned}{\frac {dS}{dt}}&=\nu N(1-P)-\mu S-\beta {\frac {I}{N}}S\\[8pt]{\frac {dI}{dt}}&=\beta {\frac {I}{N}}S-(\mu +\gamma )I\\[8pt]{\frac {dV}{dt}}&=\nu NP-\mu V\end{aligned}}}
where {\displaystyle V} is the class of vaccinated subjects. It is immediate to show that:
{\displaystyle \lim _{t\to +\infty }V(t)=NP,}
thus we shall deal with the long-term behavior of {\displaystyle S} and {\displaystyle I}, for which it holds that:
{\displaystyle R_{0}(1-P)\leq 1\Rightarrow \lim _{t\to +\infty }\left(S(t),I(t)\right)=DFE=\left(N\left(1-P\right),0\right)}
{\displaystyle R_{0}(1-P)>1,\quad I(0)>0\Rightarrow \lim _{t\to +\infty }\left(S(t),I(t)\right)=EE=\left({\frac {N}{R_{0}(1-P)}},N\left(R_{0}(1-P)-1\right)\right).}
In other words, if {\displaystyle P<P^{*}=1-{\frac {1}{R_{0}}}} the vaccination program is not successful in eradicating the disease; on the contrary, the disease will remain endemic, although at lower levels than in the absence of vaccination. This means that the mathematical model suggests that for a disease whose basic reproduction number may be as high as 18, one should vaccinate at least 94.4% of newborns in order to eradicate the disease.
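The eradication threshold is a one-line computation; a minimal sketch (the function name is illustrative):

```python
def critical_coverage(r0):
    # Minimum fraction of newborns to vaccinate: P* = 1 - 1/R0
    return 1.0 - 1.0 / r0

# For a disease with basic reproduction number 18 the threshold is about 94.4%,
# matching the figure quoted in the text
threshold = critical_coverage(18.0)
```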
=== Vaccination and information ===
Modern societies are facing the challenge of "rational" exemption, i.e. a family's decision not to vaccinate children as a consequence of a "rational" comparison between the perceived risk from infection and the perceived risk of damage from the vaccine. In order to assess whether this behavior is really rational, i.e. whether it can equally lead to the eradication of the disease, one may simply assume that the vaccination rate is an increasing function of the number of infectious subjects:
{\displaystyle P=P(I),\quad P'(I)>0.}
In such a case the eradication condition becomes:
{\displaystyle P(0)\geq P^{*},}
i.e. the baseline vaccination rate should be greater than the "mandatory vaccination" threshold, which, in case of exemption, cannot hold. Thus, "rational" exemption may be myopic, since it is based only on the current low incidence due to high vaccine coverage, instead of taking into account a future resurgence of infection due to coverage decline.
=== Vaccination of non-newborns ===
If non-newborns are also vaccinated at a rate ρ, the equations for the susceptible and vaccinated subjects have to be modified as follows:
{\displaystyle {\begin{aligned}{\frac {dS}{dt}}&=\mu N(1-P)-\mu S-\rho S-\beta {\frac {I}{N}}S\\[8pt]{\frac {dV}{dt}}&=\mu NP+\rho S-\mu V\end{aligned}}}
leading to the following eradication condition:
{\displaystyle P\geq 1-\left(1+{\frac {\rho }{\mu }}\right){\frac {1}{R_{0}}}}
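The condition above implies that vaccinating non-newborns lowers the required newborn coverage. A minimal numerical sketch (names and parameter values are illustrative assumptions):

```python
def newborn_threshold(r0, rho=0.0, mu=1.0):
    # Eradication condition: P >= 1 - (1 + rho/mu) / R0
    # Setting rho = 0 recovers the newborn-only threshold 1 - 1/R0
    return 1.0 - (1.0 + rho / mu) / r0

base = newborn_threshold(18.0)                              # newborn-only program
with_catchup = newborn_threshold(18.0, rho=0.02, mu=0.02)   # adds catch-up vaccination
```

As expected, the threshold with catch-up vaccination is strictly lower than the newborn-only one.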
=== Pulse vaccination strategy ===
This strategy repeatedly vaccinates a defined age cohort (such as young children or the elderly) in a susceptible population over time. Using this strategy, the block of susceptible individuals is immediately removed, making it possible to eliminate an infectious disease (such as measles) from the entire population. Every T time units a constant fraction p of susceptible subjects is vaccinated in a time that is short relative to the dynamics of the disease. This leads to the following impulsive differential equations for the susceptible and vaccinated subjects:
{\displaystyle {\begin{aligned}{\frac {dS}{dt}}&=\mu N-\mu S-\beta {\frac {I}{N}}S,\quad S(nT^{+})=(1-p)S(nT^{-}),&&n=0,1,2,\ldots \\[8pt]{\frac {dV}{dt}}&=-\mu V,\quad V(nT^{+})=V(nT^{-})+pS(nT^{-}),&&n=0,1,2,\ldots \end{aligned}}}
It is easy to see that by setting I = 0 one obtains that the dynamics of the susceptible subjects is given by:
{\displaystyle S^{*}(t)=1-{\frac {p}{1-(1-p)E^{-\mu T}}}E^{-\mu MOD(t,T)}}
and that the eradication condition is:
{\displaystyle R_{0}\int _{0}^{T}S^{*}(t)\,dt<1}
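The disease-free susceptible dynamics and the eradication integral can be evaluated numerically; a minimal sketch (parameter values are arbitrary illustrations; MOD(t, T) is the remainder of t modulo T):

```python
import math

def susceptible_fraction(t, p, mu, T):
    # Periodic disease-free solution S*(t) = 1 - p/(1-(1-p)e^{-mu T}) * e^{-mu MOD(t,T)}
    amplitude = p / (1.0 - (1.0 - p) * math.exp(-mu * T))
    return 1.0 - amplitude * math.exp(-mu * (t % T))

def eradication_lhs(r0, p, mu, T, steps=10000):
    # Left-hand side R0 * integral_0^T S*(t) dt of the eradication condition (midpoint rule)
    dt = T / steps
    return r0 * sum(susceptible_fraction((k + 0.5) * dt, p, mu, T)
                    for k in range(steps)) * dt
```

Vaccinating a larger fraction p at each pulse lowers the left-hand side, making the eradication condition easier to satisfy.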
=== Vaccination games ===
A large literature recognizes that vaccination can be seen as a game: in a population where everybody is vaccinated, any epidemic will die off immediately, so an additional person has no incentive to vaccinate at all. On the contrary, a person arriving in a population where nobody is vaccinated has every incentive to vaccinate (an epidemic would break loose in such a population). So it seems that the individual has an interest in doing the opposite of the population as a whole; but the population is the sum of all individuals, so the previous affirmation cannot hold exactly, and in fact a Nash equilibrium is reached. Technical tools to treat such situations involve game theory or modern tools such as mean-field game theory.
== The influence of age: age-structured models ==
Age has a deep influence on the disease spread rate in a population, especially the contact rate. This rate summarizes the effectiveness of contacts between susceptible and infectious subjects. Taking into account the ages of the epidemic classes {\displaystyle s(t,a),i(t,a),r(t,a)} (to limit ourselves to the susceptible-infectious-removed scheme) such that:
{\displaystyle S(t)=\int _{0}^{a_{M}}s(t,a)\,da}
{\displaystyle I(t)=\int _{0}^{a_{M}}i(t,a)\,da}
{\displaystyle R(t)=\int _{0}^{a_{M}}r(t,a)\,da}
(where {\displaystyle a_{M}\leq +\infty } is the maximum admissible age), their dynamics is described not, as one might think, by "simple" partial differential equations, but by integro-differential equations:
{\displaystyle \partial _{t}s(t,a)+\partial _{a}s(t,a)=-\mu (a)s(a,t)-s(a,t)\int _{0}^{a_{M}}k(a,a_{1};t)i(a_{1},t)\,da_{1}}
{\displaystyle \partial _{t}i(t,a)+\partial _{a}i(t,a)=s(a,t)\int _{0}^{a_{M}}{k(a,a_{1};t)i(a_{1},t)da_{1}}-\mu (a)i(a,t)-\gamma (a)i(a,t)}
{\displaystyle \partial _{t}r(t,a)+\partial _{a}r(t,a)=-\mu (a)r(a,t)+\gamma (a)i(a,t)}
where:
{\displaystyle F(a,t,i(\cdot ,\cdot ))=\int _{0}^{a_{M}}k(a,a_{1};t)i(a_{1},t)\,da_{1}}
is the force of infection, which, of course, will depend, through the contact kernel {\displaystyle k(a,a_{1};t)}, on the interactions between the ages.
Complexity is added by the initial conditions for newborns (i.e. for a=0), which are straightforward for the infectious and removed:
{\displaystyle i(t,0)=r(t,0)=0}
but that are nonlocal for the density of susceptible newborns:
{\displaystyle s(t,0)=\int _{0}^{a_{M}}\left(\varphi _{s}(a)s(a,t)+\varphi _{i}(a)i(a,t)+\varphi _{r}(a)r(a,t)\right)\,da}
where {\displaystyle \varphi _{j}(a),\ j=s,i,r} are the fertilities of the adults.
Moreover, defining now the density of the total population
{\displaystyle n(t,a)=s(t,a)+i(t,a)+r(t,a)}
one obtains:
{\displaystyle \partial _{t}n(t,a)+\partial _{a}n(t,a)=-\mu (a)n(a,t)}
In the simplest case of equal fertilities in the three epidemic classes, in order to have demographic equilibrium the following necessary and sufficient condition linking the fertility {\displaystyle \varphi (\cdot )} with the mortality {\displaystyle \mu (a)} must hold:
{\displaystyle 1=\int _{0}^{a_{M}}\varphi (a)\exp \left(-\int _{0}^{a}{\mu (q)dq}\right)\,da}
and the demographic equilibrium is
{\displaystyle n^{*}(a)=C\exp \left(-\int _{0}^{a}\mu (q)\,dq\right),}
automatically ensuring the existence of the disease-free solution:
{\displaystyle DFS(a)=(n^{*}(a),0,0).}
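As a sanity check on the demographic-equilibrium condition, for constant mortality the condition forces a constant fertility equal to the mortality rate. This can be verified numerically; a minimal sketch (the truncation age and step count are arbitrary assumptions):

```python
import math

def renewal_integral(phi, mu, a_max=2000.0, steps=200000):
    # Midpoint-rule evaluation of integral_0^{a_M} phi(a) exp(-integral_0^a mu(q) dq) da
    # for constant phi and mu, where the inner integral is simply mu * a
    da = a_max / steps
    return sum(phi * math.exp(-mu * (k + 0.5) * da) for k in range(steps)) * da

# With phi = mu the demographic-equilibrium condition evaluates to 1
value = renewal_integral(phi=0.02, mu=0.02)
```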
A basic reproduction number can be calculated as the spectral radius of an appropriate functional operator.
=== Next-generation method ===
One way to calculate {\displaystyle R_{0}} is to average the expected number of new infections over all possible infected types. The next-generation method is a general method of deriving {\displaystyle R_{0}} when more than one class of infectives is involved. This method, originally introduced by Diekmann et al. (1990), can be used for models with underlying age structure or spatial structure, among other possibilities. In this picture, the spectral radius of the next-generation matrix {\displaystyle G} gives the basic reproduction number,
{\displaystyle R_{0}=\rho (G).}
Consider a sexually transmitted disease. In a naive population where almost everyone is susceptible except the infection seed, if the expected number of infections among gender 1 is {\displaystyle f} and the expected number of infections among gender 2 is {\displaystyle m}, we can determine how many would be infected in the next generation. The next-generation matrix {\displaystyle G} can then be written as:
{\displaystyle G={\begin{pmatrix}0&f\\m&0\end{pmatrix}},}
where each element {\displaystyle g_{ij}} is the expected number of secondary infections of gender {\displaystyle i} caused by a single infected individual of gender {\displaystyle j}, assuming that the population of gender {\displaystyle i} is entirely susceptible. Diagonal elements are zero because people of the same gender cannot transmit the disease to each other but, for example, each {\displaystyle f} can transmit the disease to {\displaystyle m}, on average. This means that each element {\displaystyle g_{ij}} is a reproduction number, but one where who infects whom is accounted for. If generation {\displaystyle a} is represented by {\displaystyle \phi _{a}}, then the next generation {\displaystyle \phi _{a+1}} would be {\displaystyle G\phi _{a}}.
The spectral radius of the next-generation matrix is the basic reproduction number, {\displaystyle R_{0}=\rho (G)={\sqrt {mf}}}, that is, here, the geometric mean of the expected number of each gender in the next generation. Note that the multiplication factors {\displaystyle f} and {\displaystyle m} alternate because the infectious person has to 'pass through' a second gender before it can enter a new host of the first gender. In other words, it takes two generations to get back to the same type, and every two generations numbers are multiplied by {\displaystyle m\times f}. The average per-generation multiplication factor is therefore {\displaystyle {\sqrt {mf}}}. Note that {\displaystyle G} is a non-negative matrix, so it has a single, unique, positive, real eigenvalue which is strictly greater than all the others.
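For this 2×2 case the spectral radius can be computed directly from the characteristic polynomial; a minimal sketch (the values of f and m are arbitrary illustrations):

```python
import math

def spectral_radius_2x2(a, b, c, d):
    # Largest |eigenvalue| of [[a, b], [c, d]], from the characteristic polynomial
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4.0 * det
    if disc >= 0.0:
        return max(abs((tr + math.sqrt(disc)) / 2.0),
                   abs((tr - math.sqrt(disc)) / 2.0))
    return math.sqrt(det)  # complex conjugate pair: |eigenvalue| = sqrt(det)

f, m = 4.0, 1.5  # arbitrary expected secondary infections per gender
r0 = spectral_radius_2x2(0.0, f, m, 0.0)  # equals sqrt(m * f), the geometric mean
```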
=== Next-generation matrix for compartmental models ===
In mathematical modelling of infectious disease, the dynamics of spreading are usually described through a set of non-linear ordinary differential equations (ODEs), so there are always {\displaystyle n} coupled equations of the form
{\displaystyle {\dot {C_{i}}}={\operatorname {d} \!C_{i} \over \operatorname {d} \!t}=f(C_{1},C_{2},...,C_{n})}
which shows how the number of people in compartment {\displaystyle C_{i}} changes over time. For example, in a SIR model, {\displaystyle C_{1}=S}, {\displaystyle C_{2}=I}, and {\displaystyle C_{3}=R}. Compartmental models have a disease-free equilibrium (DFE), meaning that it is possible to find an equilibrium by setting the number of infected people to zero, {\displaystyle I=0}. In other words, as a rule, there is an infection-free steady state. There is another fixed point, known as an endemic equilibrium (EE), where the disease is not totally eradicated and remains in the population. Mathematically,
{\displaystyle R_{0}} is a threshold for stability of a disease-free equilibrium such that:
{\displaystyle R_{0}\leq 1\Rightarrow \lim _{t\to \infty }(C_{1}(t),C_{2}(t),\cdots ,C_{n}(t))={\textrm {DFE}}}
{\displaystyle R_{0}>1,I(0)>0\Rightarrow \lim _{t\to \infty }(C_{1}(t),C_{2}(t),\cdots ,C_{n}(t))={\textrm {EE}}.}
To calculate {\displaystyle R_{0}}, the first step is to linearise around the disease-free equilibrium (DFE), but only for the infected subsystem of non-linear ODEs, which describes the production of new infections and changes in state among infected individuals. Epidemiologically, the linearisation reflects that {\displaystyle R_{0}} characterizes the potential for initial spread of an infectious person in a naive population, assuming the change in the susceptible population is negligible during the initial spread. A linear system of ODEs can always be described by a matrix. So, the next step is to construct a linear positive operator that provides the next generation of infected people when applied to the present generation. Note that this operator (matrix) is responsible for the number of infected people, not all the compartments. Iteration of this operator describes the initial progression of infection within the heterogeneous population, so comparing the spectral radius of this operator to unity determines whether the generations of infected people grow or not. {\displaystyle R_{0}} can be written as a product of the infection rate near the disease-free equilibrium and the average duration of infectiousness. It is used to find the peak and final size of an epidemic.
==== The SEIR model with vital dynamics and constant population ====
As described in the example above, many epidemic processes can be described with a SIR model. However, for many important infections, such as COVID-19, there is a significant latency period during which individuals have been infected but are not yet infectious themselves. During this period the individual is in compartment E (for exposed). Here, the formation of the next-generation matrix from the SEIR model involves determining two compartments, infected and non-infected, since they are the populations that spread the infection. So we only need to model the exposed, E, and infected, I, compartments. Consider a population characterized by a death rate {\displaystyle \mu } and birth rate {\displaystyle \lambda } where a communicable disease is spreading. As in the previous example, we can use the transition rates between the compartments per capita such that {\displaystyle \beta } is the infection rate, {\displaystyle \gamma } is the recovery rate, and {\displaystyle \kappa } is the rate at which a latent individual becomes infectious. Then, we can define the model dynamics using the following equations:
{\displaystyle {\begin{cases}{\dot {S}}=\lambda -\mu S-\beta SI,\\\\{\dot {E}}=\beta SI-(\mu +\kappa )E,\\\\{\dot {I}}=\kappa E-(\mu +\gamma )I,\\\\{\dot {R}}=\gamma I-\mu R.\end{cases}}}
Here we have 4 compartments and we can define the vector {\displaystyle \mathrm {x} =(S,E,I,R)}, where {\displaystyle \mathrm {x} _{i}} denotes the number or proportion of individuals in the {\displaystyle i}-th compartment. Let {\displaystyle F_{i}(\mathrm {x} )} be the rate of appearance of new infections in compartment {\displaystyle i}, such that it includes only infections that are newly arising and not terms which describe the transfer of infectious individuals from one infected compartment to another. Then, if {\displaystyle V_{i}^{+}} is the rate of transfer of individuals into compartment {\displaystyle i} by all other means and {\displaystyle V_{i}^{-}} is the rate of transfer of individuals out of the {\displaystyle i}-th compartment, the difference {\displaystyle F_{i}(\mathrm {x} )-V_{i}(\mathrm {x} )} gives the rate of change of {\displaystyle \mathrm {x} _{i}}, where {\displaystyle V_{i}(\mathrm {x} )=V_{i}^{-}(\mathrm {x} )-V_{i}^{+}(\mathrm {x} )}.
We can now make matrices of partial derivatives of {\displaystyle F} and {\displaystyle V} such that
{\displaystyle F_{ij}={\partial \!\ F_{i}(\mathrm {x} ^{*}) \over \partial \!\ \mathrm {x} _{j}}}
and
{\displaystyle V_{ij}={\partial \!\ V_{i}(\mathrm {x} ^{*}) \over \partial \!\ \mathrm {x} _{j}}},
where {\displaystyle \mathrm {x} ^{*}=(S^{*},E^{*},I^{*},R^{*})=(\lambda /\mu ,0,0,0)} is the disease-free equilibrium.
We now can form the next-generation matrix (operator) {\displaystyle G=FV^{-1}}. Basically, {\displaystyle F} is a non-negative matrix which represents the infection rates near the equilibrium, and {\displaystyle V} is an M-matrix for linear transition terms, making {\displaystyle V^{-1}} a matrix which represents the average duration of infectiousness. Therefore, {\displaystyle G_{ij}} gives the rate at which infected individuals in {\displaystyle \mathrm {x} _{j}} produce new infections in {\displaystyle \mathrm {x} _{i}}, times the average length of time an individual spends in a single visit to compartment {\displaystyle j.}
Finally, for this SEIR process we can have:
{\displaystyle F={\begin{pmatrix}0&\beta S^{*}\\0&0\end{pmatrix}}}
and
{\displaystyle V={\begin{pmatrix}\mu +\kappa &0\\-\kappa &\gamma +\mu \end{pmatrix}}}
and so
{\displaystyle R_{0}=\rho (FV^{-1})={\frac {\kappa \beta S^{*}}{(\mu +\kappa )(\mu +\gamma )}}.}
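The closed-form expression can be cross-checked against an explicit computation of the spectral radius of FV⁻¹; a minimal sketch with arbitrary illustrative rates:

```python
def seir_r0_closed_form(beta, kappa, gamma, mu, s_star):
    # R0 = kappa * beta * S* / ((mu + kappa) * (mu + gamma))
    return kappa * beta * s_star / ((mu + kappa) * (mu + gamma))

def seir_r0_next_generation(beta, kappa, gamma, mu, s_star):
    # Build G = F V^{-1} explicitly for the 2x2 infected subsystem (E, I)
    det_v = (mu + kappa) * (gamma + mu)
    v_inv = [[(gamma + mu) / det_v, 0.0],
             [kappa / det_v, (mu + kappa) / det_v]]
    # F = [[0, beta*S*], [0, 0]], so only the first row of G is non-zero:
    # G = [[beta*S* * v_inv[1][0], beta*S* * v_inv[1][1]], [0, 0]]
    # A matrix with a zero second row has eigenvalues G[0][0] and 0
    return abs(beta * s_star * v_inv[1][0])
```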
== Estimation methods ==
The basic reproduction number can be estimated through examining detailed transmission chains or through genomic sequencing. However, it is most frequently calculated using epidemiological models. During an epidemic, typically the number of diagnosed infections {\displaystyle N(t)} over time {\displaystyle t} is known. In the early stages of an epidemic, growth is exponential, with a logarithmic growth rate
{\displaystyle K:={\frac {d\ln(N)}{dt}}.}
For exponential growth, {\displaystyle N} can be interpreted as the cumulative number of diagnoses (including individuals who have recovered) or the present number of infection cases; the logarithmic growth rate is the same for either definition. In order to estimate {\displaystyle R_{0}}, assumptions are necessary about the time delay between infection and diagnosis and the time between infection and starting to be infectious.
In exponential growth, {\displaystyle K} is related to the doubling time {\displaystyle T_{d}} as
{\displaystyle K={\frac {\ln(2)}{T_{d}}}.}
=== Simple model ===
If an individual, after getting infected, infects exactly {\displaystyle R_{0}} new individuals only after exactly a time {\displaystyle \tau } (the serial interval) has passed, then the number of infectious individuals over time grows as
{\displaystyle n_{E}(t)=n_{E}(0)\,R_{0}^{t/\tau }=n_{E}(0)\,e^{Kt}}
or
{\displaystyle \ln(n_{E}(t))=\ln(n_{E}(0))+\ln(R_{0})t/\tau .}
The underlying matching differential equation is
{\displaystyle {\frac {dn_{E}(t)}{dt}}=n_{E}(t){\frac {\ln(R_{0})}{\tau }}.}
or
{\displaystyle {\frac {d\ln(n_{E}(t))}{dt}}={\frac {\ln(R_{0})}{\tau }}.}
In this case, {\displaystyle R_{0}=e^{K\tau }} or {\displaystyle K={\frac {\ln R_{0}}{\tau }}}.
For example, with {\displaystyle \tau =5~\mathrm {d} } and {\displaystyle K=0.183~\mathrm {d} ^{-1}}, we would find {\displaystyle R_{0}=2.5}.
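This back-of-the-envelope estimate is trivial to reproduce; a minimal sketch using the worked example from the text:

```python
import math

def r0_simple(K, tau):
    # Simple-model estimate: R0 = exp(K * tau)
    return math.exp(K * tau)

# Worked example from the text: tau = 5 days, K = 0.183 per day
estimate = r0_simple(0.183, 5.0)  # close to 2.5
```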
If {\displaystyle R_{0}} is time dependent,
{\displaystyle \ln(n_{E}(t))=\ln(n_{E}(0))+{\frac {1}{\tau }}\int \limits _{0}^{t}\ln(R_{0}(t))dt}
showing that it may be important to keep {\displaystyle \ln(R_{0})} below 0, time-averaged, to avoid exponential growth.
=== Latent infectious period, isolation after diagnosis ===
In this model, an individual infection has the following stages:
Exposed: an individual is infected, but has no symptoms and does not yet infect others. The average duration of the exposed state is {\displaystyle \tau _{E}}.
Latent infectious: an individual is infected, has no symptoms, but does infect others. The average duration of the latent infectious state is {\displaystyle \tau _{I}}. The individual infects {\displaystyle R_{0}} other individuals during this period.
Isolation after diagnosis: measures are taken to prevent further infections, for example by isolating the infected person.
This is a SEIR model and {\displaystyle R_{0}} may be written in the following form:
{\displaystyle R_{0}=1+K(\tau _{E}+\tau _{I})+K^{2}\tau _{E}\tau _{I}.}
This estimation method has been applied to COVID-19 and SARS. It follows from the differential equation for the number of exposed individuals {\displaystyle n_{E}} and the number of latent infectious individuals {\displaystyle n_{I}},
{\displaystyle {\frac {d}{dt}}{\begin{pmatrix}n_{E}\\n_{I}\end{pmatrix}}={\begin{pmatrix}-1/\tau _{E}&R_{0}/\tau _{I}\\1/\tau _{E}&-1/\tau _{I}\end{pmatrix}}{\begin{pmatrix}n_{E}\\n_{I}\end{pmatrix}}.}
The largest eigenvalue of the matrix is the logarithmic growth rate {\displaystyle K}, which can be solved for {\displaystyle R_{0}}.
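The relation between K and R0 can be checked for self-consistency: the largest eigenvalue of the matrix above, plugged into the quadratic expression for R0, recovers the original R0. A minimal sketch (parameter values are arbitrary illustrations):

```python
import math

def growth_rate(r0, tau_e, tau_i):
    # Largest eigenvalue K of [[-1/tau_e, r0/tau_i], [1/tau_e, -1/tau_i]]
    tr = -1.0 / tau_e - 1.0 / tau_i          # trace
    det = (1.0 - r0) / (tau_e * tau_i)       # determinant
    return (tr + math.sqrt(tr * tr - 4.0 * det)) / 2.0

def r0_from_growth_rate(K, tau_e, tau_i):
    # R0 = 1 + K(tau_E + tau_I) + K^2 tau_E tau_I
    return 1.0 + K * (tau_e + tau_i) + K ** 2 * tau_e * tau_i

K = growth_rate(2.5, 3.0, 2.0)
recovered = r0_from_growth_rate(K, 3.0, 2.0)  # recovers 2.5
```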
In the special case {\displaystyle \tau _{I}=0}, this model results in {\displaystyle R_{0}=1+K\tau _{E}}, which is different from the simple model above ({\displaystyle R_{0}=\exp(K\tau _{E})}). For example, with the same values {\displaystyle \tau =5~\mathrm {d} } and {\displaystyle K=0.183~\mathrm {d} ^{-1}}, we would find {\displaystyle R_{0}=1.9}, rather than the true value of {\displaystyle 2.5}. The difference is due to a subtle difference in the underlying growth model: the matrix equation above assumes that newly infected patients are already contributing to infections, while in fact infections only occur due to those infected a time {\displaystyle \tau _{E}} ago. A more correct treatment would require the use of delay differential equations.
The latent period is the transition time between the contagion event and disease manifestation. For diseases with varying latent periods, the basic reproduction number can be calculated as the sum of the reproduction numbers for each transition time into the disease. An example of this is tuberculosis (TB). Blower and coauthors calculated from a simple model of TB the following reproduction number:
{\displaystyle R_{0}=R_{0}^{\text{FAST}}+R_{0}^{\text{SLOW}}}
In their model, it is assumed that infected individuals can develop active TB either by direct progression (the disease develops immediately after infection), considered above as FAST tuberculosis, or by endogenous reactivation (the disease develops years after infection), considered above as SLOW tuberculosis.
== Other considerations within compartmental epidemic models ==
=== Vertical transmission ===
In the case of some diseases such as AIDS and hepatitis B, it is possible for the offspring of infected parents to be born infected. This transmission of the disease down from the mother is referred to as vertical transmission. The influx of additional members into the infected category can be considered within the model by including a fraction of the newborn members in the infected compartment.
=== Vector transmission ===
Diseases transmitted from human to human indirectly, e.g. malaria spread by way of mosquitoes, are transmitted through a vector. In these cases, the infection transfers from human to insect, and an epidemic model must include both species, generally requiring many more compartments than a model for direct transmission.
=== Others ===
Other occurrences which may need to be considered when modeling an epidemic include things such as the following:
Non-homogeneous mixing
Variable infectivity
Distributions that are spatially non-uniform
Diseases caused by macroparasites
== Deterministic versus stochastic epidemic models ==
The deterministic models presented here are valid only in the case of sufficiently large populations, and as such should be used cautiously. These models are only valid in the thermodynamic limit, where the population is effectively infinite. In stochastic models, the long-time endemic equilibrium derived above does not hold, as there is a finite probability that the number of infected individuals drops below one in a system. In a true system, the pathogen may then not propagate, as no host will be infected. But in deterministic mean-field models, the number of infected can take on real, that is, non-integer, values of infected hosts, and the number of hosts in the model can be less than one but more than zero, thereby allowing the pathogen in the model to propagate. The reliability of compartmental models is limited to compartmental applications.
One of the possible extensions of mean-field models considers the spreading of epidemics on a network based on percolation theory concepts. Stochastic epidemic models have been studied on different networks and more recently applied to the COVID-19 pandemic.
== See also ==
Attack rate
Basic reproduction number
Flatten the curve
List of COVID-19 simulation models
Mathematical modelling in epidemiology
Modifiable areal unit problem
Next-generation matrix
Risk assessment
== References ==
== Further reading ==
May RM, Anderson RM (1991). Infectious diseases of humans: dynamics and control. Oxford: Oxford University Press. ISBN 0-19-854040-X.
Vynnycky E, White RG, eds. (2010). An Introduction to Infectious Disease Modelling. Oxford: Oxford University Press. ISBN 978-0-19-856576-5.
Capasso V (2008). Mathematical Structures of Epidemic Systems. 2nd Printing. Heidelberg: Springer. ISBN 978-3-540-56526-0.
Carlson CS, Rubin DM, Heikkilä V, Postema M (2021). "Extracting transmission and recovery parameters for an adaptive global system dynamics model of the COVID-19 pandemic". 2021 IEEE Africon (PDF). pp. 456–459. doi:10.1109/AFRICON51333.2021.9570946. ISBN 978-1-6654-1984-0. S2CID 239899862.
== External links ==
SIR model: Online experiments with JSXGraph
"Simulating an epidemic". 3Blue1Brown. March 27, 2020 – via YouTube.
In network science, the efficiency of a network is a measure of how efficiently it exchanges information; it is also called communication efficiency. The underlying idea (and main assumption) is that the more distant two nodes are in the network, the less efficient their communication will be. The concept of efficiency can be applied to both local and global scales in a network. On a global scale, efficiency quantifies the exchange of information across the whole network, where information is concurrently exchanged. The local efficiency quantifies a network's resistance to failure on a small scale: the local efficiency of a node {\displaystyle i} characterizes how well information is exchanged by its neighbors when it is removed.
== Definition ==
The definition of communication efficiency assumes that the efficiency is inversely proportional to the distance, so in mathematical terms
{\displaystyle \epsilon _{ij}={\frac {1}{d_{ij}}}}
where {\displaystyle \epsilon _{ij}} is the pairwise efficiency of nodes {\displaystyle i,j\in V} in network {\displaystyle G=(V,E)} and {\displaystyle d_{ij}} is their distance.
The average communication efficiency of the network {\displaystyle G} is then defined as the average over the pairwise efficiencies:
{\displaystyle E(G)={\frac {1}{N(N-1)}}\sum _{i\neq j\in V}{\frac {1}{d_{ij}}}}
where {\displaystyle N=|V|} denotes the number of nodes in the network.
Distances can be measured in different ways, depending on the type of network. The most natural distance for unweighted networks is the length of a shortest path between nodes {\displaystyle i} and {\displaystyle j}: a shortest path between {\displaystyle i,j} is a path with the minimum number of edges, and that number of edges is its length. Observe that if {\displaystyle i=j} then {\displaystyle d_{ij}=0} (which is why the sum above is over {\displaystyle i\neq j}), while if there is no path connecting {\displaystyle i} and {\displaystyle j}, then {\displaystyle d_{ij}=\infty } and their pairwise efficiency is zero. Since {\displaystyle d_{ij}} is a count, for {\displaystyle i\neq j} we have {\displaystyle d_{ij}\geq 1}, and so {\displaystyle E(G)} is bounded between 0 and 1, i.e. it is a normalised descriptor.
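The definition can be implemented directly for unweighted graphs with a breadth-first search; a minimal sketch (the adjacency-list representation is an assumption for illustration):

```python
from collections import deque

def global_efficiency(adj):
    # E(G) = (1/(N(N-1))) * sum_{i != j} 1/d_ij, with 1/inf = 0 for unreachable pairs
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for source in nodes:
        dist = {source: 0}          # BFS shortest-path distances from source
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != source)
    return total / (n * (n - 1))

# Path graph 0-1-2: distances 1, 1 and 2, so E = (2*1 + 2*1 + 2*(1/2)) / 6 = 5/6
path_graph = {0: {1}, 1: {0, 2}, 2: {1}}
```

A complete graph attains the maximum value 1, consistent with the normalisation noted above.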
== Weighted networks ==
The shortest-path distance can also be generalised to weighted networks (see the weighted shortest-path distance), but in this case {\displaystyle d_{ij}^{W}\in [0,+\infty ]} and the average communication efficiency needs to be properly normalised in order to be comparable among different networks.
The authors of the original work proposed to normalise {\displaystyle E(G)} by dividing it by the efficiency of an idealised version of the network {\displaystyle G}:

{\displaystyle E_{\text{glob}}(G)={\frac {E(G)}{E(G^{\text{ideal}})}}.}

{\displaystyle G^{\text{ideal}}} is the "ideal" graph on {\displaystyle N} nodes wherein all possible edges are present. In the unweighted case, every edge has unitary weight, {\displaystyle G^{\text{ideal}}} is a clique (a full network), and {\displaystyle E(G^{\text{ideal}})=1}. When the edges are weighted, a sufficient condition on the distances in the ideal network, this time called {\displaystyle l_{ij}}, for having a proper normalisation, i.e. {\displaystyle E_{\text{glob}}(G)\in [0,1]}, is {\displaystyle l_{ij}\leq d_{ij}} for {\displaystyle i,j=1,...,N}. The {\displaystyle l_{ij}} should be known (and different from zero) for all node pairs.
A common choice is to take them as the geographical or physical distances in spatial networks, or as the maximum cost over all links, e.g. {\displaystyle l_{ij}={\frac {1}{w_{\max }}}}, where {\displaystyle w_{\max }} indicates the maximum interaction strength in the network. However, later work highlights the issues of these choices when dealing with real-world networks, which are characterised by heterogeneous structure and flows. For instance, choosing {\displaystyle l_{ij}={\frac {1}{w_{\max }}}} makes the global measure very sensitive to outliers in the distribution of weights and tends to under-estimate the actual efficiency of a network. The same authors also propose a normalising procedure, i.e. a way of building {\displaystyle G^{\text{ideal}}} using all and only the information contained in the edge weights (and no other meta-data such as geographical distances), which is statistically robust and physically grounded.
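As a sketch of the weighted case, the following computes {\displaystyle E(G)} with Dijkstra shortest-path distances and normalises by an ideal clique. Here every ideal distance is taken equal to the minimum edge length in the graph (the distance-form analogue of the simple {\displaystyle 1/w_{\max }} choice mentioned above), which guarantees {\displaystyle l_{ij}\leq d_{ij}}. Function names and the sample graph are illustrative:

```python
import heapq

def weighted_efficiency(wadj):
    """E(G) with weighted shortest-path distances (Dijkstra).

    wadj: dict of dicts, wadj[u][v] = length (cost) of edge u-v.
    """
    nodes = list(wadj)
    n = len(nodes)
    total = 0.0
    for source in nodes:
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry
            for v, w in wadj[u].items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        total += sum(1.0 / d for node, d in dist.items() if node != source)
    return total / (n * (n - 1))

def e_glob(wadj):
    """Normalise by an ideal clique whose edges all have the minimum
    observed edge length l, so l <= d_ij for every pair and the ideal
    efficiency is simply 1/l."""
    l = min(w for nbrs in wadj.values() for w in nbrs.values())
    return weighted_efficiency(wadj) / (1.0 / l)
```

A weighted triangle with one "long" edge, e.g. lengths 1, 1 and 2, illustrates that the long edge is bypassed whenever the two-hop route is no worse.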
== Efficiency and small-world behaviour ==
The global efficiency of a network is a measure comparable to {\displaystyle 1/L}, rather than just the average path length {\displaystyle L} itself. The key distinction is that, while {\displaystyle 1/L} measures efficiency in a system where only one packet of information is being moved through the network, {\displaystyle E_{\text{glob}}(G)} measures the efficiency of parallel communication, that is, when all the nodes are exchanging packets of information with each other concurrently.
A local average of pairwise communication efficiencies can be used as an alternative to the clustering coefficient of a network. The local efficiency of a network {\displaystyle G} is defined as:

{\displaystyle E_{\text{loc}}(G)={\frac {1}{N}}\sum _{i\in G}E(G_{i})}

where {\displaystyle G_{i}} is the local subgraph consisting only of node {\displaystyle i}'s immediate neighbors, but not node {\displaystyle i} itself.
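A minimal sketch of the local efficiency, following the definition above (the node labels and adjacency-dict representation are illustrative):

```python
from collections import deque

def efficiency(adj):
    """Average pairwise efficiency E(G) of an unweighted graph (BFS distances)."""
    nodes = list(adj)
    n = len(nodes)
    if n < 2:
        return 0.0
    total = 0.0
    for source in nodes:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != source)
    return total / (n * (n - 1))

def local_efficiency(adj):
    """E_loc(G): average of E(G_i), where G_i is the subgraph induced by
    node i's immediate neighbours, excluding i itself."""
    total = 0.0
    for i, nbrs in adj.items():
        nbrs = set(nbrs)
        sub = {u: [v for v in adj[u] if v in nbrs] for u in nbrs}
        total += efficiency(sub)
    return total / len(adj)

# In a triangle every neighbourhood subgraph is a connected pair, so E_loc = 1.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(local_efficiency(triangle))  # 1.0
```

By contrast, in a star graph removing the hub disconnects every neighbourhood, and this function returns 0.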
== Applications ==
Broadly speaking, the efficiency of a network can be used to quantify small-world behavior in networks. Efficiency can also be used to determine cost-effective structures in weighted and unweighted networks. Comparing the two measures of efficiency in a network to those of a random network of the same size shows how economically the network is constructed. Furthermore, global efficiency is easier to use numerically than its counterpart, path length.
For these reasons the concept of efficiency has been used across the many diverse applications of network science.
Efficiency is useful in analysis of man-made networks such as transportation networks and communications networks. It is used to help determine how cost-efficient a particular network construction is, as well as how fault tolerant it is. Studies of such networks reveal that they tend to have high global efficiency, implying good use of resources, but low local efficiency. This is because, for example, a subway network is not closed, and passengers can be re-routed, by buses for example, even if a particular line in the network is down.
Beyond human constructed networks, efficiency is a useful metric when talking about physical biological networks. In any facet of biology, the scarcity of resource plays a key role, and biological networks are no exception. Efficiency is used in neuroscience to discuss information transfer across neural networks, where the physical space and resource constraints are a major factor. Efficiency has also been used in the study of ant colony tunnel systems, which are usually composed of large rooms as well as many sprawling tunnels. This application to ant colonies is not too surprising because the large structure of a colony must serve as a transportation network for various resources, most namely food.
== See also ==
Reliability (computer networking) – Protocol acknowledgement capability
== References ==
The Communications Assistance for Law Enforcement Act (CALEA), also known as the "Digital Telephony Act," is a United States wiretapping law passed in 1994, during the presidency of Bill Clinton (Pub. L. No. 103-414, 108 Stat. 4279, codified at 47 USC 1001–1010).
CALEA's purpose is to enhance the ability of law enforcement agencies to conduct lawful interception of communication by requiring that telecommunications carriers and manufacturers of telecommunications equipment modify and design their equipment, facilities, and services to ensure that they have built-in capabilities for targeted surveillance, allowing federal agencies to selectively wiretap any telephone traffic; it has since been extended to cover broadband Internet and VoIP traffic. Some government agencies argue that it covers mass surveillance of communications rather than just tapping specific lines and that not all CALEA-based access requires a warrant.
Journalists and technologists have characterised the CALEA-mandated infrastructure as government backdoors. In 2024, the U.S. government realized that China had been tapping communications in the U.S. using that infrastructure for months, or perhaps longer.
The original reason for adopting CALEA was the Federal Bureau of Investigation's worry that increasing use of digital telephone exchange switches would make tapping phones at the phone company's central office harder and slower to execute, or in some cases impossible. Since the original requirement to add CALEA-compliant interfaces required phone companies to modify or replace hardware and software in their systems, U.S. Congress included funding for a limited time period to cover such network upgrades. CALEA was passed into law on October 25, 1994, and came into force on January 1, 1995.
In the years since CALEA was passed it has been greatly expanded to include all VoIP and broadband Internet traffic. From 2004 to 2007 there was a 62 percent growth in the number of wiretaps performed under CALEA – and more than 3,000 percent growth in interception of Internet data such as email.
By 2007, the FBI had spent $39 million on its Digital Collection System Network (DCSNet) system, which collects, stores, indexes, and analyzes communications data.
== Provisions of CALEA ==
In its own words, the purpose of CALEA is:
To amend title 18, United States Code, to make clear a telecommunications carrier's duty to cooperate in the interception of communications for Law Enforcement purposes, and for other purposes.
The U.S. Congress passed the CALEA to aid law enforcement in its effort to conduct criminal investigations requiring wiretapping of digital telephone networks. The Act obliges telecommunications companies to make it possible for law enforcement agencies to tap any phone conversations carried out over its networks, as well as making call detail records available. The act stipulates that it must not be possible for a person to detect that his or her conversation is being monitored by the respective government agency.
Common carriers, facilities-based broadband Internet access providers, and providers of interconnected Voice over Internet Protocol (VoIP) service are all defined as “telecommunications carriers” and must meet the requirements of CALEA.
The CALEA Implementation Unit at the FBI has clarified that intercepted information is supposed to be sent to Law Enforcement concurrently with its capture.
On March 10, 2004, the United States Department of Justice, FBI and Drug Enforcement Administration filed a "Joint Petition for Expedited Rulemaking" in which they requested certain steps to accelerate CALEA compliance, and to extend the provisions of CALEA to include the ability to perform surveillance of all communications that travel over the Internet – such as Internet traffic and VoIP.
As a result, the Federal Communications Commission adopted its First Report and Order on the matter concluding that CALEA applies to facilities-based broadband Internet access providers and providers of interconnected (with the public switched telephone network) Voice-over-Internet-Protocol (VoIP) services.
In May 2006, the FCC adopted a "Second Report and Order", which clarified and affirmed the First Order:
The CALEA compliance deadline remains May 14, 2007.
Carriers are permitted to meet their CALEA obligations through the services of "Trusted Third Parties (TTP)" – that is, they can hire outside companies, which meet security requirements outlined in CALEA, to perform all of the required functions.
Carriers are responsible for CALEA development and implementation costs.
== Technical implementation ==
For voice and text messaging, CALEA software in the central office enables wiretapping. If a call comes in for a number on the target phone, a "conference bridge" is created and the second leg is sent to law enforcement at a place of its choosing. By law, this place must be outside the phone company; this prevents law enforcement from being inside the phone company and possibly illegally tapping other phones.
Text messages are also sent to law enforcement.
There are two levels of CALEA wiretapping:
The first level allows only the "metadata" about a call to be sent: the parties to the call, the time of the call, and, for cell phones, the cell tower being used by the target phone. For text messages, the same information is sent, but the content is not. This level is called "trap and trace".
The second level of CALEA wiretap, when permitted, actually sends the voice and content of text messages. This is called "Title III" wiretap.
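The difference between the two levels can be illustrated with a toy record filter. This is purely illustrative: the field names and the function are invented, and real CALEA delivery uses industry-standard formats (cf. ATIS T1.678) rather than Python dictionaries:

```python
# Toy illustration of what a "trap and trace" order delivers versus a
# full-content "Title III" order. All field names are invented.
call_record = {
    "caller": "+1-202-555-0101",
    "callee": "+1-202-555-0199",
    "start_time": "2024-05-14T10:32:00Z",
    "cell_tower": "DC-014",
    "content": "audio-or-text-payload",
}

def intercept(record, level):
    if level == "trap_and_trace":     # metadata only, no content
        return {k: v for k, v in record.items() if k != "content"}
    if level == "title_iii":          # metadata plus full content
        return dict(record)
    raise ValueError("unknown intercept level")

print(sorted(intercept(call_record, "trap_and_trace")))
```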
U.S. telecommunications providers must install new hardware or software, as well as modify old equipment, so that it doesn't interfere with the ability of a law enforcement agency (LEA) to perform real-time surveillance of any telephone or Internet traffic. Modern voice switches now have this capability built in, yet Internet equipment almost always requires some kind of intelligent deep packet inspection probe to get the job done. In both cases, the intercept function must single out a subscriber named in a warrant for intercept and then immediately send some (headers only) or all (full content) of the intercepted data to an LEA. The LEA will then process this data with analysis software that is specialized towards criminal investigations.
All traditional voice switches on the U.S. market today have the CALEA intercept feature built in. The IP-based "soft switches" typically do not contain a built-in CALEA intercept feature; and other IP-transport elements (routers, switches, access multiplexers) almost always delegate the CALEA function to elements dedicated to inspecting and intercepting traffic. In such cases, hardware taps or switch/router mirror-ports are employed to deliver copies of all of a network's data to dedicated IP probes.
Probes can either send directly to the LEA according to the industry standard delivery formats (cf. ATIS T1.IAS, T1.678v2, et al.); or they can deliver to an intermediate element called a mediation device, where the mediation device does the formatting and communication of the data to the LEA. A probe that can send the correctly formatted data to the LEA is called a "self-contained" probe.
In order to be compliant, IP-based service providers (broadband, cable, VoIP) must choose either a self-contained probe, or a "dumb" probe component plus a mediation device, or they must implement the delivery of correctly formatted data for a named subscriber on their own.
== Controversy ==
The Electronic Frontier Foundation (EFF) warns that:
CALEA makes U.S. software and hardware less attractive for worldwide consumers.
CALEA is a reason to move research and development out of the US.
CALEA-free devices will probably be available in the grey market.
Journalist Marc Zwillinger from the Wall Street Journal explains his concerns with proposed revisions to the CALEA that would require Internet companies to provide law enforcement with a method of gaining access to communication on their networks. Zwillinger warns this new mandatory access could create a dangerous situation for multinational companies not being able to refuse demands from foreign governments. These governments could “threaten financial sanctions, asset seizures, imprisonment of employees and prohibition against a company's services in their countries." In addition, the creation of this new mechanism could create an easier way for hackers to gain access to the U.S. government's key. Moreover, the U.S. telephone network and the global internet differ in that U.S. telephone carriers “weren't responsible for decrypting communications unless the carrier possessed the decryption key. In fact, CALEA's legislative history is full of assurances that the Department of Justice and FBI had no intention to require providers to decrypt communications for which they did not have the key.” Therefore, a revision of the CALEA cannot necessarily secure companies from providing data on their devices during criminal investigations to foreign governments.
== Lawsuits ==
Originally CALEA only granted the ability to wiretap digital telephone networks, but in 2004, the United States Department of Justice (DOJ), Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF), Federal Bureau of Investigation (FBI), and Drug Enforcement Administration (DEA) filed a joint petition with the Federal Communications Commission (FCC) to expand their powers to include the ability to monitor VoIP and broadband Internet communications – so that they could monitor Web traffic as well as phone calls.
The Electronic Frontier Foundation has filed several lawsuits to prevent the FCC from granting these expanded domestic surveillance capabilities.
The FCC's First Report and Order, issued in September 2005, ruled that providers of broadband Internet access and interconnected VoIP services are regulable as “telecommunications carriers” under CALEA. That order was affirmed and further clarified by the Second Report and Order, dated May 2006. On May 5, 2006, a group of higher education and library organizations led by the American Council on Education (ACE) challenged that ruling, arguing that CALEA did not apply to them. On June 9, 2006, the D.C. Circuit Court summarily denied the petition without addressing the constitutionality.
== See also ==
Carnivore (FBI)
NSA ANT catalog
Tempora
DCSNET
ECHELON
Hepting v. AT&T
Lawful interception
Magic Lantern
PositiveID
Secrecy of correspondence
Secure communication
SORM (Russia)
Surveillance
Telecommunications Intercept and Collection Technology Unit
Telephone tapping
Total Information Awareness
Patriot Act
Verint
PRISM
== References ==
== Further reading ==
White Paper on Lawful Interception of IP Networks
Communications Assistance for Law Enforcement Act of 1994
FCC CALEA Home page
EFF CALEA page
Digital Surveillance: The Communications Assistance for Law Enforcement Act, Congressional Research Service, June 8, 2007
RFC 3924 - Cisco Architecture for Lawful Intercept in IP Networks
CALEA for Broadband? The Critics Are Unanimous :: Lasar's Letter on the Federal Communications Commission
Law enforcement groups have been lobbying for years for FCC Internet wiretapping plan: Lasar's Letter on the Federal Communications Commission
Cybertelecom: CALEA Information
Guide to lawful intercept legislation around the world
CableLabs Cable Broadband Intercept Specification
CALEA Q&A
== External links ==
Communications Assistance for Law Enforcement Act (PDF/details) as amended in the GPO Statute Compilations collection
In the seven-layer OSI model of computer networking, the network layer is layer 3. The network layer is responsible for packet forwarding including routing through intermediate routers.
== Functions ==
The network layer provides the means of transferring variable-length network packets from a source to a destination host via one or more networks. Within the service layering semantics of the OSI (Open Systems Interconnection) network architecture, the network layer responds to service requests from the transport layer and issues service requests to the data link layer.
Functions of the network layer include:
Connectionless communication
For example, Internet Protocol is connectionless, in that a data packet can travel from a sender to a recipient without the recipient having to send an acknowledgement. Connection-oriented protocols exist at other, higher layers of the OSI model.
Host addressing
Every host in the network must have a unique address that determines where it is. This address is normally assigned from a hierarchical system. For example, you can be:
"Fred Murphy" to people in your house,
"Fred Murphy, 1 Main Street" to Dubliners,
"Fred Murphy, 1 Main Street, Dublin" to people in Ireland,
"Fred Murphy, 1 Main Street, Dublin, Ireland" to people anywhere in the world.
On the Internet, addresses are known as IP addresses (Internet Protocol).
Message forwarding
Since many networks are partitioned into subnetworks and connect to other networks for wide-area communications, networks use specialized hosts, called gateways or routers, to forward packets between networks.
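The hierarchical addressing analogy above maps directly onto IP prefixes, which can be demonstrated with Python's standard `ipaddress` module (the addresses below come from documentation-reserved ranges and are purely illustrative):

```python
import ipaddress

# An IP address, like a postal address, is matched at increasingly
# specific prefixes: the whole Internet, a large block, a single subnet.
host = ipaddress.ip_address("192.0.2.42")

scopes = [
    ipaddress.ip_network("0.0.0.0/0"),     # "anywhere in the world"
    ipaddress.ip_network("192.0.0.0/8"),   # a large region of address space
    ipaddress.ip_network("192.0.2.0/24"),  # a single subnetwork
]

for net in scopes:
    print(host, "is in" if host in net else "is NOT in", net)
```

Routers exploit exactly this structure: they forward on the shortest matching prefix first and only need full specificity near the destination.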
== Relation to TCP/IP model ==
The TCP/IP model describes the protocols used by the Internet. The TCP/IP model has a layer called the Internet layer, located above the link layer. In many textbooks and other secondary references, the TCP/IP Internet layer is equated with the OSI network layer. However, this comparison is misleading, as the allowed characteristics of protocols (e.g., whether they are connection-oriented or connection-less) placed into these layers are different in the two models. The TCP/IP Internet layer is in fact only a subset of functionality of the network layer. It describes only one type of network architecture, the Internet.
== Fragmentation of Internet Protocol packets ==
The network layer is responsible for fragmentation and reassembly for IPv4 packets that are larger than the smallest MTU of all the intermediate links on the packet's path to its destination. It is the function of routers to fragment packets if needed, and of hosts to reassemble them if received.
Conversely, IPv6 packets are not fragmented during forwarding, but the MTU supported by a specific path must still be established, to avoid packet loss. For this, Path MTU discovery is used between endpoints, which makes it part of the Transport layer, instead of this layer.
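The IPv4 fragmentation arithmetic can be sketched as follows. This is a simplification: real routers also copy header fields into each fragment, set the identification field and flags, and honor the don't-fragment bit; the function name and tuple format here are illustrative:

```python
def fragment_ipv4(payload: bytes, mtu: int, header_len: int = 20):
    """Split a payload into IPv4-style fragments for a given link MTU.

    Fragment data sizes must be multiples of 8 bytes (except the last),
    because the IPv4 fragment-offset field counts 8-byte units.
    Returns (offset_in_8_byte_units, more_fragments_flag, data) tuples.
    """
    max_data = (mtu - header_len) // 8 * 8  # usable payload per fragment
    if max_data <= 0:
        raise ValueError("MTU too small for an IPv4 header")
    frags = []
    pos = 0
    while pos < len(payload):
        chunk = payload[pos:pos + max_data]
        more = pos + len(chunk) < len(payload)  # more-fragments flag
        frags.append((pos // 8, more, chunk))
        pos += len(chunk)
    return frags

# A 100-byte payload over a link with MTU 60: 40 usable bytes per fragment.
for off, more, data in fragment_ipv4(b"x" * 100, mtu=60):
    print(off, more, len(data))
```

The receiving host reassembles by placing each chunk at `offset * 8` until the fragment with the more-fragments flag cleared arrives.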
== Protocols ==
The following are examples of protocols operating at the network layer.
== References ==
Tanenbaum, Andrew S. (2003). Computer networks. Upper Saddle River, New Jersey: Prentice Hall. ISBN 0-13-066102-3.
== External links ==
OSI Reference Model—The ISO Model of Architecture for Open Systems Interconnection, Hubert Zimmermann, IEEE Transactions on Communications, vol. 28, no. 4, April 1980, pp. 425–432. (PDF file; 776 kB)
Law enforcement is the activity of some members of the government or other social institutions who act in an organized manner to enforce the law by investigating, deterring, rehabilitating, or punishing people who violate the rules and norms governing that society. The term encompasses police, courts and corrections. These three components of the criminal justice system may operate independently of each other or collectively through the use of record sharing and cooperation. Throughout the world, law enforcement are also associated with protecting the public, life, property, and keeping the peace in society.
The concept of law enforcement dates back to ancient times, and forms of law enforcement and police have existed in various forms across many human societies. Modern state legal codes use the term law enforcement officer or peace officer to include every person vested by the legislating state with police power or authority; traditionally, anyone sworn or badged who can arrest any person for a violation of criminal law is included under the umbrella term of law enforcement.
Although law enforcement may be most concerned with the prevention and punishment of crimes, organizations exist to discourage a wide variety of non-criminal violations of rules and norms, effected through the imposition of less severe consequences such as probation.
== History ==
Law enforcement organizations existed in ancient times, such as prefects in ancient China, paqūdus in Babylonia, curaca in the Inca Empire, vigiles in the Roman Empire, and Medjay in ancient Egypt. Who law enforcers were, and whom they reported to, depended on the civilization and often changed over time, but they were typically enslaved people, soldiers, officers of a judge, or people hired by settlements and households. Aside from their duties to enforce laws, many ancient law enforcers also served as slave catchers, firefighters, watchmen, city guards, and bodyguards.
By the post-classical period and the Middle Ages, forces such as the Santa Hermandades, the shurta, and the Maréchaussée provided services ranging from law enforcement and personal protection to customs enforcement and waste collection. In England, a complex law enforcement system emerged, where tithings, groups of ten families, were responsible for ensuring good behavior and apprehending criminals; groups of ten tithings ("hundreds") were overseen by a reeve; hundreds were governed by administrative divisions known as shires; and shires were overseen by shire-reeves. In feudal Japan, samurai were responsible for enforcing laws.
The concept of police as the primary law enforcement organization originated in Europe in the early modern period; the first statutory police force was the High Constables of Edinburgh in 1611, while the first organized police force was the Paris lieutenant général de police in 1667. Until the 18th century, law enforcement in England was mostly the responsibility of private citizens and thief-takers, albeit also including constables and watchmen. This system gradually shifted to government control following the 1749 establishment of the London Bow Street Runners, the first formal police force in Britain. In 1800, Napoleon reorganized French law enforcement to form the Paris Police Prefecture; the British government passed the Glasgow Police Act, establishing the City of Glasgow Police; and the Thames River Police was formed in England to combat theft on the River Thames. In September 1829, Robert Peel merged the Bow Street Runners and the Thames River Police to form the Metropolitan Police. The title of the "first modern police force" has still been claimed by the modern successors to these organizations.
Following European colonization of the Americas, the first law enforcement agencies in the Thirteen Colonies were the New York Sheriff's Office and the Albany County Sheriff's Department, both formed in the 1660s in the Province of New York. The Province of Carolina established slave-catcher patrols in the 1700s, and by 1785, the Charleston Guard and Watch was reported to have the duties and organization of a modern police force. The first municipal police department in the United States was the Philadelphia Police Department, while the first federal law enforcement agency was the United States Marshals Service, formed in 1789. In the American frontier, law enforcement was the responsibility of county sheriffs, rangers, constables, and marshals. The first law enforcement agency in Canada was the Royal Newfoundland Constabulary, established in 1729, while the first Canadian national law enforcement agency was the Dominion Police, established in 1868. By the 19th century, improvements in technology, greater global connections, and changes in the sociopolitical order led to the establishment of police forces worldwide. National, regional, and municipal civilian law enforcement agencies exist in practically all countries; to promote their international cooperation, the International Criminal Police Organization, also known as Interpol, was formed in September 1923. Technology has made an immense impact on law enforcement, leading to the development and regular use of police cars, police radio systems, police aviation, police tactical units, and police body cameras.
== Law enforcement agencies ==
Most law enforcement is conducted by some law enforcement agency, typically a police force. Civilians generally staff police agencies, which are typically not a military branch. However, some militaries do have branches that enforce laws among the civilian populace, often called gendarmerie, security forces, or internal troops. Social investment in enforcement through such organizations can be massive in terms of the resources invested in the activity and the number of people professionally engaged to perform those functions.
Law enforcement agencies are limited to operating within a specified jurisdiction. These are typically organized into three basic levels: national, regional, and municipal. However, depending on certain factors, there may be more or less levels, or they may be merged: in the United States, there are federal, state, and local police and sheriff agencies; in Canada, some territories may only have national-level law enforcement, while some provinces have national, provincial, and municipal; in Japan, there is a national police agency, which supervises the police agencies for each individual prefecture; and in Niger, there is a national police for urban areas and a gendarmerie for rural areas, both technically national-level. In some cases, there may be multiple agencies at the same level but with different focuses: for example, in the United States, the Drug Enforcement Administration and the Bureau of Alcohol, Tobacco, Firearms and Explosives are both national-level federal law enforcement agencies, but the DEA focuses on narcotics crimes, while the ATF focuses on weapon regulation violations.
Various segments of society may have their own specialist law enforcement agency, such as the military having military police, schools having school police or campus police, or airports having airport police. Private police may exist in some jurisdictions, often to provide dedicated law enforcement for privately-owned property or infrastructure, such as railroad police for private railways or hospital police for privately-owned hospital campuses.
Depending on various factors, such as whether an agency is autonomous or dependent on other organizations for its operations, the governing body that funds and oversees the agency may decide to dissolve or consolidate its operations. Dissolution of an agency may occur when the governing body or the agent itself decides to end operations. This can occur due to multiple reasons, including criminal justice reform, a lack of population in the jurisdiction, mass resignations, efforts to deter corruption, or the governing body contracting with a different agency that renders the original agency redundant or obsolete. According to the International Association of Chiefs of Police, agency consolidation can occur to improve efficiency, consolidate resources, or when forming a new type of government.
== See also ==
Outline of law enforcement – structured list of topics related to law enforcement, organized by subject area
Law enforcement by country
Vigilantism
Criminal law
Parking enforcement officer
== References ==
== External links ==
The Open Systems Interconnection (OSI) model is a reference model developed by the International Organization for Standardization (ISO) that "provides a common basis for the coordination of standards development for the purpose of systems interconnection."
In the OSI reference model, the components of a communication system are distinguished in seven abstraction layers: Physical, Data Link, Network, Transport, Session, Presentation, and Application.
The model describes communications from the physical implementation of transmitting bits across a transmission medium to the highest-level representation of data of a distributed application. Each layer has well-defined functions and semantics and serves a class of functionality to the layer above it and is served by the layer below it. Established, well-known communication protocols are decomposed in software development into the model's hierarchy of function calls.
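The layered service model described above can be caricatured as successive encapsulation: each layer wraps the protocol data unit from the layer above with its own header on the way down, and the receiver strips them in reverse order on the way up. This is a toy sketch; real PDUs carry binary headers, and layer behavior is far richer than string wrapping:

```python
# The seven OSI layers, from the top of the stack to the bottom.
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data link", "physical"]

def encapsulate(data: str) -> str:
    """Sender side: descend the stack, each layer adding its header."""
    for layer in LAYERS:
        data = f"[{layer}]{data}"
    return data

def decapsulate(frame: str) -> str:
    """Receiver side: ascend the stack, stripping headers in reverse order."""
    for layer in reversed(LAYERS):
        prefix = f"[{layer}]"
        if not frame.startswith(prefix):
            raise ValueError("malformed frame at layer " + layer)
        frame = frame[len(prefix):]
    return frame

print(encapsulate("hello"))
```

Note that the physical-layer "header" ends up outermost, mirroring how bits on the wire enclose everything the upper layers produced.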
The Internet protocol suite as defined in RFC 1122 and RFC 1123 is a model of networking developed contemporarily to the OSI model, and was funded primarily by the U.S. Department of Defense. It was the foundation for the development of the Internet. It assumed the presence of generic physical links and focused primarily on the software layers of communication, with a similar but much less rigorous structure than the OSI model.
In comparison, several networking models have sought to create an intellectual framework for clarifying networking concepts and activities, but none has been as successful as the OSI reference model in becoming the standard model for discussing and teaching networking in the field of information technology. The model allows transparent communication through the equivalent exchange of protocol data units (PDUs) between two parties, through what is known as peer-to-peer networking (also known as peer-to-peer communication). As a result, the OSI reference model has become an important reference not only among professionals and non-professionals alike, but also in all networking between one or many parties, due in large part to its commonly accepted user-friendly framework.
== History ==
The development of the OSI model started in the late 1970s to support the emergence of the diverse computer networking methods that were competing for application in the large national networking efforts in the world (see OSI protocols and Protocol Wars). In the 1980s, the model became a working product of the Open Systems Interconnection group at the International Organization for Standardization (ISO). While attempting to provide a comprehensive description of networking, the model failed to garner reliance during the design of the Internet, which is reflected in the less prescriptive Internet Protocol Suite, principally sponsored under the auspices of the Internet Engineering Task Force (IETF).
In the early- and mid-1970s, networking was largely either government-sponsored (NPL network in the UK, ARPANET in the US, CYCLADES in France) or vendor-developed with proprietary standards, such as IBM's Systems Network Architecture and Digital Equipment Corporation's DECnet. Public data networks were only just beginning to emerge, and these began to use the X.25 standard in the late 1970s.
The Experimental Packet Switched System in the UK c. 1973–1975 identified the need for defining higher-level protocols. The UK National Computing Centre publication, Why Distributed Computing, which came from considerable research into future configurations for computer systems, resulted in the UK presenting the case for an international standards committee to cover this area at the ISO meeting in Sydney in March 1977.
Beginning in 1977, the ISO initiated a program to develop general standards and methods of networking. A similar process evolved at the International Telegraph and Telephone Consultative Committee (CCITT, from French: Comité Consultatif International Téléphonique et Télégraphique). Both bodies developed documents that defined similar networking models. The British Department of Trade and Industry acted as the secretariat, and universities in the United Kingdom developed prototypes of the standards.
The OSI model was first defined in raw form in Washington, D.C., in February 1978 by French software engineer Hubert Zimmermann, and the refined but still draft standard was published by the ISO in 1980.
The drafters of the reference model had to contend with many competing priorities and interests. The rate of technological change made it necessary to define standards that new systems could converge to rather than standardizing procedures after the fact; the reverse of the traditional approach to developing standards. Although not a standard itself, it was a framework in which future standards could be defined.
In May 1983, the CCITT and ISO documents were merged to form The Basic Reference Model for Open Systems Interconnection, usually referred to as the Open Systems Interconnection Reference Model, OSI Reference Model, or simply OSI model. It was published in 1984 by both the ISO, as standard ISO 7498, and the renamed CCITT (now called the Telecommunications Standardization Sector of the International Telecommunication Union or ITU-T) as standard X.200.
OSI had two major components: an abstract model of networking, called the Basic Reference Model or seven-layer model, and a set of specific protocols. The OSI reference model was a major advance in the standardisation of network concepts. It promoted the idea of a consistent model of protocol layers, defining interoperability between network devices and software.
The concept of a seven-layer model was provided by the work of Charles Bachman at Honeywell Information Systems. Various aspects of OSI design evolved from experiences with the NPL network, ARPANET, CYCLADES, EIN, and the International Network Working Group (IFIP WG6.1). In this model, a networking system was divided into layers. Within each layer, one or more entities implement its functionality. Each entity interacted directly only with the layer immediately beneath it and provided facilities for use by the layer above it.
The OSI standards documents are available from the ITU-T as the X.200 series of recommendations. Some of the protocol specifications were also available as part of the ITU-T X series. The equivalent ISO/IEC standards for the OSI model were available from ISO. Not all are free of charge.
OSI was an industry effort, attempting to get industry participants to agree on common network standards to provide multi-vendor interoperability. It was common for large networks to support multiple network protocol suites, with many devices unable to interoperate with other devices because of a lack of common protocols. For a period in the late 1980s and early 1990s, engineers, organizations and nations became polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks. However, while OSI developed its networking standards in the late 1980s, TCP/IP came into widespread use on multi-vendor networks for internetworking.
The OSI model is still used as a reference for teaching and documentation; however, the OSI protocols originally conceived for the model did not gain popularity. Some engineers argue the OSI reference model is still relevant to cloud computing. Others say the original OSI model does not fit today's networking protocols and have suggested instead a simplified approach.
== Definitions ==
Communication protocols enable an entity in one host to interact with a corresponding entity at the same layer in another host. Service definitions, like the OSI model, abstractly describe the functionality provided to a layer N by a layer N−1, where N is one of the seven layers of protocols operating in the local host.
At each level N, two entities at the communicating devices (layer N peers) exchange protocol data units (PDUs) by means of a layer N protocol. Each PDU contains a payload, called the service data unit (SDU), along with protocol-related headers or footers.
Data processing by two communicating OSI-compatible devices proceeds as follows:
The data to be transmitted is composed at the topmost layer of the transmitting device (layer N) into a protocol data unit (PDU).
The PDU is passed to layer N−1, where it is known as the service data unit (SDU).
At layer N−1 the SDU is concatenated with a header, a footer, or both, producing a layer N−1 PDU. It is then passed to layer N−2.
The process continues until reaching the lowermost level, from which the data is transmitted to the receiving device.
At the receiving device the data is passed from the lowest to the highest layer as a series of SDUs while being successively stripped from each layer's header or footer until reaching the topmost layer, where the last of the data is consumed.
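The encapsulation and decapsulation steps above can be sketched in code. This is an illustrative model only, not any real protocol stack: the layer names are from the OSI model, but the string "headers" are hypothetical placeholders.

```python
# Illustrative sketch of OSI-style encapsulation and decapsulation.
# Header formats here are hypothetical strings, for demonstration only.

LAYERS = ["application", "presentation", "session",
          "transport", "network", "data link"]

def encapsulate(data: str) -> str:
    """Pass an SDU down the stack; each layer prepends its header."""
    pdu = data
    for layer in LAYERS:
        pdu = f"[{layer}-hdr]" + pdu      # this layer's PDU = header + SDU
    return pdu                            # handed to the physical layer

def decapsulate(pdu: str) -> str:
    """Strip headers in reverse order on the receiving side."""
    for layer in reversed(LAYERS):        # lowest layer's header is outermost
        header = f"[{layer}-hdr]"
        assert pdu.startswith(header)
        pdu = pdu[len(header):]
    return pdu

frame = encapsulate("hello")
print(frame)                  # outermost header belongs to the lowest layer
print(decapsulate(frame))     # hello
```

Note that the outermost header in the resulting frame belongs to the data link layer, the last one to wrap the data before physical transmission.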
=== Standards documents ===
The OSI model was defined in ISO/IEC 7498 which consists of the following parts:
ISO/IEC 7498-1 The Basic Model
ISO/IEC 7498-2 Security Architecture
ISO/IEC 7498-3 Naming and addressing
ISO/IEC 7498-4 Management framework
ISO/IEC 7498-1 is also published as ITU-T Recommendation X.200.
== Layer architecture ==
Recommendation X.200 describes seven layers, labelled 1 to 7. Layer 1 is the lowest layer in this model.
=== Layer 1: Physical layer ===
The physical layer is responsible for the transmission and reception of unstructured raw data between a device, such as a network interface controller, Ethernet hub, or network switch, and a physical transmission medium. It converts the digital bits into electrical, radio, or optical signals (analogue signals). Layer specifications define characteristics such as voltage levels, the timing of voltage changes, physical data rates, maximum transmission distances, modulation scheme, channel access method and physical connectors. This includes the layout of pins, voltages, line impedance, cable specifications, signal timing and frequency for wireless devices. Bit rate control is done at the physical layer and may define transmission mode as simplex, half duplex, and full duplex. The components of a physical layer can be described in terms of the network topology. Physical layer specifications are included in the specifications for the ubiquitous Bluetooth, Ethernet, and USB standards. An example of a less well-known physical layer specification would be for the CAN standard.
The physical layer also specifies how encoding occurs over a physical signal, such as electrical voltage or a light pulse. For example, a 1 bit might be represented on a copper wire by the transition from a 0-volt to a 5-volt signal, whereas a 0 bit might be represented by the transition from a 5-volt to a 0-volt signal. As a result, common problems occurring at the physical layer are often related to the incorrect media termination, EMI or noise scrambling, and NICs and hubs that are misconfigured or do not work correctly.
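The voltage-transition scheme described above can be modelled in a few lines. This is an illustrative model of that specific example only, not a real line code such as NRZ or Manchester encoding.

```python
# Sketch of the line-coding example from the text: a 1 bit is a
# 0 V -> 5 V transition and a 0 bit is a 5 V -> 0 V transition.
# Purely illustrative; real physical layers use standardized line codes.

def encode(bits):
    """Map each bit to a (start_voltage, end_voltage) transition."""
    return [(0, 5) if b == 1 else (5, 0) for b in bits]

def decode(transitions):
    """Recover bits from the direction of each voltage transition."""
    return [1 if start < end else 0 for start, end in transitions]

signal = encode([1, 0, 1, 1])
print(signal)             # [(0, 5), (5, 0), (0, 5), (0, 5)]
print(decode(signal))     # [1, 0, 1, 1]
```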
=== Layer 2: Data link layer ===
The data link layer provides node-to-node data transfer—a link between two directly connected nodes. It detects and possibly corrects errors that may occur in the physical layer. It defines the protocol to establish and terminate a connection between two physically connected devices. It also defines the protocol for flow control between them.
IEEE 802 divides the data link layer into two sublayers:
Medium access control (MAC) layer – responsible for controlling how devices in a network gain access to a medium and permission to transmit data.
Logical link control (LLC) layer – responsible for identifying and encapsulating network layer protocols, and controls error checking and frame synchronization.
The MAC and LLC layers of IEEE 802 networks such as 802.3 Ethernet, 802.11 Wi-Fi, and 802.15.4 Zigbee operate at the data link layer.
The Point-to-Point Protocol (PPP) is a data link layer protocol that can operate over several different physical layers, such as synchronous and asynchronous serial lines.
The ITU-T G.hn standard, which provides high-speed local area networking over existing wires (power lines, phone lines and coaxial cables), includes a complete data link layer that provides both error correction and flow control by means of a selective-repeat sliding-window protocol.
Security, specifically (authenticated) encryption, at this layer can be applied with MACsec.
=== Layer 3: Network layer ===
The network layer provides the functional and procedural means of transferring packets from one node to another connected in "different networks". A network is a medium to which many nodes can be connected, on which every node has an address and which permits nodes connected to it to transfer messages to other nodes connected to it by merely providing the content of a message and the address of the destination node and letting the network find the way to deliver the message to the destination node, possibly routing it through intermediate nodes. If the message is too large to be transmitted from one node to another on the data link layer between those nodes, the network may implement message delivery by splitting the message into several fragments at one node, sending the fragments independently, and reassembling the fragments at another node. It may, but does not need to, report delivery errors.
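The fragmentation and reassembly behaviour described above can be sketched as follows. The `(offset, payload)` representation is a simplification: real IPv4 fragmentation also carries identification fields, flags, and offsets in 8-byte units.

```python
# Minimal sketch of network-layer fragmentation and reassembly.
# Field names and structure are illustrative, not IPv4's actual format.

def fragment(message: bytes, mtu: int):
    """Split a message into (offset, payload) fragments of at most mtu bytes."""
    return [(i, message[i:i + mtu]) for i in range(0, len(message), mtu)]

def reassemble(fragments):
    """Rebuild the message; fragments may arrive in any order."""
    return b"".join(payload for _, payload in sorted(fragments))

frags = fragment(b"a fairly long network-layer message", mtu=10)
frags.reverse()                   # simulate out-of-order arrival
print(reassemble(frags))          # the original message, restored
```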
Message delivery at the network layer is not necessarily guaranteed to be reliable; a network layer protocol may provide reliable message delivery, but it does not need to do so.
A number of layer-management protocols, a function defined in the management annex, ISO 7498/4, belong to the network layer. These include routing protocols, multicast group management, network-layer information and error, and network-layer address assignment. It is the function of the payload that makes these belong to the network layer, not the protocol that carries them.
=== Layer 4: Transport layer ===
The transport layer provides the functional and procedural means of transferring variable-length data sequences from a source host to a destination host from one application to another across a network while maintaining the quality-of-service functions. Transport protocols may be connection-oriented or connectionless.
This may require breaking large protocol data units or long data streams into smaller chunks called "segments", since the network layer imposes a maximum packet size called the maximum transmission unit (MTU), which depends on the maximum packet size imposed by all data link layers on the network path between the two hosts. The amount of data in a data segment must be small enough to allow for a network-layer header and a transport-layer header. For example, for data being transferred across Ethernet, the MTU is 1500 bytes, the minimum size of a TCP header is 20 bytes, and the minimum size of an IPv4 header is 20 bytes, so the maximum segment size is 1500−(20+20) bytes, or 1460 bytes. The process of dividing data into segments is called segmentation; it is an optional function of the transport layer. Some connection-oriented transport protocols, such as TCP and the OSI connection-oriented transport protocol (COTP), perform segmentation and reassembly of segments on the receiving side; connectionless transport protocols, such as UDP and the OSI connectionless transport protocol (CLTP), usually do not.
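The maximum-segment-size arithmetic in the paragraph above is simple enough to check directly:

```python
# The MSS calculation from the text: an Ethernet MTU of 1500 bytes
# minus the minimum IPv4 and TCP headers (20 bytes each, no options).

MTU = 1500            # Ethernet payload limit, bytes
IPV4_HEADER = 20      # minimum IPv4 header
TCP_HEADER = 20       # minimum TCP header

mss = MTU - (IPV4_HEADER + TCP_HEADER)
print(mss)            # 1460

# Segments needed for a given stream length (ceiling division):
stream = 1_000_000    # bytes to send
segments = -(-stream // mss)
print(segments)       # 685
```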
The transport layer also controls the reliability of a given link between a source and destination host through flow control, error control, and acknowledgments of sequence and existence. Some protocols are state- and connection-oriented. This means that the transport layer can keep track of the segments and retransmit those that fail delivery through the acknowledgment hand-shake system. The transport layer will also provide the acknowledgement of the successful data transmission and sends the next data if no errors occurred.
Reliability, however, is not a strict requirement within the transport layer. Protocols like UDP, for example, are used in applications that are willing to accept some packet loss, reordering, errors or duplication. Streaming media, real-time multiplayer games and voice over IP (VoIP) are examples of applications in which loss of packets is not usually a fatal problem.
The OSI connection-oriented transport protocol defines five classes of connection-mode transport protocols, ranging from class 0 (which is also known as TP0 and provides the fewest features) to class 4 (TP4, designed for less reliable networks, similar to the Internet). Class 0 contains no error recovery and was designed for use on network layers that provide error-free connections. Class 4 is closest to TCP, although TCP contains functions, such as the graceful close, which OSI assigns to the session layer. Also, all OSI TP connection-mode protocol classes provide expedited data and preservation of record boundaries. Detailed characteristics of TP0–4 classes are shown in the following table:
An easy way to visualize the transport layer is to compare it with a post office, which deals with the dispatch and classification of mail and parcels sent. A post office inspects only the outer envelope of mail to determine its delivery. Higher layers may have the equivalent of double envelopes, such as cryptographic presentation services that can be read by the addressee only. Roughly speaking, tunnelling protocols operate at the transport layer, for example carrying non-IP protocols such as IBM's SNA or Novell's IPX over an IP network, or providing end-to-end encryption with IPsec. While Generic Routing Encapsulation (GRE) might seem to be a network-layer protocol, if the encapsulation of the payload takes place only at the endpoint, GRE becomes closer to a transport protocol that uses IP headers but contains complete Layer 2 frames or Layer 3 packets to deliver to the endpoint. L2TP carries PPP frames inside transport segments.
Although not developed under the OSI Reference Model and not strictly conforming to the OSI definition of the transport layer, the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) of the Internet Protocol Suite are commonly categorized as layer 4 protocols within OSI.
Transport Layer Security (TLS) does not strictly fit inside the model either. It contains characteristics of the transport and presentation layers.
=== Layer 5: Session layer ===
The session layer establishes, manages, and terminates connections, called "sessions", between two or more computers, for both local and remote applications. Common session-layer functions include user logon (establishment) and user logoff (termination); authentication methods are also built into most client software, such as the FTP client and the NFS client for Microsoft networks. The session layer also provides for full-duplex, half-duplex, or simplex operation, and establishes procedures for checkpointing, suspending, restarting, and terminating a session between two related streams of data, such as an audio and a video stream in a web-conferencing application. For this reason, the session layer is commonly implemented explicitly in application environments that use remote procedure calls.
=== Layer 6: Presentation layer ===
The presentation layer establishes data formatting and data translation into a format specified by the application layer during the encapsulation of outgoing messages as they are passed down the protocol stack, and reverses that conversion during the deencapsulation of incoming messages as they are passed up the stack.
The presentation layer handles protocol conversion, data encryption, data decryption, data compression, data decompression, incompatibility of data representation between operating systems, and graphic commands. The presentation layer transforms data into the form that the application layer accepts, to be sent across a network. Since the presentation layer converts data and graphics into a display format for the application layer, the presentation layer is sometimes called the syntax layer. For this reason, the presentation layer negotiates the transfer of syntax structure through the Basic Encoding Rules of Abstract Syntax Notation One (ASN.1), with capabilities such as converting an EBCDIC-coded text file to an ASCII-coded file, or serialization of objects and other data structures from and to XML.
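One concrete presentation-layer task named above, converting EBCDIC-coded text to ASCII, can be demonstrated with standard codecs. Python ships codecs for several EBCDIC code pages; "cp500" (EBCDIC International) is used here as one representative variant.

```python
# Converting EBCDIC-encoded text to ASCII, a classic presentation-layer
# job. "cp500" is one EBCDIC code page; real systems may use others.

ebcdic_bytes = "OSI MODEL".encode("cp500")   # as it might arrive from a mainframe
print(ebcdic_bytes.hex())                    # bytes differ from ASCII encoding

text = ebcdic_bytes.decode("cp500")          # syntax-neutral internal form
ascii_bytes = text.encode("ascii")
print(ascii_bytes)                           # b'OSI MODEL'
```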
=== Layer 7: Application layer ===
The application layer is the layer of the OSI model that is closest to the end user, which means both the OSI application layer and the user interact directly with a software application that implements a component of communication between the client and server, such as File Explorer and Microsoft Word. Such application programs fall outside the scope of the OSI model unless they are directly integrated into the application layer through the functions of communication, as is the case with applications such as web browsers and email programs. Other examples of software are Microsoft Network Software for File and Printer Sharing and Unix/Linux Network File System Client for access to shared file resources.
Application-layer functions typically include file sharing, message handling, and database access, through the most common application-layer protocols, such as HTTP, FTP, SMB/CIFS, TFTP, and SMTP. When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit. The most important distinction in the application layer is the distinction between the application entity and the application. For example, a reservation website might have two application entities: one using HTTP to communicate with its users, and one for a remote database protocol to record reservations. Neither of these protocols has anything to do with reservations; that logic is in the application itself. The application layer has no means to determine the availability of resources in the network.
== Cross-layer functions ==
Cross-layer functions are services that are not tied to a given layer, but may affect more than one layer. Some orthogonal aspects, such as management and security, involve all of the layers (See ITU-T X.800 Recommendation). These services are aimed at improving the CIA triad—confidentiality, integrity, and availability—of the transmitted data.
Cross-layer functions are the norm, in practice, because the availability of a communication service is determined by the interaction between network design and network management protocols.
Specific examples of cross-layer functions include the following:
Security service (telecommunication) as defined by ITU-T X.800 recommendation.
Management functions, i.e. functions that permit the configuration, instantiation, monitoring and termination of the communications of two or more entities. There is a specific application-layer protocol, the Common Management Information Protocol (CMIP), and its corresponding service, the Common Management Information Service (CMIS); these need to interact with every layer in order to deal with their instances.
OSI subdivides the Network Layer into three sublayers: 3a) Subnetwork Access, 3b) Subnetwork Dependent Convergence and 3c) Subnetwork Independent Convergence. Multiprotocol Label Switching (MPLS), ATM, and X.25 are 3a protocols. MPLS was designed to provide a unified data-carrying service for both circuit-based clients and packet-switching clients which provide a datagram-based service model. It can be used to carry many different kinds of traffic, including IP packets, as well as native ATM, SONET, and Ethernet frames. Sometimes one sees reference to a "Layer 2.5".
Cross MAC and PHY Scheduling is essential in wireless networks because of the time-varying nature of wireless channels. By scheduling packet transmission only in favourable channel conditions, which requires the MAC layer to obtain channel state information from the PHY layer, network throughput can be significantly improved and energy waste can be avoided.
== Programming interfaces ==
Neither the OSI Reference Model, nor any OSI protocol specifications, outline any programming interfaces, other than deliberately abstract service descriptions. Protocol specifications define a methodology for communication between peers, but the software interfaces are implementation-specific.
For example, the Network Driver Interface Specification (NDIS) and Open Data-Link Interface (ODI) are interfaces between the media (layer 2) and the network protocol (layer 3).
== Comparison to other networking suites ==
The table below presents a list of OSI layers, the original OSI protocols, and some approximate modern matches. This correspondence is rough: the OSI model contains idiosyncrasies not found in later systems such as the IP stack of the modern Internet.
=== Comparison with TCP/IP model ===
The design of protocols in the TCP/IP model of the Internet does not concern itself with strict hierarchical encapsulation and layering. RFC 3439 contains a section entitled "Layering considered harmful". TCP/IP does recognize four broad layers of functionality which are derived from the operating scope of their contained protocols: the scope of the software application; the host-to-host transport path; the internetworking range; and the scope of the direct links to other nodes on the local network.
Despite using a different concept for layering than the OSI model, these layers are often compared with the OSI layering scheme in the following manner:
The Internet application layer maps to the OSI application layer, presentation layer, and most of the session layer.
The TCP/IP transport layer maps to the graceful close function of the OSI session layer as well as the OSI transport layer.
The internet layer performs functions corresponding to a subset of those of the OSI network layer.
The link layer corresponds to the OSI data link layer and may include similar functions as the physical layer, as well as some protocols of the OSI's network layer.
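The correspondence listed above can be captured as a simple lookup table. As the text notes, the mapping is approximate, and a flat table cannot express the exceptions (for instance, the OSI session layer's graceful-close function mapping to the TCP/IP transport layer).

```python
# The rough OSI-to-TCP/IP layer correspondence from the list above.
# Approximate by nature; exceptions such as graceful close are not captured.

OSI_TO_TCPIP = {
    7: "application",   # application layer
    6: "application",   # presentation layer
    5: "application",   # most of the session layer
    4: "transport",
    3: "internet",      # subset of OSI network-layer functions
    2: "link",
    1: "link",          # link layer may include physical-layer functions
}

print(OSI_TO_TCPIP[6])   # application
print(OSI_TO_TCPIP[3])   # internet
```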
These comparisons are based on the original seven-layer protocol model as defined in ISO 7498, rather than refinements in the internal organization of the network layer.
The OSI protocol suite that was specified as part of the OSI project was considered by many as too complicated and inefficient, and to a large extent unimplementable. Taking the "forklift upgrade" approach to networking, it specified eliminating all existing networking protocols and replacing them at all layers of the stack. This made implementation difficult and was resisted by many vendors and users with significant investments in other network technologies. In addition, the protocols included so many optional features that many vendors' implementations were not interoperable.
Although the OSI model is often still referenced, the Internet protocol suite has become the standard for networking. TCP/IP's pragmatic approach to computer networking and to independent implementations of simplified protocols made it a practical methodology. Some protocols and specifications in the OSI stack remain in use, one example being IS-IS, which was specified for OSI as ISO/IEC 10589:2002 and adapted for Internet use with TCP/IP as RFC 1142.
== See also ==
== References ==
== Further reading ==
Day, John D. (2008). Patterns in Network Architecture: A Return to Fundamentals. Upper Saddle River, N.J.: Pearson Education. ISBN 978-0-13-225242-3. OCLC 213482801.
Dickson, Gary; Lloyd, Alan (1992). Open Systems Interconnection. New York: Prentice Hall. ISBN 978-0-13-640111-7. OCLC 1245634475 – via Internet Archive.
Piscitello, David M.; Chapin, A. Lyman (1993). Open systems networking : TCP/IP and OSI. Reading, Mass.: Addison-Wesley Pub. Co. ISBN 978-0-201-56334-4. OCLC 624431223 – via Internet Archive.
Rose, Marshall T. (1990). The Open Book: A Practical Perspective on OSI. Englewood Cliffs, N.J.: Prentice Hall. ISBN 978-0-13-643016-2. OCLC 1415988401 – via Internet Archive.
Russell, Andrew L. (2014). Open Standards and the Digital Age: History, Ideology, and Networks. Cambridge University Press. ISBN 978-1-139-91661-5. OCLC 881237495. Partial preview at Google Books.
Zimmermann, Hubert (April 1980). "OSI Reference Model — The ISO Model of Architecture for Open Systems Interconnection". IEEE Transactions on Communications. 28 (4): 425–432. CiteSeerX 10.1.1.136.9497. doi:10.1109/TCOM.1980.1094702. ISSN 0090-6778. OCLC 5858668034. S2CID 16013989.
== External links ==
"Windows network architecture and the OSI model". Microsoft Learn. 2 February 2024. Retrieved 12 July 2024.
"ISO/IEC standard 7498-1:1994 - Service definition for the association control service element". ISO Standards Maintenance Portal. Retrieved 12 July 2024. (PDF document inside ZIP archive) (requires HTTP cookies in order to accept licence agreement)
"ITU Recommendation X.200". International Telecommunication Union. 2 June 1998. Retrieved 12 July 2024.
"INFormation CHanGe Architectures and Flow Charts powered by Google App Engine". infchg.appspot.com. Archived from the original on 26 May 2012.
"Internetworking Technology Handbook". docwiki.cisco.com. 10 July 2015. Archived from the original on 6 September 2015.
EdXD; Saikot, Mahmud Hasan (25 November 2021). "7 Layers of OSI Model Explained". ByteXD. Retrieved 12 July 2024.
Retransmission, essentially identical with automatic repeat request (ARQ), is the resending of packets which have been either damaged or lost. Retransmission is one of the basic mechanisms used by protocols operating over a packet switched computer network to provide reliable communication (such as that provided by a reliable byte stream, for example TCP).
Such networks are usually "unreliable", meaning they offer no guarantees that they will not delay, damage, or lose packets, or deliver them out of order. Protocols which provide reliable communication over such networks use a combination of acknowledgments (i.e., an explicit receipt from the destination of the data), retransmission of missing or damaged packets (usually initiated by a time-out), and checksums to provide that reliability.
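The checksums mentioned above let a receiver detect damaged packets. A sketch of a 16-bit ones'-complement checksum in the style of RFC 1071 (the family used by IPv4, TCP and UDP) is shown below; it is simplified, since real transport protocols also checksum a pseudo-header.

```python
# Sketch of an RFC 1071-style 16-bit ones'-complement checksum.
# Simplified: real TCP/UDP checksums also cover a pseudo-header.

def internet_checksum(data: bytes) -> int:
    """Fold 16-bit words with end-around carry; return the complement."""
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # end-around carry
    return ~total & 0xFFFF

packet = b"\x45\x00\x00\x1c\x00\x01"
cksum = internet_checksum(packet)
# Appending the checksum makes the whole buffer sum to zero, which is
# how a receiver verifies integrity:
print(internet_checksum(packet + cksum.to_bytes(2, "big")))   # 0
```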
== Acknowledgment ==
There are several forms of acknowledgment which can be used alone or together in networking protocols:
Positive Acknowledgment: the receiver explicitly notifies the sender which packets, messages, or segments were received correctly. Positive acknowledgment therefore also implicitly informs the sender which packets were not received and provides detail on packets which need to be retransmitted.
Negative Acknowledgment (NACK): the receiver explicitly notifies the sender which packets, messages, or segments were received incorrectly and thus may need to be retransmitted (RFC 4077).
Selective Acknowledgment (SACK): the receiver explicitly lists which packets, messages, or segments in a stream are acknowledged (either negatively or positively). Positive selective acknowledgment is an option in TCP (RFC 2018) that is useful in Satellite Internet access (RFC 2488).
Cumulative Acknowledgment: the receiver acknowledges that it correctly received a packet, message, or segment in a stream which implicitly informs the sender that the previous packets were received correctly. TCP uses cumulative acknowledgment with its TCP sliding window.
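The cumulative form can be sketched as follows. Numbering whole packets rather than bytes is a simplification; TCP's cumulative acknowledgments operate on byte sequence numbers.

```python
# Sketch of cumulative acknowledgment: the receiver reports the next
# expected sequence number, implicitly acknowledging everything below it.
# Per-packet (rather than per-byte) numbering is a simplification.

def cumulative_ack(received: set) -> int:
    """Return the next expected seq number; all lower numbers arrived."""
    seq = 0
    while seq in received:
        seq += 1
    return seq

print(cumulative_ack({0, 1, 2, 4, 5}))   # 3: packet 3 is still missing
print(cumulative_ack({0, 1, 2, 3}))      # 4: everything so far arrived
```

Note that with `{0, 1, 2, 4, 5}` the acknowledgment stays at 3 even though packets 4 and 5 arrived, which is exactly the limitation that selective acknowledgment addresses.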
== Retransmission ==
Retransmission is a very simple concept. Whenever one party sends something to the other party, it retains a copy of the data it sent until the recipient has acknowledged that it received it. In a variety of circumstances the sender automatically retransmits the data using the retained copy. Reasons for resending include:
no acknowledgment is forthcoming within a reasonable time, i.e. the time-out expires
the sender discovers, often through some out-of-band means, that the transmission was unsuccessful
the receiver knows that expected data has not arrived, and so notifies the sender
the receiver knows that the data has arrived, but in a damaged condition, and indicates that to the sender
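The retain-and-retransmit loop described above can be sketched as a stop-and-wait scheme. The channel below deliberately drops the first two copies so the retry behaviour is visible; the names and structure are illustrative, not any real protocol's API.

```python
# Sketch of timeout-driven retransmission (stop-and-wait ARQ).
# LossyChannel is a stand-in for an unreliable network; in real code the
# sender would wait for a time-out between attempts.

class LossyChannel:
    """Drops the first `drop_count` packets, then delivers reliably."""
    def __init__(self, drop_count=2):
        self.drop_count = drop_count

    def send(self, packet):
        if self.drop_count > 0:
            self.drop_count -= 1
            return None                    # packet lost: no acknowledgment
        return ("ACK", packet["seq"])      # receiver acknowledges receipt

def send_reliably(channel, packet, max_retries=10):
    """Retain a copy and retransmit until acknowledged (or give up)."""
    for attempt in range(1, max_retries + 1):
        ack = channel.send(packet)         # real code: send, await ack/timeout
        if ack == ("ACK", packet["seq"]):
            return attempt                 # transmissions needed
    raise TimeoutError("no acknowledgment after %d tries" % max_retries)

attempts = send_reliably(LossyChannel(), {"seq": 1, "data": b"hello"})
print(attempts)                            # 3: two losses, then success
```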
== See also ==
Error control
Reliable system design
Truncated binary exponential backoff
TCP congestion avoidance algorithm
Development of TCP
QSL card
== References ==