source stringlengths 32 199 | text stringlengths 26 3k |
|---|---|
https://en.wikipedia.org/wiki/Input | Input may refer to:
Computing
Input (computer science), the act of entering data into a computer or data processing system
Information, any data entered into a computer or data processing system
Input device
Input method
Input port (disambiguation)
Input/output (I/O), in computing
Other
Input (talk show)
Input (typeface)
International Public Television Screening Conference (INPUT), an international public television organization
Input (online magazine), an online technology and culture magazine owned by Bustle Digital Group
See also
Independent variable in a mathematical function
In economics, a factor of production, a resource employed to produce goods and services
Advice (opinion)
Impute (disambiguation)
Output (disambiguation) |
https://en.wikipedia.org/wiki/Intelligent%20Network | The Intelligent Network (IN) is the standard network architecture specified in the ITU-T Q.1200 series recommendations. It is intended for fixed as well as mobile telecom networks. It allows operators to differentiate themselves by providing value-added services in addition to the standard telecom services such as PSTN, ISDN on fixed networks, and GSM services on mobile phones or other mobile devices.
The intelligence is provided by network nodes on the service layer, distinct from the switching layer of the core network, as opposed to solutions based on intelligence in the core switches or equipment. The IN nodes are typically owned by telecommunications service providers such as a telephone company or mobile phone operator.
IN is supported by the Signaling System #7 (SS7) protocol between network switching centers and other network nodes owned by network operators.
Examples of IN services
Televoting
Call screening
Local number portability
Toll-free calls/Freephone
Prepaid calling
Account card calling
Virtual private networks (such as family group calling)
Centrex service (Virtual PBX)
Private-number plans (with numbers remaining unpublished in directories)
Universal Personal Telecommunications service (a universal personal telephone number)
Mass-calling service
Prefix free dialing from cellphones abroad
Seamless MMS message access from abroad
Reverse charging
Home Area Discount
Premium Rate calls
Call distribution based on various criteria associated with the call
Location-based routing
Time-based routing
Proportional call distribution (such as between two or more call centres or offices)
Call queueing
Call transfer
History and key concepts
The IN concepts, architecture and protocols were originally developed as standards by the ITU-T which is the standardization committee of the International Telecommunication Union; prior to this a number of telecommunications providers had proprietary implementations. The primary aim of the IN was to enhance the core telephony services offered by traditional telecommunications networks, which usually amounted to making and receiving voice calls, sometimes with call divert. This core would then provide a basis upon which operators could build services in addition to those already present on a standard telephone exchange.
A complete description of the IN emerged in a set of ITU-T standards named Q.1210 to Q.1219, or Capability Set One (CS-1) as they became known. The standards defined a complete architecture including the architectural view, state machines, physical implementation and protocols. They were universally embraced by telecom suppliers and operators, although many variants were derived for use in different parts of the world (see Variants below).
Following the success of CS-1, further enhancements followed in the form of CS-2. Although the standards were completed, they were not as widely implemented as CS-1, partly because of the increasing power of the variants, but |
https://en.wikipedia.org/wiki/Interchangeability | Interchangeability can refer to:
Interchangeable parts, the ability to select components for assembly at random and fit them together within proper tolerances
Interchangeability (computer science), the ability that an object can be replaced by another object without affecting code using the object
Interchangeable random variables in mathematics |
https://en.wikipedia.org/wiki/Interchange%20circuit | In telecommunication, an interchange circuit is a circuit that facilitates the exchange of data and signaling information between data terminal equipment (DTE) and data circuit-terminating equipment (DCE).
An interchange circuit can carry many types of signals and provide many types of service features, such as control signals, timing signals, and common return functions.
References
Communication circuits
Telecommunications equipment |
https://en.wikipedia.org/wiki/Interconnect%20facility | Interconnect facility: In a communications network, one or more communications links that (a) are used to provide local area communications service among several locations and (b) collectively form a node in the network.
An interconnect facility may include network control and administrative circuits as well as the primary traffic circuits.
An interconnect facility may use any medium available and may be redundant.
References
Network architecture |
https://en.wikipedia.org/wiki/IP%20address%20spoofing | In computer networking, IP address spoofing or IP spoofing is the creation of Internet Protocol (IP) packets with a false source IP address, for the purpose of impersonating another computing system.
Background
The basic protocol for sending data over the Internet network and many other computer networks is the Internet Protocol (IP). The protocol specifies that each IP packet must have a header which contains (among other things) the IP address of the sender of the packet. The source IP address is normally the address that the packet was sent from, but the sender's address in the header can be altered, so that to the recipient it appears that the packet came from another source.
The protocol requires the receiving computer to send back a response to the source IP address; therefore, spoofing is mainly used when the sender can anticipate the network response or does not care about the response.
The source IP address provides only limited information about the sender. It may provide general information about the region, city, or town from which the packet was sent. It does not provide information about the identity of the sender or the computer being used.
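To make concrete why spoofing is possible at the protocol level: the source field of an IPv4 header is simply bytes the sender writes. Below is a minimal, illustrative sketch of building such a header with an arbitrary source address (addresses are taken from the reserved documentation ranges; actually sending the packet would additionally require raw-socket privileges, which is not shown):

```python
import struct

def ipv4_header(src: str, dst: str, payload_len: int = 0) -> bytes:
    """Build a minimal 20-byte IPv4 header. The source field is whatever
    the sender chooses to write, which is all IP spoofing amounts to."""
    def ip_bytes(a):
        # dotted quad -> 4 raw bytes
        return bytes(int(o) for o in a.split("."))
    ver_ihl = (4 << 4) | 5            # version 4, header length 5 x 32-bit words
    total_len = 20 + payload_len
    hdr = struct.pack("!BBHHHBBH4s4s",
                      ver_ihl, 0, total_len,
                      0x1234,          # identification (arbitrary)
                      0,               # flags / fragment offset
                      64,              # TTL
                      17,              # protocol = UDP
                      0,               # checksum placeholder
                      ip_bytes(src), ip_bytes(dst))
    # Internet checksum: one's-complement sum of the 16-bit header words
    s = sum(struct.unpack("!10H", hdr))
    s = (s & 0xFFFF) + (s >> 16)
    s = (s & 0xFFFF) + (s >> 16)
    csum = ~s & 0xFFFF
    return hdr[:10] + struct.pack("!H", csum) + hdr[12:]

# The "source" field is under the sender's sole control:
spoofed = ipv4_header("203.0.113.7", "198.51.100.1")
```

Nothing in the packet itself stops the sender from writing any source address; filtering spoofed traffic is left to the network (e.g. ingress filtering).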
Applications
IP address spoofing involving the use of a trusted IP address can be used by network intruders to overcome network security measures, such as authentication based on IP addresses. This type of attack is most effective where trust relationships exist between machines. For example, it is common on some corporate networks to have internal systems trust each other, so that users can log in without a username or password provided they are connecting from another machine on the internal network – which would require them already being logged in. By spoofing a connection from a trusted machine, an attacker on the same network may be able to access the target machine without authentication.
IP address spoofing is most frequently used in denial-of-service attacks, where the objective is to flood the target with an overwhelming volume of traffic, and the attacker does not care about receiving responses to the attack packets. Packets with spoofed IP addresses are more difficult to filter since each spoofed packet appears to come from a different address, and they hide the true source of the attack. Denial of service attacks that use spoofing typically randomly choose addresses from the entire IP address space, though more sophisticated spoofing mechanisms might avoid non-routable addresses or unused portions of the IP address space. The proliferation of large botnets makes spoofing less important in denial of service attacks, but attackers typically have spoofing available as a tool, if they want to use it, so defenses against denial-of-service attacks that rely on the validity of the source IP address in attack packets might have trouble with spoofed packets. Backscatter, a technique used to observe denial-of-service attack activity in the Internet, relies on attackers' use of IP spoofing for its |
https://en.wikipedia.org/wiki/Ionospheric%20sounding | In telecommunication and radio science, an ionospheric sounding is a technique that provides real-time data on high-frequency ionospheric-dependent radio propagation, using a basic system consisting of a synchronized transmitter and receiver.
The time delay between transmission and reception is translated into effective ionospheric layer altitude. Vertical incident sounding uses a collocated transmitter and receiver and involves directing a range of frequencies vertically to the ionosphere and measuring the values of the reflected returned signals to determine the effective ionosphere layer altitude. This technique is also used to determine the critical frequency. Oblique sounders use a transmitter at one end of a given propagation path, and a synchronized receiver, usually with an oscilloscope-type display (ionogram), at the other end. The transmitter emits a stepped- or swept-frequency signal which is displayed or measured at the receiver. The measurement converts time delay to effective altitude of the ionospheric layer. The ionogram display shows the effective altitude of the ionospheric layer as a function of frequency.
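The translation of time delay into effective layer altitude is simple round-trip arithmetic, sketched below. This is an illustrative simplification: real ionogram analysis accounts for group retardation in the ionized medium, which this assumes away by using free-space propagation at c.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def virtual_height_km(delay_s: float) -> float:
    """Effective (virtual) layer height from the round-trip delay of a
    vertically incident pulse: the echo travels up and back, so h' = c*t/2."""
    return C * delay_s / 2 / 1000.0

# A 2 ms echo delay corresponds to a virtual height of roughly 300 km:
h = virtual_height_km(2e-3)
```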
See also
Kennelly–Heaviside layer
Edward V. Appleton
References
Ionosphere
Radio frequency propagation |
https://en.wikipedia.org/wiki/Isochronous%20burst%20transmission | Isochronous burst transmission is a method of transmission used in data networks where the information-bearer channel rate is higher than the input data signaling rate: transmission proceeds by interrupting, at controlled intervals, the data stream being transmitted.
Note 1: Burst transmission in isochronous form enables communication between data terminal equipment (DTE) and data networks that operate at dissimilar data signaling rates, such as when the information-bearer channel rate is higher than the DTE output data signaling rate.
Note 2: The binary digits are transferred at the information-bearer channel rate. The data transfer is interrupted at intervals in order to produce the required average data signaling rate.
Note 3: The interruption is always for an integral number of unit intervals.
Note 4: Isochronous burst transmission has particular application where envelopes are being transferred between data circuit terminating equipment (DCE) and only the bytes contained within the envelopes are being transferred between the DCE and the DTE. Synonyms: burst isochronous (deprecated), interrupted isochronous transmission.
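The rate relationship in Notes 2 and 3 can be sketched numerically. The helper below is illustrative, not part of any standard; it computes how many unit intervals of interruption must follow each burst so that the average rate equals the DTE signaling rate:

```python
def pause_intervals(channel_rate_bps: int, dte_rate_bps: int, burst_bits: int) -> int:
    """Unit intervals of silence after each burst so the average rate equals
    dte_rate_bps. Bits move at the channel rate, so over one burst+pause slot:
    dte_rate = burst / (burst + pause) * channel_rate  =>  solve for pause."""
    pause = burst_bits * (channel_rate_bps - dte_rate_bps) / dte_rate_bps
    if pause != int(pause):
        raise ValueError("rates do not yield an integral number of unit intervals")
    return int(pause)

# A 9600 bit/s bearer channel carrying 4800 bit/s DTE data in 8-bit bursts
# must idle 8 unit intervals between bursts (a 50% duty cycle):
gap = pause_intervals(9600, 4800, 8)
```

Note 3's requirement that the interruption be an integral number of unit intervals shows up here as the divisibility check.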
References
Quantized radio modulation modes |
https://en.wikipedia.org/wiki/Line%20code | In telecommunication, a line code is a pattern of voltage, current, or photons used to represent digital data transmitted down a communication channel or written to a storage medium. This repertoire of signals is usually called a constrained code in data storage systems.
Some signals are more prone to error than others as the physics of the communication channel or storage medium constrains the repertoire of signals that can be used reliably.
Common line encodings are unipolar, polar, bipolar, and Manchester code.
Transmission and storage
After line coding, the signal is put through a physical communication channel, either a transmission medium or data storage medium. The most common physical channels are:
the line-coded signal can directly be put on a transmission line, in the form of variations of the voltage or current (often using differential signaling).
the line-coded signal (the baseband signal) undergoes further pulse shaping (to reduce its frequency bandwidth) and then is modulated (to shift its frequency) to create an RF signal that can be sent through free space.
the line-coded signal can be used to turn on and off a light source in free-space optical communication, most commonly used in an infrared remote control.
the line-coded signal can be printed on paper to create a bar code.
the line-coded signal can be converted to magnetized spots on a hard drive or tape drive.
the line-coded signal can be converted to pits on an optical disc.
Each line code has advantages and disadvantages. Line codes are chosen to meet one or more of the following criteria:
Minimize transmission hardware
Facilitate synchronization
Ease error detection and correction
Achieve a target spectral density
Eliminate a DC component
Disparity
Most long-distance communication channels cannot reliably transport a DC component. The DC component is also called the disparity, the bias, or the DC coefficient. The disparity of a bit pattern is the difference in the number of one bits vs the number of zero bits. The running disparity is the running total of the disparity of all previously transmitted bits. The simplest possible line code, unipolar, gives too many errors on such systems, because it has an unbounded DC component.
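The disparity bookkeeping described above can be sketched directly (an illustrative helper, representing bits as 0/1):

```python
def disparity(bits):
    """Disparity of a bit pattern: (number of ones) - (number of zeros)."""
    return sum(1 if b else -1 for b in bits)

def running_disparity(stream):
    """Running total of the disparity over all bits transmitted so far."""
    total, history = 0, []
    for b in stream:
        total += 1 if b else -1
        history.append(total)
    return history
```

For unipolar coding of a long run of ones, the running disparity grows without bound, which is exactly the unbounded DC component the paragraph describes; a DC-balanced code keeps it within fixed limits.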
Most line codes eliminate the DC component; such codes are called DC-balanced, zero-DC, or DC-free. There are three ways of eliminating the DC component:
Use a constant-weight code. Each transmitted code word in a constant-weight code is designed such that every code word that contains some positive or negative levels also contains enough of the opposite levels, such that the average level over each code word is zero. Examples of constant-weight codes include Manchester code and Interleaved 2 of 5.
Use a paired disparity code. Each code word in a paired disparity code that averages to a negative level is paired with another code word that averages to a positive level. The transmi |
https://en.wikipedia.org/wiki/Link%20level | For computer networking, Link level: In the hierarchical structure of a primary or secondary station, the conceptual level of control or data processing logic that controls the data link.
Note: Link-level functions provide an interface between the station high-level logic and the data link. Link-level functions include (a) transmit bit injection and receive bit extraction, (b) address and control field interpretation, (c) command response generation, transmission and interpretation, and (d) frame check sequence computation and interpretation.
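Of the link-level functions listed, the frame check sequence is the easiest to illustrate. The sketch below uses CRC-32 (the FCS used by Ethernet; other data links use other polynomials and widths, so this is one illustrative choice) via Python's standard zlib module:

```python
import zlib

def append_fcs(frame: bytes) -> bytes:
    """FCS computation: append a 32-bit CRC so the receiver can recompute
    it over the received bits and compare (FCS interpretation)."""
    return frame + zlib.crc32(frame).to_bytes(4, "little")

def fcs_ok(frame_with_fcs: bytes) -> bool:
    """Receiver side: recompute the CRC over the body and compare."""
    body, fcs = frame_with_fcs[:-4], frame_with_fcs[-4:]
    return zlib.crc32(body).to_bytes(4, "little") == fcs
```

A single flipped bit anywhere in the frame makes the recomputed CRC disagree with the transmitted FCS, which is how the link level detects corrupted frames.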
References
Computer networking |
https://en.wikipedia.org/wiki/Local%20call | In telephony, the term local call has the following meanings:
Any call using a single switching center; that is, not traveling to another telephone network;
A call made within a local calling area as defined by the local exchange carrier;
Any call for which an additional charge, i.e., toll charge, is not billed to the calling or called party, or (depending on the country) for which this charge is reduced because it is a short-distance call (e.g. within a town or local metropolitan area).
Typically, local calls have shorter numbers than long-distance calls, as the area code may not be required. However, this is not true in parts of the United States and Canada that are subject to overlay plans, or in many countries in Europe that require closed dialing plans.
Toll-free numbers (e.g. "800" numbers in the United States) are not necessarily local calls; despite being free to the caller, any charge due for the distance of the connection is billed to the called party.
Commercial users who make or accept many long-distance calls to or from a particular distant place may make them as local calls by use of a foreign exchange service. Such an "FX" line also allows people in the distant place to call by using a telephone number local to them.
See also
Local access and transport area
Local telephone service
Trunk vs. Toll
References
Teletraffic |
https://en.wikipedia.org/wiki/Long-haul%20communications | In telecommunication, the term long-haul communications has the following meanings:
1. In public switched networks, pertaining to circuits that span large distances, such as the circuits in inter-LATA, interstate, and international communications. See also Long line (telecommunications)
2. In the military community, communications among users on a national or worldwide basis.
Note 1: Compared to tactical communications, long-haul communications are characterized by (a) higher levels of users, such as the US National Command Authority, (b) more stringent performance requirements, such as higher quality circuits, (c) longer distances between users, including worldwide distances, (d) higher traffic volumes and densities, (e) larger switches and trunk cross sections, and (f) fixed and recoverable assets.
Note 2: "Long-haul communications" usually pertains to the U.S. Defense Communications System.
Note 3: The skills of long-haul telecommunications technicians translate into many fields of corporate IT work (information technology, network technician, telecommunications specialist, IT support, and so on). Although the term is used in the military, career fields such as the U.S. Air Force's 3D1X2, Cyber Transport Systems (renamed many times over the years, but essentially the same job as Network Infrastructure Technician and Systems Control Technician), work on the "in between" (cloud networking) for networks (MSPP, ATM, routers, switches), telephony (VoIP, DS0 through DS4 and higher), encryption (configuring and monitoring encryption devices), and video-support data transfers, i.e. "bulk data transfer" or aggregation networking.
The long-haul telecommunications technician is considered a "jack of all trades", but it is very much in the technician's interest to pursue further education and certifications to qualify for jobs outside the military. The military provides an avenue into the career field but does not make the individual a master of it; civilian employers often require skills that are not exercised within the military role. It is therefore best to find a civilian job similar to the Air Force Specialty Code (AFSC), review the company's description of the qualifications for that job, and obtain at least an associate degree, over five years of experience, and the commonly required certifications (Network+, Security+, CCNA, CCNP, and so on) to secure the job or at least an interview. The best time to apply, or to line up a guaranteed job, is the last three months before leaving the military. Military personnel in career field 3D1X2 require a Secret, Top Secret, or Top Secret/SCI clearance to do the job.
See also
Long-distance calling
Meteor burst communications
Communication circuits |
https://en.wikipedia.org/wiki/Machine-readable%20medium%20and%20data | In communications and computing, a machine-readable medium (or computer-readable medium) is a medium capable of storing data in a format easily readable by a digital computer or a sensor.
It contrasts with human-readable medium and data.
The result is called machine-readable data or computer-readable data, and the data itself can be described as having machine-readability.
Data
Machine-readable data must be structured data.
Attempts to create machine-readable data occurred as early as the 1960s. At the same time that seminal developments in machine reading and natural-language processing were being released (like Weizenbaum's ELIZA), people were anticipating the success of machine-readable functionality and attempting to create machine-readable documents. One such example was musicologist Nancy B. Reich's creation of a machine-readable catalog of composer William Jay Sydeman's works in 1966.
In the United States, the OPEN Government Data Act of 14 January 2019 defines machine-readable data as "data in a format that can be easily processed by a computer without human intervention while ensuring no semantic meaning is lost." The law directs U.S. federal agencies to publish public data in such a manner, ensuring that "any public data asset of the agency is machine-readable".
Machine-readable data may be classified into two groups: human-readable data that is marked up so that it can also be read by machines (e.g. microformats, RDFa, HTML), and data file formats intended principally for processing by machines (CSV, RDF, XML, JSON). These formats are only machine readable if the data contained within them is formally structured; exporting a CSV file from a badly structured spreadsheet does not meet the definition.
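The distinction can be made concrete with a tiny example: the same records in two machine-oriented formats, both of which a program parses without human intervention. The records here are made-up sample data, not from any real catalog:

```python
import csv
import io
import json

# The same record set in two machine-readable file formats:
csv_text = "name,year\nELIZA,1966\nSHRDLU,1970\n"
json_text = '[{"name": "ELIZA", "year": 1966}, {"name": "SHRDLU", "year": 1970}]'

rows_csv = list(csv.DictReader(io.StringIO(csv_text)))
rows_json = json.loads(json_text)

# Both yield structured records a program can process directly; a free-text
# paragraph describing the same programs would not. Note that CSV carries no
# type information (the year parses as the string "1966"), while JSON does.
```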
Machine readable is not synonymous with digitally accessible. A digitally accessible document may be online, making it easier for humans to access via computers, but its content is much harder to extract, transform, and process via computer programming logic if it is not machine-readable.
Extensible Markup Language (XML) is designed to be both human- and machine-readable, and Extensible Stylesheet Language Transformation (XSLT) is used to improve the presentation of the data for human readability. For example, XSLT can be used to automatically render XML in Portable Document Format (PDF). Machine-readable data can be automatically transformed for human-readability but, generally speaking, the reverse is not true.
For purposes of implementation of the Government Performance and Results Act (GPRA) Modernization Act, the Office of Management and Budget (OMB) defines "machine readable format" as follows: "Format in a standard computer language (not English text) that can be read automatically by a web browser or computer system. (e.g.; xml). Traditional word processing documents and portable document format (PDF) files are easily read by humans but typically are difficult for machines to interpret. Other formats such as extensible markup |
https://en.wikipedia.org/wiki/Managed%20object | In telecommunication, the term managed object has the following meanings:
1. In a network, an abstract representation of network resources that are managed. The "representation" covers not only the actual device that is managed but also the device driver that communicates with it. An example of a printer as a managed object is the window that shows information about the printer, such as its location, status, printing progress, paper choice, and printing margins.
The database, where all managed objects are stored, is called Management Information Base. In contrast with a CI, a managed object is "dynamic" and communicates with other network resources that are managed.
Note: A managed object may represent a physical entity, a network service, or an abstraction of a resource that exists independently of its use in management.
2. In telecommunications management, a resource within the telecommunications environment that may be managed through the use of operation, administration, maintenance, and provisioning (OAMP) application protocols.
Network management |
https://en.wikipedia.org/wiki/Manchester%20code | In telecommunication and data storage, Manchester code (also known as phase encoding, or PE) is a line code in which the encoding of each data bit is either low then high, or high then low, for equal time. It is a self-clocking signal with no DC component. Consequently, electrical connections using a Manchester code are easily galvanically isolated.
Manchester code derives its name from its development at the University of Manchester, where the coding was used for storing data on the magnetic drums of the Manchester Mark 1 computer.
Manchester code was widely used for magnetic recording on 1600 bpi computer tapes before the introduction of 6250 bpi tapes which used the more efficient group-coded recording. Manchester code was used in early Ethernet physical layer standards and is still used in consumer IR protocols, RFID and near-field communication.
Features
Manchester coding is a special case of binary phase-shift keying (BPSK), where the data controls the phase of a square wave carrier whose frequency is the data rate. Manchester code ensures frequent line voltage transitions, directly proportional to the clock rate; this helps clock recovery.
The DC component of the encoded signal is not dependent on the data and therefore carries no information. Therefore connections may be inductively or capacitively coupled, allowing the signal to be conveyed conveniently by galvanically isolated media (e.g., Ethernet) using a network isolator—a simple one-to-one pulse transformer which cannot convey a DC component.
Limitations
Manchester coding's data rate is only half that of a non-coded signal, which limits its usefulness to systems where bandwidth is not an issue, such as a local area network (LAN).
Manchester encoding introduces difficult frequency-related problems that make it unsuitable for use at higher data rates.
There are more complex codes, such as 8B/10B encoding, that use less bandwidth to achieve the same data rate but may be less tolerant of frequency errors and jitter in the transmitter and receiver reference clocks.
Encoding and decoding
Manchester code always has a transition at the middle of each bit period and may (depending on the information to be transmitted) have a transition at the start of the period also. The direction of the mid-bit transition indicates the data. Transitions at the period boundaries do not carry information. They exist only to place the signal in the correct state to allow the mid-bit transition.
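The mid-bit-transition rule can be sketched as a pair of illustrative helper functions (not from any standard library). Each bit becomes a (first-half, second-half) level pair, and decoding reads off the direction of the mid-bit transition; the two level conventions, which are simply inverses of each other, are selectable:

```python
def manchester_encode(bits, convention="thomas"):
    """Encode bits as (first-half, second-half) signal levels.
    Thomas convention: 0 -> low,high and 1 -> high,low.
    The IEEE 802.3 convention is the inverse."""
    pairs = {0: (0, 1), 1: (1, 0)} if convention == "thomas" else {0: (1, 0), 1: (0, 1)}
    out = []
    for b in bits:
        out.extend(pairs[b])
    return out

def manchester_decode(levels, convention="thomas"):
    """Recover bits from the direction of each mid-bit transition."""
    bits = []
    for first, second in zip(levels[0::2], levels[1::2]):
        if (first, second) not in ((0, 1), (1, 0)):
            raise ValueError("missing mid-bit transition")
        rising = (first, second) == (0, 1)
        bits.append(0 if rising == (convention == "thomas") else 1)
    return bits
```

Every bit period contains a transition regardless of the data, which is what makes the code self-clocking.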
Conventions for representation of data
There are two opposing conventions for the representations of data.
The first of these was first published by G. E. Thomas in 1949 and is followed by numerous authors (e.g., Andy Tanenbaum). It specifies that for a 0 bit the signal levels will be low–high (assuming an amplitude physical encoding of the data) – with a low level in the first half of the bit period, and a high level in the second half. For a 1 bit the signal levels will be high–low. This is also known a |
https://en.wikipedia.org/wiki/Master%20station | In telecommunication, a master station is a station that controls or coordinates the activities of other stations in the system.
Examples:
In a data network, the control station may designate a master station to ensure data transfer to one or more slave stations. Such a master station controls one or more data links of the data communications network at any given instant. The assignment of master status to a given station is temporary and is controlled by the control station according to the procedures set forth in the operational protocol. Master status is normally conferred upon a station so that it may transmit a message, but a station need not have a message to send to be designated the master station.
In navigation systems using precise time dissemination, the master station is a station that has the clock that is used to synchronize the clocks of subordinate stations.
In basic mode link control, the master station is a data station that has accepted an invitation to ensure a data transfer to one or more slave stations. At a given instant, there can be only one master station on a data link.
Operation modes
In data transmission, a master station can be set to not wait for a reply from a slave station after transmitting each message or transmission block. In this case the station is said to be in "continuous operation".
References
Telecommunications systems |
https://en.wikipedia.org/wiki/Mediation%20function | In telecommunications network management, a mediation function is a function that routes or acts on information passing between network elements and network operations.
Examples of mediation functions are communications control, protocol conversion, data handling, communications of primitives, processing that includes decision-making, and data storage.
Mediation functions can be shared among network elements, mediation devices, and network operation centers.
Sources
Network management |
https://en.wikipedia.org/wiki/Message%20format | In telecommunication, a message format is a predetermined or prescribed spatial or time-sequential arrangement of the parts of a message that is recorded in or on a data storage medium.
At one time, messages prepared for electrical transmission were composed on a printed blank form with spaces for each part of the message and for administrative entries.
Telecommunication theory
Data transmission |
https://en.wikipedia.org/wiki/Micro-mainframe%20link | In telecommunication, a micro-mainframe link is a physical or logical connection established between a remote microprocessor and mainframe host computer for the express purpose of uploading, downloading, or viewing interactive data and databases on-line in real time.
Note: A micro-mainframe link usually requires terminal emulation software on the microcomputer.
References
Data transmission |
https://en.wikipedia.org/wiki/%CE%9C-law%20algorithm | The μ-law algorithm (sometimes written mu-law, often approximated as u-law) is a companding algorithm, primarily used in 8-bit PCM digital telecommunication systems in North America and Japan. It is one of the two companding algorithms in the G.711 standard from ITU-T, the other being the similar A-law. A-law is used in regions where digital telecommunication signals are carried on E-1 circuits, e.g. Europe.
The terms PCMU, G711u or G711MU are used for G711 μ-law.
Companding algorithms reduce the dynamic range of an audio signal. In analog systems, this can increase the signal-to-noise ratio (SNR) achieved during transmission; in the digital domain, it can reduce the quantization error (hence increasing the signal-to-quantization-noise ratio). These SNR increases can be traded instead for reduced bandwidth for equivalent SNR.
At the cost of a reduced peak SNR, it can be mathematically shown that μ-law's non-linear quantization effectively increases dynamic range by 33 dB, or 5.5 bits, over a linearly quantized signal; hence 13.5 bits (which rounds up to 14 bits) is the most resolution required for an input digital signal to be compressed to 8-bit μ-law.
Algorithm types
The μ-law algorithm may be described in an analog form and in a quantized digital form.
Continuous
For a given input x, the equation for μ-law encoding is
F(x) = sgn(x) · ln(1 + μ|x|) / ln(1 + μ),  −1 ≤ x ≤ 1,
where μ = 255 in the North American and Japanese standards, and sgn(x) is the sign function. It is important to note that the range of this function is −1 to 1.
μ-law expansion is then given by the inverse equation:
F⁻¹(y) = sgn(y) · ((1 + μ)^|y| − 1) / μ,  −1 ≤ y ≤ 1.
Discrete
The discrete form is defined in ITU-T Recommendation G.711.
G.711 is unclear about how to code the values at the limit of a range (e.g. whether +31 codes to 0xEF or 0xF0).
However, G.191 provides example code in the C language for a μ-law encoder. The positive and negative ranges differ slightly; e.g., the negative range corresponding to +30 to +1 is −31 to −2. This is accounted for by the use of 1's complement (simple bit inversion) rather than 2's complement to convert a negative value to a positive value during encoding.
Implementation
The μ-law algorithm may be implemented in several ways:
Analog Use an amplifier with non-linear gain to achieve companding entirely in the analog domain.
Non-linear ADC Use an analog-to-digital converter with quantization levels which are unequally spaced to match the μ-law algorithm.
Digital Use the quantized digital version of the μ-law algorithm to convert data once it is in the digital domain.
Software/DSP
Use the continuous version of the μ-law algorithm to calculate the companded values.
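As a sketch of the software option, the continuous compression and expansion formulas with μ = 255 can be written directly (an illustrative implementation, not the quantized G.711 codec):

```python
import math

MU = 255.0  # North American / Japanese standard value

def mu_encode(x: float) -> float:
    """Continuous mu-law compression of a sample x in [-1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_decode(y: float) -> float:
    """Inverse (expansion): recover x from the companded value y."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)
```

Small amplitudes are boosted well above their linear values before quantization, which is exactly the dynamic-range compression the algorithm exists to provide.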
Usage justification
μ-law encoding is used because speech has a wide dynamic range. In analog signal transmission, in the presence of relatively constant background noise, the finer detail is lost. Given that the precision of the detail is compromised anyway, and assuming that the signal is to be perceived as audio by a human, one can take advantage of the fact that the perceived aco |
https://en.wikipedia.org/wiki/Multicast%20address | A multicast address is a logical identifier for a group of hosts in a computer network that are available to process datagrams or frames intended to be multicast for a designated network service. Multicast addressing can be used in the link layer (layer 2 in the OSI model), such as Ethernet multicast, and at the internet layer (layer 3 for OSI) for Internet Protocol Version 4 (IPv4) or Version 6 (IPv6) multicast.
IPv4
IPv4 multicast addresses are defined by the most-significant bit pattern of 1110. This originates from the classful network design of the early Internet when this group of addresses was designated as Class D. The CIDR notation for this group is 224.0.0.0/4. The group includes the addresses from 224.0.0.0 to 239.255.255.255. Address assignments from within this range are specified in RFC 5771, an Internet Engineering Task Force (IETF) Best Current Practice document (BCP 51).
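The range implied by the 1110 prefix can be checked with Python's ipaddress module:

```python
import ipaddress

class_d = ipaddress.ip_network("224.0.0.0/4")   # the 1110 prefix as CIDR

first = ipaddress.ip_address("224.0.0.0")
last = ipaddress.ip_address("239.255.255.255")
assert first in class_d and last in class_d
assert first.is_multicast and last.is_multicast
# One address past either end is no longer multicast
assert not ipaddress.ip_address("223.255.255.255").is_multicast
assert not ipaddress.ip_address("240.0.0.0").is_multicast
```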
The address range is divided into blocks each assigned a specific purpose or behavior.
Local subnetwork
Addresses in the range of 224.0.0.0 to 224.0.0.255 are individually assigned by IANA and designated for multicasting on the local subnetwork only. For example, the Routing Information Protocol (RIPv2) uses 224.0.0.9, Open Shortest Path First (OSPF) uses 224.0.0.5 and 224.0.0.6, and Multicast DNS uses 224.0.0.251. Routers must not forward these messages outside the subnet from which they originate.
Internetwork control block
Addresses in the range 224.0.1.0 to 224.0.1.255 are individually assigned by IANA and designated as the internetwork control block. This block of addresses is used for traffic that must be routed through the public Internet, such as for applications of the Network Time Protocol using 224.0.1.1.
AD-HOC block
Addresses in three separate blocks are not individually assigned by IANA. These addresses are globally routed and are used for applications that don't fit either of the previously described purposes.
Source-specific multicast
The 232.0.0.0/8 (IPv4) and ff3x::/32 (IPv6) blocks are reserved for use by source-specific multicast.
GLOP
The 233.0.0.0/8 range was originally assigned by RFC 2770 as an experimental, public statically-assigned multicast address space for publishers and Internet service providers that wished to source content on the Internet. The allocation method is termed GLOP addressing and provides implementers a block of 256 addresses that is determined by their 16-bit autonomous system number (ASN) allocation. In a nutshell, the middle two octets of this block are formed from assigned ASNs, giving any operator assigned an ASN 256 globally unique multicast group addresses. The method is not applicable to the newer 32-bit ASNs. RFC 3180, superseding RFC 2770, envisioned the use of the 233.0.0.0/8 range for many-to-many multicast applications. Unfortunately, with only 256 multicast addresses available to each autonomous system, GLOP is not adequate for large-scale broadcasters.
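The GLOP derivation (the middle two octets taken from the 16-bit ASN) can be sketched as follows; the helper name is illustrative:

```python
import ipaddress

def glop_block(asn):
    """Return the GLOP /24 multicast block for a 16-bit ASN (233.x.y.0/24)."""
    if not 0 <= asn <= 0xFFFF:
        raise ValueError("GLOP addressing covers only 16-bit ASNs")
    # High byte of the ASN becomes the second octet, low byte the third.
    return ipaddress.ip_network(f"233.{asn >> 8}.{asn & 0xFF}.0/24")

# AS 5662 maps to 233.22.30.0/24, since 5662 = 22 * 256 + 30
```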
Unicast-prefix-based
The 234.0.0.0/8 range is assigned by RFC 6034 as a range of global IPv4 multicast address space provided to each organization that has a /24 or larger globally routed unicast address space allocated; one multicast address is reserved per /24 of unicast address space.
https://en.wikipedia.org/wiki/Multiplexing | In telecommunications and computer networking, multiplexing (sometimes contracted to muxing) is a method by which multiple analog or digital signals are combined into one signal over a shared medium. The aim is to share a scarce resource, the physical transmission medium. For example, in telecommunications, several telephone calls may be carried using one wire. Multiplexing originated in telegraphy in the 1870s, and is now widely applied in communications. In telephony, George Owen Squier is credited with the development of telephone carrier multiplexing in 1910.
The multiplexed signal is transmitted over a communication channel such as a cable. The multiplexing divides the capacity of the communication channel into several logical channels, one for each message signal or data stream to be transferred. A reverse process, known as demultiplexing, extracts the original channels on the receiver end.
A device that performs the multiplexing is called a multiplexer (MUX), and a device that performs the reverse process is called a demultiplexer (DEMUX or DMX).
Inverse multiplexing (IMUX) has the opposite aim as multiplexing, namely to break one data stream into several streams, transfer them simultaneously over several communication channels, and recreate the original data stream.
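The multiplex/demultiplex pair can be illustrated with simple round-robin time-division interleaving (an illustrative sketch, not any specific standard):

```python
def mux(streams):
    """Interleave equal-length streams into one signal, round-robin."""
    return [s[i] for i in range(len(streams[0])) for s in streams]

def demux(signal, n):
    """Recover the n original streams from the interleaved signal."""
    return [signal[i::n] for i in range(n)]
```

For example, mux([[1, 2, 3], [4, 5, 6]]) yields [1, 4, 2, 5, 3, 6], and demux with n = 2 recovers the two original streams.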
In computing, I/O multiplexing can also be used to refer to the concept of processing multiple input/output events from a single event loop, with system calls like poll and select (Unix).
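In the I/O sense, a single event loop can wait on many descriptors at once; a minimal sketch using Python's selectors module (which wraps poll/select):

```python
import selectors
import socket

sel = selectors.DefaultSelector()
server = socket.socket()
server.bind(("127.0.0.1", 0))   # ephemeral port on the loopback interface
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)

# One pass of the event loop: with no pending client connections,
# select() times out and returns an empty event list.
events = sel.select(timeout=0.1)

sel.unregister(server)
server.close()
```

A real server would loop over sel.select() forever, accepting connections and registering each one for read events.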
Types
Multiple variable bit rate digital bit streams may be transferred efficiently over a single fixed bandwidth channel by means of statistical multiplexing. This is an asynchronous mode time-domain multiplexing which is a form of time-division multiplexing.
Digital bit streams can be transferred over an analog channel by means of code-division multiplexing techniques such as frequency-hopping spread spectrum (FHSS) and direct-sequence spread spectrum (DSSS).
In wireless communications, multiplexing can also be accomplished through alternating polarization (horizontal/vertical or clockwise/counterclockwise) on each adjacent channel and satellite, or through phased multi-antenna array combined with a multiple-input multiple-output communications (MIMO) scheme.
Space-division multiplexing
In wired communication, space-division multiplexing, also known as space-division multiple access (SDMA) is the use of separate point-to-point electrical conductors for each transmitted channel. Examples include an analogue stereo audio cable, with one pair of wires for the left channel and another for the right channel, and a multi-pair telephone cable, a switched star network such as a telephone access network, a switched Ethernet network, and a mesh network.
In wireless communication, space-division multiplexing is achieved with multiple antenna elements forming a phased array antenna. Examples are multiple-input and multiple-output (MIMO), single-input and multiple-output (SIMO) and multiple-input and single-output (MISO).
https://en.wikipedia.org/wiki/Narrative%20traffic | Narrative traffic is data communications consisting of plain or encrypted messages written in a natural language and transmitted in accordance with standard formats and procedures.
Examples of narrative traffic include:
Messages that are placed on paper tape and transmitted via a teletypewriter (TTY), and on reception, are converted back to a printed page on another teletypewriter or teleprinter
Messages printed on a sheet of paper, transmitted via optical character recognition (OCR) equipment, and on reception, converted back to a printed page on a printer.
References
Data transmission
Teletraffic |
https://en.wikipedia.org/wiki/National%20Information%20Infrastructure | The National Information Infrastructure (NII) was the product of the High Performance Computing Act of 1991. It was a telecommunications policy buzzword, which was popularized during the Clinton Administration under the leadership of Vice-President Al Gore.
It proposed to build communications networks, interactive services, interoperable computer hardware and software, computers, databases, and consumer electronics in order to make vast amounts of information available to both public and private sectors. NII was to have included more than just the physical facilities (more than the cameras, scanners, keyboards, telephones, fax machines, computers, switches, compact disks, video and audio tape, cable, wire, satellites, optical fiber transmission lines, microwave nets, televisions, monitors, and printers) used to transmit, store, process, and display voice, data, and images; it was also to encompass a wide range of interactive functions, user-tailored services, and multimedia databases that were interconnected in a technology-neutral manner that would favor no one industry over any other.
See also
Al Gore and information technology
High Performance Computing Act of 1991
Information Superhighway
History of the Internet
NII Award
References
History of the Internet
Internet terminology
Telecommunications in the United States |
https://en.wikipedia.org/wiki/Network%20architecture | Network architecture is the design of a computer network. It is a framework for the specification of a network's physical components and their functional organization and configuration, its operational principles and procedures, as well as communication protocols used.
In telecommunication, the specification of a network architecture may also include a detailed description of products and services delivered via a communications network, as well as detailed rate and billing structures under which services are compensated.
The network architecture of the Internet is predominantly expressed by its use of the Internet protocol suite, rather than a specific model for interconnecting networks or nodes in the network, or the usage of specific types of hardware links.
OSI model
The Open Systems Interconnection model (OSI model) defines and codifies the concept of layered network architecture. Abstraction layers are used to subdivide a communications system further into smaller manageable parts. A layer is a collection of similar functions that provide services to the layer above it and receives services from the layer below it. On each layer, an instance provides services to the instances at the layer above and requests services from the layer below.
Distributed computing
In distributed computing, the network architecture often describes the structure and classification of a distributed application architecture, as the participating nodes in a distributed application are often referred to as a network. For example, the applications architecture of the public switched telephone network (PSTN) has been termed the Intelligent Network. There are a number of specific classifications but all lie on a continuum between the dumb network (e.g. the Internet) and the intelligent network (e.g. the PSTN).
A popular example of such usage of the term in distributed applications, as well as permanent virtual circuits, is the organization of nodes in peer-to-peer (P2P) services and networks. P2P networks usually implement overlay networks running over an underlying physical or logical network. These overlay networks may implement certain organizational structures of the nodes according to several distinct models, the network architecture of the system.
See also
Network topology
Spawning networks
References
External links
Computer Network Architects at the US Department of Labor
Telecommunications engineering |
https://en.wikipedia.org/wiki/Network%20engineering | Network engineering may refer to:
The field concerned with internetworking service requirements for switched telephone networks
The field concerned with Computer Networking; the design and management of computer networks
The field concerned with Telecommunications Engineering; developing telecommunications network topologies
The field concerned with Broadcasting; spreading messages to a dispersed audience electronically.
See also
Network administrator
Telecommunications engineering |
https://en.wikipedia.org/wiki/Network%20interface | Network interface may refer to:
Network interface controller, a computer hardware component that connects a computer to a computer network
Network interface device, a device that serves as the demarcation point between a telephone carrier's local loop and the customer's wiring
Virtual network interface, an abstract virtualized representation of a computer network interface
Loopback interface, a virtual network interface that connects a host to itself |
https://en.wikipedia.org/wiki/Network%20interface%20device | In telecommunications, a network interface device (NID; also known by several other names) is a device that serves as the demarcation point between the carrier's local loop and the customer's premises wiring. Outdoor telephone NIDs also provide the subscriber with access to the station wiring and serve as a convenient test point for verification of loop integrity and of the subscriber's inside wiring.
Naming
Generically, an NID may also be called a network interface unit (NIU), telephone network interface (TNI), system network interface (SNI), or telephone network box.
Australia's National Broadband Network uses the term network termination device or NTD.
A smartjack is a type of NID with capabilities beyond simple electrical connection, such as diagnostics.
An optical network terminal (ONT) is a type of NID used with fiber-to-the-premises applications.
Wiring termination
The simplest NIDs are essentially just a specialized set of wiring terminals. These will typically take the form of a small, weather-proof box, mounted on the outside of the building. The telephone line from the telephone company will enter the NID and be connected to one side. The customer connects their wiring to the other side. A single NID enclosure may contain termination for a single line or multiple lines.
In its role as the demarcation point (dividing line), the NID separates the telephone company's equipment from the customer's wiring and equipment. The telephone company owns the NID itself, and all wiring up to it. Anything past the NID is the customer's responsibility. To facilitate this, there is typically a test jack inside the NID. Accessing the test jack disconnects the customer premises wiring from the public switched telephone network and allows the customer to plug a "known good" telephone into the jack to isolate trouble. If the telephone works at the test jack, the problem is the customer's wiring, and the customer is responsible for repair. If the telephone does not work, the line is faulty and the telephone company is responsible for repair.
Most NIDs also include "circuit protectors", which are surge protectors for a telephone line. They protect customer wiring, equipment, and personnel from any transient energy on the line, such as from a lightning strike to a telephone pole.
Simple NIDs contain no digital logic; they are "dumb" devices. They have no capabilities beyond wiring termination, circuit protection, and providing a place to connect test equipment.
Smartjack
Several types of NIDs provide more than just a terminal for the connection of wiring. Such NIDs are colloquially called smartjacks or Intelligent Network Interface Devices (INIDs) as an indication of their built-in "intelligence", as opposed to a simple NID, which is just a wiring device. Smartjacks are typically used for more complicated types of telecommunications service, such as T1 lines. Plain old telephone service lines generally cannot be equipped with smartjacks.
Despite |
https://en.wikipedia.org/wiki/Network%20management | Network management is the process of administering and managing computer networks. Services provided by this discipline include fault analysis, performance management, provisioning of networks and maintaining quality of service. Network management software is used by network administrators to help perform these functions.
Technologies
A number of access methods exist to support network and network device management. Network management allows IT professionals to monitor network components within a large network area. Access methods include SNMP, the command-line interface (CLI), custom XML, CMIP, Windows Management Instrumentation (WMI), Transaction Language 1 (TL1), CORBA, NETCONF, and the Java Management Extensions (JMX).
Schemas include the Structure of Management Information (SMI), WBEM, the Common Information Model (CIM Schema), and MTOSI amongst others.
See also
Application service management
Business service management
Capacity management
Comparison of network monitoring systems
FCAPS
In-network management
Integrated business planning
Network and service management taxonomy
Network monitoring
Network traffic measurement
Out-of-band management
Systems management
Website monitoring
References
External links
Network Monitoring and Management Tools
Software-Defined Network Management |
https://en.wikipedia.org/wiki/Network%20operating%20system | A network operating system (NOS) is a specialized operating system for a network device such as a router, switch or firewall.
Historically operating systems with networking capabilities were described as network operating systems, because they allowed personal computers (PCs) to participate in computer networks and shared file and printer access within a local area network (LAN). This description of operating systems is now largely historical, as common operating systems include a network stack to support a client–server model.
History
Early microcomputer operating systems such as CP/M, MS-DOS and classic Mac OS were designed for one user on one computer. Packet switching networks were developed to share hardware resources, such as a mainframe computer, a printer or a large and expensive hard disk. As local area network technology became available, two general approaches to handle sharing of resources on networks arose.
Historically a network operating system was an operating system for a computer which implemented network capabilities. Operating systems with a network stack allowed personal computers to participate in a client-server architecture in which a server enables multiple clients to share resources, such as printers. Early examples of client-server operating systems that were shipped with fully integrated network capabilities are Novell NetWare using the Internetwork Packet Exchange (IPX) network protocol and Banyan VINES which used a variant of the Xerox Network Systems (XNS) protocols.
These limited client/server networks were gradually replaced by Peer-to-peer networks, which used networking capabilities to share resources and files located on a variety of computers of all sizes. A peer-to-peer network sets all connected computers equal; they all share the same abilities to use resources available on the network. The most popular peer-to-peer networks as of 2020 are Ethernet, Wi-Fi and the Internet protocol suite. Software that allowed users to interact with these networks, despite a lack of networking support in the underlying manufacturer's operating system, was sometimes called a network operating system. Examples of such add-on software include Phil Karn's KA9Q NOS (adding Internet support to CP/M and MS-DOS), PC/TCP Packet Drivers (adding Ethernet and Internet support to MS-DOS), and LANtastic (for MS-DOS, Microsoft Windows and OS/2), and Windows for Workgroups (adding NetBIOS to Windows). Examples of early operating systems with peer-to-peer networking capabilities built-in include MacOS (using AppleTalk and LocalTalk), and the Berkeley Software Distribution.
Today, distributed computing and groupware applications have become the norm. Computer operating systems include a networking stack as a matter of course. During the 1980s the need to integrate dissimilar computers with network capabilities grew and the number of networked devices grew rapidly. Partly because it allowed for multi-vendor interoperability, and could |
https://en.wikipedia.org/wiki/Network%20topology | Network topology is the arrangement of the elements (links, nodes, etc.) of a communication network. Network topology can be used to define or describe the arrangement of various types of telecommunication networks, including command and control radio networks, industrial fieldbusses and computer networks.
Network topology is the topological structure of a network and may be depicted physically or logically. It is an application of graph theory wherein communicating devices are modeled as nodes and the connections between the devices are modeled as links or lines between the nodes. Physical topology is the placement of the various components of a network (e.g., device location and cable installation), while logical topology illustrates how data flows within a network. Distances between nodes, physical interconnections, transmission rates, or signal types may differ between two different networks, yet their logical topologies may be identical. A network’s physical topology is a particular concern of the physical layer of the OSI model.
Examples of network topologies are found in local area networks (LAN), a common computer network installation. Any given node in the LAN has one or more physical links to other devices in the network; graphically mapping these links results in a geometric shape that can be used to describe the physical topology of the network. A wide variety of physical topologies have been used in LANs, including ring, bus, mesh and star. Conversely, mapping the data flow between the components determines the logical topology of the network. In comparison, Controller Area Networks, common in vehicles, are primarily distributed control system networks of one or more controllers interconnected with sensors and actuators over, invariably, a physical bus topology.
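Viewed as graphs, the common LAN shapes differ only in their link sets; for instance (illustrative helpers, nodes numbered from 0):

```python
def star_links(n):
    """Star topology: a hub (node 0) linked to each of n leaf nodes."""
    return {(0, i) for i in range(1, n + 1)}

def ring_links(n):
    """Ring topology: each node linked to its successor, wrapping around."""
    return {(i, (i + 1) % n) for i in range(n)}

# A star with n leaves and a ring with n nodes both use n links,
# but the star concentrates every path through the hub.
```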
Topologies
Two basic categories of network topologies exist, physical topologies and logical topologies.
The transmission medium layout used to link devices is the physical topology of the network. For conductive or fiber optical mediums, this refers to the layout of cabling, the locations of nodes, and the links between the nodes and the cabling. The physical topology of a network is determined by the capabilities of the network access devices and media, the level of control or fault tolerance desired, and the cost associated with cabling or telecommunication circuits.
In contrast, logical topology is the way that the signals act on the network media, or the way that the data passes through the network from one device to the next without regard to the physical interconnection of the devices. A network's logical topology is not necessarily the same as its physical topology. For example, the original twisted pair Ethernet using repeater hubs was a logical bus topology carried on a physical star topology. Token Ring is a logical ring topology, but is wired as a physical star from the media access unit. Physically, AFDX can be a cascaded star topology of multiple dual r |
https://en.wikipedia.org/wiki/Online%20and%20offline | In computer technology and telecommunications, online indicates a state of connectivity and offline indicates a disconnected state. In modern terminology, this usually refers to an Internet connection, but (especially when expressed "on line" or "on the line") could refer to any piece of equipment or functional unit that is connected to a larger system. Being online means that the equipment or subsystem is connected, or that it is ready for use.
"Online" has come to describe activities performed on and data available on the Internet, for example: "online identity", "online predator", "online gambling", "online game", "online shopping", "online banking", and "online learning". Similar meaning is also given by the prefixes "cyber" and "e", as in the words "cyberspace", "cybercrime", "email", and "ecommerce". In contrast, "offline" can refer to either computing activities performed while disconnected from the Internet, or alternatives to Internet activities (such as shopping in brick-and-mortar stores). The term "offline" is sometimes used interchangeably with the acronym "IRL", meaning "in real life".
History
During the 19th century, the term on line was commonly used in both the railroad and telegraph industries. For railroads, a signal box would send messages down the line (track), via a telegraph line (cable), indicating the track's status: Train on line or Line clear. Telegraph linemen would refer to sending current through a line as direct on line or battery on line; or they may refer to a problem with the circuit as being on line, as opposed to the power source or end-point equipment.
Since at least 1950, in computing, the terms on-line and off-line have been used to refer to whether machines, including computers and peripheral devices, are connected or not. Here is an excerpt from the 1950 book High-Speed Computing Devices:
The use of automatic computing equipment for large-scale reduction of data will be strikingly successful only if means are provided for the automatic transcription of these data to a form suitable for automatic entry into the machine. For some applications, of which the most prominent are those in which the reduced data are used to control the process being measured, the input must be developed for on-line operation. In on-line operation the input is communicated directly and without delay to the data-reduction device. For other applications, off-line operation, involving automatic transcription of data in a form suitable for later introduction to the machine, may be tolerated. These requirements may be compared with teleprinter operating requirements. For example, some teletype machines operate on line. Their operators are in instantaneous communication. Other teletype machines are operated off line, through the intervention of punched paper tape. The message is preserved by means of holes punched in the tape and is transmitted later by feeding the tape to another machine.
Examples
Offline e-mail
One example of a |
https://en.wikipedia.org/wiki/Open%20network%20architecture | In telecommunications, and in the context of Federal Communications Commission's (FCC) Computer Inquiry III, Open network architecture (ONA) is the overall design of a communication carrier's basic network facilities and services to permit all users of the basic network to interconnect to specific basic network functions and interfaces on an unbundled, equal-access basis.
The ONA concept consists of three integral components:
Basic serving arrangements (BSAs)
Basic service elements (BSEs)
Complementary network services
See also
Open Garden
References
Network architecture |
https://en.wikipedia.org/wiki/Overflow | Overflow may refer to:
Computing and telecommunications
Integer overflow, a condition that occurs when an integer calculation produces a result that is greater than what a given register can store or represent
Buffer overflow, a situation whereby the incoming data size exceeds that which can be accommodated by a buffer.
Heap overflow, a type of buffer overflow that occurs in the heap data area
Overflow (software), a NASA-developed computational fluid dynamics program using overset (Chimera) grids
Overflow condition, a situation that occurs when more information is being transmitted than the hardware can handle
Overspill, a proof technique in non-standard analysis, is less commonly called overflow
Stack overflow in which a computer program makes too many subroutine calls and its call stack runs out of space
Other
Overflow (company), a Japanese video game developer
Overflow (magazine), a free quarterly in Brooklyn, New York, US
River overflow, a relatively long and significant increase in the water content of a river, causing a rise in its level
Sanitary sewer overflow, a condition whereby untreated sewage is discharged into the environment, escaping wastewater treatment |
https://en.wikipedia.org/wiki/Packet-switching%20node | A packet-switching node is a node in a packet-switching network that contains data switches and equipment for controlling, formatting, transmitting, routing, and receiving data packets.
Note: In the Defense Data Network (DDN), a packet-switching node is usually configured to support up to thirty-two X.25 56 kbit/s host connections, as many as six 56 kbit/s interswitch trunk (IST) lines to other packet-switching nodes, and at least one Terminal Access Controller (TAC).
Packets (information technology) |
https://en.wikipedia.org/wiki/Paired%20disparity%20code | In telecommunication, a paired disparity code is a line code in which at least one of the data characters is represented by two codewords of opposite disparity that are used in sequence so as to minimize the total disparity of a longer sequence of digits.
A particular codeword of any line code can either have no disparity (the average weight of the codeword is zero), negative disparity (the average weight of the codeword is negative), or positive disparity (the average weight of the codeword is positive).
In a paired disparity code, every codeword that averages to a negative level (negative disparity) is paired with some other codeword that averages to a positive level (positive disparity).
In a system that uses a paired disparity code, the transmitter must keep track of the running DC buildup (the running disparity) and always pick the codeword that pushes the DC level back towards zero. The receiver is designed so that either codeword of the pair decodes to the same data bits.
Most line codes use either a paired disparity code or a constant-weight code.
The simplest paired disparity code is alternate mark inversion signal. Other paired disparity codes include 8b/10b, 8B12B, the modified AMI codes, coded mark inversion, and 4B3T.
The digits may be represented by disparate physical quantities, such as two different frequencies, phases, voltage levels, magnetic polarities, or electrical polarities, each one of the pair representing a 0 or a 1.
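The simplest case, alternate mark inversion, can be sketched directly: each binary 1 (mark) takes the opposite polarity from the previous mark, which keeps the running disparity near zero (illustrative code):

```python
def ami_encode(bits):
    """Encode bits as AMI levels: 0 -> 0, successive 1s alternate +1/-1."""
    levels, polarity = [], 1
    for bit in bits:
        if bit:
            levels.append(polarity)
            polarity = -polarity    # flip for the next mark
        else:
            levels.append(0)
    return levels
```

For [1, 0, 1, 1] this yields [1, 0, -1, 1]; the running sum of the output never strays more than one mark's weight from zero.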
References
Line codes |
https://en.wikipedia.org/wiki/Password%20length%20parameter | In telecommunication, a password length parameter is a basic parameter the value of which affects password strength against brute force attack and so is a contributor to computer security.
One use of the password length parameters is in the expression P = LG/S, where P is the probability that a password can be guessed in its lifetime, L is the maximum lifetime a password can be used to log into a system, G is the number of guesses per unit of time, and S is the number of unique algorithm-generated passwords (the 'password space').
The degree of password security is determined by the probability that a password can be guessed in its lifetime.
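The expression above, commonly written P = LG/S, is straightforward to evaluate; the numbers below are hypothetical:

```python
def guess_probability(lifetime, guess_rate, space):
    """P = L * G / S, capped at 1 (a certain compromise)."""
    return min(1.0, lifetime * guess_rate / space)

# Hypothetical: 90-day lifetime, 1000 guesses per second,
# passwords of 8 lowercase letters
space = 26 ** 8                 # ~2.09e11 distinct passwords
seconds = 90 * 24 * 3600        # lifetime in the guess rate's time unit
p = guess_probability(seconds, 1000, space)   # roughly 0.037
```

Doubling the password length parameter (here, going to 16 letters) multiplies S enormously and drives P toward zero, which is why length dominates brute-force resistance.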
See also
Key stretching
Password cracking
References
Computer network security
Password authentication |
https://en.wikipedia.org/wiki/Phased%20array | In antenna theory, a phased array usually means an electronically scanned array, a computer-controlled array of antennas which creates a beam of radio waves that can be electronically steered to point in different directions without moving the antennas.
The general theory of an electromagnetic phased array also finds applications in ultrasonic and medical imaging (phased array ultrasonics) and in optics (optical phased arrays).
In a simple array antenna, the radio frequency current from the transmitter is fed to multiple individual antenna elements with the proper phase relationship so that the radio waves from the separate elements combine (superpose) to form beams, to increase power radiated in desired directions and suppress radiation in undesired directions. In a phased array, the power from the transmitter is fed to the radiating elements through devices called phase shifters, controlled by a computer system, which can alter the phase or signal delay electronically, thus steering the beam of radio waves to a different direction. Since the size of an antenna array must extend many wavelengths to achieve the high gain needed for narrow beamwidth, phased arrays are mainly practical at the high frequency end of the radio spectrum, in the UHF and microwave bands, in which the operating wavelengths are conveniently small.
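For a uniform linear array, the phase shifters apply a per-element phase of 2πnd·sin(θ)/λ to steer the beam to angle θ from broadside; a small sketch (the helper is illustrative, with element spacing given in wavelengths):

```python
import math

def steering_phases(n_elements, spacing_wl, theta_deg):
    """Per-element phase shift (radians) to steer the main beam to theta.

    spacing_wl is the element spacing in wavelengths (d / lambda);
    theta is measured from broadside.
    """
    theta = math.radians(theta_deg)
    return [2 * math.pi * n * spacing_wl * math.sin(theta)
            for n in range(n_elements)]
```

At broadside (θ = 0) every shift is zero; steering off-broadside produces a linear phase ramp across the elements, which is exactly what the computer-controlled phase shifters impose.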
Phased arrays were originally conceived for use in military radar systems, to steer a beam of radio waves quickly across the sky to detect planes and missiles. These systems are now widely used and have spread to civilian applications such as 5G MIMO for cell phones. The phased array principle is also used in acoustics, and phased arrays of acoustic transducers are used in medical ultrasound imaging scanners (phased array ultrasonics), oil and gas prospecting (reflection seismology), and military sonar systems.
The term "phased array" is also used to a lesser extent for unsteered array antennas in which the phase of the feed power and thus the radiation pattern of the antenna array is fixed. For example, AM broadcast radio antennas consisting of multiple mast radiators fed so as to create a specific radiation pattern are also called "phased arrays".
Types
Phased arrays take multiple forms. However, the four most common are the passive electronically scanned array (PESA), active electronically scanned array (AESA), hybrid beam forming phased array, and digital beam forming (DBF) array.
A passive phased array or passive electronically scanned array (PESA) is a phased array in which the antenna elements are connected to a single transmitter and/or receiver, as shown in the first animation at top. PESAs are the most common type of phased array. Generally speaking, a PESA uses one receiver/exciter for the entire array.
An active phased array or active electronically scanned array (AESA) is a phased array in which each antenna element has an analog transmitter/receiver (T/R) module which creates the phase shifting req |
https://en.wikipedia.org/wiki/Phase-shift%20keying | Phase-shift keying (PSK) is a digital modulation process which conveys data by changing (modulating) the phase of a constant frequency carrier wave. The modulation is accomplished by varying the sine and cosine inputs at a precise time. It is widely used for wireless LANs, RFID and Bluetooth communication.
Any digital modulation scheme uses a finite number of distinct signals to represent digital data. PSK uses a finite number of phases, each assigned a unique pattern of binary digits. Usually, each phase encodes an equal number of bits. Each pattern of bits forms the symbol that is represented by the particular phase. The demodulator, which is designed specifically for the symbol-set used by the modulator, determines the phase of the received signal and maps it back to the symbol it represents, thus recovering the original data. This requires the receiver to be able to compare the phase of the received signal to a reference signal; such a system is termed coherent (and referred to as CPSK).
CPSK requires a complicated demodulator, because it must extract the reference wave from the received signal and keep track of it, to compare each sample to. Alternatively, the phase shift of each symbol sent can be measured with respect to the phase of the previous symbol sent. Because the symbols are encoded in the difference in phase between successive samples, this is called differential phase-shift keying (DPSK). DPSK can be significantly simpler to implement than ordinary PSK, as it is a 'non-coherent' scheme, i.e. there is no need for the demodulator to keep track of a reference wave. A trade-off is that it has more demodulation errors.
Introduction
There are three major classes of digital modulation techniques used for transmission of digitally represented data:
Amplitude-shift keying (ASK)
Frequency-shift keying (FSK)
Phase-shift keying (PSK)
All convey data by changing some aspect of a base signal, the carrier wave (usually a sinusoid), in response to a data signal. In the case of PSK, the phase is changed to represent the data signal. There are two fundamental ways of utilizing the phase of a signal in this way:
By viewing the phase itself as conveying the information, in which case the demodulator must have a reference signal to compare the received signal's phase against; or
By viewing the change in the phase as conveying the information – differential schemes, some of which do not need a reference carrier (to a certain extent).
A convenient method to represent PSK schemes is on a constellation diagram. This shows the points in the complex plane where, in this context, the real and imaginary axes are termed the in-phase and quadrature axes respectively due to their 90° separation. Such a representation on perpendicular axes lends itself to straightforward implementation. The amplitude of each point along the in-phase axis is used to modulate a cosine (or sine) wave and the amplitude along the quadrature axis to modulate a sine (or cosin |
https://en.wikipedia.org/wiki/Precision | Precision, precise or precisely may refer to:
Science, technology, and mathematics
Mathematics and computing (general)
Accuracy and precision, measurement deviation from true value and its scatter
Significant figures, the number of digits that carry real information about a measurement
Precision and recall, in information retrieval: the proportion of relevant documents returned
Precision (computer science), a measure of the detail in which a quantity is expressed
Precision (statistics), a model parameter or a quantification of precision
Computing products
Dell Precision, a line of Dell workstations
Precision Architecture, former name for PA-RISC, a reduced instruction set architecture developed by Hewlett-Packard
Ubuntu 12.04 "Precise Pangolin", Canonical's sixteenth release of Ubuntu
Companies
Precision Air, an airline based in Tanzania
Precision Castparts Corp., a casting company based in Portland, Oregon, in the United States
Precision Drilling, the largest drilling-rig contractor in Canada
Precision Monolithics, an American company that produced linear semiconductors
Precision Talent, a voice-over talent-management company
F. E. Baker Ltd, maker of Precision motorcycle and cycle-car engines pre-WW1
Precisely (company), formerly Syncsort
Other
Precision Club, a bidding system in the game of contract bridge
Precision (march), the official marching music of the Royal Military College of Canada
Precision 15, a self-bailing dinghy
Fender Precision Bass, by Fender Musical Instruments Corporation
Precisely (sketch), a dramatic sketch by the English playwright Harold Pinter
See also
Precisionism, an artistic movement also known as Cubist Realism
Precisionist (1981–2006), an American Hall of Fame Thoroughbred racehorse |
https://en.wikipedia.org/wiki/Primary%20Rate%20Interface | The Primary Rate Interface (PRI) is a telecommunications interface standard used on an Integrated Services Digital Network (ISDN) for carrying multiple DS0 voice and data transmissions between the network and a user.
PRI is the standard for providing telecommunication services to enterprises and offices. It is based on T-carrier (T1) transmission in the US, Canada, and Japan, while the E-carrier (E1) is common in Europe and Australia. The T1 line consists of 23 bearer (B) channels and one data (D) channel for control purposes; with 24 timeslots of 64 kbit/s plus framing overhead, the total line rate is 1.544 Mbit/s. The E1 carrier provides 30 B-channels and one D-channel for a line rate of 2.048 Mbit/s. The first timeslot on the E1 is used for synchronization purposes and is not considered to be a B- or D-channel. The D-channel typically uses timeslot 16 on an E1, while it is timeslot 24 for a T1. Fewer active bearer channels, sometimes called user channels, may be used in fractional T1 or E1 services.
ISDN service types
The Integrated Services Digital Network (ISDN) prescribes two levels of service:
Basic Rate Interface (BRI): one 16-kbit/s D channel with two 64-kbit/s B channels, intended for small enterprises and residential service.
Primary Rate Interface (PRI): one 64-kbit/s D channel with 23 (1.544 Mbit/s T1, a.k.a. "23B + D") or 30 (2.048 Mbit/s E1, a.k.a. "30B + D") 64-kbit/s B channels, intended for large organizations.
Each B-channel carries data, voice, and other services. The D-channel carries control and signaling information. Larger connections are possible using PRI pairing. A dual T1-PRI could have 24 + 23 = 47 B-channels and 1 D-channel (often called "47B + D"), but more commonly has 46 B-channels and 2 D-channels, thus providing a backup signaling channel. The concept applies to E1s as well, and both can include more than two PRIs. When configuring multiple T1s as ISDN PRIs, it is possible to use NFAS (non-facility associated signaling) to let one or two D-channels support additional B-channels on separate T1 circuits.
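As an illustrative sketch (not from the standard itself), the channel arithmetic above can be checked with a few lines of Python; the `pri_bandwidth` helper and its parameters are assumptions made for this example:

```python
# Channel counts and rates for ISDN Primary Rate Interface variants.
# Each B- or D-channel is a 64 kbit/s DS0 timeslot.
DS0_KBITS = 64

def pri_bandwidth(b_channels: int, d_channels: int, framing_kbits: int = 0) -> int:
    """Total line rate in kbit/s for a PRI with the given channel mix."""
    return (b_channels + d_channels) * DS0_KBITS + framing_kbits

# T1 ("23B + D"): 24 timeslots plus 8 kbit/s of framing overhead -> 1544 kbit/s
t1 = pri_bandwidth(23, 1, framing_kbits=8)

# E1 ("30B + D"): 30 B, 1 D, plus one synchronization timeslot -> 2048 kbit/s
e1 = pri_bandwidth(30, 1, framing_kbits=DS0_KBITS)  # timeslot 0 carries sync

print(t1, e1)  # 1544 2048
```

The same helper reproduces the "47B + D" dual-T1 figure: `pri_bandwidth(47, 1, framing_kbits=16)` accounts for two framed T1 circuits.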
Application
The Primary Rate Interface channels are typically used by medium to large enterprises with digital private branch exchange (PBX) telephone systems to provide digital access to the public switched telephone network (PSTN). The B-channels may be used flexibly and reassigned when necessary to meet special needs such as video conferences.
PRI channels and direct inward dialing are also common as a means of delivering inbound calls to voice over IP gateways from the PSTN.
See also
Non-Facility Associated Signalling
H channel
I.431
Caller ID spoofing
References
Integrated Services Digital Network |
https://en.wikipedia.org/wiki/Primary%20station | In a data communication network, the primary station is the station responsible for unbalanced control of a data link.
The primary station generates commands and interprets responses, and is responsible for initialization of data and control information interchange, organization and control of data flow, retransmission control, and all recovery functions at the link level.
References
Data transmission |
https://en.wikipedia.org/wiki/Proceed-to-select | In telecommunications, proceed-to-select is a signal or event in the call-access phase of a data call that confirms the reception of a call-request signal and advises the calling data terminal equipment to proceed with the transmission of the selection signals.
A familiar example of a proceed-to-select signal is the dial tone in a telephone system.
References
Telephony signals |
https://en.wikipedia.org/wiki/Protocol%20data%20unit | In telecommunications, a protocol data unit (PDU) is a single unit of information transmitted among peer entities of a computer network. It is composed of protocol-specific control information and user data. In the layered architectures of communication protocol stacks, each layer implements protocols tailored to the specific type or mode of data exchange.
For example, the Transmission Control Protocol (TCP) implements a connection-oriented transfer mode, and the PDU of this protocol is called a segment, while the User Datagram Protocol (UDP) uses datagrams as protocol data units for connectionless communication. A layer lower in the Internet protocol suite, at the Internet layer, the PDU is called a packet, irrespective of its payload type.
Packet-switched data networks
In the context of packet switching data networks, a protocol data unit (PDU) is best understood in relation to a service data unit (SDU).
The features or services of the network are implemented in distinct layers. The physical layer sends ones and zeros across a wire or fiber. The data link layer then organizes these ones and zeros into chunks of data and gets them safely to the right place on the wire. The network layer transmits the organized data over multiple connected networks, and the transport layer delivers the data to the right software application at the destination.
Between the layers (and between the application and the top-most layer), the layers pass service data units (SDUs) across interfaces. The higher layer understands the structure of the data in the SDU, but the lower layer at the interface does not; moreover, the lower layer treats the SDU as the payload, undertaking to get it to the same interface at the destination. In order to do this, the protocol (lower) layer will add to the SDU certain data it needs to perform its function; this is called encapsulation. For example, it might add a port number to identify the application, a network address to help with routing, a code to identify the type of data in the packet, and error-checking information. All this additional information, plus the original service data unit from the higher layer, constitutes the protocol data unit at this layer.
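The encapsulation relationship can be sketched in a few lines of Python. This is a hypothetical illustration, not any real protocol stack: the field names, sizes, and helper functions are invented for the example.

```python
# Hypothetical sketch of layered encapsulation: each layer treats the data
# handed down from above as an opaque SDU and prepends its own header,
# producing that layer's PDU.
import struct

def transport_pdu(app_data: bytes, src_port: int, dst_port: int) -> bytes:
    """Segment: transport header (two 16-bit ports) + application data."""
    header = struct.pack("!HH", src_port, dst_port)
    return header + app_data

def network_pdu(segment: bytes, src_addr: int, dst_addr: int) -> bytes:
    """Packet: network header (two 32-bit addresses) + the whole segment
    as its SDU/payload."""
    header = struct.pack("!II", src_addr, dst_addr)
    return header + segment

segment = transport_pdu(b"hello", src_port=40000, dst_port=80)
packet = network_pdu(segment, src_addr=0x0A000001, dst_addr=0x0A000002)

# The network layer never inspects the transport header: it is just payload.
assert packet[8:] == segment
```

The assertion at the end makes the key point of the paragraph above concrete: the lower layer's PDU carries the upper layer's PDU verbatim as its SDU.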
The SDU and metadata added by the lower layer can be larger than the maximum size of that layer's PDU (known as the maximum transmission unit; MTU). When this is the case, the PDU must be split into multiple payloads of a size suitable for transmission or processing by the lower layer; a process known as IP fragmentation.
The significance of this is that the PDU is the structured information that is passed to a matching protocol layer further along on the data's journey that allows the layer to deliver its intended function or service. The matching layer, or "peer", decodes the data to extract the original service data unit, decide if it is error-free and where to send it next, etc. Unless we have already arrived at the lowest (physical) layer, the PDU is passed |
https://en.wikipedia.org/wiki/Provisioning%20%28technology%29 | In telecommunication, provisioning involves the process of preparing and equipping a network to allow it to provide new services to its users. In National Security/Emergency Preparedness telecommunications services, "provisioning" equates to "initiation" and includes altering the state of an existing priority service or capability.
The concept of network provisioning or service mediation, mostly used in the telecommunication industry, refers to the provisioning of the customer's services to the network elements, which are various equipment connected in that network communication system. Generally in telephony provisioning this is accomplished with network management database table mappings. It requires the existence of networking equipment and depends on network planning and design.
In a modern signal infrastructure employing information technology (IT) at all levels, there is no possible distinction between telecommunications services and "higher level" infrastructure. Accordingly, provisioning configures any required systems, provides users with access to data and technology resources, and refers to all enterprise-level information-resource management involved.
Organizationally, a CIO typically manages provisioning, necessarily involving human resources and IT departments cooperating to:
Give users access to data repositories or grant authorization to systems, network applications and databases based on a unique user identity.
Allocate hardware resources for their use, such as computers, mobile phones and pagers.
At its core, the provisioning process monitors access rights and privileges to ensure the security of an enterprise's resources and user privacy. As a secondary responsibility, it ensures compliance and minimizes the vulnerability of systems to penetration and abuse. As a tertiary responsibility, it tries to reduce the amount of custom configuration using boot image control and other methods that radically reduce the number of different configurations involved.
Discussion of provisioning often appears in the context of virtualization, orchestration, utility computing, cloud computing, and open-configuration concepts and projects. For instance, the OASIS Provisioning Services Technical Committee (PSTC) defines an XML-based framework for exchanging user, resource, and service-provisioning information - SPML (Service Provisioning Markup Language) for "managing the provisioning and allocation of identity information and system resources within and between organizations".
Once provisioning has taken place, the process of SysOpping ensures the maintenance of services to the expected standards. Provisioning thus refers only to the setup or startup part of the service operation, and SysOpping to the ongoing support.
Network provisioning
One type of provisioning.
The services which are assigned to the customer in the customer relationship management (CRM) have to be provisioned on the network element which is enabling the se |
https://en.wikipedia.org/wiki/Psophometric%20voltage | Psophometric voltage is a circuit noise voltage measured with a psophometer that includes a CCIF-1951 weighting network.
"Psophometric voltage" should not be confused with "psophometric emf," i.e., the emf in a generator or line with 600 Ω internal resistance. For practical purposes, the psophometric emf is twice the corresponding psophometric voltage.
Psophometric voltage readings, V, in millivolts, are commonly converted to dBm(psoph) by dBm(psoph) = 20 log10V – 57.78.
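The conversion formula above can be expressed directly in Python. The function name is an assumption made for this sketch; the constant 57.78 follows from the 600 Ω, 0 dBm reference level of 775 mV (20 log₁₀ 775 ≈ 57.78):

```python
import math

def dbm_psoph(v_millivolts: float) -> float:
    """Convert a psophometric voltage reading (in mV) to dBm(psoph)
    via dBm(psoph) = 20*log10(V) - 57.78."""
    return 20 * math.log10(v_millivolts) - 57.78

# 775 mV across 600 ohms is 0 dBm by definition of the reference level.
print(round(dbm_psoph(775), 1))  # 0.0
```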
References
Electrical parameters
Noise (electronics)
Telecommunications engineering |
https://en.wikipedia.org/wiki/Public%20land%20mobile%20network | In telecommunication, a public land mobile network (PLMN) is a combination of wireless communication services offered by a specific operator in a specific country. A PLMN typically consists of several cellular technologies like GSM/2G, UMTS/3G, LTE/4G, NR/5G, offered by a single operator within a given country, often referred to as a cellular network.
PLMN code
A PLMN is identified by a globally unique PLMN code, which consists of an MCC (Mobile Country Code) and an MNC (Mobile Network Code). It is thus a five- or six-digit number identifying a country and a mobile network operator in that country, usually represented in the form 001-01 or 001-001.
A PLMN is part of a:
Location Area Identity (LAI) (PLMN and Location Area Code)
Cell Global Identity (CGI) (LAI and Cell Identifier)
IMSI (see PLMN code and IMSI)
Leading zeros in PLMN codes
Note that an MNC can take a two-digit form or a three-digit form with leading zeros, so that 01 and 001 are two distinct MNCs. Hence the PLMNs 001-02 and 001-002 would be in the same country but belong to two completely distinct operators (02 and 002). From PLMN assignments it is apparent that such dualities of two-digit and three-digit MNCs with the same numeric value are avoided (see the list of mobile country codes and mobile network codes). An actual example of a three-digit MNC with leading zeros alongside two-digit MNCs is found under the Bermuda MCC (350): 350-007 next to 350-00 and 350-01.
PLMN code and IMSI
The IMSI, which identifies a SIM or USIM for one subscriber, typically starts with the PLMN code. For example, an IMSI belonging to the PLMN 262-33 would look like 262330000000001. Mobile phones use this to detect roaming, so that a mobile phone subscribed on a network with a PLMN code that mismatches the start of the USIM's IMSI will typically display an "R" on the icon that indicates connection strength.
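The prefix matching described here can be sketched in Python. This is illustrative only (the function names are invented, and real handsets use operator-provisioned tables); note that the MNC length must be known out of band, since it cannot be inferred from the IMSI digits alone:

```python
# Illustrative sketch: detecting roaming by checking whether an IMSI
# starts with the serving network's PLMN code (MCC + MNC).
def plmn_of_imsi(imsi: str, mnc_digits: int) -> tuple[str, str]:
    """Split an IMSI into (MCC, MNC). The MNC length (2 or 3 digits)
    must be supplied; it is not encoded in the IMSI itself."""
    mcc = imsi[:3]
    mnc = imsi[3:3 + mnc_digits]
    return mcc, mnc

def is_roaming(imsi: str, serving_mcc: str, serving_mnc: str) -> bool:
    """True when the serving PLMN does not match the IMSI's home PLMN."""
    mcc, mnc = plmn_of_imsi(imsi, len(serving_mnc))
    return (mcc, mnc) != (serving_mcc, serving_mnc)

imsi = "262330000000001"              # subscribed to PLMN 262-33
print(is_roaming(imsi, "262", "33"))  # False: home network
print(is_roaming(imsi, "262", "01"))  # True: same country, other operator
```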
PLMN services
A PLMN typically offers the following services to a mobile subscriber:
Emergency calls to local Fire/Ambulance/Police stations.
Voice calls to/from any other PLMN ("cellular network") or PSTN ("landline"/VoIP).
Short message service (SMS) services to/from any other PLMN or SIP service (the original form of texting on a mobile phone, now often replaced by Messaging apps).
Multimedia Messaging Service (MMS) services to/from any other PLMN or SIP service.
Unstructured Supplementary Service Data (USSD) for operator specific interactions (e.g. dialing "*#100#" to indicate the current balance).
Internet data connectivity for arbitrary services, e.g. via GPRS in GSM, IuPS in UMTS, or LTE.
The availability, quality and bandwidth of these services strongly depends on the particular technology used to implement a PLMN.
References
Mobile technology |
https://en.wikipedia.org/wiki/Push-to-type%20operation | In telegraph or data transmission systems, push-to-type operation is a method of communication in which the operator at a station must keep a switch operated in order to send messages.
Push-to-type operation is used in radio systems where the same frequency is employed for transmission and reception.
Push-to-type operation is a derivative form of transmission and may be used in simplex, half-duplex, or duplex operation. A synonym is press-to-type operation.
This is similar to Push-to-talk operation for radio phone communications.
References
Telegraphy |
https://en.wikipedia.org/wiki/Queuing%20delay | In telecommunication and computer engineering, the queuing delay or queueing delay is the time a job waits in a queue until it can be executed. It is a key component of network delay. In a switched network, queuing delay is the time between the completion of signaling by the call originator and the arrival of a ringing signal at the call receiver. Queuing delay may be caused by delays at the originating switch, intermediate switches, or the call receiver servicing switch. In a data network, queuing delay is the sum of the delays between the request for service and the establishment of a circuit to the called data terminal equipment (DTE). In a packet-switched network, queuing delay is the sum of the delays encountered by a packet between the time of insertion into the network and the time of delivery to the address.
Router processing
This term is most often used in reference to routers. When packets arrive at a router, they have to be processed and transmitted. A router can only process one packet at a time. If packets arrive faster than the router can process them (such as in a burst transmission) the router puts them into the queue (also called the buffer) until it can get around to transmitting them. Delay can also vary from packet to packet so averages and statistics are usually generated when measuring and evaluating queuing delay.
As a queue begins to fill up due to traffic arriving faster than it can be processed, the amount of delay a packet experiences going through the queue increases. The speed at which the contents of a queue can be processed is a function of the transmission rate of the facility. This leads to the classic delay curve. The average delay any given packet is likely to experience is given by the formula 1/(μ-λ) where μ is the number of packets per second the facility can sustain and λ is the average rate at which packets are arriving to be serviced. This formula can be used when no packets are dropped from the queue.
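The 1/(μ−λ) formula can be turned into a short sketch that makes the "classic delay curve" visible. This assumes the simple single-server model described above (no dropped packets, arrival rate below service rate); the function name is invented for the example:

```python
# Average delay through a single queue: 1/(mu - lambda), where mu is the
# service rate and lambda the arrival rate, both in packets per second.
# Valid only while no packets are dropped and lambda < mu.
def average_delay(mu: float, lam: float) -> float:
    """Average time (seconds) a packet spends in the system."""
    if lam >= mu:
        raise ValueError("arrival rate must be below service rate")
    return 1.0 / (mu - lam)

# As arrivals approach the service rate, delay grows without bound:
for lam in (500, 900, 990, 999):
    print(f"{lam}/1000 pkt/s -> {average_delay(1000, lam) * 1000:.1f} ms")
```

Doubling utilization from 50% to near 100% does not double the delay; it multiplies it many times over, which is the non-linear behavior the delay curve depicts.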
The maximum queuing delay is proportional to buffer size. The longer the line of packets waiting to be transmitted, the longer the average waiting time is. The router queue of packets waiting to be sent also introduces a potential cause of packet loss. Since the router has a finite amount of buffer memory to hold the queue, a router which receives packets at too high a rate may experience a full queue. In this case, the router has no other option than to simply discard excess packets.
When the transmission protocol uses the dropped-packets symptom of filled buffers to regulate its transmit rate, as the Internet's TCP does, bandwidth is fairly shared at near theoretical capacity with minimal network congestion delays. Absent this feedback mechanism the delays become both unpredictable and rise sharply, a symptom also seen as freeways approach capacity; metered onramps are the most effective solution there, just as TCP's self-regulation is the most effective solution when the traffic is packets instea |
https://en.wikipedia.org/wiki/Random%20number | In mathematics and statistics, a random number is either Pseudo-random or a number generated for, or part of, a set exhibiting statistical randomness.
Algorithms and implementations
An algorithm developed in 1964, popularly known as the Knuth shuffle, is a computer-friendly form of the Fisher–Yates shuffle (based on work Fisher and Yates published in 1938). A real-world use for this is sampling water quality in a reservoir.
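A minimal sketch of the shuffle in Python (illustrative; the standard library's `random.shuffle` implements the same algorithm):

```python
import random

def fisher_yates_shuffle(items: list) -> None:
    """In-place Fisher-Yates (Knuth) shuffle: every permutation of the
    list is equally likely, assuming an unbiased random source."""
    for i in range(len(items) - 1, 0, -1):
        j = random.randint(0, i)  # pick from the not-yet-placed prefix
        items[i], items[j] = items[j], items[i]

deck = list(range(10))
fisher_yates_shuffle(deck)
print(deck)  # a uniformly random permutation of 0..9
```

The loop fixes one element per iteration, so the shuffle runs in linear time and requires exactly one random draw per element placed.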
In 1999, a new feature was added to the Pentium III: a hardware-based random number generator. It has been described as "several oscillators combine their outputs and that odd waveform is sampled asynchronously." These numbers, however, were only 32 bit, at a time when export controls were on 56 bits and higher, so they were not state of the art.
Common understanding
In common understanding, "1 2 3 4 5" is not as random as "3 5 2 1 4" and certainly not as random as "47 88 1 32 41", but "we can't say authoritatively that the first sequence is not random ... it could have been generated by chance."
When a police officer claims to have done a "random .. door-to-door" search, there is a certain expectation that members of a jury will have.
Real world consequences
Flaws in randomness have real-world consequences.
Researchers showed that a generator achieving only 99.8% randomness negatively affected an estimated 27,000 customers of a large service, and that the problem was not limited to that situation.
See also
Algorithmically random sequence
Quasi-random sequence
Random number generation
Random sequence
Random variable
Random variate
Random real
References
Permutations |
https://en.wikipedia.org/wiki/Reference%20clock | A reference clock may refer to the following:
A master clock used as a timekeeping standard to regulate or compare the accuracy of other clocks
In electronics and computing, the clock signal used to synchronise and schedule operations
https://en.wikipedia.org/wiki/Reliability | Reliability, reliable, or unreliable may refer to:
Science, technology, and mathematics
Computing
Data reliability (disambiguation), a property of some disk arrays in computer storage
Reliability (computer networking), a category used to describe protocols
Reliability (semiconductor), outline of semiconductor device reliability drivers
Other uses in science, technology, and mathematics
Reliability (statistics), the overall consistency of a measure
Reliability engineering, concerned with the ability of a system or component to perform its required functions under stated conditions for a specified time
High reliability is informally reported in "nines"
Human reliability in engineered systems
Reliability theory, as a theoretical concept, to explain biological aging and species longevity
Other uses
Reliabilism, in philosophy and epistemology
Unreliable narrator, whose credibility has been seriously compromised
See also
Reliant (disambiguation) |
https://en.wikipedia.org/wiki/Remote%20access | Remote access may refer to:
Connection to a data-processing system from a remote location, for example, through a remote access service or virtual private network
Remote desktop software, software allowing applications to run remotely on a server while displaying graphical output locally
Terminal emulation, when used to interface with a remote system. May use standard tools like:
telnet – software used to interact over a network with a computer system
ssh – secure shell: often used with remote applications
Activation of features of a business telephone system from outside the business's premises
RemoteAccess, a DOS-based bulletin-board system
Remote Database Access, a protocol standard for database access
Remote Access (film), a 2004 Russian drama film
See also
Remote (disambiguation) |
https://en.wikipedia.org/wiki/Response%20time%20%28technology%29 | In technology, response time is the time a system or functional unit takes to react to a given input.
Computing
Response time is the total amount of time it takes to respond to a request for service. That service can be anything from a memory fetch, to a disk IO, to a complex database query, or loading a full web page. Ignoring transmission time for a moment, the response time is the sum of the service time and wait time. The service time is the time it takes to do the work you requested. For a given request the service time varies little as the workload increases – to do X amount of work it always takes X amount of time. The wait time is how long the request had to wait in a queue before being serviced and it varies from zero, when no waiting is required, to a large multiple of the service time, as many requests are already in the queue and have to be serviced first.
With basic queueing theory math you can calculate how the average wait time increases as the device providing the service goes from 0-100% busy. As the device becomes busier, the average wait time increases in a non-linear fashion. The busier the device is, the more dramatic the response time increases will seem as you approach 100% busy; all of that increase is caused by increases in wait time, which is the result of all the requests waiting in queue that have to run first.
Transmission time gets added to response time when your request and the resulting response has to travel over a network and it can be very significant. Transmission time can include propagation delays due to distance (the speed of light is finite), delays due to transmission errors, and data communication bandwidth limits (especially at the last mile) slowing the transmission speed of the request or the reply.
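A hedged sketch of the response-time decomposition above, using the simple single-server queueing result W = S·U/(1−U) for the wait time (S is the service time, U the utilization); real systems vary, and this only illustrates the non-linear growth near 100% busy. The function name is an assumption for this example:

```python
# Response time = service time + wait time, with wait time modeled by
# the simple M/M/1-style relation W = S * U / (1 - U).
def response_time(service_time: float, utilization: float) -> float:
    """Average response time (seconds) for a device at the given
    utilization. Requires 0 <= utilization < 1."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    wait_time = service_time * utilization / (1 - utilization)
    return service_time + wait_time

# A 10 ms service time degrades dramatically as the device fills up:
for u in (0.5, 0.9, 0.99):
    print(f"{u:.0%} busy -> {response_time(0.010, u) * 1000:.0f} ms")
```

At 50% busy the wait time merely equals the service time; at 99% busy the wait time dominates by two orders of magnitude, which is the steep rise the paragraph describes.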
Real-time systems
In real-time systems the response time of a task or thread is defined as the time elapsed between the dispatch (time when task is ready to execute) to the time when it finishes its job (one dispatch). Response time is different from WCET which is the maximum time the task would take if it were to execute without interference. It is also different from deadline which is the length of time during which the task's output would be valid in the context of the specific system. And it has a relation to the TTFB, which is the time between the dispatch and the time when the response starts.
Display technologies
Response time is the amount of time a pixel in a display takes to change. It is measured in milliseconds (ms). Lower numbers mean faster transitions and therefore fewer visible image artifacts. Display monitors with long response times would create display motion blur around moving objects, making them unacceptable for rapidly moving images. Response times are usually measured from grey-to-grey transitions, based on a VESA industry standard from the 10% to the 90% points in the pixel response curve.
In fast paced competitive games such as Counter-Strike, the response time of a disp |
https://en.wikipedia.org/wiki/Ring%20latency | In a ring network, such as Token Ring, ring latency is the time required for a signal to propagate once around the ring. Ring latency may be measured in seconds or in bits at the data transmission rate. Ring latency includes signal propagation delays in the ring medium, the drop cables, and the data stations connected to the ring network.
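The two units mentioned (seconds and bits) are related by the ring's data rate, as a small illustrative sketch shows; the numbers below are made up for the example:

```python
# Expressing ring latency either in seconds or in bits at the ring's
# data rate: a latency of t seconds corresponds to t * R bits "in
# flight" around the ring.
def latency_in_bits(latency_seconds: float, bit_rate_bps: float) -> float:
    return latency_seconds * bit_rate_bps

# e.g. 30 microseconds around a 16 Mbit/s Token Ring:
print(latency_in_bits(30e-6, 16e6))  # 480.0 bits
```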
References
Network protocols |
https://en.wikipedia.org/wiki/Scan | Scan, SCAN or Scanning may refer to:
Science and technology
Computing and electronics
3D scanning, of a real-world object or environment to collect three dimensional data
Counter-scanning, in physical micro and nanotopography measuring instruments like scanning probe microscope
Elevator algorithm or SCAN, a disk scheduling algorithm
Image scanning, an optical scan of images, printed text, handwriting or an object
Optical character recognition, optical recognition of printed text or printed sheet music
Port scanner, in computer networking
Prefix sum, an operation on lists that is also known as the scan operator
Raster scan, the rectangular pattern of image capture and reconstruction in television
Scan chain, a type of manufacturing test used with integrated circuits
Scan line, one line in a raster scanning pattern
Screen reading, on computers to quickly locate text elements and images
Shared Check Authorization Network (SCAN), a database of bad check writers and collection agency for bad checks
Space Communications and Navigation Program (SCaN), by NASA
Medical
Schedules for Clinical Assessment in Neuropsychiatry (SCAN), a set of psychiatric diagnostic tools by WHO
DMSA scan, a radionuclide scan used in kidney studies
Medical imaging, of the interior of a body
Medical ultrasound, the medical procedure for examining internal structures, especially when applied to developing foetuses in the womb
CT scan (computed tomography scan)
Magnetic resonance imaging, MRI scan
Switch access scanning, a technique that allows users with motor impairments to use a computer using switch access
Partner-assisted scanning, a technique that allows a person with severe disabilities to communicate
Businesses
Scan Furniture, a former US chain in Washington, D.C., US
SCAN Health Plan, not-for-profit health care company based in Long Beach, California, US
Scan AB or Scan Foods UK Ltd, the Swedish and UK subsidiaries of the Finnish HKScan
Seattle Community Access Network, a former TV channel in Seattle, Washington, US
Scan (company), a software company based in Provo, Utah, US
Publishing
Social Cognitive and Affective Neuroscience, a scientific journal
Scanning (journal), a scanning microscopy scientific journal published by Hindawi
Scan: Journal of Media Arts Culture, former online journal published by Macquarie University, Sydney, Australia
SCAN (newspaper), the student newspaper at Lancaster University, UK
Other uses
Scan reading, a method of speed reading
Scientific content analysis (SCAN), a disputed technique
"Scan" (Prison Break episode), an episode of the television series Prison Break
See also
Graham scan, an algorithm for finding the convex hull of a set of points in the plane
Scanner (disambiguation)
Skan (disambiguation)
Scansion, the analysis of writing and verse regarding rhythmic and especially metrical structure |
https://en.wikipedia.org/wiki/Security%20kernel | In telecommunication, the term security kernel has the following meanings:
In computer and communications security, the central part of a computer or communications system hardware, firmware, and software that implements the basic security procedures for controlling access to system resources.
A self-contained usually small collection of key security-related statements that (a) works as a part of an operating system to prevent unauthorized access to, or use of, the system and (b) contains criteria that must be met before specified programs can be accessed.
Hardware, firmware, and software elements of a trusted computing base that implement the reference monitor concept.
References
National Information Systems Security Glossary
Computing terminology |
https://en.wikipedia.org/wiki/Shannon%27s%20law | Shannon's law may refer to:
Shannon's source coding theorem, which establishes the theoretical limits to lossless data compression
Shannon–Hartley theorem, which establishes the theoretical maximum rate at which data can be reliably transmitted over a noisy channel
Shannon's law (Arizona), a law against the firing of gunshots into the air, established after 14-year-old Shannon Smith was killed by a stray bullet in 1999 |
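The Shannon–Hartley theorem mentioned above can be evaluated directly: capacity C = B·log2(1 + S/N), where B is the bandwidth in hertz and S/N is the linear signal-to-noise ratio. A minimal sketch (the voice-line figures are illustrative, not from the source):

```python
import math

def shannon_hartley_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Maximum reliable bit rate over a noisy channel (Shannon-Hartley)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A voice-grade line: roughly 3 kHz of bandwidth at 30 dB SNR
# (a linear ratio of 1000) caps out near 30 kbit/s.
capacity = shannon_hartley_capacity(3000, 10 ** (30 / 10))
print(round(capacity))  # about 29902 bit/s
```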
https://en.wikipedia.org/wiki/Simple%20Network%20Management%20Protocol | Simple Network Management Protocol (SNMP) is an Internet Standard protocol for collecting and organizing information about managed devices on IP networks and for modifying that information to change device behaviour. Devices that typically support SNMP include cable modems, routers, switches, servers, workstations, printers, and more.
SNMP is widely used in network management for network monitoring. SNMP exposes management data in the form of variables on the managed systems organized in a management information base (MIB), which describes the system status and configuration. These variables can then be remotely queried (and, in some circumstances, manipulated) by managing applications.
Three significant versions of SNMP have been developed and deployed. SNMPv1 is the original version of the protocol. More recent versions, SNMPv2c and SNMPv3, feature improvements in performance, flexibility and security.
SNMP is a component of the Internet Protocol Suite as defined by the Internet Engineering Task Force (IETF). It consists of a set of standards for network management, including an application layer protocol, a database schema, and a set of data objects.
Overview and basic concepts
In typical uses of SNMP, one or more administrative computers called managers have the task of monitoring or managing a group of hosts or devices on a computer network. Each managed system executes a software component called an agent which reports information via SNMP to the manager.
An SNMP-managed network consists of three key components:
Managed devices
Agent – software which runs on managed devices
Network management station (NMS) – software which runs on the manager
A managed device is a network node that implements an SNMP interface that allows unidirectional (read-only) or bidirectional (read and write) access to node-specific information. Managed devices exchange node-specific information with the NMSs. Sometimes called network elements, the managed devices can be any type of device, including, but not limited to, routers, access servers, switches, cable modems, bridges, hubs, IP telephones, IP video cameras, computer hosts, and printers.
An agent is a network-management software module that resides on a managed device. An agent has local knowledge of management information and translates that information to or from an SNMP-specific form.
A network management station executes applications that monitor and control managed devices. NMSs provide the bulk of the processing and memory resources required for network management. One or more NMSs may exist on any managed network.
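The manager–agent relationship above can be sketched as a toy model (this is not the SNMP wire protocol; the OID values and entries are invented for illustration, loosely in the style of MIB-2 sysDescr/sysName):

```python
# Toy agent: management data exposed as variables keyed by OID strings,
# which a manager can read (GET) and, where writable, modify (SET).
class ToyAgent:
    def __init__(self):
        # Hypothetical MIB entries; OIDs mimic MIB-2 style, values are made up.
        self.mib = {
            "1.3.6.1.2.1.1.1.0": {"value": "demo-router", "writable": False},  # sysDescr-like
            "1.3.6.1.2.1.1.5.0": {"value": "router1", "writable": True},       # sysName-like
        }

    def get(self, oid):
        return self.mib[oid]["value"]

    def set(self, oid, value):
        entry = self.mib[oid]
        if not entry["writable"]:
            raise PermissionError(f"{oid} is read-only")
        entry["value"] = value

agent = ToyAgent()
print(agent.get("1.3.6.1.2.1.1.5.0"))    # router1
agent.set("1.3.6.1.2.1.1.5.0", "core1")  # active management: remote modification
print(agent.get("1.3.6.1.2.1.1.5.0"))    # core1
```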
Management information base
SNMP agents expose management data on the managed systems as variables. The protocol also permits active management tasks, such as configuration changes, through remote modification of these variables. The variables accessible via SNMP are organized in hierarchies. SNMP itself does not define which variables a managed system should offer. Rather, SNMP uses an |
https://en.wikipedia.org/wiki/S%20interface | The S interface or S reference point, also known as S0, is a user–network interface reference point in an ISDN BRI environment, characterized by a four-wire circuit using a 144 kbit/s (2 bearer and 1 signaling channel; 2B+D) user rate.
The S interface is the connection between ISDN terminal equipment (TE) or terminal adapters (TAs) and an NT1 (network terminator, type 1.) Not all TE or TAs connect externally to an S interface, but instead integrate an NT1 so they can connect directly to a U interface (local loop from central office.)
Contrast to the T interface, which connects between an NT2 (PBX or other local switching device) and NT1. However, the S interface is electrically equivalent to the T interface, and the two are jointly referred to as the S/T interface.
The S interface operates at 4000 48-bit frames per second, i.e., 192 kbit/s, with a user portion of 36 bits per frame, i.e., 144 kbit/s.
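The frame arithmetic above can be checked directly:

```python
# S interface rates from the frame structure given in the article.
frames_per_second = 4000
bits_per_frame = 48
# User portion: 2 B channels at 16 bits/frame each plus the D channel
# at 4 bits/frame (64 + 64 + 16 kbit/s = 2B+D).
user_bits_per_frame = 16 + 16 + 4

line_rate = frames_per_second * bits_per_frame        # total line rate
user_rate = frames_per_second * user_bits_per_frame   # 2B+D user rate
print(line_rate, user_rate)  # 192000 144000
```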
See also
R interface
T interface
U interface
References
ITU-T recommendations
Integrated Services Digital Network |
https://en.wikipedia.org/wiki/Slave%20station | Slave station may refer to:
A collection and transfer location utilized in the slave trade
A station having a clock synchronized by a remote master station in a navigation system
In a data network a station that is selected and controlled by a master station |
https://en.wikipedia.org/wiki/Standard%20telegraph%20level | In telecommunication, standard telegraph level (STL) is the power per individual telegraph channel required to yield the standard composite data level.
For example, for a composite data level of -13 dBm at the 0 dBm transmission level point (0TLP), the STL for a 16-channel VFCT terminal would be approximately -25 dBm, computed from STL = -(13 + 10 log10 n), where n is the number of telegraph channels and the STL is in dBm.
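The formula reflects that splitting the composite power equally over n channels lowers each channel by 10 log10 n dB. A small sketch reproducing the article's 16-channel example:

```python
import math

def standard_telegraph_level(composite_level_dbm: float, n_channels: int) -> float:
    """Per-channel level (dBm) that sums, in power, to the composite data level."""
    # Equal power split over n channels: each is 10*log10(n) dB below the composite.
    return composite_level_dbm - 10 * math.log10(n_channels)

# -13 dBm composite over a 16-channel VFCT terminal:
print(round(standard_telegraph_level(-13, 16), 1))  # -25.0
```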
References
Data transmission
Telegraphy |
https://en.wikipedia.org/wiki/Start%20signal | In telecommunication, a start signal is a signal that prepares a device to receive data or to perform a function.
In asynchronous serial communication, start signals are used at the beginning of a character that prepares the receiving device for the reception of the code elements.
A start signal is limited to one signal element usually having the duration of a unit interval.
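The framing described above can be sketched as follows (a simplified model of asynchronous serial framing with one start element, eight data bits sent least-significant-bit first, and one stop element; real UART configurations vary):

```python
# The idle line rests at 1 (mark); the start signal is a single 0 (space)
# element lasting one unit interval, which tells the receiver a character
# follows. A stop element returns the line to mark.
def frame_character(byte: int, data_bits: int = 8) -> list:
    start = [0]                                          # start signal: one unit interval
    data = [(byte >> i) & 1 for i in range(data_bits)]   # data bits, LSB first
    stop = [1]                                           # stop element
    return start + data + stop

print(frame_character(0x41))  # 'A' -> [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
```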
References
Telecommunications engineering |
https://en.wikipedia.org/wiki/Statement | Statement or statements may refer to:
Common uses
Statement (computer science), the smallest standalone element of an imperative programming language
Statement (logic and semantics), declarative sentence that is either true or false
Statement, a declarative phrase in language (linguistics)
Statement, a North American paper size of 5 1⁄2 in × 8 1⁄2 in (140 mm × 216 mm), also known under various names such as half letter and memo
Financial statement, formal summary of the financial activities of a business, person, or other entity
Mathematical statement, a statement in logic and mathematics
Political statement, any act or nonverbal form of communication that is intended to influence a decision to be made for or by a group
Press statement, written or recorded communication directed at members of the news media
Statement of Special Educational Needs, outlining specific provision needed for a child in England
Witness statement (law), a signed document recording the evidence given by a person with testimony relevant to an incident
A written ministerial statement to Parliament in the United Kingdom
Music
Statements (album), a 1962 album by the jazz vibraphonist Milt Jackson
"Statement" (song), a 2008 song by the band Boris
"Statement", a song by NF from the album Therapy Session
"Statements" (song), a 2017 song by Loreen
See also
The Statement (disambiguation)
Sentence (linguistics), words grouped meaningfully to express a statement, question, exclamation, request, command or suggestion
State (disambiguation)
Theme (music), a statement in music
https://en.wikipedia.org/wiki/Supervisory%20program | A supervisory program or supervisor is a computer program, usually part of an operating system, that controls the execution of other routines and regulates work scheduling, input/output operations, error actions, and similar functions and regulates the flow of work in a data processing system.
It can also refer to a program that allocates computer component space and schedules computer events by task queuing and system interrupts. Control of the system is returned to the supervisory program frequently enough to ensure that demands on the system are met.
Historically, this term was essentially associated with IBM's line of mainframe operating systems starting with OS/360. In other operating systems, the supervisor is generally called the kernel.
In the 1970s, IBM further abstracted the supervisor state from the hardware, resulting in a hypervisor that enabled full virtualization, i.e. the capacity to run multiple operating systems on the same machine totally independently from each other. Hence the first such system was called Virtual Machine or VM.
References
Operating system technology |
https://en.wikipedia.org/wiki/Systems%20design | Systems design is the process of defining the architecture, modules, interfaces, and data for an electronic control system to satisfy specified requirements. System design could be seen as the application of system theory to product development. There is some overlap with the disciplines of system analysis, system architecture and system engineering.
Overview
If the broader topic of product development "blends the perspective of marketing, design, and manufacturing into a single approach to product development," then design is the act of taking the marketing information and creating the design of the product to be manufactured. Systems design is therefore the process of defining and developing systems to satisfy specified requirements of the user.
The basic study of system design is the understanding of component parts and their subsequent interaction with one another.
Physical design
The physical design relates to the actual input and output processes of the system. This is explained in terms of how data is input into a system, how it is verified/authenticated, how it is processed, and how it is displayed.
In physical design, the following requirements about the system are decided.
Input requirement,
Output requirements,
Storage requirements,
Processing requirements,
System control and backup or recovery.
Put another way, the physical portion of system design can generally be broken down into three sub-tasks:
User Interface Design
Data Design
Process Design
Web System design
Online websites, such as Google, Twitter, Facebook, Amazon and Netflix are used by millions of users worldwide. A scalable, highly available system must be designed to accommodate an increasing number of users. Here are the things to consider in designing the system:
Functional and non-functional requirements
Capacity estimation
Database to use, Relational or NoSQL
Vertical scaling, Horizontal scaling, Sharding
Load Balancing
Primary-secondary Replication
Cache and CDN
Stateless and Stateful servers
Data center georouting
Message Queue, Publish Subscribe Architecture
Performance Metrics Monitoring and Logging
Build, test, configure deploy automation
Finding single point of failure
API Rate limiting
Service Level Agreement
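One item from the list above, API rate limiting, is commonly implemented as a token bucket. A minimal sketch (class name and parameters are illustrative, not from the source):

```python
import time

# Token-bucket rate limiter: tokens refill at a fixed rate up to a capacity;
# each request consumes one token, and requests are rejected when the bucket
# is empty, smoothing bursts while bounding average throughput.
class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=5, capacity=2)
results = [bucket.allow() for _ in range(4)]
print(results)  # the first two requests pass; the rest wait for tokens to refill
```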
See also
Arcadia (engineering)
Architectural pattern (computer science)
Configuration design
Electronic design automation (EDA)
Electronic system-level (ESL)
Embedded system
Graphical system design
Hypersystems
Modular design
Morphological analysis (problem-solving)
SCSD (School Construction System Development) project
System information modelling
System development life cycle (SDLC)
System engineering
System thinking
TRIZ
References
Further reading
Bentley, Lonnie D., Kevin C. Dittman, and Jeffrey L. Whitten. System analysis and design methods. (1986, 1997, 2004).
Hawryszkiewycz, Igor T. Introduction to system analysis and design. Prentice Hall PTR, 1994.
Levin, Mark Sh. Modular system design and evaluation. Springer, 2015.
External links
Intera |
https://en.wikipedia.org/wiki/Terminal%20adapter | A terminal adapter or TA is a device that connects a terminal device – a computer, a mobile communications device, or other – to a communications network.
ISDN
In ISDN terminology, the terminal adapter connects a terminal (computer) to the ISDN network. The TA therefore fulfills a function similar to that of a modem on the POTS network, and is therefore sometimes called an ISDN modem. The latter term, however, is partially misleading as there is no modulation or demodulation performed.
There are devices on the market that combine the functions of an ISDN TA with those of a classical modem (with an ISDN line interface). These combined TA/modems permit connections from both ISDN and analog-line/modem counterparts. In addition, a TA may contain an interface and codec for one or more analog telephone lines (aka a/b line), allowing an existing POTS installation to be upgraded to ISDN without changing phones.
Terminal adapters typically connect to a basic rate interface (S0, sometimes also U0). On the terminal side, the most popular interfaces are RS-232 serial and USB; others like V.35 or RS-449 are only of historical interest.
Devices connecting ISDN to a network (e.g. Ethernet) commonly include routing functionality; while they technically include a TA function, they are referred to as (ISDN) routers.
Mobile networks
In mobile networks, the terminal adapter is used by the terminal equipment to access the mobile termination, using AT commands (see Hayes command set).
In 2G (such as GSM or CDMA), the terminal adapter is theoretically optional, while in 3G (such as W-CDMA), the terminal adapter is mandatory and is part of the mobile termination.
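The AT-command interaction can be sketched with a toy command handler (the "AT" attention check and "ATD" dial prefix are genuine Hayes-style commands, but the response strings here are simplified inventions; real equipment answers with result codes such as OK, CONNECT, or NO CARRIER):

```python
# Toy terminal-adapter command loop: the terminal equipment drives the
# mobile termination with Hayes-style AT commands; the TA parses them
# and replies. Responses are simplified for illustration.
def handle_at_command(line: str) -> str:
    cmd = line.strip().upper()
    if cmd == "AT":            # attention check: is the TA listening?
        return "OK"
    if cmd.startswith("ATD"):  # dial the digits following the D
        number = cmd[3:].rstrip(";")
        return f"DIALING {number}" if number else "ERROR"
    return "ERROR"

print(handle_at_command("AT"))          # OK
print(handle_at_command("ATD555123;"))  # DIALING 555123
```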
Automation industry
In the automation industry, a terminal adapter is a passive device that converts a connector like the 8P8C (RJ-45) modular connector or 9 pin D-Sub into a terminal block to facilitate wiring. It is often used when daisy chain wiring is necessary on a multi-node serial communication network like RS-485 or RS-422.
See also
Federal Standard 1037C
Integrated Services Digital Network |
https://en.wikipedia.org/wiki/T%20interface | A T-interface or T reference point is used for basic rate access in an Integrated Services Digital Network (ISDN) environment. It is a User–network interface reference point that is characterized by a four-wire, 144 kbit/s (2B+D) user rate.
Other characteristics of a T-interface are:
it accommodates the link access and transport layer function in the ISDN architecture
it is located at the user premises
it is distance sensitive to the servicing Network termination 1
it functions in a manner similar to that of the Channel service units (CSUs) and the Data service units (DSUs).
The T interface is electrically equivalent to the S interface, and the two are jointly referred to as the S/T interface.
See also
R interface
S interface
U interface
References
Networking hardware
Integrated Services Digital Network |
https://en.wikipedia.org/wiki/Traffic%20intensity | In telecommunication networks, traffic intensity is a measure of the average occupancy of a server or resource during a specified period of time, normally a busy hour. It is measured in traffic units (erlangs) and defined as the ratio of the time during which a facility is cumulatively occupied to the time this facility is available for occupancy.
In a digital network, the traffic intensity is I = (a × L) / R, where
a is the average arrival rate of packets (e.g. in packets per second)
L is the average packet length (e.g. in bits), and
R is the transmission rate (e.g. bits per second)
A traffic intensity greater than one erlang means that the rate at which bits arrive exceeds the rate bits can be transmitted and queuing delay will grow without bound (if the traffic intensity stays the same). If the traffic intensity is less than one erlang, then the router can handle more average traffic.
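The definition above can be sketched directly (the link and packet figures are illustrative, not from the source):

```python
def traffic_intensity(arrival_rate_pps: float, avg_packet_bits: float,
                      link_rate_bps: float) -> float:
    """Traffic intensity in erlangs: (a * L) / R."""
    return arrival_rate_pps * avg_packet_bits / link_rate_bps

# 100 packets/s of 12,000-bit packets offered to a 1.5 Mbit/s link:
rho = traffic_intensity(100, 12_000, 1_500_000)
print(rho)  # 0.8 -- below one erlang, the link keeps up; above one,
            # queuing delay grows without bound
```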
Telecommunication operators are vitally interested in traffic intensity, as it dictates the amount of equipment they must supply.
See also
Teletraffic engineering
IEC 80000-13
References
Teletraffic |
https://en.wikipedia.org/wiki/Trusted%20computing%20base | The trusted computing base (TCB) of a computer system is the set of all hardware, firmware, and/or software components that are critical to its security, in the sense that bugs or vulnerabilities occurring inside the TCB might jeopardize the security properties of the entire system. By contrast, parts of a computer system that lie outside the TCB must not be able to misbehave in a way that would leak any more privileges than are granted to them in accordance to the system's security policy.
The careful design and implementation of a system's trusted computing base is paramount to its overall security. Modern operating systems strive to reduce the size of the TCB so that an exhaustive examination of its code base (by means of manual or computer-assisted software audit or program verification) becomes feasible.
Definition and characterization
The term trusted computing base goes back to John Rushby, who defined it as the combination of operating system kernel and trusted processes. The latter refers to processes which are allowed to violate the system's access-control rules.
In the classic paper Authentication in Distributed Systems: Theory and Practice Lampson et al. define the TCB of a computer system as simply
a small amount of software and hardware that security depends on and that we distinguish from a much larger amount that can misbehave without affecting security.
Both definitions, while clear and convenient, are neither theoretically exact nor intended to be, as e.g. a network server process under a UNIX-like operating system might fall victim to a security breach and compromise an important part of the system's security, yet is not part of the operating system's TCB. The Orange Book, another classic computer security literature reference, therefore provides a more formal definition of the TCB of a computer system, as
the totality of protection mechanisms within it, including hardware, firmware, and software, the combination of which is responsible for enforcing a computer security policy.
In other words, trusted computing base (TCB) is a combination of hardware, software, and controls that work together to form a trusted base to enforce your security policy.
The Orange Book further explains that
[t]he ability of a trusted computing base to enforce correctly a unified security policy depends on the correctness of the mechanisms within the trusted computing base, the protection of those mechanisms to ensure their correctness, and the correct input of parameters related to the security policy.
In other words, a given piece of hardware or software is a part of the TCB if and only if it has been designed to be a part of the mechanism that provides its security to the computer system. In operating systems, this typically consists of the kernel (or microkernel) and a select set of system utilities (for example, setuid programs and daemons in UNIX systems). In programming languages designed with built-in security features, such as Java |
https://en.wikipedia.org/wiki/NSA%20product%20types | The U.S. National Security Agency (NSA) used to rank cryptographic products or algorithms by a certification called product types. Product types were defined in the National Information Assurance Glossary (CNSSI No. 4009, 2010) which used to define Type 1, 2, 3, and 4 products. The definitions of numeric type products have been removed from the government lexicon and are no longer used in government procurement efforts.
Type 1 product
A Type 1 product was a device or system certified by NSA for use in cryptographically securing classified U.S. Government information. A Type 1 product was defined as:
Cryptographic equipment, assembly or component classified or certified by NSA for encrypting and decrypting classified and sensitive national security information when appropriately keyed. Developed using established NSA business processes and containing NSA approved algorithms. Used to protect systems requiring the most stringent protection mechanisms.
They were available to U.S. Government users, their contractors, and federally sponsored non-U.S. Government activities subject to export restrictions in accordance with International Traffic in Arms Regulations.
Type 1 certification was a rigorous process that included testing and formal analysis of (among other things) cryptographic security, functional security, tamper resistance, emissions security (EMSEC/TEMPEST), and security of the product manufacturing and distribution process.
Type 2 product
A Type 2 product was unclassified cryptographic equipment, assemblies, or components, endorsed by the NSA, for use in telecommunications and automated information systems for the protection of national security information, as defined as:
Cryptographic equipment, assembly, or component certified by NSA for encrypting or decrypting sensitive national security information when appropriately keyed. Developed using established NSA business processes and containing NSA approved algorithms. Used to protect systems requiring protection mechanisms exceeding best commercial practices including systems used for the protection of unclassified national security information.
Type 3 product
A Type 3 product was a device for use with Sensitive, But Unclassified (SBU) information on non-national security systems, defined as:
Unclassified cryptographic equipment, assembly, or component used, when appropriately keyed, for encrypting or decrypting unclassified sensitive U.S. Government or commercial information, and to protect systems requiring protection mechanisms consistent with standard commercial practices. Developed using established commercial standards and containing NIST approved cryptographic algorithms/modules or successfully evaluated by the National Information Assurance Partnership (NIAP).
Approved encryption algorithms included three-key Triple DES, and AES (although AES can also be used in NSA-certified Type 1 products). Approvals for DES, two-key Triple DES and Skipjack have been withdrawn as of |
https://en.wikipedia.org/wiki/Telephony | Telephony ( ) is the field of technology involving the development, application, and deployment of telecommunication services for the purpose of electronic transmission of voice, fax, or data, between distant parties. The history of telephony is intimately linked to the invention and development of the telephone.
Telephony is commonly referred to as the construction or operation of telephones and telephonic systems and as a system of telecommunications in which telephonic equipment is employed in the transmission of speech or other sound between points, with or without the use of wires. The term is also used frequently to refer to computer hardware, software, and computer network systems, that perform functions traditionally performed by telephone equipment. In this context the technology is specifically referred to as Internet telephony, or voice over Internet Protocol (VoIP).
Overview
The first telephones were connected directly in pairs. Each user had a separate telephone wired to each location to be reached. This quickly became inconvenient and unmanageable when users wanted to communicate with more than a few people. The invention of the telephone exchange provided the solution for establishing telephone connections with any other telephone in service in the local area. Each telephone was connected to the exchange at first with one wire, later one wire pair, the local loop. Nearby exchanges in other service areas were connected with trunk lines, and long-distance service could be established by relaying the calls through multiple exchanges.
Initially, exchange switchboards were manually operated by an attendant, commonly referred to as the "switchboard operator". When a customer cranked a handle on the telephone, it activated an indicator on the board in front of the operator, who would in response plug the operator headset into that jack and offer service. The caller had to ask for the called party by name, later by number, and the operator connected one end of a circuit into the called party jack to alert them. If the called station answered, the operator disconnected their headset and completed the station-to-station circuit. Trunk calls were made with the assistance of other operators at other exchanges in the network.
Until the 1970s, most telephones were permanently wired to the telephone line installed at customer premises. Later, conversion to installation of jacks that terminated the inside wiring permitted simple exchange of telephone sets with telephone plugs and allowed portability of the set to multiple locations in the premises where jacks were installed. The inside wiring to all jacks was connected in one place to the wire drop which connects the building to a cable. Cables usually bring a large number of drop wires from all over a district access network to one wire center or telephone exchange. When a telephone user wants to make a telephone call, equipment at the exchange examines the dialed telephone number and con |
https://en.wikipedia.org/wiki/U%20interface | The U interface or U reference point is a Basic Rate Interface (BRI) in the local loop of an Integrated Services Digital Network (ISDN), connecting the network terminator (NT1/2) on the customer's premises to the line termination (LT) in the carrier's local exchange, in other words providing the connection from subscriber to central office.
Unlike the ISDN S/T interfaces, the U interface was not originally electrically defined by the ITU ISDN specifications, but left up to network operators to implement, although the ITU has issued recommendations G.960 and G.961 to formalize the standards adopted in the US and EU.
In the US, the U interface is originally defined by the ANSI T1.601 specification as a 2-wire connection using 2B1Q line coding. It is not as distance sensitive as the S interface or T interface, and can operate at distances up to 18,000 feet. Typically the U interface does not connect to terminal equipment (which typically has an S/T interface) but to an NT1 or NT2 (network terminator type 1 or 2.)
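The 2B1Q line coding mentioned above maps each pair of bits (a "dibit") to one of four line levels, halving the symbol rate relative to the bit rate. A sketch using the mapping commonly given for ANSI T1.601 (first bit sets the sign, second the magnitude):

```python
# 2B1Q dibit-to-level mapping as commonly stated for ANSI T1.601.
DIBIT_TO_LEVEL = {(1, 0): +3, (1, 1): +1, (0, 1): -1, (0, 0): -3}

def encode_2b1q(bits):
    """Encode a bit sequence into quaternary symbols, two bits per symbol."""
    assert len(bits) % 2 == 0, "2B1Q consumes bits two at a time"
    return [DIBIT_TO_LEVEL[(bits[i], bits[i + 1])]
            for i in range(0, len(bits), 2)]

print(encode_2b1q([1, 0, 1, 1, 0, 1, 0, 0]))  # [3, 1, -1, -3]
```

Eight bits become four line symbols, which is what lets the U interface carry 144 kbit/s over a single wire pair at a lower symbol rate than the bit rate.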
An NT1 is a discrete device that converts the U interface to an S/T interface, which is then connected to terminal equipment (TE) having an S/T interface. However, some TE devices integrate an NT1, and therefore have a direct U interface suitable for connection directly to the loop.
An NT2 is a more sophisticated local switching device such as a PBX, that may convert the signal to a different format or hand it off as S/T to terminal equipment.
In America, the NT1 is customer premises equipment (CPE) which is purchased and maintained by the user, which makes the U interface a User–network interface (UNI). The American variant is specified by ANSI T1.601.
In Europe, the NT1 belongs to the network operator, so the user doesn't have direct access to the U interface. The European variant is specified by the European Telecommunications Standards Institute (ETSI) in recommendation ETR 080. The ITU-T has issued recommendations G.960 and G.961 with world-wide scope, encompassing both the European and American variants of the U interface.
Logical interface
Like all other ISDN basic rate interfaces, the U interface carries two B (bearer) channels at 64 kbit/s and one D (data) channel at 16 kbit/s for a combined bitrate of 144 kbit/s (2B+D).
Duplex transmission
While in a four-wire interface such as the ISDN S and T-interfaces one wire pair is available for each direction of transmission, a two-wire interface needs to implement both directions on a single wire pair. To that end, ITU-T recommendation G.961 specifies two duplex transmission technologies for the ISDN U interface, either of which shall be used: Echo cancellation (ECH) and Time Compression Multiplex (TCM).
Echo cancellation (ECH)
When a transmitter applies a signal to the wire-pair, parts of the signal will be reflected as a result of imperfect balance of the hybrid and because of impedance discontinuities on the line. These reflections return to the transmitter as an echo and are |
https://en.wikipedia.org/wiki/Abstract%20factory%20pattern | The abstract factory pattern in software engineering is a design pattern that provides a way to create families of related objects without specifying their concrete classes, by encapsulating a group of individual factories that have a common theme. According to this pattern, a client software component creates a concrete implementation of the abstract factory and then uses the generic interface of the factory to create the concrete objects that are part of the family. The client does not know which concrete objects it receives from each of these internal factories, as it uses only the generic interfaces of their products. This pattern separates the details of implementation of a set of objects from their general usage and relies on object composition, as object creation is implemented in methods exposed in the factory interface.
Use of this pattern enables interchangeable concrete implementations without changing the code that uses them, even at runtime. However, employment of this pattern, as with similar design patterns, may result in unnecessary complexity and extra work in the initial writing of code. Additionally, higher levels of separation and abstraction can result in systems that are more difficult to debug and maintain.
Overview
The abstract factory design pattern is one of the 23 patterns described in the 1994 Design Patterns book. It may be used to solve problems such as:
How can an application be independent of how its objects are created?
How can a class be independent of how the objects that it requires are created?
How can families of related or dependent objects be created?
Creating objects directly within the class that requires the objects is inflexible because doing so commits the class to particular objects and makes it impossible to change the instantiation later independently from the class without having to change it. It prevents the class from being reusable if other objects are required, and it makes the class difficult to test because real objects cannot be replaced with mock objects.
A factory is the location of a concrete class in the code at which objects are constructed. Implementation of the pattern intends to insulate the creation of objects from their usage and to create families of related objects without having to depend on their concrete classes. This allows for new derived types to be introduced with no change to the code that uses the base class.
The pattern describes how to solve such problems:
Encapsulate object creation in a separate (factory) object by defining and implementing an interface for creating objects.
Delegate object creation to a factory object instead of creating objects directly.
This makes a class independent of how its objects are created. A class may be configured with a factory object, which it uses to create objects, and the factory object can be exchanged at runtime.
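The structure described above can be sketched in Python. The GUIFactory, WinFactory and MacFactory names and their button/checkbox products are hypothetical examples chosen for illustration, not part of any standard library:

```python
from abc import ABC, abstractmethod

# Abstract products: the generic interfaces the client relies on.
class Button(ABC):
    @abstractmethod
    def render(self) -> str: ...

class Checkbox(ABC):
    @abstractmethod
    def render(self) -> str: ...

# Concrete products, grouped into two families.
class WinButton(Button):
    def render(self) -> str:
        return "[Win button]"

class MacButton(Button):
    def render(self) -> str:
        return "[Mac button]"

class WinCheckbox(Checkbox):
    def render(self) -> str:
        return "[Win checkbox]"

class MacCheckbox(Checkbox):
    def render(self) -> str:
        return "[Mac checkbox]"

# Abstract factory: one creation method per product in the family.
class GUIFactory(ABC):
    @abstractmethod
    def create_button(self) -> Button: ...
    @abstractmethod
    def create_checkbox(self) -> Checkbox: ...

class WinFactory(GUIFactory):
    def create_button(self) -> Button:
        return WinButton()
    def create_checkbox(self) -> Checkbox:
        return WinCheckbox()

class MacFactory(GUIFactory):
    def create_button(self) -> Button:
        return MacButton()
    def create_checkbox(self) -> Checkbox:
        return MacCheckbox()

def build_ui(factory: GUIFactory) -> list:
    # The client uses only the abstract interfaces; it never names
    # a concrete product class.
    return [factory.create_button().render(),
            factory.create_checkbox().render()]
```

Exchanging the whole product family, even at runtime, is then a matter of passing a different factory object to build_ui; no client code changes.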
Definition
The Design Patterns book describes the abstract factory |
https://en.wikipedia.org/wiki/Telecommunications%20link | In a telecommunications network, a link is a communication channel that connects two or more devices for the purpose of data transmission. The link may be a dedicated physical link or a virtual circuit that uses one or more physical links or shares a physical link with other telecommunications links.
A telecommunications link is generally based on one of several types of information transmission paths such as those provided by communication satellites, terrestrial radio communications infrastructure and computer networks to connect two or more points.
The term link is widely used in computer networking to refer to the communications facilities that connect nodes of a network.
Sometimes the communications facilities that provide the communication channel that constitutes a link are also included in the definition of link.
Types
Point-to-point
A point-to-point link is a dedicated link that connects exactly two communication facilities (e.g., two nodes of a network, an intercom station at an entryway with a single internal intercom station, a radio path between two points, etc.).
Broadcast
Broadcast links connect two or more nodes and support broadcast transmission, where one node can transmit so that all other nodes can receive the same transmission. Classic Ethernet is an example.
Multipoint
Also known as a multidrop link, a multipoint link is a link that connects two or more nodes. Networks built from such links, sometimes called general topology networks, include ATM and Frame Relay links, as well as X.25 networks when used as links for a network layer protocol such as IP.
Unlike broadcast links, there is no mechanism to efficiently send a single message to all other nodes without copying and retransmitting the message.
Point-to-multipoint
A point-to-multipoint link (or simply a multipoint) is a specific type of multipoint link which consists of a central connection endpoint (CE) that is connected to multiple peripheral CEs. Any transmission of data that originates from the central CE is received by all of the peripheral CEs while any transmission of data that originates from any of the peripheral CEs is only received by the central CE.
Private and public
Links are often referred to by terms that refer to the ownership or accessibility of the link.
A private link is a link that is either owned by a specific entity or a link that is only accessible by a specific entity.
A public link is a link that uses the public switched telephone network or other public utility or entity to provide the link and which may also be accessible by anyone.
Direction
Uplink
Pertaining to radiocommunication service, an uplink (UL or U/L) is the portion of a feeder link used for the transmission of signals from an earth station to a space radio station, space radio system or high altitude platform station.
Pertaining to GSM and cellular networks, the radio uplink is the transmission path from the mobile station (cell phone) to a base station (cell site). Traffic and signalling flowi |
https://en.wikipedia.org/wiki/Validation | Validation may refer to:
Data validation, in computer science, ensuring that data inserted into an application satisfies defined formats and other input criteria
Emotional validation, in interpersonal communication, the recognition, affirmation, and acceptance of another person's expressed emotions, and the communication of that acknowledgement to the person expressing them
Forecast verification, validating and verifying prognostic output from a numerical model
Regression validation, in statistics, determining whether the outputs of a regression model are adequate
Social validation, compliance in a social activity to fit in and be part of the majority
Statistical model validation, determining whether the outputs of a statistical model are acceptable
Validation (drug manufacture), documenting that a process or system meets its predetermined specifications and quality attributes
Validation (gang membership), a formal process for designating a criminal as a member of a gang
Validation of foreign studies and degrees, processes for transferring educational credentials between countries
Validation therapy, a therapy developed by Naomi Feil for older people with cognitive impairments and dementia
Verification and validation (software), checking that software meets specifications and fulfills its intended purpose
Verification and validation, in engineering, confirming that a product or service meets the needs of its users
XML validation, the process of checking a document written in XML to confirm that it both is "well-formed" and follows a defined structure
See also
Cross validation (disambiguation)
Revalidation, of medical doctors in UK
Validity (disambiguation)
Verification (disambiguation) |
https://en.wikipedia.org/wiki/Variable-length%20buffer | In telecommunication, a variable length buffer or elastic buffer is a buffer into which data may be entered at one rate and removed at another rate without changing the data sequence.
Most first-in first-out (FIFO) storage devices are variable-length buffers in that the input rate may be variable while the output rate is constant or the output rate may be variable while the input rate is constant. Various clocking and control systems are used to allow control of underflow or overflow conditions.
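The FIFO behaviour described above can be sketched with a bounded queue in Python. The ElasticBuffer class and its write/read interface are illustrative assumptions, not a standard API; the point is that items leave in the order they entered regardless of the relative rates, with overflow and underflow signalled to the caller:

```python
from collections import deque

class ElasticBuffer:
    """A FIFO into which data may be entered at one rate and removed
    at another without changing the data sequence. Capacity checks
    stand in for the clocking/control systems that manage overflow
    and underflow."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._fifo = deque()

    def write(self, item) -> bool:
        if len(self._fifo) >= self.capacity:
            return False          # overflow: writer must wait or drop
        self._fifo.append(item)
        return True

    def read(self):
        if not self._fifo:
            return None           # underflow: reader must wait
        return self._fifo.popleft()
```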
See also
Buffer (telecommunication)
Circular buffer
References
Synchronization
Computer memory |
https://en.wikipedia.org/wiki/Viewdata | Viewdata is a Videotex implementation. It is a type of information retrieval service in which a subscriber can access a remote database via a common carrier channel, request data and receive requested data on a video display over a separate channel. Samuel Fedida, who had the idea for Viewdata in 1968, was credited as inventor of the system, which he developed while working for the British Post Office, then the operator of the national telephone system. The first prototype became operational in 1974. The access, request and reception are usually via common carrier channels, in contrast with teletext, which is broadcast.
Technology
Viewdata offered a display of 40×24 characters, based on ISO 646 (IRV IA5) – 7 bits with no accented characters.
Originally Viewdata was accessed with a special purpose terminal (or emulation software) and a modem running at ITU-T V.23 speed (1,200 bit/s down, 75 bit/s up). By 2004 it was normally accessed over TCP/IP using Viewdata client software on a personal computer running Microsoft Windows, or using a Web-based emulator.
Travel industry
As of 2015, Viewdata was still in use in the United Kingdom, mainly by the travel industry. Travel agents use it to look up the price and availability of package holidays and flights. Once they find what the customer is looking for they can place a booking.
There are a number of factors still holding up a move to a Web-based standard. Viewdata is regarded within the industry as low-cost and reliable, travel consultants have been trained to use Viewdata and would need training to book holidays on the Internet, and tour operators cannot agree on a Web-based standard.
Bulletin board systems
It was developed in the late 1970s and early 1980s to make it easier for travel consultants to check availability and make bookings for holidays.
A number of Viewdata bulletin board systems existed in the 1980s, predominantly in the UK due to the proliferation of the BBC Micro, and a short-lived Viewdata Revival appeared in the late 1990s fuelled by the retrocomputing vogue. Some Viewdata boards still exist, with accessibility in the form of Java Telnet clients.
Keypad symbols: the sextile and the square
Viewdata uses special symbols already widely available on telephone keypads: the "star" key and the "square" key, as formally standardised by the International Telecommunication Union. These are often treated as approximately corresponding to the ASCII asterisk (*) and number sign (#), which do not necessarily conform to the ITU specifications for the keypad symbols; the asterisk is also usually displayed smaller and raised.
These symbols appear as 'Sextile' and 'Viewdata square' in the Miscellaneous Symbols and Miscellaneous Technical Unicode blocks, respectively. The sextile was added due to its use in astrology, and the square had previously appeared in the BS_Viewdata character set, as a replacement for the underscore.
In 2013, the German national body submitted a Unicode Technical C |
https://en.wikipedia.org/wiki/Virtual%20circuit | A virtual circuit (VC) is a means of transporting data over a data network, based on packet switching and in which a connection is first established across the network between two endpoints. The network, rather than having a fixed data rate reservation per connection as in circuit switching, takes advantage of the statistical multiplexing on its transmission links, an intrinsic feature of packet switching.
A 1978 standardization of virtual circuits by the CCITT imposes per-connection flow controls at all user-to-network and network-to-network interfaces. This permits participation in congestion control and reduces the likelihood of packet loss in a heavily loaded network. Some circuit protocols provide reliable communication service through the use of data retransmissions invoked by error detection and automatic repeat request (ARQ).
Before a virtual circuit may be used, it must be established between network nodes in the call setup phase. Once established, a bit stream or byte stream may be exchanged between the nodes, providing abstraction from low-level division into protocol data units, and enabling higher-level protocols to operate transparently.
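What the call setup phase leaves behind at each switch can be modelled as a small forwarding table keyed by incoming port and virtual circuit identifier (VCI). The table contents, port numbers and the forward function below are hypothetical, chosen only to illustrate per-hop VCI rewriting:

```python
# Hypothetical per-switch state installed during call setup:
# (incoming port, incoming VCI) -> (outgoing port, outgoing VCI)
table = {
    (1, 42): (3, 7),
    (2, 10): (3, 8),
}

def forward(in_port: int, in_vci: int, payload: bytes):
    """Look up the virtual circuit and relabel the packet.

    The VCI has only per-link significance, so it is rewritten at
    every hop; the payload travels unchanged."""
    out_port, out_vci = table[(in_port, in_vci)]
    return out_port, out_vci, payload
```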
An alternative to virtual-circuit networks is the datagram network.
Comparison with circuit switching
Virtual circuit communication resembles circuit switching, since both are connection oriented, meaning that in both cases data is delivered in correct order, and signaling overhead is required during a connection establishment phase. However, circuit switching provides a constant bit rate and latency, while these may vary in a virtual circuit service due to factors such as:
varying packet queue lengths in the network nodes,
varying bit rate generated by the application,
varying load from other users sharing the same network resources by means of statistical multiplexing, etc.
Virtual call capability
In telecommunication, a virtual call capability, sometimes called a virtual call facility, is a service feature in which:
a call set-up procedure and a call disengagement procedure determine the period of communication between two DTEs in which user data are transferred by a packet switched network
end-to-end transfer control of packets within the network is required
data may be delivered to the network by the call originator before the call access phase is completed, but the data are not delivered to the call receiver if the call attempt is unsuccessful
the network delivers all the user data to the call receiver in the same sequence in which the data are received by the network
multi-access DTEs may have several virtual calls in progress at the same time.
An alternative approach to virtual calls is connectionless communication using datagrams.
In the 1970s, the "virtual call" concept was used in the British EPSS and enhanced by Rémi Després as "virtual circuits" in the French RCP.
Layer 4 virtual circuits
Connection oriented transport layer protocols such as TCP may rely on a connectionles |
https://en.wikipedia.org/wiki/Virtual%20terminal | In open systems, a virtual terminal (VT) is an application service that:
Allows host terminals on a multi-user network to interact with other hosts regardless of terminal type and characteristics,
Allows remote log-on by local area network managers for the purpose of management,
Allows users to access information from another host processor for transaction processing,
Serves as a backup facility.
PuTTY is an example of a virtual terminal.
ITU-T defines a virtual terminal protocol based on the OSI application layer protocols. However, the virtual terminal protocol is not widely used on the Internet.
See also
Pseudo terminal for the software interface that provides access to virtual terminals
Secure Shell
Telnet
Terminal emulator for an application program that provides access to virtual terminals
Virtual console for an analogous concept that provides several local consoles
Sources
Computer terminals
ITU-T recommendations |
https://en.wikipedia.org/wiki/Wireless%20mobility%20management | Wireless mobility management in Personal Communications Service (PCS) is the assigning and controlling of wireless links for terminal network connections. Wireless mobility management provides an "alerting" function for call completion to a wireless terminal, monitors wireless link performance to determine when an automatic link transfer is required, and coordinates link transfers between wireless access interfaces.
One use of this is wireless push technology, by pushing data across wireless networks, this coordinates the link transfers and pushes data between the backend and wireless device only when an established connection is found.
References
Push technology
Wireless networking |
https://en.wikipedia.org/wiki/Work%20station | A workstation or work station may refer to:
A computer or device, such as
computer workstation, a high-performance desktop computer (e.g., one with error-correcting memory), as may be designed for scientific or engineering applications, or
a music workstation
A work space (usually for one employee); for example,
a cubicle or
a piece of furniture such as a computer desk. |
https://en.wikipedia.org/wiki/Terminal | Terminal may refer to:
Computing
Hardware
Terminal (electronics), a device for joining electrical circuits together
Terminal (telecommunication), a device communicating over a line
Computer terminal, a set of primary input and output devices for a computer
Feedback terminal, a physical device used to collect anonymous feedback
Software
Terminal emulator, a program that emulates a computer terminal within some other display architecture
Terminal (macOS), a terminal emulator included with macOS
Windows Terminal, a terminal emulator for Windows 10 and Windows 11
GNOME Terminal, a Linux and BSD terminal emulator
Terminal and nonterminal symbols, lexical elements used in specifying the production rules constituting a formal grammar in computer science.
Fonts
Terminal (typeface), a monospace font
Terminal (typography), a type of stroke ending
Transportation
Airport terminal, a building where passengers embark and disembark aircraft (or cargo is loaded)
Bus station
Battery terminal, electrical contact used to connect a load or charger to a single cell or multiple-cell battery
Passenger terminal (maritime), a structure where passenger vessels pick up and drop off passengers
Container port, also called a terminal, where cargo containers are transferred between different vehicles or ships
Railroad terminal, the end point of a railroad line
Places
Terminal (Asunción), a neighbourhood in Paraguay
Terminal Peak, a mountain in Tasmania, Australia
Terminal Range, a mountain range in Canada
Media
Film and TV
The Terminal, 2004 American comedy-drama film
Terminal (2018 film), an American film starring Margot Robbie
"Terminal", an episode of Law & Order
"Terminal" (Space Ghost Coast to Coast), a television episode
Music
Terminal (American band), a rock band from Texas
terminal (Danish band), a pop rock band from Copenhagen
Terminal (Ancestral Legacy album), 2014
Terminal (Salyu album), 2007
"Terminal" (Ayumi Hamasaki song), 2014
"Terminal" (Rupert Holmes song), 1974
"Terminals", a song by Relient K featuring Adam Young
Literature
Terminal (Cook novel), by Robin Cook
Terminal (Tunnels novel), a 2013 novel in the Tunnels series
Terminal, a novel by Colin Forbes
The Terminal Man, 1972 novel by Michael Crichton
The Terminal Man (film), film adaptation by Mike Hodges
The Terminal Man, autobiography of Mehran Karimi Nasseri
Other uses
Terminal illness, a progressive disease that is expected to cause death
Terminal velocity, in physics
Terminal High Altitude Area Defense, an American anti-ballistic missile defence system
Terminal deoxynucleotidyl transferase, a specialized DNA polymerase
Terminal sedation, another name for palliative sedation, the practice of inducing unconsciousness in a terminally ill person for the remainder of the person's life
Terminal, relating to the location of a flower or other feature in botany
Potsdam Conference, code-named “Terminal”, the last Allied meeting of World War II
Suffi |
https://en.wikipedia.org/wiki/Teletype%20%28disambiguation%29 | The teletype, or teleprinter, is a device used for communicating text over telegraph lines, public switched telephone network, Telex, radio, or satellite links.
Teletype may also refer to:
Devices
Teletype Model 28, a 1951 model of teleprinter
Teletype Model 33, a 1963 model of teleprinter
Telecommunications device for the deaf or TDD, a teleprinter specifically designed for text communication over the public switched telephone network
Companies
TeleType Co., an American GPS software company
Teletype Corporation, a subsidiary of the Western Electric Company, purchased by the American Telephone and Telegraph Company in 1930 |
https://en.wikipedia.org/wiki/Nearest%20neighbour%20algorithm | The nearest neighbour algorithm was one of the first algorithms used to solve the travelling salesman problem approximately. In that problem, the salesman starts at a random city and repeatedly visits the nearest city until all have been visited. The algorithm quickly yields a short tour, but usually not the optimal one.
Algorithm
These are the steps of the algorithm:
Initialize all vertices as unvisited.
Select an arbitrary vertex, set it as the current vertex u. Mark u as visited.
Find the shortest edge connecting the current vertex u to an unvisited vertex v.
Set v as the current vertex u. Mark v as visited.
If all the vertices in the domain are visited, then terminate. Else, go to step 3.
The sequence of the visited vertices is the output of the algorithm.
The nearest neighbour algorithm is easy to implement and executes quickly, but it can sometimes miss shorter routes which are easily noticed with human insight, due to its "greedy" nature. As a general guide, if the last few stages of the tour are comparable in length to the first stages, then the tour is reasonable; if they are much greater, then it is likely that much better tours exist. Another check is to use an algorithm such as the lower bound algorithm to estimate if this tour is good enough.
In the worst case, the algorithm results in a tour that is much longer than the optimal tour. To be precise, for every constant r there is an instance of the traveling salesman problem such that the length of the tour computed by the nearest neighbour algorithm is greater than r times the length of the optimal tour. Moreover, for each number of cities there is an assignment of distances between the cities for which the nearest neighbor heuristic produces the unique worst possible tour. (If the algorithm is applied on every vertex as the starting vertex, the best path found will be better than at least N/2-1 other tours, where N is the number of vertices.)
The nearest neighbour algorithm may not find a feasible tour at all, even when one exists.
Notes
References
G. Gutin, A. Yeo and A. Zverovitch, Exponential Neighborhoods and Domination Analysis for the TSP, in The Traveling Salesman Problem and Its Variations, G. Gutin and A.P. Punnen (eds.), Kluwer (2002) and Springer (2007).
G. Gutin, A. Yeo and A. Zverovich, Traveling salesman should not be greedy: domination analysis of greedy-type heuristics for the TSP. Discrete Applied Mathematics 117 (2002), 81–86.
J. Bang-Jensen, G. Gutin and A. Yeo, When the greedy algorithm fails. Discrete Optimization 1 (2004), 121–127.
G. Bendall and F. Margot, Greedy Type Resistance of Combinatorial Problems, Discrete Optimization 3 (2006), 288–298.
Travelling salesman problem
Approximation algorithms
Heuristic algorithms
Graph algorithms |
https://en.wikipedia.org/wiki/Data%20processing | Data processing is the collection and manipulation of digital data to produce meaningful information.
Data processing is a form of information processing, which is the modification (processing) of information in any manner detectable by an observer.
The term "Data Processing", or "DP" has also been used to refer to a department within an organization responsible for the operation of data processing programs.
Data processing functions
Data processing may involve various processes, including:
Validation – Ensuring that supplied data is correct and relevant.
Sorting – "arranging items in some sequence and/or in different sets."
Summarization (statistical or automatic) – reducing detailed data to its main points.
Aggregation – combining multiple pieces of data.
Analysis – the "collection, organization, analysis, interpretation and presentation of data."
Reporting – listing detail or summary data or computed information.
Classification – separation of data into various categories.
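Several of the functions listed above can be illustrated in a short Python sketch over hypothetical sales records (the field names and the validation rule are assumptions chosen for illustration):

```python
records = [
    {"region": "north", "amount": 120},
    {"region": "south", "amount": 80},
    {"region": "north", "amount": 40},
    {"region": "south", "amount": -5},   # fails validation
]

# Validation: keep only records whose amount is a sensible value.
valid = [r for r in records if r["amount"] >= 0]

# Sorting: arrange the valid records in sequence by amount.
ordered = sorted(valid, key=lambda r: r["amount"])

# Aggregation / summarization: combine records into a total per region.
totals = {}
for r in valid:
    totals[r["region"]] = totals.get(r["region"], 0) + r["amount"]
```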
History
The United States Census Bureau history illustrates the evolution of data processing from manual through electronic procedures.
Manual data processing
Although widespread use of the term data processing dates only from the 1950s, data processing functions have been performed manually for millennia. For example, bookkeeping involves functions such as posting transactions and producing reports like the balance sheet and the cash flow statement. Completely manual methods were augmented by the application of mechanical or electronic calculators. A person whose job was to perform calculations manually or using a calculator was called a "computer."
The 1890 United States Census schedule was the first to gather data by individual rather than household. A number of questions could be answered by making a check in the appropriate box on the form. From 1850 to 1880 the Census Bureau employed "a system of tallying, which, by reason of the increasing number of combinations of classifications required, became increasingly complex. Only a limited number of combinations could be recorded in one tally, so it was necessary to handle the schedules 5 or 6 times, for as many independent tallies." "It took over 7 years to publish the results of the 1880 census" using manual processing methods.
Automatic data processing
The term automatic data processing was applied to operations performed by means of unit record equipment, such as Herman Hollerith's application of punched card equipment for the 1890 United States Census. "Using Hollerith's punchcard equipment, the Census Office was able to complete tabulating most of the 1890 census data in 2 to 3 years, compared with 7 to 8 years for the 1880 census. It is estimated that using Hollerith's system saved some $5 million in processing costs" in 1890 dollars even though there were twice as many questions as in 1880.
Electronic data processing
Computerized data processing, or Electronic data processing represents a |
https://en.wikipedia.org/wiki/Von%20Neumann%20machine | Von Neumann machine may refer to:
Von Neumann architecture, a conceptual model of nearly all computer architecture
IAS machine, a computer designed in the 1940s based on von Neumann's design
Self-replicating machine, a class of machines that can replicate themselves
Universal constructor (disambiguation)
Von Neumann probes, hypothetical space probes capable of self-replication
Nanorobots, capable of self-replication
The Von Neumann cellular automaton |
https://en.wikipedia.org/wiki/John%20Ousterhout | John Kenneth Ousterhout (born October 15, 1954) is a professor of computer science at Stanford University. He founded Electric Cloud with John Graham-Cumming. Ousterhout was a professor of computer science at University of California, Berkeley where he created the Tcl scripting language and the Tk platform-independent widget toolkit, and proposed the idea of coscheduling. Ousterhout led the research group that designed the experimental Sprite operating system and the first log-structured file system.
Ousterhout also led the team that developed the Magic VLSI computer-aided design (CAD) program.
He received his bachelor's degree in physics from Yale University in 1975, and his Ph.D. in computer science from Carnegie Mellon University in 1980.
Ousterhout received the Grace Murray Hopper Award in 1987 for his work on Electronic design automation CAD systems for very-large-scale integrated circuits. For the same work, he was inducted in 1994 as a Fellow of the Association for Computing Machinery. Ousterhout was elected a member of the National Academy of Engineering in 2001 for improving our ability to program computers by raising the level of abstraction.
In 1994, Ousterhout left Berkeley to join Sun Microsystems Laboratories, which hired a team to join him in Tcl development. After several years at Sun, he left and co-founded Scriptics, Inc. (later renamed Ajuba Solutions) in January 1998 to provide professional Tcl development tools. Most of the Tcl team followed him from Sun. Ajuba was purchased by Interwoven in October 2000. He joined the faculty of Stanford University in 2008.
Selected works
A Philosophy of Software Design, (Yaknyam Press, 2018, )
See also
Ousterhout's dichotomy
Raft (computer science)
References
External links
John's recounting of Tcl's early days
Ousterhout's web page at Stanford University
Computer programmers
Stanford University School of Engineering faculty
University of California, Berkeley faculty
Fellows of the Association for Computing Machinery
Carnegie Mellon University alumni
Grace Murray Hopper Award laureates
Yale University alumni
Living people
1954 births
Programming language designers
Place of birth missing (living people)
American computer scientists
Members of the United States National Academy of Engineering
Sun Microsystems people |
https://en.wikipedia.org/wiki/Reification | Reification may refer to:
Science and technology
Reification (computer science), the creation of a data model
Reification (knowledge representation), the representation of facts and/or assertions
Reification (statistics), the use of an idealized model to make inferences linking results from a model with experimental observations
Other uses
Reification (fallacy), the fallacy of treating an abstraction as if it were a real thing
Reification (Gestalt psychology), the perception of an object as having more spatial information than is present
Reification (linguistics), the transformation of a natural-language statement such that actions and events represented by it become quantifiable variables
Reification (Marxism), the consideration of an abstraction of an object as if it had living existence and abilities
See also
Concretization
Objectification, the treatment of an entity (such as a human or animal) as an object |
https://en.wikipedia.org/wiki/Server%20%28computing%29 | In computing, a server is a piece of computer hardware or software (computer program) that provides functionality for other programs or devices, called "clients". This architecture is called the client–server model. Servers can provide various functionalities, often called "services", such as sharing data or resources among multiple clients or performing computations for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device or may connect over a network to a server on a different device. Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers, and application servers.
Client–server systems are most often implemented by (and often identified with) the request–response model: a client sends a request to the server, which performs some action and sends a response back to the client, typically with a result or acknowledgment. Designating a computer as "server-class hardware" implies that it is specialized for running servers on it. This often implies that it is more powerful and reliable than standard personal computers, but alternatively, large computing clusters may be composed of many relatively simple, replaceable server components.
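The request–response exchange can be sketched as a minimal TCP interaction in Python. The single-request server, the "ack:" reply prefix and the use of an OS-assigned port are illustrative simplifications, not a real protocol:

```python
import socket
import threading

def run_server(ready: threading.Event):
    # Server: accept one request, send one response, then exit.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
    srv.listen(1)
    run_server.port = srv.getsockname()[1]
    ready.set()                         # tell the client which port to use
    conn, _ = srv.accept()
    request = conn.recv(1024)
    conn.sendall(b"ack: " + request)    # response acknowledging the request
    conn.close()
    srv.close()

def client_request(port: int, message: bytes) -> bytes:
    # Client: send one request and block until the response arrives.
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(message)
        return c.recv(1024)

ready = threading.Event()
t = threading.Thread(target=run_server, args=(ready,))
t.start()
ready.wait()
reply = client_request(run_server.port, b"hello")
t.join()
```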
History
The use of the word server in computing comes from queueing theory, where it dates to the mid 20th century, being notably used in (along with "service"), the paper that introduced Kendall's notation. In earlier papers, such as the , more concrete terms such as "[telephone] operators" are used.
In computing, "server" dates at least to RFC 5 (1969), one of the earliest documents describing ARPANET (the predecessor of Internet), and is contrasted with "user", distinguishing two types of host: "server-host" and "user-host". The use of "serving" also dates to early documents, such as RFC 4, contrasting "serving-host" with "using-host".
The Jargon File defines "server" in the common sense of a process performing service for requests, usually remote, with the 1981 (1.1.0) version reading:
Operation
Strictly speaking, the term server refers to a computer program or process (running program). Through metonymy, it refers to a device used for (or a device dedicated to) running one or several server programs. On a network, such a device is called a host. In addition to server, the words serve and service (as verb and as noun respectively) are frequently used, though servicer and servant are not. The word service (noun) may refer to the abstract form of functionality, e.g. Web service. Alternatively, it may refer to a computer program that turns a computer into a server, e.g. Windows service. Originally used as "servers serve users" (and "users use servers"), in the sense of "obey", today one often says that "servers serve data", in the same sense as "give". For instance, web servers "serve [up] web pages to users" or "service their requests".
The server is part |
https://en.wikipedia.org/wiki/CATIA | CATIA (, an acronym of computer-aided three-dimensional interactive application) is a multi-platform software suite for computer-aided design (CAD), computer-aided manufacturing (CAM), computer-aided engineering (CAE), 3D modeling and product lifecycle management (PLM), developed by the French company Dassault Systèmes.
Since it supports multiple stages of product development from conceptualization, design and engineering to manufacturing, it is considered a CAx-software and is sometimes referred to as a 3D product lifecycle management software suite. Like most of its competition, it facilitates collaborative engineering through an integrated cloud service and supports use across disciplines including surfacing and shape design, electrical, fluid and electronic systems design, mechanical engineering and systems engineering.
Besides being used in a wide range of industries from aerospace and defence to packaging design, CATIA has been used by architect Frank Gehry to design some of his signature curvilinear buildings and his company Gehry Technologies was developing their Digital Project software based on CATIA.
The software has been merged with the company's other software suite 3D XML Player to form the combined Solidworks Composer Player.
History
CATIA started as an in-house development in 1977 by French aircraft manufacturer Avions Marcel Dassault to provide 3D surface modeling and NC functions for the CADAM software they used at that time to develop the Mirage fighter jet. Initially named CATI (conception assistée tridimensionnelle interactive – French for interactive aided three-dimensional design ), it was renamed CATIA in 1981 when Dassault created the subsidiary Dassault Systèmes to develop and sell the software, under the management of its first CEO, Francis Bernard. Dassault Systèmes signed a non-exclusive distribution agreement with IBM, that was also selling CADAM for Lockheed since 1978. Version 1 was released in 1982 as an add-on for CADAM.
During the eighties CATIA saw wider adoption in the aviation and military industries with users such as Boeing and General Dynamics Electric Boat Corp.
Dassault Systèmes purchased CADAM from IBM in 1992, and the next year CATIA CADAM was released. During the nineties CATIA was ported, starting in 1996, from one to four Unix operating systems, and was entirely rewritten for version 5 in 1998 to support Windows NT. In the years prior to 2000, this caused incompatibility between versions, which led to $6.1 billion in additional costs due to delays in production of the Airbus A380.
With the launch of Dassault Systèmes 3DEXPERIENCE Platform in 2014, CATIA became available as a cloud version.
Release history
Gallery
See also
Comparison of computer-aided design editors
List of 3D computer graphics software
List of 3D rendering software
List of 3D modeling software
References
External links
History of CATIA
Computer-aided design software
Computer-aided manufacturing software
Co |
https://en.wikipedia.org/wiki/Amiga%20demos | Amiga demos are demos created for the Amiga home computer.
A "demo" is a demonstration of the multimedia capabilities of a computer (or more to the point, a demonstration of the skill of the demo's constructors). There was intense rivalry during the 1990s among the best programmers, graphic artists and computer musicians to continually outdo each other's demos. Since the Amiga's hardware was more or less fixed (unlike today's PC industry, where arbitrary combinations of hardware can be put together), there was competition to test the limits of that hardware and perform theoretically "impossible" feats by refactoring the problem at hand. In Europe the Amiga was the undisputed leader of mainstream multimedia computing in the late 1980s and early 1990s, though it was eventually overtaken by PC architecture.
Some Amiga demos, such as the RSI Megademo, Kefrens Megademo VIII or Crionics & The Silents "Hardwired" are considered seminal works in the demo field. New Amiga demos are released even today, although the demo scene has firmly moved onto PC hardware. Many Amiga game developers were active in the demo scene.
The demo scene spearheaded development in multimedia programming techniques for the Amiga, such that it was de rigueur for the latest visual tricks, soundtrackers and 3D algorithms from the demo scene to end up being used in computer game development.
Demo software
Most demos were written in 68000 assembly language, although a few were written in C and other languages. To utilize full hardware performance, Amiga demos were optimized and written entirely for one purpose in assembly (avoiding generic and portable code). Additional performance was achieved by utilizing several coprocessors, including a blitter, in parallel with the 68000. Most demos bypassed the operating system and addressed the hardware directly.
The first larger demos were released in 1987. One of them, "Tech Tech" by Sodan & Magician 42, released in November 1987, is considered a classic by many.
Eric Schwartz produced a series of animated demos that ran with MoviePlayer, an animation software package similar to Toon Boom. The animated demos drew heavily on the whimsy and graphic style of comic strips.
Red Sector Incorporated produced a piece of software called the RSI Demomaker, which allowed users to script their own demos, replete with scrolltext, vectorballs, plasma screens, etc.
Full demos range from under 128 KB to several megabytes. There have been several thousand demos produced in many countries. Some active demo countries were Denmark, Finland, Germany, Italy, the Netherlands, Norway, Sweden, UK, Poland and others.
Intros
Smaller demos are often known as intros. They are typically limited to between 4 and 64 KB in size. Intros were originally used as tags by cracking groups on computer games and other software. The purpose of an intro was to advertise the cracking and distribution skills of a particular group. Later it developed into a stand-alone art |
https://en.wikipedia.org/wiki/Data%20communication | Data communication or digital communications, including data transmission and data reception, is the transfer and reception of data in the form of a digital bitstream or a digitized analog signal transmitted over a point-to-point or point-to-multipoint communication channel. Examples of such channels are copper wires, optical fibers, wireless communication using radio spectrum, storage media and computer buses. The data are represented as an electromagnetic signal, such as an electrical voltage, radio wave, microwave, or infrared signal.
Analog transmission is a method of conveying voice, data, image, signal or video information using a continuous signal which varies in amplitude, phase, or some other property in proportion to that of a variable. The messages are either represented by a sequence of pulses by means of a line code (baseband transmission), or by a limited set of continuously varying waveforms (passband transmission), using a digital modulation method. The passband modulation and corresponding demodulation is carried out by modem equipment. According to the most common definition of digital signal, both baseband and passband signals representing bit-streams are considered as digital transmission, while an alternative definition only considers the baseband signal as digital, and passband transmission of digital data as a form of digital-to-analog conversion.
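To make the baseband/passband distinction above concrete, here is a minimal, hypothetical Python sketch of one baseband line code (Manchester encoding), in which each bit of the stream is mapped to a pair of signal levels; passband transmission would instead modulate such levels onto a carrier. The function name and the 1/0 level convention are illustrative assumptions, not a reference implementation.

```python
def manchester_encode(bits):
    """Baseband line code: each bit becomes a high-low or low-high level pair."""
    signal = []
    for b in bits:
        # 1 -> high-then-low, 0 -> low-then-high (one common convention;
        # the opposite convention is also in use)
        signal.extend([1, 0] if b else [0, 1])
    return signal

print(manchester_encode([1, 0, 1]))  # [1, 0, 0, 1, 1, 0]
```

Because every bit contains a mid-bit transition, the receiver can recover the clock from the signal itself, which is why such line codes are used on baseband channels.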
Data transmitted may be digital messages originating from a data source, for example, a computer or a keyboard. It may also be an analog signal such as a phone call or a video signal, digitized into a bit-stream, for example, using pulse-code modulation or more advanced source coding schemes. This source coding and decoding is carried out by codec equipment.
Distinction between related subjects
Courses and textbooks in the field of data transmission as well as digital transmission and digital communications have similar content.
Digital transmission or data transmission traditionally belongs to telecommunications and electrical engineering. Basic principles of data transmission may also be covered within the computer science or computer engineering topic of data communications, which also includes computer networking applications and communication protocols, for example routing, switching and inter-process communication. Although the Transmission Control Protocol (TCP) involves transmission, TCP and other transport layer protocols are covered in computer networking but not discussed in a textbook or course about data transmission.
In most textbooks, the term analog transmission only refers to the transmission of an analog message signal (without digitization) by means of an analog signal, either as a non-modulated baseband signal or as a passband signal using an analog modulation method such as AM or FM. It may also include analog-over-analog pulse-modulated baseband signals such as pulse-width modulation. In a few books within the computer networking tradition, analog tran |
https://en.wikipedia.org/wiki/Data%20mining | Data mining is the process of extracting and discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems. Data mining is an interdisciplinary subfield of computer science and statistics with an overall goal of extracting information (with intelligent methods) from a data set and transforming the information into a comprehensible structure for further use. Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD. Aside from the raw analysis step, it also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating.
The term "data mining" is a misnomer because the goal is the extraction of patterns and knowledge from large amounts of data, not the extraction (mining) of data itself. It also is a buzzword and is frequently applied to any form of large-scale data or information processing (collection, extraction, warehousing, analysis, and statistics) as well as any application of computer decision support system, including artificial intelligence (e.g., machine learning) and business intelligence. The book Data Mining: Practical Machine Learning Tools and Techniques with Java (which covers mostly machine learning material) was originally to be named Practical Machine Learning, and the term data mining was only added for marketing reasons. Often the more general terms (large scale) data analysis and analytics—or, when referring to actual methods, artificial intelligence and machine learning—are more appropriate.
The actual data mining task is the semi-automatic or automatic analysis of large quantities of data to extract previously unknown, interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection), and dependencies (association rule mining, sequential pattern mining). This usually involves using database techniques such as spatial indices. These patterns can then be seen as a kind of summary of the input data, and may be used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection, data preparation, nor result interpretation and reporting is part of the data mining step, although they do belong to the overall KDD process as additional steps.
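As an illustration of one of the pattern types named above (anomaly detection), the following is a toy Python sketch rather than any particular production algorithm: it flags records lying more than k standard deviations from the mean. The function name and the default threshold are assumptions made for this example.

```python
from statistics import mean, stdev

def anomalies(values, k=3.0):
    """Flag values lying more than k standard deviations from the mean."""
    m, s = mean(values), stdev(values)
    return [v for v in values if abs(v - m) > k * s]

print(anomalies([10] * 20 + [100]))  # [100]
```

Real anomaly-detection methods are usually more robust (the mean and standard deviation here are themselves distorted by the outliers being sought), but the sketch shows the shape of the task: a statistical model of "normal" records, and a rule that surfaces the records that do not fit it.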
The difference between data analysis and data mining is that data analysis is used to test models and hypotheses on the dataset, e.g., analyzing the effectiveness of a marketing campaign, regardless of the amount of data. In contrast, data mining uses machine learning and statistical models to uncover clandestine or hidden patterns in a large volume of data.
The |
https://en.wikipedia.org/wiki/MIDI%20Maze | MIDI Maze is a networked first-person shooter maze game for the Atari ST developed by Xanth Software F/X and released in 1987 by Hybrid Arts. The game takes place in a maze of untextured walls. The world animates smoothly as the player turns, much like the earlier Wayout, instead of only permitting 90 degree changes of direction. Using the MIDI ports on the Atari ST, the game is said to have introduced deathmatch combat to gaming in 1987. It also predated the LAN party concept by several years. The game found a wider audience when it was converted to Faceball 2000 on the Game Boy.
Gameplay
Up to 16 computers can be networked in a "MIDI Ring" by daisy chaining MIDI ports that are built into the Atari ST series.
The game area occupies only roughly a quarter of the screen and consists of a first-person view of a flat-shaded maze with a crosshair in the middle. All players are shown as Pac-Man-like smiley avatars in various colors. Bullets are represented as small spheres.
The game is started by a designated master machine, which sets rules, divides players into teams, and selects a maze. A number of mazes come with the game, and additional mazes can be constructed using a text-editor.
Development
The original MIDI Maze team at Xanth Software F/X consisted of James Yee as the business manager, Michael Park as the graphic and networking programmer, and George Miller writing the AI and drone logic.
Ports
A Game Boy version was developed by Xanth, and published in 1991 by Bullet-Proof Software, with the title Faceball 2000. James Yee, owner of Xanth, had the idea of porting the 520ST application to the Game Boy. George Miller was hired to rewrite the AI-based drone logic, giving each drone a unique personality trait. This version allows two players with a Game Link Cable, or up to four players with the Four Player Adapter.
It is often rumored that the Game Boy version would allow up to 16 players by daisy-chaining Four Player Adapters, which is not the case. According to programmer Robert Champagne, the game does contain a 16-player mode; however, it requires a special connector that would be bundled with the game, to create a "chain" of Game Link Cables. As Nintendo did not allow them to do so, that connector was never released, so the 16-player mode cannot be enabled using Game Boy systems. However, players can daisy chain 16 Game Boy Advance link cables into a huge loop, each purple end of the link cable connecting into the box of another cable, and plug the gray ends into each GBA.
A Super Nintendo version, also titled Faceball 2000, was released in 1992, supporting two players in split-screen mode. This version features completely different graphics and levels from the earlier Game Boy version. A variety of in-game music for this version was composed by George "The Fat Man" Sanger.
A Game Gear version, also titled Faceball 2000, was released to the Japanese market by Riverhill Soft. It is a colorized version of the monochrome Game Boy ver |
https://en.wikipedia.org/wiki/Arity | In logic, mathematics, and computer science, arity () is the number of arguments or operands taken by a function, operation or relation. In mathematics, arity may also be called rank, but this word can have many other meanings. In logic and philosophy, arity may also be called adicity and degree. In linguistics, it is usually named valency.
Examples
In general, functions or operators with a given arity follow the naming conventions of n-based numeral systems, such as binary and hexadecimal. A Latin prefix is combined with the -ary suffix. For example:
A nullary function takes no arguments.
Example:
A unary function takes one argument.
Example:
A binary function takes two arguments.
Example:
A ternary function takes three arguments.
Example:
An n-ary function takes n arguments.
Example:
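The concrete examples for each arity were lost from the list above in extraction; as a hedged reconstruction, here are plausible Python illustrations of each case (the function names are invented for this sketch, not taken from the original):

```python
import math

def pi():                  # nullary: no arguments
    return math.pi

def neg(x):                # unary: one argument
    return -x

def add(x, y):             # binary: two arguments
    return x + y

def clamp(x, lo, hi):      # ternary: three arguments
    return max(lo, min(x, hi))

def total(*xs):            # n-ary: any number of arguments
    return sum(xs)
```

Note that in Python, as in most languages, n-ary functions are a convenience over a single sequence argument; the lambda-calculus view mentioned below instead treats every function as unary and models additional arguments by currying.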
Nullary
A constant can be considered an operation of arity 0, called a nullary.
Also, outside of functional programming, a function without arguments can be meaningful and not necessarily constant (due to side effects). Such functions may have some hidden input, such as global variables or the whole state of the system (time, free memory, etc.).
Unary
Examples of unary operators in mathematics and in programming include the unary minus and plus, the increment and decrement operators in C-style languages (not in logical languages), and the successor, factorial, reciprocal, floor, ceiling, fractional part, sign, absolute value, square root (the principal square root), complex conjugate (unary of "one" complex number, that however has two parts at a lower level of abstraction), and norm functions in mathematics. In programming the two's complement, address reference, and the logical NOT operators are examples of unary operators.
All functions in lambda calculus and in some functional programming languages (especially those descended from ML) are technically unary, but see n-ary below.
According to Quine, the Latin distributives being singuli, bini, terni, and so forth, the term "singulary" is the correct adjective, rather than "unary". Abraham Robinson follows Quine's usage.
In philosophy, the adjective monadic is sometimes used to describe a one-place relation such as 'is square-shaped' as opposed to a two-place relation such as 'is the sister of'.
Binary
Most operators encountered in programming and mathematics are of the binary form. For both programming and mathematics, these include the multiplication operator, the radix operator, the often omitted exponentiation operator, the logarithm operator, the addition operator, and the division operator. Logical predicates such as OR, XOR, AND, IMP are typically used as binary operators with two distinct operands. In CISC architectures, it is common to have two source operands (and store result in one of them).
Ternary
The computer programming language C and its various descendants (including C++, C#, Java, Julia, Perl, and others) provide the ternary conditional operator ?:. The first operand |
https://en.wikipedia.org/wiki/John%20Perry%20Barlow | John Perry Barlow (October 3, 1947February 7, 2018) was an American poet, essayist, cattle rancher, and cyberlibertarian political activist who had been associated with both the Democratic and Republican parties. He was also a lyricist for the Grateful Dead, a founding member of the Electronic Frontier Foundation and the Freedom of the Press Foundation, and an early fellow at Harvard University's Berkman Klein Center for Internet & Society.
Early life and education
Barlow was born in Sublette County, Wyoming near the town of Cora, the only child of Norman Walker Barlow (1905–1972), a Republican state legislator, and his wife, Miriam Adeline Barlow ( Jenkins, later Bailey; 1905–1999), who married in 1929.
Barlow's paternal ancestors were Mormon pioneers. He grew up on Bar Cross Ranch in Cora, Wyoming, a property his great-uncle founded in 1907, and attended elementary school in a one-room schoolhouse. Raised as a devout Mormon, he was prohibited from watching television until the sixth grade, when his parents allowed him to "absorb televangelists".
Although Barlow's academic record was erratic throughout his secondary education, he "had his pick of top eastern universities... simply because he was from Wyoming, where few applications originated". In 1969, he graduated from Wesleyan University's College of Letters. He claimed to have served as Wesleyan's student body president until the administration "tossed him into a sanitarium" following a drug-induced suicide attempt in Boston, Massachusetts. After two weeks of rehabilitation, he returned to his studies. In his senior year, he became a part-time resident of New York City's East Village and immersed himself in Andy Warhol's Factory demimonde, cultivating a friendship with Rene Ricard and developing a brief addiction to heroin.
As he neared graduation, Barlow was admitted to Harvard Law School and was contracted to write a novel by Farrar, Straus and Giroux at the behest of his mentor, the autodidactic Pulitzer Prize-winning novelist and historian Paul Horgan. Initially supported by a $5,000 or $1,000 advance from the publisher, he decided to eschew these options in favor of spending the next two years traveling around the world, including a nine-month sojourn in India, a riotous winter in a summer cottage on Long Island Sound in Connecticut, and a screenwriting foray in Los Angeles. Barlow eventually finished the novel, but it was rejected by several publishers (including Farrar, Straus and Giroux) and remains unpublished. During this period, he also "lived beside Needle Park on New York's Upper West Side and dealt cocaine in Spanish Harlem".
Career
Grateful Dead
At age 15, Barlow became a student at the Fountain Valley School in Colorado Springs, Colorado. There he met Bob Weir, who later co-founded the Grateful Dead. Weir and Barlow maintained a close friendship through the years.
As a frequent visitor during college to Timothy Leary's facility in Millbrook, New York, Barlow |
https://en.wikipedia.org/wiki/Physical%20modelling%20synthesis | Physical modelling synthesis refers to sound synthesis methods in which the waveform of the sound to be generated is computed using a mathematical model, a set of equations and algorithms to simulate a physical source of sound, usually a musical instrument.
General methodology
Modelling attempts to replicate laws of physics that govern sound production, and will typically have several parameters, some of which are constants that describe the physical materials and dimensions of the instrument, while others are time-dependent functions describing the player's interaction with the instrument, such as plucking a string, or covering toneholes.
For example, to model the sound of a drum, there would be a mathematical model of how striking the drumhead injects energy into a two-dimensional membrane. Incorporating this, a larger model would simulate the properties of the membrane (mass density, stiffness, etc.), its coupling with the resonance of the cylindrical body of the drum, and the conditions at its boundaries (a rigid termination to the drum's body), describing its movement over time and thus its generation of sound.
Similar stages to be modelled can be found in instruments such as a violin, though the energy excitation in this case is provided by the slip-stick behavior of the bow against the string, the width of the bow, the resonance and damping behavior of the strings, the transfer of string vibrations through the bridge, and finally, the resonance of the soundboard in response to those vibrations.
In addition, the same concept has been applied to simulate voice and speech sounds. In this case, the synthesizer includes mathematical models of the vocal fold oscillation and associated laryngeal airflow, and the consequent acoustic wave propagation along the vocal tract. Further, it may also contain an articulatory model to control the vocal tract shape in terms of the position of the lips, tongue and other organs.
Although physical modelling was not a new concept in acoustics and synthesis, having been implemented using finite difference approximations of the wave equation by Hiller and Ruiz in 1971, it was not until the development of the Karplus-Strong algorithm, the subsequent refinement and generalization of the algorithm into the extremely efficient digital waveguide synthesis by Julius O. Smith III and others, and the increase in DSP power in the late 1980s that commercial implementations became feasible.
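A minimal Python sketch of the Karplus-Strong algorithm mentioned above (parameter values such as the 0.996 decay factor are illustrative assumptions, not taken from any particular implementation): a delay line filled with noise is repeatedly averaged, yielding an exponentially decaying, string-like tone.

```python
import random

def karplus_strong(frequency, duration, sample_rate=44100):
    """Synthesize a plucked-string tone with the basic Karplus-Strong loop."""
    period = int(sample_rate / frequency)          # delay-line length sets the pitch
    # A burst of noise models the energy injected by the "pluck"
    buf = [random.uniform(-1.0, 1.0) for _ in range(period)]
    out = []
    for _ in range(int(duration * sample_rate)):
        first = buf.pop(0)
        # Averaging adjacent samples acts as a lowpass filter; the 0.996
        # factor (an assumed value) makes the tone decay over time.
        buf.append(0.996 * 0.5 * (first + buf[0]))
        out.append(first)
    return out

samples = karplus_strong(440.0, 0.5)  # roughly an A4 pluck lasting half a second
```

The algorithm's cheapness is visible here: one buffer read, one average, and one write per output sample, which is why it became practical long before general digital waveguide synthesis did.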
Yamaha contracted with Stanford University in 1989 to jointly develop digital waveguide synthesis; subsequently, most patents related to the technology are owned by Stanford or Yamaha.
The first commercially available physical modelling synthesizer made using waveguide synthesis was the Yamaha VL1 in 1994.
While the efficiency of digital waveguide synthesis made physical modelling feasible on common DSP hardware and native processors, the convincing emulation of physical instruments often requires the introduction of non-linea |
https://en.wikipedia.org/wiki/Curl%20%28programming%20language%29 | Curl is a reflective object-oriented programming language for interactive web applications whose goal is to provide a smoother transition between formatting and programming. It makes it possible to embed complex objects in simple documents without needing to switch between programming languages or development platforms. The Curl implementation initially consisted of just an interpreter, but a compiler was added later.
Curl combines text markup (as in HTML), scripting (as in JavaScript), and heavy-duty computing (as in Java, C#, or C++) within one unified framework. It is used in a range of internal enterprise, B2B, and B2C applications.
Curl programs may be compiled into Curl applets, which are viewed using the Curl RTE, a runtime environment with a plugin for web browsers. Currently it is supported only on Microsoft Windows; support for Linux and macOS was dropped on March 25, 2019 (starting with version 8.0.10). Curl supports "detached applets": web-deployed applets that run on the user's desktop independent of a browser window, much as in Silverlight 3 and Adobe AIR.
Architecture
The Curl language attempts to address a long-standing problem: the different building blocks that make up any modern web document most often require wildly different methods of implementation: different languages, different tools, different frameworks, often completely different teams. The final — and often most difficult — hurdle has been getting all of these blocks to communicate with each other in a consistent manner. Curl attempts to side-step these problems by providing a consistent syntactic and semantic interface at all levels of web content creation: from simple HTML to complex object-oriented programming.
Curl is a markup language like HTML—that is, plain text is shown as text; at the same time, Curl includes an object-oriented programming language that supports multiple inheritance. Curl applications are not required to observe the separation of information, style, and behavior that HTML, Cascading Style Sheets (CSS), and JavaScript have imposed, although that style of programming can be used in Curl if desired.
While the Curl language can be used as an HTML replacement for presenting formatted text, its abilities range all the way to those of a compiled, strongly typed, object-oriented system programming language. Both the authoring (HTML-level) and programming constructs of Curl can be extended in user code. The language is designed so Curl applications can be compiled to native code of the client machine by a just-in-time compiler and run at high speed. Curl applets can also be written so that they will run off-line when disconnected from the network (occasionally connected computing). In fact, the Curl IDE is an application written in Curl.
Syntax
A simple Curl applet for HelloWorld might be
{Curl 7.0, 8.0 applet}
{text
color = "blue",
font-size = 16pt,
Hello World}
This code will run if the user has at least one of the Curl versions 7. |
https://en.wikipedia.org/wiki/DATR | DATR is a language for lexical knowledge representation. The lexical knowledge is encoded in a network of nodes. Each node has a set of attributes encoded with it. A node can represent a word or a word form.
DATR was developed in the late 1980s by Roger Evans, Gerald Gazdar and Bill Keller, and used extensively in the 1990s; the standard specification is contained in the Evans and Gazdar RFC, available on the Sussex website (below). DATR has been implemented in a variety of programming languages, and several implementations are available on the internet, including an RFC compliant implementation at the Bielefeld website (below).
DATR is still used for encoding inheritance networks in various linguistic and non-linguistic domains and is under discussion as a standard notation for the representation of lexical information.
References
External links
DATR at the University of Sussex
DATR repository and RFC compliant ZDATR implementation at Universität Bielefeld
Natural language processing
Domain-specific knowledge representation languages |