id int64 (39–79M) | url string (31–227 chars) | text string (6–334k chars) | source string (1–150 chars, nullable ⌀) | categories list (1–6 items) | token_count int64 (3–71.8k) | subcategories list (0–30 items) |
|---|---|---|---|---|---|---|
15,859,865 | https://en.wikipedia.org/wiki/R.%20D.%20Lawrence | Ronald Douglas Lawrence (September 12, 1921 – November 27, 2003) was a Canadian naturalist and wildlife author. He was an expert on the wildlife of Canada, on which he wrote more than thirty books, which have been published in 14 languages.
Biography
Early life: Spain, Britain, and war
Lawrence, one of five children, was born in 1921 on a British passenger ship in the Bay of Biscay off the coast of Spain. His mother was Spanish and his father, a journalist, was British. As a child in northern Spain, Lawrence became interested in nature. In his autobiography, he described his young self as a "happy loner engrossed in the natural world ... I cannot recall a single day when I was bored".
At 14, Lawrence lied about his age so that he could join the Republicans in the Spanish Civil War, and killed a man on his first night to save himself. He served for two years, until in 1938 he found himself outnumbered in the Pyrenees and fled to France. He soon traveled to Britain to rejoin his family, and during the summer of 1939 he worked on the Queen Mary. With the arrival of World War II, he enlisted with the British and spent another five years at war, beginning in September 1939. He was a tank gunner at Dunkirk and in North Africa, and took part in the D-Day landings in Normandy, where he was seriously injured in August 1944. His leg was full of shrapnel, but he refused to have it amputated; instead, doctors removed 37 of 39 pieces of shrapnel, and Lawrence determinedly exercised the limb until he no longer walked with a limp. His war experiences scarred him, and are recounted in his 1994 autobiography, which he wrote reluctantly but at the encouragement of his publisher.
After the war, he studied biology at Cambridge University for four years but did not complete his degree. He found academia stifling, bothered by a pedagogy that ignored the first-hand experience of nature. Following Cambridge, he went back to Spain, where he worked as a journalist and novelist. He married a British woman and they had a son in 1953 after returning to Britain.
In Canada
Lawrence moved to Canada alone in June 1954, later reflecting, "it was as if I had come home after a long absence". Living in Toronto, he became a reporter for the Toronto Star. Within six months he drove west to Rainy River, Ontario and purchased 100 acres of land for a homestead. His wife and son joined him in July 1955. For three years he made a living cutting timber from Crown land and selling it to mills. He and his wife had a second child, but his wife found their Canadian lifestyle lacking, so she took the children to England and filed for divorce. Lawrence was left in the company of his four dogs, one part wolf, which he developed into a sled dog team. During this period Lawrence also sold fur pelts, but he came to view animal trapping as cruel, and stopped.
In 1957 Lawrence left the area and headed west, embarking on months-long wilderness excursions, which he supported by working for newspapers and living frugally. In southeastern British Columbia he tracked a cougar for nine months, and in Ontario he observed a beaver colony for six months. Lawrence was extremely dedicated to wildlife observation, and believed that his close but unobtrusive study provided new insight into animal behavior. He said, for example, that he witnessed starving rabbits commit suicide by running their heads into trees, and saw herbivores like beavers consume flesh. During this period he met Joan (d. 1969), his second wife, in Winnipeg, with whom he raised orphaned animals. By this time Lawrence had rescued moose calves, bear cubs, and wolves. His decades of wilderness adventure and study were the material of his nature writing.
Lawrence returned to Ontario and married Sharon, his third wife. They settled on a 100-acre property—his last home, called "Wolf Hollow"—in the Haliburton Highlands. He and Sharon rehabilitated wildlife, and Lawrence now lectured and supervised graduate students. According to his wife, "Ron mentored hundreds of young people ... so they in turn could continue to educate and influence younger generations".
The naturalist acknowledged Henry David Thoreau as a great influence in his life: "At fourteen I read Walden and was deeply impressed by one of Thoreau's sentences: 'In wildness is the preservation of the world.'" He was concerned and angry about humankind's treatment of nature, but generally kept his opinions out of his writing; he opposed clear-cut logging and the hunts of bears and wolves that were organized to control deer populations. He loved wolves and considered them the "ultimate stabilizers" in their ecosystems. In his memoir (The Green Trees Beyond, 1994), he wrote of his preference for nature over civilization, and explained how (per one reviewer) "he was taught to love for the first time by a wolf". He reared a number of abandoned wolves: in his memoir, he wrote, "We cannot make up our minds whether we are a family of four people or a pack of four wolves ... The wolves, of course, have no doubt about the matter. They see us as a pack, Sharon fulfilling the role of much loved materfamilias, myself as the pack leader."
Lawrence finished his last book in 1997, and had four in progress at the time of his death. Cry Wild (1970), a book about wolves, is Lawrence's most popular; a 1991 reprint in the United States sold 1.5 million copies in three months. Lawrence kept a low profile, which may explain his relative lack of fame in Canada, but his writing brought him much attention: he and Sharon received six thousand visitors to their Haliburton homestead over 12 years. Lawrence died of Alzheimer's disease on November 27, 2003, in Haliburton County, Ontario.
Books
The following is a list of most of Lawrence's works.
Awards
1967 and 1968 – Frank H. Kortright Award, for "excellence of writing in the field of conservation"
1980 – Best non-fiction paperback, Canadian Paperback Publishers Association, for The North Runner
1981 – Honorary member of the Mark Twain Society for "contribution to conservation writing"
1984 – Best non-fiction award, Canadian Authors Association, for The Ghost Walker
1993 – Commemorative Medal of Canada, presented by Ontario Lieutenant Governor Henry N.R. Jackman, in recognition of "your contribution and service to your community"
2004 – Lifetime Achievement in Wildlife and Wilderness Conservation Through Writing, from Earthroots, a Canadian conservation organization
2007 – Lifetime Achievement Award, International Fund for Animal Welfare, awarded posthumously for Lawrence's "passion, dedication and commitment to animals and the natural environment"
References
External links
Crywild – R. D. Lawrence Official Website
Books by R. D. Lawrence
1921 births
2003 deaths
Canadian naturalists
Canadian nature writers
People born at sea
Canadian conservationists
People from Haliburton County
Ethologists
Neurological disease deaths in Ontario
Deaths from Alzheimer's disease in Canada
20th-century Canadian zoologists
20th-century naturalists
International Brigades personnel
Child soldiers
British Army personnel of World War II
British emigrants to Canada
British expatriates in Spain | R. D. Lawrence | [
"Biology"
] | 1,488 | [
"Ethology",
"Behavior",
"Ethologists"
] |
13,179,037 | https://en.wikipedia.org/wiki/GPER | G protein-coupled estrogen receptor 1 (GPER), also known as G protein-coupled receptor 30 (GPR30), is a protein that in humans is encoded by the GPER gene. GPER binds to and is activated by the female sex hormone estradiol and is responsible for some of the rapid effects that estradiol has on cells.
Discovery
The classical estrogen receptors first characterized in 1958 are water-soluble proteins located in the interior of cells that are activated by estrogenic hormones such as estradiol and several of its metabolites such as estrone or estriol. These proteins belong to the nuclear hormone receptor class of transcription factors that regulate gene transcription. Since it takes time for genes to be transcribed into RNA and translated into protein, the effects of estrogens binding to these classical estrogen receptors are delayed. However, estrogens are also known to have effects that are too fast to be caused by regulation of gene transcription. In 2005, it was discovered that a member of the G protein-coupled receptor (GPCR) family, GPR30, also binds with high affinity to estradiol and is responsible in part for the rapid non-genomic actions of estradiol. Based on its ability to bind estradiol, GPR30 was renamed G protein-coupled estrogen receptor (GPER). GPER is localized in the plasma membrane but is predominantly detected in the endoplasmic reticulum.
Ligands
GPER binds estradiol with high affinity though not other endogenous estrogens, such as estrone or estriol, nor other endogenous steroids, including progesterone, testosterone, and cortisol. Although potentially involved in signaling by aldosterone, GPER does not show any detectable binding towards aldosterone. Niacin and nicotinamide bind to the receptor in vitro with very low affinity. CCL18 has been identified as an endogenous antagonist of the GPER. GPER-selective ligands (that do not bind the classical estrogen receptors) include the agonist G-1 and the antagonists G15 and G36.
Agonists
2-Methoxyestradiol
2,2',5'-PCB-4-OH
Afimoxifene
Aldosterone
Atrazine
Bisphenol A
Daidzein
DDT (p,p'-DDT, o,p'-DDE)
Diarylpropionitrile (DPN)
Equol
Estradiol
Ethynylestradiol
Fulvestrant (ICI-182780)
G-1
Genistein
GPER-L1
GPER-L2
Hydroxytyrosol
Kepone
LNS8801
Niacin
Nicotinamide
Nonylphenol
Oleuropein
Protocatechuic aldehyde
Propylpyrazoletriol (PPT)
Quercetin
Raloxifene
Resveratrol
STX
Tamoxifen
Tectoridin
Antagonists
CCL18
Estriol
G15
G36
MIBE
Unknown
Diethylstilbestrol
Zearalenone
Non-ligand
17α-Estradiol
Estrone
Function
This protein is a member of the rhodopsin-like family of G protein-coupled receptors and is a multi-pass membrane protein that localizes to the plasma membrane. The protein binds estradiol, resulting in intracellular calcium mobilization and synthesis of phosphatidylinositol (3,4,5)-trisphosphate in the nucleus. This protein therefore plays a role in the rapid nongenomic signaling events widely observed following stimulation of cells and tissues with estradiol. The distribution of GPER is well established in the rodent, with high expression observed in the hypothalamus, pituitary gland, adrenal medulla, kidney medulla and developing follicles of the ovary.
Role in cancer
GPER expression has been studied in cancer using immunohistochemical and transcriptomic approaches, and has been detected in: colon, lung, melanoma, pancreatic, breast, ovarian, and testicular cancer.
Many groups have demonstrated that GPER signaling is tumor suppressive in cancers that are not traditionally hormone responsive, including melanoma, pancreatic, lung and colon cancer. Additionally, many groups have demonstrated that GPER activation is also tumor suppressive in cancers that are classically considered sex hormone responsive, including endometrial cancer, ovarian cancer, prostate cancer, and Leydig cell tumors. Although GPER signaling was originally thought to be tumor promoting in some breast cancer models, subsequent reports show that GPER signaling inhibits breast cancer. Consistent with this, recent studies showed that the presence of GPER protein in human breast cancer tissue correlates with longer survival. In summary, many independent groups have demonstrated that GPER activation may be a therapeutically useful mechanism for a wide range of cancer types.
Linnaeus Therapeutics is currently running an NCI clinical trial (NCT04130516) using the GPER agonist LNS8801 as monotherapy and in combination with the immune checkpoint inhibitor pembrolizumab for the treatment of multiple solid tumor malignancies. Activation of GPER with LNS8801 has demonstrated efficacy in humans in cutaneous melanoma, uveal melanoma, lung cancer, neuroendocrine cancer, colorectal cancer, and other PD-1 inhibitor refractory cancers.
Role in normal tissues
Reproductive tissue
Estradiol produces cell proliferation in both normal and malignant breast epithelial tissue. However, GPER knockout mice show no overt mammary phenotype, unlike ERα knockout mice, but similarly to ERβ knockout mice. This indicates that although GPER and ERβ play a modulatory role in breast development, ERα is the main receptor responsible for estrogen-mediated breast tissue growth. GPER is expressed in germ cells and has been found to be essential for male fertility, specifically, in spermatogenesis. GPER has been found to modulate gonadotropin-releasing hormone (GnRH) secretion in the hypothalamic-pituitary-gonadal (HPG) axis.
Cardiovascular effects
GPER is expressed in the blood vessel endothelium and is responsible for the vasodilatory and, as a result, blood-pressure-lowering effects of 17β-estradiol. GPER also regulates components of the renin–angiotensin system, which also controls blood pressure, and is required for superoxide-mediated cardiovascular function and aging.
Central nervous system activity
GPER and ERα, but not ERβ, have been found to mediate the antidepressant-like effects of estradiol. Contrarily, activation of GPER has been found to be anxiogenic in mice, while activation of ERβ has been found to be anxiolytic. There is a high expression of GPER, as well as ERβ, in oxytocin neurons in various parts of the hypothalamus, including the paraventricular nucleus and the supraoptic nucleus. It is speculated that activation of GPER may be the mechanism by which estradiol mediates rapid effects on the oxytocin system, for instance, rapidly increasing oxytocin receptor expression. Estradiol has also been found to increase oxytocin levels and release in the medial preoptic area and medial basal hypothalamus, actions that may be mediated by activation of GPER and/or ERβ. Estradiol, as well as tamoxifen and fulvestrant, have been found to rapidly induce lordosis through activation of GPER in the arcuate nucleus of the hypothalamus of female rats.
Metabolic roles
Female GPER knockout mice display hyperglycemia and impaired glucose tolerance, reduced body growth, and increased blood pressure. Male GPER knockout mice are observed to have increased growth, body fat, insulin resistance and glucose intolerance, dyslipidemia, increased osteoblast function (mineralization), resulting in higher bone mineral density and trabecular bone volume, and persistent growth plate activity resulting in longer bones. The GPER-selective agonist G-1 shows therapeutic efficacy in mouse models of obesity and diabetes.
Role in neurological disorders
GPER is broadly expressed in the nervous system, and GPER activation promotes beneficial effects in several brain disorders. One study found that GPER levels were significantly lower in children with ADHD than in controls.
See also
Membrane estrogen receptor
Gq-mER
ER-X
ERx
References
External links
G protein-coupled receptors | GPER | [
"Chemistry"
] | 1,814 | [
"G protein-coupled receptors",
"Signal transduction"
] |
13,179,109 | https://en.wikipedia.org/wiki/TREX%20search%20engine | TREX is a search engine in the SAP NetWeaver integrated technology platform produced by SAP SE using columnar storage. The TREX engine is a standalone component that can be used in a range of system environments but is used primarily as an integral part of SAP products such as Enterprise Portal, Knowledge Warehouse, and Business Intelligence (BI, formerly SAP Business Information Warehouse). In SAP NetWeaver BI, the TREX engine powers the BI Accelerator, which is a plug-in appliance for enhancing the performance of online analytical processing. The name "TREX" stands for Text Retrieval and information EXtraction, but it is not a registered trademark of SAP and is not used in marketing collateral.
Search functions
TREX supports various kinds of text search, including exact search, boolean search, wildcard search, linguistic search (grammatical variants are normalized for the index search) and fuzzy search (input strings that differ by a few letters from an index term are normalized for the index search). Result sets are ranked using term frequency-inverse document frequency (tf-idf) weighting, and results can include snippets with the search terms highlighted.
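SAP does not publish TREX's internal ranking code, but the tf-idf weighting named above is a standard technique. The following is a minimal Python sketch of tf-idf ranking; all function and variable names are invented for illustration:

```python
import math
from collections import Counter

def tf_idf_scores(query_terms, documents):
    """Rank documents against query terms with tf-idf weighting.

    documents: list of token lists. Returns (doc_index, score)
    pairs, best match first.
    """
    n_docs = len(documents)
    # Document frequency: number of documents containing each term.
    df = Counter()
    for doc in documents:
        df.update(set(doc))

    scores = []
    for i, doc in enumerate(documents):
        tf = Counter(doc)
        score = 0.0
        for term in query_terms:
            if df[term] == 0:
                continue  # term absent from the corpus
            idf = math.log(n_docs / df[term])
            score += (tf[term] / len(doc)) * idf
        scores.append((i, score))
    return sorted(scores, key=lambda s: s[1], reverse=True)

docs = [
    "the engine indexes text".split(),
    "the engine ranks search results".split(),
    "columnar storage compresses data".split(),
]
print(tf_idf_scores(["engine", "search"], docs))  # doc 1 ranks first
```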
TREX supports text mining and classification using a vector space model. Groups of documents can be classified using query based classification, example based classification, or a combination of these plus keyword management.
TREX supports structured data search not only for document metadata but also for mass business data and data in SAP BusinessObjects. Indexes for structured data are implemented compactly using data compression and the data can be aggregated in linear time, to enable large volumes of data to be processed entirely in memory.
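The index format itself is proprietary, but dictionary encoding combined with a single linear aggregation pass is a common way columnar engines achieve the compactness and linear-time, in-memory aggregation described above. A minimal sketch, with all names hypothetical:

```python
from collections import defaultdict

def dictionary_encode(column):
    """Compress a column by storing each distinct value once."""
    dictionary, codes = [], []
    seen = {}
    for value in column:
        if value not in seen:
            seen[value] = len(dictionary)
            dictionary.append(value)
        codes.append(seen[value])
    return dictionary, codes

def aggregate_sum(group_dict, group_codes, measures):
    """Group-by sum in a single linear pass over the column."""
    totals = defaultdict(float)
    for code, amount in zip(group_codes, measures):
        totals[group_dict[code]] += amount
    return dict(totals)

regions = ["EMEA", "APJ", "EMEA", "AMER", "APJ"]
revenue = [10.0, 4.0, 6.0, 8.0, 2.0]
region_dict, region_codes = dictionary_encode(regions)
print(aggregate_sum(region_dict, region_codes, revenue))
# {'EMEA': 16.0, 'APJ': 6.0, 'AMER': 8.0}
```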
Recent developments include:
A join engine to join structured data from different fields in business objects
A fast update capability to write a delta index beside a main index and to merge them offline while a second delta index takes updates
A data mining feature pack for advanced mathematical analysis
History
The first code for the engine was written in 1998 and TREX became an SAP component in 2000. The SAP NetWeaver BI Accelerator was first rolled out in 2005. As of Q1 2013, the current release of TREX was SAP NW 7.1.
Security
A security vulnerability in TREX was first identified and fixed in 2015. The vulnerability occurred due to lack of authentication in TREXnet, an internal communication protocol. The aforementioned patch fixed the problem by removing some critical functionality.
Later on, ERPScan head of threat intelligence Mathieu Geli continued to look into the vulnerability and found that it was still exploitable. Moreover, in the case of a successful attack, the vulnerability would allow a remote attacker to gain full control over the server without authorization. The vulnerability was finally patched via SAP Security Note 2419592.
References
External links
SAP NetWeaver
SAP NetWeaver Business Intelligence
SAP NetWeaver Business Information Management
Search and Classification (TREX) on SAP Community Network
SAP NetWeaver
Information retrieval systems
Business intelligence software | TREX search engine | [
"Technology"
] | 600 | [
"Information technology",
"Information retrieval systems"
] |
13,179,368 | https://en.wikipedia.org/wiki/Doratomyces | Doratomyces (Dor-ah-toe-mice’-ees) is a genus of the fungi imperfecti, closely related to Scopulariopsis. Their conidiophores gather together to form a stalk-like inflorescence known as a synnema or coremia; Scopulariopsis being distinguished in their lack of such a structure.
Typically associated with decay, these fungi are usually found on dead wood and rotting plants, and in soil or dung. Economically, they can cause rot in potatoes, oats and corn.
References
Microascales | Doratomyces | [
"Biology"
] | 123 | [
"Fungus stubs",
"Fungi"
] |
13,180,391 | https://en.wikipedia.org/wiki/Anisohedral%20tiling | In geometry, a shape is said to be anisohedral if it admits a tiling, but no such tiling is isohedral (tile-transitive); that is, in any tiling by that shape there are two tiles that are not equivalent under any symmetry of the tiling. A tiling by an anisohedral tile is referred to as an anisohedral tiling.
Existence
The first part of Hilbert's eighteenth problem asked whether there exists an anisohedral polyhedron in Euclidean 3-space; Grünbaum and Shephard suggest that Hilbert was assuming that no such tile existed in the plane. Reinhardt answered Hilbert's problem in 1928 by finding examples of such polyhedra, and asserted that his proof that no such tiles exist in the plane would appear soon. However, Heesch then gave an example of an anisohedral tile in the plane in 1935.
Convex tiles
Reinhardt had previously considered the question of anisohedral convex polygons, showing that there were no anisohedral convex hexagons but being unable to show there were no such convex pentagons, while finding the five types of convex pentagon tiling the plane isohedrally. Kershner gave three types of anisohedral convex pentagon in 1968; one of these tiles the plane using only direct isometries, without reflections or glide reflections, answering a question of Heesch.
Isohedral numbers
The problem of anisohedral tiling has been generalised by saying that the isohedral number of a tile is the lowest number of orbits (equivalence classes) of tiles, over all tilings by that tile, under the action of the symmetry group of the tiling, and that a tile with isohedral number k is k-anisohedral. Berglund asked whether there exist k-anisohedral tiles for all k, giving examples for k ≤ 4 (examples of 2-anisohedral and 3-anisohedral tiles being previously known, while the 4-anisohedral tile given was the first such published tile). Goodman-Strauss considered this in the context of general questions about how complex the behaviour of a given tile or set of tiles can be, noting a 10-anisohedral example of Myers. Grünbaum and Shephard had previously raised a slight variation on the same question.
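For concreteness, the definition can be restated compactly; the notation iso(T) and Sym(T) is introduced here for illustration and is not taken from the cited authors:

```latex
% Isohedral number of a prototile T: the minimum number of tile
% orbits over all tilings \mathcal{T} by T, where the symmetry
% group Sym(\mathcal{T}) acts on the tiles of the tiling.
\[
  \operatorname{iso}(T) \;=\; \min_{\mathcal{T}}\;
  \bigl|\,\{\text{tiles of }\mathcal{T}\}\,/\,\operatorname{Sym}(\mathcal{T})\,\bigr|
\]
% T is anisohedral when iso(T) >= 2, and k-anisohedral when iso(T) = k.
```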
Socolar showed in 2007 that arbitrarily high isohedral numbers can be achieved in two dimensions if the tile is disconnected, or has coloured edges with constraints on what colours can be adjacent, and in three dimensions with a connected tile without colours, noting that in two dimensions for a connected tile without colours the highest known isohedral number is 10.
Joseph Myers has produced a collection of tiles with high isohedral numbers, particularly a polyhexagon with isohedral number 10 (occurring in 20 orbits under translation) and another with isohedral number 9 (occurring in 36 orbits under translation).
References
External links
John Berglund, Anisohedral Tilings Page
Joseph Myers, Polyomino, polyhex and polyiamond tiling
Tessellation | Anisohedral tiling | [
"Physics",
"Mathematics"
] | 619 | [
"Tessellation",
"Planes (geometry)",
"Euclidean plane geometry",
"Symmetry"
] |
13,180,501 | https://en.wikipedia.org/wiki/Phytomining | Phytomining, sometimes called agromining, is the concept of extracting heavy metals from the soil using plants. Specifically, phytomining is for the purpose of economic gain. The approach exploits the existence of hyperaccumulators, proteins or compounds secreted by plants to bind certain metal ions. These extracted ores are called bio-ores. A 2021 review concluded that the commercial viability of phytomining was "limited" because it is a slow and inefficient process.
History
Phytomining was first proposed in 1983 by Rufus Chaney, a USDA agronomist. He and Alan Baker, a University of Melbourne professor, first tested it in 1996. They, as well as Jay Scott Angle and Yin-Ming Li, filed a patent on the process in 1995 which expired in 2015.
Advantages
Phytomining would, in principle, cause minimal environmental effects compared to mining. Phytomining could also remove low-grade heavy metals from mine waste.
See also
Semisynthesis
References
Bioremediation
Biotechnology
Ecological restoration
Environmental terminology
Phytoremediation plants
Soil contamination
Sustainable technologies | Phytomining | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 235 | [
"Ecological restoration",
"Phytoremediation plants",
"Environmental chemistry",
"Biotechnology",
"Biodegradation",
"Environmental engineering",
"Ecological techniques",
"Soil contamination",
"nan",
"Bioremediation",
"Environmental soil science"
] |
13,181,273 | https://en.wikipedia.org/wiki/LUNA | LUNA is a computer product line of OMRON Tateishi Electric from the late 1980s to the early 1990s. The LUNA is a 20 MHz/m68030 desktop computer. NetBSD has supported the LUNA since 1.4.2, released in 2000.
The later Omron Luna 88K was available in two models, the DT8840 and TD8860, with one to four 25 MHz Motorola 88100 CPUs and 64 MB of RAM. The native operating systems were CMU Mach 2.5 and Omron UniOS.
References
http://www.3rz.org/mirrors/badabada.org/luna88k.html
External links
NetBSD/luna68k Information
Omron Luna 88k pictures
Personal computers
68k-based computers
Computer-related introductions in 1988 | LUNA | [
"Technology"
] | 167 | [
"Computing stubs",
"Computer hardware stubs"
] |
13,182,326 | https://en.wikipedia.org/wiki/Czes%C5%82aw%20Zakaszewski | Czesław Zakaszewski (19 July 1886 in Warsaw – 2 February 1959 in Warsaw) was a Polish hydro-technician and meliorator. Professor of the Warsaw University of Technology, member of the Warsaw Scientific Society.
He was the author of numerous technical projects, theses and textbooks.
Notable works
Books
Articles
"Zasoby wodne i gospodarowanie nimi jako przesłanki w planowaniu przestrzennym i lokalizacji"
"Perspektywy rozwiązań obecnych trudności gospodarki wodnej"
"Dolina Neru jako urządzenie gospodarcze"
Reports and co-reports
"Wpływ kanału żeglugi Żerań-Zegrze na stosunki wodne tarasu praskiego"
References
1886 births
1959 deaths
Hydrologists
Academic staff of the Warsaw University of Technology
Burials at Powązki Cemetery
Recipients of the Medal of the 10th Anniversary of the People's Republic of Poland | Czesław Zakaszewski | [
"Environmental_science"
] | 230 | [
"Hydrology",
"Hydrologists"
] |
13,182,827 | https://en.wikipedia.org/wiki/Weld%20quality%20assurance | Weld quality assurance is the use of technological methods and actions to test or assure the quality of welds, and secondarily to confirm the presence, location and coverage of welds. In manufacturing, welds are used to join two or more metal surfaces. Because these connections may encounter loads and fatigue during product lifetime, there is a chance they may fail if not created to proper specification.
Weld testing and analysis
Methods of weld testing and analysis are used to assure the quality and correctness of the weld after it is completed. This term generally refers to testing and analysis focused on the quality and strength of the weld but may refer to technological actions to check for the presence, position, and extent of welds. These are divided into destructive and non-destructive methods. A few examples of destructive testing include macro etch testing, fillet-weld break tests, transverse tension tests, and guided bend tests. Other destructive methods include acid etch testing, back bend testing, tensile strength break testing, nick break testing, and free bend testing. Non-destructive methods include fluorescent penetrant tests, magnaflux tests, eddy current (electromagnetic) tests, hydrostatic testing, tests using magnetic particles, X-rays and gamma ray-based methods, and acoustic emission techniques. Other methods include ferrite and hardness testing.
Imaging-based methods
Industrial Radiography
X-ray-based weld inspection may be manual, performed by an inspector on X-ray images or video, or automated using machine vision. Gamma rays can also be used.
Visible light imaging
Inspection may be manual, conducted by an inspector using imaging equipment, or automated using machine vision. Since the similarity of materials between weld and workpiece, and between good and defective areas, provides little inherent contrast, the latter usually requires methods other than simple imaging.
One (destructive) method involves the microscopic analysis of a weld cross-section.
Ultrasonic- and acoustic-based methods
Ultrasonic testing uses the principle that a gap in the weld changes the propagation of ultrasonic sound through the metal. One common method uses single-probe ultrasonic testing involving operator interpretation of an oscilloscope-type screen.
Another approach uses a 2D array of ultrasonic sensors. Conventional, phased-array and time-of-flight diffraction (TOFD) methods can be combined into the same piece of test equipment.
Acoustic emission methods monitor for the sound created by the loading or flexing of the weld.
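As a rough numeric illustration of the pulse-echo principle behind ultrasonic testing, the sketch below compares a measured echo time against the expected back-wall echo; the sound speed, plate thickness and tolerance are assumed values for illustration, not drawn from any standard:

```python
def expected_echo_time(thickness_m, sound_speed_m_s=5900.0):
    """Round-trip time for a pulse to the back wall and back.

    5900 m/s is a typical longitudinal sound speed in steel;
    real instruments are calibrated on reference blocks.
    """
    return 2.0 * thickness_m / sound_speed_m_s

def flag_possible_flaw(measured_echo_s, thickness_m, tolerance=0.05):
    """An echo arriving well before the back-wall echo suggests a
    reflector (for example a void or gap) inside the weld."""
    expected = expected_echo_time(thickness_m)
    return measured_echo_s < expected * (1.0 - tolerance)

# 12 mm plate: back-wall echo expected at about 4.07 microseconds.
print(expected_echo_time(0.012))            # ~4.07e-06 s
print(flag_possible_flaw(2.5e-06, 0.012))   # True: early echo, possible gap
```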
Peel testing of spot welds
This method includes tearing the weld apart and measuring the size of the remaining weld.
Weld monitoring
Weld monitoring methods ensure the weld's quality and correctness during welding. The term is generally applied to automated monitoring for weld-quality purposes and secondarily for process-control purposes such as vision-based robot guidance. Visual weld monitoring is also performed during the welding process.
In vehicular applications, weld monitoring aims to improve the quality, durability, and safety of vehicles, with cost savings from avoiding recalls to fix the large proportion of systemic quality problems that arise from suboptimal welding. Quality monitoring of automatic welding can save production downtime and reduce the need for product reworking and recall.
Industrial monitoring systems encourage high production rates and reduce scrap costs.
Inline coherent imaging
Inline coherent imaging (ICI) is a recently developed interferometric technique based on optical coherence tomography that is used for quality assurance of keyhole laser beam welding, a welding method that is gaining popularity in a variety of industries. ICI aims a low-powered broadband light source through the same optical path as the primary welding laser. The beam enters the keyhole of the weld and is reflected back into the head optics by the bottom of the keyhole. An interference pattern is produced by combining the reflected light with a separate beam that has traveled through a path of a known distance. This interference pattern is then analyzed to obtain a precise measurement of the depth of the keyhole. Because these measurements are acquired in real-time, ICI can also be used to control the laser penetration depth by using the depth measurement in a feedback loop that modulates the laser's output power.
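The depth-feedback loop described above is vendor-specific in practice, but its shape can be sketched as a simple proportional controller; the gain, power limits and depth readings below are invented for illustration and do not reflect any real ICI system:

```python
def control_laser_power(depth_measured_um, depth_target_um,
                        power_w, k_p=0.5, p_min=100.0, p_max=6000.0):
    """One step of a proportional controller on keyhole depth.

    Raises laser power when the keyhole is too shallow and lowers
    it when too deep; gain and power limits are illustrative only.
    """
    error = depth_target_um - depth_measured_um
    power_w += k_p * error
    return min(max(power_w, p_min), p_max)

power = 3000.0
for measured in [1800.0, 1900.0, 1980.0, 2050.0]:  # simulated ICI readings
    power = control_laser_power(measured, depth_target_um=2000.0,
                                power_w=power)
    print(round(power, 1))  # power converges as depth nears target
```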
Transient thermal analysis method
Transient thermal analysis is used for a range of weld optimization tasks.
Signature image processing method
Signature image processing (SIP) is a technology for analyzing electrical data collected from welding processes. Acceptable welding requires exact conditions; variations in conditions can render a weld unacceptable. SIP allows the identification of welding faults in real time, measures the stability of welding processes, and enables the optimization of welding processes.
Development
The idea of using electrical data analyzed by algorithms to assess the quality of the welds produced in robotic manufacturing emerged in 1995 from research by Associate Professor Stephen Simpson at the University of Sydney on the complex physical phenomena that occur in welding arcs. Simpson realized that a way of determining the quality of a weld could be developed without a definitive understanding of those phenomena.
The development involved:
a method for handling sampled data blocks by treating them as phase-space portrait signatures with appropriate image processing. Typically, one second's worth of sampled welding voltage and current data are collected from GMAW pulse or short arc welding processes. The data is converted to a 2D histogram, and signal-processing operations such as image smoothing are performed.
a technique for analyzing welding signatures based on statistical methods from the social sciences, such as principal component analysis. The relationship between the welding voltage and the current reflects the state of the welding process, and the signature image includes this information. Comparing signatures quantitatively using principal component analysis allows the spread of signature images to be measured, enabling faults to be detected and identified. The system includes algorithms and mathematics appropriate for real-time welding analysis on personal computers, and the multidimensional optimization of fault-detection performance using experimental welding data. Comparing signature images from moment to moment in a weld provides a useful estimate of how stable the welding process is. "Through-the-arc" sensing, by comparing signature images when the physical parameters of the process change, leads to quantitative estimates, for example of the position of the weld bead.
Unlike systems that log information for later study or use X-rays or ultrasound to check samples, SIP technology looks at the electrical signal and detects faults when they occur.
Data blocks of 4,000 points of electrical data are collected four times a second and converted to signature images. After image processing operations, statistical analyses of the signatures provide a quantitative assessment of the welding process, revealing its stability and reproducibility and providing fault detection and process diagnostics. A similar approach, using voltage-current histograms and a simplified statistical measure of distance between signature images, has been evaluated for tungsten inert gas (TIG) welding by researchers from Osaka University.
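A toy version of the signature-image idea, assuming NumPy is available: it bins a block of voltage-current samples into a smoothed 2D histogram and compares two signatures with a plain Euclidean distance, substituted here for the principal component analysis that the actual SIP method uses; all numbers and names are illustrative:

```python
import numpy as np

def signature_image(voltage, current, bins=32):
    """Build a smoothed 2D voltage-current histogram ("signature")."""
    hist, _, _ = np.histogram2d(voltage, current, bins=bins)
    hist /= hist.sum()                      # normalise to a distribution
    # Simple 3x3 box smoothing stands in for proper image filtering.
    padded = np.pad(hist, 1, mode="edge")
    smoothed = sum(padded[i:i + bins, j:j + bins]
                   for i in range(3) for j in range(3)) / 9.0
    return smoothed.ravel()

def signature_distance(sig_a, sig_b):
    """Euclidean distance between signatures; a large value flags a
    change in process condition."""
    return float(np.linalg.norm(sig_a - sig_b))

rng = np.random.default_rng(0)
# 4,000 simulated samples each of arc voltage (V) and current (A).
stable = signature_image(rng.normal(24, 1, 4000), rng.normal(150, 8, 4000))
faulty = signature_image(rng.normal(27, 3, 4000), rng.normal(120, 20, 4000))
print(signature_distance(stable, stable) == 0.0)  # True
print(signature_distance(stable, faulty) > 0.0)   # True: condition changed
```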
Industrial application
SIP provides the basis for the WeldPrint system, which consists of a front-end interface and software based on the SIP engine and relies on electrical signals alone. It is designed to be non-intrusive and sufficiently robust to withstand harsh industrial welding environments. The first major purchaser of the technology, GM Holden, provided feedback that allowed the system to be refined in ways that increased its industrial and commercial value. Improvements in the algorithms, including multiple-parameter optimization with a server network, have led to an order-of-magnitude improvement in fault-detection performance over the past five years.
WeldPrint for arc welding became available in mid-2001. About 70 units have been deployed since 2001, about 90% used on the shop floors of automotive manufacturing companies and their suppliers. Industrial users include Lear (UK), Unidrive, GM Holden, Air International and QTB Automotive (Australia). Units have been leased to Australian companies such as Rheem, Dux, and OneSteel for welding evaluation and process improvement.
The WeldPrint software received the Brother business software of the year award (2001); in 2003, the technology received the A$100,000 inaugural Australasian Peter Doherty Prize for Innovation;
and WTi, the University of Sydney's original spin-off company, received an AusIndustry Certificate of Achievement in recognition of the development.
SIP has opened opportunities for researchers to use it as a measurement tool both in welding
and in related disciplines, such as structural engineering. Research opportunities have also opened up in the biomonitoring of external EEGs, where SIP offers advantages in interpreting the complex signals.
Weld mapping
Weld mapping is the process of assigning information to a weld repair or joint to enable easy identification of weld processes, production (welders, their qualifications, date welded), quality (visual inspection, NDT, standards and specifications) and traceability (tracking weld joints and welded castings, the origin of weld materials).
Weld mapping should also incorporate a pictorial identification to represent the weld number on the fabrication drawing or casting repair. Military, nuclear and commercial industries possess unique quality standards (e.g., ISO, CEN, ASME, ASTM, AWS, NAVSEA) which direct weld mapping procedures and specifications, both in metal casting, in which defects are removed and filled in via GTAW (TIG welding) or SMAW (stick welding) processes, and in fabrication of weld joints, which primarily involves GMAW (MIG welding).
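One way to picture a weld-map entry is as a simple record carrying the identification, production, quality and traceability fields described above; the field names here are illustrative assumptions, not drawn from any of the cited standards:

```python
from dataclasses import dataclass, field

@dataclass
class WeldRecord:
    """Illustrative weld-map entry; field names are assumptions,
    not taken from any particular standard."""
    weld_id: str              # matches the number on the drawing
    drawing_ref: str          # pictorial identification location
    process: str              # e.g. "GTAW", "SMAW", "GMAW"
    welder: str
    qualification: str
    date_welded: str
    filler_lot: str           # traceability of weld materials
    inspections: list = field(default_factory=list)  # e.g. ["visual", "RT"]

repair = WeldRecord(
    weld_id="W-017", drawing_ref="DWG-104 detail B", process="GTAW",
    welder="J. Smith", qualification="WPQ-22", date_welded="2007-05-14",
    filler_lot="ER70S-2 / lot 4411", inspections=["visual", "radiography"],
)
print(repair.weld_id, repair.process)
```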
See also
Welding defect
Industrial radiography
Robot welding
Pipeline and Hazardous Materials Safety Administration
References
Further reading
ISO 3834-1: "Quality requirements for fusion welding of metallic materials. Criteria for the selection of the appropriate level of quality requirements" (2005)
ISO 3834-2: "Quality requirements for fusion welding of metallic materials. Comprehensive quality requirements" (2005)
ISO 3834-3: "Quality requirements for fusion welding of metallic materials. Standard quality requirements" (2005)
ISO 3834-4: "Quality requirements for fusion welding of metallic materials. Elementary quality requirements" (2005)
ISO 3834-5: "Quality requirements for fusion welding of metallic materials. Documents with which it is necessary to conform to claim conformity to the quality requirements of ISO 3834-2, ISO 3834-3 or ISO 3834-4"
ISO/TR 3834-6: "Quality requirements for fusion welding of metallic materials. Guidelines on implementing ISO 3834" (2007)
Welding | Weld quality assurance | [
"Engineering"
] | 2,111 | [
"Welding",
"Mechanical engineering"
] |
13,183,237 | https://en.wikipedia.org/wiki/Thomas%20Anderson%20%28chemist%29 | Thomas Anderson (2 July 1819 – 2 November 1874) was a 19th-century Scottish chemist. In 1853 his work on alkaloids led him to discover the correct formula/composition for codeine. In 1868 he discovered pyridine and related organic compounds such as picoline through studies on the distillation of bone oil and other animal matter.
As well as his work on organic chemistry, Anderson made important contributions to agricultural chemistry, writing over 130 reports on soils, fertilisers and plant diseases. He kept abreast of all areas of science, and was able to advise his colleague Joseph Lister on Pasteur's germ theory and the use of carbolic acid as an antiseptic.
Biographys
Born in Leith, Thomas Anderson graduated from the University of Edinburgh with a medical doctorate in 1841. Having developed an interest in chemistry during his medical studies, he then spent several years studying chemistry in Europe, including spells under Jöns Jakob Berzelius in Sweden and Justus von Liebig in Germany. Returning to Edinburgh, he worked at the University of Edinburgh and at the Highland and Agricultural Society of Scotland. In 1852, he was appointed Regius Professor of Chemistry at the University of Glasgow and remained in that post for the remainder of his career. In 1854, he became one of the editors of the Edinburgh New Philosophical Journal. In 1872, Anderson was awarded a Royal Medal from the Royal Society "for his investigations on the organic bases of Dippells animal oil; on codeine; on the crystallized constituents of opium; on piperin and on papaverin; and for his researches in physiological and animal chemistry."
His later years were marred by a progressive neurological disease which may have been syphilis. He resigned his chair in early 1874, and died later that year in Chiswick.
He was succeeded by John Ferguson.
References
External links
1819 births
1874 deaths
19th-century Scottish chemists
Organic chemists
People educated at Edinburgh Academy
Alumni of the University of Edinburgh Medical School
British expatriates in Sweden
British expatriates in Germany
Academics of the University of Glasgow
Royal Medal winners
Regius Professors | Thomas Anderson (chemist) | [
"Chemistry"
] | 433 | [
"Organic chemists"
] |
13,183,737 | https://en.wikipedia.org/wiki/The%20Great%20Rivers%20Greenway%20District | The Great Rivers Greenway District is a public agency in the state of Missouri that works to develop a regional network of greenways, parks, and trails in the St. Louis metropolitan area. The agency engages citizens and community partners to plan, build, and care for the greenways.
History
In 1996, a nonprofit organization called St. Louis 2004 was created with the aim of bringing about a renaissance in the region by 2004. It developed a list of 11 priorities, including developing a regional network of greenways.
In 2000, organization president Peter Sortino led a successful drive to place a proposition on local ballots to create a one-tenth-of-one-cent sales tax to support greenway development. In November of that year, voters in the City of St. Louis, St. Louis County, and St. Charles County approved Proposition C, also dubbed the Clean Water, Safe Parks and Community Trails Initiative. The tax enabled the creation of The Great Rivers Greenway District.
The agency's distribution of funds is governed by a board of directors whose members are appointed by the chief executives of the city and two counties. A chief executive officer and staff develop the River Ring, working with local, county, and state agencies and private and non-profit agencies throughout the St. Louis region.
In 2003, Great Rivers Greenway published "Building the River Ring: A Citizen-Driven Regional Plan", developed with advice from citizens, local governments, private companies, non-profit organizations and advocacy groups. The plan proposed to create the River Ring, a system of more than 40 greenways, parks, and trails comprising over 600 miles in St. Louis City, St. Louis County, and St. Charles County. The system will connect with trails developed by the Metro East Park and Recreation District of St. Clair and Madison Counties in Illinois. The concept was designed to raise awareness of the natural beauty found in the region's many rivers and streams and to reconnect residents to the city's primary natural feature: the confluence of the Mississippi and Missouri rivers.
In 2011 and 2016, Great Rivers Greenway engaged citizens, civic leaders and partners to update and republish the Citizen-Driven Regional Plan. Pursuant to the 2016 update, a Great Rivers Greenway Foundation was formed to seek private funding for greenway projects.
By 2020, the agency had built more than 128 miles of greenways connecting parks, rivers, schools, neighborhoods, business districts and transit. The agency began surveying citizens about priorities to build and care for the greenways to inform the next update to the plan.
Greenways
Greenways within the district:
Boschert Greenway: The Boschert Greenway stretches from New Town in St. Charles through Fox Hill Park to the Missouri River near Historic Downtown St. Charles and the Katy Trail. A 30-foot flower sculpture, “Blomstre” – the Norwegian word for Bloom – stands at the intersection of Mel Wetter Parkway and the Little Hills Expressway. The sculpture was created by artist Andrew Andrasko from old bicycle parts.
Brickline Greenway: The plan for the Brickline Greenway, formerly known as the Chouteau Greenway, calls for 20 miles of trails and green space connecting 17 neighborhoods across the City of St. Louis. It will connect Fairground Park in the north to Tower Grove Park in the south and Forest Park in the west to Gateway Arch National Park in the east. The project planners aim to knit together diverse communities through the greenway to overcome barriers that have fragmented the city over time. The plan incorporates input from citizens on strategies to promote economic growth and equitable outcomes.
Busch Greenway: The Busch Greenway in St. Charles County connects the Katy Trail to Missouri Research Park and August A. Busch Memorial Conservation Area.
Centennial Greenway: The Centennial Greenway will extend from Forest Park in the city of St. Louis to St. Charles County. Three sections have been completed. From Forest Park, the trail runs through the Washington University campus to Delmar Boulevard and Vernon Avenue in University City. Another section extends from Shaw Park in Clayton north to Olive Boulevard. A third section goes from the Katy Trail to the St. Charles Heritage Museum and connects east across the Missouri River to Creve Coeur Lake Memorial Park via the Creve Coeur Connector Trail.
Dardenne Greenway - The Dardenne Greenway follows Dardenne Creek across St. Charles County. One section loops through the BaratHaven community and a restored prairie landscape in Dardenne Prairie. Another links Legacy Park in Cottleville to St. Charles Community College and Dardenne Park in St. Peters.
Deer Creek Greenway - The Deer Creek Greenway extends from Deer Creek Park in Maplewood to Lorraine Davis Park in Webster Groves.
Fee Fee Greenway - The Fee Fee Greenway extends from the Maryland Heights Recreation Complex to Creve Coeur Lake Memorial Park.
Gravois Greenway: Grant's Trail - Grant's Trail on the Gravois Greenway runs along Gravois Creek on the rail corridor of the former Kirkwood-Carondelet branch of the Missouri Pacific Railroad. Trailnet, a St. Louis-based organization that advocates for active communities and safe spaces for walking and bicycling, purchased the corridor in 1991 and built the first six miles of Grant's Trail which opened in 1994. Since 2006, Great Rivers Greenway has extended the trail to reach 10 miles from Kirkwood to the River des Peres Greenway near Interstate 55 and added two miles of trail in Officer Blake C. Snyder Memorial Park, adjacent to Grant's Trail. Points of interest on the greenway include the Ulysses S. Grant National Historic Site, Grant's Farm and the Thomas Sappington House Museum.
Maline Greenway: The Maline Greenway connects with other paved trails in Bella Fontaine County Park in north St. Louis County. It is the first segment of a planned seven mile, east–west link that will connect the Mississippi and St. Vincent Greenways.
Meramec Greenway: The Meramec River Greenway Concept Plan calls for a greenway stretching 50 river miles from the City of Pacific to the Meramec's confluence with the Mississippi River. Five sections have been completed. The westernmost section connects Lions Park in Eureka to Route 66 State Park. The next section runs along the river on the Al Foster Trail between Glencoe and Sherman Beach County Park with a spur on the Rock Hollow Trail. A third section links Arnold's Grove Park in Valley Park to Simpson County Park and Greentree Park in Kirkwood. In Fenton the greenway connects Unger Park to Fenton City Park and George Winter Park. The fifth segment meanders through the river bottomland in Lower Meramec Park.
Mississippi Greenway: The Mississippi Greenway, formerly known as the Confluence Greenway, is planned as a 32-mile corridor that will connect with the Missouri, Maline, River des Peres and Meramec Greenways. Three sections have been built. The Riverfront Trail runs from the downtown Mississippi riverfront north to the old Chain of Rocks Bridge. At 5,353 feet long, the old Chain of Rocks Bridge was part of Route 66 in 1936 and is one of the world's longest bicycle and pedestrian bridges. Another segment connects Jefferson Barracks County Park with River City Casino near the River des Peres Greenway. A third section runs through Cliff Cave County Park overlooking the Mississippi River.
Missouri Greenway: The master plan for the Missouri Greenway is a 55 river mile corridor from the confluence of the Mississippi and Missouri Rivers to Chesterfield that will connect with the Mississippi, Sunset, Centennial, Fee Fee and Western Greenways. Three sections have been built. One runs from Riverwoods Park in Bridgeton along the Earth City Levee to the Discovery Bridge on Missouri Route 370 where a protected pedestrian and bike path connects with the Katy Trail. In Chesterfield, the greenway runs along the Monarch Chesterfield Levee Trail and connects to the Katy Trail on a protected path on the Daniel Boone Bridge. A third segment in Hazelwood runs through Truman Park.
River des Peres Greenway: The River des Peres Greenway plan calls for a 11-mile corridor from Forest Park to the Mississippi River. Currently it runs from Francis R. Slay Park in the City of St. Louis past the Shrewsbury/Lansdowne MetroLink station to Carondelet Park and Lemay Park and connects to the Gravois Greenway: Grant's Trail.
St. Vincent Greenway: the St. Vincent Greenway will extend seven miles from the North Hanley MetroLink station through the University of Missouri–St. Louis (UMSL) campus to Forest Park. Two sections have been completed. The north section runs from the North Hanley station through the UMSL campus and St. Vincent County Park. The south section connects Trojan Park through Ruth Porter Mall Park to Forest Park.
Sunset Greenway: The Sunset Greenway in north St. Louis County runs from the Old St. Ferdinand Shrine in the historic district of Florissant through St. Ferdinand Park to Sunset Park on the Missouri River.
Western Greenway: The Western Greenway runs from the Meramec Greenway at Glencoe to Rockwoods Reservation. The master plan is to extend the greenway to connect with Babler State Park and the Missouri Greenway in west St. Louis County.
CityArchRiver Project
In 2007, four decades after completion of the Gateway Arch, the site remained an island, severed from the rest of the city by busy highways and disconnected from the Mississippi River. Walter Metcalfe, an attorney and civic leader in St. Louis, led the formation of CityArchRiver2015 Foundation in 2009 to transform the St. Louis Riverfront and Arch grounds.
Michael Van Valkenburgh Associates of New York won an international competition to redesign the Arch grounds in 2010.
The project received $20 million in capital funding in 2011 from the U.S. Department of Transportation, a $25 million matching grant from the Missouri Department of Transportation and $10 million in private donations raised by CityArchRiver. With a total project cost of $380 million, more funding was needed.
Great Rivers Greenway joined the effort in 2012, becoming part of a public-private partnership with the CityArchRiver2015 Foundation, National Park Service, Missouri Department of Transportation, Bi-State Development, and Jefferson National Parks Association. The partners planned a ballot issue known as Proposition P to generate sales tax revenue for the CityArchRiver project and other park improvements.
In April 2013 voters in the City of St. Louis and St. Louis County approved Proposition P: The Safe and Accessible Arch and Public Parks Initiative. The proposition authorized a 3/16th-cent sales tax to fund the CityArchRiver project and accelerate local park and greenway construction. The tax was projected to generate $780 million over the next 20 years, with 60 percent going to Great Rivers Greenway and 40 percent going to support local parks in St. Louis City and County. Half of the Great Rivers Greenway revenue supported the CityArchRiver project and the other half would be used to accelerate greenway construction.
Great Rivers Greenway stewarded $85 million in Proposition P funds to complete several major projects over the next five years:
The Park Over the Highway, a land bridge over the I-44 and I-70 highways, connected downtown St. Louis from Luther Ely Smith Square to the Arch grounds and the riverfront, making it easy and safe for people to travel through the area on foot or by bike.
Luther Ely Smith Square, a green space leading to the Park Over the Highway was renovated.
The Riverfront Trail on the Mississippi Greenway was extended 1.5 miles from Biddle Street south to Chouteau Avenue along Leonor K. Sullivan Boulevard and the boulevard was raised by 2.5 feet to reduce flooding.
The North Gateway, a new recreation and event space at the north entrance to the Arch grounds, opened with a natural amphitheater, a bike and pedestrian path to Laclede's Landing, shaded lawns, a children's garden and an elevated walkway offering views of the Gateway Arch and the historic Eads Bridge.
The Arch grounds were transformed with sustainable ponds, landscaping and accessible walkways connecting the Gateway Arch to the Mississippi River.
Kiener Plaza was redesigned to serve as a public gathering place with a playground, fountains, an interactive splash pad, a grassy concert area, shade gardens, landscaping, and bicycle parking.
The Gateway Arch National Park Museum and Visitor Center was renovated and expanded.
Renovation of the Old Courthouse was planned to enhance the visitor experience and accessibility. Work was scheduled to begin in late 2021.
CityArchRiver Foundation changed its name to the Gateway Arch Park Foundation in 2017. The Foundation raised $250 million in private funds for the project.
External links
Great Rivers Greenway website
References
Landscape architecture
Urban planning in the United States
Parks in St. Louis
Geography of Missouri
Organizations based in Missouri
Tourist attractions in St. Louis | The Great Rivers Greenway District | [
"Engineering"
] | 2,641 | [
"Landscape architecture",
"Architecture"
] |
13,183,981 | https://en.wikipedia.org/wiki/Istv%C3%A1n%20F%C3%A1ry | István Fáry (30 June 1922 – 2 November 1984) was a Hungarian-born mathematician known for his work in geometry and algebraic topology. He proved Fáry's theorem that every planar graph has a straight-line embedding in 1948, and the Fáry–Milnor theorem lower-bounding the curvature of a nontrivial knot in 1949.
Biography
Fáry was born June 30, 1922, in Gyula, Hungary. After studying for a master's degree at the University of Budapest, he moved to the University of Szeged, where he earned a Ph.D. in 1947. He then studied at the Sorbonne before taking a faculty position at the University of Montreal in 1955. He moved to the University of California, Berkeley in 1958 and became a full professor in 1962. He died on November 2, 1984, in El Cerrito, California.
Selected publications
References
External links
Photos from the Oberwolfach Photo Collection
1922 births
1984 deaths
20th-century Hungarian mathematicians
University of California, Berkeley College of Letters and Science faculty
Geometers
Topologists
University of Paris alumni
Hungarian expatriates in France
Hungarian expatriates in Canada
Hungarian expatriates in the United States | István Fáry | [
"Mathematics"
] | 252 | [
"Topologists",
"Topology",
"Geometers",
"Geometry"
] |
13,184,767 | https://en.wikipedia.org/wiki/Freedom%20of%20Information%20and%20Protection%20of%20Privacy%20Act%20%28Nova%20Scotia%29 | The Freedom of Information and Protection of Privacy Act, commonly known as FOIPOP, (the Act) is the public sector privacy law and access to information law for the Province of Nova Scotia.
The Act is generally considered to be in two parts: the first dealing with access to records in the custody or control of public bodies and the second dealing with the regulation of the collection, use and disclosure of personal information by those public bodies. It applies to provincially regulated public bodies in the province of Nova Scotia. "Public body" is defined in section 3(1)(j) of the Act, and generally means provincial government departments, agencies, boards, commissions, some crown corporations, public universities, school boards, and hospitals. It also specifically includes those organizations listed in the Schedule to the Act.
FOIPOP is administered by the Review Officer appointed under section 33 of the Act. The Review Officer serves as an ombudsman, reviewing complaints brought by individuals seeking access to records held by public bodies. The Review Officer does not have specific order-making powers and the powers of that office are limited with respect to privacy complaints. Complaints can be taken to the Supreme Court of Nova Scotia after having been dealt with by the Review Officer.
External links
Freedom of Information and Protection of Privacy Act
FOIPOP Review Officer
Canadian Privacy Law Blog: A regularly updated blog on issues related to privacy law and FOIPOP written by David T.S. Fraser, a Nova Scotia privacy lawyer.
Privacy in Canada
Privacy legislation in Canada
Information privacy
Nova Scotia provincial legislation
Freedom of information legislation in Canada
1993 in Canadian law | Freedom of Information and Protection of Privacy Act (Nova Scotia) | [
"Engineering"
] | 323 | [
"Cybersecurity engineering",
"Information privacy"
] |
13,184,838 | https://en.wikipedia.org/wiki/Bosscha%20Observatory | Bosscha Observatory is the oldest modern observatory in Indonesia, and one of the oldest in Asia. The observatory is located in Lembang, West Bandung Regency, West Java, approximately north of Bandung. It is situated on a hilly six hectares of land and is above mean sea level plateau. The IAU observatory code for Bosscha is 299.
History
During the first meeting of the Nederlandsch-Indische Sterrekundige Vereeniging (Dutch-Indies Astronomical Society) in the 1920s, it was agreed that an observatory was needed to study astronomy in the Dutch East Indies. Of all locations in the Indonesian archipelago, a tea plantation in Malabar, a few kilometers north of Bandung in West Java, was selected. It is on the hilly north side of the city with an unobstructed view of the sky and with close access to the city that was planned to become the new capital of the Dutch colony, replacing Batavia (present-day Jakarta). The observatory is named after the tea plantation owner Karel Albert Rudolf Bosscha, son of the physicist Johannes Bosscha and a major force in the development of science and technology in the Dutch East Indies, who granted six hectares of his property for the new observatory.
Construction of the observatory began in 1923 and was completed in 1928. Since then, continuous observations of the sky have been made. The first international publication from Bosscha was published in 1922. Observations from Bosscha were halted during World War II, and after the war a major reconstruction was necessary. On 17 October 1951, the Dutch-Indies Astronomical Society handed over operation of the observatory to the government of Indonesia. In 1959 the observatory's operation was given to the Institut Teknologi Bandung, and it has since been an integral part of research and formal education in astronomy in Indonesia.
Facilities
Five large telescopes were installed at Bosscha:
The Zeiss double refractor
This telescope is mainly used to observe visual binary stars, conduct photometric studies of eclipsing binaries, image lunar craters, observe planets (Mars, Saturn and Jupiter), and observe comet details and other celestial bodies. The telescope has two objective lenses, each with a diameter of and a focal length of .
The Schmidt telescope (nicknamed the Bima Sakti, or "Milky Way" telescope)
This telescope is used to study galactic structure, stellar spectra, asteroids, and supernovae, and to photograph celestial bodies. The main lens diameter is ; the correcting biconcave and convex lens has a focal length of . It is also equipped with a spectral prism with a prime angle of 6.10 degrees for stellar spectra, a wedge sensitometer, and a film recorder.
The Bamberg refractor (not to be mixed-up with the Bamberg-Refraktor in Berlin)
This telescope is used to determine stellar magnitudes and distances, and for photometric studies of eclipsing stars, solar imaging, and other work. It is equipped with a photoelectric photometer, and has a lens diameter of and a focal length of .
The Cassegrain GOTO
This was a gift from the Japanese government. This computer-controlled telescope can automatically view objects from a database, and it was the first digital telescope at Bosscha. The telescope is also equipped with a photometer and a spectrometer-spectrograph.
The Unitron refractor
This telescope is used for observing the hilal (new crescent moon), lunar and solar eclipses, and for sunspot photography, as well as other objects. The lens diameter is and the focal length is .
See also
List of astronomical observatories
References
External links
Timau, SE Asia's largest telescope under construction in Timor, NTT, at similar elevation, due 2019.
Astronomy institutes and departments
Astronomical observatories in Indonesia
Buildings and structures in West Java
Bandung Institute of Technology | Bosscha Observatory | [
"Astronomy"
] | 768 | [
"Astronomy organizations",
"Astronomy institutes and departments"
] |
13,185,451 | https://en.wikipedia.org/wiki/Runt | In a group of animals (usually a litter of animals born in multiple births), a runt is a member which is significantly smaller or weaker than the others. Owing to its small size, a runt in a litter faces obvious disadvantages, including difficulty in competing with its siblings for survival and possible rejection by its mother. Therefore, in the wild, a runt is less likely to survive infancy.
Even among domestic animals, runts often face rejection. They may be placed under the direct care of an experienced animal breeder, although the animal's small size and weakness, coupled with the lack of natural parental care, make this difficult. Some tamed animals are the result of hand-reared runts.
Not all litters have runts. All animals in a litter will naturally vary slightly in size and weight, but the smallest is not considered a "runt" if it is healthy and close in weight to its littermates. It may be perfectly capable of competing with its siblings for nutrition and other resources. A runt is specifically an animal that suffered in utero from deprivation of nutrients by comparison to its siblings, or from a genetic defect, and thus is born underdeveloped or less fit than expected.
Reporting on runt puppies highlights several important factors: maternal care, genetics, health concerns, and personality development. Maternal care is crucial because, in some cases, the runt may have difficulty competing with its larger siblings for nutrients; it is important that a runt receives its fair share of its mother's milk so it can continue growing. Genetic factors play a role in why a puppy is born a runt, such as issues in the fertilization process or with the placenta. Runts may face health issues such as heart defects, cleft palate, and other organ defects, so a veterinary evaluation is important to address potential problems. Not all runt puppies are weak or unhealthy; with proper care and attention, runts can develop positive personality traits and be well socialized and happy.
Runts are also common in pigs, especially in large litters where competition for resources is higher. Research has shown that genetic improvements in pig breeding have resulted in an increase in low-birthweight piglets, known as runts. They face challenges in accessing colostrum and milk, for which competition is intense in large litters. A study indexed in the National Library of Medicine examined the effects of litters of uniform birth weight on piglets' survival and performance. When the piglets in a litter are similar in size, runts have a better chance of survival, since there is less competition among them. By understanding and improving litter uniformity, farmers and animal caregivers give the runt of the litter a better chance of survival, reducing pre-weaning mortality.
Runts, whether puppies or kittens, need special attention and care to have the best chance of survival. The Akron Beacon Journal reported that runts are adopted faster from shelters, as their small size and perceived vulnerability make them appealing to potential adopters. While there are many risks after birth, once a runt reaches 6–8 weeks of age its chance of survival is high, and it will most likely grow to full size. While pet runts have a higher likelihood of being wanted, this is not the case for runts of farm animals such as pigs: in agricultural settings, when a runt pig is born, a farmer is likely to cull the animal, as it will not reach the proper size for meat production. In the wild, a runt's chance of survival is lower, as only the strongest survive; wild animals do not have the same opportunities for care as domesticated animals.
See also
Vanishing twin
References
External links
https://www.whole-dog-journal.com/puppies/what-is-the-runt-of-the-litter/
https://www.beaconjournal.com/story/lifestyle/home-garden/2012/10/06/as-pets-runts-can-be/10712851007/
Biology terminology
Livestock | Runt | [
"Biology"
] | 863 | [
"nan"
] |
13,185,596 | https://en.wikipedia.org/wiki/Stack%20%28mathematics%29 | In mathematics a stack or 2-sheaf is, roughly speaking, a sheaf that takes values in categories rather than sets. Stacks are used to formalise some of the main constructions of descent theory, and to construct fine moduli stacks when fine moduli spaces do not exist.
Descent theory is concerned with generalisations of situations where isomorphic, compatible geometrical objects (such as vector bundles on topological spaces) can be "glued together" within a restriction of the topological basis. In a more general set-up the restrictions are replaced with pullbacks; fibred categories then make a good framework to discuss the possibility of such gluing. The intuitive meaning of a stack is that it is a fibred category such that "all possible gluings work". The specification of gluings requires a definition of coverings with regard to which the gluings can be considered. It turns out that the general language for describing these coverings is that of a Grothendieck topology. Thus a stack is formally given as a fibred category over another base category, where the base has a Grothendieck topology and where the fibred category satisfies a few axioms that ensure existence and uniqueness of certain gluings with respect to the Grothendieck topology.
Overview
Stacks are the underlying structure of algebraic stacks (also called Artin stacks) and Deligne–Mumford stacks, which generalize schemes and algebraic spaces and which are particularly useful in studying moduli spaces. There are inclusions: schemes ⊆ algebraic spaces ⊆ Deligne–Mumford stacks ⊆ algebraic stacks (Artin stacks) ⊆ stacks. Brief introductory accounts of stacks, more detailed introductions, and treatments of the more advanced theory can be found in the references.
Motivation and history
The concept of stacks has its origin in the definition of effective descent data in the work of Grothendieck.
In a 1959 letter to Serre, Grothendieck observed that a fundamental obstruction to constructing good moduli spaces is the existence of automorphisms. A major motivation for stacks is that if a moduli space for some problem does not exist because of the existence of automorphisms, it may still be possible to construct a moduli stack.
Mumford studied the Picard group of the moduli stack of elliptic curves, before stacks had been defined. Stacks were first defined by Giraud, and the term "stack" was introduced by Deligne and Mumford for the original French term "champ" meaning "field". In this paper they also introduced Deligne–Mumford stacks, which they called algebraic stacks, though the term "algebraic stack" now usually refers to the more general Artin stacks introduced by Artin.
When defining quotients of schemes by group actions, it is often impossible for the quotient to be a scheme and still satisfy desirable properties for a quotient. For example, if a few points have non-trivial stabilisers, then the categorical quotient will not exist among schemes, but it will exist as a stack.
In the same way, moduli spaces of curves, vector bundles, or other geometric objects are often best defined as stacks instead of schemes. Constructions of moduli spaces often proceed by first constructing a larger space parametrizing the objects in question, and then quotienting by group action to account for objects with automorphisms which have been overcounted.
Definitions
Abstract stacks
A category c with a functor to a category C is called a fibered category over C if for any morphism f: X → Y in C and any object y of c with image Y (under the functor), there is a pullback of y by f. This means a morphism x → y with image f such that any morphism z → y with image f∘g can be factored as z → x → y by a unique morphism z → x in c such that the functor maps z → x to g. The element x = f*y is called the pullback of y along f and is unique up to canonical isomorphism.
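In symbols, writing p : c → C for the given functor (a notational choice made here for exposition, not fixed in the article), the condition on the pullback morphism φ : x → y with p(φ) = f can be sketched as:

\[
\text{for every } \psi : z \to y \text{ with } p(\psi) = f \circ g, \ \text{there is a unique } \chi : z \to x \ \text{with } \psi = \phi \circ \chi \ \text{and } p(\chi) = g .
\]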
The category c is called a prestack over a category C with a Grothendieck topology if it is fibered over C and for any object U of C and objects x, y of c with image U, the functor from the over category C/U to sets taking F:V→U to Hom(F*x,F*y) is a sheaf. This terminology is not consistent with the terminology for sheaves: prestacks are the analogues of separated presheaves rather than presheaves. Some authors require this as a property of stacks, rather than of prestacks.
The category c is called a stack over the category C with a Grothendieck topology if it is a prestack over C and every descent datum is effective. A descent datum consists roughly of a covering of an object V of C by a family Vi, elements xi in the fiber over Vi, and morphisms fji between the restrictions of xi and xj to Vij=Vi×VVj satisfying the compatibility condition fki = fkjfji. The descent datum is called effective if the elements xi are essentially the pullbacks of an element x with image V.
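In symbols, a minimal restatement of the compatibility condition (the triple overlap notation Vijk is an assumption of this sketch, not fixed in the article) is the cocycle identity

\[
f_{ki} = f_{kj} \circ f_{ji} \qquad \text{on } V_{ijk} = V_i \times_V V_j \times_V V_k ,
\]

and effectivity asks for an element x over V together with isomorphisms x|Vi ≅ xi compatible with the fji.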
A stack is called a stack in groupoids or a (2,1)-sheaf if it is also fibered in groupoids, meaning that its fibers (the inverse images of objects of C) are groupoids. Some authors use the word "stack" to refer to the more restrictive notion of a stack in groupoids.
Algebraic stacks
An algebraic stack or Artin stack is a stack in groupoids X over the fppf site such that the diagonal map of X is representable and there exists a smooth surjection from (the stack associated to) a scheme to X.
A morphism Y → X of stacks is representable if, for every morphism S → X from (the stack associated to) a scheme to X, the fiber product Y ×X S is isomorphic to (the stack associated to) an algebraic space. The fiber product of stacks is defined using the usual universal property, and changing the requirement that diagrams commute to the requirement that they 2-commute. See also morphism of algebraic stacks for further information.
The motivation behind the representability of the diagonal is the following: the diagonal morphism Δ : X → X × X is representable if and only if for any pair of morphisms of algebraic spaces S → X and T → X, their fiber product S ×X T is representable.
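A minimal sketch of the reason, assuming the standard identification of 2-fiber products: for morphisms S → X and T → X from algebraic spaces one has

\[
S \times_X T \;\cong\; (S \times T) \times_{X \times X} X ,
\]

so the left-hand side is an algebraic space for all such pairs exactly when the diagonal X → X × X is representable.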
A Deligne–Mumford stack is an algebraic stack X such that there is an étale surjection from a scheme to X. Roughly speaking, Deligne–Mumford stacks can be thought of as algebraic stacks whose objects have no infinitesimal automorphisms.
Local structure of algebraic stacks
Since the inception of algebraic stacks it was expected that they are étale-locally quotient stacks of the form [Spec(A)/G], where G is a linearly reductive algebraic group. This was recently proved to be the case: given a quasi-separated algebraic stack X locally of finite type over an algebraically closed field whose stabilizers are affine, and a smooth and closed point x with linearly reductive stabilizer group Gx, there exists an étale cover of the GIT quotient Spec(A)//Gx such that the associated diagram is cartesian, and there exists an étale morphism [Spec(A)/Gx] → X inducing an isomorphism of the stabilizer groups at x and its preimage.
Examples
Elementary examples
Every sheaf from a category with a Grothendieck topology can canonically be turned into a stack: for an object, instead of a set there is a groupoid whose objects are the elements of the set and whose only arrows are the identity morphisms.
More concretely, let F : C^op → Sets be a contravariant functor.
Then, this functor determines the following category:
an object is a pair (X, x) consisting of a scheme X in C and an element x ∈ F(X)
a morphism (X, x) → (Y, y) consists of a morphism f : X → Y in C such that F(f)(y) = x.
Via the forgetful functor (X, x) ↦ X, the category so defined is fibered over C. For example, if X is a scheme in C, then it determines the contravariant functor Hom(−, X), and the corresponding fibered category is the stack associated to X. Stacks (or prestacks) can be constructed as a variant of this construction. In fact, any scheme X with a quasi-compact diagonal determines an algebraic stack, the stack associated to X.
Stacks of objects
A group-stack.
The moduli stack of vector bundles: the category of vector bundles V→S is a stack over the category of topological spaces S. A morphism from V→S to W→T consists of continuous maps from S to T and from V to W (linear on fibers) such that the obvious square commutes. The condition that this is a fibered category follows because one can take pullbacks of vector bundles over continuous maps of topological spaces, and the condition that a descent datum is effective follows because one can construct a vector bundle over a space by gluing together vector bundles on elements of an open cover.
The stack of quasi-coherent sheaves on schemes (with respect to the fpqc-topology and weaker topologies)
The stack of affine schemes on a base scheme (again with respect to the fpqc topology or a weaker one)
Constructions with stacks
Stack quotients
If X is a scheme and G is a smooth affine group scheme acting on X, then there is a quotient algebraic stack [X/G], taking a scheme T to the groupoid of G-torsors over T with G-equivariant maps to X. Explicitly, given a space X with a G-action, form the stack [X/G], which (intuitively speaking) sends a space T to the groupoid of pullback diagrams, where P → X is a G-equivariant morphism of spaces and P → T is a principal G-bundle. The morphisms in this category are just morphisms of diagrams where the arrows on the right-hand side are equal and the arrows on the left-hand side are morphisms of principal G-bundles.
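A compact sketch of the groupoid of T-points of the quotient stack (the symbols π and u are chosen here for exposition):

\[
[X/G](T) \;=\; \big\{\, \pi : P \to T \ \text{a } G\text{-torsor}, \quad u : P \to X \ \text{a } G\text{-equivariant morphism} \,\big\} .
\]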
Classifying stacks
A special case of this when X is a point gives the classifying stack BG of a smooth affine group scheme G: it is so named since the category BG(Y), the fiber over Y, is precisely the category of principal G-bundles over Y. Note that this category itself can be considered as a stack, the moduli stack of principal G-bundles on Y.
An important subexample of this construction is BGLn, which is the moduli stack of principal GLn-bundles. Since the data of a principal GLn-bundle is equivalent to the data of a rank n vector bundle, this is isomorphic to the moduli stack of rank n vector bundles.
Moduli stack of line bundles
The moduli stack of line bundles is BGm, since every line bundle is canonically equivalent to a principal Gm-bundle. Indeed, given a line bundle L over a scheme S, the relative Spec construction gives the total space of L, a geometric line bundle. By removing the image of the zero section, one obtains a principal Gm-bundle. Conversely, from the standard representation of Gm on the affine line, the associated line bundle can be reconstructed.
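A sketch of the correspondence just described, with the symmetric-algebra construction assumed (standard, but not spelled out in the article):

\[
\mathbb{L} = \underline{\operatorname{Spec}}_S\!\big(\operatorname{Sym}^{\bullet} L^{\vee}\big), \qquad \mathbb{L}^{\times} = \mathbb{L} \setminus 0(S) ,
\]

where L is the line bundle over S, the relative Spec of the symmetric algebra of its dual gives its total space, and removing the zero section 0(S) leaves a principal Gm-bundle.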
Gerbes
A gerbe is a stack in groupoids that is locally nonempty, for example the trivial gerbe BG that assigns to each scheme the groupoid of principal G-bundles over the scheme, for some group G.
Relative spec and proj
If A is a quasi-coherent sheaf of algebras in an algebraic stack X over a scheme S, then there is a stack Spec(A) generalizing the construction of the spectrum Spec(A) of a commutative ring A. An object of Spec(A) is given by an S-scheme T, an object x of X(T), and a morphism of sheaves of algebras from x*(A) to the coordinate ring O(T) of T.
If A is a quasi-coherent sheaf of graded algebras in an algebraic stack X over a scheme S, then there is a stack Proj(A) generalizing the construction of the projective scheme Proj(A) of a graded ring A.
Moduli stacks
Moduli of curves
Mumford studied the moduli stack M1,1 of elliptic curves, and showed that its Picard group is cyclic of order 12. For elliptic curves over the complex numbers the corresponding stack is similar to a quotient of the upper half-plane by the action of the modular group.
The moduli space of algebraic curves, defined as a universal family of smooth curves of given genus, does not exist as an algebraic variety because in particular there are curves admitting nontrivial automorphisms. However there is a moduli stack Mg, which is a good substitute for the non-existent fine moduli space of smooth genus g curves. More generally there is a moduli stack Mg,n of genus g curves with n marked points. In general this is an algebraic stack, and it is a Deligne–Mumford stack for g ≥ 2, or g = 1 and n ≥ 1, or g = 0 and n ≥ 3 (in other words, when the automorphism groups of the curves are finite). This moduli stack has a completion consisting of the moduli stack of stable curves (for given g and n), which is proper over Spec Z. For example, M0 is the classifying stack BPGL(2) of the projective general linear group. (There is a subtlety in defining M1, as one has to use algebraic spaces rather than schemes to construct it.)
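The three listed cases can be packaged into a single inequality; this equivalent reformulation is a standard fact assumed here rather than taken from the text:

\[
2g - 2 + n > 0 ,
\]

which is exactly the condition that a smooth genus g curve with n marked points has only finitely many automorphisms.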
Kontsevich moduli spaces
Another widely studied class of moduli spaces are the Kontsevich moduli spaces parameterizing the space of stable maps from curves of a fixed genus to a fixed space whose image represents a fixed cohomology class. These moduli spaces are denoted Mg,n(X, β) and can have wild behavior, such as being reducible stacks whose components are of unequal dimension. For example, the moduli stack has smooth curves parametrized by an open subset. On the boundary of the moduli space, where curves may degenerate to reducible curves, there is a substack parametrizing reducible curves with a genus component and a genus component intersecting at one point, and the map sends the genus curve to a point. Since all such genus curves are parametrized by , and there is an additional dimensional choice of where these curves intersect on the genus curve, the boundary component has dimension .
Other moduli stacks
A Picard stack generalizes a Picard variety.
The moduli stack of formal group laws classifies formal group laws.
An ind-scheme such as an infinite projective space and a formal scheme is a stack.
A moduli stack of shtukas is used in geometric Langlands program. (See also shtukas.)
Geometric stacks
Weighted projective stacks
Constructing weighted projective spaces involves taking the quotient variety of A^(n+1) ∖ {0} by a Gm-action. In particular, the action sends a tuple

λ ⋅ (x0, ..., xn) = (λ^a0 x0, ..., λ^an xn),

and the quotient of this action gives the weighted projective space P(a0, ..., an). Since this can instead be taken as a stack quotient, the weighted projective stack is

P(a0, ..., an) = [(A^(n+1) ∖ {0}) / Gm].

Taking the vanishing locus of a weighted polynomial in a line bundle gives a stacky weighted projective variety.
Stacky curves
Stacky curves, or orbicurves, can be constructed by taking the stack quotient of a morphism of curves by the monodromy group of the cover over the generic points. For example, take a projective morphism of curves which is generically étale. The stack quotient of the domain by the monodromy group gives a stacky curve with stacky points that have stabilizer group Z/5 at the fifth roots of unity in the affine chart. This is because these are the points where the cover ramifies.
Non-affine stack
An example of a non-affine stack is given by the half-line with two stacky origins. This can be constructed as the colimit of two inclusions of the punctured half-line into a half-line with a stacky origin.
Quasi-coherent sheaves on algebraic stacks
On an algebraic stack one can construct a category of quasi-coherent sheaves similar to the category of quasi-coherent sheaves over a scheme.
A quasi-coherent sheaf is roughly one that looks locally like the sheaf of a module over a ring. The first problem is to decide what one means by "locally": this involves the choice of a Grothendieck topology, and there are many possible choices for this, all of which have some problems and none of which seem completely satisfactory. The Grothendieck topology should be strong enough so that the stack is locally affine in this topology: schemes are locally affine in the Zariski topology, so this is a good choice for schemes, as Serre discovered; algebraic spaces and Deligne–Mumford stacks are locally affine in the étale topology, so one usually uses the étale topology for these; while algebraic stacks are locally affine in the smooth topology, so one can use the smooth topology in this case. For general algebraic stacks the étale topology does not have enough open sets: for example, if G is a smooth connected group, then the only étale covers of the classifying stack BG are unions of copies of BG, which are not enough to give the right theory of quasi-coherent sheaves.
Instead of using the smooth topology for algebraic stacks one often uses a modification of it called the Lis-Et topology (short for Lisse-Étale; lisse is the French term for smooth), which has the same open sets as the smooth topology but whose open covers are given by étale rather than smooth maps. This usually seems to lead to an equivalent category of quasi-coherent sheaves, but is easier to use: for example it is easier to compare with the étale topology on algebraic spaces. The Lis-Et topology has a subtle technical problem: a morphism between stacks does not in general give a morphism between the corresponding topoi. (The problem is that while one can construct a pair of adjoint functors f^*, f_*, as needed for a geometric morphism of topoi, the functor f^* is not left exact in general. This problem is notorious for having caused some errors in published papers and books.) This means that constructing the pullback of a quasi-coherent sheaf under a morphism of stacks requires some extra effort.
It is also possible to use finer topologies. Most reasonable "sufficiently large" Grothendieck topologies seem to lead to equivalent categories of quasi-coherent sheaves, but the larger a topology is the harder it is to handle, so one generally prefers to use smaller topologies as long as they have enough open sets. For example, the big fppf topology leads to essentially the same category of quasi-coherent sheaves as the Lis-Et topology, but has a subtle problem: the natural embedding of quasi-coherent sheaves into O_X-modules in this topology is not exact (it does not preserve kernels in general).
Other types of stack
Differentiable stacks and topological stacks are defined in a way similar to algebraic stacks, except that the underlying category of affine schemes is replaced by the category of smooth manifolds or topological spaces.
More generally one can define the notion of an n-sheaf or (n–1)-stack, which is roughly a sort of sheaf taking values in (n–1)-categories. There are several inequivalent ways of doing this. 1-sheaves are the same as sheaves, and 2-sheaves are the same as stacks. They are called higher stacks.
A very similar and analogous extension is to develop the stack theory on non-discrete objects (i.e., a space is really a spectrum in algebraic topology). The resulting stacky objects are called derived stacks (or spectral stacks). Jacob Lurie's under-construction book Spectral Algebraic Geometry studies a generalization that he calls a spectral Deligne–Mumford stack. By definition, it is a ringed ∞-topos that is étale-locally the étale spectrum of an E∞-ring (this notion subsumes that of a derived scheme, at least in characteristic zero.)
Set-theoretical problems
There are some minor set theoretical problems with the usual foundation of the theory of stacks, because stacks are often defined as certain functors to the category of sets and are therefore not sets. There are several ways to deal with this problem:
One can work with Grothendieck universes: a stack is then a functor between classes of some fixed Grothendieck universe, so these classes and the stacks are sets in a larger Grothendieck universe. The drawback of this approach is that one has to assume the existence of enough Grothendieck universes, which is essentially a large cardinal axiom.
One can define stacks as functors to the set of sets of sufficiently large rank, and keep careful track of the ranks of the various sets one uses. The problem with this is that it involves some additional rather tiresome bookkeeping.
One can use reflection principles from set theory stating that one can find set models of any finite fragment of the axioms of ZFC to show that one can automatically find sets that are sufficiently close approximations to the universe of all sets.
One can simply ignore the problem. This is the approach taken by many authors.
See also
Algebraic stack
Chow group of a stack
Deligne–Mumford stack
Glossary of algebraic geometry
Pursuing Stacks
Quotient space of an algebraic stack
Ring of modular forms
Simplicial presheaf
Stacks Project
Toric stack
Generalized space
Notes
References
Guides to the literature
https://maths-people.anu.edu.au/~alperj/papers/stacks-guide.pdf
http://stacks.math.columbia.edu/tag/03B0
References
Unfortunately the book of Laumon and Moret-Bailly uses the incorrect assertion that morphisms of algebraic stacks induce morphisms of lisse-étale topoi. Some of these errors were fixed by Olsson.
Further reading
External links
"Good introductory references on algebraic stacks?"
Algebraic geometry
Category theory | Stack (mathematics) | [
"Mathematics"
] | 4,457 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Mathematical relations",
"Category theory",
"Algebraic geometry"
] |
13,187,419 | https://en.wikipedia.org/wiki/Betavine | Betavine was an open community and resource website, created and managed by Vodafone Group R&D for the mobile development community, intended to support and stimulate the development of new applications for mobile and Internet communications. The Betavine website allowed developers to upload and profile their alpha-stage and beta-stage applications, provided interaction tools for members to share knowledge, give feedback on apps, and discuss topics in mobile, and contained a growing resources section with technical topics and APIs.
The Betavine website was built using open source components, so it offered a number of resources for open source in mobile, such as the Betavine Forge, where developers could share code snippets, post projects, and collaborate on projects. The widely publicised Vodafone Mobile Connect Card Driver for Linux was hosted here, for example. The site offered a virtual Academy where new developers could learn about mobile technology and how to set up a mobile business.
Betavine
The mission stated on the Betavine website is "to support the wider development community in imagining, developing, testing and launching great new applications for mobile, wireless and Internet communications. We are platform agnostic and operating system agnostic.
Everyone is welcome to register as a member, download and play with any application, contribute to discussion threads and create or comment on blog entries. As a developer you can upload your own applications, showcase your work and get useful feedback from other members. Students should keep an eye peeled for great competitions and other opportunities."
Vodafone Betavine also offers internships, "externships" and competitions for students.
There are now a number of competitions on Betavine, some for students only, some for anyone who cares to enter, and some are being run by partners of Betavine:
Student Competitions
Campus Life 2008
Nokia WidSets Challenge
Vodafone Egypt Competition
Guidelines
Winners 2007
Partner Competitions
Mob4hire August 2008
Betavine launched a mobile internet site at the beginning of 2008, using the .mobi domain convention. The stated goal of launching betavine.mobi is to make the downloading of mobile applications profiled on Betavine even easier, and to help end-users find apps that are compatible with their mobile device by automatically detecting the device model and matching that with a database of technical specs.
In May 2008, Betavine launched a pilot with Vodafone Spain which linked directly to betavine.mobi from the Vodafone Live! Portal. The download figures on the main website www.betavine.net indicated that this had a strong positive impact on the number of applications being found, downloaded, and given feedback.
Vodafone is one of the key backers of the dotMobi consortium (the informal name of mTLD Top Level Domain, Ltd.), which is promoting the use of the .mobi domain name in order to increase consumer confidence that an Internet site or service will work on their mobile phones.
As of July 5, 2015, Betavine is inactive and its site is down. There have been no tweets since August 25, 2010, so it is assumed the group closed down around then.
Betavine Forge
Vodafone Betavine runs a version of the GForge open-source collaborative development portal in order to host mobile open-source projects, code snippets, and other resources for developers.
Some of the hosted projects are:
Vodafone Mobile Connect Card driver for Linux: GPRS/UMTS/HSDPA device manager written in Python, licensed under the GPL
Betavine Connection Manager: GPRS/UMTS/HSxPA device manager for Linux written in Python, licensed under the GPL
Vodafone MobileScript for Windows Mobile: common framework using an ECMAScript OS engine
Linux Environment for Mobile Networks: studies the possibility of remotely running applications over Mobile Networks
Since the launch of the hugely popular Asus Eee PC, a new version of the Vodafone Connect Card Linux driver for UMPCs (Ultra-Mobile PCs) has seen many downloads.
The original Vodafone Connect Card linux driver for Linux has now been rewritten to interoperate with Network Manager, and renamed as 'Betavine Connection Manager'.
See also
Comparison of free software hosting facilities
SourceForge
Google Code
CodePlex
Freshmeat
Ohloh
References
External links
Vodafone Betavine
Betavine Forge
Free software websites
Vodafone
Mobile software development | Betavine | [
"Technology"
] | 914 | [
"Computing websites",
"Free software websites"
] |
13,187,687 | https://en.wikipedia.org/wiki/Nonlinearity%20%28journal%29 | Nonlinearity is a peer-reviewed scientific journal published by IOP Publishing and the London Mathematical Society. The journal publishes papers on nonlinear mathematics, mathematical physics, experimental physics, theoretical physics and other areas in the sciences where nonlinear phenomena are of fundamental importance. The Editors-in-Chief are Tasso J Kaper (Boston University) for IOP Publishing and Konstantin Khanin (University of Toronto) for the London Mathematical Society.
Abstracting and indexing
The journal is abstracted and indexed in Science Citation Index, Current Contents/Physical, Chemical & Earth Sciences, Inspec, CompuMath Citation Index, Mathematical Reviews, MathSciNet, Zentralblatt MATH, and VINITI Database RAS. According to the Journal Citation Reports, the journal has a 2023 impact factor of 1.6.
See also
Journal of Physics A
Inverse Problems
London Mathematical Society
IOP Publishing
References
External links
Physics journals
IOP Publishing academic journals
Academic journals established in 1988
Mathematics journals
Monthly journals
English-language journals
London Mathematical Society
Mathematical physics journals
Dynamical systems journals | Nonlinearity (journal) | [
"Mathematics"
] | 212 | [
"Dynamical systems journals",
"Dynamical systems"
] |
41,244 | https://en.wikipedia.org/wiki/Hybrid%20%28biology%29 | In biology, a hybrid is the offspring resulting from combining the qualities of two organisms of different varieties, subspecies, species or genera through sexual reproduction. Generally, it means that each cell has genetic material from two different organisms, whereas an individual where some cells are derived from a different organism is called a chimera. Hybrids are not always intermediates between their parents (as blending inheritance would predict, a theory discredited in modern genetics in favor of particulate inheritance), but can show hybrid vigor, sometimes growing larger or taller than either parent. The concept of a hybrid is interpreted differently in animal and plant breeding, where there is interest in the individual parentage. In genetics, attention is focused on the numbers of chromosomes. In taxonomy, a key question is how closely related the parent species are.
Species are reproductively isolated by strong barriers to hybridization, which include genetic and morphological differences, differing times of fertility, mating behaviors and cues, and physiological rejection of sperm cells or the developing embryo. Some act before fertilization and others after it. Similar barriers exist in plants, with differences in flowering times, pollen vectors, inhibition of pollen tube growth, somatoplastic sterility, cytoplasmic-genic male sterility and the structure of the chromosomes. A few animal species and many plant species, however, are the result of hybrid speciation, including important crop plants such as wheat, where the number of chromosomes has been doubled.
A form of often intentional human-mediated hybridization is the crossing of wild and domesticated species. This is common in both traditional horticulture and modern agriculture; many commercially useful fruits, flowers, garden herbs, and trees have been produced by hybridization. One such flower, Oenothera lamarckiana, was central to early genetics research into mutationism and polyploidy. It is also more occasionally done in the livestock and pet trades; some well-known wild × domestic hybrids are beefalo and wolfdogs. Human selective breeding of domesticated animals and plants has also resulted in the development of distinct breeds (usually called cultivars in reference to plants); crossbreeds between them (without any wild stock) are sometimes also imprecisely referred to as "hybrids".
Hybrid humans existed in prehistory. For example, Neanderthals and anatomically modern humans are thought to have interbred as recently as 40,000 years ago.
Mythological hybrids appear in human culture in forms as diverse as the Minotaur, blends of animals, humans and mythical beasts such as centaurs and sphinxes, and the Nephilim of the Biblical apocrypha described as the wicked sons of fallen angels and attractive women.
Significance
In evolution
Hybridization between species plays an important role in evolution, though there is much debate about its significance. Roughly 25% of plants and 10% of animals are known to form hybrids with at least one other species. One example of an adaptive benefit to hybridization is that hybrid individuals can form a "bridge" transmitting potentially helpful genes from one species to another when the hybrid backcrosses with one of its parent species, a process called introgression. Hybrids can also cause speciation, either because the hybrids are genetically incompatible with their parents but not with each other, or because the hybrids occupy a different niche than either parent.
Hybridization is a particularly common mechanism for speciation in plants, and is now known to be fundamental to the evolutionary history of plants. Plants frequently form polyploids, individuals with more than two copies of each chromosome. Whole genome doubling has occurred repeatedly in plant evolution. When two plant species hybridize, the hybrid may double its chromosome count by incorporating the entire nuclear genome of both parents, resulting in offspring that are reproductively incompatible with either parent because of different chromosome counts.
In conservation
Human impact on the environment has resulted in an increase in the interbreeding between regional species, and the proliferation of introduced species worldwide has also resulted in an increase in hybridization. This has been referred to as genetic pollution out of concern that it may threaten many species with extinction. Similarly, genetic erosion from monoculture in crop plants may be damaging the gene pools of many species for future breeding.
The conservation impacts of hybridization between species are highly debated. While hybridization could potentially threaten rare species or lineages by "swamping" the genetically "pure" individuals with hybrids, hybridization could also save a rare lineage from extinction by introducing genetic diversity. It has been proposed that hybridization could be a useful tool to conserve biodiversity by allowing organisms to adapt, and that efforts to preserve the separateness of a "pure" lineage could harm conservation by lowering the organisms' genetic diversity and adaptive potential, particularly in species with low populations. While endangered species are often protected by law, hybrids are often excluded from protection, resulting in challenges to conservation.
Etymology
The term hybrid is derived from Latin hybrida, used for crosses such as between a tame sow and a wild boar. The term came into popular use in English in the 19th century, though examples of its use have been found from the early 17th century. Conspicuous hybrids are popularly named with portmanteau words, starting in the 1920s with the breeding of tiger–lion hybrids (liger and tigon).
As seen by different disciplines
Animal and plant breeding
From the point of view of animal and plant breeders, there are several kinds of hybrid formed from crosses within a species, such as between different breeds. Single cross hybrids result from the cross between two true-breeding organisms, which produces an F1 hybrid (first filial generation). The cross between two different homozygous lines produces an F1 hybrid that is heterozygous, having two different alleles, one contributed by each parent; typically one is dominant and the other recessive. Typically, the F1 generation is also phenotypically homogeneous, producing offspring that are all similar to each other.
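A standard one-gene illustration of a single cross (the alleles A and a are hypothetical, chosen only for this sketch):

\[
AA \times aa \ \longrightarrow\ \text{F}_1 : \ \text{all } Aa , \qquad Aa \times Aa \ \longrightarrow\ \text{F}_2 : \ \tfrac{1}{4}\, AA : \tfrac{1}{2}\, Aa : \tfrac{1}{4}\, aa ,
\]

showing why the F1 generation is uniformly heterozygous while later generations segregate.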
Double cross hybrids result from the cross between two different F1 hybrids (i.e., there are four unrelated grandparents).
Three-way cross hybrids result from the cross between an F1 hybrid and an inbred line. Triple cross hybrids result from the crossing of two different three-way cross hybrids. Top cross (or "topcross") hybrids result from the crossing of a top quality or pure-bred male and a lower quality female, intended to improve the quality of the offspring, on average.
Population hybrids result from the crossing of plants or animals in one population with those of another population. These include interspecific hybrids or crosses between different breeds. In biology, the result of crossing of two populations is called a synthetic population.
In horticulture, the term stable hybrid is used to describe an annual plant that, if grown and bred in a small monoculture free of external pollen (e.g., an air-filtered greenhouse) produces offspring that are "true to type" with respect to phenotype; i.e., a true-breeding organism.
Biogeography
Hybridization can occur in the hybrid zones where the geographical ranges of species, subspecies, or distinct genetic lineages overlap. For example, the butterfly Limenitis arthemis has two major subspecies in North America, L. a. arthemis (the white admiral) and L. a. astyanax (the red-spotted purple). The white admiral has a bright, white band on its wings, while the red-spotted purple has cooler blue-green shades. Hybridization occurs in a narrow area across New England, southern Ontario, and the Great Lakes, the "suture region". It is in these regions that the subspecies were formed. Other hybrid zones have formed between described species of plants and animals.
Genetics
From the point of view of genetics, several different kinds of hybrid can be distinguished.
A genetic hybrid carries two different alleles of the same gene, where for instance one allele may code for a lighter coat colour than the other. A structural hybrid results from the fusion of gametes that have differing structure in at least one chromosome, as a result of structural abnormalities. A numerical hybrid results from the fusion of gametes having different haploid numbers of chromosomes. A permanent hybrid results when only the heterozygous genotype occurs, as in Oenothera lamarckiana, because all homozygous combinations are lethal. In the early history of genetics, Hugo de Vries supposed these were caused by mutation.
Genetic complementation
Genetic complementation is a hybridization test widely used in genetics to determine whether two separately isolated mutants that have the same (or similar) phenotype are defective in the same gene or in different genes (see complementation). If a hybrid organism containing the genomes of two different mutant parental organisms displays a wild type phenotype, it is ordinarily considered that the two parental mutant organisms are defective in different genes. If the hybrid organism displays a distinctly mutant phenotype, the two mutant parental organisms are considered to be defective in the same gene. However, in some cases the hybrid organism may display a phenotype that is only weakly (or partially) wild-type, and this may reflect intragenic (interallelic) complementation.
Taxonomy
From the point of view of taxonomy, hybrids differ according to their parentage.
Hybrids between different subspecies (such as between the dog and Eurasian wolf) are called intra-specific hybrids. Interspecific hybrids are the offspring from interspecies mating; these sometimes result in hybrid speciation. Intergeneric hybrids result from matings between different genera, such as between sheep and goats. Interfamilial hybrids, such as between chickens and guineafowl or pheasants, are reliably described but extremely rare. Interordinal hybrids (between different orders) are few, but have been engineered between the sea urchin Strongylocentrotus purpuratus (female) and the sand dollar Dendraster excentricus (male).
Biology
Expression of parental traits
When two distinct types of organisms breed with each other, the resulting hybrids typically have intermediate traits (e.g., one plant parent has red flowers, the other has white, and the hybrid, pink flowers). Commonly, hybrids also combine traits seen only separately in one parent or the other (e.g., a bird hybrid might combine the yellow head of one parent with the orange belly of the other).
Mechanisms of reproductive isolation
Interspecific hybrids are bred by mating individuals from two species, normally from within the same genus. The offspring display traits and characteristics of both parents, but are often sterile, preventing gene flow between the species. Sterility is often attributed to the different number of chromosomes between the two species. For example, donkeys have 62 chromosomes, horses have 64 chromosomes, and mules or hinnies have 63 chromosomes. Mules, hinnies, and other normally sterile interspecific hybrids cannot produce viable gametes because differences in chromosome structure prevent appropriate pairing and segregation during meiosis, so viable sperm and eggs are not formed. However, fertility in female mules has been reported with a donkey as the father.
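The mule's chromosome count follows directly from the haploid contributions of the two parents, a short check using the figures given above:

\[
\tfrac{62}{2} + \tfrac{64}{2} = 31 + 32 = 63 .
\]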
A variety of mechanisms limit the success of hybridization, including the large genetic difference between most species. Barriers include morphological differences, differing times of fertility, mating behaviors and cues, and physiological rejection of sperm cells or the developing embryo. Some act before fertilization; others after it.
In plants, some barriers to hybridization include blooming period differences, different pollinator vectors, inhibition of pollen tube growth, somatoplastic sterility, cytoplasmic-genic male sterility and structural differences of the chromosomes.
Speciation
A few animal species are the result of hybridization. The Lonicera fly is a natural hybrid. The American red wolf appears to be a hybrid of the gray wolf and the coyote, although its taxonomic status has been a subject of controversy. The European edible frog is a semi-permanent hybrid between pool frogs and marsh frogs; its population requires the continued presence of at least one of the parent species. Cave paintings indicate that the European bison is a natural hybrid of the aurochs and the steppe bison.
Plant hybridization is more commonplace compared to animal hybridization. Many crop species are hybrids, including notably the polyploid wheats: some have four sets of chromosomes (tetraploid) or six (hexaploid), while other wheat species have (like most eukaryotic organisms) two sets (diploid), so hybridization events likely involved the doubling of chromosome sets, causing immediate genetic isolation.
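As a worked instance, using standard cytogenetic figures assumed here rather than given in the text (the base chromosome number of wheat is x = 7):

\[
\text{diploid einkorn: } 2n = 2x = 14, \qquad \text{tetraploid durum: } 2n = 4x = 28, \qquad \text{hexaploid bread wheat: } 2n = 6x = 42 .
\]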
Hybridization may be important in speciation in some plant groups. However, homoploid hybrid speciation (not increasing the number of sets of chromosomes) may be rare: by 1997, only eight natural examples had been fully described. Experimental studies suggest that hybridization offers a rapid route to speciation, a prediction confirmed by the fact that early generation hybrids and ancient hybrid species have matching genomes, meaning that once hybridization has occurred, the new hybrid genome can remain stable.
Many hybrid zones are known where the ranges of two species meet, and hybrids are continually produced in great numbers. These hybrid zones are useful as biological model systems for studying the mechanisms of speciation. Recently DNA analysis of a bear shot by a hunter in the Northwest Territories confirmed the existence of naturally occurring and fertile grizzly–polar bear hybrids.
Hybrid vigour
Hybridization between reproductively isolated species often results in hybrid offspring with lower fitness than either parent. However, hybrids are not, as might be expected, always intermediate between their parents (as if there were blending inheritance), but are sometimes stronger or perform better than either parental lineage or variety, a phenomenon called heterosis, hybrid vigour, or heterozygote advantage. This is most common with plant hybrids. A transgressive phenotype is a phenotype that displays more extreme characteristics than either of the parent lines. Plant breeders use several techniques to produce hybrids, including line breeding and the formation of complex hybrids. An economically important example is hybrid maize (corn), which provides a considerable seed yield advantage over open pollinated varieties. Hybrid seed dominates the commercial maize seed market in the United States, Canada and many other major maize-producing countries.
In a hybrid, any trait that falls outside the range of parental variation (and is thus not simply intermediate between its parents) is considered heterotic. Positive heterosis produces more robust hybrids, which might be stronger or bigger, while negative heterosis refers to weaker or smaller hybrids. Heterosis is common in both animal and plant hybrids. For example, hybrids between a lion and a tigress ("ligers") are much larger than either of the two progenitors, while "tigons" (lioness × tiger) are smaller. Similarly, the hybrids between the common pheasant (Phasianus colchicus) and domestic fowl (Gallus gallus) are larger than either of their parents, as are those produced between the common pheasant and hen golden pheasant (Chrysolophus pictus). Spurs are absent in hybrids of the former type, although present in both parents.
Human influence
Anthropogenic hybridization
Hybridization is greatly influenced by human impact on the environment, through effects such as habitat fragmentation and species introductions. Such impacts make it difficult to conserve the genetics of populations undergoing introgressive hybridization. Humans have introduced species worldwide to environments for a long time, both intentionally for purposes such as biological control, and unintentionally, as with accidental escapes of individuals. Introductions can drastically affect populations, including through hybridization.
Management
There is a kind of continuum with three semi-distinct categories dealing with anthropogenic hybridization: hybridization without introgression, hybridization with widespread introgression (backcrossing with one of the parent species), and hybrid swarms (highly variable populations with much interbreeding as well as backcrossing with the parent species). Depending on where a population falls along this continuum, the management plans for that population will change. Hybridization is currently an area of great discussion within wildlife management and habitat management. Global climate change is creating other changes such as difference in population distributions which are indirect causes for an increase in anthropogenic hybridization.
Conservationists disagree on when the proper time is to give up on a population that is becoming a hybrid swarm, or whether to try to save the still-existing pure individuals. Once a population becomes a complete mixture, the goal becomes to conserve those hybrids to avoid their loss. Conservationists treat each case on its merits, depending on whether hybrids are detected within the population. It is nearly impossible to formulate a uniform hybridization policy, because hybridization can be beneficial when it occurs "naturally", and when hybrid swarms are the only remaining evidence of prior species, they too need to be conserved.
Genetic mixing and extinction
Regionally developed ecotypes can be threatened with extinction when new alleles or genes are introduced that alter that ecotype. This is sometimes called genetic mixing. Hybridization and introgression, which can happen in natural and hybrid populations, of new genetic material can lead to the replacement of local genotypes if the hybrids are more fit and have breeding advantages over the indigenous ecotype or species. These hybridization events can result from the introduction of non-native genotypes by humans or through habitat modification, bringing previously isolated species into contact. Genetic mixing can be especially detrimental for rare species in isolated habitats, ultimately affecting the population to such a degree that none of the originally genetically distinct population remains.
Effect on biodiversity and food security
In agriculture and animal husbandry, the Green Revolution's use of conventional hybridization increased yields by breeding high-yielding varieties. The replacement of locally indigenous breeds, compounded with unintentional cross-pollination and crossbreeding (genetic mixing), has reduced the gene pools of various wild and indigenous breeds resulting in the loss of genetic diversity. Since the indigenous breeds are often well-adapted to local extremes in climate and have immunity to local pathogens, this can be a significant genetic erosion of the gene pool for future breeding. Therefore, commercial plant geneticists strive to breed "widely adapted" cultivars to counteract this tendency.
Different taxa
In animals
Mammals
Familiar examples of equid hybrids are the mule, a cross between a female horse and a male donkey, and the hinny, a cross between a female donkey and a male horse. Pairs of complementary types like the mule and hinny are called reciprocal hybrids. Polar bears and brown bears are another case of a hybridizing species pairs, and introgression among non-sister species of bears appears to have shaped the Ursidae family tree. Among many other mammal crosses are hybrid camels, crosses between a bactrian camel and a dromedary. There are many examples of felid hybrids, including the liger. The oldest-known animal hybrid bred by humans is the kunga equid hybrid produced as a draft animal and status symbol 4,500 years ago in Umm el-Marra, present-day Syria.
The first known instance of hybrid speciation in marine mammals was discovered in 2014. The clymene dolphin (Stenella clymene) is a hybrid of two Atlantic species, the spinner and striped dolphins. In 2019, scientists confirmed that a skull found 30 years earlier was a hybrid between the beluga whale and narwhal, dubbed the narluga.
Birds
Hybridization between species is common in birds. Hybrid birds are purposefully bred by humans, but hybridization is also common in the wild. Waterfowl have a particularly high incidence of hybridization, with at least 60% of species known to produce hybrids with another species. Among ducks, mallards widely hybridize with many other species, and the genetic relationships between ducks are further complicated by the widespread gene flow between wild and domestic mallards.
One of the most common interspecific hybrids in geese occurs between Greylag and Canada geese (Anser anser × Branta canadensis). One potential mechanism for the occurrence of hybrids in these geese is interspecific nest parasitism, where an egg is laid in the nest of another species to be raised by non-biological parents. The chick imprints upon and eventually seeks a mate among the species that raised it, instead of the species of its biological parents.
Cagebird breeders sometimes breed bird hybrids known as mules between species of finch, such as goldfinch × canary.
Amphibians
Among amphibians, Japanese giant salamanders and Chinese giant salamanders have created hybrids that threaten the survival of Japanese giant salamanders because of competition for similar resources in Japan.
Fish
Among fish, a group of about 50 natural hybrids between Australian blacktip shark and the larger common blacktip shark was found by Australia's eastern coast in 2012.
Russian sturgeon and American paddlefish were hybridized in captivity when sperm from the paddlefish and eggs from the sturgeon were combined, unexpectedly resulting in viable offspring. This hybrid is called a sturddlefish.
Cephalochordates
The two genera Asymmetron and Branchiostoma are able to produce viable hybrid offspring, even if none have lived into adulthood so far, despite the parents' common ancestor living tens of millions of years ago.
Insects
Among insects, so-called killer bees were accidentally created during an attempt to breed a strain of bees that would both produce more honey and be better adapted to tropical conditions. It was done by crossing a European honey bee and an African bee.
The Colias eurytheme and C. philodice butterflies have retained enough genetic compatibility to produce viable hybrid offspring. Hybrid speciation may have produced the diverse Heliconius butterflies, but that is disputed.
The two closely related harvester ant species Pogonomyrmex barbatus and Pogonomyrmex rugosus have evolved to depend on hybridization. When a queen fertilizes her eggs with sperm from males of her own species, the offspring are always new queens. And when she fertilizes the eggs with sperm from males of the other species, the offspring are always sterile worker ants (and because ants are haplodiploid, unfertilized eggs become males). Without mating with males of the other species, the queens are unable to produce workers, and will fail to establish a colony of their own.
In plants
Plant species hybridize more readily than animal species, and the resulting hybrids are fertile more often. Many plant species are the result of hybridization, combined with polyploidy, which duplicates the chromosomes. Chromosome duplication allows orderly meiosis and so viable seed can be produced.
Plant hybrids are generally given names that include an "×" (not in italics), such as Platanus × hispanica for the London plane, a natural hybrid of P. orientalis (oriental plane) and P. occidentalis (American sycamore). The parents' names may be kept in their entirety, as seen in Prunus persica × Prunus americana, with the female parent's name given first or, if the female parent is not known, the parents' names given alphabetically.
Plant species that are genetically compatible may not hybridize in nature for various reasons, including geographical isolation, differences in flowering period, or differences in pollinators. Species that are brought together by humans in gardens may hybridize naturally, or hybridization can be facilitated by human efforts, such as altered flowering period or artificial pollination. Hybrids are sometimes created by humans to produce improved plants that have some of the characteristics of each of the parent species. Much work is now being done with hybrids between crops and their wild relatives to improve disease resistance or climate resilience for both agricultural and horticultural crops.
Some crop plants are hybrids from different genera (intergeneric hybrids), such as Triticale, × Triticosecale, a wheat–rye hybrid. Most modern and ancient wheat breeds are themselves hybrids; bread wheat, Triticum aestivum, is a hexaploid hybrid of three wild grasses. Several commercial fruits including loganberry (Rubus × loganobaccus) and grapefruit (Citrus × paradisi) are hybrids, as are garden herbs such as peppermint (Mentha × piperita), and trees such as the London plane (Platanus × hispanica). Among many natural plant hybrids is Iris albicans, a sterile hybrid that spreads by rhizome division, and Oenothera lamarckiana, a flower that was the subject of important experiments by Hugo de Vries that produced an understanding of polyploidy.
Sterility in a non-polyploid hybrid is often a result of chromosome number; if the parents have differing numbers of chromosome pairs, the offspring will have an odd number of chromosomes, which leaves them unable to produce chromosomally balanced gametes. While that is undesirable in a crop such as wheat, for which growing a crop that produces no seeds would be pointless, it is an attractive attribute in some fruits. Triploid bananas and watermelons are intentionally bred because they produce no seeds and are also parthenocarpic.
In fungi
Hybridization between fungal species is common and well established, particularly in yeast. Yeast hybrids are widely found and used in human-related activities, such as brewing and winemaking. The production of lager beers, for instance, is known to be carried out by the yeast Saccharomyces pastorianus, a cryotolerant hybrid between Saccharomyces cerevisiae and Saccharomyces eubayanus, which allows fermentation at low temperatures.
In humans
There is evidence of hybridization between modern humans and other species of the genus Homo. In 2010, the Neanderthal genome project showed that 1–4% of DNA from all people living today, apart from most Sub-Saharan Africans, is of Neanderthal heritage. An analysis of the genomes of 600 Europeans and East Asians found that, combined, they covered 20% of the Neanderthal genome present in the modern human population. Ancient human populations lived and interbred with Neanderthals, Denisovans, and at least one other extinct Homo species. Thus, Neanderthal and Denisovan DNA has been incorporated into human DNA by introgression.
In 1998, a complete prehistoric skeleton found in Portugal, the Lapedo child, had features of both anatomically modern humans and Neanderthals. Some ancient human skulls with especially large nasal cavities and unusually shaped braincases may represent human-Neanderthal hybrids. A 37,000- to 42,000-year-old human jawbone found in Romania's Oase cave contains traces of Neanderthal ancestry from only four to six generations earlier. All genes from Neanderthals in the current human population are descended from Neanderthal fathers and human mothers.
Mythology
Folk tales and myths sometimes contain mythological hybrids; the Minotaur was the offspring of a human, Pasiphaë, and a white bull. More often, they are composites of the physical attributes of two or more kinds of animals, mythical beasts, and humans, with no suggestion that they are the result of interbreeding, as in the centaur (man/horse), chimera (goat/lion/snake), hippocamp (fish/horse), and sphinx (woman/lion). The Old Testament mentions a first generation of half-human hybrid giants, the Nephilim, while the apocryphal Book of Enoch describes the Nephilim as the wicked sons of fallen angels and attractive women.
See also
AquAdvantage salmon
Bird hybrid
Canid hybrid
Chimera (genetics)
Chloroplast capture (botany)
Eukaryote hybrid genome
Endogenous retrovirus
Retrovirus
Reticulate evolution
Felid hybrids
Genetic admixture
Genetic erosion
Grex (horticulture)
Hybridity
Intergradation
Hybridogenesis
Hybrot
Inbreeding
Breeding back
Interspecific pregnancy
Horizontal gene transfer
Inferring horizontal gene transfer
GloFish
Agrobacterium, a bacterium well known for its ability to transfer DNA between itself and plants.
Gene delivery
List of plant hybrids
List of genetic hybrids
Macropod hybrids
Purebred
Selective breeding
Genetic use restriction technology
Gynogenesis
Notes
References
External links
Artificial Hybridisation – Artificial Hybridisation in orchids
Domestic Fowl Hybrids
Scientists Create Butterfly Hybrid – Creation of new species through hybridization was thought to be common only in plants, and rare in animals (archived 3 December 2008)
What is a human admixed embryo? (archived 25 February 2012)
Biology terminology
Botanical nomenclature
Evolutionary biology
Population genetics
Breeding

Hybrid balance
https://en.wikipedia.org/wiki/Hybrid%20balance

In telecommunications, a hybrid balance is an expression of the degree of electrical symmetry between two impedances connected to two conjugate sides of a hybrid coil or resistance hybrid. It is usually expressed in dB.
If the respective impedances of the branches of the hybrid that are connected to the conjugate sides of the hybrid are known, hybrid balance may be computed by the formula for return loss.
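As a rough numerical illustration (not from the source), hybrid balance can be evaluated with the standard return-loss formula for two impedances; the function name hybrid_balance_db and the example impedances below are assumptions:

```python
import math

def hybrid_balance_db(z1: complex, z2: complex) -> float:
    """Hybrid balance via the return-loss formula for two branch impedances."""
    gamma = abs((z1 - z2) / (z1 + z2))  # reflection coefficient magnitude
    # Perfectly matched branches give zero reflection, i.e. infinite balance.
    return math.inf if gamma == 0 else -20 * math.log10(gamma)

# Example: a 600-ohm branch against a slightly reactive 620-ohm branch.
print(round(hybrid_balance_db(600 + 0j, 620 - 40j), 1))  # ~28.7 dB
```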
Telecommunications engineering
Electrical parameters

Hydroxyl ion absorption
https://en.wikipedia.org/wiki/Hydroxyl%20ion%20absorption

Hydroxyl ion absorption is the absorption in optical fibers of electromagnetic radiation, including the near-infrared, due to the presence of trapped hydroxyl ions remaining from water as a contaminant.
The hydroxyl (OH−) ion can penetrate glass during or after product fabrication, resulting in significant attenuation of discrete optical wavelengths, e.g., centred at 1.383 μm, used for communications via optical fibres.
See also
Electromagnetic absorption by water
References
Fiber optics
Glass engineering and science

Identifier
https://en.wikipedia.org/wiki/Identifier

An identifier is a name that identifies (that is, labels the identity of) either a unique object or a unique class of objects, where the "object" or class may be an idea, person, physical countable object (or class thereof), or physical noncountable substance (or class thereof). The abbreviation ID often refers to identity, identification (the process of identifying), or an identifier (that is, an instance of identification). An identifier may be a word, number, letter, symbol, or any combination of those.
The words, numbers, letters, or symbols may follow an encoding system (wherein letters, digits, words, or symbols stand for [represent] ideas or longer names) or they may simply be arbitrary. When an identifier follows an encoding system, it is often referred to as a code or ID code. For instance, the ISO/IEC 11179 metadata registry standard defines a code as a system of valid symbols that substitute for longer values, in contrast to identifiers without symbolic meaning. Identifiers that do not follow any encoding scheme are often said to be arbitrary IDs; they are arbitrarily assigned and have no greater meaning. (Sometimes identifiers are called "codes" even when they are actually arbitrary, whether because the speaker believes that they have deeper meaning or simply because they are speaking casually and imprecisely.)
The unique identifier (UID) is an identifier that refers to only one instance—only one particular object in the universe. A part number is an identifier, but it is not a unique identifier—for that, a serial number is needed, to identify each instance of the part design. Thus the identifier "Model T" identifies the class (model) of automobiles that Ford's Model T comprises; whereas the unique identifier "Model T Serial Number 159,862" identifies one specific member of that class—that is, one particular Model T car, owned by one specific person.
The concepts of name and identifier are denotatively equal, and the terms are thus denotatively synonymous; but they are not always connotatively synonymous, because code names and Id numbers are often connotatively distinguished from names in the sense of traditional natural language naming. For example, both "Jamie Zawinski" and "Netscape employee number 20" are identifiers for the same specific human being; but normal English-language connotation may consider "Jamie Zawinski" a "name" and not an "identifier", whereas it considers "Netscape employee number 20" an "identifier" but not a "name." This is an emic indistinction rather than an etic one.
Metadata
In metadata, an identifier is a language-independent label, sign or token that uniquely identifies an object within an identification scheme. The suffix "identifier" is also used as a representation term when naming a data element.
ID codes may inherently carry metadata along with them. For example, when you know that the food package in front of you has the identifier "2011-09-25T15:42Z-MFR5-P02-243-45", you not only have that data, you also have the metadata that tells you that it was packaged on September 25, 2011, at 3:42pm UTC, manufactured by Licensed Vendor Number 5, at the Peoria, IL, USA plant, in Building 2, and was the 243rd package off the line in that shift, and was inspected by Inspector Number 45.
Arbitrary identifiers might lack metadata. For example, if a food package just says 100054678214, its ID may not tell anything except identity—no date, manufacturer name, production sequence rank, or inspector number. In some cases, arbitrary identifiers such as sequential serial numbers leak information (i.e. the German tank problem). Opaque identifiers—identifiers designed to avoid leaking even that small amount of information—include "really opaque pointers" and Version 4 UUIDs.
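A minimal sketch of the contrast drawn above, using Python's standard uuid module; the variable names and the sample serial number are illustrative only:

```python
import uuid

# Sequential serial numbers are arbitrary but leak information: observing a
# few of them reveals production order and volume (the German tank problem).
sequential_ids = [100054678214 + i for i in range(3)]

# A Version 4 UUID is drawn from a large random space and is opaque: it
# carries no date, manufacturer, or sequence metadata.
opaque_ids = [str(uuid.uuid4()) for _ in range(3)]

print(sequential_ids)
print(opaque_ids)
```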
In computer science
In computer science, identifiers (IDs) are lexical tokens that name entities. Identifiers are used extensively in virtually all information processing systems. Identifying entities makes it possible to refer to them, which is essential for any kind of symbolic processing.
In computer languages
In computer languages, identifiers are tokens (also called symbols) which name language entities. Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
Ambiguity
Identifiers (IDs) versus Unique identifiers (UIDs)
A resource may carry multiple identifiers. Typical examples are:
One person with multiple names, nicknames, and forms of address (titles, salutations)
For example: One specific person may be identified by all of the following identifiers: Jane Smith; Jane Elizabeth Meredith Smith; Jane E. M. Smith; Jane E. Smith; Janie Smith; Janie; Little Janie (as opposed to her mother or sister or cousin, Big Janie); Aunt Jane; Auntie Janie; Mom; Grandmom; Nana; Kelly's mother; Billy's grandmother; Ms. Smith; Dr. Smith; Jane E. Smith, PhD; and Fuzzy (her jocular nickname at work).
One document with multiple versions
One substance with multiple names (for example, CAS index names versus IUPAC names; INN generic drug names versus USAN generic drug names versus brand names)
The inverse is also possible, where multiple resources are represented with the same identifier (discussed below).
Implicit context and namespace conflicts
Many codes and nomenclatural systems originate within a small namespace. Over the years, some of them bleed into larger namespaces (as people interact in ways they formerly had not, e.g., cross-border trade, scientific collaboration, military alliance, and general cultural interconnection or assimilation). When such dissemination happens, the limitations of the original naming convention, which had formerly been latent and moot, become painfully apparent, often necessitating retronymy, synonymity, translation/transcoding, and so on. Such limitations generally accompany the shift away from the original context to the broader one. Typically the system shows implicit context (context was formerly assumed, and narrow), lack of capacity (e.g., low number of possible IDs, reflecting the outmoded narrow context), lack of extensibility (no features defined and reserved against future needs), and lack of specificity and disambiguating capability (related to the context shift, where longstanding uniqueness encounters novel nonuniqueness). Within computer science, this problem is called a naming collision. The story of the origination and expansion of the CODEN system provides a good case example in a recent-decades, technical-nomenclature context. The capitalization variations seen with specific designators reveal an instance of this problem occurring in natural languages, where the proper noun/common noun distinction (and its complications) must be dealt with. A universe in which every object had a UID would not need any namespaces, which is to say that it would constitute one gigantic namespace; but human minds could never keep track of, or semantically interrelate, so many UIDs.
Identifiers in various disciplines
See also
References
External links
Metadata

Improved-definition television
https://en.wikipedia.org/wiki/Improved-definition%20television

Improved-definition television (IDTV) or enhanced-quality television transmitters and receivers exceed the performance requirements of the NTSC standard, while remaining within the general parameters of NTSC emissions standards.
IDTV improvements may be made at the television transmitter or receiver. Improvements include enhancements in encoding, digital filtering, scan interpolation, interlaced line scanning, and ghost cancellation.
IDTV improvements must allow the TV signal to be transmitted and received in the standard 4:3 aspect ratio.
The only relevant implementation of IDTV for NTSC-based broadcasts before the introduction of full-digital TV distribution (DTV) was the Japanese Clear-Vision. In European countries, PALplus and MAC had a similar role.
The more commonly used term for advanced display technology before the advent of high-definition television (HDTV) was enhanced-definition television (EDTV), used for instance for plasma TV sets with a 16:9 aspect ratio in the early 2000s.
See also
Comb filter
Federal Standard 1037C
Video scaler
References
Television technology

Index-matching material
https://en.wikipedia.org/wiki/Index-matching%20material

In optics, an index-matching material is a substance, usually a liquid, cement (adhesive), or gel, which has an index of refraction that closely approximates that of another object (such as a lens, material, fiber-optic, etc.).
When two substances with the same index are in contact, light passes from one to the other with neither reflection nor refraction. As such, they are used for various purposes in science, engineering, and art.
For example, in a popular home experiment, a glass rod is made almost invisible by immersing it in an index-matched transparent fluid such as mineral spirits.
In microscopy
In light microscopy, oil immersion is a technique used to increase the resolution of a microscope. This is achieved by immersing both the objective lens and the specimen in a transparent oil of high refractive index, thereby increasing the numerical aperture of the objective lens.
Immersion oils are transparent oils that have specific optical and viscosity characteristics necessary for use in microscopy. Typical oils used have an index of refraction around 1.515. An oil immersion objective is an objective lens specially designed to be used in this way. The index of the oil is typically chosen to match the index of the microscope lens glass, and of the cover slip.
For more details, see the main article, oil immersion. Some microscopes also use other index-matching materials besides oil; see water immersion objective and solid immersion lens.
In fiber optics
In fiber optics and telecommunications, an index-matching material may be used in conjunction with pairs of mated connectors or with mechanical splices to reduce signal reflected in the guided mode (known as return loss) (see Optical fiber connector). Without the use of an index-matching material, Fresnel reflections will occur at the smooth end faces of a fiber unless there is no fiber-air interface or other significant mismatch in refractive index. These reflections may be as high as −14 dB (i.e., 14 dB below the optical power of the incident signal). When the reflected signal returns to the transmitting end, it may be reflected again and return to the receiving end at a level that is 28 dB plus twice the fiber loss below the direct signal. The reflected signal will also be delayed by twice the delay time introduced by the fiber. The twice-reflected, delayed signal superimposed on the direct signal may noticeably degrade an analog baseband intensity-modulated video signal. Conversely, for digital transmission, the reflected signal will often have no practical effect on the detected signal seen at the decision point of the digital optical receiver except in marginal cases where bit-error ratio is significant. However, certain digital transmitters such as those employing a Distributed Feedback Laser may be affected by back reflection and then fall outside specifications such as Side Mode Suppression Ratio, potentially degrading system bit error ratio, so networking standards intended for DFB lasers may specify a back-reflection tolerance such as −10 dB for transmitters so that they remain within specification even without index matching. This back-reflection tolerance might be achieved using an optical isolator or by way of reduced coupling efficiency.
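The −14 dB figure can be checked with the normal-incidence Fresnel reflectance formula; this is a minimal sketch assuming a typical silica fiber index of about 1.468 and a hypothetical matching gel of index 1.45:

```python
import math

def fresnel_reflectance(n1: float, n2: float) -> float:
    """Power reflectance at normal incidence across a refractive-index step."""
    return ((n1 - n2) / (n1 + n2)) ** 2

def reflected_level_db(n1: float, n2: float) -> float:
    """Reflected power relative to incident power, in dB."""
    return 10 * math.log10(fresnel_reflectance(n1, n2))

# Bare silica fiber end face against air: about -14 dB, as quoted above.
print(round(reflected_level_db(1.468, 1.0), 1))

# The same face against an index-matching gel: the reflection drops by
# roughly 30 dB.
print(round(reflected_level_db(1.468, 1.45), 1))
```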
For some applications, instead of standard polished connectors (e.g. FC/PC), angle polished connectors (e.g. FC/APC) may be used, whereby the non-perpendicular polish angle greatly reduces the ratio of reflected signal launched into the guided mode even in the case of a fiber-air interface.
In experimental fluid dynamics
Index matching is used in liquid-liquid and liquid-solid (multiphase flow) experimental systems to minimise the distortions that occur in these systems. This is particularly important for systems with many interfaces, which otherwise become optically inaccessible. Matching the refractive index minimises the reflections, refractions, diffractions, and rotations that occur at the interfaces, allowing access to regions that would otherwise be inaccessible to optical measurements. This is particularly important for advanced optical measurements like laser-induced fluorescence, particle image velocimetry, and particle tracking velocimetry, to name a few.
In art conservation
If a sculpture is broken into several pieces, art conservators may reattach the pieces using an adhesive such as Paraloid B-72 or epoxy. If the sculpture is made of a transparent or semitransparent material (such as glass), the seam where the pieces are attached will usually be much less noticeable if the refractive index of the adhesive matches the refractive index of the surrounding object. Therefore, art conservators may measure the index of objects and then use an index-matched adhesive. Similarly, losses (missing sections) in transparent or semitransparent objects are often filled using an index-matched material.
In optical component adhesives
Certain optical components, such as a Wollaston prism or Nicol prism, are made of multiple transparent pieces that are directly attached to each other. The adhesive is usually index-matched to the pieces. Historically, Canada balsam was used in this application, but it is now more common to use epoxy or other synthetic adhesives.
References
Fiber optics
Optical materials

Inductive coupling
https://en.wikipedia.org/wiki/Inductive%20coupling

In electrical engineering, two conductors are said to be inductively coupled or magnetically coupled when they are configured in a way such that change in current through one wire induces a voltage across the ends of the other wire through electromagnetic induction. A changing current through the first wire creates a changing magnetic field around it by Ampere's circuital law. The changing magnetic field induces an electromotive force (EMF) voltage in the second wire by Faraday's law of induction. The amount of inductive coupling between two conductors is measured by their mutual inductance.
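As a hedged illustration of the Faraday's-law relationship just described, the sketch below computes the EMF induced through an assumed mutual inductance; the numeric values are invented for the example:

```python
import math

def induced_emf(mutual_inductance: float, di_dt: float) -> float:
    """EMF (volts) induced in the second circuit: e = -M * di/dt."""
    return -mutual_inductance * di_dt

# Peak EMF coupled through an assumed M = 2 mH by a 50 Hz, 10 A-peak
# sinusoidal current; max of d/dt [10*sin(2*pi*50*t)] is 2*pi*50*10 A/s.
peak_di_dt = 2 * math.pi * 50 * 10
print(round(abs(induced_emf(2e-3, peak_di_dt)), 2))  # ~6.28 V peak
```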
The coupling between two wires can be increased by winding them into coils and placing them close together on a common axis, so the magnetic field of one coil passes through the other coil. Coupling can also be increased by a magnetic core of a ferromagnetic material like iron or ferrite in the coils, which increases the magnetic flux. The two coils may be physically contained in a single unit, as in the primary and secondary windings of a transformer, or may be separated. Coupling may be intentional or unintentional. Unintentional inductive coupling can cause signals from one circuit to be induced into a nearby circuit, this is called cross-talk, and is a form of electromagnetic interference.
An inductively coupled transponder consists of a solid state transceiver chip connected to a large coil that functions as an antenna. When brought within the oscillating magnetic field of a reader unit, the transceiver is powered up by energy inductively coupled into its antenna and transfers data back to the reader unit inductively.
Magnetic coupling between two magnets can also be used to mechanically transfer power without contact, as in the magnetic gear.
Uses
Inductive coupling is widely used throughout electrical technology; examples include:
Electric motors and generators
Inductive charging products
Induction cookers and induction heating systems
Induction loop communication systems
Metal detectors
Transformers
Wireless power transfer
Testing:
Radio-frequency identification
Presence of voltage
Low-frequency induction
Low-frequency induction can be a dangerous form of inductive coupling when it happens inadvertently. For example, if a long-distance metal pipeline is installed along a right of way in parallel with a high-voltage power line, the power line can induce current on the pipe. Since the pipe is a conductor, insulated by its protective coating from the earth, it acts as a secondary winding for a long, drawn out transformer whose primary winding is the power line. Voltages induced on the pipe are then a hazard to people operating valves or otherwise touching metal parts of the metal pipeline.
Reducing low-frequency magnetic fields may be necessary when dealing with electronics, as sensitive circuits in close proximity to an instrument with a power transformer may pick up the mains frequency. Twisted wires (e.g. in networking cables) are an effective way of reducing the interference, as signals induced in the successive twists cancel. Magnetic shielding is also an effective way of reducing unwanted inductive coupling, though moving the source of the magnetic field away from sensitive electronics is the simplest solution if possible.
Although induced currents can be harmful, they can also be helpful. Electrical distribution line engineers use inductive coupling to tap power for cameras on towers and at substations, allowing remote monitoring of the facilities. This lets them watch from anywhere without needing to change camera batteries or maintain solar panels.
References
Electronic engineering
Electromagnetic compatibility
Wireless energy transfer

Information-transfer transaction
https://en.wikipedia.org/wiki/Information-transfer%20transaction

A transaction is a change of state. An information-transfer transaction is a transaction in which one of the following aspects of the information changes: content, ownership, location, format, etc. An information-transfer transaction usually consists of three consecutive phases called the access phase, the information transfer phase, and the disengagement phase. Common examples of such transactions are the copying and the transporting of information. Every transaction also carries associated costs; for the transfer of information, these costs include time and means (money).
History of Information-transfer transactions
There are many social systems and devices that have contributed to information-transfer transactions, from people writing letters using postal systems to emailing using information technology. Two main examples of technological development in information-transfer transactions are the copying and the transporting of information.
History of Copying
Copying is the process of duplicating information with a change of location or format of the original information. The transfer of information through copying has been going on for ages, and there have been many advances in technology to decrease the time it takes to make copies of information. The art of copying started with people having to write a copy out by hand, moved to the printing press, and has arrived at digital copying with ICTs. These developments led to quicker information-transfer transactions in the form of distributing copies of original information to others through changes of location or format.
History of Transporting
Transporting is the movement of information with a change of location or ownership of the original information. The transfer of information through transporting has been going on for ages, and there have been social and technological developments to decrease the time it takes for information to change ownership or location. The transportation of information started with people sending letters by foot, then by horse, then through public and international postal services, and finally over technology networks. It is these developments which led to the ability to send information further and quicker through information-transfer transactions.
Transaction Costs
Every time a transaction occurs, there are costs to be considered. In the case of information-transfer transactions, one must consider the costs of time and means (money). These two costs are correlated with one another in that, to decrease one, you must often increase the other. For example, say Person #1 sends a letter through the mail, while Person #2 sends letters through email. For Person #1 to send their letter they had to buy paper, a means of writing, envelopes, stamps, etc., while Person #2 needed to buy a source of electricity, internet access, computer technology, etc. to send an email. It seems as if Person #1 has lower transaction costs than Person #2 in terms of means; however, when you look at both information-transfer transactions in terms of time, it is a different story. Although Person #1 had little cost in sending their letter, the transfer of that letter takes about three or more days, while Person #2's transfer through email happens in less than a minute, at higher costs in means. Therefore, for information-transfer transaction times to decrease, the costs of means have to increase, and vice versa.
Telecommunication
In telecommunications, an information-transfer transaction is a coordinated sequence of user and communications system actions that cause information present at a source user to become present at a destination user.
References
Data transmission
Knowledge transfer
Information management

Insertion gain
https://en.wikipedia.org/wiki/Insertion%20gain

In telecommunications, insertion gain is the gain resulting from the insertion of a device in a transmission line, expressed as the ratio of the signal power delivered to that part of the line following the device to the signal power delivered to that same part before insertion. Gains less than unity indicate insertion loss. Incident power is made up of two parts: the reflection from the device and the power absorbed by the device.
Insertion gain is usually expressed in decibels.
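A minimal sketch of the definition above, assuming power levels measured at the same point on the line before and after insertion; the function name is illustrative:

```python
import math

def insertion_gain_db(p_after: float, p_before: float) -> float:
    """Insertion gain in dB; negative values indicate insertion loss."""
    return 10 * math.log10(p_after / p_before)

print(round(insertion_gain_db(2e-3, 1e-3), 2))    # device doubles power: +3.01 dB
print(round(insertion_gain_db(0.5e-3, 1e-3), 2))  # device halves power: -3.01 dB
```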
References
Telecommunications engineering

Insertion loss
https://en.wikipedia.org/wiki/Insertion%20loss

In telecommunications, insertion loss is the loss of signal power resulting from the insertion of a device in a transmission line or optical fiber and is usually expressed in decibels (dB).
If the power transmitted to the load before insertion is $P_\mathrm{T}$ and the power received by the load after insertion is $P_\mathrm{R}$, then the insertion loss in decibels is given by

$$\mathrm{IL}(\mathrm{dB}) = 10 \log_{10} \frac{P_\mathrm{T}}{P_\mathrm{R}}$$
Electronic filters
Insertion loss is a figure of merit for an electronic filter and this data is generally specified with a filter. Insertion loss is defined as the ratio of the signal level in a test configuration without the filter installed ($|V_1|$) to the signal level with the filter installed ($|V_2|$). This ratio is expressed in decibels by the following equation:

$$\mathrm{IL}(\mathrm{dB}) = 20 \log_{10} \frac{|V_1|}{|V_2|}$$

For passive filters, $|V_2|$ will be smaller than $|V_1|$. In this case, the insertion loss is positive and measures how much smaller the signal is after adding the filter.
Link with scattering parameters
In case the two measurement ports use the same reference impedance, the insertion loss ($\mathrm{IL}$) is defined as:

$$\mathrm{IL}(\mathrm{dB}) = -20 \log_{10} |S_{21}|$$

Here $S_{21}$ is one of the scattering parameters. Insertion loss is the extra loss produced by the introduction of the DUT between the 2 reference planes of the measurement. The extra loss can be introduced by intrinsic loss in the DUT and/or mismatch. In case of extra loss the insertion loss is defined to be positive.
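A small sketch of the S-parameter form of the definition; the example $S_{21}$ values are assumptions:

```python
import math

def insertion_loss_db(s21: complex) -> float:
    """Insertion loss from the forward transmission coefficient S21."""
    return -20 * math.log10(abs(s21))

# A DUT that passes half of the incident wave amplitude:
print(round(insertion_loss_db(0.5 + 0j), 2))   # 6.02 dB
# A nearly transparent DUT:
print(round(insertion_loss_db(0.99 + 0j), 2))  # 0.09 dB
```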
See also
Mismatch loss
Return loss
References
Telecommunications engineering
Engineering ratios

Intelligent Network
https://en.wikipedia.org/wiki/Intelligent%20Network

The Intelligent Network (IN) is the standard network architecture specified in the ITU-T Q.1200 series recommendations. It is intended for fixed as well as mobile telecom networks. It allows operators to differentiate themselves by providing value-added services in addition to the standard telecom services such as PSTN, ISDN on fixed networks, and GSM services on mobile phones or other mobile devices.
The intelligence is provided by network nodes on the service layer, distinct from the switching layer of the core network, as opposed to solutions based on intelligence in the core switches or equipment. The IN nodes are typically owned by telecommunications service providers such as a telephone company or mobile phone operator.
IN is supported by the Signaling System #7 (SS7) protocol between network switching centers and other network nodes owned by network operators.
Examples of IN services
Televoting
Call screening
Local number portability
Toll-free calls/Freephone
Prepaid calling
Account card calling
Virtual private networks (such as family group calling)
Centrex service (Virtual PBX)
Private-number plans (with numbers remaining unpublished in directories)
Universal Personal Telecommunications service (a universal personal telephone number)
Mass-calling service
Prefix free dialing from cellphones abroad
Seamless MMS message access from abroad
Reverse charging
Home Area Discount
Premium Rate calls
Call distribution based on various criteria associated with the call
Location-based routing
Time-based routing
Proportional call distribution (such as between two or more call centres or offices)
Call queueing
Call transfer
History and key concepts
The IN concepts, architecture and protocols were originally developed as standards by the ITU-T which is the standardization committee of the International Telecommunication Union; prior to this a number of telecommunications providers had proprietary implementations. The primary aim of the IN was to enhance the core telephony services offered by traditional telecommunications networks, which usually amounted to making and receiving voice calls, sometimes with call divert. This core would then provide a basis upon which operators could build services in addition to those already present on a standard telephone exchange.
A complete description of the IN emerged in a set of ITU-T standards named Q.1210 to Q.1219, or Capability Set One (CS-1) as they became known. The standards defined a complete architecture including the architectural view, state machines, physical implementation and protocols. They were universally embraced by telecom suppliers and operators, although many variants were derived for use in different parts of the world (see Variants below).
Following the success of CS-1, further enhancements followed in the form of CS-2. Although the standards were completed, they were not as widely implemented as CS-1, partly because of the increasing power of the variants, but also partly because they addressed issues which pushed traditional telephone exchanges to their limits.
The major driver behind the development of the IN was the need for a more flexible way of adding sophisticated services to the existing network. Before the IN was developed, all new features and/or services had to be implemented directly in the core switch systems. This made for long release cycles as the software testing had to be extensive and thorough to prevent the network from failing. With the advent of the IN, most of these services (such as toll-free numbers and geographical number portability) were moved out of the core switch systems and into self-contained nodes, creating a modular and more secure network that allowed the service providers themselves to develop variations and value-added services to their networks without submitting a request to the core switch manufacturer and waiting for the long development process. The initial use of IN technology was for number translation services, e.g. when translating toll-free numbers to regular PSTN numbers; much more complex services have since been built on the IN, such as Custom Local Area Signaling Services (CLASS) and prepaid telephone calls.
SS7 architecture
The main concepts (functional view) surrounding IN services or architecture are connected with SS7 architecture:
Service Switching Function (SSF) or Service Switching Point (SSP) is co-located with the telephone exchange, and acts as the trigger point for further services to be invoked during a call. The SSP implements the Basic Call State Machine (BCSM) which is a finite-state machine that represents an abstract view of a call from beginning to end (off hook, dialing, answer, no answer, busy, hang up, etc.). As each state is traversed, the exchange encounters Detection Points (DPs) at which the SSP may invoke a query to the SCP to wait for further instructions on how to proceed. This query is usually called a trigger. Trigger criteria are defined by the operator and might include the subscriber calling number or the dialed number. The SSF is responsible for controlling calls requiring value added services; a simplified sketch of this detection-point and trigger flow follows the list of functional entities below.
Service Control Function (SCF) or Service Control Point (SCP) is a separate set of platforms that receive queries from the SSP. The SCP contains service logic which implements the behaviour desired by the operator, i.e., the services. During service logic processing, additional data required to process the call may be obtained from the SDF. The logic on the SCP is created using the SCE.
Service Data Function (SDF) or Service Data Point (SDP) is a database that contains additional subscriber data, or other data required to process a call. For example, the subscriber's remaining prepaid credit may be stored in the SDF to be queried in real-time during the call. The SDF may be a separate platform or co-located with the SCP.
Service Management Function (SMF) or Service Management Point (SMP) is a platform or cluster of platforms that operators use to monitor and manage the IN services. It contains the management database which stores the services' configuration, collects the statistics and alarms, and stores the Call Data Reports and Event Data Reports.
Service Creation Environment (SCE) is the development environment used to create the services present on the SCP. Although the standards permit any type of environment, it is fairly rare to see low level languages like C used. Instead, proprietary graphical languages are used to enable telecom engineers to create services directly. The languages are usually of the fourth-generation type, and the engineer may use a graphical interface to build or change a service.
Specialized Resource Function (SRF) or Intelligent Peripheral (IP) is a node which can connect to both the SSP and the SCP and deliver special resources into the call, mostly related to voice communication, for example to play voice announcements or collect DTMF tones from the user.
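To make the detection-point mechanism concrete, here is a minimal, hedged sketch in Python. It is not the CS-1 state machine itself: the detection-point names are loosely borrowed from the BCSM, and scp_translate merely stands in for a real INAP dialogue with an SCP.

```python
# Detection points are modeled as named hooks with operator-defined criteria.
TRIGGERS = {
    # Example trigger criterion: freephone numbers at "collected_info".
    "collected_info": lambda call: call["dialed"].startswith("0800"),
}

def scp_translate(dialed: str) -> str:
    # Stand-in for SCP service logic and its SDF lookup.
    return {"0800123456": "+15551234567"}.get(dialed, dialed)

def run_call(call: dict) -> str:
    # A drastically reduced BCSM: traverse a few detection points in order.
    for dp in ("origination_attempt", "collected_info", "analyzed_info"):
        trigger = TRIGGERS.get(dp)
        if trigger and trigger(call):
            # Suspend basic call processing and ask the SCP what to do next.
            call["dialed"] = scp_translate(call["dialed"])
    return f"route to {call['dialed']}"

print(run_call({"dialed": "0800123456"}))  # route to +15551234567
```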
Protocols
The core elements described above use standard protocols to communicate with each other. The use of standard protocols allows different manufacturers to concentrate on different parts of the architecture and be confident that they will all work together in any combination.
The interfaces between the SSP and the SCP are SS7 based and have similarities with TCP/IP protocols. The SS7 protocols implement much of the OSI seven-layer model. This means that the IN standards only had to define the application layer, which is called the Intelligent Networks Application Part or INAP. The INAP messages are encoded using ASN.1.
The interface between the SCP and the SDP is defined in the standards to be an X.500 Directory Access Protocol or DAP. A more lightweight interface called LDAP has emerged from the IETF which is considerably simpler to implement, so many SCPs have implemented that instead.
Variants
The core CS-1 specifications were adopted and extended by other standards bodies. European flavours were developed by ETSI, American flavours were developed by ANSI, and Japanese variants also exist. The main reasons for producing variants in each region was to ensure interoperability between equipment manufactured and deployed locally (for example different versions of the underlying SS7 protocols exist between the regions).
New functionality was also added which meant that variants diverged from each other and the main ITU-T standard. The biggest variant was called Customised Applications for Mobile networks Enhanced Logic, or CAMEL for short. This allowed for extensions to be made for the mobile phone environment, and allowed mobile phone operators to offer the same IN services to subscribers while they are roaming as they receive in the home network.
CAMEL has become a major standard in its own right and is currently maintained by 3GPP. The last major release of the standard was CAMEL phase 4. It is the only IN standard currently being actively worked on.
Bellcore (subsequently Telcordia Technologies) developed the Advanced Intelligent Network (AIN) as the variant of Intelligent Network for North America, and performed the standardization of the AIN on behalf of the major US operators. The original goal of AIN was AIN 1.0, which was specified in the early 1990s (AIN Release 1, Bellcore SR-NWT-002247, 1993). AIN 1.0 proved technically infeasible to implement, which led to the definition of simplified AIN 0.1 and AIN 0.2 specifications. In North America, Telcordia SR-3511 (originally known as TA-1129+) and GR-1129-CORE protocols serve to link switches with the IN systems such as Service Control Points (SCPs) or Service Nodes. SR-3511 details a TCP/IP-based protocol which directly connects the SCP and Service Node. GR-1129-CORE provides generic requirements for an ISDN-based protocol which connects the SCP to the Service Node via the SSP.
Future
While activity in development of IN standards has declined in recent years, there are many systems deployed across the world which use this technology. The architecture has proved to be not only stable, but also a continuing source of revenue with new services added all the time. Manufacturers continue to support the equipment and obsolescence is not an issue.
Nevertheless, new technologies and architectures have emerged, especially in the area of VoIP and SIP. More attention is being paid to the use of APIs in preference to protocols like INAP, and new standards have emerged in the form of JAIN and Parlay. From a technical viewpoint, the SCE began to move away from its proprietary graphical origins towards a Java application server environment.
The meaning of "intelligent network" is evolving over time, largely driven by breakthroughs in computation and algorithms: from networks enhanced by more flexible algorithms and more advanced protocols, to networks designed using data-driven models, to AI-enabled networks.
See also
IP Multimedia Subsystem
Service layer
Value-added service
Notes
References
External links
Tutorial on Intelligent Networks (archived 24 July 2011)
ITU-T recommendations
GSM standard
Signaling System 7
Network architecture

Intensity modulation
https://en.wikipedia.org/wiki/Intensity%20modulation

In optical communications, intensity modulation (IM) is a form of modulation in which the optical power output of a source is varied in accordance with some characteristic of the modulating signal. The envelope of the modulated optical signal is an analog of the modulating signal in the sense that the instantaneous power of the envelope is an analog of the characteristic of interest in the modulating signal.
The recovery of the modulating signal is typically achieved by direct detection, not heterodyning. However, optical heterodyne detection is possible and has been actively studied since 1979. Bell Laboratories had a working, but impractical, system in 1969. Heterodyne and homodyne systems are of interest because they are expected to produce an increase in sensitivity, allowing longer hops between islands, for instance. Such systems also have the important advantage of very narrow channel spacing in optical frequency-division multiplexing (OFDM) systems. OFDM is a step beyond wavelength-division multiplexing (WDM). Normal WDM using direct detection does not achieve anything like the close channel spacing of radio-frequency FDM.
Intensity modulation with direct detection
Intensity modulation / direct detection (IM/DD) is a scheme that is simple and cost-effective in fiber-optic communication, making it suitable for various optical communication applications. It involves modulating the optical power of the carrier signal to represent the transmitted data. This modulation can be achieved using techniques such as on-off keying (OOK). The intensity-modulated optical signal is generated by modulating the amplitude or the drive current of the light source, typically a laser diode with one of two common cavity designs, Fabry-Perot or distributed feedback (DFB).
At the receiver end, direct detection (DD) is used to recover the modulated signal. The modulated optical signal is detected by a photodetector (most commonly PIN or APD photodiode), which converts the optical power variations into corresponding electrical current or voltage variations. The output of the photodetector is then processed and decoded to retrieve the original information.
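The following is a hedged toy simulation of the OOK IM/DD chain described above, not a model of any particular system; the power level, responsivity, and noise figure are assumed values:

```python
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 10_000)

# Intensity modulation: on-off keying of the optical power (1 mW for a "1").
tx_power = bits * 1e-3  # watts

# Direct detection: photodiode current = responsivity * optical power,
# plus lumped receiver noise.
responsivity = 0.8  # A/W, a plausible value for a PIN photodiode
rx_current = responsivity * tx_power + rng.normal(0.0, 1e-4, bits.size)

# Threshold decision halfway between the two nominal signal levels.
decided = (rx_current > responsivity * 0.5e-3).astype(int)
print("bit errors:", int(np.sum(decided != bits)))
```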
See also
Photoacoustic Doppler effect
References
Further reading
William Shieh, Ivan Djordjevic, OFDM for Optical Communications, Academic Press, 2009 .
Optical communications

Interchange circuit
https://en.wikipedia.org/wiki/Interchange%20circuit

In telecommunications, an interchange circuit is a circuit that facilitates the exchange of data and signaling information between data terminal equipment (DTE) and data circuit-terminating equipment (DCE).
An interchange circuit can carry many types of signals and provide many types of service features, such as control signals, timing signals, and common return functions.
References
Communication circuits
Telecommunications equipment

Interconnect facility
https://en.wikipedia.org/wiki/Interconnect%20facility

Interconnect facility: In a communications network, one or more communications links that (a) are used to provide local area communications service among several locations and (b) collectively form a node in the network.
An interconnect facility may include network control and administrative circuits as well as the primary traffic circuits.
An interconnect facility may use any medium available and may be redundant.
References
Network architecture

Interference filter
https://en.wikipedia.org/wiki/Interference%20filter

An interference filter, dichroic filter, or thin-film filter is an optical filter that reflects some wavelengths (colors) of light and transmits others, with almost no absorption for all wavelengths of interest. An interference filter may be high-pass, low-pass, bandpass, or band-rejection. They are used in scientific applications, as well as in architectural and theatrical lighting.
An interference filter consists of multiple thin layers of dielectric material having different refractive indices. There may also be metallic layers. Interference filters are wavelength-selective by virtue of the interference effects that take place between the incident and reflected waves at the thin-film boundaries. The principle of operation is similar to a Fabry-Perot etalon.
Dichroic mirrors and dichroic reflectors are the same type of device, but are characterized by the colors of light that they reflect, rather than the colors they pass. Dielectric mirrors operate on the same principle, but focus exclusively on reflection.
Theory
Dichroic filters use the principle of thin-film interference, and produce colors in the same way as oil films on water. When light strikes an oil film at an angle, some of the light is reflected from the top surface of the oil, and some is reflected from the bottom surface where it is in contact with the water. Because the light reflecting from the bottom travels a slightly longer path, some light wavelengths are reinforced by this delay, while others tend to be canceled, producing the colors seen. The color transmitted by the filter exhibits a blue shift with increasing angle of incidence; see Dielectric mirror.
In a dichroic mirror or filter, instead of using an oil film to produce the interference, alternating layers of optical coatings with different refractive indices are built up upon a glass substrate. The interfaces between the layers of different refractive index produce phased reflections, selectively reinforcing certain wavelengths of light and interfering with other wavelengths. The layers are usually added by vacuum deposition. By controlling the thickness and number of the layers, the frequency of the passband of the filter can be tuned and made as wide or narrow as desired. Because unwanted wavelengths are reflected rather than absorbed, dichroic filters do not absorb this unwanted energy during operation and so do not become nearly as hot as the equivalent conventional filter (which attempts to absorb all energy except for that in the passband). (See Fabry–Pérot interferometer for a mathematical description of the effect.)
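A minimal sketch of the layer-stack tuning described above, using the standard characteristic-matrix method for a quarter-wave stack at normal incidence; the layer indices (typical of TiO2 and SiO2 coatings) and the substrate index are assumptions:

```python
import numpy as np

def stack_reflectance(indices, thicknesses, wavelength, n_in=1.0, n_sub=1.52):
    """Normal-incidence power reflectance of a thin-film stack,
    computed with the characteristic-matrix method."""
    m = np.eye(2, dtype=complex)
    for n, d in zip(indices, thicknesses):
        delta = 2 * np.pi * n * d / wavelength  # phase thickness of the layer
        m = m @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    b, c = m @ np.array([1.0, n_sub])
    r = (n_in * b - c) / (n_in * b + c)
    return abs(r) ** 2

# Quarter-wave stack tuned to 550 nm: 8 pairs of alternating high/low
# index layers.
lam0 = 550e-9
indices = [2.35, 1.46] * 8
thicknesses = [lam0 / (4 * n) for n in indices]

for lam in (450e-9, 550e-9, 650e-9):
    print(f"{lam * 1e9:.0f} nm: R = {stack_reflectance(indices, thicknesses, lam):.3f}")
```

At the design wavelength the stack reflects nearly all of the light, while reflectance falls away from it; this wavelength selectivity is exactly what interference filters exploit.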
Where white light is being deliberately separated into various color bands (for example, within a color video projector or color television camera), the similar dichroic prism is used instead. For cameras, however, it is now more common to have an absorption filter array to filter individual pixels on a single CCD array.
Applications
Dichroic filters can filter light from a white light source to produce light that is perceived by humans to be highly saturated in color. Such filters are popular in architectural and theatrical applications.
Dichroic reflectors known as cold mirrors are commonly used behind a light source to reflect visible light forward while allowing the invisible infrared light to pass out of the rear of the fixture. Such an arrangement allows intense illumination with less heating of the illuminated object. Many quartz-halogen lamps have an integrated dichroic reflector for this purpose, being originally designed for use in slide projectors to avoid melting the slides, but now widely used for interior home and commercial lighting. This improves whiteness by removing excess red; however, it poses a serious fire hazard if used in recessed or enclosed luminaires by allowing infrared radiation into those luminaires. For these applications non-cool-beam (ALU or Silverback) lamps must be used. Recessed or enclosed luminaires that are unsuitable for use with dichroic reflector lights can be identified by the IEC 60598 No Cool Beam symbol.
In fluorescence microscopy, dichroic filters are used as beam splitters to direct illumination of an excitation frequency toward the sample and then at an analyzer to reject that same excitation frequency but pass a particular emission frequency.
Some LCD projectors use dichroic filters instead of prisms to split the white light from the lamp into the three colours before passing it through the three LCD units.
Older DLP projectors typically transmit a white light source through a color wheel which uses dichroic filters to rapidly switch colors sent through the (monochrome) Digital micromirror device. Newer projectors may use laser or LED light sources to directly emit the desired light wavelengths.
They are used as laser harmonic separators. They separate the various harmonic components of frequency doubled laser systems by selective spectral reflection and transmission.
Dichroic filters are also used to create gobos for high-power lighting products. Pictures are made by overlapping up to four colored dichroic filters.
Photographic enlarger color heads use dichroic filters to adjust the color balance in the print.
Artistic glass jewelry is occasionally fabricated to behave as a dichroic filter. Because the wavelength of light selected by the filter varies with the angle of incidence of the light, such jewelry often has an iridescent effect, changing color as the (for example) earrings swing. Another interesting application of dichroic filters is spatial filtering.
With a technique licensed from Infitec, Dolby Labs uses dichroic filters for screening 3D movies. The left lens of the Dolby 3D glasses transmits specific narrow bands of red, green and blue frequencies, while the right lens transmits a different set of red, green and blue frequencies. The projector uses matching filters to display the images meant for the left and right eyes.
Long-pass dichroic filters applied to ordinary lighting can prevent it from attracting insects. In some cases, such filters can prevent attraction of other wildlife, reducing adverse environmental impact.
Advantages
Dichroic filters have a much longer life than conventional filters; the color is intrinsic in the construction of the hard microscopic layers and cannot "bleach out" over the lifetime of the filter (unlike for example, gel filters). They can be fabricated to pass any passband frequency and block a selected amount of the stopband frequencies. Because light in the stopband is reflected rather than absorbed, there is much less heating of the dichroic filter than with conventional filters. Dichroics are capable of achieving extremely high laser damage thresholds, and are used for all the mirrors on the world's most powerful laser, the National Ignition Facility.
See also
Color gel
Dielectric mirror
Filter (optics)
Holographic Versatile Disc
Thin-film interference
Thin-film optics
References
Additional sources
M. Bass, Handbook of Optics (2nd ed.) pp. 42.89-42.90 (1995)
Further reading
Optical filters
Interference

Intermediate-field region
https://en.wikipedia.org/wiki/Intermediate-field%20region

In antenna theory, intermediate-field region (also known as intermediate field, intermediate zone or transition zone) refers to the transition region lying between the near-field region and the far-field region in which the field strength of an electromagnetic wave is dependent upon the inverse distance, inverse square of the distance, and the inverse cube of the distance from the antenna. For an antenna that is small compared to the wavelength in question, the intermediate-field region is considered to exist at all distances between 0.1 wavelength and 1.0 wavelength from the antenna.
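As a hedged illustration using the idealized field of an electrically small antenna, the sketch below compares the three distance dependences named above; the terms are shown in relative units only:

```python
import numpy as np

lam = 1.0               # distances in units of the wavelength
k = 2 * np.pi / lam

for r in (0.1 * lam, 0.5 * lam, 1.0 * lam, 10 * lam):
    kr = k * r
    # Relative sizes of the 1/r (radiation), 1/r^2 (induction) and
    # 1/r^3 (quasi-static) field terms of an electrically small antenna.
    terms = np.array([1 / kr, 1 / kr**2, 1 / kr**3])
    print(f"r = {r:5.1f} wavelengths:", np.round(terms / terms.max(), 3))
```

Near 0.1 wavelength all three terms are of comparable size, while by a few wavelengths the 1/r term dominates, which is why this range is treated as a transition zone.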
References
Radio frequency propagation

Interoperability
https://en.wikipedia.org/wiki/Interoperability

Interoperability is a characteristic of a product or system to work with other products or systems. While the term was initially defined for information technology or systems engineering services to allow for information exchange, a broader definition takes into account social, political, and organizational factors that impact system-to-system performance.
Types of interoperability include syntactic interoperability, where two systems can communicate with each other, and cross-domain interoperability, where multiple organizations work together and exchange information.
Types
If two or more systems use common data formats and communication protocols then they are capable of communicating with each other and they exhibit syntactic interoperability. XML and SQL are examples of common data formats and protocols. Low-level data formats also contribute to syntactic interoperability, ensuring that alphabetical characters are stored in the same ASCII or a Unicode format in all the communicating systems.
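A minimal sketch of syntactic interoperability with one of the common formats named above (XML); the element names and sample record are invented for the example:

```python
import xml.etree.ElementTree as ET

# "System A" serializes a record in the agreed-upon XML format.
record = ET.Element("book", id="978-0140449136")
ET.SubElement(record, "title").text = "The Odyssey"
payload = ET.tostring(record, encoding="unicode")

# "System B", written independently, can consume the payload because both
# sides committed to the same data format and element names in advance.
parsed = ET.fromstring(payload)
print(parsed.get("id"), "-", parsed.find("title").text)
```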
Beyond the ability of two or more computer systems to exchange information, semantic interoperability is the ability to automatically interpret the information exchanged meaningfully and accurately in order to produce useful results as defined by the end users of both systems. To achieve semantic interoperability, both sides must refer to a common information exchange reference model. The content of the information exchange requests are unambiguously defined: what is sent is the same as what is understood.
Cross-domain interoperability involves multiple social, organizational, political, legal entities working together for a common interest or information exchange.
Interoperability and open standards
Interoperability implies exchanges between a range of products, or similar products from several different vendors, or even between past and future revisions of the same product. Interoperability may be developed post-facto, as a special measure between two products, while excluding the rest, by using open standards. When a vendor is forced to adapt its system to a dominant system that is not based on open standards, it is compatibility, not interoperability.
Open standards
Open standards rely on a broadly consultative and inclusive group including representatives from vendors, academics and others holding a stake in the development that discusses and debates the technical and economic merits, demerits and feasibility of a proposed common protocol. After the doubts and reservations of all members are addressed, the resulting common document is endorsed as a common standard. This document may be subsequently released to the public, and henceforth becomes an open standard. It is usually published and is available freely or at a nominal cost to any and all comers, with no further encumbrances. Various vendors and individuals (even those who were not part of the original group) can use the standards document to make products that implement the common protocol defined in the standard and are thus interoperable by design, with no specific liability or advantage for customers for choosing one product over another on the basis of standardized features. The vendors' products compete on the quality of their implementation, user interface, ease of use, performance, price, and a host of other factors, while keeping the customer's data intact and transferable even if he chooses to switch to another competing product for business reasons.
Post facto interoperability
Post facto interoperability may be the result of the absolute market dominance of a particular product in contravention of any applicable standards, or if any effective standards were not present at the time of that product's introduction. The vendor behind that product can then choose to ignore any forthcoming standards and not co-operate in any standardization process at all, using its near-monopoly to insist that its product sets the de facto standard by its very market dominance. This is not a problem if the product's implementation is open and minimally encumbered, but it may well be both closed and heavily encumbered (e.g. by patent claims). Because of the network effect, achieving interoperability with such a product is both critical for any other vendor if it wishes to remain relevant in the market, and difficult to accomplish because of lack of cooperation on equal terms with the original vendor, who may well see the new vendor as a potential competitor and threat. The newer implementations often rely on clean-room reverse engineering in the absence of technical data to achieve interoperability. The original vendors may provide such technical data to others, often in the name of encouraging competition, but such data is invariably encumbered, and may be of limited use. Availability of such data is not equivalent to an open standard, because:
The data is provided by the original vendor on a discretionary basis, and the vendor has every interest in blocking the effective implementation of competing solutions, and may subtly alter or change its product, often in newer revisions, so that competitors' implementations are almost, but not quite completely interoperable, leading customers to consider them unreliable or of lower quality. These changes may not be passed on to other vendors at all, or passed on after a strategic delay, maintaining the market dominance of the original vendor.
The data itself may be encumbered, e.g. by patents or pricing, leading to a dependence of all competing solutions on the original vendor, and possibly leading to a revenue stream from the competitors' customers back to the original vendor. This revenue stream is the result of the original product's market dominance and not a result of any innate superiority.
Even when the original vendor is genuinely interested in promoting a healthy competition (so that he may also benefit from the resulting innovative market), post-facto interoperability may often be undesirable as many defects or quirks can be directly traced back to the original implementation's technical limitations. Although in an open process, anyone may identify and correct such limitations, and the resulting cleaner specification may be used by all vendors, this is more difficult post-facto, as customers already have valuable information and processes encoded in the faulty but dominant product, and other vendors are forced to replicate those faults and quirks for the sake of preserving interoperability even if they could design better solutions. Alternatively, it can be argued that even open processes are subject to the weight of past implementations and imperfect past designs and that the power of the dominant vendor to unilaterally correct or improve the system and impose the changes to all users facilitates innovation.
Lack of an open standard can also become problematic for the customers, as in the case of the original vendor's inability to fix a certain problem that is an artifact of technical limitations in the original product. The customer wants that fault fixed, but the vendor has to maintain that faulty state, even across newer revisions of the same product, because that behavior is a de facto standard and many more customers would have to pay the price of any interoperability issues caused by fixing the original problem and introducing new behavior.
Government
eGovernment
Speaking from an e-government perspective, interoperability refers to the collaboration ability of cross-border services for citizens, businesses and public administrations. Exchanging data can be a challenge due to language barriers, different specifications of formats, varieties of categorizations and other hindrances.
If data is interpreted differently, collaboration is limited, takes longer and is inefficient. For instance, if a citizen of country A wants to purchase land in country B, the person will be asked to submit the proper address data. Address data in both countries include full name details, street name and number as well as a postal code. The order of the address details might vary. Within the same language, ordering the provided address data is not an obstacle; across language barriers, it becomes difficult, and if the languages use different writing systems it is almost impossible without translation tools.
Flood risk management
Interoperability is used by researchers in the context of urban flood risk management. Cities and urban areas worldwide are expanding, which creates complex spaces with many interactions between the environment, infrastructure and people. To address this complexity and manage water in urban areas appropriately, a system of systems approach to water and flood control is necessary. In this context, interoperability is important to facilitate system-of-systems thinking, and is defined as: "the ability of any water management system to redirect water and make use of other system(s) to maintain or enhance its performance function during water exceedance events." By assessing the complex properties of urban infrastructure systems, particularly the interoperability between the drainage systems and other urban systems (e.g. infrastructure such as transport), it could be possible to expand the capacity of the overall system to manage flood water towards achieving improved urban flood resilience.
Military forces
Force interoperability is defined in NATO as the ability of the forces of two or more nations to train, exercise and operate effectively together in the execution of assigned missions and tasks. Additionally NATO defines interoperability more generally as the ability to act together coherently, effectively and efficiently to achieve Allied tactical, operational and strategic objectives.
At the strategic level, interoperability is an enabler for coalition building. It facilitates meaningful contributions by coalition partners. At this level, interoperability issues center on harmonizing world views, strategies, doctrines, and force structures. Interoperability is an element of coalition willingness to work together over the long term to achieve and maintain shared interests against common threats. Interoperability at the operational and tactical levels is where strategic interoperability and technological interoperability come together to help allies shape the environment, manage crises, and win wars. The benefits of interoperability at the operational and tactical levels generally derive from the interchangeability of force elements and units. Technological interoperability reflects the interfaces between organizations and systems. It focuses on communications and computers but also involves the technical capabilities of systems and the resulting mission compatibility between the systems and data of coalition partners. At the technological level, the benefits of interoperability come primarily from their impacts at the operational and tactical levels in terms of enhancing flexibility.
Public safety
Because first responders need to be able to communicate during wide-scale emergencies, interoperability is an important issue for law enforcement, fire fighting, emergency medical services, and other public health and safety departments. It has been a major area of investment and research over the last 12 years. Widely disparate and incompatible hardware impedes the exchange of information between agencies. Agencies' information systems such as computer-aided dispatch systems and records management systems functioned largely in isolation, in so-called information islands. Agencies tried to bridge this isolation with inefficient, stop-gap methods while large agencies began implementing limited interoperable systems. These approaches were inadequate and, in the US, the lack of interoperability in the public safety realm became evident during the 9/11 attacks on the Pentagon and World Trade Center structures. Further evidence of a lack of interoperability surfaced when agencies tackled the aftermath of Hurricane Katrina.
In contrast to the overall national picture, some states, including Utah, have already made great strides forward. The Utah Highway Patrol and other departments in Utah have created a statewide data sharing network.
The Commonwealth of Virginia is one of the leading states in the United States in improving interoperability. The Interoperability Coordinator leverages a regional structure to better allocate grant funding around the Commonwealth so that all areas have an opportunity to improve communications interoperability. Virginia's strategic plan for communications is updated yearly to include new initiatives for the Commonwealth – all projects and efforts are tied to this plan, which is aligned with the National Emergency Communications Plan, authored by the Department of Homeland Security's Office of Emergency Communications.
The State of Washington seeks to enhance interoperability statewide. The State Interoperability Executive Committee (SIEC), established by the legislature in 2003, works to assist emergency responder agencies (police, fire, sheriff, medical, hazmat, etc.) at all levels of government (city, county, state, tribal, federal) to define interoperability for their local region.
Washington recognizes that collaborating on system design and development for wireless radio systems enables emergency responder agencies to efficiently provide additional services, increase interoperability, and reduce long-term costs. This work saves the lives of emergency personnel and the citizens they serve.
The U.S. government is making an effort to overcome the nation's lack of public safety interoperability. The Department of Homeland Security's Office for Interoperability and Compatibility (OIC) is pursuing the SAFECOM, CADIP, and Project 25 programs, which are designed to help agencies as they integrate their CAD and other IT systems.
The OIC launched CADIP in August 2007. This project will partner the OIC with agencies in several locations, including Silicon Valley. This program will use case studies to identify the best practices and challenges associated with linking CAD systems across jurisdictional boundaries. These lessons will inform the tools and resources that public safety agencies can use to build interoperable CAD systems and communicate across local, state, and federal boundaries.
As regulator for interoperability
Governance entities can increase interoperability through their legislative and executive powers. For instance, in 2021 the European Commission, after commissioning two impact assessment studies and a technology analysis study, proposed standardizing phone chargers on iterations of the USB-C connector, which may increase interoperability along with convergence and convenience for consumers while decreasing resource needs, redundancy and electronic waste.
Commerce and industries
Information technology and computers
Desktop
Desktop interoperability is a subset of software interoperability. In the early days, the focus of interoperability was to integrate web applications with other web applications. Over time, open-system containers were developed to create a virtual desktop environment in which these applications could be registered and then communicate with each other using simple publish–subscribe patterns. Rudimentary UI capabilities were also supported, allowing windows to be grouped with other windows. Today, desktop interoperability has evolved into full-service platforms which include container support, basic exchange between web applications, and also native support for other application types and advanced window management. The very latest interop platforms also include application services such as universal search, notifications, user permissions and preferences, third-party application connectors and language adapters for in-house applications.
Information search
Search interoperability refers to the ability of two or more information collections to be searched by a single query.
Specifically related to web-based search, the challenge of interoperability stems from the fact that designers of web resources typically have little or no need to concern themselves with exchanging information with other web resources. Federated search technology, which does not place format requirements on the data owner, has emerged as one solution to search interoperability challenges. In addition, standards, such as Open Archives Initiative Protocol for Metadata Harvesting, Resource Description Framework, and SPARQL, have emerged that also help address the issue of search interoperability related to web resources. Such standards also address broader topics of interoperability, such as allowing data mining.
Software
With respect to software, the term interoperability is used to describe the capability of different programs to exchange data via a common set of exchange formats, to read and write the same file formats, and to use the same communication protocols. The lack of interoperability can be a consequence of a lack of attention to standardization during the design of a program. Indeed, interoperability is not taken for granted in the non-standards-based portion of the computing world.
According to ISO/IEC 2382-01, Information Technology Vocabulary, Fundamental Terms, interoperability is defined as follows: "The capability to communicate, execute programs, or transfer data among various functional units in a manner that requires the user to have little or no knowledge of the unique characteristics of those units".
Standards-developing organizations provide open public software specifications to facilitate interoperability; examples include the Oasis-Open organization and buildingSMART (formerly the International Alliance for Interoperability). Another example of a neutral party is the RFC documents from the Internet Engineering Task Force (IETF).
The Open Service for Lifecycle Collaboration community is working on finding a common standard so that software tools can share and exchange data, e.g. bugs, tasks, requirements, etc. The final goal is to agree on an open standard for interoperability of open source application lifecycle management tools.
Java is an example of an interoperable programming language that allows for programs to be written once and run anywhere with a Java virtual machine. A program in Java, so long as it does not use system-specific functionality, will maintain interoperability with all systems that have a Java virtual machine available. Applications will maintain compatibility because, while the implementation is different, the underlying language interfaces are the same.
Achieving software interoperability
Software interoperability is achieved through five interrelated ways:
Product testing
Products produced to a common standard, or to a sub-profile thereof, depend on the clarity of the standards, but there may be discrepancies in their implementations that system or unit testing may not uncover. This requires that systems formally be tested in a production scenario – as they will be finally implemented – to ensure they actually will intercommunicate as advertised, i.e. they are interoperable. Interoperable product testing is different from conformance-based product testing as conformance to a standard does not necessarily engender interoperability with another product which is also tested for conformance.
Product engineering
Implements the common standard, or a sub-profile thereof, as defined by the industry and community partnerships with the specific intention of achieving interoperability with other software implementations also following the same standard or sub-profile thereof.
Industry and community partnership
Industry and community partnerships, either domestic or international, sponsor standard workgroups with the purpose of defining a common standard that may be used to allow software systems to intercommunicate for a defined purpose. At times an industry or community will sub-profile an existing standard produced by another organization to reduce options and thus make interoperability more achievable for implementations.
Common technology and intellectual property
The use of a common technology or intellectual property may speed up and reduce the complexity of interoperability by reducing variability between components from different sets of separately developed software products and thus allowing them to intercommunicate more readily. This technique has some of the same technical results as using a common vendor product to produce interoperability. The common technology can come through third-party libraries or open-source developments.
Standard implementation
Software interoperability requires a common agreement that is normally arrived at via an industrial, national or international standard.
Each of these has an important role in reducing variability in intercommunication software and enhancing a common understanding of the end goal to be achieved.
Unified interoperability
Market dominance and power
Interoperability tends to be regarded as an issue for experts and its implications for daily living are sometimes underrated. The European Union Microsoft competition case shows how interoperability concerns important questions of power relationships. In 2004, the European Commission found that Microsoft had abused its market power by deliberately restricting interoperability between Windows work group servers and non-Microsoft work group servers. By doing so, Microsoft was able to protect its dominant market position for work group server operating systems, the heart of corporate IT networks. Microsoft was ordered to disclose complete and accurate interface documentation, which could enable rival vendors to compete on an equal footing (the interoperability remedy).
Interoperability has also surfaced in the software patent debate in the European Parliament (June–July 2005). Critics claim that because patents on techniques required for interoperability are kept under RAND (reasonable and non-discriminatory licensing) conditions, customers will have to pay license fees twice: once for the product and, in the appropriate case, once for the patent-protected program the product uses.
Business processes
Interoperability is often more of an organizational issue. Interoperability can have a significant impact on the organizations concerned, raising issues of ownership (do people want to share their data? or are they dealing with information silos?), labor relations (are people prepared to undergo training?) and usability. In this context, a more apt definition is captured in the term business process interoperability.
Interoperability can have important economic consequences; for example, research has estimated the cost of inadequate interoperability in the US capital facilities industry to be $15.8 billion a year. If competitors' products are not interoperable (due to causes such as patents, trade secrets or coordination failures), the result may well be monopoly or market failure. For this reason, it may be prudent for user communities or governments to take steps to encourage interoperability in various situations. At least 30 international bodies and countries have implemented eGovernment-based interoperability framework initiatives called e-GIF while in the US there is the NIEM initiative.
Medical industry
The need for plug-and-play interoperability – the ability to take a medical device out of its box and easily make it work with one's other devices – has attracted great attention from both healthcare providers and industry.
Increasingly, medical devices like incubators and imaging systems feature software that integrates at the point of care and with electronic systems, such as electronic medical records. At the 2016 Regulatory Affairs Professionals Society (RAPS) meeting, experts in the field like Angela N. Johnson with GE Healthcare and Jeff Shuren of the United States Food and Drug Administration provided practical seminars on how companies developing new medical devices, and hospitals installing them, can work more effectively to align interoperable software systems.
Railways
Railways have greater or lesser interoperability depending on conforming to standards of gauge, couplings, brakes, signalling, loading gauge, and structure gauge to mention a few parameters. For passenger rail service, different railway platform height and width clearance standards may also affect interoperability.
North American freight and intercity passenger railroads are highly interoperable, but systems in Europe, Asia, Africa, Central and South America, and Australia are much less so. The parameter most difficult to overcome (at reasonable cost) is incompatibility of gauge, though variable gauge axle systems are increasingly used.
Telecommunications
In telecommunications, the term can be defined as:
The ability to provide services to and accept services from other systems, and to use the services exchanged to enable them to operate effectively together. ITU-T provides standards for international telecommunications.
The condition achieved among communications-electronics systems or items of communications-electronics equipment when information or services can be exchanged directly and satisfactorily between them or their users. The degree of interoperability should be defined when referring to specific cases.
In two-way radio, interoperability is composed of three dimensions:
compatible communications paths (compatible frequencies, equipment and signaling),
radio system coverage or adequate signal strength, and;
scalable capacity.
Organizations dedicated to interoperability
Many organizations are dedicated to interoperability. Some concentrate on eGovernment, eBusiness or data exchange in general.
Global
Internationally, Network Centric Operations Industry Consortium facilitates global interoperability across borders, language and technical barriers. In the built environment, the International Alliance for Interoperability started in 1994, and was renamed buildingSMART in 2005.
Europe
In Europe, the European Commission and its IDABC program issue the European Interoperability Framework. IDABC was succeeded by the Interoperability Solutions for European Public Administrations (ISA) program. They also initiated the Semantic Interoperability Centre Europe (SEMIC.EU). A European Land Information Service (EULIS) was established in 2006, as a consortium of European National Land Registers. The aim of the service is to establish a single portal through which customers are provided with access to information about individual properties, about land and property registration services, and about the associated legal environment.
The European Interoperability Framework (EIF) considered four kinds of interoperability: legal interoperability, organizational interoperability, semantic interoperability, and technical interoperability.
In the European Research Cluster on the Internet of Things (IERC) and IoT Semantic Interoperability Best Practices; four kinds of interoperability are distinguished: syntactical interoperability, technical interoperability, semantic interoperability, and organizational interoperability.
US
In the United States, the General Services Administration Component Organization and Registration Environment (CORE.GOV) initiative provided a collaboration environment for component development, sharing, registration, and reuse in the early 2000s. A related initiative is the ongoing National Information Exchange Model (NIEM) work and component repository. The National Institute of Standards and Technology serves as an agency for measurement standards.
See also
Computer and information technology
Architecture of Interoperable Information Systems
List of computer standards
Model Driven Interoperability, framework
Semantic Web, standard for making Internet data machine readable
Business
Business interoperability interface, between an organization's systems and processes
Enterprise interoperability, ability to link activities in an efficient and competitive way
Other
Collaboration, general concept
Polytely, problem solving
Universal Data Element Framework, information indexing
Notes
References
External links
"When and How Interoperability Drives Innovation," by Urs Gasser and John Palfrey
GIC - The Greek Interoperability Centre: A Research Infrastructure for Interoperability in eGovernment and eBusiness, in SE Europe and the Mediterranean
Simulation Interoperability Standards Organization (SISO)
Interoperability: What is it and why should I want it? Ariadne 24 (2000)
Interoperability Constitution - DOE's GridWise Architecture Council
Interoperability Context-Setting Framework - DOE's GridWise Architecture Council
Decision Maker's Interoperability Checklist - DOE's GridWise Architecture Council
OA Journal on Interoperability in Business Information Systems
University of New Hampshire Interoperability Laboratory - premier research facility on interoperability of computer networking technologies
Interoperability vs. intraoperability: your open choice on Bob Sutor blog, 6 December 2006
ECIS European Committee for Interoperable Systems
Gradmann, Stefan. INTEROPERABILITY. A key concept for large scale, persistent digital libraries.
DL.org Digital Library Interoperability, Best Practices and Modelling Foundations
Computing terminology
Telecommunications engineering
Product testing | Interoperability | [
"Technology",
"Engineering"
] | 5,308 | [
"Electrical engineering",
"Computing terminology",
"Telecommunications engineering",
"Interoperability"
] |
41,286 | https://en.wikipedia.org/wiki/Interposition%20trunk | In telecommunications, the term interposition trunk has the following meanings:
1. A single direct communication channel, e.g., voice-frequency circuit, between two positions of a large switchboard to facilitate the interconnection of other circuits appearing at the respective switchboard positions.
2. Within a technical control facility, a single direct transmission circuit, between positions in a testboard or patch bay, which circuit facilitates testing or patching between the respective positions.
Communication circuits | Interposition trunk | [
"Engineering"
] | 97 | [
"Telecommunications engineering",
"Communication circuits"
] |
41,287 | https://en.wikipedia.org/wiki/Intersymbol%20interference | In telecommunications, intersymbol interference (ISI) is a form of distortion of a signal in which one symbol interferes with subsequent symbols. This is an unwanted phenomenon as the previous symbols have a similar effect as noise, thus making the communication less reliable. The spreading of the pulse beyond its allotted time interval causes it to interfere with neighboring pulses. ISI is usually caused by multipath propagation or the inherent linear or non-linear frequency response of a communication channel causing successive symbols to blur together.
The presence of ISI in the system introduces errors in the decision device at the receiver output. Therefore, in the design of the transmitting and receiving filters, the objective is to minimize the effects of ISI, and thereby deliver the digital data to its destination with the smallest error rate possible.
Ways to alleviate intersymbol interference include adaptive equalization and error correcting codes.
Causes
Multipath propagation
One of the causes of intersymbol interference is multipath propagation in which a wireless signal from a transmitter reaches the receiver via multiple paths. The causes of this include reflection (for instance, the signal may bounce off buildings), refraction (such as through the foliage of a tree) and atmospheric effects such as atmospheric ducting and ionospheric reflection. Since the various paths can be of different lengths, this results in the different versions of the signal arriving at the receiver at different times. These delays mean that part or all of a given symbol will be spread into the subsequent symbols, thereby interfering with the correct detection of those symbols. Additionally, the various paths often distort the amplitude and/or phase of the signal, thereby causing further interference with the received signal.
Bandlimited channels
Another cause of intersymbol interference is the transmission of a signal through a bandlimited channel, i.e., one where the frequency response is zero above a certain frequency (the cutoff frequency). Passing a signal through such a channel results in the removal of frequency components above this cutoff frequency. In addition, frequency components below the cutoff frequency may also be attenuated by the channel.
This filtering of the transmitted signal affects the shape of the pulse that arrives at the receiver. Filtering a rectangular pulse not only changes its shape within the first symbol period, but also spreads it out over the subsequent symbol periods. When a message is transmitted through such a channel, the spread pulse of each individual symbol will interfere with following symbols.
Bandlimited channels are present in both wired and wireless communications. The limitation is often imposed by the desire to operate multiple independent signals through the same area/cable; due to this, each system is typically allocated a piece of the total bandwidth available. For wireless systems, they may be allocated a slice of the electromagnetic spectrum to transmit in (for example, FM radio is often broadcast in the 87.5–108 MHz range). This allocation is usually administered by a government agency; in the case of the United States this is the Federal Communications Commission (FCC). In a wired system, such as an optical fiber cable, the allocation will be decided by the owner of the cable.
The bandlimiting can also be due to the physical properties of the medium - for instance, the cable being used in a wired system may have a cutoff frequency above which practically none of the transmitted signal will propagate.
Communication systems that transmit data over bandlimited channels usually implement pulse shaping to avoid interference caused by the bandwidth limitation. If the channel frequency response is flat and the shaping filter has a finite bandwidth, it is possible to communicate with no ISI at all. Often the channel response is not known beforehand, and an adaptive equalizer is used to compensate the frequency response.
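The raised-cosine pulse is a standard pulse shape satisfying the Nyquist ISI criterion. The sketch below (roll-off factor 0.35 chosen arbitrarily for illustration) checks that the pulse vanishes at every nonzero multiple of the symbol period, so neighbouring pulses contribute no interference at the sampling instants:

```python
import numpy as np

def raised_cosine(t, T=1.0, beta=0.35):
    """Raised-cosine pulse: equals 1 at t = 0 and 0 at every other
    integer multiple of the symbol period T (Nyquist ISI criterion).
    `t` is expected to be an array of sample times."""
    t = np.asarray(t, dtype=float)
    out = np.empty_like(t)
    # Removable singularity of the closed form at |t| = T / (2*beta)
    singular = np.isclose(np.abs(t), T / (2 * beta))
    out[singular] = (np.pi / 4) * np.sinc(1 / (2 * beta))
    ts = t[~singular]
    out[~singular] = (np.sinc(ts / T) * np.cos(np.pi * beta * ts / T)
                      / (1 - (2 * beta * ts / T) ** 2))
    return out

# Samples at the symbol instants: neighbouring pulses contribute nothing
# at the sampling times, so an ideal channel introduces no ISI.
print(np.round(raised_cosine(np.arange(-4.0, 5.0)), 6))
# ~[0 0 0 0 1 0 0 0 0] (up to floating-point rounding)
```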
Effects on eye patterns
One way to study ISI in a PCM or data transmission system experimentally is to apply the received wave to the vertical deflection plates of an oscilloscope and to apply a sawtooth wave at the transmitted symbol rate R (R = 1/T) to the horizontal deflection plates. The resulting display is called an eye pattern because of its resemblance to the human eye for binary waves. The interior region of the eye pattern is called the eye opening. An eye pattern provides a great deal of information about the performance of the pertinent system.
The width of the eye opening defines the time interval over which the received wave can be sampled without error from ISI. It is apparent that the preferred time for sampling is the instant of time at which the eye is open widest.
The sensitivity of the system to timing error is determined by the rate of closure of the eye as the sampling time is varied.
The height of the eye opening, at a specified sampling time, defines the margin over noise.
An eye pattern, which overlays many samples of a signal, can give a graphical representation of the signal characteristics. The first image above is the eye pattern for a binary phase-shift keying (PSK) system in which a one is represented by an amplitude of −1 and a zero by an amplitude of +1. The current sampling time is at the center of the image and the previous and next sampling times are at the edges of the image. The various transitions from one sampling time to another (such as one-to-zero, one-to-one and so forth) can clearly be seen on the diagram.
The noise margin - the amount of noise required to cause the receiver to get an error - is given by the distance between the signal and the zero amplitude point at the sampling time; in other words, the further from zero at the sampling time the signal is the better. For the signal to be correctly interpreted, it must be sampled somewhere between the two points where the zero-to-one and one-to-zero transitions cross. Again, the further apart these points are the better, as this means the signal will be less sensitive to errors in the timing of the samples at the receiver.
The effects of ISI are shown in the second image which is an eye pattern of the same system when operating over a multipath channel. The effects of receiving delayed and distorted versions of the signal can be seen in the loss of definition of the signal transitions. It also reduces both the noise margin and the window in which the signal can be sampled, which shows that the performance of the system will be worse (i.e. it will have a greater bit error ratio).
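A rough simulation of this effect (illustrative NRZ levels, echo delay and gain; plotting is omitted, and each row of the trace array would be one overlaid curve of the eye diagram) compares the eye opening at mid-symbol with and without a simple two-path channel:

```python
import numpy as np

rng = np.random.default_rng(0)
sps = 20                                  # samples per symbol
bits = rng.integers(0, 2, 200)
clean = np.repeat(2 * bits - 1, sps).astype(float)   # NRZ levels +/-1

# Two-path channel: a delayed, attenuated echo smears adjacent symbols.
delay, gain = 13, 0.4                     # illustrative values
multipath = clean.copy()
multipath[delay:] += gain * clean[:-delay]

def eye_traces(x, sps, span=2):
    """Cut the waveform into consecutive traces of `span` symbol periods."""
    n = len(x) // (span * sps)
    return x[:n * span * sps].reshape(n, span * sps)

def eye_opening(x):
    mid = eye_traces(x, sps)[:, sps + sps // 2]   # mid-symbol sample
    return mid[mid > 0].min() - mid[mid < 0].max()

print("eye opening, clean channel    :", eye_opening(clean))           # 2.0
print("eye opening, multipath channel:", round(eye_opening(multipath), 2))
```

The echo reduces the opening (here from 2.0 to about 1.2), mirroring the loss of noise margin described above.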
Countering ISI
There are several techniques in telecommunications and data storage that try to work around the problem of intersymbol interference.
Design systems such that the impulse response is short enough that very little energy from one symbol smears into the next symbol.
Separate symbols in time with guard periods.
Apply an equalizer at the receiver, that, broadly speaking, attempts to undo the effect of the channel by applying an inverse filter.
Apply a sequence detector at the receiver, that attempts to estimate the sequence of transmitted symbols using the Viterbi algorithm.
Intentional intersymbol interference
Coded modulation systems also exist that intentionally build a controlled amount of ISI into the system at the transmitter side, known as faster-than-Nyquist signaling. Such a design trades a computational complexity penalty at the receiver against a Shannon capacity gain of the overall transceiver system.
See also
Nyquist ISI criterion
References
Further reading
External links
Definition of ISI from Federal Standard 1037C
Intersymbol interference concept
Telecommunication theory
Wireless networking
Television terminology | Intersymbol interference | [
"Technology",
"Engineering"
] | 1,521 | [
"Wireless networking",
"Computer networks engineering"
] |
41,288 | https://en.wikipedia.org/wiki/Inverse-square%20law | In science, an inverse-square law is any scientific law stating that the observed "intensity" of a specified physical quantity is inversely proportional to the square of the distance from the source of that physical quantity. The fundamental cause for this can be understood as geometric dilution corresponding to point-source radiation into three-dimensional space.
Radar energy expands during both the signal transmission and the reflected return, so the inverse square for both paths means that the radar will receive energy according to the inverse fourth power of the range.
To prevent dilution of energy while propagating a signal, certain methods can be used, such as a waveguide, which acts like a canal does for water; similarly, a gun barrel restricts hot gas expansion to one dimension in order to prevent loss of energy transfer to the bullet.
Formula
In mathematical notation the inverse square law can be expressed as an intensity (I) varying as a function of distance (d) from some centre. The intensity is proportional (see ∝) to the reciprocal of the square of the distance thus:

I ∝ 1/d²

It can also be mathematically expressed as:

I₁/I₂ = d₂²/d₁²

or as the formulation of a constant quantity:

I₁ × d₁² = I₂ × d₂²
The divergence of a vector field which is the resultant of radial inverse-square law fields with respect to one or more sources is proportional to the strength of the local sources, and hence zero outside sources. Newton's law of universal gravitation follows an inverse-square law, as do the effects of electric, light, sound, and radiation phenomena.
Justification
The inverse-square law generally applies when some force, energy, or other conserved quantity is evenly radiated outward from a point source in three-dimensional space. Since the surface area of a sphere (which is 4πr²) is proportional to the square of the radius, as the emitted radiation gets farther from the source, it is spread out over an area that is increasing in proportion to the square of the distance from the source. Hence, the intensity of radiation passing through any unit area (directly facing the point source) is inversely proportional to the square of the distance from the point source. Gauss's law for gravity is similarly applicable, and can be used with any physical quantity that acts in accordance with the inverse-square relationship.
Occurrences
Gravitation
Gravitation is the attraction between objects that have mass. Newton's law states:
If the distribution of matter in each body is spherically symmetric, then the objects can be treated as point masses without approximation, as shown in the shell theorem. Otherwise, if we want to calculate the attraction between massive bodies, we need to add all the point-point attraction forces vectorially and the net attraction might not be exact inverse square. However, if the separation between the massive bodies is much larger compared to their sizes, then to a good approximation, it is reasonable to treat the masses as a point mass located at the object's center of mass while calculating the gravitational force.
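A minimal numerical sketch of this point-mass approximation (approximate textbook values for the constants and for the Earth–Moon system):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2 (approximate)

def gravity(m1, m2, r):
    """Inverse-square attraction between two point masses, in newtons."""
    return G * m1 * m2 / r**2

# Earth and Moon treated as point masses at their centres, as the
# shell theorem permits for spherically symmetric bodies.
m_earth, m_moon = 5.972e24, 7.342e22     # kg (approximate)
r = 3.844e8                              # mean centre-to-centre distance, m
print(f"F = {gravity(m_earth, m_moon, r):.2e} N")   # ~2.0e20 N
```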
As the law of gravitation, this law was suggested in 1645 by Ismaël Bullialdus. But Bullialdus did not accept Kepler's second and third laws, nor did he appreciate Christiaan Huygens's solution for circular motion (motion in a straight line pulled aside by the central force). Indeed, Bullialdus maintained the sun's force was attractive at aphelion and repulsive at perihelion. Robert Hooke and Giovanni Alfonso Borelli both expounded gravitation in 1666 as an attractive force. Hooke's lecture "On gravity" was at the Royal Society, in London, on 21 March. Borelli's "Theory of the Planets" was published later in 1666. Hooke's 1670 Gresham lecture explained that gravitation applied to "all celestiall bodys" and added the principles that the gravitating power decreases with distance and that in the absence of any such power bodies move in straight lines. By 1679, Hooke thought gravitation had inverse square dependence and communicated this in a letter to Isaac Newton:
my supposition is that the attraction always is in duplicate proportion to the distance from the center reciprocall.
Hooke remained bitter about Newton claiming the invention of this principle, even though Newton's 1686 Principia acknowledged that Hooke, along with Wren and Halley, had separately appreciated the inverse square law in the solar system, as well as giving some credit to Bullialdus.
Electrostatics
The force of attraction or repulsion between two electrically charged particles, in addition to being directly proportional to the product of the electric charges, is inversely proportional to the square of the distance between them; this is known as Coulomb's law. The deviation of the exponent from 2 is less than one part in 10¹⁵.
Light and other electromagnetic radiation
The intensity (or illuminance or irradiance) of light or other linear waves radiating from a point source (energy per unit of area perpendicular to the source) is inversely proportional to the square of the distance from the source, so an object (of the same size) twice as far away receives only one-quarter the energy (in the same time period).
More generally, the irradiance, i.e., the intensity (or power per unit area in the direction of propagation), of a spherical wavefront varies inversely with the square of the distance from the source (assuming there are no losses caused by absorption or scattering).
For example, the intensity of radiation from the Sun is 9126 watts per square meter at the distance of Mercury (0.387 AU), but only 1367 watts per square meter at the distance of Earth (1 AU): an approximately 2.6-fold increase in distance results in an approximately 6.7-fold decrease in the intensity of radiation.
For non-isotropic radiators such as parabolic antennas, headlights, and lasers, the effective origin is located far behind the beam aperture. If you are close to the origin, you don't have to go far to double the radius, so the signal drops quickly. When you are far from the origin and still have a strong signal, like with a laser, you have to travel very far to double the radius and reduce the signal. This means you have a stronger signal or have antenna gain in the direction of the narrow beam relative to a wide beam in all directions of an isotropic antenna.
In photography and stage lighting, the inverse-square law is used to determine the “fall off” or the difference in illumination on a subject as it moves closer to or further from the light source. For quick approximations, it is enough to remember that doubling the distance reduces illumination to one quarter; or similarly, to halve the illumination increase the distance by a factor of 1.4 (the square root of 2), and to double illumination, reduce the distance to 0.7 (square root of 1/2). When the illuminant is not a point source, the inverse square rule is often still a useful approximation; when the size of the light source is less than one-fifth of the distance to the subject, the calculation error is less than 1%.
The fractional reduction in electromagnetic fluence (Φ) for indirectly ionizing radiation with increasing distance from a point source can be calculated using the inverse-square law. Since emissions from a point source have radial directions, they intercept a sphere centred on the source at perpendicular incidence. The area of such a shell is 4πr², where r is the radial distance from the center. The law is particularly important in diagnostic radiography and radiotherapy treatment planning, though this proportionality does not hold in practical situations unless source dimensions are much smaller than the distance. A similar geometric argument appears in Fourier's theory of heat: radiation from a point source spreads over an ever-larger spherical surface as it travels outward, and its intensity is diluted in proportion.
Example
Let P be the total power radiated from a point source (for example, an omnidirectional isotropic radiator). At large distances from the source (compared to the size of the source), this power is distributed over larger and larger spherical surfaces as the distance from the source increases. Since the surface area of a sphere of radius r is A = 4πr², the intensity I (power per unit area) of radiation at distance r is

I = P/A = P/(4πr²)
The energy or intensity decreases (divided by 4) as the distance r is doubled; if measured in dB would decrease by 6.02 dB per doubling of distance. When referring to measurements of power quantities, a ratio can be expressed as a level in decibels by evaluating ten times the base-10 logarithm of the ratio of the measured quantity to the reference value.
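The sketch below reproduces these relations numerically (assuming an approximate solar luminosity of 3.846 × 10²⁶ W, chosen to match the solar figures quoted earlier) and checks the 6.02 dB drop per doubling of distance:

```python
import math

def intensity(power, r):
    """Intensity of an isotropic point source: I = P / (4*pi*r^2)."""
    return power / (4 * math.pi * r**2)

P_sun = 3.846e26                 # W, solar luminosity (approximate)
au = 1.496e11                    # m
print(round(intensity(P_sun, 0.387 * au)))   # ~9130 W/m^2 at Mercury
print(round(intensity(P_sun, 1.000 * au)))   # ~1368 W/m^2 at Earth

# Doubling the distance quarters the intensity: a 6.02 dB drop.
ratio = intensity(P_sun, 2 * au) / intensity(P_sun, au)
print(round(10 * math.log10(ratio), 2))      # -6.02
```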
Sound in a gas
In acoustics, the sound pressure of a spherical wavefront radiating from a point source decreases by 50% as the distance r is doubled; measured in dB, the decrease is still 6.02 dB, since dB represents an intensity ratio. The pressure ratio (as opposed to power ratio) is not inverse-square, but is inverse-proportional (inverse distance law):

p ∝ 1/r

The same is true for the component of particle velocity that is in-phase with the instantaneous sound pressure:

v ∝ 1/r

In the near field is a quadrature component of the particle velocity that is 90° out of phase with the sound pressure and does not contribute to the time-averaged energy or the intensity of the sound. The sound intensity is the product of the RMS sound pressure and the in-phase component of the RMS particle velocity, both of which are inverse-proportional. Accordingly, the intensity follows an inverse-square behaviour:

I = pv ∝ 1/r²
Field theory interpretation
For an irrotational vector field in three-dimensional space, the inverse-square law corresponds to the property that the divergence is zero outside the source. This can be generalized to higher dimensions. Generally, for an irrotational vector field in n-dimensional Euclidean space, the intensity "I" of the vector field falls off with the distance "r" following the inverse (n − 1)th power law

I ∝ 1/r^(n−1),
given that the space outside the source is divergence free.
Non-Euclidean implications
The inverse-square law, fundamental in Euclidean spaces, also applies to non-Euclidean geometries, including hyperbolic space. The curvature present in these spaces alters physical laws, influencing a variety of fields such as cosmology, general relativity, and string theory.
John D. Barrow, in his 2020 paper "Non-Euclidean Newtonian Cosmology," expands on the behavior of force (F) and potential (Φ) within hyperbolic 3-space (H3). He explains that F and Φ obey the relationships F ∝ 1 / R² sinh²(r/R) and Φ ∝ coth(r/R), where R represents the curvature radius and r represents the distance from the focal point.
The concept of spatial dimensionality, first proposed by Immanuel Kant, remains a topic of debate concerning the inverse-square law. Dimitria Electra Gatzia and Rex D. Ramsier, in their 2021 paper, contend that the inverse-square law is more closely related to force distribution symmetry than to the dimensionality of space.
In the context of non-Euclidean geometries and general relativity, deviations from the inverse-square law do not arise from the law itself but rather from the assumption that the force between two bodies is instantaneous, which contradicts special relativity. General relativity reinterprets gravity as the curvature of spacetime, leading particles to move along geodesics in this curved spacetime.
History
John Dumbleton of the 14th-century Oxford Calculators was one of the first to express functional relationships in graphical form. He gave a proof of the mean speed theorem stating that "the latitude of a uniformly difform movement corresponds to the degree of the midpoint" and used this method to study the quantitative decrease in intensity of illumination in his Summa logicæ et philosophiæ naturalis (ca. 1349), stating that it was not linearly proportional to the distance, but he was unable to derive the inverse-square law.
In proposition 9 of Book 1 in his book Ad Vitellionem paralipomena, quibus astronomiae pars optica traditur (1604), the astronomer Johannes Kepler argued that the spreading of light from a point source obeys an inverse square law:
In 1645, in his book Astronomia Philolaica ..., the French astronomer Ismaël Bullialdus (1605–1694) refuted Johannes Kepler's suggestion that "gravity" weakens as the inverse of the distance; instead, Bullialdus argued, "gravity" weakens as the inverse square of the distance:
In England, the Anglican bishop Seth Ward (1617–1689) publicized the ideas of Bullialdus in his critique In Ismaelis Bullialdi astronomiae philolaicae fundamenta inquisitio brevis (1653) and publicized the planetary astronomy of Kepler in his book Astronomia geometrica (1656).
In 1663–1664, the English scientist Robert Hooke was writing his book Micrographia (1666) in which he discussed, among other things, the relation between the height of the atmosphere and the barometric pressure at the surface. Since the atmosphere surrounds the Earth, which itself is a sphere, the volume of atmosphere bearing on any unit area of the Earth's surface is a truncated cone (which extends from the Earth's center to the vacuum of space; obviously only the section of the cone from the Earth's surface to space bears on the Earth's surface). Although the volume of a cone is proportional to the cube of its height, Hooke argued that the air's pressure at the Earth's surface is instead proportional to the height of the atmosphere because gravity diminishes with altitude. Although Hooke did not explicitly state so, the relation that he proposed would be true only if gravity decreases as the inverse square of the distance from the Earth's center.
See also
Flux
Antenna (radio)
Gauss's law
Kepler's laws of planetary motion
Kepler problem
Telecommunications, particularly:
William Thomson, 1st Baron Kelvin
Power-aware routing protocols
Inverse proportionality
Multiplicative inverse
Distance decay
Fermi paradox
Square–cube law
Principle of similitude
References
External links
Damping of sound level with distance
Sound pressure p and the inverse distance law 1/r
Philosophy of physics
Scientific method | Inverse-square law | [
"Physics",
"Mathematics"
] | 2,976 | [
"Philosophy of physics",
"Applied and interdisciplinary physics",
"Mathematical objects",
"Scientific laws",
"Equations"
] |
41,290 | https://en.wikipedia.org/wiki/Ionospheric%20sounding | In telecommunications and radio science, an ionospheric sounding is a technique that provides real-time data on high-frequency ionospheric-dependent radio propagation, using a basic system consisting of a synchronized transmitter and receiver.
The time delay between transmission and reception is translated into effective ionospheric layer altitude. Vertical incident sounding uses a collocated transmitter and receiver and involves directing a range of frequencies vertically to the ionosphere and measuring the values of the reflected returned signals to determine the effective ionosphere layer altitude. This technique is also used to determine the critical frequency. Oblique sounders use a transmitter at one end of a given propagation path, and a synchronized receiver, usually with an oscilloscope-type display (ionogram), at the other end. The transmitter emits a stepped- or swept-frequency signal which is displayed or measured at the receiver. The measurement converts time delay to effective altitude of the ionospheric layer. The ionogram display shows the effective altitude of the ionospheric layer as a function of frequency.
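For vertical-incidence sounding, the delay-to-altitude conversion is simply h′ = ct/2 under the assumption that the pulse travels at the vacuum speed of light, which is why the result is called a virtual height; a minimal sketch:

```python
C = 299_792_458.0   # speed of light in vacuum, m/s

def virtual_height(delay_s):
    """Virtual layer height from a vertical-incidence echo delay.
    Assumes propagation at c both ways (h' = c*t/2); the true group
    velocity in the ionosphere is lower, hence 'virtual' height."""
    return C * delay_s / 2.0

# A 2 ms round-trip echo corresponds to a virtual height of about 300 km.
print(f"{virtual_height(2e-3) / 1000:.0f} km")
```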
See also
Kennelly–Heaviside layer
Edward V. Appleton
References
Ionosphere
Radio frequency propagation | Ionospheric sounding | [
"Physics"
] | 238 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Waves"
] |
41,291 | https://en.wikipedia.org/wiki/Isochronous%20timing | A sequence of events is isochronous if the events occur regularly, or at equal time intervals. The term isochronous is used in several technical contexts, but usually refers to the primary subject maintaining a constant period or interval (the reciprocal of frequency), despite variations in other measurable factors in the same system. Isochronous timing is a characteristic of a repeating event whereas synchronous timing refers to the relationship between two or more events.
In dynamical systems theory, an oscillator is called isochronous if its frequency is independent of its amplitude.
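A small numerical check of this definition (a sketch using simple semi-implicit Euler integration; step size and amplitudes are arbitrary): the harmonic oscillator's period is independent of amplitude, while the pendulum's is not.

```python
import math

def period(accel, x0, dt=1e-4):
    """Estimate the oscillation period of x'' = accel(x), released from
    rest at amplitude x0, as the time between two successive downward
    zero crossings (semi-implicit Euler integration)."""
    x, v, t, times = x0, 0.0, 0.0, []
    while len(times) < 2:
        v += accel(x) * dt
        x_new = x + v * dt
        if x > 0 >= x_new:                # downward zero crossing
            times.append(t)
        x, t = x_new, t + dt
    return times[1] - times[0]

sho = lambda x: -x                        # harmonic oscillator (omega = 1)
pendulum = lambda x: -math.sin(x)         # pendulum, angle in radians

for amp in (0.1, 1.0, 2.0):
    print(f"amplitude {amp}: SHO T = {period(sho, amp):.3f}, "
          f"pendulum T = {period(pendulum, amp):.3f}")
# The SHO period stays ~2*pi = 6.283 at every amplitude (isochronous);
# the pendulum's period grows with amplitude (not isochronous).
```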
In horology, a mechanical clock or watch is isochronous if it runs at the same rate regardless of changes in its drive force, so that it keeps correct time as its mainspring unwinds or chain length varies. Isochrony is important in timekeeping devices. Simply put, if a power providing device (e.g. a spring or weight) provides constant torque to the wheel train, it will be isochronous, since the escapement will experience the same force regardless of how far the weight has dropped or the spring has unwound.
In electrical power generation, isochronous means that the frequency of the electricity generated is constant under varying load; there is zero generator droop. (See Synchronization (alternating current).)
In telecommunications, an isochronous signal is one where the time interval separating any two corresponding transitions is equal to the unit interval or to a multiple of the unit interval; but phase is arbitrary and potentially varying.
The term is also used in data transmission to describe cases in which corresponding significant instants of two or more sequential signals have a constant phase relationship.
Isochronous burst transmission is used when the information-bearer channel rate is higher than the input data signaling rate.
In the Universal Serial Bus used in computers, isochronous is one of the four data flow types for USB devices (the others being Control, Interrupt and Bulk). It is commonly used for streaming data types such as video or audio sources. Similarly, the IEEE 1394 interface standard, commonly called Firewire, includes support for isochronous streams of audio and video at known constant rates.
In particle accelerators an isochronous cyclotron is a cyclotron where the field strength increases with radius to compensate for relativistic increase in mass with speed.
An isochrone is a contour line of equal time, for instance, in geological layers, tree rings or wave fronts. An isochrone map or diagram shows such contours.
In linguistics, isochrony is the postulated rhythmic division of time into equal portions by a language.
In neurology, isochronic tones are regular beats of a single tone used for brainwave entrainment.
See also
Anisochronous
References
Synchronization
Telecommunication theory
Horology | Isochronous timing | [
"Physics",
"Engineering"
] | 602 | [
"Telecommunications engineering",
"Physical quantities",
"Horology",
"Time",
"Spacetime",
"Synchronization"
] |
41,293 | https://en.wikipedia.org/wiki/Isochronous%20signal | In telecommunications, an isochronous signal is a signal in which the time interval separating any two significant instants is equal to the unit interval or a multiple of the unit interval. Variations in the time intervals are constrained within specified limits.
"Isochronous" is a characteristic of one signal, while "synchronous" indicates a relationship between two or more signals.
See also
Synchronization in telecommunications
Synchronous network
Mesochronous network
Plesiochronous system
Asynchronous system
References
Telecommunications engineering
Synchronization | Isochronous signal | [
"Engineering"
] | 118 | [
"Electrical engineering",
"Telecommunications engineering",
"Synchronization"
] |
41,296 | https://en.wikipedia.org/wiki/Jitter | In electronics and telecommunications, jitter is the deviation from true periodicity of a presumably periodic signal, often in relation to a reference clock signal. In clock recovery applications it is called timing jitter. Jitter is a significant, and usually undesired, factor in the design of almost all communications links.
Jitter can be quantified in the same terms as all time-varying signals, e.g., root mean square (RMS), or peak-to-peak displacement. Also, like other time-varying signals, jitter can be expressed in terms of spectral density.
Jitter period is the interval between two times of maximum effect (or minimum effect) of a signal characteristic that varies regularly with time. Jitter frequency, the more commonly quoted figure, is its inverse. ITU-T G.810 classifies jitter frequencies below 10 Hz as wander and frequencies at or above 10 Hz as jitter.
Jitter may be caused by electromagnetic interference and crosstalk with carriers of other signals. Jitter can cause a display monitor to flicker, affect the performance of processors in personal computers, introduce clicks or other undesired effects in audio signals, and cause loss of transmitted data between network devices. The amount of tolerable jitter depends on the affected application.
Metrics
For clock jitter, there are three commonly used metrics:
Absolute jitter
The absolute difference in the position of a clock's edge from where it would ideally be.
Maximum time interval error (MTIE)
Maximum error committed by a clock under test in measuring a time interval for a given period of time.
Period jitter (a.k.a. cycle jitter)
The difference between any one clock period and the ideal or average clock period. Period jitter tends to be important in synchronous circuitry such as digital state machines where the error-free operation of the circuitry is limited by the shortest possible clock period (average period less maximum cycle jitter), and the performance of the circuitry is set by the average clock period. Hence, synchronous circuitry benefits from minimizing period jitter, so that the shortest clock period approaches the average clock period.
Cycle-to-cycle jitter
The difference in duration of any two adjacent clock periods. It can be important for some types of clock generation circuitry used in microprocessors and RAM interfaces.
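These definitions lend themselves to direct computation. The following is a minimal sketch (the edge timestamps and ideal period are invented for illustration; MTIE, which needs a sliding window, is omitted) showing how the other metrics fall out of a list of measured clock edges:

 # Sketch: clock-jitter metrics from measured rising-edge timestamps (hypothetical data).
 edges = [0.000, 1.020, 1.980, 3.010, 4.000]   # seconds; ideal period assumed to be 1.0
 T_IDEAL = 1.0

 # Absolute jitter: deviation of each edge from its ideal position.
 absolute = [t - i * T_IDEAL for i, t in enumerate(edges)]

 # Period jitter: each period's deviation from the ideal period.
 periods = [b - a for a, b in zip(edges, edges[1:])]
 period_jitter = [p - T_IDEAL for p in periods]

 # Cycle-to-cycle jitter: difference between adjacent periods.
 cycle_to_cycle = [q - p for p, q in zip(periods, periods[1:])]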
In telecommunications, the unit used for the above types of jitter is usually the unit interval (UI) which quantifies the jitter in terms of a fraction of the transmission unit period. This unit is useful because it scales with clock frequency and thus allows relatively slow interconnects such as T1 to be compared to higher-speed internet backbone links such as OC-192. Absolute units such as picoseconds are more common in microprocessor applications. Units of degrees and radians are also used.
If jitter has a Gaussian distribution, it is usually quantified using the standard deviation of this distribution. This translates to an RMS measurement for a zero-mean distribution. Often, jitter distribution is significantly non-Gaussian. This can occur if the jitter is caused by external sources such as power supply noise. In these cases, peak-to-peak measurements may be more useful. Many efforts have been made to meaningfully quantify distributions that are neither Gaussian nor have a meaningful peak level. All have shortcomings but most tend to be good enough for the purposes of engineering work.
In computer networking, jitter can refer to packet delay variation, the variation (statistical dispersion) in the delay of the packets.
Types
One of the main differences between random and deterministic jitter is that deterministic jitter is bounded and random jitter is unbounded.
Random jitter
Random jitter, also called Gaussian jitter, is unpredictable electronic timing noise. Random jitter typically follows a normal distribution due to being caused by thermal noise in an electrical circuit.
Deterministic jitter
Deterministic jitter is a type of clock or data signal jitter that is predictable and reproducible. The peak-to-peak value of this jitter is bounded, and the bounds can easily be observed and predicted. Deterministic jitter has a known non-normal distribution. Deterministic jitter can either be correlated to the data stream (data-dependent jitter) or uncorrelated to the data stream (bounded uncorrelated jitter). Examples of data-dependent jitter are duty-cycle dependent jitter (also known as duty-cycle distortion) and intersymbol interference.
Total jitter
Total jitter (T) is the combination of random jitter (R) and deterministic jitter (D) and is computed in the context of a required bit error rate (BER) for the system:

 T = D + 2 × n × R,

in which D is the peak-to-peak deterministic jitter, R is the RMS random jitter, and the value of n is based on the BER required of the link.

A common BER used in communication standards such as Ethernet is 10⁻¹².
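As a worked illustration (the numbers are assumed, not taken from any standard): for a link with D = 0.15 UI peak-to-peak and R = 0.01 UI RMS at a required BER of 10⁻¹², the commonly tabulated multiplier is n ≈ 7.03, giving

 T = D + 2nR = 0.15 + 2 × 7.03 × 0.01 ≈ 0.29 UI peak-to-peak.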
Examples
Sampling jitter
In analog-to-digital and digital-to-analog conversion of signals, the sampling is normally assumed to be periodic with a fixed period—the time between every two samples is the same. If there is jitter present on the clock signal to the analog-to-digital converter or a digital-to-analog converter, the time between samples varies and instantaneous signal error arises. The error is proportional to the slew rate of the desired signal and the absolute value of the clock error. The effect of jitter on the signal depends on the nature of the jitter. Random jitter tends to add broadband noise while periodic jitter tends to add errant spectral components, "birdies". In some conditions, less than a nanosecond of jitter can reduce the effective bit resolution of a converter with a Nyquist frequency of 22 kHz to 14 bits.
Sampling jitter is an important consideration in high-frequency signal conversion, or where the clock signal is especially prone to interference.
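The usual rule of thumb behind such figures (not stated explicitly above) is that, for a full-scale sine input of frequency f sampled with RMS clock jitter t_j, the jitter-limited SNR is −20·log₁₀(2π·f·t_j). A minimal sketch:

 import math

 def jitter_limited_enob(f_in_hz, t_jitter_rms_s):
     # Jitter-limited SNR for a full-scale sine input (standard rule of thumb).
     snr_db = -20 * math.log10(2 * math.pi * f_in_hz * t_jitter_rms_s)
     # Convert SNR to effective number of bits (ENOB).
     return (snr_db - 1.76) / 6.02

 print(jitter_limited_enob(22e3, 1e-9))   # ~12.5 bits at 22 kHz with 1 ns RMS jitter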
In digital antenna arrays, ADC and DAC jitter are important factors determining the accuracy of direction-of-arrival estimation and the depth of jammer suppression.
Packet jitter in computer networks
In the context of computer networks, packet jitter or packet delay variation (PDV) is the variation in latency as measured in the variability over time of the end-to-end delay across a network. A network with constant delay has no packet jitter. Packet jitter is expressed as an average of the deviation from the network mean delay. PDV is an important quality of service factor in assessment of network performance.
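A minimal sketch of this computation, assuming per-packet one-way delays have already been measured (the numbers are invented):

 delays_ms = [40.1, 41.9, 40.3, 45.0, 40.2]          # one-way delay of each packet
 mean_delay = sum(delays_ms) / len(delays_ms)
 # Packet jitter as the average deviation from the network mean delay.
 pdv = sum(abs(d - mean_delay) for d in delays_ms) / len(delays_ms)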
Transmitting a burst of traffic at a high rate followed by an interval or period of lower or zero rate transmission may also be seen as a form of jitter, as it represents a deviation from the average transmission rate. However, unlike the jitter caused by variation in latency, transmitting in bursts may be seen as a desirable feature, e.g. in variable bitrate transmissions.
Video and image jitter
Video or image jitter occurs when the horizontal lines of video image frames are randomly displaced due to the corruption of synchronization signals or electromagnetic interference during video transmission. Model-based dejittering study has been carried out under the framework of digital image and video restoration.
Testing
Jitter in serial bus architectures is measured by means of eye patterns. There are standards for jitter measurement in serial bus architectures. The standards cover jitter tolerance, jitter transfer function and jitter generation, with the required values for these attributes varying among different applications. Where applicable, compliant systems are required to conform to these standards.
Testing for jitter and its measurement is of growing importance to electronics engineers because of increased clock frequencies in digital electronic circuitry to achieve higher device performance. Higher clock frequencies have commensurately smaller eye openings, and thus impose tighter tolerances on jitter. For example, modern computer motherboards have serial bus architectures with eye openings of 160 picoseconds or less. This is extremely small compared to parallel bus architectures with equivalent performance, which may have eye openings on the order of 1000 picoseconds.
Jitter is measured and evaluated in various ways depending on the type of circuit under test. In all cases, the goal of jitter measurement is to verify that the jitter will not disrupt normal operation of the circuit.
Testing of device performance for jitter tolerance may involve injection of jitter into electronic components with specialized test equipment.
A less direct approach—in which analog waveforms are digitized and the resulting data stream analyzed—is employed when measuring pixel jitter in frame grabbers.
Mitigation
Anti-jitter circuits
Anti-jitter circuits (AJCs) are a class of electronic circuits designed to reduce the level of jitter in a clock signal. AJCs operate by re-timing the output pulses so they align more closely to an idealized clock. They are widely used in clock and data recovery circuits in digital communications, as well as for data sampling systems such as the analog-to-digital converter and digital-to-analog converter. Examples of anti-jitter circuits include phase-locked loop and delay-locked loop.
Jitter buffers
Jitter buffers or de-jitter buffers are buffers used to counter jitter introduced by queuing in packet-switched networks to ensure continuous playout of an audio or video media stream transmitted over the network. The maximum jitter that can be countered by a de-jitter buffer is equal to the buffering delay introduced before starting the play-out of the media stream. In the context of packet-switched networks, the term packet delay variation is often preferred over jitter.
Some systems use sophisticated delay-optimal de-jitter buffers that are capable of adapting the buffering delay to changing network characteristics. The adaptation logic is based on the jitter estimates computed from the arrival characteristics of the media packets. Adjustments associated with adaptive de-jittering involve introducing discontinuities in the media play-out, which may be noticeable to the listener or viewer. Adaptive de-jittering is usually carried out for audio play-outs that include voice activity detection, allowing the lengths of the silence periods to be adjusted, thus minimizing the perceptual impact of the adaptation.
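One widely used arrival-based estimator of the kind such adaptation logic builds on is the running interarrival-jitter estimate of RTP (RFC 3550), J ← J + (|D| − J)/16, where D is the change in transit time between successive packets. A sketch:

 def rtp_jitter(transit_times):
     # transit_times: per-packet (arrival time - sender timestamp) values.
     j = 0.0
     for prev, cur in zip(transit_times, transit_times[1:]):
         d = abs(cur - prev)      # interarrival difference for this packet pair
         j += (d - j) / 16.0      # smoothed running estimate per RFC 3550
     return j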
Dejitterizer
A dejitterizer is a device that reduces jitter in a digital signal. A dejitterizer usually consists of an elastic buffer in which the signal is temporarily stored and then retransmitted at a rate based on the average rate of the incoming signal. A dejitterizer may not be effective in removing low-frequency jitter (wander).
Filtering and decomposition
A filter can be designed to minimize the effect of sampling jitter.
A jitter signal can be decomposed into intrinsic mode functions (IMFs), which can be further applied for filtering or dejittering.
See also
Clock drift
Dither
Jitterlyzer
Micro stuttering
Phase noise
Pulse (signal processing)
References
Further reading
Li, Mike P. Jitter and Signal Integrity Verification for Synchronous and Asynchronous I/Os at Multiple to 10 GHz/Gbps. Presented at International Test Conference 2008.
Li, Mike P. A New Jitter Classification Method Based on Statistical, Physical, and Spectroscopic Mechanisms. Presented at DesignCon 2009.
Liu, Hui, Hong Shi, Xiaohong Jiang, and Zhe Li. Pre-Driver PDN SSN, OPD, Data Encoding, and Their Impact on SSJ. Presented at Electronics Components and Technology Conference 2009.
Zamek, Iliya. SOC-System Jitter Resonance and Its Impact on Common Approach to the PDN Impedance. Presented at International Test Conference 2008.
External links
A Heuristic Discussion of Fibre Channel and Gigabit Ethernet Methods
Jitter in Packet Voice Networks
Electrical parameters
Packets (information technology)
Synchronization | Jitter | [
"Engineering"
] | 2,456 | [
"Electrical engineering",
"Telecommunications engineering",
"Synchronization",
"Electrical parameters"
] |
41,297 | https://en.wikipedia.org/wiki/Joint%20multichannel%20trunking%20and%20switching%20system | The Joint multichannel trunking and switching system is that composite multichannel trunking and switching system formed from assets of the Services, the Defense Communications System, other available systems, and/or assets controlled by the Joint Chiefs of Staff to provide an operationally responsive, survivable communication system, preferably in a mobile/transportable/recoverable configuration, for the joint force commander in an area of operations.
References
Military communications | Joint multichannel trunking and switching system | [
"Engineering"
] | 88 | [
"Military communications",
"Telecommunications engineering"
] |
41,306 | https://en.wikipedia.org/wiki/Lambert%27s%20cosine%20law | In optics, Lambert's cosine law says that the observed radiant intensity or luminous intensity from an ideal diffusely reflecting surface or ideal diffuse radiator is directly proportional to the cosine of the angle θ between the observer's line of sight and the surface normal; . The law is also known as the cosine emission law or Lambert's emission law. It is named after Johann Heinrich Lambert, from his Photometria, published in 1760.
A surface which obeys Lambert's law is said to be Lambertian, and exhibits Lambertian reflectance. Such a surface has a constant radiance/luminance, regardless of the angle from which it is observed; a single human eye perceives such a surface as having a constant brightness, regardless of the angle from which the eye observes the surface. It has the same radiance because, although the emitted power from a given area element is reduced by the cosine of the emission angle, the solid angle subtended by the surface area visible to the viewer is reduced by the very same amount. Because the ratio between power and solid angle is constant, radiance (power per unit solid angle per unit projected source area) stays the same.
Lambertian scatterers and radiators
When an area element is radiating as a result of being illuminated by an external source, the irradiance (energy or photons per unit time per unit area) landing on that area element will be proportional to the cosine of the angle between the illuminating source and the normal. A Lambertian scatterer will then scatter this light according to the same cosine law as a Lambertian emitter. This means that although the radiance of the surface depends on the angle from the normal to the illuminating source, it will not depend on the angle from the normal to the observer. For example, if the moon were a Lambertian scatterer, one would expect to see its scattered brightness appreciably diminish towards the terminator due to the increased angle at which sunlight hits the surface. The fact that it does not diminish illustrates that the moon is not a Lambertian scatterer, and in fact tends to scatter more light into the oblique angles than a Lambertian scatterer would.
The emission of a Lambertian radiator does not depend on the amount of incident radiation, but rather from radiation originating in the emitting body itself. For example, if the sun were a Lambertian radiator, one would expect to see a constant brightness across the entire solar disc. The fact that the sun exhibits limb darkening in the visible region illustrates that it is not a Lambertian radiator. A black body is an example of a Lambertian radiator.
Details of equal brightness effect
The situation for a Lambertian surface (emitting or scattering) is illustrated in Figures 1 and 2. For conceptual clarity we will think in terms of photons rather than energy or luminous energy. The wedges in the circle each represent an equal angle dΩ, of an arbitrarily chosen size, and for a Lambertian surface, the number of photons per second emitted into each wedge is proportional to the area of the wedge.
The length of each wedge is the product of the diameter of the circle and cos(θ). The maximum rate of photon emission per unit solid angle is along the normal, and diminishes to zero for θ = 90°. In mathematical terms, the radiance along the normal is I photons/(s·m²·sr) and the number of photons per second emitted into the vertical wedge is I dΩ dA. The number of photons per second emitted into the wedge at angle θ is I cos(θ) dΩ dA.
Figure 2 represents what an observer sees. The observer directly above the area element will be seeing the scene through an aperture of area dA₀ and the area element dA will subtend a (solid) angle of dΩ₀, which is a portion of the observer's total angular field-of-view of the scene. Since the wedge size dΩ was chosen arbitrarily, for convenience we may assume without loss of generality that it coincides with the solid angle subtended by the aperture when "viewed" from the locus of the emitting area element dA. Thus the normal observer will then be recording the same I dΩ dA photons per second emission derived above and will measure a radiance of

 I₀ = I dΩ dA / (dΩ₀ dA₀) photons/(s·m²·sr).

The observer at angle θ to the normal will be seeing the scene through the same aperture of area dA₀ (still corresponding to a dΩ wedge) and from this oblique vantage the area element dA is foreshortened and will subtend a (solid) angle of dΩ₀ cos(θ). This observer will be recording I cos(θ) dΩ dA photons per second, and so will be measuring a radiance of

 I₀ = I cos(θ) dΩ dA / (dΩ₀ cos(θ) dA₀) = I dΩ dA / (dΩ₀ dA₀) photons/(s·m²·sr),
which is the same as the normal observer.
Relating peak luminous intensity and luminous flux
In general, the luminous intensity of a point on a surface varies by direction; for a Lambertian surface, that distribution is defined by the cosine law, with peak luminous intensity in the normal direction. Thus when the Lambertian assumption holds, we can calculate the total luminous flux, F_tot, from the peak luminous intensity, I_max, by integrating the cosine law:

 F_tot = ∫₀^2π ∫₀^(π/2) I_max cos(θ) sin(θ) dθ dφ = 2π I_max ∫₀^(π/2) cos(θ) sin(θ) dθ

and so

 F_tot = π sr · I_max,

where sin(θ) is the determinant of the Jacobian matrix for the unit sphere, and realizing that I_max is the luminous flux per steradian. Similarly, the peak intensity will be 1/(π sr) of the total radiated luminous flux. For Lambertian surfaces, the same factor of π sr relates luminance to luminous emittance, radiant intensity to radiant flux, and radiance to radiant emittance. Radians and steradians are, of course, dimensionless and so "rad" and "sr" are included only for clarity.
Example: A surface with a luminance of say 100 cd/m² (= 100 nits, typical PC monitor) will, if it is a perfect Lambert emitter, have a luminous emittance of 100π lm/m². If its area is 0.1 m² (~19" monitor) then the total light emitted, or luminous flux, would thus be 31.4 lm.
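The example can be checked mechanically (a sketch using only the π sr factor derived above):

 import math

 luminance = 100.0                    # cd/m^2, peak (normal-direction) value
 area = 0.1                           # m^2
 emittance = math.pi * luminance      # lm/m^2 for a perfect Lambertian emitter
 print(emittance * area)              # total luminous flux, ~31.4 lm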
See also
Transmittance
Reflectivity
Passive solar building design
Sun path
References
Eponymous laws of physics
Radiometry
Photometry
3D computer graphics
Scattering | Lambert's cosine law | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,318 | [
"Telecommunications engineering",
"Scattering",
"Condensed matter physics",
"Particle physics",
"Nuclear physics",
"Radiometry"
] |
41,311 | https://en.wikipedia.org/wiki/Layered%20system | In telecommunications, a layered system is a system in which components are grouped, i.e., layered, in a hierarchical arrangement, such that lower layers provide functions and services that support the functions and services of higher layers.
Systems of ever-increasing complexity and capability can be built by adding or changing the layers to improve overall system capability while using the components that are still in place.
Network topology
Holism | Layered system | [
"Mathematics"
] | 83 | [
"Network topology",
"Topology"
] |
41,316 | https://en.wikipedia.org/wiki/Linear%20polarization | In electrodynamics, linear polarization or plane polarization of electromagnetic radiation is a confinement of the electric field vector or magnetic field vector to a given plane along the direction of propagation. The term linear polarization (French: polarisation rectiligne) was coined by Augustin-Jean Fresnel in 1822. See polarization and plane of polarization for more information.
The orientation of a linearly polarized electromagnetic wave is defined by the direction of the electric field vector. For example, if the electric field vector is vertical (alternately up and down as the wave travels) the radiation is said to be vertically polarized.
Mathematical description
The classical sinusoidal plane wave solution of the electromagnetic wave equation for the electric and magnetic fields is (cgs units)

 E(r, t) = |E| Re{ (ψ_x, ψ_y) exp[i(kz − ωt)] }

 B(r, t) = ẑ × E(r, t) / c

for the magnetic field, where k is the wavenumber,

 ω = ck

is the angular frequency of the wave, and c is the speed of light.

Here |E| is the amplitude of the field and

 ψ = (ψ_x, ψ_y) = (cos θ exp(iα_x), sin θ exp(iα_y))

is the Jones vector in the x-y plane.

The wave is linearly polarized when the phase angles are equal,

 α_x = α_y = α.

This represents a wave polarized at an angle θ with respect to the x axis. In that case, the Jones vector can be written

 ψ = (cos θ, sin θ) exp(iα).

The state vectors for linear polarization in x or y are special cases of this state vector.

If unit vectors are defined such that

 x̂ = (1, 0)

and

 ŷ = (0, 1)

then the polarization state can be written in the "x-y basis" as

 ψ = cos θ exp(iα) x̂ + sin θ exp(iα) ŷ = ψ_x x̂ + ψ_y ŷ.
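A short numerical sketch of the description above (the helper name is illustrative): a Jones vector whose two phase angles are equal has a real ψ_y/ψ_x ratio, the hallmark of linear polarization at angle θ from the x axis.

 import cmath, math

 def jones_linear(theta, alpha=0.0):
     # Jones vector (psi_x, psi_y) with equal phase angles alpha_x = alpha_y = alpha.
     return (math.cos(theta) * cmath.exp(1j * alpha),
             math.sin(theta) * cmath.exp(1j * alpha))

 psi_x, psi_y = jones_linear(math.radians(30), alpha=0.4)
 print((psi_y / psi_x).imag)                            # ~0.0 -> linearly polarized
 print(math.degrees(math.atan((psi_y / psi_x).real)))   # ~30 degrees from the x axis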
See also
Sinusoidal plane-wave solutions of the electromagnetic wave equation
Polarization
Circular polarization
Elliptical polarization
Plane of polarization
Photon polarization
References
External links
Animation of Linear Polarization (on YouTube)
Comparison of Linear Polarization with Circular and Elliptical Polarizations (YouTube Animation)
Polarization (waves) | Linear polarization | [
"Physics"
] | 379 | [
"Polarization (waves)",
"Astrophysics"
] |
41,317 | https://en.wikipedia.org/wiki/Line%20code | In telecommunications, a line code is a pattern of voltage, current, or photons used to represent digital data transmitted down a communication channel or written to a storage medium. This repertoire of signals is usually called a constrained code in data storage systems.
Some signals are more prone to error than others as the physics of the communication channel or storage medium constrains the repertoire of signals that can be used reliably.
Common line encodings are unipolar, polar, bipolar, and Manchester code.
Transmission and storage
After line coding, the signal is put through a physical communication channel, either a transmission medium or data storage medium. The most common physical channels are:
the line-coded signal can directly be put on a transmission line, in the form of variations of the voltage or current (often using differential signaling).
the line-coded signal (the baseband signal) undergoes further pulse shaping (to reduce its frequency bandwidth) and then is modulated (to shift its frequency) to create an RF signal that can be sent through free space.
the line-coded signal can be used to turn on and off a light source in free-space optical communication, most commonly used in an infrared remote control.
the line-coded signal can be printed on paper to create a bar code.
the line-coded signal can be converted to magnetized spots on a hard drive or tape drive.
the line-coded signal can be converted to pits on an optical disc.
Some of the more common binary line codes are listed below, in the Common line codes section.
Each line code has advantages and disadvantages. Line codes are chosen to meet one or more of the following criteria:
Minimize transmission hardware
Facilitate synchronization
Ease error detection and correction
Achieve a target spectral density
Eliminate a DC component
Disparity
Most long-distance communication channels cannot reliably transport a DC component. The DC component is also called the disparity, the bias, or the DC coefficient. The disparity of a bit pattern is the difference between the number of one bits and the number of zero bits. The running disparity is the running total of the disparity of all previously transmitted bits. The simplest possible line code, unipolar, gives too many errors on such systems, because it has an unbounded DC component.
Most line codes eliminate the DC component; such codes are called DC-balanced, zero-DC, or DC-free. There are three ways of eliminating the DC component:
Use a constant-weight code. Each transmitted code word in a constant-weight code is designed such that every code word that contains some positive or negative levels also contains enough of the opposite levels, such that the average level over each code word is zero. Examples of constant-weight codes include Manchester code and Interleaved 2 of 5.
Use a paired disparity code. Each code word in a paired disparity code that averages to a negative level is paired with another code word that averages to a positive level. The transmitter keeps track of the running DC buildup, and picks the code word that pushes the DC level back towards zero. The receiver is designed so that either code word of the pair decodes to the same data bits. Examples of paired disparity codes include alternate mark inversion, 8b/10b and 4B3T.
Use a scrambler. For example, the scrambler specified for 64b/66b encoding.
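As a small illustration of the constant-weight approach (a sketch; the ±1 level values and the mapping polarity are assumptions, since conventions vary by standard): Manchester code sends each bit as two opposite half-bit levels, so every code word, and therefore any data stream, averages to zero.

 def manchester(bits):
     # Map each bit to two opposite levels; the polarity convention varies by standard.
     out = []
     for b in bits:
         out += [-1, +1] if b else [+1, -1]
     return out

 signal = manchester([1, 0, 0, 1, 1])
 print(sum(signal))   # 0 -> no DC component, whatever the data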
Polarity
Bipolar line codes have two polarities, are generally implemented as RZ, and have a radix of three since there are three distinct output levels (negative, positive and zero). One of the principal advantages of this type of code is that it can eliminate any DC component. This is important if the signal must pass through a transformer or a long transmission line.
Unfortunately, several long-distance communication channels have polarity ambiguity. Polarity-insensitive line codes compensate in these channels.
There are three ways of providing unambiguous reception of 0 and 1 bits over such channels:
Pair each code word with the polarity-inverse of that code word. The receiver is designed so that either code word of the pair decodes to the same data bits. Examples include alternate mark inversion, Differential Manchester encoding, coded mark inversion and Miller encoding.
Differentially encode each symbol relative to the previous symbol. Examples include MLT-3 encoding and NRZI.
Invert the whole stream when inverted syncwords are detected, perhaps using polarity switching.
Run-length limited codes
For reliable clock recovery at the receiver, a run-length limitation may be imposed on the generated channel sequence, i.e., the maximum number of consecutive ones or zeros is bounded to a reasonable number. A clock period is recovered by observing transitions in the received sequence, so that a maximum run length guarantees sufficient transitions to assure clock recovery quality.
RLL codes are defined by four main parameters: m, n, d, k. The first two, m/n, refer to the rate of the code, while the remaining two specify the minimal d and maximal k number of zeroes between consecutive ones. This is used in both telecommunications and storage systems that move a medium past a fixed recording head.
Specifically, RLL bounds the length of stretches (runs) of repeated bits during which the signal does not change. If the runs are too long, clock recovery is difficult; if they are too short, the high frequencies might be attenuated by the communications channel. By modulating the data, RLL reduces the timing uncertainty in decoding the stored data, which would lead to the possible erroneous insertion or removal of bits when reading the data back. This mechanism ensures that the boundaries between bits can always be accurately found (preventing bit slip), while efficiently using the media to reliably store the maximal amount of data in a given space.
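A minimal checker for the (d, k) constraint (a sketch; only the runs of zeroes strictly between consecutive ones are examined, since the runs at a code word's edges depend on its neighbours):

 def satisfies_rll(bits, d, k):
     # Lengths of the zero-runs between consecutive ones.
     runs = [len(r) for r in ''.join(map(str, bits)).split('1')[1:-1]]
     return all(d <= n <= k for n in runs)

 print(satisfies_rll([1, 0, 0, 1, 0, 0, 0, 1], d=2, k=3))   # True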
Early disk drives used very simple encoding schemes, such as RLL (0,1) FM code, followed by RLL (1,3) MFM code which were widely used in hard disk drives until the mid-1980s and are still used in digital optical discs such as CD, DVD, MD, Hi-MD and Blu-ray using EFM and EFMPlus codes. Higher density RLL (2,7) and RLL (1,7) codes became the de facto standards for hard disks by the early 1990s.
Synchronization
Line coding should make it possible for the receiver to synchronize itself to the phase of the received signal. If the clock recovery is not ideal, then the signal to be decoded will not be sampled at the optimal times. This will increase the probability of error in the received data.
Biphase line codes require at least one transition per bit time. This makes it easier to synchronize the transceivers and detect errors; however, the baud rate is greater than that of NRZ codes.
Other considerations
A line code will typically reflect technical requirements of the transmission medium, such as optical fiber or shielded twisted pair. These requirements are unique for each medium, because each one has different behavior related to interference, distortion, capacitance and attenuation.
Common line codes
2B1Q
4B3T
4B5B
6b/8b encoding
8b/10b encoding
64b/66b encoding
128b/130b encoding
Alternate mark inversion (AMI)
Coded mark inversion (CMI)
EFMPlus, used in DVDs
Eight-to-fourteen modulation (EFM), used in compact discs
Hamming code
Hybrid ternary code
Manchester code and differential Manchester
Mark and space
MLT-3 encoding
Modified AMI codes: B8ZS, B6ZS, B3ZS, HDB3
Modified frequency modulation, Miller encoding and delay encoding
Non-return-to-zero (NRZ)
Non-return-to-zero, inverted (NRZI)
Pulse-position modulation (PPM)
Return-to-zero (RZ)
TC-PAM
Optical line codes
Alternate-Phase Return-to-Zero (APRZ)
Carrier-Suppressed Return-to-Zero (CSRZ)
Three of Six, Fiber Optical (TS-FO)
See also
Physical layer
Self-synchronizing code and bit synchronization
References
External links
Line Coding Lecture No. 9
Line Coding in Digital Communication
CodSim 2.0: Open source simulator for Digital Data Communications Model at the University of Malaga written in HTML
Physical layer protocols
Coding theory | Line code | [
"Mathematics"
] | 1,721 | [
"Discrete mathematics",
"Coding theory"
] |
41,320 | https://en.wikipedia.org/wiki/Link%20level | In computer networking, in the hierarchical structure of a primary or secondary station, link level is the conceptual level of control or data processing logic that controls the data link.
Link-level functions provide an interface between the station high-level logic and the data link. Link-level functions include (a) transmit bit injection and receive bit extraction, (b) address and control field interpretation, (c) command response generation, transmission and interpretation, and (d) frame check sequence computation and interpretation.
References
Computer networking | Link level | [
"Technology",
"Engineering"
] | 104 | [
"Computer networking",
"Computer science",
"Computer engineering"
] |
41,321 | https://en.wikipedia.org/wiki/Link%20quality%20analysis | In adaptive high-frequency (HF) radio, link quality analysis (LQA) is the overall process by which measurements of radio signal quality are made, assessed, and analyzed.
In LQA, signal quality is determined by measuring, assessing, and analyzing link parameters, such as bit error ratio (BER), and the levels of the ratio of signal-plus-noise-plus-distortion to noise-plus-distortion (SINAD). Measurements are stored at—and exchanged between—stations, for use in making decisions about link establishment.
For adaptive HF radio, LQA is automatically performed and is usually based on analyses of pseudo-BERs and SINAD readings.
References
Radio technology | Link quality analysis | [
"Technology",
"Engineering"
] | 144 | [
"Information and communications technology",
"Telecommunications engineering",
"Radio technology"
] |
41,326 | https://en.wikipedia.org/wiki/Loading%20coil | A loading coil or load coil is an inductor that is inserted into an electronic circuit to increase its inductance. The term originated in the 19th century for inductors used to prevent signal distortion in long-distance telegraph transmission cables. The term is also used for inductors in radio antennas, or between the antenna and its feedline, to make an electrically short antenna resonant at its operating frequency.
The concept of loading coils was discovered by Oliver Heaviside in studying the problem of slow signalling speed of the first transatlantic telegraph cable in the 1860s. He concluded additional inductance was required to prevent amplitude and time delay distortion of the transmitted signal. The mathematical condition for distortion-free transmission is known as the Heaviside condition. Previous telegraph lines were overland or shorter and hence had less delay, and the need for extra inductance was not as great. Submarine communications cables are particularly subject to the problem, but early 20th century installations using balanced pairs were often continuously loaded with iron wire or tape rather than discretely with loading coils, which avoided the sealing problem.
Loading coils are historically also known as Pupin coils after Mihajlo Pupin, especially when used for the Heaviside condition and the process of inserting them is sometimes called pupinization.
Applications
Telephone lines
A common application of loading coils is to improve the voice-frequency amplitude response characteristics of the twisted balanced pairs in a telephone cable. Because twisted pair is a balanced format, half the loading coil must be inserted in each leg of the pair to maintain the balance. It is common for both these windings to be formed on the same core. This increases the flux linkages, without which the number of turns on the coil would need to be increased. Despite the use of common cores, such loading coils do not comprise transformers, as they do not provide coupling to other circuits.
Loading coils inserted periodically in series with a pair of wires reduce the attenuation at the higher voice frequencies up to the cutoff frequency of the low-pass filter formed by the inductance of the coils (plus the distributed inductance of the wires) and the distributed capacitance between the wires. Above the cutoff frequency, attenuation increases rapidly. The shorter the distance between the coils, the higher the cut-off frequency. The cutoff effect is an artifact of using lumped inductors. With loading methods using continuous distributed inductance there is no cutoff.
Without loading coils, the line response is dominated by the resistance and capacitance of the line with the attenuation gently increasing with frequency. With loading coils of exactly the right inductance, neither capacitance nor inductance dominate: the response is flat, waveforms are undistorted and the characteristic impedance is resistive up to the cutoff frequency. The coincidental formation of an audio frequency filter is also beneficial in that noise is reduced.
DSL
With loading coils, signal attenuation of a circuit remains low for signals within the passband of the transmission line but increases rapidly for frequencies above the audio cutoff frequency. If the telephone line is subsequently reused to support applications that require higher frequencies, such as in analog or digital carrier systems or digital subscriber line (DSL), loading coils must be removed or replaced. Using coils with parallel capacitors forms a filter with the topology of an m-derived filter and a band of frequencies above the cut-off is also passed. Without removal, for subscribers at an extended distance, e.g., over 4 miles (6.4 km) from the central office, DSL cannot be supported.
Carrier systems
American early and middle 20th century telephone cables had load coils at intervals of a mile (1.61 km), usually in coil cases holding many. The coils had to be removed to pass higher frequencies, but the coil cases provided convenient places for repeaters of digital T-carrier systems, which could then transmit a 1.5 Mbit/s signal that distance. Due to narrower streets and higher cost of copper, European cables had thinner wires and used closer spacing. Intervals of a kilometer allowed European systems to carry 2 Mbit/s.
Radio antenna
Another type of loading coil is used in radio antennas. Monopole and dipole radio antennas are designed to act as resonators for radio waves; the power from the transmitter, applied to the antenna through the antenna's transmission line, excites standing waves of voltage and current in the antenna element. To be “naturally” resonant, the antenna must have a physical length of one quarter of the wavelength of the radio waves used (or a multiple of that length, with odd multiples usually preferred). At resonance, the antenna acts electrically as a pure resistance, absorbing all the power applied to it from the transmitter.
In many cases, for practical reasons, it is necessary to make the antenna shorter than the resonant length; this is called an electrically short antenna. An antenna shorter than a quarter wavelength presents capacitive reactance to the transmission line. Some of the applied power is reflected back into the transmission line and travels back toward the transmitter. The two currents at the same frequency running in opposite directions cause standing waves on the transmission line, measured as a standing wave ratio (SWR) greater than one. The elevated currents waste energy by heating the wire, and can even overheat the transmitter.
To make an electrically short antenna resonant, a loading coil is inserted in series with the antenna. The coil is built to have an inductive reactance equal and opposite to the capacitive reactance of the short antenna, so the combination of reactances cancels. When so loaded the antenna presents a pure resistance to the transmission line, preventing energy from being reflected. The loading coil is often placed at the base of the antenna, between it and the transmission line (base loading), but for more efficient radiation, it is sometimes inserted near the midpoint of the antenna element (center loading).
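As a numerical sketch of the cancellation condition (the reactance and frequency below are assumed values, purely for illustration): an antenna presenting a capacitive reactance of magnitude 1000 Ω at 1.8 MHz needs a coil whose inductive reactance X_L = 2πfL equals 1000 Ω.

 import math

 f = 1.8e6        # operating frequency in hertz (assumed)
 x_c = 1000.0     # magnitude of the antenna's capacitive reactance in ohms (assumed)
 L = x_c / (2 * math.pi * f)      # inductance whose reactance cancels x_c
 print(L * 1e6, "microhenries")   # ~88 uH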
Loading coils for powerful transmitters can have challenging design requirements, especially at low frequencies. The radiation resistance of short antennas can be very low, as low as a few ohms in the LF or VLF bands, where antennas are commonly short and inductive loading is most needed. Because resistance in the coil winding is comparable to, or exceeds, the radiation resistance, loading coils for extremely electrically short antennas must have extremely low AC resistance at the operating frequency. To reduce skin effect losses, the coil is often made of tubing or Litz wire, with single layer windings, with turns spaced apart to reduce proximity effect resistance. They must often handle high voltages. To reduce power lost in dielectric losses, the coil is often suspended in air supported on thin ceramic strips. The capacitively loaded antennas used at low frequencies have extremely narrow bandwidths, and therefore if the frequency is changed the loading coil must be adjustable to tune the antenna to resonance with the new transmitter frequency. Variometers are often used.
Bulk power transmission
To reduce losses due to high capacitance on long-distance bulk power transmission lines, inductance can be introduced to the circuit with a flexible AC transmission system (FACTS), a static VAR compensator, or a static synchronous series compensator. Series compensation can be thought of as an inductor connected to the circuit in series if it is supplying inductance to the circuit.
Campbell equation
The Campbell equation is a relationship due to George Ashley Campbell for predicting the propagation constant of a loaded line. It is stated as

 cosh(γ′d) = cosh(γd) + (Z_C / 2Z₀) sinh(γd)

where,

 γ is the propagation constant of the unloaded line
 γ′ is the propagation constant of the loaded line
 d is the interval between coils on the loaded line
 Z_C is the impedance of a loading coil and
 Z₀ is the characteristic impedance of the unloaded line.
A more engineer-friendly rule of thumb is that the approximate requirement for spacing loading coils is ten coils per wavelength of the maximum frequency being transmitted. This approximation can be arrived at by treating the loaded line as a constant k filter and applying image filter theory to it. From basic image filter theory the angular cutoff frequency and the characteristic impedance of a low-pass constant k filter are given by

 ω_c = 1/√(L½C½)

and,

 Z₀ = √(L½/C½)

where L½ and C½ are the half section element values.

From these basic equations the necessary loading coil inductance and coil spacing can be found:

 L = 2Z₀/ω_c

and,

 d = 2/(ω_c Z₀ C)

where C is the capacitance per unit length of the line.

Expressing this in terms of number of coils per cutoff wavelength yields:

 n = λ_c/d = π v C Z₀

where v is the velocity of propagation of the cable in question.

Since v = 1/(C Z₀) for the loaded line, then

 n = π.
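A numeric sketch of these relations (the cable constants are assumed for illustration, not historical specifications): for a pair with 50 nF/km capacitance, a nominal impedance of 1100 Ω and a target cutoff of 4 kHz,

 import math

 C = 50e-9 / 1000.0           # farads per metre (assumed 50 nF/km)
 Z0 = 1100.0                  # ohms (assumed)
 w_c = 2 * math.pi * 4000.0   # target angular cutoff frequency

 print(2 * Z0 / w_c * 1e3, "mH per coil")   # ~87.5 mH
 print(2 / (w_c * Z0 * C) / 1000.0, "km")   # coil spacing, ~1.45 km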
Campbell arrived at this expression by analogy with a mechanical line periodically loaded with weights described by Charles Godfrey in 1898 who obtained a similar result. Mechanical loaded lines of this sort were first studied by Joseph-Louis Lagrange (1736–1813).
The phenomenon of cutoff whereby frequencies above the cutoff frequency are not transmitted is an undesirable side effect of loading coils (although it proved highly useful in the development of filters). Cutoff is avoided by the use of continuous loading since it arises from the lumped nature of the loading coils.
History
Oliver Heaviside
The origin of the loading coil can be found in the work of Oliver Heaviside on the theory of transmission lines. Heaviside (1881) represented the line as a network of infinitesimally small circuit elements. By applying his operational calculus to the analysis of this network he discovered (1887) what has become known as the Heaviside condition. This is the condition that must be fulfilled in order for a transmission line to be free from distortion. The Heaviside condition is that the series impedance, Z, must be proportional to the shunt admittance, Y, at all frequencies. In terms of the primary line coefficients the condition is:

 G/C = R/L

where:

 R is the series resistance of the line per unit length
 L is the series self-inductance of the line per unit length
 G is the shunt leakage conductance of the line insulator per unit length
 C is the shunt capacitance between the line conductors per unit length

Heaviside was aware that this condition was not met in the practical telegraph cables in use in his day. In general, a real cable would have

 G/C ≪ R/L.
This is mainly due to the low value of leakage through the cable insulator, which is even more pronounced in modern cables which have better insulators than in Heaviside's day. In order to meet the condition, the choices are therefore to try to increase G or L or to decrease R or C. Decreasing R requires larger conductors. Copper was already in use in telegraph cables and this is the very best conductor available short of using silver. Decreasing R means using more copper and a more expensive cable. Decreasing C would also mean a larger cable (although not necessarily more copper). Increasing G is highly undesirable; while it would reduce distortion, it would at the same time increase the signal loss. Heaviside considered, but rejected, this possibility which left him with the strategy of increasing L as the way to reduce distortion.
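A numeric sketch of how far a practical cable falls short of the condition (the line constants are illustrative assumptions, not measured values):

 # Assumed per-kilometre constants for a voice-frequency copper pair.
 R = 170.0      # ohms/km
 L = 0.7e-3     # henries/km
 G = 1.0e-6     # siemens/km
 C = 50.0e-9    # farads/km

 print(R / L)   # ~240,000
 print(G / C)   # ~20 -> orders of magnitude below R/L
 # The condition G/C = R/L fails badly; increasing L (loading) narrows the gap.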
Heaviside immediately (1887) proposed several methods of increasing the inductance, including spacing the conductors further apart and loading the insulator with iron dust. Finally, Heaviside made the proposal (1893) to use discrete inductors at intervals along the line. However, he never succeeded in persuading the British GPO to take up the idea. Brittain attributes this to Heaviside's failure to provide engineering details on the size and spacing of the coils for particular cable parameters. Heaviside's eccentric character and setting himself apart from the establishment may also have played a part in their ignoring of him.
John Stone
John S. Stone worked for the American Telephone & Telegraph Company (AT&T) and was the first to attempt to apply Heaviside's ideas to real telecommunications. Stone's idea (1896) was to use a bimetallic iron-copper cable which he had patented. This cable of Stone's would increase the line inductance due to the iron content and had the potential to meet the Heaviside condition. However, Stone left the company in 1899 and the idea was never implemented. Stone's cable was an example of continuous loading, a principle that was eventually put into practice in other forms, see for instance Krarup cable later in this article.
George Campbell
George Campbell was another AT&T engineer working in their Boston facility. Campbell was tasked with continuing the investigation into Stone's bimetallic cable, but soon abandoned it in favour of the loading coil. His was an independent discovery: Campbell was aware of Heaviside's work in discovering the Heaviside condition, but unaware of Heaviside's suggestion of using loading coils to enable a line to meet it. The motivation for the change of direction was Campbell's limited budget.
Campbell was struggling to set up a practical demonstration over a real telephone route with the budget he had been allocated. After considering that his artificial line simulators used lumped components rather than the distributed quantities found in a real line, he wondered if he could not insert the inductance with lumped components instead of using Stone's distributed line. When his calculations showed that the manholes on telephone routes were sufficiently close together to be able to insert the loading coils without the expense of either having to dig up the route or lay in new cables he changed to this new plan. The very first demonstration of loading coils on a telephone cable was on a 46-mile length of the so-called Pittsburgh cable (the test was actually in Boston, the cable had previously been used for testing in Pittsburgh) on 6 September 1899 carried out by Campbell himself and his assistant. The first telephone cable using loaded lines put into public service was between Jamaica Plain and West Newton in Boston on 18 May 1900.
Campbell's work on loading coils provided the theoretical basis for his subsequent work on filters which proved to be so important for frequency-division multiplexing. The cut-off phenomena of loading coils, an undesirable side-effect, can be exploited to produce a desirable filter frequency response.
Michael Pupin
Michael Pupin, inventor and Serbian immigrant to the USA, also played a part in the story of loading coils. Pupin filed a rival patent to the one of Campbell's. This patent of Pupin's dates from 1899. There is an earlier patent (1894, filed December 1893) which is sometimes cited as Pupin's loading coil patent but is, in fact, something different. The confusion is easy to understand, Pupin himself claims that he first thought of the idea of loading coils while climbing a mountain in 1894, although there is nothing from him published at that time.
Pupin's 1894 patent "loads" the line with capacitors rather than inductors, a scheme that has been criticised as being theoretically flawed and never put into practice. To add to the confusion, one variant of the capacitor scheme proposed by Pupin does indeed have coils. However, these are not intended to compensate the line in any way. They are there merely to restore DC continuity to the line so that it may be tested with standard equipment. Pupin states that the inductance is to be so large that it blocks all AC signals above 50 Hz. Consequently, only the capacitor is adding any significant impedance to the line and "the coils will not exercise any material influence on the results before noted".
Legal battle
Heaviside never patented his idea; indeed, he took no commercial advantage of any of his work. Despite the legal disputes surrounding this invention, it is unquestionable that Campbell was the first to actually construct a telephone circuit using loading coils. There also can be little doubt that Heaviside was the first to publish and many would dispute Pupin's priority.
AT&T fought a legal battle with Pupin over his claim. Pupin was first to patent but Campbell had already conducted practical demonstrations before Pupin had even filed his patent (December 1899). Campbell's delay in filing was due to the slow internal machinations of AT&T.
However, AT&T foolishly deleted from Campbell's proposed patent application all the tables and graphs detailing the exact value of inductance that would be required before the patent was submitted. Since Pupin's patent contained a (less accurate) formula, AT&T was open to claims of incomplete disclosure. Fearing that there was a risk that the battle would end with the invention being declared unpatentable due to Heaviside's prior publication, they decided to desist from the challenge and buy an option on Pupin's patent for a yearly fee so that AT&T would control both patents. By January 1901 Pupin had been paid $200,000 ($13 million in 2011) and by 1917, when the AT&T monopoly ended and payments ceased, he had received a total of $455,000 ($25 million in 2011).
Benefit to AT&T
The invention was of enormous value to AT&T. Telephone cables could now be used to twice the distance previously possible, or alternatively, a cable of half the previous quality (and cost) could be used over the same distance. When considering whether to allow Campbell to go ahead with the demonstration, their engineers had estimated that they stood to save $700,000 in new installation costs in New York and New Jersey alone. It has been estimated that AT&T saved $100 million in the first quarter of the 20th century. Heaviside, who began it all, came away with nothing. He was offered a token payment but would not accept, wanting the credit for his work. He remarked ironically that if his prior publication had been admitted it would "interfere ... with the flow of dollars in the proper direction ...".
Submarine cables
Distortion is a particular problem for submarine communication cables, partly because their great length allows more distortion to build up, but also because they are more susceptible to distortion than open wires on poles due to the characteristics of the insulating material. Different wavelengths of the signal travel at different velocities in the material causing dispersion. It was this problem on the first transatlantic telegraph cable that motivated Heaviside to study the problem and find the solution. Loading coils solve the dispersion problem, and the first use of them on a submarine cable was in 1906 by Siemens and Halske in a cable across Lake Constance.
There are a number of difficulties using loading coils with heavy submarine cables. The bulge of the loading coils could not easily pass through the cable laying apparatus of cable ships and the ship had to slow down during the laying of a loading coil. Discontinuities where the coils were installed caused stresses in the cable during laying. Without great care, the cable might part and would be difficult to repair. A further problem was that the material science of the time had difficulties sealing the joint between coil and cable against ingress of seawater. When this occurred the cable was ruined. Continuous loading was developed to overcome these problems, which also has the benefit of not having a cutoff frequency.
Krarup cable
A Danish engineer, Carl Emil Krarup, invented a form of continuously loaded cable which solved the problems of discrete loading coils. Krarup cable has iron wires continuously wound around the central copper conductor with adjacent turns in contact with each other. This cable was the first use of continuous loading on any telecommunication cable. In 1902, Krarup both wrote his paper on this subject and saw the installation of the first cable between Helsingør (Denmark) and Helsingborg (Sweden).
Permalloy cable
Even though the Krarup cable added inductance to the line, this was insufficient to meet the Heaviside condition. AT&T searched for a better material with higher magnetic permeability. In 1914, Gustav Elmen discovered permalloy, a magnetic nickel-iron annealed alloy. In c. 1915, Oliver E. Buckley, H. D. Arnold, and Elmen, all at Bell Labs, greatly improved transmission speeds by suggesting a method of constructing submarine communications cable using permalloy tape wrapped around the copper conductors.
The cable was tested in a trial in Bermuda in 1923. The first permalloy cable placed in service connected New York City and Horta (Azores) in September 1924. Permalloy cable enabled signalling speed on submarine telegraph cables to be increased to 400 words/min at a time when 40 words/min was considered good. The first transatlantic cable achieved only two words/min.
Mu-metal cable
Mu-metal has similar magnetic properties to permalloy but the addition of copper to the alloy increases the ductility and allows the metal to be drawn into wire. Mu-metal cable is easier to construct than permalloy cable, the mu-metal being wound around the core copper conductor in much the same way as the iron wire in Krarup cable. A further advantage with mu-metal cable is that the construction lends itself to a variable loading profile whereby the loading is tapered towards the ends.
Mu-metal was invented in 1923 by the Telegraph Construction and Maintenance Company, London, who made the cable, initially, for the Western Union Telegraph Co. Western Union were in competition with AT&T and the Western Electric Company who were using permalloy. The patent for permalloy was held by Western Electric which prevented Western Union from using it.
Patch loading
Continuous loading of cables is expensive and hence is only done when absolutely necessary. Lumped loading with coils is cheaper but has the disadvantages of difficult seals and a definite cutoff frequency. A compromise scheme is patch loading whereby the cable is continuously loaded in repeated sections. The intervening sections are left unloaded.
Current practice
Loaded cable is no longer a useful technology for submarine communication cables, having first been superseded by co-axial cable using electrically powered in-line repeaters and then by fibre-optic cable. Manufacture of loaded cable declined in the 1930s and was then superseded by other technologies post-World War 2. Loading coils can still be found in some telephone landlines today but new installations use more modern technology.
See also
Electrical lengthening
Antenna tuner
Constant k filter
Unloaded phantom
References
Bibliography
Bakshi, V.A.; Bakshi, A V, Transmission Lines And Waveguide, Technical Publications, 2009 .
Bray, J., Innovation and the Communications Revolution, Institute of Electrical Engineers, 2002 .
Brittain, James E., "The introduction of the loading coil: George A. Campbell and Michael I. Pupin", Technology and Culture, vol. 11, no. 1, pp. 36–57, The Johns Hopkins University Press on behalf of the Society for the History of Technology, January 1970.
Godfrey, Charles, "On discontinuities connected with the propagation of wave-motion along a periodically loaded string", Philosophical Magazine, ser. 5, vol. 45, no. 275, pp. 356–363, April 1898.
Green, Allan, "150 Years Of Industry & Enterprise At Enderby's Wharf", Fleming Centenary Conference, University College, July 2004, retrieved from History of the Atlantic Cable & Undersea Communications, 16 January 2009.
Griffiths, Hugh, "Oliver Heaviside", ch. 6 in, Sarkar, Tapan K; Mailloux, Robert J; Oliner, Arthur A; Salazar-Palma, Magdalena; Sengupta, Dipak L, History of Wireless, Wiley, 2006 .
Heaviside, O., Electrical Papers, American Mathematical Society Bookstore, 1970 (reprint from 1892) .
Huurdeman, A.A., The Worldwide History of Telecommunications, Wiley-IEEE, 2003 .
Kragh, H., "The Krarup cable: Invention and early development", Technology and Culture, vol. 35, no. 1, pp. 129–157, The Johns Hopkins University Press on behalf of the Society for the History of Technology, January 1994.
Mason, Warren P., "Electrical and mechanical analogies", Bell System Technical Journal, vol. 20, no. 4, pp. 405–414, October 1941.
May, Earl Chapin, "Four millions on 'permalloy'—to win!", Popular Mechanics, vol. 44, no. 6, pp. 947–952, December 1925 .
Nahin, Paul J., Oliver Heaviside: The Life, Work, and Times of an Electrical Genius of the Victorian Age, JHU Press, 2002 .
Newell, E.L., "Loading coils for ocean cables", Transactions of the American Institute of Electrical Engineers, Part I: Communication and Electronics, vol. 76, iss. 4, pp. 478–482, September 1957.
External links
Allan Green, "150 Years Of Industry & Enterprise At Enderby's Wharf", History of the Atlantic Cable & Undersea Communications. Includes photographs of continuously loaded cable.
Electromagnetic coils
Telephony equipment
Telecommunications equipment
Telecommunications engineering
Communication circuits
History of electronic engineering | Loading coil | [
"Engineering"
] | 5,193 | [
"Telecommunications engineering",
"Electronic engineering",
"History of electronic engineering",
"Electrical engineering",
"Communication circuits"
] |
41,333 | https://en.wikipedia.org/wiki/Long-haul%20communications | In telecommunications, the term long-haul communications has the following meanings:
1. In public switched networks, pertaining to circuits that span large distances, such as the circuits in inter-LATA, interstate, and international communications. See also Long line (telecommunications)
2. In the military community, communications among users on a national or worldwide basis.
Note 1: Compared to tactical communications, long-haul communications are characterized by (a) higher levels of users, such as the US National Command Authority, (b) more stringent performance requirements, such as higher quality circuits, (c) longer distances between users, including worldwide distances, (d) higher traffic volumes and densities, (e) larger switches and trunk cross sections, and (f) fixed and recoverable assets.
Note 2: "Long-haul communications" usually pertains to the U.S. Defense Communications System.
Note 3: "Long-haul telecommunications technicians" can be translated into many fields of IT work within the corporate industry (Information Technology, Network Technician, Telecommunication Specialist, It Support, and so on). While the term is used in military most career fields that are in communications such as 3D1X2 - Cyber Transport Systems (the career field has been renamed so many times over the course of many years but essentially it is the same job (Network Infrastructure Tech., Systems Control Technician, and Cyber Transport Systems)) or may work in areas that require the "in between" (cloud networking) for networks (MSPP, ATM, Routers, Switches), phones (VOIP, DS0 - DS4 or higher, and so on), encryption (configuring encryption devices or monitoring), and video support data transfers. The "bulk data transfer" or aggregation networking.
The Long-haul telecommunication technicians is considered a "jack of all" but it is much in the technician's interest to gather greater education with certifications to qualify for certain jobs outside the military. The Military provides an avenue but does not make the individual a master of the career field. The technician will find that the job out look outside of military requires many things that aren't required of them within the career field while in the military. So it is best to find the job that is similar to the AFSC and also view the companies description of the qualification to fit that job. Also at least get an associate degree, over 5 years experience, and all of the required "certs" (Network +, Security +, CCNA, CCNP and so on) to acquire the job or at least an interview. The best time to apply or get a guaranteed job is the last three months before you leave the military. Military personnel that are within the career field 3D1X2 require a Secret, TS, or TS with SCI clearance in order to do the job.
See also
Long-distance calling
Meteor burst communications
Communication circuits | Long-haul communications | [
"Engineering"
] | 590 | [
"Telecommunications engineering",
"Communication circuits"
] |
41,334 | https://en.wikipedia.org/wiki/Longitudinal%20redundancy%20check | In telecommunication, a longitudinal redundancy check (LRC), or horizontal redundancy check, is a form of redundancy check that is applied independently to each of a parallel group of bit streams. The data must be divided into transmission blocks, to which the additional check data is added.
The term usually applies to a single parity bit per bit stream, calculated independently of all the other bit streams (BIP-8).
This "extra" LRC word at the end of a block of data is very similar to checksum and cyclic redundancy check (CRC).
Optimal rectangular code
While simple longitudinal parity can only detect errors, it can be combined with additional error-control coding, such as a transverse redundancy check (TRC), to correct errors. The transverse redundancy check is stored on a dedicated "parity track".
Whenever any single-bit error occurs in a transmission block of data, such two-dimensional parity checking, or "two-coordinate parity checking",
enables the receiver to use the TRC to detect which byte the error occurred in, and the LRC to detect exactly which track the error occurred in, to discover exactly which bit is in error, and then correct that bit by flipping it.
Pseudocode
International standard ISO 1155 states that a longitudinal redundancy check for a sequence of bytes may be computed in software by the following algorithm:
lrc := 0
for each byte b in the buffer do
    lrc := (lrc + b) and 0xFF
lrc := (((lrc XOR 0xFF) + 1) and 0xFF)
which can be expressed as "the 8-bit two's-complement value of the sum of all bytes modulo 2^8" (x AND 0xFF is equivalent to x MOD 2^8).
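As an illustration, the algorithm above translates directly into Python; this is a minimal sketch, and the example frame bytes are arbitrary, not taken from the standard:

def lrc(buffer):
    # ISO 1155 LRC: 8-bit two's complement of the byte sum, modulo 2^8.
    total = 0
    for b in buffer:
        total = (total + b) & 0xFF          # running sum kept to 8 bits
    return ((total ^ 0xFF) + 1) & 0xFF      # two's complement of the sum

data = bytes([0x02, 0x30, 0x30, 0x31, 0x03])  # arbitrary example frame
check = lrc(data)
assert (sum(data) + check) & 0xFF == 0        # appending the LRC makes the block sum vanish mod 256

The assertion shows the receiver-side check: summing a block together with its LRC byte gives zero modulo 256 when the block is intact.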
Other forms
Many protocols use an XOR-based longitudinal redundancy check byte (often called block check character or BCC), including the serial line interface protocol (SLIP, not to be confused with the later and well-known Serial Line Internet Protocol),
the IEC 62056-21 standard for electrical-meter reading, smart cards as defined in ISO/IEC 7816, and the ACCESS.bus protocol.
An 8-bit LRC such as this is equivalent to a cyclic redundancy check using the polynomial x^8 + 1, but the independence of the bit streams is less clear when looked at in that way.
References
Error detection and correction
Articles with example pseudocode
ISO standards | Longitudinal redundancy check | [
"Engineering"
] | 522 | [
"Error detection and correction",
"Reliability engineering"
] |
41,336 | https://en.wikipedia.org/wiki/Long-term%20stability | In electronics, the long-term stability of an oscillator is the degree of uniformity of frequency over time, when the frequency is measured under identical environmental conditions, such as supply voltage, load, and temperature. Long-term frequency changes are caused by changes in the oscillator elements that determine frequency, such as crystal drift, inductance changes, and capacitance changes.
Timekeeping | Long-term stability | [
"Physics"
] | 86 | [
"Spacetime",
"Timekeeping",
"Physical quantities",
"Time"
] |
41,338 | https://en.wikipedia.org/wiki/Loop%20gain | In electronics and control system theory, loop gain is the sum of the gain, expressed as a ratio or in decibels, around a feedback loop. Feedback loops are widely used in electronics in amplifiers and oscillators, and more generally in both electronic and nonelectronic industrial control systems to control industrial plant and equipment. The concept is also used in biology. In a feedback loop, the output of a device, process or plant is sampled and applied to alter the input, to better control the output. The loop gain, along with the related concept of loop phase shift, determines the behavior of the device, and particularly whether the output is stable, or unstable, which can result in oscillation. The importance of loop gain as a parameter for characterizing electronic feedback amplifiers was first recognized by Heinrich Barkhausen in 1921, and was developed further by Hendrik Wade Bode and Harry Nyquist at Bell Labs in the 1930s.
A block diagram of an electronic amplifier with negative feedback is shown at right. The input signal is applied to the amplifier with open-loop gain A and amplified. The output of the amplifier is applied to a feedback network with gain β, and subtracted from the input to the amplifier. The loop gain is calculated by imagining the feedback loop is broken at some point, and calculating the net gain if a signal is applied. In the diagram shown, the loop gain is the product of the gains of the amplifier and the feedback network, −Aβ. The minus sign is because the feedback signal is subtracted from the input.
The gains A and β, and therefore the loop gain, generally vary with the frequency of the input signal, and so are usually expressed as functions of the angular frequency ω in radians per second. It is often displayed as a graph with the horizontal axis frequency ω and the vertical axis gain. In amplifiers, the loop gain is the difference between the open-loop gain curve and the closed-loop gain curve (actually, the 1/β curve) on a dB scale.
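As a worked illustration, the following Python sketch evaluates the loop gain and the closed-loop gain for a hypothetical single-pole amplifier; the DC gain of 100,000, 10 Hz pole, and feedback factor β = 0.01 are invented for the example:

import numpy as np

A0, pole_hz, beta = 1e5, 10.0, 0.01      # assumed open-loop DC gain, pole frequency, feedback factor

f = np.logspace(0, 7, 8)                 # sample frequencies from 1 Hz to 10 MHz
A = A0 / (1 + 1j * f / pole_hz)          # single-pole open-loop gain A(jf)
loop_gain = A * beta                     # loop gain; the minus sign is carried by the input subtraction
closed_loop = A / (1 + loop_gain)        # closed-loop gain, near 1/beta while |A*beta| >> 1

for fi, lg, cl in zip(f, loop_gain, closed_loop):
    print(f"{fi:>10.0f} Hz   |loop gain| = {abs(lg):10.1f}   |closed loop| = {abs(cl):8.1f}")

While |Aβ| is much greater than 1 the closed-loop gain sits near 1/β = 100, and on a dB plot the loop gain is exactly the gap between the open-loop curve and the 1/β line, as described above.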
In telecommunications, the term "loop gain" can refer to the total usable power gain of a carrier terminal or two-wire repeater. The maximum usable gain is determined by, and may not exceed, the losses in the closed path.
See also
Phase margin and gain margin
Nyquist plot
Summary of negative feedback amplifier terms
References
External links
Loop Gain and its Effects on Analog Circuit Performance
"Striving for Small-Signal Stability", IEEE Circuits and Devices Magazine, vol. 17, no. 1, pp. 31–41, January 2001.
Electronic amplifiers | Loop gain | [
"Technology"
] | 523 | [
"Electronic amplifiers",
"Amplifiers"
] |
41,341 | https://en.wikipedia.org/wiki/Machine-readable%20medium%20and%20data | In communications and computing, a machine-readable medium (or computer-readable medium) is a medium capable of storing data in a format easily readable by a digital computer or a sensor.
It contrasts with human-readable medium and data.
The result is called machine-readable data or computer-readable data, and the data itself can be described as having machine-readability.
Data
Machine-readable data must be structured data.
Attempts to create machine-readable data occurred as early as the 1960s. At the same time that seminal developments in machine reading and natural-language processing were being released (like Weizenbaum's ELIZA), people were anticipating the success of machine-readable functionality and attempting to create machine-readable documents. One such example was musicologist Nancy B. Reich's creation of a machine-readable catalog of composer William Jay Sydeman's works in 1966.
In the United States, the OPEN Government Data Act of 14 January 2019 defines machine-readable data as "data in a format that can be easily processed by a computer without human intervention while ensuring no semantic meaning is lost." The law directs U.S. federal agencies to publish public data in such a manner, ensuring that "any public data asset of the agency is machine-readable".
Machine-readable data may be classified into two groups: human-readable data that is marked up so that it can also be read by machines (e.g. microformats, RDFa, HTML), and data file formats intended principally for processing by machines (CSV, RDF, XML, JSON). These formats are only machine readable if the data contained within them is formally structured; exporting a CSV file from a badly structured spreadsheet does not meet the definition.
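As a small illustration of the second group, the following Python sketch parses a formally structured CSV snippet into named records and re-serializes them as JSON; the field names and values are invented for the example:

import csv, io, json

# A formally structured CSV export: header row, then one record per line.
raw = "id,name,launched\n1,Voyager 1,1977\n2,Voyager 2,1977\n"

records = list(csv.DictReader(io.StringIO(raw)))
print(records[0]["name"])            # fields are addressable by name: -> Voyager 1

# The same structured records serialize losslessly into another machine-readable format.
print(json.dumps(records, indent=2))

A spreadsheet exported without a consistent header row would defeat the DictReader step, which is the point made above about badly structured exports.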
Machine readable is not synonymous with digitally accessible. A digitally accessible document may be online, making it easier for humans to access via computers, but its content is much harder to extract, transform, and process via computer programming logic if it is not machine-readable.
Extensible Markup Language (XML) is designed to be both human- and machine-readable, and Extensible Stylesheet Language Transformation (XSLT) is used to improve the presentation of the data for human readability. For example, XSLT can be used to automatically render XML in Portable Document Format (PDF). Machine-readable data can be automatically transformed for human-readability but, generally speaking, the reverse is not true.
For purposes of implementation of the Government Performance and Results Act (GPRA) Modernization Act, the Office of Management and Budget (OMB) defines "machine readable format" as follows: "Format in a standard computer language (not English text) that can be read automatically by a web browser or computer system. (e.g.; xml). Traditional word processing documents and portable document format (PDF) files are easily read by humans but typically are difficult for machines to interpret. Other formats such as extensible markup language (XML), (JSON), or spreadsheets with header columns that can be exported as comma separated values (CSV) are machine readable formats. As HTML is a structural markup language, discreetly labeling parts of the document, computers are able to gather document components to assemble tables of contents, outlines, literature search bibliographies, etc. It is possible to make traditional word processing documents and other formats machine readable but the documents must include enhanced structural elements."
Media
Examples of machine-readable media include magnetic media such as magnetic disks, cards, tapes, and drums, punched cards and paper tapes, optical discs, barcodes and magnetic ink characters.
Common machine-readable technologies include magnetic recording, processing waveforms, and barcodes. Optical character recognition (OCR) can be used to enable machines to read information available to humans. Any information retrievable by any form of energy can be machine-readable.
Examples include:
Acoustics
Chemical
Photochemical
Electrical
Semiconductor used in volatile RAM microchips
Floating-gate transistor used in non-volatile memory cards
Radio transmission
Magnetic storage
Mechanical
Punched card
Paper tape
Music roll
Music box cylinder or disk
Grooves (See also: Audio Data)
Phonograph cylinder
Gramophone record
DictaBelt (groove on plastic belt)
Capacitance Electronic Disc
Optics
Optical storage
Thermodynamic
Applications
Documents
Catalogs
Dictionaries
Passports
See also
Paper data storage
Symmetric Phase Recording
Open data
Linked data
Human-readable medium and data
Semantic Web
Machine-readable postal marking
References
Computing terminology
Storage media
Optical character recognition | Machine-readable medium and data | [
"Technology"
] | 963 | [
"Computing terminology"
] |
41,342 | https://en.wikipedia.org/wiki/Magneto-ionic%20double%20refraction | In telecommunications, magneto-ionic double refraction is the combined effect of the Earth's magnetic field and atmospheric ionization, whereby a linearly polarized wave entering the ionosphere is split into two components called the ordinary wave and extraordinary wave.
The component waves follow different paths, experience different attenuations, have different phase velocities, and, in general, are elliptically polarized in opposite senses. The critical frequency of the extraordinary wave is always greater than the critical frequency of the ordinary wave (i.e. the wave in the absence of the magnetic field) by approximately half the gyrofrequency. The amplitude of the extraordinary wave depends on the Earth's magnetic field at that particular point. Besides splitting the wave, the phenomenon also affects the polarization of the incident radio wave, because electrons that previously moved only in simple harmonic motion are driven into spiral motion by the magnetic field.
References
Radio frequency propagation | Magneto-ionic double refraction | [
"Physics"
] | 197 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Waves"
] |
41,343 | https://en.wikipedia.org/wiki/Magneto-optic%20effect | A magneto-optic effect is any one of a number of phenomena in which an electromagnetic wave propagates through a medium that has been altered by the presence of a quasistatic magnetic field. In such a medium, which is also called gyrotropic or gyromagnetic, left- and right-rotating elliptical polarizations can propagate at different speeds, leading to a number of important phenomena. When light is transmitted through a layer of magneto-optic material, the result is called the Faraday effect: the plane of polarization can be rotated, forming a Faraday rotator. The results of reflection from a magneto-optic material are known as the magneto-optic Kerr effect (not to be confused with the nonlinear Kerr effect).
In general, magneto-optic effects break time reversal symmetry locally (i.e. when only the propagation of light, and not the source of the magnetic field, is considered) as well as Lorentz reciprocity, which is a necessary condition to construct devices such as optical isolators (through which light passes in one direction but not the other).
Two gyrotropic materials with reversed rotation directions of the two principal polarizations, corresponding to complex-conjugate ε tensors for lossless media, are called optical isomers.
Gyrotropic permittivity
In particular, in a magneto-optic material the presence of a magnetic field (either externally applied or because the material itself is ferromagnetic) can cause a change in the permittivity tensor ε of the material. The ε becomes anisotropic, a 3×3 matrix, with complex off-diagonal components, depending on the frequency ω of incident light. If the absorption losses can be neglected, ε is a Hermitian matrix. The resulting principal axes become complex as well, corresponding to elliptically-polarized light where left- and right-rotating polarizations can travel at different speeds (analogous to birefringence).
More specifically, for the case where absorption losses can be neglected, the most general form of Hermitian ε is:

\varepsilon = \begin{pmatrix} \varepsilon'_{xx} & \varepsilon'_{xy} + i g_z & \varepsilon'_{xz} - i g_y \\ \varepsilon'_{xy} - i g_z & \varepsilon'_{yy} & \varepsilon'_{yz} + i g_x \\ \varepsilon'_{xz} + i g_y & \varepsilon'_{yz} - i g_x & \varepsilon'_{zz} \end{pmatrix}

or equivalently the relationship between the displacement field D and the electric field E is:

\mathbf{D} = \varepsilon \mathbf{E} = \varepsilon' \mathbf{E} + i \mathbf{E} \times \mathbf{g}

where \varepsilon' is a real symmetric matrix and \mathbf{g} = (g_x, g_y, g_z) is a real pseudovector called the gyration vector, whose magnitude is generally small compared to the eigenvalues of \varepsilon'. The direction of g is called the axis of gyration of the material. To first order, g is proportional to the applied magnetic field:

\mathbf{g} = \chi^{(m)} \mathbf{H}

where \chi^{(m)} is the magneto-optical susceptibility (a scalar in isotropic media, but more generally a tensor). If this susceptibility itself depends upon the electric field, one can obtain a nonlinear optical effect of magneto-optical parametric generation (somewhat analogous to a Pockels effect whose strength is controlled by the applied magnetic field).

The simplest case to analyze is the one in which g is a principal axis (eigenvector) of \varepsilon', and the other two eigenvalues of \varepsilon' are identical. Then, if we let g lie in the z direction for simplicity, the ε tensor simplifies to the form:

\varepsilon = \begin{pmatrix} \varepsilon_1 & i g_z & 0 \\ -i g_z & \varepsilon_1 & 0 \\ 0 & 0 & \varepsilon_2 \end{pmatrix}

Most commonly, one considers light propagating in the z direction (parallel to g). In this case the solutions are elliptically polarized electromagnetic waves with phase velocities 1/\sqrt{\mu(\varepsilon_1 \pm g_z)} (where μ is the magnetic permeability). This difference in phase velocities leads to the Faraday effect.
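For a rough numerical feel, the following Python sketch evaluates the two refractive indices n± = sqrt(μr(ε₁ ± gz)) and the resulting rotation of a linear polarization per unit length, θ ≈ ω(n₊ − n₋)/2c; the material values are assumed for illustration only, and the sign convention varies between references:

import numpy as np

# Illustrative (assumed) material values, not measured data:
eps1, gz, mu_r = 5.5, 1e-6, 1.0         # diagonal permittivity, gyration component, rel. permeability
c = 2.998e8                             # speed of light, m/s
wavelength = 1.55e-6                    # vacuum wavelength, m
omega = 2 * np.pi * c / wavelength

# Refractive indices of the two circular polarizations for propagation along g:
n_plus = np.sqrt(mu_r * (eps1 + gz))
n_minus = np.sqrt(mu_r * (eps1 - gz))

# The difference in phase velocity rotates a linear polarization (Faraday effect).
theta_per_m = omega * (n_plus - n_minus) / (2 * c)   # rad per metre (sign convention varies)
print(f"delta n = {n_plus - n_minus:.3e}, rotation = {np.degrees(theta_per_m):.1f} deg/m")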
For light propagating purely perpendicular to the axis of gyration, the properties are known as the Cotton-Mouton effect and used for a circulator.
Kerr rotation and Kerr ellipticity
Kerr rotation and Kerr ellipticity are changes in the polarization of incident light which comes in contact with a gyromagnetic material. Kerr rotation is a rotation in the plane of polarization of transmitted light, and Kerr ellipticity is the ratio of the major to minor axis of the ellipse traced out by elliptically polarized light on the plane through which it propagates. Changes in the orientation of polarized incident light can be quantified using these two properties.
According to classical physics, the speed of light varies with the permittivity of a material:

v_p = \frac{1}{\sqrt{\varepsilon \mu}}

where v_p is the velocity of light through the material, \varepsilon is the material permittivity, and \mu is the material permeability. Because the permittivity is anisotropic, polarized light of different orientations will travel at different speeds.
This can be better understood if we consider a wave of light that is circularly polarized (seen to the right). If this wave interacts with a material at which the horizontal component (green sinusoid) travels at a different speed than the vertical component (blue sinusoid), the two components will fall out of the 90 degree phase difference (required for circular polarization) changing the Kerr ellipticity.
A change in Kerr rotation is most easily recognized in linearly polarized light, which can be separated into two circularly polarized components: Left-handed circular polarized (LHCP) light and right-handed circular polarized (RHCP) light. The anisotropy of the magneto-optic material permittivity causes a difference in the speed of LHCP and RHCP light, which will cause a change in the angle of polarized light. Materials that exhibit this property are known as birefringent.
From this rotation, we can calculate the difference in orthogonal velocity components, find the anisotropic permittivity, find the gyration vector, and calculate the applied magnetic field \mathbf{H}.
See also
Zeeman effect
QMR effect
Magneto-optic Kerr effect
Faraday effect
Voigt Effect
Photoelectric effect
References
Federal Standard 1037C and from MIL-STD-188
Broad band magneto-optical spectroscopy
Optical phenomena
Electric and magnetic fields in matter
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,226 | [
"Physical phenomena",
"Electric and magnetic fields in matter",
"Materials science",
"Optical phenomena",
"Condensed matter physics",
"Magneto-optic effects"
] |
41,345 | https://en.wikipedia.org/wiki/Main%20lobe | In a radio antennas, the main lobe or main beam is the region of the radiation pattern containing the highest power or exhibiting the greatest field strength.
The radiation pattern of most antennas shows a pattern of "lobes" at various directions, where the radiated signal strength reaches a local maximum, separated by "nulls", at which the radiation falls to zero. In a directional antenna in which the objective is to emit the radio waves in one direction, the lobe in that direction is designed to have higher field strength than the others, so on a graph of the radiation pattern it appears biggest; this is the main lobe. The other lobes are called "sidelobes", and usually represent unwanted radiation in undesired directions. The sidelobe in the opposite direction from the main lobe is called the "backlobe".
The radiation pattern referred to above is usually the horizontal radiation pattern, which is plotted as a function of azimuth about the antenna, although the vertical radiation pattern may also have a main lobe. The beamwidth of the antenna is the width of the main lobe, usually specified by the half power beam width (HPBW), the angle encompassed between the points on the side of the lobe where the power has fallen to half (-3 dB) of its maximum value.
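As an illustration, the following Python sketch estimates the HPBW of an assumed example pattern (a uniform line source four wavelengths long, whose field pattern is the familiar sinc shape) by finding the angular span of the main lobe above the -3 dB level:

import numpy as np

# Assumed pattern: uniform line source, L = 4 wavelengths, field = sinc((L/lambda) sin(theta)).
L_over_lambda = 4.0
theta = np.radians(np.linspace(-30.0, 30.0, 6001))
field = np.sinc(L_over_lambda * np.sin(theta))        # np.sinc(x) = sin(pi x)/(pi x)
power_db = 20 * np.log10(np.abs(field) + 1e-12)

# HPBW: angular span of the main lobe above the -3 dB (half-power) level.
# Sidelobes of this pattern peak below -13 dB, so the mask captures only the main lobe.
main_lobe = theta[power_db >= -3.0]
hpbw = np.degrees(main_lobe.max() - main_lobe.min())
print(f"estimated HPBW = {hpbw:.1f} degrees")

For this assumed aperture the result is close to the standard 0.886 λ/L approximation, about 12.7 degrees.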
The concepts of main lobe and sidelobes also apply to acoustics and optics, and are used to describe the radiation pattern of optical systems like telescopes, and acoustic transducers like microphones and loudspeakers.
See also
Antenna boresight
Antenna gain
Beam radio navigation
Beam waveguide antenna
Beamwidth
Side lobe
Software defined antenna
Antennas | Main lobe | [
"Engineering"
] | 337 | [
"Antennas",
"Telecommunications engineering"
] |
41,347 | https://en.wikipedia.org/wiki/Maintainability | Maintainability is the ease of maintaining or providing maintenance for a functioning product or service. Depending on the field, it can have slightly different meanings.
Usage in different fields
Engineering
In engineering, maintainability is the ease with which a product can be maintained to:
correct defects or their cause,
repair or replace faulty or worn-out components without having to replace still-working parts,
prevent unexpected working conditions,
maximize a product's useful life,
maximize efficiency, reliability, and safety,
meet new requirements,
make future maintenance easier, or
cope with a changing environment.
In some cases, maintainability involves a system of continuous improvement - learning from the past to improve the ability to maintain systems, or improve the reliability of systems based on maintenance experience.
Telecommunication
In telecommunications and several other engineering fields, the term maintainability has the following meanings:
A characteristic of design and installation, expressed as the probability that an item will be retained in or restored to a specified condition within a given period of time, when the maintenance is performed by prescribed procedures and resources.
The ease with which maintenance of a functional unit can be performed by prescribed requirements.
Software
In software engineering, these activities are known as software maintenance (cf. ISO/IEC 9126). Closely related concepts in the software engineering domain are evolvability, modifiability, technical debt, and code smells.
The maintainability index is calculated with certain formulae from lines-of-code measures, McCabe measures and Halstead complexity measures.
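As a sketch, one widely quoted three-factor variant of the index (the Oman-Hagemeister form; many tools use rescaled versions of it) can be computed as follows, with invented inputs:

import math

def maintainability_index(halstead_volume, cyclomatic_complexity, loc):
    # Oman/Hagemeister-style three-factor formula; other tools rescale it to 0-100.
    return (171.0
            - 5.2 * math.log(halstead_volume)
            - 0.23 * cyclomatic_complexity
            - 16.2 * math.log(loc))

# Hypothetical module: Halstead volume 1200, cyclomatic complexity 14, 450 lines of code.
print(round(maintainability_index(1200, 14, 450), 1))   # lower values suggest harder maintenance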
The measurement and tracking of maintainability are intended to help reduce or reverse a system's tendency toward "code entropy" or degraded integrity, and to indicate when it becomes cheaper and/or less risky to rewrite the code than it is to change it.
See also
List of system quality attributes
Maintenance (technical)
Supportability (disambiguation)
Serviceability (disambiguation)
Software sizing
RAMS
Throwaway society
References
Further reading
External links
Calculation, Field testing and history of Maintainability Index (MI) (with references)
Measurement of Maintainability Index (MI)
Telecommunications engineering
Design for X
Maintenance
Software quality
Broad-concept articles | Maintainability | [
"Engineering"
] | 430 | [
"Telecommunications engineering",
"Design for X",
"Maintenance",
"Mechanical engineering",
"Electrical engineering",
"Design"
] |
41,349 | https://en.wikipedia.org/wiki/Managed%20object | In network, a managed object is an abstract representation of network resources that are managed. With "representation", we mean not only the actual device that is managed, but also the device driver, that communicates with the device. An example of a printer as a managed object is the window that shows information about the printer, such as the location, printer status, printing progress, paper choice, and printing margins.
The database, where all managed objects are stored, is called Management Information Base. In contrast with a CI, a managed object is "dynamic" and communicates with other network resources that are managed.
A managed object may represent a physical entity, a network service, or an abstraction of a resource that exists independently of its use in management.
In telecommunications management, managed object can refer to a resource within the telecommunications environment that may be managed through the use of operation, administration, maintenance, and provisioning (OAMP) application protocols.
References
Network management | Managed object | [
"Engineering"
] | 195 | [
"Computer networks engineering",
"Network management"
] |
41,354 | https://en.wikipedia.org/wiki/Master%20frequency%20generator | A master frequency generator or master electronic oscillator, in frequency-division multiplexing (FDM), is a piece of equipment used to provide system end-to-end carrier frequency synchronization and frequency accuracy of tones.
The following types of oscillators are used in the Defense Communications System FDM systems:
Type 1 - A master carrier oscillator as an integral part of the multiplexer set.
Type 2 - A submaster oscillator equipment or slave oscillator equipment as an integral part of the multiplexer set.
Type 3 - An external master oscillator equipment that has extremely accurate and stable characteristics.
References
Synchronization | Master frequency generator | [
"Engineering"
] | 144 | [
"Telecommunications engineering",
"Synchronization"
] |
41,355 | https://en.wikipedia.org/wiki/Master%20station | In telecommunications, a master station is a station that controls or coordinates the activities of other stations in the system.
Examples:
In a data network, the control station may designate a master station to ensure data transfer to one or more slave stations. Such a master station controls one or more data links of the data communications network at any given instant. The assignment of master status to a given station is temporary and is controlled by the control station according to the procedures set forth in the operational protocol. Master status is normally conferred upon a station so that it may transmit a message, but a station need not have a message to send to be designated the master station.
In navigation systems using precise time dissemination, the master station is a station that has the clock that is used to synchronize the clocks of subordinate stations.
In basic mode link control, the master station is a data station that has accepted an invitation to ensure a data transfer to one or more slave stations. At a given instant, there can be only one master station on a data link.
Operation modes
In data transmission, a master station can be set to not wait for a reply from a slave station after transmitting each message or transmission block. In this case the station is said to be in "continuous operation".
References
Telecommunications systems | Master station | [
"Technology"
] | 259 | [
"Telecommunications systems"
] |
41,359 | https://en.wikipedia.org/wiki/Maximum%20usable%20frequency | In radio transmission, maximum usable frequency (MUF) is the highest radio frequency that can be used for transmission between two points on Earth by reflection from the ionosphere (skywave or skip) at a specified time, independent of transmitter power. This index is especially useful for shortwave transmissions.
In shortwave radio communication, a major mode of long distance propagation is for the radio waves to reflect off the ionized layers of the atmosphere and return diagonally back to Earth. In this way radio waves can travel beyond the horizon, around the curve of the Earth. However the refractive index of the ionosphere decreases with increasing frequency, so there is an upper limit to the frequency which can be used. Above this frequency the radio waves are not reflected by the ionosphere but are transmitted through it into space.
The ionization of the atmosphere varies with time of day and season as well as with solar conditions, so the upper frequency limit for skywave communication varies throughout the day. MUF is a median frequency, defined as the highest frequency at which skywave communication is possible 50% of the days in a month, as opposed to the lowest usable high frequency (LUF) which is the frequency at which communication is possible 90% of the days, and the frequency of optimum transmission (FOT).
Typically the MUF is a predicted number. Given the maximum observed frequency (MOF) for a mode on each day of the month at a given hour, the MUF is the highest frequency for which an ionospheric communications path is predicted on 50% of the days of the month.
On a given day, communications may or may not succeed at the MUF. Commonly, the optimal operating frequency for a given path is estimated at 80 to 90% of the MUF. As a rule of thumb the MUF is approximately 3 times the critical frequency:

\text{MUF} = \frac{f_c}{\cos \theta} = f_c \sec \theta

where the critical frequency f_c is the highest frequency reflected for a signal propagating directly upward and θ is the angle of incidence.
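A minimal Python sketch of this secant law, using a hypothetical 5 MHz critical frequency and an incidence angle chosen so that sec θ ≈ 3, reproduces the rule of thumb:

import math

def muf(critical_frequency_mhz, incidence_deg):
    # Secant law: MUF = f_c / cos(theta), with theta measured from the vertical.
    return critical_frequency_mhz / math.cos(math.radians(incidence_deg))

# Hypothetical conditions: f_c = 5 MHz at 70.5 degrees incidence (sec theta ~ 3).
print(f"MUF = {muf(5.0, 70.5):.1f} MHz")     # -> about 15 MHz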
Optimum Working Frequency
Another important parameter used in skywave propagation is the optimum working frequency (OWF), an estimate of the best frequency to use for a given critical frequency and angle of incidence. It is chosen somewhat below the MUF (commonly 80 to 90% of it, as noted above) to avoid the irregularities of the ionosphere.
See also
DX communication
E-layer
E-skip
F-layer
Lowest usable high frequency
MW DX
Near vertical incidence skywave
Radio propagation
Skip distance
TV-FM DX
Sources
External links
MUF Basics
Radio frequency propagation | Maximum usable frequency | [
"Physics"
] | 502 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Waves"
] |
41,363 | https://en.wikipedia.org/wiki/Mean%20time%20between%20outages | In a system the mean time between outages (MTBO) is the mean time between equipment failures that result in loss of system continuity or unacceptable degradation.
The MTBO is calculated by the equation

\text{MTBO} = \frac{\text{MTBF}}{1 - \text{FFAS}}

where MTBF is the nonredundant mean time between failures and FFAS is the fraction of failures for which the failed equipment is automatically bypassed.
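For example (the values here are invented), a system with a nonredundant MTBF of 1,000 hours in which 90% of failures are automatically bypassed has an MTBO of 10,000 hours:

def mtbo(mtbf_hours, ffas):
    # MTBO = MTBF / (1 - FFAS)
    return mtbf_hours / (1.0 - ffas)

print(mtbo(1000.0, 0.9))   # -> 10000.0 hours between service-affecting outages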
References
Reliability engineering | Mean time between outages | [
"Engineering"
] | 75 | [
"Systems engineering",
"Reliability engineering"
] |
41,365 | https://en.wikipedia.org/wiki/Mediation%20function | In telecommunications network management, a mediation function is a function that routes or acts on information passing between network elements and network operations.
Examples of mediation functions are communications control, protocol conversion, data handling, communications of primitives, processing that includes decision-making, and data storage.
Mediation functions can be shared among network elements, mediation devices, and network operation centers.
Sources
Network management | Mediation function | [
"Engineering"
] | 77 | [
"Computer networks engineering",
"Network management"
] |
41,372 | https://en.wikipedia.org/wiki/Sprague%E2%80%93Grundy%20theorem | In combinatorial game theory, the Sprague–Grundy theorem states that every impartial game under the normal play convention is equivalent to a one-heap game of nim, or to an infinite generalization of nim. It can therefore be represented as a natural number, the size of the heap in its equivalent game of nim, as an ordinal number in the infinite generalization, or alternatively as a nimber, the value of that one-heap game in an algebraic system whose addition operation combines multiple heaps to form a single equivalent heap in nim.
The Grundy value or nim-value of any impartial game is the unique nimber that the game is equivalent to. In the case of a game whose positions are indexed by the natural numbers (like nim itself, which is indexed by its heap sizes), the sequence of nimbers for successive positions of the game is called the nim-sequence of the game.
The Sprague–Grundy theorem and its proof encapsulate the main results of a theory discovered independently by R. P. Sprague (1936) and P. M. Grundy (1939).
Definitions
For the purposes of the Sprague–Grundy theorem, a game is a two-player sequential game of perfect information satisfying the ending condition (all games come to an end: there are no infinite lines of play) and the normal play condition (a player who cannot move loses).
At any given point in the game, a player's position is the set of moves they are allowed to make. As an example, we can define the zero game to be the two-player game where neither player has any legal moves. Referring to the two players as A (for Alice) and B (for Bob), we would denote their positions as A = {} and B = {}, since the set of moves each player can make is empty.
An impartial game is one in which at any given point in the game, each player is allowed exactly the same set of moves. Normal-play nim is an example of an impartial game. In nim, there are one or more heaps of objects, and two players (we'll call them Alice and Bob), take turns choosing a heap and removing 1 or more objects from it. The winner is the player who removes the final object from the final heap. The game is impartial because for any given configuration of pile sizes, the moves Alice can make on her turn are exactly the same moves Bob would be allowed to make if it were his turn. In contrast, a game such as checkers is not impartial because, supposing Alice were playing red and Bob were playing black, for any given arrangement of pieces on the board, if it were Alice's turn, she would only be allowed to move the red pieces, and if it were Bob's turn, he would only be allowed to move the black pieces.
Note that any configuration of an impartial game can therefore be written as a single position, because the moves will be the same no matter whose turn it is. For example, the position of the zero game can simply be written {}, because if it's Alice's turn, she has no moves to make, and if it's Bob's turn, he has no moves to make either.
A move can be associated with the position it leaves the next player in.
Doing so allows positions to be defined recursively. For example, consider the following game of Nim played by Alice and Bob.
Example Nim Game
At step 6 of the game (when all of the heaps are empty) the position is {}, because Bob has no valid moves to make. We name this position *0.
At step 5, Alice had exactly one option: to remove one object from heap C, leaving Bob with no moves. Since her move leaves Bob in position *0, her position is written {*0}. We name this position *1.
At step 4, Bob had two options: remove one from B or remove one from C. Note, however, that it didn't really matter which heap Bob removed the object from: Either way, Alice would be left with exactly one object in exactly one pile. So, using our recursive definition, Bob really only has one move: *1. Thus, Bob's position is {*1}.
At step 3, Alice had 3 options: remove two from C, remove one from C, or remove one from B. Removing two from C leaves Bob in position *1. Removing one from C leaves Bob with two piles, each of size one, i.e., position {*1}, as described in step 4. However, removing 1 from B would leave Bob with two objects in a single pile. His moves would then be *0 and *1, so her move would result in the position {*0, *1}. We call this position *2. Alice's position is then the set of all her moves: {*1, {*1}, *2}.
Following the same recursive logic, at step 2, Bob's position is
Finally, at step 1, Alice's position is
Nimbers
The special names *0, *1, and *2 referenced in our example game are called nimbers. In general, the nimber *n corresponds to the position in a game of nim where there are exactly n objects in exactly one heap.
Formally, nimbers are defined inductively as follows: *0 is {}, *1 is {*0}, and for all n ≥ 0, *(n + 1) = *n ∪ {*n}.
While the word nimber comes from the game nim, nimbers can be used to describe the positions of any finite, impartial game, and in fact, the Sprague–Grundy theorem states that every instance of a finite, impartial game can be associated with a single nimber.
Combining Games
Two games can be combined by adding their positions together.
For example, consider another game of nim with heaps A′, B′, and C′.
Example Game 2
We can combine it with our first example to get a combined game with six heaps: A, B, C, A′, B′, and C′:
Combined Game
To differentiate between the two games, for the first example game, we'll label its starting position S, and color it blue.
For the second example game, we'll label the starting position S′ and color it red.
To compute the starting position of the combined game, remember that a player can either make a move in the first game, leaving the second game untouched, or make a move in the second game, leaving the first game untouched. So the combined game's starting position is S + S′.
The explicit formula for adding positions is: S + S′ = {S + s′ : s′ ∈ S′} ∪ {s + S′ : s ∈ S}, which means that addition is both commutative and associative.
Equivalence
Positions in impartial games fall into two outcome classes: either the next player (the one whose turn it is) wins (an N-position), or the previous player wins (a P-position). So, for example, *0 is a P-position, while *1 is an N-position.
Two positions G and G′ are equivalent if, no matter what position X is added to them, they are always in the same outcome class.
Formally,
G ≈ G′ if and only if, for every position X, the position G + X is in the same outcome class as G′ + X.
To use our running examples, notice that in both the first and second games above, we can show that on every turn, Alice has a move that forces Bob into a P-position. Thus, both S and S′ are N-positions. (Notice that in the combined game, Bob is the player with the N-positions. In fact, S + S′ is a P-position, which as we will see in Lemma 2, means S ≈ S′.)
First Lemma
As an intermediate step to proving the main theorem, we show that, for every position G and every P-position A, the equivalence G ≈ A + G holds. By the above definition of equivalence, this amounts to showing that G + X and A + G + X share an outcome class for all X.
Suppose that G + X is a P-position. Then the previous player has a winning strategy for A + G + X: respond to moves in A according to their winning strategy for A (which exists by virtue of A being a P-position), and respond to moves in G + X according to their winning strategy for G + X (which exists for the analogous reason). So A + G + X must also be a P-position.
On the other hand, if G + X is an N-position, then A + G + X is also an N-position, because the next player has a winning strategy: choose a P-position from among the G + X options, and we conclude from the previous paragraph that adding A to that position is still a P-position. Thus, in this case, A + G + X must be an N-position, just like G + X.
As these are the only two cases, the lemma holds.
Second Lemma
As a further step, we show that G ≈ G′ if and only if G + G′ is a P-position.
In the forward direction, suppose that G ≈ G′. Applying the definition of equivalence with X = G′, we find that G + G′ (which is equal to G′ + G by commutativity of addition) is in the same outcome class as G′ + G′. But G′ + G′ must be a P-position: for every move made in one copy of G′, the previous player can respond with the same move in the other copy, and so always make the last move.
In the reverse direction, since A = G + G′ is a P-position by hypothesis, it follows from the first lemma, G ≈ A + G, that G ≈ (G + G′) + G. Similarly, since B = G + G is also a P-position, it follows from the first lemma in the form G′ ≈ B + G′ that G′ ≈ (G + G) + G′. By associativity and commutativity, the right-hand sides of these results are equal. Furthermore, ≈ is an equivalence relation because equality is an equivalence relation on outcome classes. Via the transitivity of ≈, we can conclude that G ≈ G′.
Proof of the Sprague–Grundy theorem
We prove that all positions are equivalent to a nimber by structural induction. The more specific result, that the given game's initial position must be equivalent to a nimber, shows that the game is itself equivalent to a nimber.
Consider a position G = {G1, G2, ..., Gk}. By the induction hypothesis, all of the options are equivalent to nimbers, say Gi ≈ *ni. So let G′ = {*n1, *n2, ..., *nk}. We will show that G ≈ *m, where m is the mex (minimum exclusion) of the numbers n1, n2, ..., nk, that is, the smallest non-negative integer not equal to some ni.
The first thing we need to note is that G ≈ G′, by way of the second lemma. If k is zero, the claim is trivially true. Otherwise, consider G + G′. If the next player makes a move to Gi in G, then the previous player can move to *ni in G′, and conversely if the next player makes a move in G′. After this, the position is a P-position by the lemma's forward implication. Therefore, G + G′ is a P-position, and, citing the lemma's reverse implication, G ≈ G′.
Now let us show that G′ + *m is a P-position, which, using the second lemma once again, means that G′ ≈ *m. We do so by giving an explicit strategy for the previous player.
Suppose that G′ and *m are empty. Then G′ + *m is the null set, clearly a P-position.
Or consider the case that the next player moves in the component *m to the option *m′ where m′ < m. Because m was the minimum excluded number, the previous player can move in G′ to *m′. And, as shown before, any position plus itself is a P-position.
Finally, suppose instead that the next player moves in the component G′ to the option *ni. If ni < m then the previous player moves in *m to *ni; otherwise, if ni > m, the previous player moves in G′ to *m; in either case the result is a position plus itself. (It is not possible that ni = m because m was defined to be different from all the ni.)
In summary, we have G ≈ G′ and G′ ≈ *m. By transitivity, we conclude that G ≈ *m, as desired.
Development
If G is a position of an impartial game, the unique integer m such that G ≈ *m is called its Grundy value, or Grundy number, and the function that assigns this value to each such position is called the Sprague–Grundy function. R. P. Sprague and P. M. Grundy independently gave an explicit definition of this function, not based on any concept of equivalence to nim positions, and showed that it had the following properties:
The Grundy value of a single nim pile of size n (i.e. of the position *n) is n;
A position is a loss for the next player to move (i.e. a P-position) if and only if its Grundy value is zero; and
The Grundy value of the sum of a finite set of positions is just the nim-sum of the Grundy values of its summands.
It follows straightforwardly from these results that if a position G has a Grundy value of m, then G + H has the same Grundy value as *m + H, and therefore belongs to the same outcome class, for any position H. Thus, although Sprague and Grundy never explicitly stated the theorem described in this article, it follows directly from their results and is credited to them.
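The Sprague–Grundy function is easy to compute mechanically. The following Python sketch does so for an assumed example game (a subtraction game in which a move removes 1, 2, or 4 objects from a single heap; the rule set is invented for illustration) and then combines heaps by nim-sum, as the properties above license:

from functools import lru_cache

def mex(values):
    # Minimum excludant: the smallest non-negative integer not in the set.
    m = 0
    while m in values:
        m += 1
    return m

MOVES = (1, 2, 4)   # assumed legal subtractions for the example game

@lru_cache(maxsize=None)
def grundy(n):
    # Grundy value of a heap of size n: mex of the Grundy values of its options.
    return mex({grundy(n - k) for k in MOVES if k <= n})

# The Grundy value of a sum of heaps is the XOR (nim-sum) of the components,
# and the sum is a previous-player win exactly when that nim-sum is zero.
heaps = (3, 5, 6)
values = [grundy(h) for h in heaps]
nim_sum = values[0] ^ values[1] ^ values[2]
print(values, "nim-sum =", nim_sum)   # -> [0, 2, 0] nim-sum = 2 (a next-player win)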
These results have subsequently been developed into the field of combinatorial game theory, notably by Richard Guy, Elwyn Berlekamp, John Horton Conway and others, where they are now encapsulated in the Sprague–Grundy theorem and its proof in the form described here. The field is presented in the books Winning Ways for your Mathematical Plays and On Numbers and Games.
See also
Genus theory
Indistinguishability quotient
References
External links
Grundy's game at cut-the-knot
Easily readable, introductory account from the UCLA Math Department
The Game of Nim at sputsoft.com
Combinatorial game theory
Theorems in discrete mathematics | Sprague–Grundy theorem | [
"Mathematics"
] | 2,724 | [
"Mathematical theorems",
"Discrete mathematics",
"Recreational mathematics",
"Theorems in discrete mathematics",
"Combinatorics",
"Game theory",
"Mathematical problems",
"Combinatorial game theory"
] |
41,385 | https://en.wikipedia.org/wiki/Multipath%20propagation | In radio communication, multipath is the propagation phenomenon that results in radio signals reaching the receiving antenna by two or more paths. Causes of multipath include atmospheric ducting, ionospheric reflection and refraction, and reflection from water bodies and terrestrial objects such as mountains and buildings. When the same signal is received over more than one path, it can create interference and phase shifting of the signal. Destructive interference causes fading; this may cause a radio signal to become too weak in certain areas to be received adequately. For this reason, this effect is also known as multipath interference or multipath distortion.
Where the magnitudes of the signals arriving by the various paths have a distribution known as the Rayleigh distribution, this is known as Rayleigh fading. Where one component (often, but not necessarily, a line of sight component) dominates, a Rician distribution provides a more accurate model, and this is known as Rician fading. Where two components dominate, the behavior is best modeled with the two-wave with diffuse power (TWDP) distribution. All of these descriptions are commonly used and accepted and lead to useful results. However, they are generic statistical models that abstract away the underlying physics.
Interference
Multipath interference is a phenomenon in the physics of waves whereby a wave from a source travels to a detector via two or more paths and the two (or more) components of the wave interfere constructively or destructively. Multipath interference is a common cause of "ghosting" in analog television broadcasts and of fading of radio waves.
The condition necessary is that the components of the wave remain coherent throughout the whole extent of their travel.
The interference will arise owing to the two (or more) components of the wave having, in general, travelled a different length (as measured by optical path length – geometric length and refraction (differing optical speed)), and thus arriving at the detector out of phase with each other.
The signal due to indirect paths interferes with the required signal in amplitude as well as phase which is called multipath fading.
Examples
In analog facsimile and television transmission, multipath causes jitter and ghosting, seen as a faded duplicate image to the right of the main image. Ghosts occur when transmissions bounce off a mountain or other large object, while also arriving at the antenna by a shorter, direct route, with the receiver picking up two signals separated by a delay.
In radar processing, multipath causes ghost targets to appear, deceiving the radar receiver. These ghosts are particularly bothersome since they move and behave like the normal targets (which they echo), and so the receiver has difficulty in isolating the correct target echo. These problems can be minimized by incorporating a ground map of the radar's surroundings and eliminating all echoes which appear to originate below the ground or above a certain height (altitude).
In digital radio communications (such as GSM) multipath can cause errors and affect the quality of communications. The errors are due to intersymbol interference (ISI). Equalizers are often used to correct the ISI. Alternatively, techniques such as orthogonal frequency division modulation and rake receivers may be used.
In a Global Positioning System receiver, multipath effects can cause a stationary receiver's output to indicate as if it were randomly jumping about or creeping. When the unit is moving the jumping or creeping may be hidden, but it still degrades the displayed accuracy of location and speed.
In wired media
Multipath propagation is similar in power line communication and in telephone local loops. In either case, impedance mismatch causes signal reflection.
High-speed power line communication systems usually employ multi-carrier modulations (such as OFDM or wavelet OFDM) to avoid the intersymbol interference that multipath propagation would cause. The ITU-T G.hn standard provides a way to create a high-speed (up to 1 gigabit per second) local area network using existing home wiring (power lines, phone lines, and coaxial cables). G.hn uses OFDM with a cyclic prefix to avoid ISI. Because multipath propagation behaves differently in each kind of wire, G.hn uses different OFDM parameters (OFDM symbol duration, guard interval duration) for each media.
DSL modems also use orthogonal frequency-division multiplexing to communicate with their DSLAM despite multipath. In this case the reflections may be caused by mixed wire gauges, but those from bridge taps are usually more intense and complex. Where OFDM training is unsatisfactory, bridge taps may be removed.
Mathematical modeling
The mathematical model of the multipath can be presented using the method of the impulse response used for studying linear systems.
Suppose you want to transmit a single, ideal Dirac pulse of electromagnetic power at time 0, i.e.

x(t) = \delta(t)

At the receiver, due to the presence of the multiple electromagnetic paths, more than one pulse will be received, and each one of them will arrive at different times. In fact, since the electromagnetic signals travel at the speed of light, and since every path has a geometrical length possibly different from that of the other ones, there are different air travelling times (consider that, in free space, the light takes 3 μs to cross a 1 km span). Thus, the received signal will be expressed by

y(t) = \sum_{n=0}^{N-1} \rho_n e^{j\phi_n} \delta(t - \tau_n)

where N is the number of received impulses (equivalent to the number of electromagnetic paths, and possibly very large), \tau_n is the time delay of the generic nth impulse, and \rho_n e^{j\phi_n} represents the complex amplitude (i.e., magnitude and phase) of the generic received pulse. As a consequence, y(t) also represents the impulse response function h(t) of the equivalent multipath model.
More in general, in presence of time variation of the geometrical reflection conditions, this impulse response is time varying, and as such we have

h(t; \tau) = \sum_{n=0}^{N-1} \rho_n(t) e^{j\phi_n(t)} \delta(\tau - \tau_n(t))

Very often, just one parameter is used to denote the severity of multipath conditions: it is called the multipath time, T_M, and it is defined as the time delay existing between the first and the last received impulses:

T_M = \tau_{N-1} - \tau_0
In practical conditions and measurement, the multipath time is computed by considering as last impulse the first one which allows receiving a determined amount of the total transmitted power (scaled by the atmospheric and propagation losses), e.g. 99%.
Keeping our aim at linear, time invariant systems, we can also characterize the multipath phenomenon by the channel transfer function H(f), which is defined as the continuous time Fourier transform of the impulse response h(t):

H(f) = \int_{-\infty}^{+\infty} h(t) e^{-j 2\pi f t}\, dt = \sum_{n=0}^{N-1} \rho_n e^{j\phi_n} e^{-j 2\pi f \tau_n}
where the last right-hand term of the previous equation is easily obtained by remembering that the Fourier transform of a Dirac pulse is a complex exponential function, an eigenfunction of every linear system.
The obtained channel transfer characteristic has a typical appearance of a sequence of peaks and valleys (also called notches); it can be shown that, on average, the distance (in Hz) between two consecutive valleys (or two consecutive peaks), is roughly inversely proportional to the multipath time. The so-called coherence bandwidth is thus defined as
For example, with a multipath time of 3 μs (corresponding to a 1 km of added on-air travel for the last received impulse), there is a coherence bandwidth of about 330 kHz.
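The following Python sketch evaluates H(f) for an assumed two-ray channel with a 3 μs echo (the path amplitudes are purely illustrative), showing the notches recurring about every 1/T_M ≈ 333 kHz as computed above:

import numpy as np

# Assumed two-ray channel: a direct path plus one echo 3 microseconds later,
# h(t) = delta(t) + 0.6 delta(t - 3 us).
delays = np.array([0.0, 3e-6])          # path delays, seconds
amps = np.array([1.0, 0.6])             # path amplitudes

f = np.linspace(0.0, 1e6, 2001)         # evaluate H(f) from 0 to 1 MHz
H = (amps[:, None] * np.exp(-2j * np.pi * delays[:, None] * f[None, :])).sum(axis=0)
mag_db = 20 * np.log10(np.abs(H))

# The valleys (notches) recur about every 1/T_M = 1/(3 us), i.e. roughly 333 kHz.
mask = f < 3e5                          # search for the first notch below 300 kHz
first_notch = f[mask][np.argmin(mag_db[mask])]
print(f"first notch near {first_notch/1e3:.0f} kHz; "
      f"predicted at {1/(2*3e-6)/1e3:.0f} kHz, repeating every {1/3e-6/1e3:.0f} kHz")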
See also
Choke ring antenna, a design that can reject extraneous reflection signals
Diversity schemes
Doppler spread
Fading
Lloyd's mirror
Olivia MFSK
Orthogonal frequency-division multiplexing
Rician fading
Signal flow
Two-ray ground-reflection model
Ultra wide-band
Rake receiver
References
MIL-STD-188
Federal Standard 1037C
Broadcast engineering
Radio frequency propagation | Multipath propagation | [
"Physics",
"Engineering"
] | 1,543 | [
"Broadcast engineering",
"Physical phenomena",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Waves",
"Electronic engineering"
] |
41,387 | https://en.wikipedia.org/wiki/Multiple%20homing | In telecommunications, the term multiple homing has the following meanings:
In telephone systems, the connection of a terminal facility so that it can be served by one or several switching centers. Multiple homing may use a single directory number.
In telephone systems, the connection of a terminal facility to more than one switching center by separate access lines. Separate directory numbers are applicable to each switching center accessed.
In military, such as Missiles and loitering munitions, it is ability of a single weapon system or projectile to select, focus and simultaneously engage multiple targets.
See also
Homing pigeon
Infrared homing
Missile guidance
References
Communication circuits
Military terminology | Multiple homing | [
"Engineering"
] | 127 | [
"Telecommunications engineering",
"Communication circuits"
] |
41,388 | https://en.wikipedia.org/wiki/Multiplex%20baseband | In telecommunications, the term multiplex baseband has the following meanings:
In frequency-division multiplexing, the frequency band occupied by the aggregate of the signals in the line interconnecting the multiplexing and radio or line equipment.
In frequency division multiplexed carrier systems, at the input to any stage of frequency translation, the frequency band occupied.
For example, the output of a group multiplexer consists of a band of frequencies from 60 kHz to 108 kHz. This is the group-level baseband that results from combining 12 voice-frequency input channels, having a bandwidth of 4 kHz each, including guard bands. In turn, 5 groups are multiplexed into a super group having a baseband of 312 kHz to 552 kHz. This baseband, however, does not represent a group-level baseband. Ten super groups are in turn multiplexed into one master group, the output of which is a baseband that may be used to modulate a microwave-frequency carrier.
References
Multiplexing
Signal processing | Multiplex baseband | [
"Technology",
"Engineering"
] | 206 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
41,389 | https://en.wikipedia.org/wiki/Multiplexing | In telecommunications and computer networking, multiplexing (sometimes contracted to muxing) is a method by which multiple analog or digital signals are combined into one signal over a shared medium. The aim is to share a scarce resource a physical transmission medium. For example, in telecommunications, several telephone calls may be carried using one wire. Multiplexing originated in telegraphy in the 1870s, and is now widely applied in communications. In telephony, George Owen Squier is credited with the development of telephone carrier multiplexing in 1910.
The multiplexed signal is transmitted over a communication channel such as a cable. The multiplexing divides the capacity of the communication channel into several logical channels, one for each message signal or data stream to be transferred. A reverse process, known as demultiplexing, extracts the original channels on the receiver end.
A device that performs the multiplexing is called a multiplexer (MUX), and a device that performs the reverse process is called a demultiplexer (DEMUX or DMX).
Inverse multiplexing (IMUX) has the opposite aim as multiplexing, namely to break one data stream into several streams, transfer them simultaneously over several communication channels, and recreate the original data stream.
In computing, I/O multiplexing can also be used to refer to the concept of processing multiple input/output events from a single event loop, with system calls like poll and select (Unix).
Types
Multiple variable bit rate digital bit streams may be transferred efficiently over a single fixed bandwidth channel by means of statistical multiplexing. This is an asynchronous mode time-domain multiplexing which is a form of time-division multiplexing.
Digital bit streams can be transferred over an analog channel by means of code-division multiplexing techniques such as frequency-hopping spread spectrum (FHSS) and direct-sequence spread spectrum (DSSS).
In wireless communications, multiplexing can also be accomplished through alternating polarization (horizontal/vertical or clockwise/counterclockwise) on each adjacent channel and satellite, or through phased multi-antenna array combined with a multiple-input multiple-output communications (MIMO) scheme.
Space-division multiplexing
In wired communication, space-division multiplexing, also known as space-division multiple access (SDMA) is the use of separate point-to-point electrical conductors for each transmitted channel. Examples include an analog stereo audio cable, with one pair of wires for the left channel and another for the right channel, and a multi-pair telephone cable, a switched star network such as a telephone access network, a switched Ethernet network, and a mesh network.
In wireless communication, space-division multiplexing is achieved with multiple antenna elements forming a phased array antenna. Examples are multiple-input and multiple-output (MIMO), single-input and multiple-output (SIMO) and multiple-input and single-output (MISO) multiplexing. An IEEE 802.11g wireless router with k antennas makes it in principle possible to communicate with k multiplexed channels, each with a peak bit rate of 54 Mbit/s, thus increasing the total peak bit rate by the factor k. Different antennas would give different multi-path propagation (echo) signatures, making it possible for digital signal processing techniques to separate different signals from each other. These techniques may also be utilized for space diversity (improved robustness to fading) or beamforming (improved selectivity) rather than multiplexing.
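As a rough numerical sketch of why k antennas can multiply throughput, the following Python code averages the standard log-det capacity of an i.i.d. Rayleigh-fading k × k channel with equal power per transmit antenna; the channel model and SNR are assumptions of the example, not properties of 802.11g:

import numpy as np

rng = np.random.default_rng(0)

def mimo_capacity(k, snr_linear, trials=200):
    # Average Shannon capacity (bit/s/Hz) of a k x k i.i.d. Rayleigh channel with
    # equal power per transmit antenna: C = log2 det(I + (SNR/k) H H*).
    total = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))) / np.sqrt(2)
        total += np.log2(np.linalg.det(np.eye(k) + (snr_linear / k) * (H @ H.conj().T)).real)
    return total / trials

for k in (1, 2, 4):
    print(k, "antenna(s):", round(mimo_capacity(k, snr_linear=100.0), 1), "bit/s/Hz")

The printed capacities grow roughly in proportion to k, which is the multiplexing gain described above.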
Frequency-division multiplexing
Frequency-division multiplexing (FDM) is inherently an analog technology. FDM combines several signals into one medium by sending each signal in a distinct frequency range over that single medium. In FDM, the signals are electrical signals.
One of the most common applications for FDM is traditional radio and television broadcasting from terrestrial, mobile or satellite stations, or cable television. Only one cable reaches a customer's residential area, but the service provider can send multiple television channels or signals simultaneously over that cable to all subscribers without interference. Receivers must tune to the appropriate frequency (channel) to access the desired signal.
A variant technology, called wavelength-division multiplexing (WDM), is used in optical communications.
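A small sketch of the FDM principle (the frequencies and filter cutoff are arbitrary illustrative values): two message tones are shifted into distinct frequency slots on one medium, and one of them is recovered by mixing down and low-pass filtering.

```python
import numpy as np

fs = 10_000
t = np.arange(0, 1, 1 / fs)
m1 = np.cos(2 * np.pi * 50 * t)               # message 1 (50 Hz)
m2 = np.cos(2 * np.pi * 80 * t)               # message 2 (80 Hz)

composite = m1 * np.cos(2 * np.pi * 1000 * t) \
          + m2 * np.cos(2 * np.pi * 3000 * t)  # two frequency slots

# Receiver tuned to the 1 kHz slot: mix down, then a crude low-pass
# implemented by zeroing FFT bins above 200 Hz.
mixed = composite * 2 * np.cos(2 * np.pi * 1000 * t)
spectrum = np.fft.rfft(mixed)
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
spectrum[freqs > 200] = 0
recovered = np.fft.irfft(spectrum, n=len(mixed))
print(np.allclose(recovered, m1, atol=0.01))   # True: message 1 is back
```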
Time-division multiplexing
Time-division multiplexing (TDM) is a digital (or in rare cases, analog) technology that uses time, instead of space or frequency, to separate the different data streams. TDM involves sequencing groups of a few bits or bytes from each individual input stream, one after the other, and in such a way that they can be associated with the appropriate receiver. If done sufficiently quickly, the receiving devices will not detect that some of the circuit time was used to serve another logical communication path.
Consider an application requiring four terminals at an airport to reach a central computer. Each terminal communicates at 2400 baud, so rather than acquire four individual circuits to carry such low-speed transmissions, the airline has installed a pair of multiplexers. A pair of 9600 baud modems and one dedicated analog communications circuit from the airport ticket desk back to the airline data center are also installed. Some web proxy servers (e.g. polipo) use TDM in HTTP pipelining of multiple HTTP transactions onto the same TCP/IP connection.
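A minimal sketch of byte-interleaved TDM with invented four-terminal data: one timeslot is taken from each input in fixed rotation, so the receiver can demultiplex purely by position.

```python
def tdm_mux(streams):
    # streams: equal-length byte strings, one per input channel
    return bytes(b for slot in zip(*streams) for b in slot)

def tdm_demux(frame, n_channels):
    return [frame[i::n_channels] for i in range(n_channels)]

terminals = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]   # four low-speed inputs
line = tdm_mux(terminals)          # b'ABCDABCDABCDABCD' on the shared line
assert tdm_demux(line, 4) == terminals
```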
Carrier-sense multiple access and multidrop communication methods are similar to time-division multiplexing in that multiple data streams are separated by time on the same medium, but because the signals have separate origins instead of being combined into a single signal, are best viewed as channel access methods, rather than a form of multiplexing.
TDM is a legacy multiplexing technology that still provides the backbone of most national fixed-line telephony networks in Europe, supplying the 2 Mbit/s voice and signaling ports on narrow-band telephone exchanges such as the DMS100. Each E1 or 2 Mbit/s TDM port provides either 30 or 31 speech timeslots in the case of CCITT7 signaling systems, and 30 voice channels for customer-connected Q931, DASS2, DPNSS, V5 and CASS signaling systems.
Polarization-division multiplexing
Polarization-division multiplexing uses the polarization of electromagnetic radiation to separate orthogonal channels. It is in practical use in both radio and optical communications, particularly in 100 Gbit/s per channel fiber-optic transmission systems.
Differential Cross-Polarized Wireless Communications is a method for polarized antenna transmission utilizing a differential technique.
Orbital angular momentum multiplexing
Orbital angular momentum multiplexing is a relatively new and experimental technique for multiplexing multiple channels of signals carried using electromagnetic radiation over a single path. It can potentially be used in addition to other physical multiplexing methods to greatly expand the transmission capacity of such systems. It is still in its early research phase, with small-scale laboratory demonstrations of bandwidths of up to 2.5 Tbit/s over a single light path. This is a controversial subject in the academic community, with many claiming it is not a new method of multiplexing, but rather a special case of space-division multiplexing.
Code-division multiplexing
Code-division multiplexing (CDM), code-division multiple access (CDMA) or spread spectrum is a class of techniques where several channels simultaneously share the same frequency spectrum, and this spectral bandwidth is much higher than the bit rate or symbol rate. One form is frequency hopping; another is direct-sequence spread spectrum. In the latter case, each channel transmits its bits as a coded channel-specific sequence of pulses called chips. The number of chips per bit, or chips per symbol, is the spreading factor. This coded transmission is typically accomplished by transmitting a unique time-dependent series of short pulses, which are placed within chip times within the larger bit time. All channels, each with a different code, can be transmitted on the same fiber or radio channel or other medium, and asynchronously demultiplexed. Advantages over conventional techniques are that variable bandwidth is possible (just as in statistical multiplexing), that the wide bandwidth allows operation at a poor signal-to-noise ratio in accordance with the Shannon–Hartley theorem, and that multi-path propagation in wireless communication can be combated by rake receivers.
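A toy sketch of direct-sequence spreading and despreading. The two chip sequences below are short, hand-picked orthogonal codes for illustration; real systems use much longer pseudo-noise sequences.

```python
import numpy as np

chips_a = np.array([+1, -1, +1, +1, -1, -1, +1, -1])  # code, channel A
chips_b = np.array([+1, +1, -1, +1, -1, +1, -1, -1])  # code, channel B

def spread(bits, chips):
    # each data bit becomes a full chip sequence (spreading factor 8)
    return np.concatenate([b * chips for b in bits])

bits_a = np.array([+1, -1, +1])
bits_b = np.array([-1, -1, +1])
medium = spread(bits_a, chips_a) + spread(bits_b, chips_b)  # shared medium

def despread(signal, chips):
    frames = signal.reshape(-1, len(chips))
    return np.sign(frames @ chips)        # correlate over each bit time

print(despread(medium, chips_a))          # recovers bits_a
print(despread(medium, chips_b))          # recovers bits_b
```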
A significant application of CDMA is the Global Positioning System (GPS).
Multiple access method
A multiplexing technique may be further extended into a multiple access method or channel access method, for example, TDM into time-division multiple access (TDMA) and statistical multiplexing into carrier-sense multiple access (CSMA). A multiple-access method makes it possible for several transmitters connected to the same physical medium to share their capacity.
Multiplexing is provided by the physical layer of the OSI model, while multiple access also involves a media access control protocol, which is part of the data link layer.
The Transport layer in the OSI model, as well as TCP/IP model, provides statistical multiplexing of several application layer data flows to/from the same computer.
Other widely used multiple access techniques are time-division multiple access (TDMA) and frequency-division multiple access (FDMA).
Code-division multiplexing techniques are used as an access technology, namely code-division multiple access (CDMA), in the Universal Mobile Telecommunications System (UMTS) standard for the third-generation (3G) mobile communication identified by the ITU.
Application areas
Telegraphy
The earliest communication technology using electrical wires, and therefore sharing an interest in the economies afforded by multiplexing, was the electric telegraph. Early experiments allowed two separate messages to travel in opposite directions simultaneously, first using an electric battery at both ends, then at only one end.
Émile Baudot developed a time-multiplexing system of multiple Hughes machines in the 1870s. In 1874, the quadruplex telegraph developed by Thomas Edison transmitted two messages in each direction simultaneously, for a total of four messages transiting the same wire at the same time. Several researchers were investigating acoustic telegraphy, a frequency-division multiplexing technique, which led to the invention of the telephone.
Telephony
In telephony, a customer's telephone line now typically ends at the remote concentrator box, where it is multiplexed along with other telephone lines for that neighborhood or other similar area. The multiplexed signal is then carried to the central switching office on significantly fewer wires and over much greater distances than a customer's line can practically span. This is likewise true for digital subscriber lines (DSL).
Fiber in the loop (FITL) is a common method of multiplexing, which uses optical fiber as the backbone. It not only connects POTS phone lines with the rest of the PSTN, but also replaces DSL by connecting directly to Ethernet wired into the home. Asynchronous Transfer Mode is often the communications protocol used.
Cable TV has long carried multiplexed television channels, and late in the 20th century began offering the same services as telephone companies. IPTV also depends on multiplexing.
Video processing
In video editing and processing systems, multiplexing refers to the process of interleaving audio and video into one coherent data stream.
In digital video, such a transport stream is normally a feature of a container format which may include metadata and other information, such as subtitles. The audio and video streams may have variable bit rate. Software that produces such a transport stream and/or container is commonly called a multiplexer or muxer. A demuxer is software that extracts or otherwise makes available for separate processing the components of such a stream or container.
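At its core, a muxer interleaves timestamped packets into one time-ordered stream. The sketch below uses hypothetical, heavily simplified packet tuples rather than any real container format:

```python
import heapq

video = [("video", t, f"frame@{t}ms") for t in (0, 40, 80)]
audio = [("audio", t, f"chunk@{t}ms") for t in (0, 20, 40, 60, 80)]

# Mux: merge the two packet streams in timestamp order.
muxed = list(heapq.merge(video, audio, key=lambda pkt: pkt[1]))

# Demux: pick one elementary stream back out of the container.
demuxed_video = [pkt for pkt in muxed if pkt[0] == "video"]
assert demuxed_video == video
```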
Digital broadcasting
In digital television systems, several variable bit-rate data streams are multiplexed together to a fixed bit-rate transport stream by means of statistical multiplexing. This makes it possible to transfer several video and audio channels simultaneously over the same frequency channel, together with various services. This may involve several standard-definition television (SDTV) programs (particularly on DVB-T, DVB-S2, ISDB and ATSC-C), or one HDTV program, possibly with a single SDTV companion channel, over one 6 to 8 MHz-wide TV channel. The device that accomplishes this is called a statistical multiplexer. In several of these systems, the multiplexing results in an MPEG transport stream. The newer DVB standards DVB-S2 and DVB-T2 have the capacity to carry several HDTV channels in one multiplex.
In digital radio, a multiplex (also known as an ensemble) is a number of radio stations that are grouped together. A multiplex is a stream of digital information that includes audio and other data.
On communications satellites which carry broadcast television networks and radio networks, this is known as multiple channel per carrier or MCPC. Where multiplexing is not practical (such as where there are different sources using a single transponder), single channel per carrier mode is used.
Analog broadcasting
In FM broadcasting and other analog radio media, multiplexing is a term commonly given to the process of adding subcarriers to the audio signal before it enters the transmitter, where modulation occurs. (In fact, the stereo multiplex signal can be generated using time-division multiplexing, by switching between the two (left channel and right channel) input signals at an ultrasonic rate (the subcarrier), and then filtering out the higher harmonics.) Multiplexing in this sense is sometimes known as MPX, which in turn is also an old term for stereophonic FM, seen on stereo systems since the 1960s.
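A sketch of composing the stereo multiplex baseband in the frequency-domain view described above; the tones, duration, and amplitude scaling are illustrative only.

```python
# FM stereo MPX: mono sum (L+R) at baseband, a 19 kHz pilot, and the
# difference (L-R) on a 38 kHz suppressed carrier.
import numpy as np

fs = 192_000                        # high enough for the ~53 kHz MPX band
t = np.arange(0, 0.01, 1 / fs)
left = np.sin(2 * np.pi * 440 * t)
right = np.sin(2 * np.pi * 660 * t)

pilot = 0.1 * np.sin(2 * np.pi * 19_000 * t)
mpx = (0.45 * (left + right)
       + pilot
       + 0.45 * (left - right) * np.sin(2 * np.pi * 38_000 * t))
# 'mpx' is the composite signal that then frequency-modulates the carrier.
```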
Other meanings
In spectroscopy, the term is used to indicate that the experiment is performed with a mixture of frequencies at once and that their respective responses are unraveled afterwards using the Fourier transform principle.
In computer programming, it may refer to using a single in-memory resource (such as a file handle) to handle multiple external resources (such as on-disk files).
Some electrical multiplexing techniques do not require a physical "multiplexer" device; instead, they describe a "keyboard matrix" or "Charlieplexing" design style:
Multiplexing may refer to the design of a multiplexed display (non-multiplexed displays are immune to break-up).
Multiplexing may refer to the design of a "switch matrix" (non-multiplexed buttons are immune to "phantom keys" and also immune to "phantom key blocking").
In high-throughput DNA sequencing, the term is used to indicate that some artificial sequences (often called barcodes or indexes) have been added to link given sequence reads to a given sample, and thus allow for the sequencing of multiple samples in the same reaction.
In sociolinguistics, multiplexity is used to describe the number of distinct connections between individuals who are part of a social network. A multiplex network is one in which members share a number of ties stemming from more than one social context, such as workmates, neighbors, or relatives.
See also
Add-drop multiplexer
Channel bank
Multiplexed display
Optical add-drop multiplexer
Orthogonal frequency-division multiplexing (OFDM) (which is a modulation method)
Statistical multiplexing
References
External links
Digital television
Digital radio
Broadcast engineering
Physical layer protocols
Television terminology | Multiplexing | [
"Engineering"
] | 3,193 | [
"Broadcast engineering",
"Electronic engineering"
] |
41,392 | https://en.wikipedia.org/wiki/Narrowband%20modem | In telecommunications, a narrowband modem is a modem whose modulated output signal has an essential frequency spectrum that is limited to that which can be wholly contained within, and faithfully transmitted through, a voice channel with a nominal 4 kHz bandwidth.
Note: High frequency (HF) modems are limited to operation over a voice channel with a nominal 3 kHz bandwidth.
References
Modems | Narrowband modem | [
"Technology"
] | 80 | [
"Computing stubs",
"Computer network stubs"
] |
41,396 | https://en.wikipedia.org/wiki/National%20Information%20Infrastructure | The National Information Infrastructure (NII) was the product of the High Performance Computing Act of 1991. It was a telecommunications policy buzzword, which was popularized during the Clinton Administration under the leadership of Vice-President Al Gore.
It proposed to build communications networks, interactive services, interoperable computer hardware and software, computers, databases, and consumer electronics in order to make vast amounts of information available to both the public and private sectors. NII was to have included more than just the physical facilities (more than the cameras, scanners, keyboards, telephones, fax machines, computers, switches, compact disks, video and audio tape, cable, wire, satellites, optical fiber transmission lines, microwave nets, televisions, monitors, and printers) used to transmit, store, process, and display voice, data, and images; it was also to encompass a wide range of interactive functions, user-tailored services, and multimedia databases that were interconnected in a technology-neutral manner that would favor no one industry over any other.
See also
Al Gore and information technology
High Performance Computing Act of 1991
Information Superhighway
History of the Internet
NII Award
References
History of the Internet
Internet terminology
Telecommunications in the United States | National Information Infrastructure | [
"Technology"
] | 298 | [
"Computing terminology",
"Internet terminology"
] |
41,402 | https://en.wikipedia.org/wiki/Neper | The neper (symbol: Np) is a logarithmic unit for ratios of measurements of physical field and power quantities, such as gain and loss of electronic signals. The unit's name is derived from the name of John Napier, the inventor of logarithms. As is the case for the decibel and bel, the neper is a unit defined in the international standard ISO 80000. It is not part of the International System of Units (SI), but is accepted for use alongside the SI.
Definition
Like the decibel, the neper is a unit in a logarithmic scale. While the bel uses the decadic (base-10) logarithm to compute ratios, the neper uses the natural logarithm, based on Euler's number (e). The level of a ratio of two signal amplitudes or root-power quantities, with the unit neper, is given by

L = ln(x1/x2) Np,

where x1 and x2 are the signal amplitudes, and ln is the natural logarithm. The level of a ratio of two power quantities, with the unit neper, is given by

L = (1/2) ln(P1/P2) Np,

where P1 and P2 are the signal powers.

In the International System of Quantities, the neper is defined as 1 Np = 1.
Units
The neper is defined in terms of ratios of field quantities, also called root-power quantities (for example, voltage or current amplitudes in electrical circuits, or pressure in acoustics), whereas the decibel was originally defined in terms of power ratios. A power ratio 10 log r dB is equivalent to a field-quantity ratio 20 log r dB, since power in a linear system is proportional to the square (Joule's laws) of the amplitude. Hence the decibel and the neper have a fixed ratio to each other:

1 Np = (20/ln 10) dB ≈ 8.6859 dB

and

1 dB = (ln 10)/20 Np ≈ 0.1151 Np.

The (voltage) level is

L = 20 log10(x1/x2) dB = ln(x1/x2) Np.
Like the decibel, the neper is a dimensionless unit. The International Telecommunication Union (ITU) recognizes both units. Only the neper is coherent with the SI.
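A small helper (not from any standard library) making the fixed ratio between the two units concrete:

```python
import math

def amplitude_ratio_to_np(r):
    return math.log(r)                   # level in nepers

def amplitude_ratio_to_db(r):
    return 20 * math.log10(r)            # level in decibels

r = 10.0
print(amplitude_ratio_to_np(r) / amplitude_ratio_to_db(r))  # Np per dB
print(math.log(10) / 20)                 # 1 dB in Np, ~0.1151: same value
```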
Applications
The neper is a natural linear unit of relative difference, meaning in nepers (logarithmic units) relative differences add rather than multiply. This property is shared with logarithmic units in other bases, such as the bel.
The derived units decineper (1 dNp = 0.1 neper) and centineper (1 cNp = 0.01 neper) are also used. The centineper for root-power quantities corresponds to a log point or log percentage.
See also
Nat (unit)
Nepers per metre
References
Works
Further reading
External links
What's a neper?
Conversion of level gain and loss: neper, decibel, and bel
Calculating transmission line losses
Units of level | Neper | [
"Physics",
"Mathematics"
] | 566 | [
"Physical quantities",
"Units of level",
"Quantity",
"Logarithmic scales of measurement",
"Units of measurement"
] |
41,403 | https://en.wikipedia.org/wiki/Net%20gain%20%28telecommunications%29 | In telecommunications, net gain is the overall gain of a transmission circuit. Net gain is measured by applying a test signal at an appropriate power level at the input port of a circuit and measuring the power delivered at the output port. The net gain in dB is calculated by taking 10 times the common logarithm of the ratio of the output power to the input power.
The net gain expressed in dB may be positive or negative. If the net gain expressed in dB is negative, it is also called the net loss. If the net gain is expressed as a ratio, and the ratio is less than unity, a net loss is indicated.
The test signal must be chosen so that its power level is within the usual operating range of the circuit being tested.
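A one-function sketch of the calculation described above; negative results indicate a net loss:

```python
import math

def net_gain_db(p_in_watts, p_out_watts):
    # 10 times the common logarithm of output power over input power
    return 10 * math.log10(p_out_watts / p_in_watts)

print(net_gain_db(0.001, 0.002))   # ~ +3.01 dB, a gain
print(net_gain_db(0.002, 0.001))   # ~ -3.01 dB, a net loss
```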
References
Electrical parameters
Telecommunication theory | Net gain (telecommunications) | [
"Engineering"
] | 158 | [
"Electrical engineering",
"Electrical parameters"
] |
41,404 | https://en.wikipedia.org/wiki/Net%20operation | A radio net is three or more radio stations communicating with each other on a common channel or frequency. A net is essentially a moderated conference call conducted over two-way radio, typically in half-duplex operating conditions. The use of half-duplex operation requires a very particular set of operating procedures to be followed in order to avoid inefficiencies and chaos.
Nets operate either on schedule or continuously (continuous watch). Nets operating on schedule handle traffic only at definite, prearranged times and in accordance with a prearranged schedule of intercommunication. Nets operating continuously are prepared to handle traffic at any time; they maintain operators on duty at all stations in the net at all times. When practicable, messages relating to schedules will be transmitted by a means of signal communication other than radio.
Net operations:
allow participants to conduct ordered conferences among participants who usually have common information needs or related functions to perform
are characterized by adherence to standard formats and procedures, and
are responsive to a common supervisory station, called the "net control station", which permits access to the net and maintains net operational discipline.
Net manager
A net manager is the person who supervises the creation and operation of a net over multiple sessions. This person will specify the format, date, time, participants, and the net control script. The net manager will also choose the Net Control Station for each net, and may occasionally take on that function, especially in smaller organizations.
Net Control Station
Radio nets are like conference calls in that both have a moderator who initiates the group communication, who ensures all participants follow the standard procedures, and who determines and directs when each other station may talk. The moderator in a radio net is called the Net Control Station, formally abbreviated NCS, and has the following duties:
Establishes the net and closes the net;
Directs Net activities, such as passing traffic, to maintain optimum efficiency;
Chooses net frequency, maintains circuit discipline and frequency accuracy;
Maintains a net log and records participation in the net and movement of messages; (always knows who is on and off net)
Appoints one or more Alternate Net Control Stations (ANCS);
Determines whether and when to conduct network continuity checks;
Determines when full procedure and full call signs may enhance communications;
Subject to Net Manager guidance, directs a net to be directed or free.
The Net Control Station will, for each net, appoint at least one Alternate Net Control Station, formally abbreviated ANCS (abbreviated NC2 in WWII procedures), who has the following duties:
Assists the NCS to maintain optimum efficiency;
Assumes NCS duties in event that the NCS develops station problems;
Assumes NCS duties for a portion of the net, as directed or as needed;
Serves as a resource for the NCS; echoes transmissions of the NCS if, and only if, directed to do so by the NCS;
Maintains a duplicate net log
Structure of the net
Nets can be described as always having a net opening and a net closing, with a roll call normally following the net opening, itself followed by regular net business, which may include announcements, official business, and message passing. Military nets will follow a very abbreviated and opaque version of the structure outlined below, but will still have the critical elements of opening, roll call, late check-ins, and closing.
A net should always operate on the same principle as the inverted pyramid used in journalism—the most important communications always come first, followed by content in ever lower levels of priority.
Net opening
Identification of the NCS
Announcement of the regular date, time, and frequency of the net
Purpose of the net
Roll call
A call for stations to check in, oftentimes from a roster of regular stations
A call for late check-ins (stations on the roster who did not respond to the first check-in period)
A call for guest stations to check in
Net business
Optional conversion to a free net
Net closing
Each net will typically have a main purpose, which varies according to the organization conducting the net and is addressed during the net business phase. For amateur radio nets, it is typically to allow stations to discuss their recent operating activities (stations worked, antennas built, etc.) or to swap equipment. For Military Auxiliary Radio System and National Traffic System nets, net business will involve mainly the passing of formal messages, known as radiograms.
Two modes of net operation
Directed Net
A net in which no station other than the net control station can communicate with any other station, except for the transmission of urgent messages, without first obtaining the permission of the net control station.
Free net
A net in which any station may communicate with any other station in the same net without first obtaining permission from the net control station to do so.
Net-control procedure words
The Allied Communication Publication ACP 125(G) has the most complete set of procedure words used in radio nets:
Types of radio nets
Maritime mobile nets serve the needs of seagoing vessels.
Civil Air Patrol nets
The Civil Air Patrol defines a different set of nets:
Amateur radio nets
The International Amateur Radio Union defines six different types of nets in its IARU Emergency Telecommunications Guide:
Other Amateur radio net types
U.S. Military radio nets
The U.S. Army Field Manual FM 6-02.53, Tactical Radio Operations, defines the following types of radio nets:
Maritime radio nets
When boats or ships are in distress, they will operate a maritime broadcast communications net to communicate among the vessel in distress and all the other vessels, aircraft, and shore stations assisting in the distress response.
See also
Amateur radio net
References
External links
Telecommunications | Net operation | [
"Technology"
] | 1,136 | [
"Information and communications technology",
"Telecommunications"
] |
41,406 | https://en.wikipedia.org/wiki/Network%20architecture | Network architecture is the design of a computer network. It is a framework for the specification of a network's physical components and their functional organization and configuration, its operational principles and procedures, as well as communication protocols used.
In telecommunications, the specification of a network architecture may also include a detailed description of products and services delivered via a communications network, as well as detailed rate and billing structures under which services are compensated.
The network architecture of the Internet is predominantly expressed by its use of the Internet protocol suite, rather than a specific model for interconnecting networks or nodes in the network, or the usage of specific types of hardware links.
OSI model
The Open Systems Interconnection model (OSI model) defines and codifies the concept of layered network architecture. Abstraction layers are used to subdivide a communications system further into smaller manageable parts. A layer is a collection of similar functions that provide services to the layer above it and receives services from the layer below it. On each layer, an instance provides services to the instances at the layer above and requests services from the layer below.
Distributed computing
In distributed computing, the network architecture often describes the structure and classification of a distributed application architecture, as the participating nodes in a distributed application are often referred to as a network. For example, the applications architecture of the public switched telephone network (PSTN) has been termed the Intelligent Network. There are a number of specific classifications but all lie on a continuum between the dumb network (e.g. the Internet) and the intelligent network (e.g. the PSTN).
A popular example of such usage of the term in distributed applications, as well as permanent virtual circuits, is the organization of nodes in peer-to-peer (P2P) services and networks. P2P networks usually implement overlay networks running over an underlying physical or logical network. These overlay networks may implement certain organizational structures of the nodes according to several distinct models, the network architecture of the system.
See also
Network topology
Spawning networks
References
External links
Computer Network Architects at the US Department of Labor
Telecommunications engineering | Network architecture | [
"Engineering"
] | 428 | [
"Network architecture",
"Electrical engineering",
"Telecommunications engineering",
"Computer networks engineering"
] |
41,407 | https://en.wikipedia.org/wiki/Network%20engineering | Network engineering may refer to:
Internetworking, service requirements for switched telephone networks
Computer network engineering, the design and management of computer networks
Telecommunications Engineering, developing telecommunications network topologies
Broadcasting, spreading messages to a dispersed audience electronically
See also
Network administrator
Telecommunications engineering | Network engineering | [
"Engineering"
] | 51 | [
"Electrical engineering",
"Telecommunications engineering"
] |
41,410 | https://en.wikipedia.org/wiki/Network%20management | Network management is the process of administering and managing computer networks. Services provided by this discipline include fault analysis, performance management, provisioning of networks and maintaining quality of service. Network management software is used by network administrators to help perform these functions.
Technologies
A number of access methods exist to support network and network device management, allowing IT professionals to monitor network components within a large network area. Access methods include the SNMP, command-line interface (CLI), custom XML, CMIP, Windows Management Instrumentation (WMI), Transaction Language 1 (TL1), CORBA, NETCONF, RESTCONF and the Java Management Extensions (JMX).
Schemas include the Structure of Management Information (SMI), YANG, WBEM, the Common Information Model (CIM Schema), and MTOSI amongst others.
Value
Effective network management can provide positive strategic impacts. For example, in the case of developing an infrastructure, providing participants with some interactive space allows them to collaborate with each other, thereby promoting overall benefits. At the same time, the value of network management to the strategic network is also affected by the relationship between participants. Active participation, interaction and collaboration can make them more trusting of each other and enhance cohesion.
See also
Application service management
Business service management
Capacity management
Comparison of network monitoring systems
FCAPS
In-network management
Integrated business planning
Network and service management taxonomy
Network monitoring
Network traffic measurement
Out-of-band management
Systems management
Website monitoring
References
External links
Network Monitoring and Management Tools
Software-Defined Network Management | Network management | [
"Engineering"
] | 316 | [
"Computer networks engineering",
"Network management"
] |
41,411 | https://en.wikipedia.org/wiki/Network%20operating%20system | A network operating system (NOS) is a specialized operating system for a network device such as a router, switch or firewall.
Historically operating systems with networking capabilities were described as network operating systems, because they allowed personal computers (PCs) to participate in computer networks and shared file and printer access within a local area network (LAN). This description of operating systems is now largely historical, as common operating systems include a network stack to support a client–server model.
History
Packet switching networks were developed to share hardware resources, such as a mainframe computer, a printer or a large and expensive hard disk.
Historically, a network operating system was an operating system for a computer which implemented network capabilities. Operating systems with a network stack allowed personal computers to participate in a client-server architecture in which a server enables multiple clients to share resources, such as printers.
These limited client/server networks were gradually replaced by peer-to-peer networks, which used networking capabilities to share resources and files located on a variety of computers of all sizes. A peer-to-peer network sets all connected computers equal; they all share the same abilities to use resources available on the network.
Today, distributed computing and groupware applications have become the norm. Computer operating systems include a networking stack as a matter of course. During the 1980s the need to integrate dissimilar computers with network capabilities grew and the number of networked devices grew rapidly. Partly because it allowed for multi-vendor interoperability, and could route packets globally rather than being restricted to a single building, the Internet protocol suite became almost universally adopted in network architectures. Thereafter, computer operating systems and the firmware of network devices tended to support Internet protocols.
Network device operating systems
Network operating systems can be embedded in a router or hardware firewall that operates the functions in the network layer (layer 3). Notable network operating systems include:
Proprietary network operating systems
Cisco IOS, a family of network operating systems used on Cisco Systems routers and network switches. (Earlier switches ran the Catalyst operating system, or CatOS)
RouterOS by MikroTik
ZyNOS, used in network devices made by ZyXEL
FreeBSD, NetBSD, OpenBSD, and Linux-based operating systems
Cisco NX-OS, IOS XE, and IOS XR; families of network operating systems used across various Cisco Systems device including the Cisco Nexus and Cisco ASR platforms
Junos OS; a network operating system that runs on Juniper Networks platforms
Cumulus Linux distribution, which uses the full TCP/IP stack of Linux
DD-WRT, a Linux kernel-based firmware for wireless routers and access points as well as low-cost networking device platforms such as the Linksys WRT54G
Dell Networking Operating System; DNOS9 is NetBSD based, while OS10 uses the Linux kernel
Extensible Operating System runs on switches from Arista and uses an unmodified Linux kernel
ExtremeXOS (EXOS), used in network devices made by Extreme Networks
FTOS (Force10 Operating System), the firmware family used on Force10 Ethernet switches
ONOS, an open source SDN operating system (hosted by Linux Foundation) for communications service providers that is designed for scalability, high performance and high availability.
OpenBSD, an open source operating system which includes its own implementations of BGP, RPKI, OSPF, MPLS, VXLAN, and other IETF standardized networking protocols, as well as firewall (PF) and load-balancing functionality.
OpenWrt used to route IP packets on embedded devices
pfSense, a fork of M0n0wall, which uses PF
OPNsense, a fork of pfSense
SONiC, a Linux-based network operating system developed by Microsoft
VyOS, an open source fork of the Vyatta routing package
See also
Distributed operating system
FRRouting
Interruptible operating system
Network Computer Operating System
Network functions virtualization
Operating System Projects
SONiC (operating system)
References
Internet Protocol based network software
Operating systems | Network operating system | [
"Engineering"
] | 824 | [
"Computer networks engineering",
"Network operating systems"
] |
41,413 | https://en.wikipedia.org/wiki/Network%20topology | Network topology is the arrangement of the elements (links, nodes, etc.) of a communication network. Network topology can be used to define or describe the arrangement of various types of telecommunication networks, including command and control radio networks, industrial fieldbusses and computer networks.
Network topology is the topological structure of a network and may be depicted physically or logically. It is an application of graph theory wherein communicating devices are modeled as nodes and the connections between the devices are modeled as links or lines between the nodes. Physical topology is the placement of the various components of a network (e.g., device location and cable installation), while logical topology illustrates how data flows within a network. Distances between nodes, physical interconnections, transmission rates, or signal types may differ between two different networks, yet their logical topologies may be identical. A network's physical topology is a particular concern of the physical layer of the OSI model.
Examples of network topologies are found in local area networks (LAN), a common computer network installation. Any given node in the LAN has one or more physical links to other devices in the network; graphically mapping these links results in a geometric shape that can be used to describe the physical topology of the network. A wide variety of physical topologies have been used in LANs, including ring, bus, mesh and star. Conversely, mapping the data flow between the components determines the logical topology of the network. In comparison, Controller Area Networks, common in vehicles, are primarily distributed control system networks of one or more controllers interconnected with sensors and actuators over, invariably, a physical bus topology.
Topologies
Two basic categories of network topologies exist, physical topologies and logical topologies.
The transmission medium layout used to link devices is the physical topology of the network. For conductive or fiber optical mediums, this refers to the layout of cabling, the locations of nodes, and the links between the nodes and the cabling. The physical topology of a network is determined by the capabilities of the network access devices and media, the level of control or fault tolerance desired, and the cost associated with cabling or telecommunication circuits.
In contrast, logical topology is the way that the signals act on the network media, or the way that the data passes through the network from one device to the next without regard to the physical interconnection of the devices. A network's logical topology is not necessarily the same as its physical topology. For example, the original twisted pair Ethernet using repeater hubs was a logical bus topology carried on a physical star topology. Token Ring is a logical ring topology, but is wired as a physical star from the media access unit. Physically, Avionics Full-Duplex Switched Ethernet (AFDX) can be a cascaded star topology of multiple dual redundant Ethernet switches; however, the AFDX virtual links are modeled as time-switched single-transmitter bus connections, thus following the safety model of a single-transmitter bus topology previously used in aircraft. Logical topologies are often closely associated with media access control methods and protocols. Some networks are able to dynamically change their logical topology through configuration changes to their routers and switches.
Links
The transmission media (often referred to in the literature as the physical media) used to link devices to form a computer network include electrical cables (Ethernet, HomePNA, power line communication, G.hn), optical fiber (fiber-optic communication), and radio waves (wireless networking). In the OSI model, these are defined at layers 1 and 2 — the physical layer and the data link layer.
A widely adopted family of transmission media used in local area network (LAN) technology is collectively known as Ethernet. The media and protocol standards that enable communication between networked devices over Ethernet are defined by IEEE 802.3. Ethernet transmits data over both copper and fiber cables. Wireless LAN standards (e.g. those defined by IEEE 802.11) use radio waves, or others use infrared signals as a transmission medium. Power line communication uses a building's power cabling to transmit data.
Wired technologies
The orders of the following wired technologies are, roughly, from slowest to fastest transmission speed.
Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area networks. The cables consist of copper or aluminum wire surrounded by an insulating layer (typically a flexible material with a high dielectric constant), which itself is surrounded by a conductive layer. The insulation between the conductors helps maintain the characteristic impedance of the cable which can help improve its performance. Transmission speed ranges from 200 million bits per second to more than 500 million bits per second.
ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed (up to 1 Gigabit/s) local area network.
Signal traces on printed circuit boards are common for board-level serial communication, particularly between certain types of integrated circuits, a common example being SPI.
Ribbon cable (untwisted and possibly unshielded) has been a cost-effective media for serial protocols, especially within metallic enclosures or rolled within copper braid or foil, over short distances, or at lower data rates. Several serial network protocols can be deployed without shielded or twisted pair cabling, that is, with flat or ribbon cable, or a hybrid flat and twisted ribbon cable, should EMC, length, and bandwidth constraints permit: RS-232, RS-422, RS-485, CAN, GPIB, SCSI, etc.
Twisted pair wire is the most widely used medium for all telecommunication. Twisted-pair cabling consists of copper wires that are twisted into pairs. Ordinary telephone wires consist of two insulated copper wires twisted into pairs. Computer network cabling (wired Ethernet as defined by IEEE 802.3) consists of 4 pairs of copper cabling that can be utilized for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 million bits per second to 10 billion bits per second. Twisted pair cabling comes in two forms: unshielded twisted pair (UTP) and shielded twisted pair (STP). Each form comes in several category ratings, designed for use in various scenarios.
An optical fiber is a glass fiber. It carries pulses of light that represent data. Some advantages of optical fibers over metal wires are very low transmission loss and immunity from electrical interference. Optical fibers can simultaneously carry multiple wavelengths of light, which greatly increases the rate that data can be sent, and helps enable data rates of up to trillions of bits per second. Optical fibers can be used for long runs of cable carrying very high data rates, and are used for undersea communications cables to interconnect continents.
Price is a main factor distinguishing wired and wireless technology options in a business. Wireless options command a price premium that can make purchasing wired computers, printers and other devices a financial benefit. Before making the decision to purchase hard-wired technology products, a review of the restrictions and limitations of the selections is necessary. Business and employee needs may override any cost considerations.
Wireless technologies
Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes. Terrestrial microwaves are in the low gigahertz range, which limits all communications to line-of-sight. Relay stations must therefore be spaced within line of sight of one another.
Communications satellites – Satellites communicate via microwave radio waves, which are not deflected by the Earth's atmosphere. The satellites are stationed in space, typically in geostationary orbit above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.
Cellular and PCS systems use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area has a low-power transmitter or radio relay antenna device to relay calls from one area to the next area.
Radio and spread spectrum technologies – Wireless local area networks use a high-frequency radio technology similar to digital cellular and a low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi.
Free-space optical communication uses visible or invisible light for communications. In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices.
Exotic technologies
There have been various attempts at transporting data over exotic media:
IP over Avian Carriers was a humorous April fool's Request for Comments, issued as RFC 1149. It was implemented in real life in 2001.
Extending the Internet to interplanetary dimensions via radio waves, the Interplanetary Internet.
Both cases have a large round-trip delay time, which gives slow two-way communication, but does not prevent sending large amounts of information.
Nodes
Network nodes are the points of connection of the transmission medium to transmitters and receivers of the electrical, optical, or radio signals carried in the medium. Nodes may be associated with a computer, but certain types may have only a microcontroller at a node or possibly no programmable device at all. In the simplest of serial arrangements, one RS-232 transmitter can be connected by a pair of wires to one receiver, forming two nodes on one link, or a Point-to-Point topology. Some protocols permit a single node to only either transmit or receive (e.g., ARINC 429). Other protocols have nodes that can both transmit and receive into a single channel (e.g., CAN can have many transceivers connected to a single bus). While the conventional system building blocks of a computer network include network interface controllers (NICs), repeaters, hubs, bridges, switches, routers, modems, gateways, and firewalls, most address network concerns beyond the physical network topology and may be represented as single nodes on a particular physical network topology.
Network interfaces
A network interface controller (NIC) is computer hardware that provides a computer with the ability to access the transmission media, and has the ability to process low-level network information. For example, the NIC may have a connector for accepting a cable, or an aerial for wireless transmission and reception, and the associated circuitry.
The NIC responds to traffic addressed to a network address for either the NIC or the computer as a whole.
In Ethernet networks, each network interface controller has a unique Media Access Control (MAC) address—usually stored in the controller's permanent memory. To avoid address conflicts between network devices, the Institute of Electrical and Electronics Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an Ethernet MAC address is six octets. The three most significant octets are reserved to identify NIC manufacturers. These manufacturers, using only their assigned prefixes, uniquely assign the three least-significant octets of every Ethernet interface they produce.
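A trivial sketch of that octet split, using a made-up address:

```python
# First three octets: IEEE-assigned manufacturer prefix (OUI).
# Last three octets: assigned uniquely by that manufacturer.
mac = "00:1A:2B:3C:4D:5E"            # illustrative address
octets = mac.split(":")
oui, device_part = octets[:3], octets[3:]
print("manufacturer prefix:", "-".join(oui))       # 00-1A-2B
print("device-specific part:", "-".join(device_part))
```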
Repeaters and hubs
A repeater is an electronic device that receives a network signal, cleans it of unnecessary noise and regenerates it. The signal may be reformed or retransmitted at a higher power level, to the other side of an obstruction possibly using a different transmission medium, so that the signal can cover longer distances without degradation. Commercial repeaters have extended RS-232 segments from 15 meters to over a kilometer. In most twisted pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters. With fiber optics, repeaters can be tens or even hundreds of kilometers apart.
Repeaters work within the physical layer of the OSI model, that is, there is no end-to-end change in the physical protocol across the repeater, or repeater pair, even if a different physical layer may be used between the ends of the repeater, or repeater pair. Repeaters require a small amount of time to regenerate the signal. This can cause a propagation delay that affects network performance and may affect proper function. As a result, many network architectures limit the number of repeaters that can be used in a row, e.g., the Ethernet 5-4-3 rule.
A repeater with multiple ports is known as a hub: an Ethernet hub in Ethernet networks, a USB hub in USB networks.
USB networks use hubs to form tiered-star topologies.
Ethernet hubs and repeaters in LANs have been mostly obsoleted by modern switches.
Bridges
A network bridge connects and filters traffic between two network segments at the data link layer (layer 2) of the OSI model to form a single network. This breaks the network's collision domain but maintains a unified broadcast domain. Network segmentation breaks down a large, congested network into an aggregation of smaller, more efficient networks.
Bridges come in three basic types:
Local bridges: Directly connect LANs
Remote bridges: Can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, largely have been replaced with routers.
Wireless bridges: Can be used to join LANs or connect remote devices to LANs.
Switches
A network switch is a device that forwards and filters OSI layer 2 datagrams (frames) between ports based on the destination MAC address in each frame.
A switch is distinct from a hub in that it only forwards the frames to the physical ports involved in the communication rather than all ports connected. It can be thought of as a multi-port bridge. It learns to associate physical ports to MAC addresses by examining the source addresses of received frames. If an unknown destination is targeted, the switch broadcasts to all ports but the source. Switches normally have numerous ports, facilitating a star topology for devices, and cascading additional switches.
Multi-layer switches are capable of routing based on layer 3 addressing or additional logical levels. The term switch is often used loosely to include devices such as routers and bridges, as well as devices that may distribute traffic based on load or based on application content (e.g., a Web URL identifier).
Routers
A router is an internetworking device that forwards packets between networks by processing the routing information included in the packet or datagram (Internet protocol information from layer 3). The routing information is often processed in conjunction with the routing table (or forwarding table). A router uses its routing table to determine where to forward packets. A destination in a routing table can include a black hole because data can go into it, however, no further processing is done for said data, i.e. the packets are dropped.
Modems
Modems (MOdulator-DEModulator) are used to connect network nodes via wire not originally designed for digital network traffic, or for wireless. To do this one or more carrier signals are modulated by the digital signal to produce an analog signal that can be tailored to give the required properties for transmission. Modems are commonly used for telephone lines, using a digital subscriber line technology.
Firewalls
A firewall is a network device for controlling network security and access rules. Firewalls are typically configured to reject access requests from unrecognized sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase in cyber attacks.
Classification
The study of network topology recognizes eight basic topologies: point-to-point, bus, star, ring or circular, mesh, tree, hybrid, or daisy chain.
Point-to-point
The simplest topology with a dedicated link between two endpoints. The easiest variation of point-to-point topology to understand is a point-to-point communication channel that appears, to the user, to be permanently associated with the two endpoints. A child's tin can telephone is one example of a physical dedicated channel.
Using circuit-switching or packet-switching technologies, a point-to-point circuit can be set up dynamically and dropped when no longer needed. Switched point-to-point topologies are the basic model of conventional telephony.
The value of a permanent point-to-point network is unimpeded communications between the two endpoints. The value of an on-demand point-to-point connection is proportional to the number of potential pairs of subscribers and has been expressed as Metcalfe's Law.
Daisy chain
Daisy chaining is accomplished by connecting each computer in series to the next. If a message is intended for a computer partway down the line, each system bounces it along in sequence until it reaches the destination. A daisy-chained network can take two basic forms: linear and ring.
A linear topology puts a two-way link between one computer and the next. However, this was expensive in the early days of computing, since each computer (except for the ones at each end) required two receivers and two transmitters.
By connecting the computers at each end of the chain, a ring topology can be formed. When a node sends a message, the message is processed by each computer in the ring. An advantage of the ring is that the number of transmitters and receivers can be cut in half. Since a message will eventually loop all of the way around, transmission does not need to go both directions. Alternatively, the ring can be used to improve fault tolerance. If the ring breaks at a particular link then the transmission can be sent via the reverse path thereby ensuring that all nodes are always connected in the case of a single failure.
Bus
In local area networks using bus topology, each node is connected by interface connectors to a single central cable. This is the 'bus', also referred to as the backbone, or trunk – all data transmission between nodes in the network is transmitted over this common transmission medium and is able to be received by all nodes in the network simultaneously.
A signal containing the address of the intended receiving machine travels from a source machine in both directions to all machines connected to the bus until it finds the intended recipient, which then accepts the data. If the machine address does not match the intended address for the data, the data portion of the signal is ignored. Since the bus topology consists of only one wire it is less expensive to implement than other topologies, but the savings are offset by the higher cost of managing the network. Additionally, since the network is dependent on the single cable, it can be the single point of failure of the network. In this topology data being transferred may be accessed by any node.
Linear bus
In a linear bus network, all of the nodes of the network are connected to a common transmission medium which has just two endpoints. When the electrical signal reaches the end of the bus, the signal is reflected back down the line, causing unwanted interference. To prevent this, the two endpoints of the bus are normally terminated with a device called a terminator.
Distributed bus
In a distributed bus network, all of the nodes of the network are connected to a common transmission medium with more than two endpoints, created by adding branches to the main section of the transmission medium – the physical distributed bus topology functions in exactly the same fashion as the physical linear bus topology because all nodes share a common transmission medium.
Star
In star topology (also called hub-and-spoke), every peripheral node (computer workstation or any other peripheral) is connected to a central node called a hub or switch. The hub is the server and the peripherals are the clients. The network does not necessarily have to resemble a star to be classified as a star network, but all of the peripheral nodes on the network must be connected to one central hub. All traffic that traverses the network passes through the central hub, which acts as a signal repeater.
The star topology is considered the easiest topology to design and implement. One advantage of the star topology is the simplicity of adding additional nodes. The primary disadvantage of the star topology is that the hub represents a single point of failure. Also, since all peripheral communication must flow through the central hub, the aggregate central bandwidth forms a network bottleneck for large clusters.
Extended star
The extended star network topology extends a physical star topology by one or more repeaters between the central node and the peripheral (or 'spoke') nodes. The repeaters are used to extend the maximum transmission distance of the physical layer, the point-to-point distance between the central node and the peripheral nodes. Repeaters allow greater transmission distance, further than would be possible using just the transmitting power of the central node. The use of repeaters can also overcome limitations from the standard upon which the physical layer is based.
A physical extended star topology in which repeaters are replaced with hubs or switches is a type of hybrid network topology and is referred to as a physical hierarchical star topology, although some texts make no distinction between the two topologies.
A physical hierarchical star topology can also be referred to as a tier-star topology. This topology differs from a tree topology in the way star networks are connected together. A tier-star topology uses a central node, while a tree topology uses a central bus and can also be referred to as a star-bus network.
Distributed star
A distributed star is a network topology that is composed of individual networks that are based upon the physical star topology connected in a linear fashion – i.e., 'daisy-chained' – with no central or top level connection point (e.g., two or more 'stacked' hubs, along with their associated star connected nodes or 'spokes').
Ring
A ring topology is a daisy chain in a closed loop. Data travels around the ring in one direction. When one node sends data to another, the data passes through each intermediate node on the ring until it reaches its destination. The intermediate nodes repeat (retransmit) the data to keep the signal strong. Every node is a peer; there is no hierarchical relationship of clients and servers. If one node is unable to retransmit data, it severs communication between the nodes before and after it in the ring.
Advantages:
When the load on the network increases, its performance is better than that of a bus topology.
There is no need for a network server to control the connectivity between workstations.
Disadvantages:
Aggregate network bandwidth is bottlenecked by the weakest link between two nodes.
Mesh
The value of a fully meshed network grows exponentially with the number of subscribers, assuming that communicating groups of any two or more endpoints, up to and including all the endpoints, are possible, as approximated by Reed's law.
Fully connected network
In a fully connected network, all nodes are interconnected. (In graph theory this is called a complete graph.) The simplest fully connected network is a two-node network. A fully connected network doesn't need to use packet switching or broadcasting. However, the number of connections grows quadratically with the number of nodes:
c = n(n - 1)/2
This makes it impractical for large networks. In this topology, the failure of one node does not disrupt the other nodes in the network.
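The quadratic growth of the link count is easy to see in a few lines of code. The following Python sketch is purely illustrative (the function name is invented for this example):

def full_mesh_links(n):
    # Each of the n nodes links to the other n - 1 nodes; dividing by 2
    # avoids counting each point-to-point link twice.
    return n * (n - 1) // 2

for n in (2, 5, 10, 100):
    print(n, "nodes:", full_mesh_links(n), "links")
# 2 nodes: 1 link, 5 nodes: 10 links, 10 nodes: 45 links, 100 nodes: 4950 links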
Partially connected network
In a partially connected network, certain nodes are connected to exactly one other node; but some nodes are connected to two or more other nodes with a point-to-point link. This makes it possible to make use of some of the redundancy of mesh topology that is physically fully connected, without the expense and complexity required for a connection between every node in the network.
Hybrid
Hybrid topology is also known as hybrid network. Hybrid networks combine two or more topologies in such a way that the resulting network does not exhibit one of the standard topologies (e.g., bus, star, ring, etc.). For example, a tree network (or star-bus network) is a hybrid topology in which star networks are interconnected via bus networks. However, a tree network connected to another tree network is still topologically a tree network, not a distinct network type. A hybrid topology is always produced when two different basic network topologies are connected.
A star-ring network consists of two or more ring networks connected using a multistation access unit (MAU) as a centralized hub.
Snowflake topology is meshed at the core, but tree shaped at the edges.
Two other hybrid network types are hybrid mesh and hierarchical star.
Centralization
The star topology reduces the probability of a network failure by connecting all of the peripheral nodes (computers, etc.) to a central node. When the physical star topology is applied to a logical bus network such as Ethernet, this central node (traditionally a hub) rebroadcasts all transmissions received from any peripheral node to all peripheral nodes on the network, sometimes including the originating node. All peripheral nodes may thus communicate with all others by transmitting to, and receiving from, the central node only. The failure of a transmission line linking any peripheral node to the central node will result in the isolation of that peripheral node from all others, but the remaining peripheral nodes will be unaffected. However, the disadvantage is that the failure of the central node will cause the failure of all of the peripheral nodes.
If the central node is passive, the originating node must be able to tolerate the reception of an echo of its own transmission, delayed by the two-way round trip transmission time (i.e. to and from the central node) plus any delay generated in the central node. An active star network has an active central node that usually has the means to prevent echo-related problems.
A tree topology (a.k.a. hierarchical topology) can be viewed as a collection of star networks arranged in a hierarchy. This tree structure has individual peripheral nodes (e.g. leaves) which are required to transmit to and receive from one other node only and are not required to act as repeaters or regenerators. Unlike the star network, the functionality of the central node may be distributed.
As in the conventional star network, individual nodes may thus still be isolated from the network by a single-point failure of a transmission path to the node. If a link connecting a leaf fails, that leaf is isolated; if a connection to a non-leaf node fails, an entire section of the network becomes isolated from the rest.
To alleviate the amount of network traffic that comes from broadcasting all signals to all nodes, more advanced central nodes were developed that are able to keep track of the identities of the nodes that are connected to the network. These network switches will learn the layout of the network by listening on each port during normal data transmission, examining the data packets and recording the address/identifier of each connected node and which port it is connected to in a lookup table held in memory. This lookup table then allows future transmissions to be forwarded to the intended destination only.
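The learning behaviour described above can be sketched compactly. The following Python model is purely illustrative (the class and method names are invented here) and ignores real-world details such as table ageing and VLANs:

class LearningSwitch:
    def __init__(self):
        self.table = {}  # learned mapping: source address -> port

    def handle_frame(self, src, dst, in_port, all_ports):
        self.table[src] = in_port  # record where the sender was heard
        if dst in self.table:
            return [self.table[dst]]  # forward only to the known port
        # destination not yet learned: flood to every other port
        return [p for p in all_ports if p != in_port]

sw = LearningSwitch()
print(sw.handle_frame("aa", "bb", 1, [1, 2, 3]))  # [2, 3] (flooded)
print(sw.handle_frame("bb", "aa", 2, [1, 2, 3]))  # [1] (learned earlier)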
Daisy chain topology is a way of connecting network nodes in a linear or ring structure. It is used to transmit messages from one node to the next until they reach the destination node.
A daisy chain network can have two types: linear and ring. A linear daisy chain network is like an electrical series, where the first and last nodes are not connected. A ring daisy chain network is where the first and last nodes are connected, forming a loop.
Decentralization
In a partially connected mesh topology, there are at least two nodes with two or more paths between them to provide redundant paths in case the link providing one of the paths fails. Decentralization is often used to compensate for the single-point-failure disadvantage that is present when using a single device as a central node (e.g., in star and tree networks). A special kind of mesh, limiting the number of hops between two nodes, is a hypercube. The number of arbitrary forks in mesh networks makes them more difficult to design and implement, but their decentralized nature makes them very useful.
This is similar in some ways to a grid network, where a linear or ring topology is used to connect systems in multiple directions. A multidimensional ring has a toroidal topology, for instance.
A fully connected network, complete topology, or full mesh topology is a network topology in which there is a direct link between all pairs of nodes. In a fully connected network with n nodes, there are n(n - 1)/2 direct links. Networks designed with this topology are usually very expensive to set up, but provide a high degree of reliability due to the multiple paths for data that are provided by the large number of redundant links between nodes. This topology is mostly seen in military applications.
See also
Broadcast communication network
Butterfly network
Computer network diagram
Gradient network
Internet topology
Network simulation
Relay network
Rhizome (philosophy)
Scale-free network
Shared mesh
Switched communication network
Switched mesh
References
External links
Tetrahedron Core Network: Application of a tetrahedral structure to create a resilient partial-mesh 3-dimensional campus backbone data network
Decentralization | Network topology | [
"Mathematics"
] | 5,890 | [
"Network topology",
"Topology"
] |
41,416 | https://en.wikipedia.org/wiki/Noise-equivalent%20power | Noise-equivalent power (NEP) is a measure of the sensitivity of a photodetector or detector system. It is defined as the signal power that gives a signal-to-noise ratio of one in a one hertz output bandwidth. An output bandwidth of one hertz is equivalent to half a second of integration time. The units of NEP are watts per square root hertz (W/√Hz). The NEP is equal to the noise amplitude spectral density S_N (expressed in units of V/√Hz or A/√Hz) divided by the responsivity R (expressed in units of V/W or A/W, respectively). The fundamental equation is NEP = S_N / R.
A smaller NEP corresponds to a more sensitive detector. For example, a detector with an NEP of 10⁻¹² W/√Hz can detect a signal power of one picowatt with a signal-to-noise ratio (SNR) of one after one half second of averaging. The SNR improves as the square root of the averaging time, and hence the SNR in this example can be improved by a factor of 10 by averaging 100 times longer, i.e. for 50 seconds.
If the NEP refers to the signal power absorbed in the detector, it is known as the electrical NEP. If instead it refers to the signal power incident on the detector system, it is called the optical NEP. The optical NEP is equal to the electrical NEP divided by the optical coupling efficiency of the detector system.
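As a numerical check of the definitions above, here is a minimal Python sketch (the NEP value is the one from the example; the function name is invented):

from math import sqrt

def min_detectable_power(nep_w_per_rthz, integration_time_s):
    # A 1 Hz output bandwidth corresponds to 0.5 s of integration,
    # so bandwidth = 1 / (2 * t), and SNR = 1 at P = NEP * sqrt(bandwidth).
    bandwidth_hz = 1.0 / (2.0 * integration_time_s)
    return nep_w_per_rthz * sqrt(bandwidth_hz)

print(min_detectable_power(1e-12, 0.5))   # 1e-12 W = 1 pW
print(min_detectable_power(1e-12, 50.0))  # 1e-13 W: 100x longer averaging, 10x better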
References and footnotes
See also
Noise-equivalent temperature
Specific detectivity
Noise (electronics)
Superconducting detectors
Equivalent units | Noise-equivalent power | [
"Materials_science",
"Mathematics"
] | 309 | [
"Equivalent quantities",
"Quantity",
"Superconductivity",
"Superconducting detectors",
"Equivalent units",
"Units of measurement"
] |
41,417 | https://en.wikipedia.org/wiki/Noise%20figure | Noise figure (NF) and noise factor (F) are figures of merit that indicate degradation of the signal-to-noise ratio (SNR) that is caused by components in a signal chain. These figures of merit are used to evaluate the performance of an amplifier or a radio receiver, with lower values indicating better performance.
The noise factor is defined as the ratio of the output noise power of a device to the portion thereof attributable to thermal noise in the input termination at standard noise temperature T0 (usually 290 K). The noise factor is thus the ratio of actual output noise to that which would remain if the device itself did not introduce noise, which is equivalent to the ratio of input SNR to output SNR.
The noise factor and noise figure are related, with the former being a unitless ratio and the latter being the logarithm of the noise factor, expressed in units of decibels (dB).
General
The noise figure is the difference in decibels (dB) between the noise output of the actual receiver and the noise output of an "ideal" receiver with the same overall gain and bandwidth, when the receivers are connected to matched sources at the standard noise temperature T0 (usually 290 K). The noise power from a simple load is equal to kTB, where k is the Boltzmann constant, T is the absolute temperature of the load (for example a resistor), and B is the measurement bandwidth.
This makes the noise figure a useful figure of merit for terrestrial systems, where the antenna effective temperature is usually near the standard 290 K. In this case, one receiver with a noise figure, say, 2 dB better than another will have an output signal-to-noise ratio that is about 2 dB better than the other. However, in the case of satellite communications systems, where the receiver antenna is pointed out into cold space, the antenna effective temperature is often colder than 290 K. In these cases a 2 dB improvement in receiver noise figure will result in more than a 2 dB improvement in the output signal-to-noise ratio. For this reason, the related figure of effective noise temperature is often used instead of the noise figure for characterizing satellite-communication receivers and low-noise amplifiers.
In heterodyne systems, output noise power includes spurious contributions from image-frequency transformation, but the portion attributable to thermal noise in the input termination at standard noise temperature includes only that which appears in the output via the principal frequency transformation of the system and excludes that which appears via the image frequency transformation.
Definition
The noise factor F of a system is defined as
F = SNR_i / SNR_o
where SNR_i and SNR_o are the input and output signal-to-noise ratios respectively. The quantities are unitless power ratios. Note that this specific definition is only valid for an input signal whose noise is N_i = kT0B.
The noise figure NF is defined as the noise factor in units of decibels (dB):
NF = 10 log10(F) = SNR_i,dB - SNR_o,dB
where SNR_i,dB and SNR_o,dB are in units of decibels (dB).
These formulae are only valid when the input termination is at standard noise temperature T0, although in practice small differences in temperature do not significantly affect the values.
The noise factor of a device is related to its noise temperature T_e:
F = 1 + T_e / T0
Attenuators have a noise factor F equal to their attenuation ratio L when their physical temperature equals T0. More generally, for an attenuator at a physical temperature T, the noise temperature is T_e = (L - 1)T, giving a noise factor
F = 1 + (L - 1)T / T0
Noise factor of cascaded devices
If several devices are cascaded, the total noise factor can be found with Friis' formula:
F = F_1 + (F_2 - 1)/G_1 + (F_3 - 1)/(G_1 G_2) + ... + (F_n - 1)/(G_1 G_2 ... G_(n-1))
where F_n is the noise factor for the n-th device, and G_n is the power gain (linear, not in dB) of the n-th device. The first amplifier in a chain usually has the most significant effect on the total noise figure because the noise figures of the following stages are reduced by stage gains. Consequently, the first amplifier usually has a low noise figure, and the noise figure requirements of subsequent stages are usually more relaxed.
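Friis' formula is straightforward to evaluate numerically. Here is a minimal Python sketch (the function name is invented for this example), working in dB at the interface and in linear terms internally:

from math import log10

def friis_noise_figure_db(stages_db):
    # stages_db: list of (noise figure in dB, gain in dB), input side first.
    total_f = 0.0
    gain = 1.0
    for i, (nf_db, g_db) in enumerate(stages_db):
        f = 10 ** (nf_db / 10.0)
        total_f += f if i == 0 else (f - 1.0) / gain
        gain *= 10 ** (g_db / 10.0)
    return 10 * log10(total_f)

# A low-noise first stage dominates: LNA (NF 1 dB, gain 20 dB),
# followed by a lossy mixer (NF 10 dB, gain -6 dB):
print(friis_noise_figure_db([(1.0, 20.0), (10.0, -6.0)]))  # about 1.3 dB overall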
Noise factor as a function of additional noise
The noise factor may be expressed as a function of the additional output-referred noise power N_a and the power gain G of an amplifier.
Derivation
From the definition of noise factor
F = SNR_i / SNR_o,
and assuming a system which has a noisy single-stage amplifier with gain G. The signal-to-noise ratio at the output of this amplifier includes its own output-referred noise N_a, the amplified signal G S_i and the amplified input noise G N_i:
SNR_o = G S_i / (G N_i + N_a)
Substituting the output SNR into the noise factor definition:
F = (S_i / N_i) (G N_i + N_a) / (G S_i) = 1 + N_a / (G N_i)
In cascaded systems N_i does not refer to the output noise of the previous component. An input termination at the standard noise temperature is still assumed for the individual component. This means that the additional noise power added by each component is independent of the other components.
Optical noise figure
The above describes noise in electrical systems. The optical noise figure is discussed in multiple sources. Electric sources generate noise with a power spectral density, or energy per mode, equal to kT, where k is the Boltzmann constant and T is the absolute temperature. One mode has two quadratures, i.e. the amplitudes of cosine and sine oscillations of voltages, currents or fields. However, there is also noise in optical systems. In these, the sources have no fundamental noise. Instead the energy quantization causes notable shot noise in the detector. In an optical receiver which can output one available mode or two available quadratures this corresponds to a noise power spectral density, or energy per mode, of hf, where h is the Planck constant and f is the optical frequency. In an optical receiver with only one available quadrature the shot noise has a power spectral density, or energy per mode, of only hf/2.
In the 1990s, an optical noise figure was defined. This noise factor has been called F_pnf, for photon number fluctuations. The powers needed for SNR and noise factor calculation are the electrical powers caused by the current in a photodiode. SNR is the square of the mean photocurrent divided by the variance of the photocurrent. Monochromatic or sufficiently attenuated light has a Poisson distribution of detected photons. If, during a detection interval, the expectation value of detected photons is n, then the variance is also n, and one obtains SNR_pnf,in = n²/n = n. Behind an optical amplifier with power gain G there will be a mean of Gn detectable signal photons. In the limit of large G the variance of photons is 2 n_sp G² n, where n_sp is the spontaneous emission factor. One obtains SNR_pnf,out = (Gn)² / (2 n_sp G² n) = n / (2 n_sp). The resulting optical noise factor is F_pnf = SNR_pnf,in / SNR_pnf,out = 2 n_sp.
F_pnf is in conceptual conflict with the electrical noise factor, here called F_e:
Photocurrent I is proportional to optical power P. P is proportional to the square of a field amplitude (electric or magnetic). So, the receiver is nonlinear in amplitude. The "power" needed for the F_pnf calculation is proportional to the 4th power of the signal amplitude. But for F_e in the electrical domain the power is proportional to the square of the signal amplitude.
If F_pnf is a noise factor then its definition must be independent of measurement apparatus and frequency. Consider the signal "power" in the sense of the F_pnf definition. Behind an amplifier it is proportional to P². We may replace the photodiode by a thermal power meter, and the measured photocurrent by a measured temperature change ΔT. "Power", being proportional to I² or ΔT², is also proportional to P². Thermal power meters can be built at all frequencies. Hence it is possible to lower the frequency from optical (say 200 THz) to electrical (say 200 MHz). Still there, "power" must be proportional to ΔT² or P². Electrical power is proportional to the square of voltage U, but "power" in this sense is proportional to U⁴.
These implications are in obvious conflict with ~150 years of physics. They are a compelling consequence of calling F_pnf a noise factor, or a noise figure when expressed in dB.
At any given electrical frequency, noise occurs in both quadratures, i.e. in phase (I) and in quadrature (Q) with the signal. Both these quadratures are available behind the electrical amplifier. The same holds in an optical amplifier. But the direct detection photoreceiver needed for measurement of F_pnf takes mainly the in-phase noise into account, whereas quadrature noise can be neglected for high gain. Also, the receiver outputs only one baseband signal, corresponding to one quadrature. So, one quadrature or degree-of-freedom is lost.
For an optical amplifier with large G it holds F_pnf ≥ 2, whereas for an electrical amplifier it holds F_e ≥ 1.
Moreover, today's long-haul optical fiber communication is dominated by coherent optical I&Q receivers, but F_pnf does not describe the SNR degradation observed in these.
Another optical noise figure, for amplified spontaneous emission, has been defined. But that noise factor is not the SNR degradation factor in any optical receiver.
All the above conflicts are resolved by the optical in-phase and quadrature noise factor F_IQ and figure NF_IQ. It can be measured using a coherent optical I&Q receiver. In these, the power of the output signal is proportional to the square of an optical field amplitude, because they are linear in amplitude. They pass both quadratures. For an optical amplifier it holds F_IQ ≥ 1, where the added noise is quantified by n_add, the input-referred number of added noise photons per mode.
F_pnf and F_IQ can easily be converted into each other. For large G it holds F_pnf = 2 F_IQ or, when expressed in dB, NF_IQ is 3 dB less than NF_pnf. The ideal NF_IQ in dB equals 0 dB. This describes the known fact that the sensitivity of an ideal optical I&Q receiver is not improved by an ideal optical preamplifier.
See also
Noise
Noise (electronic)
Noise figure meter
Noise level
Thermal noise
Signal-to-noise ratio
Y-factor
References
External links
Noise Figure Calculator 2- to 30-Stage Cascade
Noise Figure and Y Factor Method Basics and Tutorial
Mobile phone noise figure
Noise (electronics)
Radar signal processing
Acoustics
Sound
Articles with short description | Noise figure | [
"Physics"
] | 1,969 | [
"Classical mechanics",
"Acoustics"
] |
41,420 | https://en.wikipedia.org/wiki/Noise%20temperature | In electronics, noise temperature is one way of expressing the level of available noise power introduced by a component or source. The power spectral density of the noise is expressed in terms of the temperature (in kelvins) that would produce that level of Johnson–Nyquist noise, thus:
P_N = k_B T B
where:
P_N is the noise power (in W, watts)
B is the total bandwidth (Hz, hertz) over which that noise power is measured
k_B is the Boltzmann constant (1.380649 × 10⁻²³ J/K, joules per kelvin)
T is the noise temperature (K, kelvin)
Thus the noise temperature is proportional to the power spectral density of the noise, P_N / B. That is the power that would be absorbed from the component or source by a matched load. Noise temperature is generally a function of frequency, unlike that of an ideal resistor which is simply equal to the actual temperature of the resistor at all frequencies.
Noise voltage and current
A noisy component may be modelled as a noiseless component in series with a noisy voltage source producing a voltage of v_n, or as a noiseless component in parallel with a noisy current source producing a current of i_n. This equivalent voltage or current corresponds to the above power spectral density P_N / B, and would have a mean squared amplitude over a bandwidth B of:
v_n² = 4 k_B T R B,   i_n² = 4 k_B T G B
where R is the resistive part of the component's impedance or G is the conductance (real part) of the component's admittance. Speaking of noise temperature therefore offers a fair comparison between components having different impedances rather than specifying the noise voltage and qualifying that number by mentioning the component's resistance. It is also more accessible than speaking of the noise's power spectral density (in watts per hertz) since it is expressed as an ordinary temperature which can be compared to the noise level of an ideal resistor at room temperature (290 K).
Note that one can only speak of the noise temperature of a component or source whose impedance has a substantial (and measurable) resistive component. Thus it does not make sense to talk about the noise temperature of a capacitor or of a voltage source. The noise temperature of an amplifier refers to the noise that would be added at the amplifier's input (relative to the input impedance of the amplifier) in order to account for the added noise observed following amplification.
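As a worked example of the mean-squared noise voltage relation above, a short Python sketch (the component values are chosen purely for illustration):

from math import sqrt

K_B = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_voltage_rms(r_ohm, t_kelvin, b_hz):
    # RMS open-circuit Johnson noise voltage: sqrt(4 k T R B)
    return sqrt(4 * K_B * t_kelvin * r_ohm * b_hz)

# A 50-ohm resistor at 290 K measured over a 1 MHz bandwidth:
print(thermal_noise_voltage_rms(50, 290, 1e6))  # about 0.9 microvolts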
System noise temperature
An RF receiver system is typically made up of an antenna and a receiver, and the transmission line(s) that connect the two together. Each of these is a source of additive noise. The additive noise in a receiving system can be of thermal origin (thermal noise) or can be from other external or internal noise-generating processes. The contributions of all noise sources are typically lumped together and regarded as a level of thermal noise. The noise power spectral density generated by any source (P_N / B) can be described by assigning to the noise a temperature T as defined above:
T = P_N / (k_B B)
In an RF receiver, the overall system noise temperature equals the sum of the effective noise temperature of the receiver and transmission lines and that of the antenna.
The antenna noise temperature gives the noise power seen at the output of the antenna. The composite noise temperature of the receiver and transmission line losses represents the noise contribution of the rest of the receiver system. It is calculated as the effective noise that would be present at the antenna input terminals if the receiver system were perfect and created no noise. In other words, it is a cascaded system of amplifiers and losses where the internal noise temperatures are referred to the antenna input terminals. Thus, the summation of these two noise temperatures represents the noise input to a "perfect" receiver system.
Noise factor and noise figure
One use of noise temperature is in the definition of a system's noise factor or noise figure. The noise factor specifies the increase in noise power (referred to the input of an amplifier) due to a component or system when its input noise temperature is T0:
F = 1 + T_e / T0
T0 is customarily taken to be room temperature, 290 K.
The noise factor (a linear term) is more often expressed as the noise figure (in decibels) using the conversion:
NF = 10 log10(F)
The noise figure can also be seen as the decrease in signal-to-noise ratio (SNR) caused by passing a signal through a system if the original signal had a noise temperature of 290 K. This is a common way of expressing the noise contributed by a radio frequency amplifier regardless of the amplifier's gain. For instance, assume an amplifier has a noise temperature of 870 K and thus a noise figure of 6 dB. If that amplifier is used to amplify a source having a noise temperature of about room temperature (290 K), as many sources do, then the insertion of that amplifier would reduce the SNR of a signal by 6 dB. This simple relationship is frequently applicable where the source's noise is of thermal origin, since a passive transducer will often have a noise temperature similar to 290 K.
However, in many cases the input source's noise temperature is much higher, such as an antenna at lower frequencies where atmospheric noise dominates. Then there will be little degradation of the SNR. On the other hand, a good satellite dish looking through the atmosphere into space (so that it sees a much lower noise temperature) would have the SNR of a signal degraded by more than 6 dB. In those cases a reference to the amplifier's noise temperature itself, rather than the noise figure defined according to room temperature, is more appropriate.
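The conversions between noise temperature and noise figure are one-liners. A minimal Python sketch reproducing the example above:

from math import log10

T0 = 290.0  # standard noise temperature, K

def noise_figure_db(t_e_kelvin):
    # NF = 10 log10(1 + T_e / T0)
    return 10 * log10(1.0 + t_e_kelvin / T0)

def noise_temperature_kelvin(nf_db):
    # Inverse conversion: T_e = T0 * (10^(NF/10) - 1)
    return T0 * (10 ** (nf_db / 10.0) - 1.0)

print(noise_figure_db(870))            # about 6.0 dB, as in the text
print(noise_temperature_kelvin(6.02))  # about 870 K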
Effective noise temperature
The noise temperature of an amplifier is commonly measured using the Y-factor method. If there are multiple amplifiers in cascade, the noise temperature of the cascade can be calculated using the Friis equation:
T_eq = T_1 + T_2/G_1 + T_3/(G_1 G_2) + ...
where
T_eq = resulting noise temperature referred to the input
T_1 = noise temperature of the first component in the cascade
T_2 = noise temperature of the second component in the cascade
T_3 = noise temperature of the third component in the cascade
G_1 = power gain of the first component in the cascade
G_2 = power gain of the second component in the cascade
Therefore, the amplifier chain can be modelled as a black box having a gain of G_1 G_2 G_3 ... and a noise figure given by 10 log10(1 + T_eq/T0). In the usual case where the gains of the amplifier's stages are much greater than one, it can be seen that the noise temperatures of the earlier stages have a much greater influence on the resulting noise temperature than those later in the chain. One can appreciate that the noise introduced by the first stage, for instance, is amplified by all of the stages whereas the noise introduced by later stages undergoes lesser amplification. Another way of looking at it is that the signal applied to a later stage already has a high noise level, due to amplification of noise by the previous stages, so that the noise contribution of that stage to that already amplified signal is of less significance.
This explains why the quality of a preamplifier or RF amplifier is of particular importance in an amplifier chain. In most cases only the noise figure of the first stage need be considered. However, one must check that the noise figure of the second stage is not so high (or that the gain of the first stage is not so low) that there is SNR degradation due to the second stage anyway. That will be a concern if the noise figure of the first stage plus that stage's gain (in decibels) is not much greater than the noise figure of the second stage.
One corollary of the Friis equation is that an attenuator prior to the first amplifier will degrade the noise figure due to the amplifier. For instance, if stage 1 represents a 6 dB attenuator so that G_1 = 1/4, then T_eq = T_1 + 4 T_2. Effectively the noise temperature of the amplifier has been quadrupled, in addition to the (smaller) contribution due to the attenuator itself (usually room temperature if the attenuator is composed of resistors). An antenna with poor efficiency is an example of this principle, where G_1 would represent the antenna's efficiency.
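The attenuator corollary can be checked with a small Python sketch of the Friis equation (the function name is invented for this example):

def cascade_noise_temperature(stages):
    # stages: list of (noise temperature in K, linear gain) per stage.
    t_eq, gain = 0.0, 1.0
    for t, g in stages:
        t_eq += t / gain  # each stage is divided by the preceding total gain
        gain *= g
    return t_eq

# A 6 dB pad at 290 K: T = (L - 1) * 290 = 870 K, G = 1/4,
# followed by a 100 K low-noise amplifier:
print(cascade_noise_temperature([(870.0, 0.25), (100.0, 1000.0)]))
# 870 + 4 * 100 = 1270 K: the amplifier's contribution is quadrupled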
See also
Noise spectral density
References
Noise (electronics)
Electrical engineering
Telecommunication theory | Noise temperature | [
"Engineering"
] | 1,602 | [
"Electrical engineering"
] |
41,421 | https://en.wikipedia.org/wiki/Noise%20weighting | A noise weighting is a specific amplitude-vs.-frequency characteristic that is designed to allow subjectively valid measurement of noise. It emphasises the parts of the spectrum that are most important.
Usually, noise means audible noise, in audio systems, broadcast systems or telephone circuits. In this case the weighting is sometimes referred to as Psophometric weighting, though this term is best avoided because, although strictly a general term, the word Psophometric is sometimes assumed to refer to a particular weighting used in telecommunications.
A major use of noise weighting is in the measurement of residual noise in audio equipment, usually present as hiss or hum in quiet moments of programme material. The purpose of weighting here is to emphasise the parts of the audible spectrum that our ears perceive most readily, and attenuate the parts that contribute less to our perception of loudness, in order to get a measured figure that correlates well with subjective effect.
The ITU-R 468 noise weighting was devised specifically for this purpose, and is widely used in broadcasting, especially in the UK and Europe. A-weighting is also used, especially in the United States, though this is only really valid for the measurement of tones, not noise, and is widely incorporated into sound level meters.
In telecommunications, noise weightings are used by agencies concerned with public telephone service, and various standard curves are based on the characteristics of specific commercial telephone instruments, representing successive stages of technological development. The coding of commercial apparatus appears in the nomenclature of certain weightings. The same weighting nomenclature and units are used in military versions of commercial noise measuring sets.
Telecommunication measurements are made in lines terminated either by the measuring set or an instrument of the relevant class.
See also
A-weighting
ITU-R 468 noise weighting
Equal-loudness contour
Noise pollution
Weighting filter
Psophometric weighting
References
Noise
Audio engineering
Sound
Acoustics | Noise weighting | [
"Physics",
"Engineering"
] | 397 | [
"Electrical engineering",
"Audio engineering",
"Classical mechanics",
"Acoustics"
] |
41,428 | https://en.wikipedia.org/wiki/N-entity | In telecommunications, a n-entity is an active element in the n-th layer of the Open Systems Interconnection--Reference Model (OSI-RM) that (a) interacts directly with elements, i.e., entities, of the layer immediately above or below the n-th layer, (b) is defined by a unique set of rules, i.e., syntax, and information formats, including data and control formats, and (c) performs a defined set of functions.
The n refers to any one of the 7 layers of the OSI-RM.
In an existing layered open system, the n may refer to any given layer in the system.
Layers are conventionally numbered from the lowest, i.e., the physical layer, to the highest, so that the (n+1)-th layer is above the n-th layer and the (n-1)-th layer is below.
References
Network architecture
OSI protocols
Reference models | N-entity | [
"Engineering"
] | 194 | [
"Network architecture",
"Computer networks engineering"
] |
41,432 | https://en.wikipedia.org/wiki/Numerical%20aperture | In optics, the numerical aperture (NA) of an optical system is a dimensionless number that characterizes the range of angles over which the system can accept or emit light. By incorporating index of refraction in its definition, NA has the property that it is constant for a beam as it goes from one material to another, provided there is no refractive power at the interface. The exact definition of the term varies slightly between different areas of optics. Numerical aperture is commonly used in microscopy to describe the acceptance cone of an objective (and hence its light-gathering ability and resolution), and in fiber optics, in which it describes the range of angles within which light that is incident on the fiber will be transmitted along it.
General optics
In most areas of optics, and especially in microscopy, the numerical aperture of an optical system such as an objective lens is defined by
NA = n sin θ
where n is the index of refraction of the medium in which the lens is working (1.00 for air, 1.33 for pure water, and typically 1.52 for immersion oil; see also list of refractive indices), and θ is the half-angle of the maximum cone of light that can enter or exit the lens. In general, this is the angle of the real marginal ray in the system. Because the index of refraction is included, the NA of a pencil of rays is an invariant as a pencil of rays passes from one material to another through a flat surface. This is easily shown by rearranging Snell's law to find that n sin θ is constant across an interface.
In air, the angular aperture of the lens is approximately twice this value (within the paraxial approximation). The NA is generally measured with respect to a particular object or image point and will vary as that point is moved. In microscopy, NA generally refers to object-space numerical aperture unless otherwise noted.
In microscopy, NA is important because it indicates the resolving power of a lens. The size of the finest detail that can be resolved (the resolution) is proportional to λ / (2 NA), where λ is the wavelength of the light. A lens with a larger numerical aperture will be able to visualize finer details than a lens with a smaller numerical aperture. Assuming quality (diffraction-limited) optics, lenses with larger numerical apertures collect more light and will generally provide a brighter image, but will provide shallower depth of field.
Numerical aperture is used to define the "pit size" in optical disc formats.
Increasing the magnification and the numerical aperture of the objective reduces the working distance, i.e. the distance between front lens and specimen.
Numerical aperture versus f-number
Numerical aperture is not typically used in photography. Instead, the angular aperture of a lens (or an imaging mirror) is expressed by the f-number, written f/N, where N is the f-number given by the ratio of the focal length f to the diameter of the entrance pupil D:
N = f / D
This ratio is related to the image-space numerical aperture when the lens is focused at infinity. Based on the diagram at the right, the image-space numerical aperture of the lens is:
NA_i = n sin θ ≈ n D / (2f)
thus N ≈ 1 / (2 NA_i), assuming normal use in air (n = 1).
The approximation holds when the numerical aperture is small, but it turns out that for well-corrected optical systems such as camera lenses, a more detailed analysis shows that N is almost exactly equal to 1/(2 NA_i) even at large numerical apertures. As Rudolf Kingslake explains, "It is a common error to suppose that the ratio [D/2f] is actually equal to tan θ, and not sin θ ... The tangent would, of course, be correct if the principal planes were really plane. However, the complete theory of the Abbe sine condition shows that if a lens is corrected for coma and spherical aberration, as all good photographic objectives must be, the second principal plane becomes a portion of a sphere of radius f centered about the focal point". In this sense, the traditional thin-lens definition and illustration of f-number is misleading, and defining it in terms of numerical aperture may be more meaningful.
Working (effective) f-number
The f-number describes the light-gathering ability of the lens in the case where the marginal rays on the object side are parallel to the axis of the lens. This case is commonly encountered in photography, where objects being photographed are often far from the camera. When the object is not distant from the lens, however, the image is no longer formed in the lens's focal plane, and the f-number no longer accurately describes the light-gathering ability of the lens or the image-side numerical aperture. In this case, the numerical aperture is related to what is sometimes called the "working f-number" or "effective f-number".
The working f-number is defined by modifying the relation above, taking into account the magnification from object to image:
N_w = 1 / (2 NA_i) = (1 - m/P) N
where N_w is the working f-number, m is the lens's magnification for an object a particular distance away, P is the pupil magnification, and NA_i is defined in terms of the angle of the marginal ray as before. The magnification here is typically negative, and the pupil magnification is most often assumed to be 1 — as Allen R. Greenleaf explains, "Illuminance varies inversely as the square of the distance between the exit pupil of the lens and the position of the plate or film. Because the position of the exit pupil usually is unknown to the user of a lens, the rear conjugate focal distance is used instead; the resultant theoretical error so introduced is insignificant with most types of photographic lenses."
In photography, the factor (1 - m/P) is sometimes written as 1 + m, where m represents the absolute value of the magnification; in either case, the correction factor is 1 or greater. The two equalities in the equation above are each taken by various authors as the definition of working f-number, as the cited sources illustrate. They are not necessarily both exact, but are often treated as if they are.
Conversely, the object-side numerical aperture is related to the f-number by way of the magnification (tending to zero for a distant object):
NA_o = |m| / (2 N_w)
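The relations between NA, f-number, and working f-number are easy to tabulate. A minimal Python sketch under the well-corrected-lens approximation discussed above (the function names are invented for this example):

def f_number_from_na(na_image):
    # Well-corrected lens: N = 1 / (2 * NA_i)
    return 1.0 / (2.0 * na_image)

def working_f_number(n_fnum, magnification, pupil_magnification=1.0):
    # N_w = (1 - m / P) * N, with m negative for ordinary imaging
    return (1.0 - magnification / pupil_magnification) * n_fnum

N = f_number_from_na(0.25)           # NA_i = 0.25 gives f/2
print(N, working_f_number(N, -1.0))  # at 1:1 (m = -1): N_w = 2N, i.e. f/4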
Laser physics
In laser physics, numerical aperture is defined slightly differently. Laser beams spread out as they propagate, but slowly. Far away from the narrowest part of the beam, the spread is roughly linear with distance—the laser beam forms a cone of light in the "far field". The relation used to define the NA of the laser beam is the same as that used for an optical system,
NA = n sin θ
but θ is defined differently. Laser beams typically do not have sharp edges like the cone of light that passes through the aperture of a lens does. Instead, the irradiance falls off gradually away from the center of the beam. It is very common for the beam to have a Gaussian profile. Laser physicists typically choose to make θ the divergence of the beam: the far-field angle between the beam axis and the distance from the axis at which the irradiance drops to 1/e² times the on-axis irradiance. The NA of a Gaussian laser beam is then related to its minimum spot size ("beam waist") by
NA ≈ 2λ / (π D0)
where λ is the vacuum wavelength of the light, and D0 is the diameter of the beam at its narrowest spot, measured between the 1/e² irradiance points ("full width at 1/e² maximum of the intensity"). This means that a laser beam that is focused to a small spot will spread out quickly as it moves away from the focus, while a large-diameter laser beam can stay roughly the same size over a very long distance. See also: Gaussian beam width.
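As an illustration of the Gaussian-beam relation above, a short Python sketch (the wavelength and waist values are chosen purely for illustration):

from math import asin, degrees, pi

def gaussian_beam_na(wavelength_m, waist_diameter_m):
    # NA of a Gaussian beam from its 1/e^2 waist diameter: 2*lambda/(pi*D0)
    return 2.0 * wavelength_m / (pi * waist_diameter_m)

# A 633 nm HeNe beam focused to a 10 micrometre waist:
na = gaussian_beam_na(633e-9, 10e-6)
print(na, degrees(asin(na)))  # about 0.040, i.e. a 2.3 degree half-angle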
Fiber optics
A multi-mode optical fiber will only propagate light that enters the fiber within a certain range of angles, known as the acceptance cone of the fiber. The half-angle of this cone is called the acceptance angle, θ_max. For step-index multimode fiber in a given medium, the acceptance angle is determined only by the indices of refraction of the core, the cladding, and the medium:
n sin θ_max = √(n_core² - n_clad²)
where n is the refractive index of the medium around the fiber, n_core is the refractive index of the fiber core, and n_clad is the refractive index of the cladding. While the core will accept light at higher angles, those rays will not totally reflect off the core–cladding interface, and so will not be transmitted to the other end of the fiber. The derivation of this formula is given below.
When a light ray is incident from a medium of refractive index n to the core of index n_core at the maximum acceptance angle, Snell's law at the medium–core interface gives
n sin θ_max = n_core sin θ_r
From the geometry of the above figure we have:
sin θ_r = sin(90° - θ_c) = cos θ_c
where
θ_c = arcsin(n_clad / n_core)
is the critical angle for total internal reflection.
Substituting cos θ_c for sin θ_r in Snell's law we get:
(n / n_core) sin θ_max = cos θ_c
By squaring both sides
(n² / n_core²) sin² θ_max = cos² θ_c = 1 - sin² θ_c = 1 - n_clad² / n_core²
Solving, we find the formula stated above:
NA = n sin θ_max = √(n_core² - n_clad²)
This has the same form as the numerical aperture in other optical systems, so it has become common to define the NA of any type of fiber to be
NA = √(n_core² - n_clad²)
where n_core is the refractive index along the central axis of the fiber. Note that when this definition is used, the connection between the numerical aperture and the acceptance angle of the fiber becomes only an approximation. In particular, "NA" defined this way is not relevant for single-mode fiber. One cannot define an acceptance angle for single-mode fiber based on the indices of refraction alone.
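The fiber formulas above translate directly into code. A minimal Python sketch with illustrative index values (typical for silica fiber, chosen here for the example):

from math import asin, degrees, sqrt

def fiber_na(n_core, n_clad):
    # Step-index fiber: NA = sqrt(n_core^2 - n_clad^2)
    return sqrt(n_core**2 - n_clad**2)

def acceptance_half_angle_deg(n_core, n_clad, n_medium=1.0):
    # From n * sin(theta_max) = NA
    return degrees(asin(fiber_na(n_core, n_clad) / n_medium))

print(fiber_na(1.48, 1.46))                   # about 0.24
print(acceptance_half_angle_deg(1.48, 1.46))  # about 14 degrees in air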
The number of bound modes, the mode volume, is related to the normalized frequency and thus to the numerical aperture.
In multimode fibers, the term equilibrium numerical aperture is sometimes used. This refers to the numerical aperture with respect to the extreme exit angle of a ray emerging from a fiber in which equilibrium mode distribution has been established.
See also
f-number
Launch numerical aperture
Guided ray, optic fibre context
Acceptance angle (solar concentrator), further context
References
External links
"Microscope Objectives: Numerical Aperture and Resolution" by Mortimer Abramowitz and Michael W. Davidson, Molecular Expressions: Optical Microscopy Primer (website), Florida State University, April 22, 2004.
"Basic Concepts and Formulas in Microscopy: Numerical Aperture" by Michael W. Davidson, Nikon MicroscopyU (website).
"Numerical aperture", Encyclopedia of Laser Physics and Technology (website).
"Numerical Aperture and Resolution", UCLA Brain Research Institute Microscopy Core Facilities (website), 2007.
Optics
Fiber optics
Microscopy
Dimensionless numbers of physics | Numerical aperture | [
"Physics",
"Chemistry"
] | 2,098 | [
"Applied and interdisciplinary physics",
"Optics",
" molecular",
"Microscopy",
"Atomic",
" and optical physics"
] |
41,440 | https://en.wikipedia.org/wiki/Online%20and%20offline | In computer technology and telecommunications, online indicates a state of connectivity, and offline indicates a disconnected state. In modern terminology, this usually refers to an Internet connection, but (especially when expressed as "on line" or "on the line") could refer to any piece of equipment or functional unit that is connected to a larger system. Being online means that the equipment or subsystem is connected, or that it is ready for use.
"Online" has come to describe activities performed on and data available on the Internet, for example: "online identity", "online predator", "online gambling", "online game", "online shopping", "online banking", and "online learning". A Similar meaning is also given by the prefixes "cyber" and "e", as in words "cyberspace", "cybercrime", "email", and "e-commerce". In contrast, "offline" can refer to either computing activities performed while disconnected from the Internet, or alternatives to Internet activities (such as shopping in brick-and-mortar stores). The term "offline" is sometimes used interchangeably with the acronym "IRL", meaning "in real life".
History
During the 19th century, the term on line was commonly used in both the railroad and telegraph industries. For railroads, a signal box would send messages down the line (track), via a telegraph line (cable), indicating the track's status: Train on line or Line clear. Telegraph linemen would refer to sending current through a line as direct on line or battery on line; or they may refer to a problem with the circuit as being on line, as opposed to the power source or end-point equipment.
Since at least 1950, in computing, the terms on-line and off-line have been used to refer to whether machines, including computers and peripheral devices, are connected or not. Here is an excerpt from the 1950 book High-Speed Computing Devices:
The use of automatic computing equipment for large-scale reduction of data will be strikingly successful only if means are provided for the automatic transcription of these data to a form suitable for automatic entry into the machine. For some applications, of which the most prominent are those in which the reduced data are used to control the process being measured, the input must be developed for on-line operation. In on-line operation the input is communicated directly and without delay to the data-reduction device. For other applications, off-line operation, involving automatic transcription of data in a form suitable for later introduction to the machine, may be tolerated. These requirements may be compared with teleprinter operating requirements. For example, some teletype machines operate on line. Their operators are in instantaneous communication. Other teletype machines are operated off line, through the intervention of punched paper tape. The message is preserved by means of holes punched in the tape and is transmitted later by feeding the tape to another machine.
Examples
Offline e-mail
One example of a common use of these concepts with email is a mail user agent (MUA) that can be instructed to be in either online or offline states. One such MUA is Microsoft Outlook. When online it will attempt to connect to mail servers (to check for new mail at regular intervals, for example), and when offline it will not attempt to make any such connection. The online or offline state of the MUA does not necessarily reflect the connection status between the computer on which it is running and the Internet i.e. the computer itself may be online—connected to the Internet via a cable modem or other means—while Outlook is kept offline by the user, so that it makes no attempt to send or to receive messages. Similarly, a computer may be configured to employ a dial-up connection on demand (as when an application such as Outlook attempts to make a connection to a server), but the user may not wish for Outlook to trigger that call whenever it is configured to check for mail.
Offline media playing
Another example of the use of these concepts is digital audio technology. A tape recorder, digital audio editor, or other device that is online is one whose clock is under the control of the clock of a synchronization master device. When the sync master commences playback, the online device automatically synchronizes itself to the master and commences playing from the same point in the recording. A device that is offline uses no external clock reference and relies upon its own internal clock. When many devices are connected to a sync master it is often convenient, if one wants to hear just the output of one single device, to take it offline because, if the device is played back online, all synchronized devices have to locate the playback point and wait for each other device to be in synchronization. (For related discussion, see MIDI timecode, Word clock, and recording system synchronization.)
Offline browsing
A third example of a common use of these concepts is a web browser that can be instructed to be in either online or offline states. The browser attempts to fetch pages from servers while only in the online state. In the offline state, or "offline mode", users can perform offline browsing, where pages can be browsed using local copies of those pages that have previously been downloaded while in the online state. This can be useful when the computer is offline and connection to the Internet is impossible or undesirable. The pages are downloaded either implicitly into the web browser's own cache as a result of prior online browsing by the user or explicitly by a browser configured to keep local copies of certain web pages, which are updated when the browser is in the online state, either by checking that the local copies are up-to-date at regular intervals or whenever the browser is switched to the online state. One such web browser is Internet Explorer. When pages are added to the Favourites list, they can be marked to be "available for offline browsing". Internet Explorer will download local copies of both the marked page and, optionally, all of the pages that it links to. In Internet Explorer version 6, the level of direct and indirect links, the maximum amount of local disc space allowed to be consumed, and the schedule on which local copies are checked to see whether they are up-to-date, are configurable for each individual Favourites entry.
For communities that lack adequate Internet connectivity—such as developing countries, rural areas, and prisons—offline information stores such as WiderNet's eGranary Digital Library (a collection of approximately thirty million educational resources from more than two thousand web sites and hundreds of CD-ROMs) provide offline access to information. More recently, the Internet Archive announced an offline server project intended to provide access to material on inexpensive servers that can be updated using USB sticks and SD cards.
Offline storage
Likewise, offline storage is computer data storage that has no connection to the other systems until a connection is deliberately made. Additionally, an otherwise online system that is powered down may be considered offline.
Offline messages
With the growing communication tools and media, the words offline and online are used very frequently. If a person is active over a messaging tool and is able to accept the messages it is termed as online message and if the person is not available and the message is left to view when the person is back, it is termed as offline message. In the same context, the person's availability is termed as online and non-availability is termed as offline.
File systems
In the context of file systems, "online" and "offline" are synonymous with "mounted" and "not mounted". For example, in file systems' resizing capabilities, "online grow" and "online shrink" respectively mean the ability to increase or decrease the space allocated to that file system without needing to unmount it.
Generalisations
Online and offline distinctions have been generalised from computing and telecommunication into the field of human interpersonal relationships. The distinction between what is considered online and what is considered offline has become a subject of study in the field of sociology.
The distinction between online and offline is conventionally seen as the distinction between computer-mediated communication and face-to-face communication (e.g., face time), respectively. Online is virtuality or cyberspace, and offline is reality (i.e., real life or "meatspace"). Slater states that this distinction is "obviously far too simple". To support his argument that the distinctions in relationships are more complex than a simple dichotomy of online versus offline, he observes that some people draw no distinction between an online relationship, such as indulging in cybersex, and an offline relationship, such as being pen pals. He argues that even the telephone can be regarded as an online experience in some circumstances, and that the blurring of the distinctions between the uses of various technologies (such as PDA versus mobile phone, internet television versus internet, and telephone versus Voice over Internet Protocol) has made it "impossible to use the term online meaningfully in the sense that was employed by the first generation of Internet research".
Slater asserts that there are legal and regulatory pressures to reduce the distinction between online and offline, with a "general tendency to assimilate online to offline and erase the distinction," stressing, however, that this does not mean that online relationships are being reduced to pre-existing offline relationships. He conjectures that greater legal status may be assigned to online relationships (pointing out that contractual relationships, such as business transactions, online are already seen as just as "real" as their offline counterparts), although he states it to be hard to imagine courts awarding palimony to people who have had a purely online sexual relationship. He also conjectures that an online/offline distinction may be seen by people as "rather quaint and not quite comprehensible" within 10 years.
This distinction between online and offline is sometimes inverted, with online concepts being used to define and to explain offline activities, rather than (as per the conventions of the desktop metaphor with its desktops, trash cans, folders, and so forth) the other way around. Several cartoons appearing in The New Yorker have satirized this. One includes Saint Peter asking for a username and a password before admitting a man into Heaven. Another illustrates "the offline store" where "All items are actual size!", shoppers may "Take it home as soon as you pay for it!", and "Merchandise may be handled prior to purchase!"
See also
, or the "oN-Line System"
References
Computer jargon
Internet terminology | Online and offline | [
"Technology"
] | 2,234 | [
"Natural language and computing",
"Computer jargon",
"Internet terminology",
"Computing terminology"
] |
41,448 | https://en.wikipedia.org/wiki/Open%20network%20architecture | In telecommunications, and in the context of Federal Communications Commission's (FCC) Computer Inquiry III, Open network architecture (ONA) is the overall design of a communication carrier's basic network facilities and services to permit all users of the basic network to interconnect to specific basic network functions and interfaces on an unbundled, equal-access basis.
The ONA concept consists of three integral components:
Basic serving arrangements (BSAs)
Basic service elements (BSEs)
Complementary network services
See also
Open Garden
References
Network architecture | Open network architecture | [
"Engineering"
] | 109 | [
"Network architecture",
"Computer networks engineering"
] |
41,449 | https://en.wikipedia.org/wiki/Open%20systems%20architecture | Open systems architecture is a system design approach which aims to produce systems that are inherently interoperable and connectable without recourse to retrofit and redesign.
Concept
Systems design is a process of defining and engineering the architecture, methods, and interfaces necessary to accomplish a goal or fulfill a set of requirements. In open systems architecture, the design includes intentional provisions to make it possible to expand or modify the system at a later stage after initial operation. There is no one specific universal OSA, but it is essential that the specific OSA applicable to a system is rigorously defined and documented. For example, in information technology and telecommunication, such design principles lead to open systems.
Telecommunications
In telecommunications, open systems architecture (OSA) is a standard that describes the layered hierarchical structure, configuration, or model of a communications or distributed data processing system. It enables system description, design, development, installation, operation, improvement, and maintenance to be performed at the abstraction layers in the hierarchical structure. Each layer provides a set of accessible functions that can be controlled and used by the functions in the layer above it. Each layer can be implemented without affecting the implementation of other layers. The alteration of system performance by the modification of one or more layers may be accomplished without altering the existing equipment, procedures, and protocols at the remaining layers.
Examples of independent alterations include the conversion from wire to optical fiber at a physical layer without affecting the data link layer or the network layer, except to provide more traffic capacity, and the altering of the operational protocols at the network level without altering the physical layer.
See also
Hardware Open Systems Technologies
Architecture of Interoperable Information Systems
Architectural pattern
Enterprise architecture
OSI model
Open-system environment reference model
References
Sources
Telecommunications standards
Systems architecture | Open systems architecture | [
"Engineering"
] | 350 | [
"Systems engineering",
"Design",
"Systems architecture"
] |
41,455 | https://en.wikipedia.org/wiki/Optical%20attenuator | An optical attenuator, or fiber optic attenuator, is a device used to reduce the power level of an optical signal, either in free space or in an optical fiber. The basic types of optical attenuators are fixed, step-wise variable, and continuously variable.
Applications
Optical attenuators are commonly used in fiber-optic communications, either to test power level margins by temporarily adding a calibrated amount of signal loss, or installed permanently to properly match transmitter and receiver levels. Sharp bends stress optic fibers and can cause losses. If a received signal is too strong, a temporary fix is to wrap the cable around a pencil until the desired level of attenuation is achieved. However, such arrangements are unreliable, since the stressed fiber tends to break over time.
Generally, multimode systems do not need attenuators, as multimode sources rarely have enough power output to saturate receivers. Single-mode systems, by contrast, especially long-haul DWDM network links, often need fiber optic attenuators to adjust the optical power during transmission.
Principles of operation
The power reduction is done by such means as absorption, reflection, diffusion, scattering, deflection, diffraction, and dispersion. Optical attenuators usually work by absorbing the light, much as sunglasses absorb extra light energy. They typically have a working wavelength range in which they absorb all light energy equally. They should not reflect the light or scatter the light in an air gap, since that could cause unwanted back reflection in the fiber system. Another type of attenuator uses a length of high-loss optical fiber that attenuates its input optical signal so that the output signal power level is less than the input level.
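Attenuation is specified in decibels, and the corresponding power ratio follows directly. A short Python sketch of the relation (the function name is invented for this example):

def attenuated_power(p_in_watts, attenuation_db):
    # A fixed attenuator of A dB passes a fraction 10^(-A/10) of the power.
    return p_in_watts * 10 ** (-attenuation_db / 10.0)

# A 10 dB attenuator passes one tenth of the input power:
print(attenuated_power(1e-3, 10.0))  # 1 mW in, 0.1 mW out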
Types
Optical attenuators can take a number of different forms and are typically classified as fixed or variable attenuators. They can also be classified as LC, SC, ST, FC, MU, E2000, etc., according to connector type.
Fixed Attenuators
Fixed optical attenuators used in fiber optic systems may use a variety of principles for their functioning. Preferred attenuators use either doped fibers or misaligned splices, since both of these are reliable and inexpensive.
Inline style attenuators are incorporated into patch cables. The alternative build out style attenuator is a small male-female adapter that can be added onto other cables.
Non-preferred attenuators often use gap loss or reflective principles. Such devices can be sensitive to modal distribution, wavelength, contamination, vibration, temperature, and damage due to power bursts, and they may cause back reflections and signal dispersion.
Loopback attenuators
A loopback fiber optic attenuator is designed for testing, engineering, and the burn-in stage of boards or other equipment. Loopback attenuators are available in SC/UPC, SC/APC, LC/UPC, LC/APC, MTRJ, and MPO styles for single-mode applications. For the LC and SC types, a 900 µm fiber cable sits inside the black shell; the MTRJ and MPO types have no black shell.
Built-in variable attenuators
Built-in variable optical attenuators may be either manually or electrically controlled. A manual device is useful for one-time set up of a system, and is a near-equivalent to a fixed attenuator, and may be referred to as an "adjustable attenuator". In contrast, an electrically controlled attenuator can provide adaptive power optimization.
Attributes of merit for electrically controlled devices include speed of response and avoiding degradation of the transmitted signal. Dynamic range is usually quite restricted, and power feedback may mean that long-term stability is a relatively minor issue. Speed of response is a particularly major issue in dynamically reconfigurable systems, where a delay of one millionth of a second can result in the loss of large amounts of transmitted data. Typical technologies employed for high-speed response include liquid crystal variable attenuators (LCVA) and lithium niobate devices. There is a class of built-in attenuators that is technically indistinguishable from test attenuators, except that they are packaged for rack mounting and have no test display.
Variable optical test attenuators
Variable optical test attenuators generally use a variable neutral density filter. Despite relatively high cost, this arrangement has the advantages of being stable, wavelength insensitive, mode insensitive, and offering a large dynamic range. Other schemes, such as LCD and variable air gap devices, have been tried over the years, but with limited success.
They may be either manually or motor controlled. Motor control gives regular users a distinct productivity advantage, since commonly used test sequences can be run automatically.
Attenuator instrument calibration is a major issue. The user typically wants an absolute port-to-port calibration. Also, calibration should usually be performed at a number of wavelengths and power levels, since the device is not always linear. However, a number of instruments do not in fact offer these basic features, presumably in an attempt to reduce cost. The most accurate variable attenuator instruments have thousands of calibration points, resulting in excellent overall accuracy in use.
Test automation
Test sequences that use variable attenuators can be very time-consuming. Therefore, automation is likely to achieve useful benefits. Both bench and handheld-style devices are available that offer such features.
See also
Gap loss - sources and causes of unintended attenuation
Optical fiber cable
Optical fiber connector
Optical power meter
References
Fiber optics
Optical components
Telecommunications equipment
Measuring instruments | Optical attenuator | [
"Materials_science",
"Technology",
"Engineering"
] | 1,185 | [
"Glass engineering and science",
"Optical components",
"Components",
"Measuring instruments"
] |
41,458 | https://en.wikipedia.org/wiki/Optical%20disc | An optical disc is a flat, usually disc-shaped object that stores information in the form of physical variations on its surface that can be read with the aid of a beam of light. Optical discs can be reflective, where the light source and detector are on the same side of the disc, or transmissive, where light shines through the disc to be detected on the other side.
Optical discs can store analog information (e.g. Laserdisc), digital information (e.g. DVD), or store both on the same disc (e.g. CD Video).
Their main uses are the distribution of media and data, and long-term archival.
Design and technology
The encoding material sits atop a thicker substrate (usually polycarbonate) that makes up the bulk of the disc and forms a dust defocusing layer. The encoding pattern follows a continuous, spiral path covering the entire disc surface and extending from the innermost track to the outermost track.
The data are stored on the disc with a laser or stamping machine, and can be accessed when the data path is illuminated with a laser diode in an optical disc drive that spins the disc at speeds of about 200 to 4,000 RPM or more, depending on the drive type, disc format, and the distance of the read head from the center of the disc (outer tracks are read at a higher data speed due to higher linear velocities at the same angular velocities).
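The radius dependence mentioned above is just v = ωr: at a fixed angular velocity, the linear velocity, and hence the raw data rate, grows with the track radius. A quick sketch with assumed, CD-like radii:

```python
import math

def linear_velocity_m_s(rpm: float, radius_mm: float) -> float:
    """Linear track velocity v = omega * r for a disc spinning at a given RPM."""
    omega = rpm * 2 * math.pi / 60          # angular velocity in rad/s
    return omega * (radius_mm / 1000)       # metres per second

rpm = 500  # constant angular velocity
print(linear_velocity_m_s(rpm, 25))  # innermost program area: ~1.3 m/s
print(linear_velocity_m_s(rpm, 58))  # outermost track: ~3.0 m/s (about 2.3x faster)
```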
Most optical discs exhibit a characteristic iridescence as a result of the diffraction grating formed by their grooves. This side of the disc contains the actual data and is typically coated with a transparent material, usually lacquer.
The reverse side of an optical disc usually has a printed label, sometimes made of paper but often printed or stamped onto the disc itself. Unlike the 3½-inch floppy disk, most optical discs do not have an integrated protective casing and are therefore susceptible to data transfer problems due to scratches, fingerprints, and other environmental problems. Blu-rays have a coating called durabis that mitigates these problems.
Optical discs are usually between 7.6 and 30 cm in diameter, with 12 cm being the most common size. The so-called program area that contains the data commonly starts 25 millimetres away from the center point. A typical disc is about 1.2 mm thick, while the track pitch (distance from the center of one track to the center of the next) ranges from 1.6 μm (for CDs) to 320 nm (for Blu-ray discs).
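From the track pitch and the program-area radii, the total spiral length can be estimated by treating the spiral as densely packed concentric rings, so length ≈ π(r_out² − r_in²)/pitch. A sketch using the 25 mm inner radius above and an assumed 58 mm outer radius:

```python
import math

def spiral_length_m(r_in_mm: float, r_out_mm: float, pitch_um: float) -> float:
    """Approximate spiral track length: annulus area divided by track pitch."""
    area_mm2 = math.pi * (r_out_mm**2 - r_in_mm**2)
    return area_mm2 / (pitch_um / 1000) / 1000   # convert mm to m

print(spiral_length_m(25, 58, 1.6))    # CD pitch: roughly 5.4 km of track
print(spiral_length_m(25, 58, 0.32))   # Blu-ray pitch: roughly 27 km
```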
Recording types
An optical disc is designed to support one of three recording types: read-only (such as CD and CD-ROM), recordable (write-once, like CD-R), or re-recordable (rewritable, like CD-RW). Write-once optical discs commonly have an organic dye recording layer between the substrate and the reflective layer; the dye may be a (phthalocyanine) azo dye, mainly used by Verbatim, or an oxonol dye, used by Fujifilm. Rewritable discs typically contain an alloy recording layer composed of a phase change material, most often AgInSbTe, an alloy of silver, indium, antimony, and tellurium. Azo dyes were introduced in 1996 and phthalocyanine only began to see wide use in 2002. The type of dye and the material used on the reflective layer on an optical disc may be determined by shining a light through the disc, as different dye and material combinations have different colors.
Blu-ray Disc recordable discs do not usually use an organic dye recording layer, instead using an inorganic recording layer. Those that do are known as low-to-high (LTH) discs and can be made in existing CD and DVD production lines, but are of lower quality than traditional Blu-ray recordable discs.
File systems
File systems specifically created for optical discs are ISO9660 and the Universal Disk Format (UDF).
ISO9660 can be extended using the "Joliet" extension to store longer file names than standalone ISO9660. The "Rock Ridge" extension can store even longer file names and Unix/Linux-style file permissions, but is not recognized by Windows or by DVD players and similar devices that can read data discs.
For cross-platform compatibility, multiple file systems can co-exist on one disc and reference the same files.
Usage
Optical discs are most commonly used for digital preservation, storing music (particularly for use in a CD player), video (such as for use in a Blu-ray player), or data and programs for personal computers (PC), as well as offline hard copy data distribution due to lower per-unit prices than other types of media. The Optical Storage Technology Association (OSTA) promoted standardized optical storage formats.
Libraries and archives enact optical media preservation procedures to ensure continued usability in the computer's optical disc drive or corresponding disc player.
File operations of traditional mass storage devices such as flash drives, memory cards and hard drives can be simulated using a UDF live file system.
For computer data backup and physical data transfer, optical discs such as CDs and DVDs are gradually being replaced with faster, smaller solid-state devices, especially the USB flash drive. This trend is expected to continue as USB flash drives continue to increase in capacity and drop in price.
Additionally, music, movies, games, software and TV shows purchased, shared or streamed over the Internet have significantly reduced the number of audio CDs, video DVDs and Blu-ray discs sold annually. However, audio CDs and Blu-rays are still preferred and bought by some, as a way of supporting their favorite works while getting something tangible in return, and also because audio CDs (alongside vinyl records and cassette tapes) contain uncompressed audio without the artifacts introduced by lossy compression algorithms like MP3, while Blu-rays offer better image and sound quality than streaming media, without visible compression artifacts, due to higher bitrates and more available storage space. Blu-ray content may also be obtained by torrenting, but this is not an option for everyone, whether due to restrictions put in place by ISPs on legal or copyright grounds, low download speeds, or insufficient available storage space, since the content may occupy up to several dozen gigabytes. Blu-rays may be the only option for those looking to play large games without having to download them over an unreliable or slow internet connection, which is why they were still widely used (as of 2020) by gaming consoles like the PlayStation 4 and Xbox One X. As of 2020, it is unusual for PC games to be available in a physical format like Blu-ray.
Optical discs are typically stored in special cases, sometimes called jewel cases. Discs should not have any stickers and should not be stored together with paper; papers must be removed from the jewel case before storage. Discs should be handled by the edges to prevent scratching, with the thumb on the inner edge of the disc. ISO 18938:2014 describes best practices for optical disc handling. Optical disc cleaning should never be done in a circular pattern, to avoid concentric circles from forming on the disc; improper cleaning can scratch the disc. Recordable discs should not be exposed to light for extended periods of time. Optical discs should be stored in dry and cool conditions to increase longevity, with temperatures between -10 and 23 °C, never exceeding 32 °C, and with humidity never falling below 10%, with recommended storage at 20 to 50% humidity without fluctuations of more than ±10%.
Durability
Although optical discs are more durable than earlier audio-visual and data storage formats, they are susceptible to environmental and daily-use damage, if handled improperly.
Unlike hard disk drives and flash storage, optical discs are not prone to uncontrollable catastrophic failures such as head crashes, power surges, or exposure to water. The storage controller sits in the optical drive rather than on the disc itself, as it does with hard disk drives and flash memory controllers, so a disc is usually recoverable from a defective optical drive by pushing a blunt needle into the emergency ejection pinhole. A disc also has no point of immediate water ingress and no integrated circuitry.
Security
As the medium itself is accessed only through a laser beam and has no internal control circuitry, it cannot contain malicious hardware in the way that so-called rubber duckies or USB killers do. Like any data storage medium, however, optical discs can contain malicious data and are able to contain and spread malware, as happened in the Sony BMG copy protection rootkit scandal in 2005, when Sony misused discs by pre-loading them with malware.
Many types of optical discs are factory-pressed or finalized write once read many storage devices and would therefore not be effective at spreading computer worms that are designed to spread by copying themselves onto optical media, because data on those discs can not be modified once pressed or written. However, re-writable disc technologies (such as CD-RW) are able to spread this type of malware.
History
The first recorded historical use of an optical disc was in 1884 when Alexander Graham Bell, Chichester Bell and Charles Sumner Tainter recorded sound on a glass disc using a beam of light.
Optophonie is a very early (1931) example of a recording device using light for both recording and playing back sound signals on a transparent photograph.
An early analogue optical disc system existed in 1935, used on Welte's sampling organ.
An early analog optical disc used for video recording was invented by David Paul Gregg in 1958 and patented in the US in 1961 and 1969. This form of optical disc was a very early form of the DVD. Notably, one of Gregg's patents, filed in 1989 and issued in 1990, generated royalty income for Pioneer Corporation's DVA until 2007, by which time it encompassed the CD, DVD, and Blu-ray systems. In the early 1960s, the Music Corporation of America bought Gregg's patents and his company, Gauss Electrophysics.
American inventor James T. Russell has been credited with inventing the first system to record a digital signal on an optical transparent foil that is lit from behind by a high-power halogen lamp. Russell's patent application was first filed in 1966 and he was granted a patent in 1970. Following litigation, Sony and Philips licensed Russell's patents (then held by a Canadian company, Optical Recording Corp.) in the 1980s.
Both Gregg's and Russell's discs were floppy media read in transparent mode, which imposes serious drawbacks. After these came successive generations of optical media, including the LaserDisc (1969), WORM (1979), the Compact Disc (1984), DVD (1995), Blu-ray (2005), and HD DVD (2006); more formats are currently under development.
First-generation
From the start optical discs were used to store broadcast-quality analog video, and later digital media such as music or computer software. The LaserDisc format stored analog video signals for the distribution of home video, but commercially lost to the VHS videocassette format, due mainly to its high cost and non-re-recordability; other first-generation disc formats were designed only to store digital data and were not initially capable of use as a digital video medium.
Most first-generation disc devices had an infrared laser reading head. The minimum size of the laser spot is proportional to the wavelength of the laser, so wavelength is a limiting factor upon the amount of information that can be stored in a given physical area on the disc. The infrared range is beyond the long-wavelength end of the visible light spectrum, so it supports less density than shorter-wavelength visible light. One example of high-density data storage capacity, achieved with an infrared laser, is 700 MB of net user data for a 12 cm compact disc.
Other factors that affect data storage density include: the existence of multiple layers of data on the disc, the method of rotation (constant linear velocity (CLV), constant angular velocity (CAV), or zoned-CAV), the composition of lands and pits, and how much margin is left unused at the center and the edge of the disc.
Types of optical discs:
Compact disc (CD) and derivatives
Audio CD
Video CD (VCD)
Super Video CD
CD Video
CD-Interactive
LaserDisc
GD-ROM
Phase-change Dual
Double Density Compact Disc (DDCD)
Magneto-optical disc
MiniDisc (MD)
MD Data
Write Once Read Many (WORM)
Laserdisc
In the Netherlands in 1969, Philips Research physicist Pieter Kramer invented an optical videodisc in reflective mode with a protective layer, read by a focused laser beam (patent filed in 1972, issued in 1991). Kramer's physical format is used in all optical discs.
In 1975, Philips and MCA began to work together, and in 1978, commercially much too late, they presented their long-awaited Laserdisc in Atlanta. MCA delivered the discs and Philips the players. However, the presentation was a commercial failure, and the cooperation ended.
In Japan and the U.S., Pioneer succeeded with the Laserdisc until the advent of the DVD. In 1979, Philips and Sony, in consortium, successfully developed the audio compact disc.
WORM drive
In 1979, Exxon STAR Systems in Pasadena, CA built a computer-controlled WORM drive that utilized thin film coatings of tellurium and selenium on a 12" diameter glass disk. The recording system utilized blue light at 457 nm to record and red light at 632.8 nm to read. STAR Systems was bought by Storage Technology Corporation (STC) in 1981 and moved to Boulder, CO. Development of the WORM technology was continued using 14" diameter aluminum substrates. Beta testing of the disk drives, originally labeled the Laser Storage Drive 2000 (LSD-2000), was only moderately successful. Many of the disks were shipped to RCA Laboratories (now David Sarnoff Research Center) to be used in the Library of Congress archiving efforts. The STC disks utilized a sealed cartridge with an optical window for protection.
CD-ROM
The CD-ROM format was developed by Sony and Philips, introduced in 1984, as an extension of Compact Disc Digital Audio and adapted to hold any form of digital data. The same year, Sony demonstrated a LaserDisc data storage format, with a larger data capacity of 3.28 GB.
In the late 1980s and early 1990s, Optex, Inc. of Rockville, MD, built an erasable optical digital video disc system using Electron Trapping Optical Media (ETOM). Although this technology was written up in Video Pro Magazine's December 1994 issue promising "the death of the tape", it was never marketed.
Magnetic disks found only limited application in storing large amounts of data, creating a need for other storage techniques. Optical means turned out to allow much larger storage devices, which gave rise to optical discs. The first application of this kind was the compact disc (CD), used in audio systems.
Sony and Philips developed the first generation of CDs in the mid-1980s, with complete specifications for these devices. This technology exploited the possibility of representing an analog signal digitally. For this purpose, 16-bit samples of the analog signal were taken at a rate of 44,100 samples per second. This sample rate was based on the Nyquist rate of 40,000 samples per second required to capture the audible frequency range up to 20 kHz without aliasing, with an additional tolerance to allow the use of less-than-perfect analog audio pre-filters to remove any higher frequencies. The first version of the standard allowed up to 74 minutes of music or 650 MB of data storage.
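These figures can be checked with simple arithmetic (a sketch; note that the 650 MB value is the CD-ROM data capacity, which is lower than the raw audio figure because data sectors carry extra error-correction overhead):

```python
SAMPLE_RATE = 44_100      # samples per second, per channel
BITS_PER_SAMPLE = 16
CHANNELS = 2              # stereo

bitrate = SAMPLE_RATE * BITS_PER_SAMPLE * CHANNELS   # bits per second
print(bitrate)                         # 1,411,200 bit/s (~1.41 Mbit/s)

seconds = 74 * 60                      # 74 minutes of audio
audio_bytes = bitrate * seconds // 8
print(audio_bytes / 1e6)               # ~783 MB of raw audio samples
# The 650 MB figure is the capacity in CD-ROM data mode, where part of each
# sector is given over to additional error correction and headers.
```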
Second-generation
Second-generation optical discs were for storing great amounts of data, including broadcast-quality digital video. Such discs usually are read with a visible-light laser (usually red); the shorter wavelength and greater numerical aperture allow a narrower light beam, permitting smaller pits and lands in the disc. In the DVD format, this allows 4.7 GB storage on a standard 12 cm, single-sided, single-layer disc; alternatively, smaller media, such as the DataPlay format, can have capacity comparable to that of the larger, standard compact 12 cm disc.
DVD and derivatives
DVD-Audio
DualDisc
Digital Video Express (DIVX)
DVD-RAM
DVD±R
Nintendo GameCube Game Disc (miniDVD derivative)
Wii Optical Disc (DVD derivative)
Super Audio CD (SACD)
Enhanced Versatile Disc
DataPlay
Hi-MD
Universal Media Disc (UMD)
Ultra Density Optical
DVD-ROM
In 1995, a consortium of manufacturers (Sony, Philips, Toshiba, Panasonic) developed the second generation of the optical disc, the DVD. The DVD disc appeared after the CD-ROM had become widespread in society.
Third-generation
Third-generation optical discs are used for distributing high-definition video and videogames and support greater data storage capacities, accomplished with short-wavelength visible-light lasers and greater numerical apertures. Blu-ray Disc and HD DVD use blue-violet lasers and focusing optics of greater aperture, for use with discs with smaller pits and lands, and thereby greater data storage capacity per layer.
In practice, the effective multimedia presentation capacity is improved with enhanced video data compression codecs such as H.264/MPEG-4 AVC and VC-1.
Blu-ray and derivatives (up to 400 GB - experimental)
BD-R and BD-RE
High Fidelity Pure Audio
AVCHD and AVCREC
BDXL and Blu-ray 3D
4K Blu-ray and 8K Blu-ray
Wii U Optical Disc (25 GB per layer)
HD DVD (discontinued disc format, up to 51 GB triple layer)
CBHD (a derivative of the HD DVD format)
HD VMD
Professional Disc
Announced but not released:
Digital Multilayer Disk
Fluorescent Multilayer Disc
Forward Versatile Disc
Blu-ray and HD-DVD
The third-generation optical disc was developed in 2000–2006 and was introduced as the Blu-ray Disc. The first movies on Blu-ray Discs were released in June 2006. Blu-ray eventually prevailed in a high-definition optical disc format war over a competing format, the HD DVD. A standard Blu-ray disc can hold about 25 GB of data, a DVD about 4.7 GB, and a CD about 700 MB.
Fourth-generation
The following formats go beyond the current third-generation discs and have the potential to hold more than one terabyte (1 TB) of data, and at least some are meant for cold data storage in data centers:
Archival Disc
Holographic Versatile Disc
Announced but not released:
LS-R
Protein-coated disc
Stacked Volumetric Optical Disc
5D DVD
3D optical data storage (not a single technology, examples are Hyper CD-ROM and Fluorescent Multilayer Disc)
In 2004, development of the Holographic Versatile Disc (HVD) commenced, which promised the storage of several terabytes of data per disc. However, development stagnated towards the late 2000s due to lack of funding.
In 2006, it was reported that Japanese researchers developed ultraviolet ray lasers with a wavelength of 210 nanometers, which would enable a higher bit density than Blu-ray discs. As of 2022, no updates on that project have been reported.
Folio Photonics is planning to release high-capacity discs in 2024 at a cost of $5 per TB, with a roadmap to $1 per TB, using 80% less power than HDDs.
Overview of optical types
Notes
Recordable and writable optical discs
There are numerous formats of optical direct to disk recording devices on the market, all of which are based on using a laser to change the reflectivity of the digital recording medium in order to duplicate the effects of the pits and lands created when a commercial optical disc is pressed. Formats such as CD-R and DVD-R are "Write once read many" or write-once, while CD-RW and DVD-RW are rewritable, more like a magnetic recording hard disk drive (HDD).
Media technologies vary, for example, M-DISC media uses a rock-like layer to retain data for longer than conventional recordable media. While being read-only compatible with existing DVD and Blu-ray drives, M-DISC media can only be written to using a stronger laser specifically made for this purpose, which is built into fewer optical drive models.
Surface error scanning
Optical media can predictively be scanned for errors and media deterioration well before any data becomes unreadable. Optical formats include some redundancy for error correction, which works until the amount of error exceeds a threshold. A higher rate of errors may indicate deteriorating and/or low quality media, physical damage, an unclean surface and/or media written using a defective optical drive.
Precise error scanning requires access to the raw, uncorrected readout of a disc, which is not always provided by a drive. As a result, support of this functionality varies per optical drive manufacturer and model. On ordinary drives without this functionality, it is possible to still look for unexpected reduction in read speed as an indirect, much less reliable measure.
Several specialized tools are available for performing error scans on optical media. Popular programs include Nero DiscSpeed, K-Probe, Opti Drive Control (previously known as "CD Speed 2000"), and DVD Info Pro for Windows. For cross-platform users, QPxTool is available to help monitor and maintain optical media integrity. Each of these tools allows for detailed analysis of the error rates and conditions affecting optical discs.
Error types
There are different types of error measurements, including so-called "C1", "C2" and "CU" errors on CDs, and "PI/PO (parity inner/outer) errors" and the more critical "PI/PO failures" on DVDs. Finer-grain error measurements on CDs, supported by very few optical drives, are called E11, E21, E31, E12, E22, and E32.
"CU" and "POF" represent uncorrectable errors on data CDs and DVDs respectively, thus data loss, and can be a result of too many consecutive smaller errors.
Due to the weaker error correction used on Audio CDs (Red Book standard) and Video CDs (White Book standard), C2 errors already lead to data loss. However, even with C2 errors, the damage may remain inaudible to some extent.
Blu-ray discs use so-called LDC (Long Distance Codes) and BIS (Burst Indication Subcodes) error parameters. According to the developer of the Opti Drive Control software, a disc can be considered healthy at an LDC error rate below 13 and BIS error rate below 15.
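Those thresholds amount to a trivial rule-of-thumb check. A sketch applying them (the function name is made up, and the error-rate inputs must come from a drive that actually reports LDC and BIS values):

```python
def blu_ray_disc_healthy(ldc_rate: float, bis_rate: float) -> bool:
    """Apply the rule-of-thumb thresholds quoted above: LDC < 13 and BIS < 15."""
    return ldc_rate < 13 and bis_rate < 15

print(blu_ray_disc_healthy(ldc_rate=8.2, bis_rate=3.1))    # True: healthy
print(blu_ray_disc_healthy(ldc_rate=40.0, bis_rate=9.0))   # False: deteriorating
```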
Optical disc manufacturing
Optical discs are made using replication. This process can be used with all disc types. Recordable discs have pre-recorded vital information, like manufacturer, disc type, maximum read and write speeds, etc. In replication, a cleanroom with yellow light is necessary to protect the light-sensitive photoresist and to prevent dust from corrupting the data on the disc.
A glass master is used in replication. The master is placed in a machine that cleans it as much as possible using a rotating brush and deionized water, preparing it for the next step. In the next step, a surface analyzer inspects the cleanliness of the master before photoresist is applied on the master.
The photoresist is then baked in an oven to solidify it. Then, in the exposure process, the master is placed in a turntable where a laser selectively exposes the resist to light. At the same time, a developer and deionized water are applied to the disc to remove the exposed resist. This process forms the pits and lands that represent the data on the disc.
A thin coating of metal is then applied to the master, making a negative of the master with the pits and lands in it. The negative is then peeled off the master and coated in a thin layer of plastic. The plastic protects the coating while a punching press punches a hole into the center of the disc, and punches excess material.
The negative is now a stamper - a part of the mold that will be used for replication. It is placed on one side of the mold with the data side containing the pits and lands facing out. This is done inside an injection molding machine. The machine then closes the mold and injects polycarbonate in the cavity formed by the walls of the mold, which forms or molds the disc with the data on it.
The molten polycarbonate fills the pits or spaces between the lands on the negative, acquiring their shape when it solidifies. This step is somewhat similar to record pressing.
The polycarbonate disc cools quickly and is promptly removed from the machine before another disc is formed. The disc is then metallized, covered with a thin reflective layer of aluminum. The aluminum fills the space once occupied by the negative.
A layer of varnish is then applied to protect the aluminum coating and provide a surface suitable for printing. The varnish is applied near the center of the disc, and the disc is spun, evenly distributing the varnish on the surface of the disc. The varnish is hardened using UV light. The discs are then silkscreened or a label is otherwise applied.
Recordable discs add a dye layer, and rewritable discs add a phase change alloy layer instead, which is protected by upper and lower dielectric (electrically insulating) layers. The layers may be sputtered. The additional layer is between the grooves and the reflective layer of the disc. Grooves are made in recordable discs in place of the traditional pits and lands found in replicated discs, and the two can be made in the same exposure process. In DVDs, the same processes as in CDs are carried out, but in a thinner disc. The thinner disc is then bonded to a second, equally thin but blank, disc using UV-curable liquid optically clear adhesive, forming a DVD disc. This leaves the data in the middle of the disc, which is necessary for DVDs to achieve their storage capacity. In multi-layer discs, semi-reflective rather than fully reflective coatings are used for all layers except the last layer, which is the deepest one and uses a traditional reflective coating.
Dual layer DVDs are made slightly differently. After metallization (with a thinner metal layer to allow some light to pass through), base and pit transfer resins are applied and pre-cured in the center of the disc. Then the disc is pressed again using a different stamper, and the resins are completely cured using UV light before being separated from the stamper. Then the disc receives another, thicker metallization layer, and is then bonded to the blank disc using LOCA glue. DVD-R DL and DVD+R DL discs receive a dye layer after curing, but before metallization. CD-R, DVD-R, and DVD+R discs receive the dye layer after pressing but before metallization. CD-RW, DVD-RW and DVD+RW receive a metal alloy layer sandwiched between 2 dielectric layers. HD-DVD is made in the same way as DVD. In recordable and rewritable media, most of the stamper is composed of grooves, not pits and lands. The grooves contain a wobble frequency that is used to locate the position of the reading or writing laser on the disc. DVDs use pre-pits instead, with a constant frequency wobble.
Blu-ray
HTL (high-to-low type) Blu-ray discs are made differently. First, a silicon wafer is used instead of a glass master. The wafer is processed in the same way a glass master would.
The wafer is then electroplated to form a 300-micron thick nickel stamper, which is peeled off from the wafer. The stamper is mounted onto a mold inside a press or embosser.
The polycarbonate discs are molded in a similar fashion to DVD and CD discs. If the discs being produced are BD-Rs or BD-REs, the mold is fitted with a stamper that stamps a groove pattern onto the discs, in lieu of the pits and lands found on BD-ROM discs.
After cooling, a 35 nanometre-thick layer of silver alloy is applied to the disc using sputtering. The second layer is then made by applying base and pit transfer resins to the disc, which are pre-cured in its center.
After application and pre-curing, the disc is pressed or embossed using a stamper and the resins are immediately cured using intense UV light, before the disc is separated from the stamper. The stamper contains the data that will be transferred to the disc. This process is known as embossing and is the step that engraves the data onto the disc, replacing the pressing process used in the first layer, and it is also used for multi layer DVD discs.
A 30 nanometre-thick layer of silver alloy is then sputtered onto the disc, and the process is repeated as many times as required. Each repetition creates a new data layer. (The resins are applied again, pre-cured, stamped with data or grooves, and cured; silver alloy is sputtered; and so on.)
BD-R and BD-RE discs receive (through sputtering) a metal recording-layer alloy (which, in BD-RE, is sandwiched between two dielectric layers, also sputtered) before receiving the 30 nanometre metallization layer (silver alloy, aluminum, or gold), which is sputtered. Alternatively, the silver alloy may be applied before the recording layer. Silver alloys are usually used in Blu-rays, and aluminum is usually used on CDs and DVDs. Gold is used in some "archival" CDs and DVDs, since it is more chemically inert and resistant to corrosion than aluminum, which corrodes into aluminum oxide. This corrosion can be seen in disc rot as transparent patches or dots in the disc that prevent the disc from being read, since the laser light passes through the disc instead of being reflected back into the laser pickup assembly. Normally, aluminum does not corrode because a thin oxide layer forms on contact with oxygen; in this case, however, the metal layer is thin enough that corrosion can still compromise it.
Then, the 98 micron-thick cover layer is applied using UV-curable liquid optically clear adhesive, and a 2 micron-thick hard coat (such as Durabis) is also applied and cured using UV light. In the last step, a 10 nanometre-thick silicon nitride barrier layer is applied to the label side of the disc to protect against humidity. Blu-rays have their data very close to the read surface of the disc, which is necessary for Blu-rays to achieve their capacity.
Discs in large quantities can either be replicated or duplicated. In replication, the process explained above is used to make the discs, while in duplication, CD-R, DVD-R or BD-R discs are recorded and finalized to prevent further recording and allow for wider compatibility. (See Optical disc authoring). The equipment is also different: replication is carried out by fully automated purpose-built machinery whose cost is in the hundreds of thousands of US dollars in the used market, while duplication can be automated (using what's known as an autoloader) or be done by hand, and only requires a small tabletop duplicator.
Specifications
See also
Disc Description Protocol
List of optical disc manufacturers
Universal Disk Format (UDF)
References
External links
Longevity of Recordable CDs, DVDs and Blu-rays — Canadian Conservation Institute (CCI) Notes 19/1
Audiovisual introductions in 1884
Compact disc
DVD
Optical discs
Optical disc authoring
Optoelectronics
Optical computer storage media
et:Optiline andmekandja | Optical disc | [
"Technology"
] | 6,768 | [
"Multimedia",
"Optical disc authoring"
] |
41,460 | https://en.wikipedia.org/wiki/Optical%20isolator | An optical isolator, or optical diode, is an optical component which allows the transmission of light in only one direction. It is typically used to prevent unwanted feedback into an optical oscillator, such as a laser cavity.
The operation of conventional optical isolators relies on the Faraday effect (which in turn is produced by the magneto-optic effect), which is used in the main component, the Faraday rotator. However, integrated isolators that do not rely on magnetism have also been made in recent years.
Theory
The main component of the optical isolator is the Faraday rotator. The magnetic field, B, applied to the Faraday rotator causes a rotation in the polarization of the light due to the Faraday effect. The angle of rotation, β, is given by

β = νBd,

where ν is the Verdet constant of the material (amorphous or crystalline solid, liquid, crystalline liquid, vaporous, or gaseous) of which the rotator is made, and d is the length of the rotator. This is shown in Figure 2. Specifically for an optical isolator, the values are chosen to give a rotation of 45°.
It has been shown that a crucial requirement for any kind of optical isolator (not only the Faraday isolator) is some kind of non-reciprocal optics.
Polarization dependent isolator
The polarization dependent isolator, or Faraday isolator, is made of three parts, an input polarizer (polarized vertically), a Faraday rotator, and an output polarizer, called an analyzer (polarized at 45°).
Light traveling in the forward direction becomes polarized vertically by the input polarizer. The Faraday rotator will rotate the polarization by 45°. The analyzer then enables the light to be transmitted through the isolator.
Light traveling in the backward direction becomes polarized at 45° by the analyzer. The Faraday rotator will again rotate the polarization by 45°. This means the light is polarized horizontally (the direction of rotation is not sensitive to the direction of propagation). Since the polarizer is vertically aligned, the light will be extinguished.
Figure 2 shows a Faraday rotator with an input polarizer and an output analyzer. For a polarization dependent isolator, the angle β between the polarizer and the analyzer is set to 45°. The Faraday rotator is chosen to give a 45° rotation.
Polarization dependent isolators are typically used in free space optical systems, because the polarization of the source is typically maintained by the system. In optical fibre systems, the polarization direction is typically dispersed in non polarization maintaining systems, and the varying angle of polarization will lead to loss.
Polarization independent isolator
The polarization independent isolator is made of three parts, an input birefringent wedge (with its ordinary polarization direction vertical and its extraordinary polarization direction horizontal), a Faraday rotator, and an output birefringent wedge (with its ordinary polarization direction at 45°, and its extraordinary polarization direction at −45°).
Light traveling in the forward direction is split by the input birefringent wedge into its vertical (0°) and horizontal (90°) components, called the ordinary ray (o-ray) and the extraordinary ray (e-ray) respectively. The Faraday rotator rotates both the o-ray and e-ray by 45°. This means the o-ray is now at 45°, and the e-ray is at −45°. The output birefringent wedge then recombines the two components.
Light traveling in the backward direction is separated into the o-ray at 45° and the e-ray at −45° by the birefringent wedge. The Faraday rotator again rotates both rays by 45°. Now the o-ray is at 90° and the e-ray is at 0°. Instead of being focused by the second birefringent wedge, the rays diverge.
Typically collimators are used on either side of the isolator. In the transmitted direction the beam is split and then combined and focused into the output collimator. In the isolated direction the beam is split, and then diverged, so it does not focus at the collimator.
Figure 3 shows the propagation of light through a polarization independent isolator. The forward travelling light is shown in blue, and the backward propagating light is shown in red. The rays were traced using an ordinary refractive index of 2, and an extraordinary refractive index of 3. The wedge angle is 7°.
The Faraday rotator
The most important optical element in an isolator is the Faraday rotator. The characteristics one looks for in a Faraday rotator optic include a high Verdet constant, a low absorption coefficient, a low non-linear refractive index, and a high damage threshold. Also, to prevent self-focusing and other thermal-related effects, the optic should be as short as possible. The two most commonly used materials for the 700–1100 nm range are terbium-doped borosilicate glass and terbium gallium garnet (TGG) crystal. For long-distance fibre communication, typically at 1310 nm or 1550 nm, yttrium iron garnet (YIG) crystals are used. Commercial YIG-based Faraday isolators reach isolations higher than 30 dB.
Optical isolators are different from 1/4-wave-plate-based isolators because the Faraday rotator provides non-reciprocal rotation while maintaining linear polarization. That is, the polarization rotation due to the Faraday rotator is always in the same relative direction: in the forward direction the rotation is positive 45°, and in the reverse direction it is −45° relative to the direction of travel. This is due to the change in the relative magnetic field direction, positive one way, negative the other. The rotations then add to a total of 90° when the light travels in the forward direction and then in the backward direction, which allows the higher isolation to be achieved.
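This non-reciprocal behavior can be verified with Jones calculus. Below is a minimal sketch (axis conventions chosen arbitrarily, with the input polarizer along x; the non-reciprocity is modeled by applying the same +45° lab-frame rotation in both propagation directions):

```python
import numpy as np

def rot(deg):
    """Rotation of the polarization vector by deg degrees (the Faraday rotator)."""
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def pol(deg):
    """Jones matrix of an ideal linear polarizer with its axis at deg degrees."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c * c, c * s], [c * s, s * s]])

E = np.array([1.0, 1.0]) / np.sqrt(2)   # arbitrary input field, unit intensity

# Forward: input polarizer (0 deg) -> rotator (+45) -> analyzer (45 deg)
fwd = pol(45) @ rot(45) @ pol(0) @ E
# Backward: analyzer (45) -> rotator (+45 again, non-reciprocal) -> polarizer (0)
bwd = pol(0) @ rot(45) @ pol(45) @ E

print(np.sum(fwd**2))   # ~0.5: transmitted (half lost at the first ideal polarizer)
print(np.sum(bwd**2))   # ~0.0: extinguished in the backward direction
```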
Optical isolators and thermodynamics
It might seem at first glance that a device that allows light to flow in only one direction would violate Kirchhoff's law and the second law of thermodynamics, by allowing light energy to flow from a cold object to a hot object and blocking it in the other direction, but the violation is avoided because the isolator must absorb (not reflect) the light from the hot object and will eventually reradiate it to the cold one. Attempts to re-route the photons back to their source unavoidably involve creating a route by which other photons can travel from the hot body to the cold one, avoiding the paradox.
See also
Isolator (microwave)
References
External links
Telecommunication isolators, good pictures
Compact optical isolators and Faraday isolators
Optical components | Optical isolator | [
"Materials_science",
"Technology",
"Engineering"
] | 1,460 | [
"Glass engineering and science",
"Optical components",
"Components"
] |
41,461 | https://en.wikipedia.org/wiki/Optical%20path%20length | In optics, optical path length (OPL, denoted Λ in equations), also known as optical length or optical distance, is the length that light needs to travel through a vacuum to create the same phase difference as it would have when traveling through a given medium. It is calculated by taking the product of the geometric length of the optical path followed by light and the refractive index of the homogeneous medium through which the light ray propagates; for inhomogeneous optical media, the product above is generalized as a path integral as part of the ray tracing procedure. A difference in OPL between two paths is often called the optical path difference (OPD). OPL and OPD are important because they determine the phase of the light and govern interference and diffraction of light as it propagates.
In a medium of constant refractive index, n, the OPL for a path of geometrical length s is just

Λ = n s.
If the refractive index varies along the path, the OPL is given by the line integral

Λ = ∫C n(s) ds,

where n is the local refractive index as a function of distance s along the path C.
An electromagnetic wave propagating along a path C undergoes the same phase shift over C as if it were propagating along a vacuum path whose length equals the optical path length of C. Thus, if a wave travels through several different media, the optical path lengths of the individual media can be added to find the total optical path length. The optical path difference between the paths taken by two identical waves can then be used to find the phase change. Finally, using the phase change, the interference between the two waves can be calculated.
Fermat's principle states that the path light takes between two points is the path that has the minimum optical path length.
Optical path difference
The OPD corresponds to the phase shift undergone by the light emitted from two previously coherent sources when passed through mediums of different refractive indices. For example, a wave passing through air appears to travel a shorter distance than an identical wave traveling the same distance in glass. This is because a larger number of wavelengths fit in the same distance due to the higher refractive index of the glass.
The OPD can be calculated from the following equation:

OPD = d1 n1 − d2 n2,

where d1 and d2 are the distances traveled by the ray in medium 1 and medium 2 respectively, n1 is the greater refractive index (e.g., glass) and n2 is the smaller refractive index (e.g., air).
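A short numeric sketch of the glass-versus-air example above, with assumed values (1 mm traversed in each medium, n = 1.5 for glass, n = 1.0 for air, 500 nm vacuum wavelength):

```python
import math

n_glass, n_air = 1.5, 1.0
d = 1e-3                      # 1 mm path in each medium (metres)
lam = 500e-9                  # vacuum wavelength (metres)

opd = d * n_glass - d * n_air            # optical path difference
phase = 2 * math.pi * opd / lam          # resulting phase difference (radians)

print(opd)                    # 5e-4 m: the glass adds half a millimetre of OPL
print(phase / (2 * math.pi))  # 1000: that many extra wavelengths fit in the glass path
```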
See also
Air mass (astronomy)
Lagrangian optics
Hamiltonian optics
Fermat's principle
Optical depth
References
Geometrical optics
Physical optics
Optical quantities | Optical path length | [
"Physics",
"Mathematics"
] | 542 | [
"Optical quantities",
"Quantity",
"Physical quantities"
] |
41,463 | https://en.wikipedia.org/wiki/Optical%20power%20margin | In an optical communications link, the optical power margin is the difference between the optical power that is launched by a given transmitter into the fiber, less transmission losses from all causes, and the minimum optical power that is required by the receiver for a specified level of performance. An optical power margin is typically measured using a calibrated light source and an optical power meter.
The optical power margin is usually expressed in decibels (dB). At least several dB of optical power margin should be included in the optical power budget. The amount of optical power launched into a given fiber by a given transmitter depends on the nature of its active optical source (LED or laser diode) and the type of fiber, including such parameters as core diameter and numerical aperture.
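The margin calculation itself is simple arithmetic in decibel units. A sketch with assumed example values:

```python
launch_power_dbm = -3.0     # power launched into the fiber by the transmitter
receiver_min_dbm = -24.0    # minimum power the receiver needs (sensitivity)
losses_db = 14.5            # total fiber, splice, and connector losses

margin_db = launch_power_dbm - losses_db - receiver_min_dbm
print(margin_db)            # 6.5 dB of optical power margin

# A power budget usually reserves several dB of margin for aging and repairs:
assert margin_db >= 3.0, "link budget too tight"
```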
References
Optical communications | Optical power margin | [
"Engineering"
] | 156 | [
"Optical communications",
"Telecommunications engineering"
] |
41,464 | https://en.wikipedia.org/wiki/Visible%20spectrum | The visible spectrum is the band of the electromagnetic spectrum that is visible to the human eye. Electromagnetic radiation in this range of wavelengths is called visible light (or simply light).
The optical spectrum is sometimes considered to be the same as the visible spectrum, but some authors define the term more broadly, to include the ultraviolet and infrared parts of the electromagnetic spectrum as well, known collectively as optical radiation.
A typical human eye will respond to wavelengths from about 380 to about 750 nanometers. In terms of frequency, this corresponds to a band in the vicinity of 400–790 terahertz. These boundaries are not sharply defined and may vary per individual. Under optimal conditions, these limits of human perception can extend to 310 nm (ultraviolet) and 1100 nm (near infrared).
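The frequency band quoted above follows directly from f = c/λ. A quick check using the rounded wavelength limits from this paragraph:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def freq_thz(wavelength_nm: float) -> float:
    """Convert a vacuum wavelength in nanometres to frequency in terahertz."""
    return C / (wavelength_nm * 1e-9) / 1e12

print(freq_thz(750))  # ~400 THz at the red end
print(freq_thz(380))  # ~789 THz at the violet end
```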
The spectrum does not contain all the colors that the human visual system can distinguish. Unsaturated colors such as pink, or purple variations like magenta, for example, are absent because they can only be made from a mix of multiple wavelengths. Colors containing only one wavelength are also called pure colors or spectral colors.
Visible wavelengths pass largely unattenuated through the Earth's atmosphere via the "optical window" region of the electromagnetic spectrum. An example of this phenomenon is that clean air scatters blue light more than red light, and so the midday sky appears blue (apart from the area around the Sun, which appears white because the light is not scattered as much). The optical window is also referred to as the "visible window" because it overlaps the human visible response spectrum. The near infrared (NIR) window lies just beyond human vision, as do the medium wavelength infrared (MWIR) window and the long-wavelength or far-infrared (LWIR or FIR) window, although other animals may perceive them.
Spectral colors
Colors that can be produced by visible light of a narrow band of wavelengths (monochromatic light) are called pure spectral colors. The various color ranges indicated in the illustration are an approximation: The spectrum is continuous, with no clear boundaries between one color and the next.
History
In the 13th century, Roger Bacon theorized that rainbows were produced by a similar process to the passage of light through glass or crystal.
In the 17th century, Isaac Newton discovered that prisms could disassemble and reassemble white light, and described the phenomenon in his book Opticks. He was the first to use the word spectrum (Latin for "appearance" or "apparition") in this sense in print in 1671 in describing his experiments in optics. Newton observed that, when a narrow beam of sunlight strikes the face of a glass prism at an angle, some is reflected and some of the beam passes into and through the glass, emerging as different-colored bands. Newton hypothesized light to be made up of "corpuscles" (particles) of different colors, with the different colors of light moving at different speeds in transparent matter, red light moving more quickly than violet in glass. The result is that red light is bent (refracted) less sharply than violet as it passes through the prism, creating a spectrum of colors.
Newton originally divided the spectrum into six named colors: red, orange, yellow, green, blue, and violet. He later added indigo as the seventh color, since he believed that seven was a perfect number, an idea derived from the ancient Greek sophists, who saw a connection between the colors, the musical notes, the known objects in the Solar System, and the days of the week. The human eye is relatively insensitive to indigo's frequencies, and some people who have otherwise-good vision cannot distinguish indigo from blue and violet. For this reason, some later commentators, including Isaac Asimov, have suggested that indigo should not be regarded as a color in its own right but merely as a shade of blue or violet. Evidence indicates that what Newton meant by "indigo" and "blue" does not correspond to the modern meanings of those color words. Comparing Newton's observation of prismatic colors with a color image of the visible light spectrum shows that "indigo" corresponds to what is today called blue, whereas his "blue" corresponds to cyan.
In the 18th century, Johann Wolfgang von Goethe wrote about optical spectra in his Theory of Colours. Goethe used the word spectrum (Spektrum) to designate a ghostly optical afterimage, as did Schopenhauer in On Vision and Colors. Goethe argued that the continuous spectrum was a compound phenomenon. Where Newton narrowed the beam of light to isolate the phenomenon, Goethe observed that a wider aperture produces not a spectrum but rather reddish-yellow and blue-cyan edges with white between them. The spectrum appears only when these edges are close enough to overlap.
In the early 19th century, the concept of the visible spectrum became more definite, as light outside the visible range was discovered and characterized by William Herschel (infrared) and Johann Wilhelm Ritter (ultraviolet), Thomas Young, Thomas Johann Seebeck, and others.
Young was the first to measure the wavelengths of different colors of light, in 1802.
The connection between the visible spectrum and color vision was explored by Thomas Young and Hermann von Helmholtz in the early 19th century. Their theory of color vision correctly proposed that the eye uses three distinct receptors to perceive color.
Limits to visible range
The visible spectrum is limited to wavelengths that can both reach the retina and trigger visual phototransduction (excite a visual opsin). Insensitivity to UV light is generally limited by transmission through the lens. Insensitivity to IR light is limited by the spectral sensitivity functions of the visual opsins. The range is defined psychometrically by the luminous efficiency function, which accounts for all of these factors. In humans, there is a separate function for each of two visual systems: one for photopic vision, used in daylight, which is mediated by cone cells, and one for scotopic vision, used in dim light, which is mediated by rod cells. Each of these functions has a different visible range. However, discussion of the visible range generally assumes photopic vision.
Atmospheric transmission
The visible range of most animals evolved to match the optical window, which is the range of light that can pass through the atmosphere. The ozone layer absorbs almost all UV light (below 315 nm). However, this only affects cosmic light (e.g. sunlight), not terrestrial light (e.g. bioluminescence).
Ocular transmission
Before reaching the retina, light must first transmit through the cornea and lens. UVB light (< 315 nm) is filtered mostly by the cornea, and UVA light (315–400 nm) is filtered mostly by the lens. The lens also yellows with age, attenuating transmission most strongly at the blue part of the spectrum. This can cause xanthopsia as well as a slight truncation of the short-wave (blue) limit of the visible spectrum. Subjects with aphakia are missing a lens, so UVA light can reach the retina and excite the visual opsins; this expands the visible range and may also lead to cyanopsia.
Opsin absorption
Each opsin has a spectral sensitivity function, which defines how likely it is to absorb a photon of each wavelength. The luminous efficiency function is approximately the superposition of the contributing visual opsins. Variance in the position of the individual opsin spectral sensitivity functions therefore affects the luminous efficiency function and the visible range. For example, the long-wave (red) limit changes proportionally to the position of the L-opsin. The positions are defined by the peak wavelength (wavelength of highest sensitivity), so as the L-opsin peak wavelength blue shifts by 10 nm, the long-wave limit of the visible spectrum also shifts 10 nm. Large deviations of the L-opsin peak wavelength lead to a form of color blindness called protanomaly and a missing L-opsin (protanopia) shortens the visible spectrum by about 30 nm at the long-wave limit. Forms of color blindness affecting the M-opsin and S-opsin do not significantly affect the luminous efficiency function nor the limits of the visible spectrum.
Different definitions
Regardless of actual physical and biological variance, the definition of the limits is not standard and will change depending on the industry. For example, some industries may be concerned with practical limits, and so would conservatively report 420–680 nm, while others concerned with psychometrics and achieving the broadest spectrum would liberally report 380–750, or even 380–800 nm. The luminous efficiency function in the NIR does not have a hard cutoff, but rather an exponential decay, such that the function's value (or vision sensitivity) at 1,050 nm is about 10⁹ times weaker than at 700 nm; much higher intensity is therefore required to perceive 1,050 nm light than 700 nm light.
Vision outside the visible spectrum
Under ideal laboratory conditions, subjects may perceive infrared light up to at least 1,064 nm. While 1,050 nm NIR light can evoke red, suggesting direct absorption by the L-opsin, there are also reports that pulsed NIR lasers can evoke green, which suggests two-photon absorption may be enabling extended NIR sensitivity.
Similarly, young subjects may perceive ultraviolet wavelengths down to about 310–313 nm, but detection of light below 380 nm may be due to fluorescence of the ocular media, rather than direct absorption of UV light by the opsins. As UVA light is absorbed by the ocular media (lens and cornea), it may fluoresce and be released at a lower energy (longer wavelength) that can then be absorbed by the opsins. For example, when the lens absorbs 350 nm light, the fluorescence emission spectrum is centered on 440 nm.
Non-visual light detection
In addition to the photopic and scotopic systems, humans have other systems for detecting light that do not contribute to the primary visual system. For example, melanopsin has an absorption range of 420–540 nm and regulates circadian rhythm and other reflexive processes. Since the melanopsin system does not form images, it is not strictly considered vision and does not contribute to the visible range.
In non-humans
The visible spectrum is defined as that visible to humans, but the variance between species is large. Not only can cone opsins be spectrally shifted to alter the visible range, but vertebrates with 4 cones (tetrachromatic) or 2 cones (dichromatic) relative to humans' 3 (trichromatic) will also tend to have a wider or narrower visible spectrum than humans, respectively.
Vertebrates tend to have 1–4 different opsin classes:
longwave sensitive (LWS) with peak sensitivity between 500–570 nm,
middlewave sensitive (MWS) with peak sensitivity between 480–520 nm,
shortwave sensitive (SWS) with peak sensitivity between 415–470 nm, and
violet/ultraviolet sensitive (VS/UVS) with peak sensitivity between 355–435 nm.
Testing the visual systems of animals behaviorally is difficult, so the visible range of animals is usually estimated by comparing the peak wavelengths of opsins with those of typical humans (S-opsin at 420 nm and L-opsin at 560 nm).
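That comparison can be written down directly. The sketch below assumes each limit of the visible range shifts one-for-one with the corresponding opsin peak, as described above; the human reference limits (380–750 nm) are one of the definitions quoted earlier, and the example animal peaks are illustrative placeholders.

```python
# Human reference values from the text: S-opsin peak 420 nm, L-opsin peak 560 nm.
HUMAN_S_PEAK, HUMAN_L_PEAK = 420.0, 560.0
HUMAN_SHORT_LIMIT, HUMAN_LONG_LIMIT = 380.0, 750.0  # one quoted definition

def estimated_visible_range(s_peak_nm: float, l_peak_nm: float) -> tuple[float, float]:
    """Shift each human limit by the difference in the corresponding opsin peak."""
    short_limit = HUMAN_SHORT_LIMIT + (s_peak_nm - HUMAN_S_PEAK)
    long_limit = HUMAN_LONG_LIMIT + (l_peak_nm - HUMAN_L_PEAK)
    return short_limit, long_limit

# Hypothetical rodent-like peaks (UVS ~360 nm, LWS ~510 nm):
print(estimated_visible_range(360.0, 510.0))  # -> (320.0, 700.0)
```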
Mammals
Most mammals have retained only two opsin classes (LWS and VS), likely due to the nocturnal bottleneck. However, old world primates (including humans) have since evolved two versions in the LWS class to regain trichromacy. Unlike those of most mammals, rodents' UVS opsins have remained at shorter wavelengths. Along with their lack of UV filters in the lens, mice have a UVS opsin that can detect down to 340 nm. While allowing UV light to reach the retina can lead to retinal damage, the short lifespan of mice compared with other mammals may minimize this disadvantage relative to the advantage of UV vision. Dogs have two cone opsins at 429 nm and 555 nm, so they see almost the entire visible spectrum of humans, despite being dichromatic. Horses have two cone opsins at 428 nm and 539 nm, yielding slightly more truncated red vision.
Birds
Most other vertebrates (birds, lizards, fish, etc.) have retained their tetrachromacy, including UVS opsins that extend further into the ultraviolet than humans' VS opsin. The sensitivity of avian UVS opsins varies greatly, from 355 to 425 nm, and that of LWS opsins from 560 to 570 nm. This means some birds have a visible spectrum on par with humans, while others have greatly expanded sensitivity to UV light. The LWS opsin of birds is sometimes reported to have a peak wavelength above 600 nm, but this is an effective peak wavelength that incorporates the filtering of avian oil droplets; the peak wavelength of the LWS opsin alone is the better predictor of the long-wave limit. A possible benefit of avian UV vision involves sex-dependent markings on their plumage that are visible only in the ultraviolet range.
Fish
Teleosts (bony fish) are generally tetrachromatic. The sensitivity of fish UVS opsins varies from 347 to 383 nm, and that of LWS opsins from 500 to 570 nm. However, some fish that use alternative chromophores can extend their LWS opsin sensitivity to 625 nm. The popular belief that the common goldfish is the only animal that can see both infrared and ultraviolet light is incorrect, because goldfish cannot see infrared light.
Invertebrates
The visual systems of invertebrates deviate greatly from those of vertebrates, so direct comparisons are difficult. However, UV sensitivity has been reported in most insect species.
Bees and many other insects can detect ultraviolet light, which helps them find nectar in flowers. Plant species that depend on insect pollination may owe reproductive success to their appearance in ultraviolet light rather than how colorful they appear to humans. Bees' long-wave limit is at about 590 nm. Mantis shrimp exhibit up to 14 opsins, enabling a visible range from below 300 nm to above 700 nm.
Thermal vision
Some snakes can "see" radiant heat at wavelengths between 5 and 30 μm to a degree of accuracy such that a blind rattlesnake can target vulnerable body parts of the prey at which it strikes, and other snakes with the organ may detect warm bodies from a meter away. It may also be used in thermoregulation and predator detection.
Spectroscopy
Spectroscopy is the study of objects based on the spectrum of color they emit, absorb or reflect. Visible-light spectroscopy is an important tool in astronomy (as is spectroscopy at other wavelengths), where scientists use it to analyze the properties of distant objects. Chemical elements and small molecules can be detected in astronomical objects by observing emission lines and absorption lines. For example, helium was first detected by analysis of the spectrum of the Sun. The shift in frequency of spectral lines is used to measure the Doppler shift (redshift or blueshift) of distant objects to determine their velocities towards or away from the observer. Astronomical spectroscopy uses high-dispersion diffraction gratings to observe spectra at very high spectral resolutions.
See also
High-energy visible light
Cosmic ray visual phenomena
Electromagnetic absorption by water
Two-photon absorption – a method for seeing outside the visible spectrum
References
Color
Electromagnetic spectrum
Vision | Visible spectrum | [
"Physics"
] | 3,162 | [
"Optical spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
41,474 | https://en.wikipedia.org/wiki/Overfill | In telecommunications, overfill is the condition that prevails when the numerical aperture or the beam diameter of an optical source, such as a laser, light-emitting diode, or optical fiber, exceeds that of the driven element, e.g. an optical fiber core. In optical communications testing, overfill in both numerical aperture and mean diameter (core diameter or spot size) is usually required.
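As a toy illustration of the launch condition just described, the check below tests whether a source overfills a fiber in both numerical aperture and spot diameter; the function name and all values are hypothetical.

```python
# Hypothetical check: for testing, the source should overfill the fiber in
# both numerical aperture (NA) and spot diameter.
def overfills_for_testing(source_na: float, source_spot_um: float,
                          fiber_na: float, core_diameter_um: float) -> bool:
    return source_na > fiber_na and source_spot_um > core_diameter_um

# An LED launch (NA 0.30, 70 um spot) into a 50 um, NA 0.22 multimode fiber:
print(overfills_for_testing(0.30, 70.0, 0.22, 50.0))  # True
```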
In polygonal mirror scanners, an overfilled type is one in which the beam completely fills each mirror facet in at least one dimension.
References
Federal Standard 1037C
Optical communications | Overfill | [
"Engineering"
] | 121 | [
"Optical communications",
"Telecommunications engineering"
] |
41,480 | https://en.wikipedia.org/wiki/Overtone | An overtone is any resonant frequency above the fundamental frequency of a sound. (An overtone may or may not be a harmonic.) In other words, overtones are all pitches higher than the lowest pitch within an individual sound; the fundamental is the lowest pitch. While the fundamental is usually heard most prominently, overtones are actually present in any pitch except a true sine wave. The relative volume or amplitude of the various overtone partials is one of the key identifying features of timbre, or the individual characteristic of a sound.
Using the model of Fourier analysis, the fundamental and the overtones together are called partials. Harmonics, or more precisely, harmonic partials, are partials whose frequencies are integer multiples of the fundamental (including the fundamental, which is 1 times itself). These overlapping terms are variously used when discussing the acoustic behavior of musical instruments. (See etymology below.) The model of Fourier analysis provides for the inclusion of inharmonic partials, which are partials whose frequencies are not whole-number ratios of the fundamental (such as 1.1 or 2.14179).
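A small numeric sketch of the distinction, using the inharmonic ratios quoted above; the 110 Hz fundamental is an arbitrary choice.

```python
fundamental_hz = 110.0  # arbitrary example fundamental

# Harmonic partials: integer multiples of the fundamental (the 1st partial is
# the fundamental itself).
harmonic_partials = [n * fundamental_hz for n in range(1, 6)]

# Inharmonic partials: non-whole-number ratios, e.g. the 1.1 and 2.14179
# mentioned in the text.
inharmonic_partials = [r * fundamental_hz for r in (1.1, 2.14179)]

print("harmonic:  ", harmonic_partials)    # [110.0, 220.0, 330.0, 440.0, 550.0]
print("inharmonic:", inharmonic_partials)  # [121.0, ~235.6]
```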
When a resonant system such as a blown pipe or plucked string is excited, a number of overtones may be produced along with the fundamental tone. In simple cases, such as for most musical instruments, the frequencies of these tones are the same as (or close to) the harmonics. Examples of exceptions include the circular drum – a timpani whose first overtone is about 1.6 times its fundamental resonance frequency, gongs and cymbals, and brass instruments. The human vocal tract is able to produce highly variable amplitudes of the overtones, called formants, which define different vowels.
Explanation
Most oscillators, from a plucked guitar string to a flute that is blown, will naturally vibrate at a series of distinct frequencies known as normal modes. The lowest normal mode frequency is known as the fundamental frequency, while the higher frequencies are called overtones. Often, when an oscillator is excited — for example, by plucking a guitar string — it will oscillate at several of its modal frequencies at the same time. So when a note is played, this gives the sensation of hearing other frequencies (overtones) above the lowest frequency (the fundamental).
Timbre is the quality that gives the listener the ability to distinguish between the sound of different instruments. The timbre of an instrument is determined by which overtones it emphasizes. That is to say, the relative volumes of these overtones to each other determines the specific "flavor", "color" or "tone" of sound of that family of instruments. The intensity of each of these overtones is rarely constant for the duration of a note. Over time, different overtones may decay at different rates, causing the relative intensity of each overtone to rise or fall independent of the overall volume of the sound. A carefully trained ear can hear these changes even in a single note. This is why the timbre of a note may be perceived differently when played staccato or legato.
A driven non-linear oscillator, such as the vocal folds, a blown wind instrument, or a bowed violin string (but not a struck guitar string or bell) will oscillate in a periodic, non-sinusoidal manner. This generates the impression of sound at integer multiple frequencies of the fundamental known as harmonics, or more precisely, harmonic partials. For most string instruments and other long and thin instruments such as a bassoon, the first few overtones are quite close to integer multiples of the fundamental frequency, producing an approximation to a harmonic series. Thus, in music, overtones are often called harmonics. Depending upon how the string is plucked or bowed, different overtones can be emphasized.
However, some overtones in some instruments may not be of a close integer multiplication of the fundamental frequency, thus causing a small dissonance. "High quality" instruments are usually built in such a manner that their individual notes do not create disharmonious overtones. In fact, the flared end of a brass instrument is not to make the instrument sound louder, but to correct for tube length “end effects” that would otherwise make the overtones significantly different from integer harmonics. This is illustrated by the following:
Consider a guitar string. Its idealized 1st overtone would be exactly twice its fundamental if its length were shortened by half, perhaps by lightly pressing the string at the 12th fret; however, if a vibrating string is examined, it will be seen that the string does not vibrate flush to the bridge and nut, but instead has a small "dead length" of string at each end. This dead length varies from string to string, being more pronounced with thicker and/or stiffer strings. This means that halving the physical string length does not halve the actual vibrating length, and, hence, the overtones will not be exact multiples of the fundamental frequency. The effect is so pronounced that properly set up guitars angle the bridge such that the thinner strings progressively have a length up to a few millimeters shorter than the thicker strings. Not doing so would result in inharmonious chords made up of two or more strings. Similar considerations apply to tube instruments.
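A back-of-the-envelope sketch of the dead-length effect described above; the 650 mm scale and 1 mm per-end dead length are hypothetical values chosen only to show the direction and size of the error.

```python
SCALE_MM = 650.0  # hypothetical scale length
DEAD_MM = 1.0     # hypothetical non-vibrating "dead length" at each end

def vibrating_length(physical_mm: float) -> float:
    return physical_mm - 2 * DEAD_MM

open_string = vibrating_length(SCALE_MM)
fretted = vibrating_length(SCALE_MM / 2)  # physically halved at the 12th fret

# Frequency is inversely proportional to vibrating length, so the "octave"
# comes out slightly wider than an exact 2:1 ratio:
print(open_string / fretted)  # ~2.0062, not 2.0000
```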
Musical usage term
An overtone is a partial (a "partial wave" or "constituent frequency") that can be either a harmonic partial (a harmonic) other than the fundamental, or an inharmonic partial. A harmonic frequency is an integer multiple of the fundamental frequency. An inharmonic frequency is a non-integer multiple of a fundamental frequency.
An example of harmonic overtones on a fundamental of 110 Hz: 220 Hz (1st overtone, 2nd harmonic), 330 Hz (2nd overtone, 3rd harmonic), 440 Hz (3rd overtone, 4th harmonic), and so on, with each partial an exact integer multiple of the fundamental (absolute harmony).
Some musical instruments produce overtones that are slightly sharper or flatter than true harmonics. The sharpness or flatness of their overtones is one of the elements that contributes to their sound. Due to phase inconsistencies between the fundamental and the partial harmonic, this also has the effect of making their waveforms not perfectly periodic.
Musical instruments that can create notes of any desired duration and definite pitch have harmonic partials.
A tuning fork, provided it is sounded with a mallet (or equivalent) that is reasonably soft, has a tone that consists very nearly of the fundamental, alone; it has a sinusoidal waveform. Nevertheless, music consisting of pure sinusoids was found to be unsatisfactory in the early 20th century.
Etymology
In Hermann von Helmholtz's classic "On the Sensations of Tone" he used the German "Obertöne", a contraction of "Oberpartialtöne", or in English: "upper partial tones". According to Alexander Ellis (in pages 24–25 of his English translation of Helmholtz), the similarity of German "ober" to English "over" caused a Prof. Tyndall to mistranslate Helmholtz's term, thus creating "overtone". Ellis disparages the term "overtone" for its awkward implications. Because "overtone" makes the upper partials seem like such a distinct phenomenon, it leads to the numbering problem whereby the first overtone is the second partial. Also, unlike discussion of "partials", the word "overtone" has connotations that have led people to wonder about the presence of "undertones" (a term sometimes confused with "difference tones" but also used in speculation about a hypothetical "undertone series").
"Overtones" in choral music
In barbershop music, a style of four-part singing, the word overtone is often used in a related but particular manner. It refers to a psychoacoustic effect in which a listener hears an audible pitch that is higher than, and different from, the fundamentals of the four pitches being sung by the quartet. The barbershop singer's "overtone" is created by the interactions of the upper partial tones in each singer's note (and by sum and difference frequencies created by nonlinear interactions within the ear). Similar effects can be found in other a cappella polyphonic music, such as the music of the Republic of Georgia and the Sardinian cantu a tenore. Overtones are naturally highlighted when singing in a particularly resonant space, such as a church. One theory of the development of polyphony in Europe holds that singers of Gregorian chant, originally monophonic, began to hear the overtones of their monophonic song and to imitate these pitches; since the fifth, octave, and major third are the loudest vocal overtones, this is one explanation of the development of the triad and the idea of consonance in music.
The first step in composing choral music with overtone singing is to discover what the singers can be expected to do successfully without extensive practice. The second step is to find a musical context in which those techniques could be effective, not mere special effects. It was initially hypothesized that beginners would be able to:
glissando through the partials of a given fundamental, ascending or descending, fast, or slow
use vowels/text for relative pitch gestures on indeterminate partials specifying the given shape without specifying particular partials
improvise on partials of the given fundamental, ad lib., freely, or in giving style or manner
find and sustain a particular partial (requires interval recognition)
by extension, move to an adjacent partial, above or below, and alternate between the two
Singers should not be asked to change the fundamental pitch while overtone singing and changing partials should always be to an adjacent partial. When a particular partial is to be specified, time should be allowed (a beat or so) for the singers to get the harmonics to "speak" and find the correct one.
String instruments
String instruments can also produce multiphonic tones when strings are divided in two pieces or the sound is somehow distorted. The sitar has sympathetic strings which help to bring out the overtones while one is playing. The overtones are also highly important in the tanpura, the drone instrument in traditional North and South Indian music, in which loose strings tuned at octaves and fifths are plucked and designed to buzz to create sympathetic resonance and highlight the cascading sound of the overtones.
Western string instruments, such as the violin, may be played close to the bridge (a technique called "sul ponticello" or "am Steg") which causes the note to split into overtones while attaining a distinctive glassy, metallic sound. Various techniques of bow pressure may also be used to bring out the overtones, as well as using string nodes to produce natural harmonics. On violin family instruments, overtones can be played with the bow or by plucking. Scores and parts for Western violin family instruments indicate where the performer is to play harmonics. The most well-known technique on a guitar is playing flageolet tones or using distortion effects. The ancient Chinese instrument the guqin contains a scale based on the knotted positions of overtones. The Vietnamese đàn bầu functions on flageolet tones. Other multiphonic extended techniques used are prepared piano, prepared guitar and 3rd bridge.
Wind instruments
Wind instruments manipulate the overtone series significantly in the normal production of sound, but various playing techniques may be used to produce multiphonics which bring out the overtones of the instrument. On many woodwind instruments, alternate fingerings are used. "Overblowing", or adding intensely exaggerated air pressure, can also cause notes to split into their overtones. In brass instruments, multiphonics may be produced by singing into the instrument while playing a note at the same time, causing the two pitches to interact; if the sung pitch is at specific harmonic intervals with the played pitch, the two sounds will blend and produce additional notes by the phenomenon of sum and difference tones.
Non-western wind instruments also exploit overtones in playing, and some may highlight the overtone sound exceptionally. Instruments like the didgeridoo are highly dependent on the interaction and manipulation of overtones achieved by the performer changing their mouth shape while playing, or singing and playing simultaneously. Likewise, when playing a harmonica or pitch pipe, one may alter the shape of their mouth to amplify specific overtones. Though not a wind instrument, a similar technique is used for playing the jaw harp: the performer amplifies the instrument's overtones by changing the shape, and therefore the resonance, of their vocal tract.
Brass instruments
Brass instruments originally had no valves, and could only play the notes of the natural overtone, or harmonic, series.
Brass instruments still rely heavily on the overtone series to produce notes: the tuba typically has 3–4 valves, the tenor trombone has 7 slide positions, the trumpet has 3 valves, and the French horn typically has 4 valves. Each instrument can play (within its respective range) the notes of the overtone series in different keys with each fingering combination (open, 1, 2, 12, 123, etc.). The role of each valve or rotor (excluding the trombone) is as follows: the 1st valve lowers the pitch by a major 2nd, the 2nd valve by a minor 2nd, the 3rd valve by a minor 3rd, and the 4th valve (found on the piccolo trumpet, certain euphoniums, and many tubas) by a perfect 4th. The French horn has a trigger key that opens other tubing pitched a perfect fourth higher; this allows for greater ease between different registers of the instrument. Valves allow brass instruments to play chromatic notes, as well as notes within the overtone series (open valves give the C overtone series, and the 2nd valve gives the B overtone series, on a C trumpet), by changing air speed and lip vibration.
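The valve arithmetic above can be summarized in a few lines; the mapping below simply restates the intervals from the text as semitone counts (major 2nd = 2, minor 2nd = 1, minor 3rd = 3, perfect 4th = 5).

```python
# Semitones by which each valve lowers the open (no-valve) harmonic series.
VALVE_SEMITONES = {1: 2, 2: 1, 3: 3, 4: 5}  # 4th valve where fitted

def lowering(valves: tuple[int, ...]) -> int:
    """Total lowering in semitones for a fingering such as (1, 2)."""
    return sum(VALVE_SEMITONES[v] for v in valves)

print(lowering(()))         # 0 -> open: the C overtone series on a C trumpet
print(lowering((2,)))       # 1 -> the B overtone series, as noted above
print(lowering((1, 2)))     # 3 -> equivalent to the 3rd valve alone
print(lowering((1, 2, 3)))  # 6 -> the lowest standard 3-valve fingering
```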
The tuba, trombone, and trumpet play notes within the first few octaves of the overtone series, where the partials are farther apart. The French horn sounds notes in a higher octave of the overtone series, so the partials are closer together and make it more difficult to play the correct pitches and partials.
Overtone singing
Overtone singing is a traditional form of singing in many parts of the Himalayas and Altay; Tibetans, Mongols and Tuvans are known for their overtone singing. In these contexts it is often referred to as throat singing or khoomei, though it should not be confused with Inuit throat singing, which is produced by different means. It is also possible to produce overtones from fundamental tones without placing any stress on the throat.
Overtones are also important in singing for shaping the vocal tract and improving color, resonance, and text declamation. Practicing overtone singing helps the singer remove unnecessary pressure on the muscles, especially around the throat. If one can "find" a single overtone, one will know where the sensation needs to be in order to bring out vocal resonance in general, which helps in finding the resonance in one's own voice on any vowel and in any register.
Overtones in music composition
The primacy of the triad in Western harmony comes from the first four partials of the overtone series. The eighth through fourteenth partials resemble the equal-tempered acoustic scale.
When this scale is rendered as a chord, it is called the lydian dominant thirteenth chord. This chord appears throughout Western music, but is notably used as the basis of jazz harmony, features prominently in the music of Franz Liszt, Claude Debussy, Maurice Ravel, and appears as the Mystic chord in the music of Alexander Scriabin.
Because the overtone series rises infinitely from the fundamental with no periodicity, in Western music the equal temperament scale was designed to create synchronicity between different octaves. This was achieved by de-tuning certain intervals, such as the perfect fifth. A true perfect fifth is 702 cents above the fundamental, but equal temperament flattens it by two cents. The difference is only barely perceptible, and allows both for the illusion of the scale being in-tune with itself across multiple octaves, and for tonalities based on all 12 chromatic notes to sound in-tune.
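The two-cent figure follows directly from the definition of a cent as 1/1200 of an octave on a logarithmic frequency scale. A quick check:

```python
import math

def cents(ratio: float) -> float:
    """Interval size in cents: 1200 * log2(frequency ratio)."""
    return 1200.0 * math.log2(ratio)

just_fifth = cents(3 / 2)   # ~701.955 cents
equal_fifth = 700.0         # seven equal-tempered semitones
print(round(just_fifth - equal_fifth, 3))  # ~1.955, the "two cents" above
```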
Western classical composers have also made use of the overtone series through orchestration. In his treatise "Principles of Orchestration," Russian composer Nikolai Rimsky-Korsakov says the overtone series "may serve as a guide to the orchestral arrangement of chords". Rimsky-Korsakov then demonstrates how to voice a C major triad according to the overtone series, using partials 1, 2, 3, 4, 5, 6, 8, 10, 12, and 16.
In the 20th century, exposure to non-Western music and further scientific acoustical discoveries led some Western composers to explore alternate tuning systems. Harry Partch for example designed a tuning system that divides the octave into 43 tones, with each tone based on the overtone series. The music of Ben Johnston uses many different tuning systems, including his String Quartet No. 5 which divides the octave into more than 100 tones.
Spectral music is a genre developed by Gérard Grisey and Tristan Murail in the 1970s and 80s, under the auspices of IRCAM. Broadly, spectral music deals with resonance and acoustics as compositional elements. For example, in Grisey's seminal work Partiels, the composer used a sonogram to analyze the true sonic characteristics of the lowest note on a tenor trombone (E2). The analysis revealed which overtones were most prominent from that sound, and Partiels was then composed around the analysis. Another seminal spectral work is Tristan Murail's Gondwana for orchestra. This work begins with a spectral analysis of a bell, and gradually transforms it into the spectral analysis of a brass instrument. Other spectralists and post-spectralists include Jonathan Harvey, Kaija Saariaho, and Georg Friedrich Haas.
John Luther Adams is known for his extensive use of the overtone series, as well as his tendency to allow musicians to make their own groupings and play at their own pace to alter the sonic experience. For example, his piece Sila: The Breath of the World can be played by 16 to 80 musicians, who are separated into their own groups. The piece is set on sixteen "harmonic clouds" grounded on the first sixteen overtones of low B-flat. Another example is Adams's piece Everything That Rises, which grew out of Sila: The Breath of the World. Everything That Rises is a piece for string quartet that has sixteen harmonic clouds built off of the fundamental tone (C0).
See also
Combination tone
Harmonic
Just intonation
Mersenne's laws
Overtone band (in vibrational spectroscopy)
Scale of harmonics
Stretched octave
Undertone series
Xenharmonic music
References
External links
Overtones, partials and harmonics from fundamental frequency
Timbre: The Color of Music
Musical tuning
Acoustics | Overtone | [
"Physics"
] | 3,874 | [
"Classical mechanics",
"Acoustics"
] |
41,481 | https://en.wikipedia.org/wiki/Packet-switching%20node | A packet-switching node is a node in a packet-switching network that contains data switches and equipment for controlling, formatting, transmitting, routing, and receiving data packets.
Note: In the Defense Data Network (DDN), a packet-switching node is usually configured to support up to thirty-two X.25 56 kbit/s host connections, as many as six 56 kbit/s interswitch trunk (IST) lines to other packet-switching nodes, and at least one Terminal Access Controller (TAC).
Packets (information technology) | Packet-switching node | [
"Technology"
] | 116 | [
"Computing stubs",
"Computer network stubs"
] |
41,492 | https://en.wikipedia.org/wiki/Path%20loss | Path loss, or path attenuation, is the reduction in power density (attenuation) of an electromagnetic wave as it propagates through space. Path loss is a major component in the analysis and design of the link budget of a telecommunication system.
This term is commonly used in wireless communications and signal propagation. Path loss may be due to many effects, such as free-space loss, refraction, diffraction, reflection, aperture-medium coupling loss, and absorption. Path loss is also influenced by terrain contours, environment (urban or rural, vegetation and foliage), propagation medium (dry or moist air), the distance between the transmitter and the receiver, and the height and location of antennas.
Overview
In wireless communications, path loss is the reduction in signal strength as a signal travels from a transmitter to a receiver. Several factors affect this:
Free-space path loss: This is the fundamental loss that occurs due to the spreading of the radio wave as it propagates through space. It follows an inverse square law, meaning the signal strength decreases proportionally to the square of the distance between the transmitter and receiver.
Diffraction: When a radio wave encounters an obstacle, it can be diffracted, or bent around the edge of the obstacle. This can cause additional signal loss, especially in urban environments with many buildings.
Absorption: Certain atmospheric gases and obstacles like buildings and foliage can absorb radio waves, reducing their strength.
Reflection and scattering: Radio waves can be reflected off surfaces like buildings and the ground, and scattered by objects like trees and lampposts. This can lead to multipath propagation, where the receiver receives multiple copies of the signal that may interfere with each other.
To understand and minimize path loss, there are four key factors to consider in designing a wireless communication system:
1) Determine the required transmitter power: The transmitter must have enough power to overcome the path loss in order for the signal to reach the receiver with sufficient strength.
2) Determine the appropriate antenna design and gain: Antennas with higher gain can focus the waves in a specific direction, reducing the path loss.
3) Optimize modulation scheme: The choice of modulation scheme can affect the robustness of the signal to path loss.
4) Set the receiver sensitivity appropriately: The receiver must be sensitive enough to detect weak signals.
Causes
Path loss normally includes propagation losses caused by the natural expansion of the radio wave front in free space (which usually takes the shape of an ever-increasing sphere), absorption losses (sometimes called penetration losses), when the signal passes through media not transparent to electromagnetic waves, diffraction losses when part of the radiowave front is obstructed by an opaque obstacle, and losses caused by other phenomena.
The signal radiated by a transmitter may also travel along many and different paths to a receiver simultaneously; this effect is called multipath. Multipath waves combine at the receiver antenna, resulting in a received signal that may vary widely, depending on the distribution of the intensity and relative propagation time of the waves and bandwidth of the transmitted signal. The total power of interfering waves in a Rayleigh fading scenario varies quickly as a function of space (which is known as small scale fading). Small-scale fading refers to the rapid changes in radio signal amplitude in a short period of time or distance of travel.
Loss exponent
In the study of wireless communications, path loss can be represented by the path loss exponent, whose value is normally in the range of 2 to 4 (where 2 is for propagation in free space, 4 is for relatively lossy environments and for the case of full specular reflection from the earth surface—the so-called flat earth model). In some environments, such as buildings, stadiums and other indoor environments, the path loss exponent can reach values in the range of 4 to 6. On the other hand, a tunnel may act as a waveguide, resulting in a path loss exponent less than 2.
Path loss is usually expressed in dB. In its simplest form, the path loss can be calculated using the formula

$$L = 10\,n\,\log_{10}(d) + C$$

where $L$ is the path loss in decibels, $n$ is the path loss exponent, $d$ is the distance between the transmitter and the receiver, usually measured in meters, and $C$ is a constant which accounts for system losses.
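As a minimal sketch of the log-distance form reconstructed above (variable names match the formula; the distance and the constant C are illustrative):

```python
import math

def path_loss_db(distance_m: float, n: float, c_db: float = 0.0) -> float:
    """L = 10 * n * log10(d) + C, the simplest log-distance form."""
    return 10.0 * n * math.log10(distance_m) + c_db

# Free space (n = 2) versus a lossy environment (n = 4), per the text:
for n in (2.0, 4.0):
    print(f"n = {n}: {path_loss_db(1000.0, n):.0f} dB at 1 km")
```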
Radio engineer formula
Radio and antenna engineers use the following simplified formula (derived from the Friis Transmission Formula) for the signal path loss between the feed points of two isotropic antennas in free space:
Path loss in dB:

$$L = 20\,\log_{10}\!\left(\frac{4\pi d}{\lambda}\right)$$

where $L$ is the path loss in decibels, $\lambda$ is the wavelength, and $d$ is the transmitter–receiver distance in the same units as the wavelength. Note that the power density in space has no dependency on $\lambda$; the variable $\lambda$ exists in the formula to account for the effective capture area of the isotropic receiving antenna.
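A short numeric check of the free-space formula; the 2.4 GHz example frequency is an arbitrary choice, and the output shows the 20 dB-per-decade behavior mentioned in the approximations below.

```python
import math

def free_space_path_loss_db(distance: float, wavelength: float) -> float:
    """L = 20 * log10(4 * pi * d / lambda); d and lambda in the same units."""
    return 20.0 * math.log10(4.0 * math.pi * distance / wavelength)

wavelength_m = 3.0e8 / 2.4e9  # ~0.125 m at 2.4 GHz (illustrative)
for d_m in (100.0, 1000.0):
    print(f"{d_m:>6.0f} m: {free_space_path_loss_db(d_m, wavelength_m):.1f} dB")
# ~80.0 dB at 100 m, ~100.0 dB at 1 km: +20 dB per decade of distance.
```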
Prediction
Calculation of the path loss is usually called prediction. Exact prediction is possible only for simpler cases, such as the above-mentioned free space propagation or the flat-earth model. For practical cases the path loss is calculated using a variety of approximations.
Statistical methods (also called stochastic or empirical) are based on measured and averaged losses along typical classes of radio links. Among the most commonly used such methods are Okumura–Hata, the COST Hata model, W. C. Y. Lee, etc. These are also known as radio wave propagation models and are typically used in the design of cellular networks and public land mobile networks (PLMN). For wireless communications in the very high frequency (VHF) and ultra high frequency (UHF) frequency band (the bands used by walkie-talkies, police, taxis and cellular phones), one of the most commonly used methods is that of Okumura–Hata as refined by the COST 231 project. Other well-known models are those of Walfisch–Ikegami, W. C. Y. Lee, and Erceg. For FM radio and TV broadcasting the path loss is most commonly predicted using the ITU model as described in the P.1546 recommendation (successor to P.370).
Deterministic methods based on the physical laws of wave propagation are also used; ray tracing is one such method. These methods are expected to produce more accurate and reliable predictions of the path loss than the empirical methods; however, they are significantly more expensive in computational effort and depend on the detailed and accurate description of all objects in the propagation space, such as buildings, roofs, windows, doors, and walls. For these reasons they are used predominantly for short propagation paths. Among the most commonly used methods in the design of radio equipment such as antennas and feeds is the finite-difference time-domain method.
The path loss in other frequency bands (medium wave (MW), shortwave (SW or HF), microwave (SHF)) is predicted with similar methods, though the concrete algorithms and formulas may be very different from those for VHF/UHF. Reliable prediction of the path loss in the SW/HF band is particularly difficult, and its accuracy is comparable to weather predictions.
Easy approximations for calculating the path loss over distances significantly shorter than the distance to the radio horizon:
In free space the path loss increases with 20 dB per decade (one decade is when the distance between the transmitter and the receiver increases ten times) or 6 dB per octave (one octave is when the distance between the transmitter and the receiver doubles). This can be used as a very rough first-order approximation for (microwave) communication links;
For signals in the UHF/VHF band propagating over the surface of the Earth the path loss increases with roughly 35–40 dB per decade (10–12 dB per octave). This can be used in cellular networks as a first guess.
Examples
In cellular networks, such as UMTS and GSM, which operate in the UHF band, the value of the path loss in built-up areas can reach 110–140 dB for the first kilometer of the link between the base transceiver station (BTS) and the mobile. The path loss for the first ten kilometers may be 150–190 dB. (Note: these values are very approximate and are given here only to illustrate the range in which path loss figures can fall; they are not definitive or binding. The path loss may be very different for the same distance along two different paths, and it can differ even along the same path if measured at different times.)
In the radio wave environment for mobile services the mobile antenna is close to the ground. Line-of-sight propagation (LOS) models are highly modified. The signal path from the BTS antenna normally elevated above the roof tops is refracted down into the local physical environment (hills, trees, houses) and the LOS signal seldom reaches the antenna. The environment will produce several deflections of the direct signal onto the antenna, where typically 2–5 deflected signal components will be vectorially added.
These refraction and deflection processes cause loss of signal strength, which changes when the mobile antenna moves (Rayleigh fading), causing instantaneous variations of up to 20 dB. The network is therefore designed to provide an excess of signal strength compared to LOS of 8–25 dB depending on the nature of the physical environment, and another 10 dB to overcome the fading due to movement.
See also
Air mass (astronomy)
Radio propagation model
Log-distance path loss model
Two-ray ground-reflection model
Computation of radiowave attenuation in the atmosphere
References
External links
Radio frequency propagation
Waves | Path loss | [
"Physics"
] | 1,958 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Waves",
"Motion (physics)"
] |
41,493 | https://en.wikipedia.org/wiki/Path%20profile | In telecommunications, a path profile is a graphic representation of the physical features of a propagation path in the vertical plane containing both endpoints of the path, showing the surface of the Earth and including trees, buildings, and other features that may obstruct the radio signal.
Profiles are drawn either with an effective Earth radius simulated by a parabolic arc, in which case the ray paths are drawn as straight lines, or with a "flat Earth", in which case the ray paths are drawn as parabolic arcs.
References
Radio frequency propagation | Path profile | [
"Physics"
] | 113 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Waves"
] |