The Neotropical realm is one of the eight biogeographic realms constituting Earth's land surface. Physically, it includes the tropical terrestrial ecoregions of the Americas and the entire South American temperate zone. The realm includes South America, Central America, the Caribbean islands, and southern North America. In Mexico, the Yucatán Peninsula and southern lowlands, and most of the east and west coastlines, including the southern tip of the Baja California Peninsula, are Neotropical. In the United States, southern Florida and coastal central Florida are considered Neotropical.[1] The realm also includes temperate southern South America. In contrast, the Neotropical Floristic Kingdom excludes southernmost South America, which is instead placed in the Antarctic kingdom.

The Neotropic is delimited by similarities in fauna or flora. Its fauna and flora are distinct from those of the Nearctic realm (which includes most of North America) because of the long separation of the two continents. The formation of the Isthmus of Panama joined the two continents two to three million years ago, precipitating the Great American Interchange, an important biogeographical event. The Neotropic includes more tropical rainforest (tropical and subtropical moist broadleaf forests) than any other realm, extending from southern Mexico through Central America and northern South America to southern Brazil, including the vast Amazon rainforest. These rainforest ecoregions are among the most important reserves of biodiversity on Earth.

These rainforests are also home to a diverse array of indigenous peoples, who to varying degrees persist in their autonomous and traditional cultures and subsistence within this environment. The number of these peoples who remain relatively untouched by external influences continues to decline significantly, however, along with the near-exponential expansion of urbanization, roads, pastoralism, and forest industries that encroach on their customary lands and environment. Nevertheless, amidst these declining circumstances this vast "reservoir" of human diversity continues to survive, albeit much depleted. In South America alone, some 350-400 indigenous languages and dialects are still living (down from an estimated 1,500 at the time of first European contact), in about 37 distinct language families, along with a further number of unclassified and isolate languages. Many of these languages and their cultures are endangered. Accordingly, conservation in the Neotropical realm is a hot political concern, and raises many arguments about development versus indigenous versus ecological rights, and about access to or ownership of natural resources.

The World Wide Fund for Nature (WWF) subdivides the realm into bioregions, defined as "geographic clusters of ecoregions that may span several habitat types, but have strong biogeographic affinities, particularly at taxonomic levels higher than the species level (genus, family)." Laurel forest and other cloud forests are subtropical and mild temperate forests, found in areas with high humidity and relatively stable, mild temperatures. Tropical rainforest (tropical and subtropical moist broadleaf forests) is prominent in southern North America, Amazonia, the Caribbean, Central America, the Northern Andes, and the Central Andes.
The Amazonia bioregion is mostly covered by tropical moist broadleaf forest, including the vast Amazon rainforest, which stretches from the Andes Mountains to the Atlantic Ocean, and the lowland forests of the Guianas. The bioregion also includes tropical savanna and tropical dry forest ecoregions.

The Central Andes lie between the gulfs of Guayaquil and Penas and thus encompass southern Ecuador, Chile, Peru, western Bolivia, and northwest and western Argentina.[2] Eastern South America includes the Caatinga xeric shrublands of northeastern Brazil, the broad Cerrado grasslands and savannas of the Brazilian Plateau, and the Pantanal and Chaco grasslands. The diverse Atlantic forests of eastern Brazil are separated from the forests of Amazonia by the Caatinga and Cerrado, and are home to a distinct flora and fauna. North of the Gulf of Guayaquil in Ecuador and Colombia, a series of accreted oceanic terranes (discrete allochthonous fragments) have developed that constitute the Baudo, or Coastal, Mountains and the Cordillera Occidental.[3]

The Orinoco is a region of humid broadleaf forest and wetland comprising primarily the drainage basin of the Orinoco River and other adjacent lowland forested areas. This region includes most of Venezuela and parts of Colombia, as well as Trinidad and Tobago.

The temperate forest ecoregions of southwestern South America, including the Valdivian temperate rain forests and Magellanic subpolar forests ecoregions, and the Juan Fernández Islands and Desventuradas Islands, are a refuge for the ancient Antarctic flora, which includes trees like the southern beech (Nothofagus), podocarps, the alerce (Fitzroya cupressoides), and Araucaria pines like the monkey-puzzle tree (Araucaria araucana). These rainforests are endangered by extensive logging and their replacement by fast-growing non-native pines and eucalyptus.

South America was originally part of the supercontinent Gondwana, which included Africa, Australia, India, New Zealand, and Antarctica, and the Neotropic shares many plant and animal lineages with these other continents, including marsupial mammals and the Antarctic flora. After the final breakup of Gondwana about 110 million years ago, South America separated from Africa and drifted north and west. About 66 million years ago, the Cretaceous-Paleogene extinction event altered the local flora and fauna.[4][5] Much later, about two to three million years ago, South America was joined with North America by the formation of the Isthmus of Panama, which allowed a biotic exchange between the two continents, the Great American Interchange. South American species like the ancestors of the Virginia opossum (Didelphis virginiana) and the armadillo moved into North America, and North American species like the ancestors of South America's camelids, including the llama (Lama glama), moved south. The long-term effect of the exchange was the extinction of many South American species, mostly through outcompetition by northern species.

The Neotropical realm has 31 endemic bird families, over twice the number of any other realm. They include tanagers, rheas, tinamous, curassows, antbirds, ovenbirds, toucans, and seriemas. Bird families originally unique to the Neotropics include hummingbirds (family Trochilidae) and wrens (family Troglodytidae).
A number of mammal groups are originally unique to the Neotropics. The Neotropical realm has 63 endemic fish families and subfamilies, more than any other realm.[6] Neotropical fishes include more than 5,700 species and represent at least 66 distinct lineages in continental freshwaters (Albert and Reis, 2011). The well-known red-bellied piranha is endemic to the Neotropic realm, occupying a larger geographic area than any other piranha species. Some fish groups are likewise originally unique to the Neotropics, and several other animal groups are entirely or mainly restricted to the Neotropical region. According to Simberloff, as of 1984 there were a total of 92,128 species of flowering plants (angiosperms) in the Neotropics.[8] Plant families endemic and partly subendemic to the realm are, according to Takhtajan (1978), Hymenophyllopsidaceae, Marcgraviaceae, Caryocaraceae, Pellicieraceae, Quiinaceae, Peridiscaceae, Bixaceae, Cochlospermaceae, Tovariaceae, Lissocarpaceae (Lissocarpa), Brunelliaceae, Dulongiaceae, Columelliaceae, Julianiaceae, Picrodendraceae, Goupiaceae, Desfontainiaceae, Plocospermataceae, Tropaeolaceae, Dialypetalanthaceae (Dialypetalanthus), Nolanaceae (Nolana), Calyceraceae, Heliconiaceae, Cannaceae, Thurniaceae and Cyclanthaceae.[9][10] Plant families that originated in the Neotropic include Bromeliaceae, Cannaceae and Heliconiaceae.[11] A number of plant species of economic importance are also originally unique to the Neotropic.
https://en.wikipedia.org/wiki/Neotropical_realm
Neovascularization is the natural formation of new blood vessels (neo- + vascular + -ization), usually in the form of functional microvascular networks capable of perfusion by red blood cells, that form to serve as collateral circulation in response to local poor perfusion or ischemia. Growth factors that influence neovascularization include those that affect endothelial cell division and differentiation. These growth factors often act in a paracrine or autocrine fashion; they include fibroblast growth factor, placental growth factor, insulin-like growth factor, hepatocyte growth factor, and platelet-derived endothelial growth factor.[1]

There are three different pathways that comprise neovascularization: (1) vasculogenesis, (2) angiogenesis, and (3) arteriogenesis.[2] Vasculogenesis is the de novo formation of blood vessels. It primarily occurs in the developing embryo with the development of the first primitive vascular plexus, but also occurs to a limited extent with post-natal vascularization. Embryonic vasculogenesis occurs when endothelial cell precursors (hemangioblasts) begin to proliferate and migrate into avascular areas. There, they aggregate to form the primitive network of vessels characteristic of embryos. This primitive vascular system is necessary to provide adequate blood flow to cells, supplying oxygen and nutrients and removing metabolic wastes.[2]

Angiogenesis is the most common type of neovascularization seen in development and growth, and is important to both physiological and pathological processes.[3] Angiogenesis occurs through the formation of new vessels from pre-existing vessels, by the sprouting of new capillaries from post-capillary venules; this requires precise coordination of multiple steps and the participation and communication of multiple cell types. The complex process is initiated in response to local tissue ischemia or hypoxia, leading to the release of angiogenic factors such as VEGF and HIF-1. This leads to vasodilatation and an increase in vascular permeability, resulting in sprouting angiogenesis or intussusceptive angiogenesis.[2]

Arteriogenesis is the process of flow-related remodelling of existing vasculature to create collateral arteries. It can occur in response to ischemic vascular diseases or increased demand (e.g. exercise training). Arteriogenesis is triggered by nonspecific factors, such as shear stress and blood flow.[2]

Corneal neovascularization is a condition in which new blood vessels invade the cornea from the limbus. It is triggered when the balance between the angiogenic and antiangiogenic factors that otherwise maintain corneal transparency is disrupted. The immature new blood vessels can lead to persistent inflammation and scarring, lipid exudation into the corneal tissues, and a reduction in corneal transparency, which can affect visual acuity.[4]

Retinopathy of prematurity is a condition that occurs in premature babies, in whom the retina has not completely vascularized. Rather than continuing in the normal in utero fashion, the vascularization of the retina is disrupted, leading to an abnormal proliferation of blood vessels between the vascularized and avascular areas of the retina. These blood vessels grow in abnormal ways and can invade the vitreous humor, where they can hemorrhage or cause retinal detachment in neonates.[5]
Diabetic retinopathy, which can develop into proliferative diabetic retinopathy, is a condition in which capillaries in the retina become occluded, creating areas of ischemic retina and triggering the release of angiogenic growth factors. This retinal ischemia stimulates the proliferation of new blood vessels from pre-existing retinal venules. It is the leading cause of blindness in working-age adults.[5]

In persons over 65 years old, age-related macular degeneration is the leading cause of severe vision loss. A subtype of age-related macular degeneration, wet macular degeneration, is characterized by the formation of new blood vessels that originate in the choroidal vasculature and extend into the subretinal space.[5] In ophthalmology, choroidal neovascularization is the formation of a microvasculature within the innermost layer of the choroid of the eye.[6] Neovascularization in the eye can cause a type of glaucoma (neovascular glaucoma) if the bulk of the new blood vessels blocks the constant outflow of aqueous humour from inside the eye.

Cardiovascular disease is the leading cause of death in the world.[7] Ischemic heart disease develops when stenosis and occlusion of the coronary arteries occur, leading to reduced perfusion of the cardiac tissues. There is ongoing research exploring techniques that might be able to induce healthy neovascularization of ischemic cardiac tissues.[8][9]
https://en.wikipedia.org/wiki/Neovascularization
Neozealandia is a biogeographic province of the Antarctic Realm in the classification developed by Miklos Udvardy in 1975.[1][2] Neozealandia consists primarily of the major islands of New Zealand, including the North Island and South Island, as well as Chatham Island. The southernmost areas of Neozealandia overlap with the Insulantarctica province, which includes the New Zealand Subantarctic Islands. Both New Zealand and the New Zealand Subantarctic Islands are remnants of a submerged subcontinent known as Zealandia, which gradually subsided beneath the sea after breaking away from the Gondwanan land masses of Antarctica and Australia. Due to this isolation, the entire Zealandia archipelago has remained virtually free of mammals (except for bats and a few others) and invasive alien species. Because only very few mammals and other alien species have actually colonized the islands of the Neozealandia province over the millions of years, the flora and fauna on most of the islands, including those of New Zealand itself, have remained almost exactly the same as they were when the original Gondwana supercontinent existed.[3] A couple of tuatara species survive in small numbers on small islets adjacent to New Zealand. New Zealand also has vestiges of ancient temperate rain forests with plant species, such as giant club mosses, tree ferns and Nothofagus trees, dating from the time when the Zealandia subcontinent split off from Gondwana. New Zealand grasslands are dominated by vast expanses of tussock grass fed upon by the native ground parrots. Most of New Zealand's few mammals are like those frequenting Antarctic shores.
https://en.wikipedia.org/wiki/Neozealandia
Nepal Engineers' Association (NEA) (Nepali: नेपाल ईन्जिनियर्स एसोसियसन) is an independent non-profit organization of engineers of Nepal. Its headquarters is located in Pulchowk, Lalitpur. It was established in 1962, has a provincial committee in each province of Nepal, and pursues a set of stated objectives.[1] As of 2022, it had more than 36,500 members.[2]
https://en.wikipedia.org/wiki/Nepal_Engineers_Association
The nephelauxetic effect is a term used in the inorganic chemistry of transition metals.[1][2] It refers to a decrease in the Racah interelectronic repulsion parameter, given the symbol B, that occurs when a transition-metal free ion forms a complex with ligands. The name "nephelauxetic" comes from the Greek for cloud-expanding and was proposed by the Danish inorganic chemist C. K. Jørgensen. The presence of this effect highlights the disadvantages of crystal field theory, which treats metal-ligand interactions as purely electrostatic, since the nephelauxetic effect reveals covalent character in the metal-ligand interaction.

The decrease in the Racah parameter B indicates that in a complex there is less repulsion between the two electrons in a given doubly occupied metal d-orbital than there is in the respective M n+ gaseous metal ion, which in turn implies that the size of the orbital is larger in the complex. This electron cloud expansion effect may occur for one (or both) of two reasons. One is that the effective positive charge on the metal has decreased. Because the positive charge of the metal is reduced by any negative charge on the ligands, the d-orbitals can expand slightly. The second is that the act of overlapping with ligand orbitals and forming covalent bonds increases orbital size, because the resulting molecular orbital is formed from two atomic orbitals.

The reduction of B from its free-ion value is normally reported in terms of the nephelauxetic parameter β:

{\displaystyle \beta ={\frac {B_{\text{complex}}}{B_{\text{free ion}}}}}

Experimentally, it is observed that the size of the nephelauxetic parameter always follows a certain trend with respect to the nature of the ligands present. Common ligands can be arranged in a series of increasing nephelauxetic effect.[3] Although parts of this series may seem quite similar to the spectrochemical series of ligands - for example, cyanide, ethylenediamine, and fluoride seem to occupy similar positions in the two - others, such as chloride, iodide and bromide (amongst others), occupy very different positions. The ordering roughly reflects the ability of the ligands to form good covalent bonds with metals - those that have a small effect are at the start of the series, whereas those that have a large effect are at the end of the series. The nephelauxetic effect does not depend only upon the ligand type, but also upon the central metal ion. These too can be arranged in order of increasing nephelauxetic effect.
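As a quick illustration of how β is used in practice, the sketch below computes the nephelauxetic parameter from a free-ion and an in-complex Racah parameter. The numerical values are illustrative placeholders, not measured data for any particular ion or ligand set.

import sys

def nephelauxetic_parameter(b_complex: float, b_free_ion: float) -> float:
    """Return beta = B(complex) / B(free ion); beta < 1 signals electron cloud expansion."""
    if b_free_ion <= 0:
        sys.exit("free-ion B must be positive")
    return b_complex / b_free_ion

# Hypothetical Racah B values in cm^-1 for some d-block ion (placeholder numbers):
B_FREE_ION = 900.0   # free gaseous ion
B_COMPLEX = 650.0    # same ion inside a complex

beta = nephelauxetic_parameter(B_COMPLEX, B_FREE_ION)
print(f"beta = {beta:.2f}")  # 0.72 here -> appreciable covalent character

The smaller β is, the further the ligand sits toward the "large effect" end of the series described above.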
https://en.wikipedia.org/wiki/Nephelauxetic_effect
A nephelometer[1] or aerosol photometer[2] is an instrument for measuring the concentration of suspended particulates in a liquid or gas colloid. A nephelometer measures suspended particulates by employing a light beam (source beam) and a light detector set to one side (often 90°) of the source beam. Particle density is then a function of the light reflected into the detector from the particles. To some extent, how much light reflects for a given density of particles depends upon properties of the particles such as their shape, color, and reflectivity. Nephelometers are calibrated to a known particulate and then use environmental correction factors (k-factors) to compensate for lighter or darker colored dusts. The k-factor is determined by the user by running the nephelometer next to an air sampling pump and comparing the results. There are a wide variety of research-grade nephelometers on the market, as well as open-source varieties.[3]

The main uses of nephelometers relate to air quality measurement for pollution monitoring, climate monitoring, and visibility. Airborne particles are commonly biological contaminants, particulate contaminants, gaseous contaminants, or dust. The accompanying chart shows the types and sizes of various particulate contaminants. This information helps in understanding the character of particulate pollution inside a building or in the ambient air, as well as the cleanliness level in a controlled environment. Biological contaminants include mold, fungus, bacteria, viruses, animal dander, dust mites, pollen, human skin cells, cockroach parts, or anything alive or living at one time. They are a primary concern of indoor air quality specialists because they can cause health problems. Levels of biological contamination depend on the humidity and temperature that support the growth of micro-organisms. The presence of pets, plants, rodents, and insects will raise the level of biological contamination.[4]

Sheath air is clean filtered air that surrounds the aerosol stream to prevent particulates from circulating or depositing within the optic chamber. Sheath air prevents contamination caused by build-up and deposits, improves response time by containing the sample, and eases maintenance by keeping the optic chamber clean. The nephelometer creates the sheath air by passing air through a zero filter before beginning the sample.

Nephelometers are also used in global warming studies, specifically for measuring the global radiation balance. Three-wavelength nephelometers fitted with a backscatter shutter can determine the amount of solar radiation that is reflected back into space by dust and particulate matter. This reflected light influences the amount of radiation reaching the earth's lower atmosphere and warming the planet. Nephelometers are also used for the measurement of visibility, with simple one-wavelength nephelometers used throughout the world by many environmental protection agencies. Through the measurement of light scattering, nephelometers can determine visibility distance through the application of a conversion factor called Koschmieder's formula. In medicine, nephelometry is used to measure immune function. It is also used in clinical microbiology for the preparation of a standardized inoculum (McFarland suspension) for antimicrobial susceptibility testing.[5][6]
Gas-phase nephelometers are also used in the detection of smoke and other particles of combustion. In such use, the apparatus is referred to as an aspirated smoke detector. These can detect extremely low particle concentrations (to 0.005%) and are therefore highly suitable for protecting sensitive or valuable electronic equipment, such as mainframe computers and telephone switches.

A more popular term for this instrument in water quality testing is a turbidimeter. However, there can be differences between models of turbidimeters, depending upon the arrangement (geometry) of the source beam and the detector. A nephelometric turbidimeter always monitors light reflected off the particles and not attenuation due to cloudiness. In United States environmental monitoring, the turbidity standard unit is called the Nephelometric Turbidity Unit (NTU), while the international standard unit is called the Formazin Nephelometric Unit (FNU). The most generally applicable unit is the Formazin Turbidity Unit (FTU), although different measurement methods can give quite different values as reported in FTU. Gas-phase nephelometers are also used to study the atmosphere. These can provide information on visibility and atmospheric albedo.
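The k-factor calibration described earlier amounts to rescaling the photometer reading so that it matches a co-located gravimetric (pump-and-filter) measurement. A minimal sketch of that arithmetic, with made-up numbers rather than real instrument data, might look like this:

def k_factor(gravimetric_mg_m3: float, photometer_mg_m3: float) -> float:
    """Ratio used to rescale nephelometer readings for a specific dust type."""
    return gravimetric_mg_m3 / photometer_mg_m3

# Hypothetical side-by-side run against a reference air-sampling pump:
reference = 2.4   # mg/m^3 from the gravimetric filter sample
indicated = 3.0   # mg/m^3 reported by the nephelometer on the same air

k = k_factor(reference, indicated)
corrected = indicated * k
print(f"k-factor = {k:.2f}, corrected reading = {corrected:.2f} mg/m^3")

Once determined for a given dust, the same k-factor is applied to subsequent readings taken in that environment.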
https://en.wikipedia.org/wiki/Nephelometer
Nephelometry is a technique used in immunology to determine the levels of several blood plasma proteins - for example, the total levels of the antibody isotypes or classes immunoglobulin M, immunoglobulin G, and immunoglobulin A.[1] It is important in the quantification of free light chains in diseases such as multiple myeloma. Quantification is important for disease classification and for disease monitoring once a patient has been treated (increased skewing of the ratio between kappa and lambda light chains after a patient has been treated is an indication of disease recurrence). It is performed by measuring the scattered light at an angle from the sample being measured.[2] In diagnostic nephelometry, the ascending branch of the Heidelberger-Kendall curve is extended by optimizing the course of the reaction so that the measurement signals of most human plasma proteins fall on the left side of the Heidelberger-Kendall curve, even at very high concentrations.

This technique is widely used in clinical laboratories because it is relatively easily automated. It is based on the principle that a dilute suspension of small particles will scatter light (usually from a laser) passed through it rather than simply absorbing it. The amount of scatter is determined by collecting the light at an angle (usually at 30 and 90 degrees).[3] Antibody and antigen are mixed in concentrations such that only small aggregates are formed that do not quickly settle to the bottom. The amount of light scatter is measured and compared to the amount of scatter from known mixtures; the amount of the unknown is determined from a standard curve. Nephelometry can be used to detect either antigen or antibody, but it is usually run with antibody as the reagent and the patient antigen as the unknown.[4]

In the immunology medical laboratory, two types of tests can be run: end-point nephelometry and kinetic (rate) nephelometry. End-point nephelometry tests are run by allowing the antibody/antigen reaction to run to completion (until all of the present reagent antibodies and patient sample antigens that can aggregate have done so and no more complexes can form). However, large particles will fall out of the solution and cause a false scatter reading, so kinetic nephelometry was devised. In kinetic nephelometry, the rate of scatter is measured right after the reagent is added. As long as the reagent is constant, the rate of change can be seen as directly related to the amount of antigen present. Aside from medical applications, nephelometry can be used to measure water clarity,[5] to measure the growth of microorganisms[6][7] and to test drug solubility.[3]
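The standard-curve step described above is, computationally, a simple interpolation: scatter signals from calibrators of known concentration define a curve, and the unknown sample's signal is read back against it. A minimal sketch with invented calibrator values, assuming all measurements lie on the ascending branch of the Heidelberger-Kendall curve:

import numpy as np

# Hypothetical calibrators: known antigen concentration (mg/dL) vs scatter signal (a.u.)
concentrations = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
scatter_signal = np.array([0.02, 0.35, 0.68, 1.30, 2.45])

def concentration_from_scatter(signal: float) -> float:
    """Read an unknown's concentration off the standard curve by linear interpolation."""
    return float(np.interp(signal, scatter_signal, concentrations))

# An unknown sample producing a scatter signal of 1.0 a.u.:
print(f"{concentration_from_scatter(1.0):.1f} mg/dL")  # ~75.8 mg/dL on this toy curve

Real analyzers fit smoother curve models and flag signals beyond the calibrated range (where antigen excess would make the curve double-valued), but the read-back principle is the same.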
https://en.wikipedia.org/wiki/Nephelometry_(medicine)
Nephromyces is a genus of apicomplexans that are symbionts of the ascidian genus Molgula (sea grapes). Nephromyces was first described in 1888 by Alfred Mathieu Giard as a chytrid fungus, because of its filamentous cells. He formally named three species - Nephromyces molgularum Giard, 1888, N. rosocovitanus Giard, 1888, and N. sorokini Giard, 1888 - each corresponding to a different species of the host animal.[1] Molecular phylogenetics later showed that Nephromyces are not actually fungi, but instead constitute a group within the Apicomplexa that is related to the Piroplasmida.[2]

Nephromyces is found in the lumen of the renal sac of its host animals. The renal sac is a closed, fluid-filled structure that is derived from the epicardium during development.[3] There are different cell types (at least seven in Nephromyces from Molgula manhattensis) which appear to be different life cycle stages, as the different types appear in a consistent sequence after initial infection of the host animal. However, in a mature infection, different stages simultaneously co-occur in the same host individual. They include filaments (trophic stages), spores, motile but non-flagellated cells, and biflagellated swarmer cells.[4] The non-flagellated motile cells resemble the sporozoites of other apicomplexans, while the spores contain structures that resemble the rhoptries of the apical complex, another typical apicomplexan feature.[2]

Nephromyces is specific to the family Molgulidae, and has been found in species of Molgula and at least one other molgulid genus, Bostrichobranchus (B. pilularis).[5] Every wild-collected adult Molgula animal examined has been found to contain Nephromyces, suggesting that it is a beneficial symbiont rather than a parasite; this makes Nephromyces an exception among apicomplexans, which are usually parasitic on their animal hosts.[2] However, animals without Nephromyces can be obtained by spawning and raising them in filtered seawater. These symbiont-free animals have been used to study the Nephromyces life cycle. Nephromyces is released into the surrounding seawater when its host dies, and cells of Nephromyces can remain alive and infective for at least 29 days outside of a host.[6]

The renal sac organ where Nephromyces lives contains high concentrations of urate, a nitrogenous waste product. Activity of urate oxidase, an enzyme that breaks down urate, has been found in Nephromyces cells, hence they may be using the waste products of their host animal as a nitrogen source for themselves.[7] Intracellular bacteria have been found within cells of Nephromyces from Molgula manhattensis and M. occidentalis, making this a symbiosis within a symbiosis.[8]
https://en.wikipedia.org/wiki/Nephromyces
The Neptune Pine is an unlocked GSM standalone,[1][2] full-featured smartwatch developed by the Canadian consumer electronics and wearable technology company Neptune.[3] It was announced in January 2013 by Simon Tian and launched in November 2013 on Kickstarter. Within 27 hours, the campaign had reached its funding goal of $100,000, and it ultimately went on to raise more than $800,000 in 30 days, becoming the highest-funded Canadian Kickstarter campaign at the time.[6] The device started shipping in August 2014, and eventually became widely available through Best Buy and Amazon. It was featured in the 2017 film The Fate of the Furious, the CBS TV series Extant starring Halle Berry, and the music video for Smartphones by Trey Songz. The Pine received mixed reviews from the press, which generally praised its extensive set of features while criticizing its large size.[4][5]

The Pine uses Google Android version 4.1 but is not a Google-licensed device and therefore does not include Google apps or the Google Play store; these apps can be manually added by the user. The Pine has a Snapdragon S4 system on a chip (SoC) by Qualcomm with a dual-core ARM Cortex-A5 processor running at 1.2 GHz. The smartwatch has a capacitive touch screen, a Wi-Fi web browser, a 5.0 MP rear-facing camera and a VGA front-facing camera (both with LED flash), a multimedia player and recorder for music (MP3) and video (MP4), a 3.5 mm headphone jack, and an internal GPS antenna that supports satellite navigation. Other data inputs are an accelerometer, a gyroscope, a pedometer, and a digital compass.[7] The Pine smartwatch can be released from the wrist strap by pressing a button on the strap, for a better audio signal during a phone call or to take photos with the 5 MP rear-facing camera, which is otherwise blocked by the strap. A Micro-B USB to USB cable is required to charge the smartwatch. It can also connect to a computer so that the internal SD card can be recognized as a mass storage device for file management.[7]

As a phone, it can be used in conjunction with a Bluetooth headset and can operate in a hands-free manner with its built-in microphone and speakers; its Bluetooth functionality also supports stereo Bluetooth for wireless music playback and making calls. It offers a talk time of up to eight hours on 2G and six hours on 3G. Internet usage time is up to seven hours, and music playback is up to 10 hours.[8] The watch was initially designed to be waterproof, but it was found that the manufacturer was unable to apply the desired treatment. While this option is not available at retail, Kickstarter backers who wanted waterproofing could have their products transported to a third party, opened, and treated with the aftermarket HzO spray before delivery.[9]
https://en.wikipedia.org/wiki/Neptune_Pine
Neptunium diarsenide is a binary inorganic compound of neptunium and arsenic with the chemical formula NpAs2.[1][2] The compound forms crystals.[3] It can be prepared by heating stoichiometric amounts of neptunium hydride and arsenic.[4] Neptunium diarsenide forms crystals of the tetragonal system,[5] space group P4/nmm, with cell parameters a = 0.3958 nm and c = 0.8098 nm.
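The cited reference gives the route only as heating neptunium hydride with arsenic; the balanced equation below is a reconstruction, not a quotation from the source, and it assumes the trihydride NpH3 as the starting material:

2 NpH3 + 4 As → 2 NpAs2 + 3 H2  (on heating; stoichiometry assumed for NpH3)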
https://en.wikipedia.org/wiki/Neptunium_diarsenide
Neptunium silicide is a binary inorganic compound of neptunium and silicon with the chemical formula NpSi2.[1] The compound forms crystals and does not dissolve in water.[2] It can be prepared by heating neptunium trifluoride with powdered silicon in vacuum.[3] Neptunium silicide forms crystals[4] of the tetragonal crystal system, space group I41/amd, with cell parameters a = 0.396 nm, c = 1.367 nm, Z = 4.[5] Neptunium disilicide reacts with hydrochloric acid.[3]
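The source describes the vacuum route only in words; assuming the fluorine leaves as volatile silicon tetrafluoride (as in the analogous preparations of other actinide disilicides), a balanced equation would take the following form. This is a reconstruction, not taken from the cited reference:

4 NpF3 + 11 Si → 4 NpSi2 + 3 SiF4  (heated in vacuum; byproduct assumed to be SiF4)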
https://en.wikipedia.org/wiki/Neptunium_silicide
Nericell is a system that uses smartphones for monitoring traffic data.[1] Nericell performs rich sensing by piggybacking on smartphones that users carry, using the accelerometer, radio, GPS, and microphone sensors found in these phones to detect potholes, bumps, braking, and honking. Nericell addresses several issues, including virtually reorienting the accelerometer on a phone that is in an arbitrary orientation, and performing honk detection and localization in an energy-efficient manner.[1]
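Nericell's actual detection pipeline is described in the cited paper; purely as an illustration of the idea, a naive threshold detector over vertical accelerometer samples might look like the sketch below. The threshold and window size are invented for the example, and it assumes the accelerometer has already been virtually reoriented so one axis tracks gravity:

from typing import List

GRAVITY = 9.81          # m/s^2; assumes the phone's axes are already virtually reoriented
BUMP_THRESHOLD = 4.0    # m/s^2 deviation from gravity (hypothetical tuning value)

def detect_bumps(vertical_accel: List[float], window: int = 5) -> List[int]:
    """Return indices where the windowed mean vertical acceleration deviates sharply from gravity."""
    hits = []
    for i in range(len(vertical_accel) - window + 1):
        avg = sum(vertical_accel[i:i + window]) / window
        if abs(avg - GRAVITY) > BUMP_THRESHOLD:
            hits.append(i)
    return hits

# Example: a flat road with one pothole-like spike in the middle
samples = [9.8] * 10 + [15.2, 16.0, 14.8] + [9.8] * 10
print(detect_bumps(samples, window=3))  # -> [10]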
https://en.wikipedia.org/wiki/Nericell
In electrochemistry, the Nernst equation is a thermodynamic relationship that permits the calculation of the reduction potential of a reaction (half-cell or full cell reaction) from the standard electrode potential, the absolute temperature, the number of electrons involved in the redox reaction, and the activities (often approximated by concentrations) of the chemical species undergoing reduction and oxidation respectively. It was named after Walther Nernst, a German physical chemist who formulated the equation.[1][2]

When an oxidized species (Ox) accepts a number z of electrons (e−) to be converted into its reduced form (Red), the half-reaction is expressed as:

Ox + z e− → Red

The reaction quotient (Q_r), also often called the ion activity product (IAP), is the ratio between the chemical activities (a) of the reduced form (the reductant, a_Red) and the oxidized form (the oxidant, a_Ox). The chemical activity of a dissolved species corresponds to its true thermodynamic concentration, taking into account the electrical interactions between all the ions present in solution at elevated concentrations. For a given dissolved species, its chemical activity (a) is the product of its activity coefficient (γ) and its molar (mol/L solution), or molal (mol/kg water), concentration (C): a = γC. If the concentrations (C, also denoted below with square brackets [ ]) of all the dissolved species of interest are sufficiently low and their activity coefficients are close to unity, their chemical activities can be approximated by their concentrations, as is commonly done when simplifying, or idealizing, a reaction for didactic purposes:

{\displaystyle Q_{r}={\frac {a_{\text{Red}}}{a_{\text{Ox}}}}\approx {\frac {[{\text{Red}}]}{[{\text{Ox}}]}}}

At chemical equilibrium, the ratio Q_r of the activity of the reaction product (a_Red) to the reagent activity (a_Ox) is equal to the equilibrium constant K of the half-reaction. Standard thermodynamics also says that the actual Gibbs free energy ΔG is related to the free energy change under standard state ΔG⊖ by the relationship:

{\displaystyle \Delta G=\Delta G^{\ominus }+RT\ln Q_{r}}

where Q_r is the reaction quotient and R is the universal ideal gas constant. The cell potential E associated with the electrochemical reaction is defined as the decrease in Gibbs free energy per coulomb of charge transferred, which leads to the relationship

{\displaystyle \Delta G=-zFE.}

The constant F (the Faraday constant) is a unit conversion factor F = N_A q, where N_A is the Avogadro constant and q is the fundamental electron charge. This immediately leads to the Nernst equation, which for an electrochemical half-cell is

{\displaystyle E_{\text{red}}=E_{\text{red}}^{\ominus }-{\frac {RT}{zF}}\ln Q_{r}=E_{\text{red}}^{\ominus }-{\frac {RT}{zF}}\ln {\frac {a_{\text{Red}}}{a_{\text{Ox}}}}.}

For a complete electrochemical reaction (full cell), the equation can be written as

{\displaystyle E_{\text{cell}}=E_{\text{cell}}^{\ominus }-{\frac {RT}{zF}}\ln Q_{r}}

where E⊖_cell is the standard cell potential, R the universal gas constant, T the absolute temperature, z the number of electrons transferred, and F the Faraday constant.

At room temperature (25 °C), the thermal voltage {\displaystyle V_{T}={\frac {RT}{F}}} is approximately 25.693 mV. The Nernst equation is frequently expressed in terms of base-10 logarithms (i.e., common logarithms) rather than natural logarithms, in which case it is written:

{\displaystyle E=E^{\ominus }-{\frac {V_{T}}{z}}\ln {\frac {a_{\text{Red}}}{a_{\text{Ox}}}}=E^{\ominus }-{\frac {\lambda V_{T}}{z}}\log _{10}{\frac {a_{\text{Red}}}{a_{\text{Ox}}}}.}

where λ = ln(10) ≈ 2.3026 and λV_T ≈ 0.05916 V. Similarly to equilibrium constants, activities are always measured with respect to the standard state (1 mol/L for solutes, 1 atm for gases, and T = 298.15 K, i.e., 25 °C or 77 °F). The chemical activity of a species i, a_i, is related to the measured concentration C_i via the relationship a_i = γ_i C_i, where γ_i is the activity coefficient of the species i. Because activity coefficients tend to unity at low concentrations, or are unknown or difficult to determine at medium and high concentrations, activities in the Nernst equation are frequently replaced by simple concentrations, and formal standard reduction potentials {\displaystyle E_{\text{red}}^{\ominus '}} are then used. Taking into account the activity coefficients (γ), the Nernst equation becomes:

{\displaystyle E_{\text{red}}=E_{\text{red}}^{\ominus }-{\frac {RT}{zF}}\ln \left({\frac {\gamma _{\text{Red}}}{\gamma _{\text{Ox}}}}{\frac {C_{\text{Red}}}{C_{\text{Ox}}}}\right)}

{\displaystyle E_{\text{red}}=E_{\text{red}}^{\ominus }-{\frac {RT}{zF}}\left(\ln {\frac {\gamma _{\text{Red}}}{\gamma _{\text{Ox}}}}+\ln {\frac {C_{\text{Red}}}{C_{\text{Ox}}}}\right)}

{\displaystyle E_{\text{red}}=\underbrace {\left(E_{\text{red}}^{\ominus }-{\frac {RT}{zF}}\ln {\frac {\gamma _{\text{Red}}}{\gamma _{\text{Ox}}}}\right)} _{E_{\text{red}}^{\ominus '}}-{\frac {RT}{zF}}\ln {\frac {C_{\text{Red}}}{C_{\text{Ox}}}}}

where the first term, including the activity coefficients (γ), is denoted {\displaystyle E_{\text{red}}^{\ominus '}} and called the formal standard reduction potential, so that {\displaystyle E_{\text{red}}} can be directly expressed as a function of {\displaystyle E_{\text{red}}^{\ominus '}} and the concentrations in the simplest form of the Nernst equation:

{\displaystyle E_{\text{red}}=E_{\text{red}}^{\ominus '}-{\frac {RT}{zF}}\ln {\frac {C_{\text{Red}}}{C_{\text{Ox}}}}}

When one wishes to use simple concentrations in place of activities, but the activity coefficients are far from unity, can no longer be neglected, and are unknown or too difficult to determine, it is convenient to introduce the notion of the "so-called" formal standard reduction potential ({\displaystyle E_{\text{red}}^{\ominus '}}), which is related to the standard reduction potential as follows:[3]

{\displaystyle E_{\text{red}}^{\ominus '}=E_{\text{red}}^{\ominus }-{\frac {RT}{zF}}\ln {\frac {\gamma _{\text{Red}}}{\gamma _{\text{Ox}}}}}

so that the Nernst equation for the half-cell reaction can be correctly and formally written in terms of concentrations as:

{\displaystyle E_{\text{red}}=E_{\text{red}}^{\ominus '}-{\frac {RT}{zF}}\ln {\frac {C_{\text{Red}}}{C_{\text{Ox}}}}}

and likewise for the full cell expression.
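As a concrete illustration of the half-cell form above, the short sketch below evaluates E_red = E⊖' − (RT/zF) ln(C_Red/C_Ox) and checks the thermal voltage quoted in the text. The standard potential and the concentrations are arbitrary illustrative numbers, not data for any specific redox couple:

import math

R = 8.314462618   # J/(mol*K), universal gas constant
F = 96485.33212   # C/mol, Faraday constant

def nernst_half_cell(e_standard: float, z: int, c_red: float, c_ox: float,
                     temperature: float = 298.15) -> float:
    """Half-cell reduction potential in volts, using concentrations in place of
    activities (the dilute-solution idealization described in the text)."""
    return e_standard - (R * temperature / (z * F)) * math.log(c_red / c_ox)

# Thermal voltage at 25 degrees C: RT/F ~ 25.693 mV, as stated above
print(f"V_T = {R * 298.15 / F * 1e3:.3f} mV")

# Illustrative one-electron couple with E standard = 0.77 V and a 10:1 Red:Ox ratio
print(f"E_red = {nernst_half_cell(0.77, 1, c_red=0.10, c_ox=0.01):.3f} V")  # ~0.711 V

A tenfold excess of the reduced form lowers the potential by one decade step, about 59.16 mV for z = 1, matching the λV_T value given above.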
According to Wenzel (2020),[4] a formal reduction potential {\displaystyle E_{\text{red}}^{\ominus '}} is the reduction potential that applies to a half-reaction under a set of specified conditions such as, e.g., pH, ionic strength, or the concentration of complexing agents. The formal reduction potential is often a more convenient, but conditional, form of the standard reduction potential, taking into account activity coefficients and conditions characteristic of the reaction medium. Its value is therefore a conditional value, i.e., it depends on the experimental conditions, and because the ionic strength affects the activity coefficients, {\displaystyle E_{\text{red}}^{\ominus '}} will vary from medium to medium.[3] Several definitions of the formal reduction potential can be found in the literature, depending on the pursued objective and the experimental constraints imposed by the studied system. The general definition of {\displaystyle E_{\text{red}}^{\ominus '}} refers to its value determined when {\displaystyle {\frac {C_{\text{red}}}{C_{\text{ox}}}}=1}. A more particular case is when {\displaystyle E_{\text{red}}^{\ominus '}} is also determined at pH 7, as, e.g., for redox reactions important in biochemistry or biological systems.

The formal standard reduction potential can be defined as the measured reduction potential {\displaystyle E_{\text{red}}} of the half-reaction at a unity concentration ratio of the oxidized and reduced species (i.e., when C_red/C_ox = 1) under given conditions.[5] Indeed, {\displaystyle E_{\text{red}}=E_{\text{red}}^{\ominus }} when {\displaystyle {\frac {a_{\text{red}}}{a_{\text{ox}}}}=1}, because {\displaystyle \ln {1}=0}, and the term {\displaystyle {\frac {\gamma _{\text{red}}}{\gamma _{\text{ox}}}}} is included in {\displaystyle E_{\text{red}}^{\ominus '}}. The formal reduction potential makes it possible to work more simply with molar (mol/L, M) or molal (mol/kg H2O, m) concentrations in place of activities. Because molar and molal concentrations were once referred to as formal concentrations, this could explain the origin of the adjective formal in the expression formal potential.

The formal potential is thus the reversible potential of an electrode at equilibrium immersed in a solution where reactants and products are at unit concentration.[6] If any small incremental change of potential causes a change in the direction of the reaction, i.e., from reduction to oxidation or vice versa, the system is close to equilibrium, reversible, and at its formal potential. When the formal potential is measured under standard conditions (i.e., the activity of each dissolved species is 1 mol/L, T = 298.15 K = 25 °C = 77 °F, P_gas = 1 bar), it becomes de facto a standard potential.[7] According to Brown and Swift (1949): "A formal potential is defined as the potential of a half-cell, measured against the standard hydrogen electrode, when the total concentration of each oxidation state is one formal".[8] In this case, as for the standard reduction potentials, the concentrations of the dissolved species remain equal to one molar (M) or one molal (m), and so are said to be one formal (F).
So, expressing the concentration C in molarity M (1 mol/L), one formal concentration corresponds to one molar concentration (1 F = 1 M). The term formal concentration (F) is now largely ignored in the current literature and can commonly be assimilated to molar concentration (M), or molality (m) in the case of thermodynamic calculations.[9] The formal potential is also found halfway between the two peaks in a cyclic voltammogram, where at this point the concentrations of Ox (the oxidized species) and Red (the reduced species) at the electrode surface are equal.

The activity coefficients {\displaystyle \gamma _{\text{red}}} and {\displaystyle \gamma _{\text{ox}}} are included in the formal potential {\displaystyle E_{\text{red}}^{\ominus '}}, and because they depend on experimental conditions such as temperature, ionic strength, and pH, {\displaystyle E_{\text{red}}^{\ominus '}} cannot be referred to as an immutable standard potential but needs to be systematically determined for each specific set of experimental conditions.[7] Formal reduction potentials are applied to simplify the calculations for a considered system under given conditions and the interpretation of measurements. The experimental conditions in which they are determined, and their relationship to the standard reduction potentials, must be clearly described to avoid confusing them with standard reduction potentials.

Formal standard reduction potentials are also commonly used in biochemistry and cell biology to refer to standard reduction potentials measured at pH 7, a value closer to the pH of most physiological and intracellular fluids than the standard-state pH of 0. The advantage is to define a more appropriate redox scale, better corresponding to real conditions than the standard state. Formal standard reduction potentials make it easier to estimate whether a redox reaction supposed to occur in a metabolic process, or to fuel microbial activity under some conditions, is feasible or not. While standard reduction potentials always refer to the standard hydrogen electrode (SHE), with [H+] = 1 M corresponding to pH 0 and {\displaystyle E_{\text{red H+}}^{\ominus }} fixed arbitrarily to zero by convention, this is no longer the case at pH 7: the reduction potential {\displaystyle E_{\text{red}}} of a hydrogen electrode operating at pH 7 is −0.413 V with respect to the standard hydrogen electrode (SHE).[10]

The {\displaystyle E_{h}} and pH of a solution are related by the Nernst equation, as commonly represented by a Pourbaix diagram ({\displaystyle E_{h}}-pH plot). {\displaystyle E_{h}} explicitly denotes {\displaystyle E_{\text{red}}} expressed versus the standard hydrogen electrode (SHE). For a half-cell equation, conventionally written as a reduction reaction (i.e., electrons accepted by an oxidant on the left side):

a A + b B + h H+ + z e− ⇌ c C + d D

the half-cell standard reduction potential {\displaystyle E_{\text{red}}^{\ominus }} is given by

{\displaystyle E_{\text{red}}^{\ominus }=-{\frac {\Delta G^{\ominus }}{zF}}}

where {\displaystyle \Delta G^{\ominus }} is the standard Gibbs free energy change, z is the number of electrons involved, and F is the Faraday constant. The Nernst equation relates pH and {\displaystyle E_{h}} as follows:

{\displaystyle E_{h}=E_{\text{red}}^{\ominus }-{\frac {0.05916}{z}}\log _{10}\left({\frac {\{C\}^{c}\{D\}^{d}}{\{A\}^{a}\{B\}^{b}}}\right)-{\frac {0.05916\,h}{z}}{\text{pH}}}

where curly brackets indicate activities, and exponents are shown in the conventional manner.
This equation is the equation of a straight line for {\displaystyle E_{\text{red}}} as a function of pH with a slope of {\displaystyle -0.05916\,\left({\frac {h}{z}}\right)} volt (pH has no units). This equation predicts lower {\displaystyle E_{\text{red}}} at higher pH values. This is observed for the reduction of O2 into H2O, or OH−, and for the reduction of H+ into H2. {\displaystyle E_{\text{red}}} is then often noted as {\displaystyle E_{h}} to indicate that it refers to the standard hydrogen electrode (SHE), whose {\displaystyle E_{\text{red}}} = 0 by convention under standard conditions (T = 298.15 K = 25 °C = 77 °F, P_gas = 1 atm (1.013 bar), concentrations = 1 M and thus pH = 0).

The main factor affecting the formal reduction potentials in biochemical or biological processes is most often the pH. To determine approximate values of formal reduction potentials, neglecting in a first approach the changes in activity coefficients due to ionic strength, the Nernst equation has to be applied, taking care to first express the relationship as a function of pH. The second factor to be considered is the values of the concentrations taken into account in the Nernst equation. To define a formal reduction potential for a biochemical reaction, the pH value, the concentration values, and the hypotheses made on the activity coefficients must always be explicitly indicated. When using, or comparing, several formal reduction potentials, they must also be internally consistent.

Problems may occur when mixing different sources of data using different conventions or approximations (i.e., with different underlying hypotheses). When working at the frontier between inorganic and biological processes (e.g., when comparing abiotic and biotic processes in geochemistry when microbial activity could also be at work in the system), care must be taken not to inadvertently mix standard reduction potentials versus SHE (pH = 0) with formal reduction potentials (pH = 7). Definitions must be clearly expressed and carefully controlled, especially if the sources of data are different and arise from different fields (e.g., picking and mixing data from classical electrochemistry and microbiology textbooks without paying attention to the different conventions on which they are based).

To illustrate the dependency of the reduction potential on pH, one can simply consider the two oxido-reduction equilibria determining the water stability domain in a Pourbaix diagram (Eh-pH plot). When water is subjected to electrolysis by applying a sufficient difference of electrical potential between two electrodes immersed in water, hydrogen is produced at the cathode (reduction of water protons) while oxygen is formed at the anode (oxidation of water oxygen atoms). The same may occur if a reductant stronger than hydrogen (e.g., metallic Na) or an oxidant stronger than oxygen (e.g., F2) comes into contact with water and reacts with it.
In the Eh-pH plot here beside (the simplest possible version of a Pourbaix diagram), the water stability domain (grey surface) is delimited in terms of redox potential by two inclined red dashed lines: the lower line corresponds to the reduction of H+ into H2 (hydrogen evolution), and the upper line to the reduction of O2 into H2O (the reverse of the oxidation of water into oxygen). When solving the Nernst equation for each corresponding reduction reaction (the water oxidation reaction producing oxygen must be reverted), both equations have a similar form, because the number of protons and the number of electrons involved within each reaction are the same and their ratio is one (2 H+/2 e− for H2 and 4 H+/4 e− for O2, respectively), so this ratio simplifies when solving the Nernst equation expressed as a function of pH. The result can be numerically expressed as follows:

Lower line (H+/H2): {\displaystyle E_{\text{red}}=-0.05916\,{\text{pH}}}

Upper line (O2/H2O): {\displaystyle E_{\text{red}}=1.229-0.05916\,{\text{pH}}}

Note that the slopes of the two water stability domain upper and lower lines are the same (−59.16 mV/pH unit), so they are parallel on a Pourbaix diagram. As the slopes are negative, at high pH both hydrogen and oxygen evolution require a much lower reduction potential than at low pH. For the reduction of H+ into H2, the above-mentioned relationship gives −0.414 V at pH 7. For the reduction of O2 into 2 H2O, it gives 0.815 V at pH 7. The offset of −414 mV in {\displaystyle E_{\text{red}}} is the same for both reduction reactions, because they share the same linear relationship as a function of pH and the slopes of their lines are the same. This can be directly verified on a Pourbaix diagram.

For other reduction reactions, the value of the formal reduction potential at pH 7, commonly referred to for biochemical reactions, also depends on the slope of the corresponding line in a Pourbaix diagram, i.e., on the ratio h⁄z of the number of H+ to the number of e− involved in the reduction reaction, and thus on the stoichiometry of the half-reaction. The determination of the formal reduction potential at pH 7 for a given biochemical half-reaction thus requires calculating it with the corresponding Nernst equation as a function of pH. One cannot simply apply an offset of −414 mV to the Eh value (SHE) when the ratio h⁄z differs from 1.

Besides important redox reactions in biochemistry and microbiology, the Nernst equation is also used in physiology for calculating the electric potential of a cell membrane with respect to one type of ion. It can be linked to the acid dissociation constant. The Nernst equation has a physiological application when used to calculate the potential of an ion of charge z across a membrane. This potential is determined using the concentration of the ion both inside and outside the cell:

{\displaystyle E={\frac {RT}{zF}}\ln {\frac {[{\text{ion outside cell}}]}{[{\text{ion inside cell}}]}}=2.3026{\frac {RT}{zF}}\log _{10}{\frac {[{\text{ion outside cell}}]}{[{\text{ion inside cell}}]}}.}

When the membrane is in thermodynamic equilibrium (i.e., no net flux of ions), and if the cell is permeable to only one ion, then the membrane potential must be equal to the Nernst potential for that ion.
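As a worked physiological example of the formula just given, the sketch below evaluates the Nernst potential E = (RT/zF) ln([ion]out/[ion]in) for a monovalent cation at body temperature; the potassium concentrations are textbook-style round numbers used purely for illustration:

import math

R = 8.314462618   # J/(mol*K)
F = 96485.33212   # C/mol

def nernst_potential(z: int, c_out: float, c_in: float, temperature: float = 310.15) -> float:
    """Equilibrium (Nernst) potential in volts for an ion of charge z, at 37 degrees C by default."""
    return (R * temperature / (z * F)) * math.log(c_out / c_in)

# Illustrative K+ gradient: ~5 mM outside vs ~140 mM inside a typical cell
e_k = nernst_potential(z=1, c_out=5.0, c_in=140.0)
print(f"E_K = {e_k * 1e3:.0f} mV")  # about -89 mV at 37 degrees C

The negative sign reflects that the interior must be held negative to balance the outward diffusion of K+ down its concentration gradient.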
When the membrane is permeable to more than one ion, as is inevitably the case, the resting potential can be determined from the Goldman equation, which is a solution of the Goldman-Hodgkin-Katz (GHK) influx equation under the constraint that the total current density driven by the electrochemical force is zero:

{\displaystyle E_{\mathrm {m} }={\frac {RT}{F}}\ln {\left({\frac {\displaystyle \sum _{i}^{N}P_{\mathrm {M} _{i}^{+}}\left[\mathrm {M} _{i}^{+}\right]_{\mathrm {out} }+\displaystyle \sum _{j}^{M}P_{\mathrm {A} _{j}^{-}}\left[\mathrm {A} _{j}^{-}\right]_{\mathrm {in} }}{\displaystyle \sum _{i}^{N}P_{\mathrm {M} _{i}^{+}}\left[\mathrm {M} _{i}^{+}\right]_{\mathrm {in} }+\displaystyle \sum _{j}^{M}P_{\mathrm {A} _{j}^{-}}\left[\mathrm {A} _{j}^{-}\right]_{\mathrm {out} }}}\right)},}

where E_m is the membrane potential, P denotes the membrane permeability for each ion, [M+] and [A−] are the concentrations of the cations and anions respectively, and the subscripts "out" and "in" denote the exterior and interior of the cell.

The potential across the cell membrane that exactly opposes net diffusion of a particular ion through the membrane is called the Nernst potential for that ion. As seen above, the magnitude of the Nernst potential is determined by the ratio of the concentrations of that specific ion on the two sides of the membrane. The greater this ratio, the greater the tendency for the ion to diffuse in one direction, and therefore the greater the Nernst potential required to prevent the diffusion. A similar expression exists that includes r (the absolute value of the transport ratio). This takes transporters with unequal exchanges into account. See: sodium-potassium pump, where the transport ratio would be 2/3, so r equals 1.5 in the formula below. The reason a factor r = 1.5 is inserted here is that the current density driven by the electrochemical force J_e.c.(Na+) + J_e.c.(K+) is no longer zero, but rather J_e.c.(Na+) + 1.5 J_e.c.(K+) = 0 (as for both ions the flux driven by the electrochemical force is compensated by that of the pump, i.e., J_e.c. = −J_pump), altering the constraints for applying the GHK equation. The other variables are the same as above. The following example includes two ions: potassium (K+) and sodium (Na+). Chloride is assumed to be in equilibrium.

{\displaystyle E_{m}={\frac {RT}{F}}\ln {\left({\frac {rP_{\mathrm {K} ^{+}}\left[\mathrm {K} ^{+}\right]_{\mathrm {out} }+P_{\mathrm {Na} ^{+}}\left[\mathrm {Na} ^{+}\right]_{\mathrm {out} }}{rP_{\mathrm {K} ^{+}}\left[\mathrm {K} ^{+}\right]_{\mathrm {in} }+P_{\mathrm {Na} ^{+}}\left[\mathrm {Na} ^{+}\right]_{\mathrm {in} }}}\right)}.}

When chloride (Cl−) is taken into account,

{\displaystyle E_{m}={\frac {RT}{F}}\ln {\left({\frac {rP_{\mathrm {K} ^{+}}\left[\mathrm {K} ^{+}\right]_{\mathrm {out} }+P_{\mathrm {Na} ^{+}}\left[\mathrm {Na} ^{+}\right]_{\mathrm {out} }+P_{\mathrm {Cl} ^{-}}\left[\mathrm {Cl} ^{-}\right]_{\mathrm {in} }}{rP_{\mathrm {K} ^{+}}\left[\mathrm {K} ^{+}\right]_{\mathrm {in} }+P_{\mathrm {Na} ^{+}}\left[\mathrm {Na} ^{+}\right]_{\mathrm {in} }+P_{\mathrm {Cl} ^{-}}\left[\mathrm {Cl} ^{-}\right]_{\mathrm {out} }}}\right)}.}

For simplicity, we will consider a solution of redox-active molecules that undergo a one-electron reversible reaction and have a standard potential of zero, and in which the activities are well represented by the concentrations (i.e., unit activity coefficient). The chemical potential μ_c of this solution is the difference between the energy barriers for taking electrons from, and for giving electrons to, the working electrode that is setting the solution's electrochemical potential. The ratio of oxidized to reduced molecules, [Ox]/[Red], is equivalent to the probability of being oxidized (giving electrons) over the probability of being reduced (taking electrons), which we can write in terms of the Boltzmann factor for these processes:

{\displaystyle {\begin{aligned}{\frac {[\mathrm {Red} ]}{[\mathrm {Ox} ]}}&={\frac {\exp \left(-[{\text{barrier for gaining an electron}}]/kT\right)}{\exp \left(-[{\text{barrier for losing an electron}}]/kT\right)}}\\[6px]&=\exp \left({\frac {\mu _{\mathrm {c} }}{kT}}\right).\end{aligned}}}

Taking the natural logarithm of both sides gives

{\displaystyle \mu _{\mathrm {c} }=kT\ln {\frac {[\mathrm {Red} ]}{[\mathrm {Ox} ]}}.}

If μ_c ≠ 0 at [Ox]/[Red] = 1, we need to add in this additional constant:

{\displaystyle \mu _{\mathrm {c} }=\mu _{\mathrm {c} }^{\ominus }+kT\ln {\frac {[\mathrm {Red} ]}{[\mathrm {Ox} ]}}.}

Dividing the equation by e to convert from chemical potentials to electrode potentials, and remembering that k/e = R/F,[11] we obtain the Nernst equation for the one-electron process Ox + e− ⇌ Red:

{\displaystyle {\begin{aligned}E&=E^{\ominus }-{\frac {kT}{e}}\ln {\frac {[\mathrm {Red} ]}{[\mathrm {Ox} ]}}\\&=E^{\ominus }-{\frac {RT}{F}}\ln {\frac {[\mathrm {Red} ]}{[\mathrm {Ox} ]}}.\end{aligned}}}

Quantities here are given per molecule, not per mole, and so the Boltzmann constant k and the electron charge e are used instead of the gas constant R and Faraday's constant F. To convert to the molar quantities given in most chemistry textbooks, it is simply necessary to multiply by the Avogadro constant: R = kN_A and F = eN_A. The entropy of a molecule is defined as

{\displaystyle S\ {\stackrel {\mathrm {def} }{=}}\ k\ln \Omega ,}

where Ω is the number of states available to the molecule. The number of states must vary linearly with the volume V of the system (here an idealized system is considered for better understanding, so that activities are posited very close to the true concentrations).
The entropy of a molecule is defined as

{\displaystyle S\ {\stackrel {\mathrm {def} }{=}}\ k\ln \Omega ,}

where Ω is the number of states available to the molecule. The number of states must vary linearly with the volume V of the system (here an idealized system is considered for better understanding, so that activities are posited very close to the true concentrations). A fundamental statistical proof of this linearity is beyond the scope of this section, but to see that it is true it is simpler to consider the usual isothermal process for an ideal gas, where the change of entropy ΔS = nR ln(V2/V1) takes place. It follows from the definition of entropy, and from the condition of constant temperature and quantity of gas n, that the change in the number of states must be proportional to the relative change in volume V2/V1. In this sense there is no difference between the statistical properties of ideal gas atoms and those of the dissolved species of a solution with activity coefficients equal to one: the particles freely "hang around", filling the provided volume. The volume is inversely proportional to the concentration c, so we can also write the entropy as

{\displaystyle S=k\ln \ (\mathrm {constant} \times V)=-k\ln \ (\mathrm {constant} \times c).}

The change in entropy from some state 1 to another state 2 is therefore

{\displaystyle \Delta S=S_{2}-S_{1}=-k\ln {\frac {c_{2}}{c_{1}}},}

so that the entropy of state 2 is

{\displaystyle S_{2}=S_{1}-k\ln {\frac {c_{2}}{c_{1}}}.}

If state 1 is at standard conditions, in which c1 is unity (e.g., 1 atm or 1 M), it will merely cancel the units of c2. We can, therefore, write the entropy of an arbitrary molecule A as

{\displaystyle S(\mathrm {A} )=S^{\ominus }(\mathrm {A} )-k\ln[\mathrm {A} ],}

where S⊖ is the entropy at standard conditions and [A] denotes the concentration of A. The change in entropy for a reaction aA + bB → yY + zZ is then given by

{\displaystyle \Delta S_{\mathrm {rxn} }={\big (}yS(\mathrm {Y} )+zS(\mathrm {Z} ){\big )}-{\big (}aS(\mathrm {A} )+bS(\mathrm {B} ){\big )}=\Delta S_{\mathrm {rxn} }^{\ominus }-k\ln {\frac {[\mathrm {Y} ]^{y}[\mathrm {Z} ]^{z}}{[\mathrm {A} ]^{a}[\mathrm {B} ]^{b}}}.}

We define the ratio in the last term as the reaction quotient:

{\displaystyle Q_{r}={\frac {\displaystyle \prod _{j}a_{j}^{\nu _{j}}}{\displaystyle \prod _{i}a_{i}^{\nu _{i}}}}\approx {\frac {[\mathrm {Z} ]^{z}[\mathrm {Y} ]^{y}}{[\mathrm {A} ]^{a}[\mathrm {B} ]^{b}}},}

where the numerator is a product of reaction product activities, a_j, each raised to the power of a stoichiometric coefficient, ν_j, and the denominator is a similar product of reactant activities. All activities refer to a time t. Under certain circumstances (see chemical equilibrium) each activity term such as a_j^{ν_j} may be replaced by a concentration term, [A].
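As a concrete illustration of the reaction quotient, the sketch below evaluates Q_r for a generic reaction aA + bB → yY + zZ from concentrations, assuming ideal (unit activity coefficient) behaviour; the species names and numbers are placeholders.

```python
def reaction_quotient(products, reactants):
    """Q_r = product of [product]^nu divided by product of [reactant]^nu.

    Each argument maps a species name to (concentration, stoichiometric coefficient).
    Activities are approximated by concentrations (ideal, dilute solution).
    """
    q = 1.0
    for conc, nu in products.values():
        q *= conc ** nu
    for conc, nu in reactants.values():
        q /= conc ** nu
    return q

# Hypothetical reaction A + 2B -> Y with made-up concentrations (mol/L):
Q = reaction_quotient(products={"Y": (0.5, 1)},
                      reactants={"A": (0.10, 1), "B": (0.20, 2)})
print(f"Q_r = {Q:.1f}")  # 0.5 / (0.10 * 0.20**2) = 125.0
```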
In an electrochemical cell, the cell potential E is the chemical potential available from redox reactions (E = μ_c/e). E is related to the Gibbs free energy change ΔG only by a constant: ΔG = −zFE, where z is the number of electrons transferred and F is the Faraday constant. There is a negative sign because a spontaneous reaction has a negative Gibbs free energy ΔG and a positive potential E. The Gibbs free energy is related to the entropy by G = H − TS, where H is the enthalpy and T is the temperature of the system. Using these relations, we can now write the change in Gibbs free energy,

{\displaystyle \Delta G=\Delta H-T\Delta S=\Delta G^{\ominus }+kT\ln Q_{r},}

and the cell potential,

{\displaystyle E=E^{\ominus }-{\frac {kT}{ze}}\ln Q_{r}.}

This is the more general form of the Nernst equation. For the redox reaction Ox + z e− → Red,

{\displaystyle Q_{r}={\frac {[\mathrm {Red} ]}{[\mathrm {Ox} ]}},}

and we have:

{\displaystyle {\begin{aligned}E&=E^{\ominus }-{\frac {kT}{ze}}\ln {\frac {[\mathrm {Red} ]}{[\mathrm {Ox} ]}}\\&=E^{\ominus }-{\frac {RT}{zF}}\ln {\frac {[\mathrm {Red} ]}{[\mathrm {Ox} ]}}\\&=E^{\ominus }-{\frac {RT}{zF}}\ln Q_{r}.\end{aligned}}}

The cell potential at standard temperature and pressure (STP) E⊖ is often replaced by the formal potential E⊖′, which includes the activity coefficients of the dissolved species under given experimental conditions (T, P, ionic strength, pH, and complexing agents) and is the potential that is actually measured in an electrochemical cell.

The standard Gibbs free energy ΔG⊖ is related to the equilibrium constant K as follows: [ 12 ]

{\displaystyle \Delta G^{\ominus }=-RT\ln K.}

At the same time, ΔG⊖ is also equal to the product of the total charge (zF) transferred during the reaction and the cell potential (E_cell⊖):

{\displaystyle \Delta G^{\ominus }=-zFE_{cell}^{\ominus }.}

The sign is negative, because the considered system performs the work and thus releases energy. So,

{\displaystyle -zFE_{cell}^{\ominus }=-RT\ln K,}

and therefore:

{\displaystyle E_{cell}^{\ominus }={\frac {RT}{zF}}\ln K.}

Starting from the Nernst equation, one can also demonstrate the same relationship in the reverse way. At chemical equilibrium, or thermodynamic equilibrium, the electrochemical potential E = 0, and therefore the reaction quotient Q_r attains the special value known as the equilibrium constant K_eq, i.e. Q_r = K. Therefore,

{\displaystyle {\begin{aligned}0&=E^{\ominus }-{\frac {RT}{zF}}\ln K\\{\frac {RT}{zF}}\ln K&=E^{\ominus }\\\ln K&={\frac {zFE^{\ominus }}{RT}}\end{aligned}}}

Or at standard state,

{\displaystyle \log _{10}K={\frac {zE^{\ominus }}{\lambda V_{T}}}={\frac {zE^{\ominus }}{0.05916{\text{ V}}}}\quad {\text{at }}T=298.15~{\text{K}},}

where λ = ln 10 ≈ 2.303 and V_T = RT/F is the thermal voltage (about 25.7 mV at 298.15 K). We have thus related the standard electrode potential and the equilibrium constant of a redox reaction.
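The last relation is easy to check numerically. The sketch below converts a standard cell potential into an equilibrium constant at 298.15 K; the two-electron reaction and the 0.20 V value are illustrative, not taken from the text.

```python
import math

R, F, T = 8.314, 96485.0, 298.15

def equilibrium_constant(E_standard, z):
    """K = exp(zFE°/RT), from ln K = zFE°/RT."""
    return math.exp(z * F * E_standard / (R * T))

# Illustrative numbers: a two-electron reaction with E° = 0.20 V.
K = equilibrium_constant(E_standard=0.20, z=2)
print(f"K ≈ {K:.2e}")  # ≈ 10**(2 * 0.20 / 0.05916) ≈ 5.8e6
```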
In dilute solutions, the Nernst equation can be expressed directly in terms of concentrations (since activity coefficients are close to unity). But at higher concentrations, the true activities of the ions must be used. This complicates the use of the Nernst equation, since estimation of the non-ideal activities of ions generally requires experimental measurements. The Nernst equation also only applies when there is no net current flow through the electrode. The activity of ions at the electrode surface changes when there is current flow, and there are additional overpotential and resistive loss terms which contribute to the measured potential. At very low concentrations of the potential-determining ions, the potential predicted by the Nernst equation approaches ±∞. This is physically meaningless because, under such conditions, the exchange current density becomes very low, and there may be no thermodynamic equilibrium necessary for the Nernst equation to hold. The electrode is called unpoised in such a case. Other effects tend to take control of the electrochemical behavior of the system, like the involvement of the solvated electron in electricity transfer and electrode equilibria, as analyzed by Alexander Frumkin and B. Damaskin, [ 13 ] Sergio Trasatti, etc. The expression of time dependence has been established by Karaoglanoff. [ 14 ] [ 15 ] [ 16 ] [ 17 ] The Nernst equation has been involved in the scientific controversy about cold fusion. Fleischmann and Pons, claiming that cold fusion could exist, calculated that a palladium cathode immersed in a heavy water electrolysis cell could achieve up to 10^27 atmospheres of pressure inside the crystal lattice of the metal of the cathode, enough pressure to cause spontaneous nuclear fusion. In reality, only 10,000–20,000 atmospheres were achieved. The American physicist John R. Huizenga claimed their original calculation was affected by a misinterpretation of the Nernst equation. [ 18 ] He cited a paper about Pd–Zr alloys. [ 19 ] The Nernst equation allows the calculation of the extent of reaction between two redox systems and can be used, for example, to assess whether a particular reaction will go to completion or not. At chemical equilibrium, the electromotive forces (emf) of the two half-cells are equal. This allows the equilibrium constant K of the reaction to be calculated and hence the extent of the reaction.
https://en.wikipedia.org/wiki/Nernst_equation
The Nernst heat theorem was formulated by Walther Nernst early in the twentieth century and was used in the development of the third law of thermodynamics. The Nernst heat theorem says that as absolute zero is approached, the entropy change ΔS for a chemical or physical transformation approaches 0. This can be expressed mathematically as follows:

{\displaystyle \lim _{T\to 0}\Delta S=0.}

The above equation is a modern statement of the theorem. Nernst often used a form that avoided the concept of entropy. [ 1 ] Another way of looking at the theorem is to start with the definition of the Gibbs free energy (G), G = H − TS, where H stands for enthalpy. For a change from reactants to products at constant temperature and pressure the equation becomes ΔG = ΔH − TΔS. In the limit of T = 0 the equation reduces to just ΔG = ΔH, a relationship that is supported by experimental data. [ 2 ] However, it is known from thermodynamics that the slope of the ΔG curve (as a function of temperature) is −ΔS. Since this slope reaches the horizontal limit of 0 as T → 0, the implication is that ΔS → 0, which is the Nernst heat theorem. The significance of the Nernst heat theorem is that it was later used by Max Planck to give the third law of thermodynamics, which is that the entropy of all pure, perfectly crystalline homogeneous materials in complete internal equilibrium is 0 at absolute zero.
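As a toy numerical illustration (not from the article), assume the entropy change of some transformation vanishes linearly as T → 0, say ΔS(T) = aT with a made-up coefficient a; then ΔG = ΔH − TΔS approaches ΔH, and the thermodynamic slope −ΔS approaches 0, exactly as the theorem requires.

```python
dH = -50.0e3  # J/mol, made-up enthalpy change of the transformation
a = 2.0       # J/(mol·K^2), made-up coefficient so that ΔS(0) = 0

def delta_S(T):
    return a * T  # model entropy change obeying the Nernst heat theorem

def delta_G(T):
    return dH - T * delta_S(T)  # ΔG = ΔH − TΔS

for T in (300.0, 100.0, 10.0, 1.0, 0.0):
    print(f"T = {T:6.1f} K   ΔS = {delta_S(T):6.1f} J/(mol·K)   ΔG = {delta_G(T):11.1f} J/mol")
# As T -> 0: ΔS -> 0 and ΔG -> ΔH (here -50 kJ/mol), with slope −ΔS -> 0.
```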
https://en.wikipedia.org/wiki/Nernst_heat_theorem
The Nernst–Planck equation is a conservation of mass equation used to describe the motion of a charged chemical species in a fluid medium. It extends Fick's law of diffusion for the case where the diffusing particles are also moved with respect to the fluid by electrostatic forces. [ 1 ] [ 2 ] It is named after Walther Nernst and Max Planck. The Nernst–Planck equation is a continuity equation for the time-dependent concentration c(t, x) of a chemical species:

{\displaystyle {\frac {\partial c}{\partial t}}+\nabla \cdot \mathbf {J} =0,}

where J is the flux. It is assumed that the total flux is composed of three elements: diffusion, advection, and electromigration. This implies that the concentration is affected by an ionic concentration gradient ∇c, flow velocity v, and an electric field E:

{\displaystyle \mathbf {J} =-D\nabla c+c\mathbf {v} +{\frac {Dze}{k_{\text{B}}T}}c\mathbf {E} ,}

where D is the diffusivity of the chemical species, z is the valence of the ionic species, e is the elementary charge, k_B is the Boltzmann constant, and T is the absolute temperature. The electric field may be further decomposed as:

{\displaystyle \mathbf {E} =-\nabla \phi -{\frac {\partial \mathbf {A} }{\partial t}},}

where φ is the electric potential and A is the magnetic vector potential. Therefore, the Nernst–Planck equation is given by:

{\displaystyle {\frac {\partial c}{\partial t}}=\nabla \cdot \left[D\nabla c-c\mathbf {v} +{\frac {Dze}{k_{\text{B}}T}}c\left(\nabla \phi +{\partial {\bf {A}} \over {\partial t}}\right)\right]}

Assuming that the concentration is at equilibrium (∂c/∂t = 0) and the flow velocity is zero, meaning that only the ion species moves, the Nernst–Planck equation takes the form:

{\displaystyle \nabla \cdot \left[D\nabla c+{\frac {Dze}{k_{\text{B}}T}}c\left(\nabla \phi +{\frac {\partial \mathbf {A} }{\partial t}}\right)\right]=0}

Rather than a general electric field, if we assume that only the electrostatic component is significant, the equation is further simplified by removing the time derivative of the magnetic vector potential:

{\displaystyle \nabla \cdot \left[D\nabla c+{\frac {Dze}{k_{\text{B}}T}}c\nabla \phi \right]=0}

Finally, in units of mol/(m²·s) and the gas constant R, one obtains the more familiar form: [ 3 ] [ 4 ]

{\displaystyle \mathbf {J} =-D\left[\nabla c+{\frac {zF}{RT}}c\nabla \phi \right],}

where F is the Faraday constant equal to N_A e, the product of the Avogadro constant and the elementary charge. The Nernst–Planck equation is applied in describing the ion-exchange kinetics in soils. [ 5 ] It has also been applied to membrane electrochemistry. [ 6 ]
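As a rough numerical sketch (not from the article), the electrostatic, zero-advection flux expression can be evaluated on a one-dimensional grid with finite differences. The concentration profile, potential drop, diffusivity, and domain size below are all invented for illustration.

```python
import numpy as np

F = 96485.0   # C/mol
R = 8.314     # J/(mol·K)
T = 298.15    # K
D = 1.33e-9   # m^2/s, illustrative diffusivity of a small monovalent ion
z = +1        # valence

# 1-D grid with made-up concentration and potential profiles
x = np.linspace(0.0, 1e-3, 101)   # 1 mm domain
c = 100.0 + 40.0 * x / x[-1]      # mol/m^3, linear concentration profile
phi = -0.05 * x / x[-1]           # V, linear potential drop of 50 mV

dc_dx = np.gradient(c, x)
dphi_dx = np.gradient(phi, x)

# J = -D [ dc/dx + (zF/RT) * c * dphi/dx ], in mol m^-2 s^-1 (advection neglected)
J = -D * (dc_dx + (z * F / (R * T)) * c * dphi_dx)
print(f"Flux at the midpoint ≈ {J[len(J) // 2]:.2e} mol m^-2 s^-1")
```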
https://en.wikipedia.org/wiki/Nernst–Planck_equation
In topology , the nerve complex of a set family is an abstract complex that records the pattern of intersections between the sets in the family. It was introduced by Pavel Alexandrov [ 1 ] and now has many variants and generalisations, among them the Čech nerve of a cover, which in turn is generalised by hypercoverings . It captures many of the interesting topological properties in an algorithmic or combinatorial way. [ 2 ] Let I {\displaystyle I} be a set of indices and C {\displaystyle C} be a family of sets ( U i ) i ∈ I {\displaystyle (U_{i})_{i\in I}} . The nerve of C {\displaystyle C} is a set of finite subsets of the index set I {\displaystyle I} . It contains all finite subsets J ⊆ I {\displaystyle J\subseteq I} such that the intersection of the U i {\displaystyle U_{i}} whose subindices are in J {\displaystyle J} is non-empty: [ 3 ] : 81 In Alexandrov's original definition, the sets ( U i ) i ∈ I {\displaystyle (U_{i})_{i\in I}} are open subsets of some topological space X {\displaystyle X} . The set N ( C ) {\displaystyle N(C)} may contain singletons (elements i ∈ I {\displaystyle i\in I} such that U i {\displaystyle U_{i}} is non-empty), pairs (pairs of elements i , j ∈ I {\displaystyle i,j\in I} such that U i ∩ U j ≠ ∅ {\displaystyle U_{i}\cap U_{j}\neq \emptyset } ), triplets, and so on. If J ∈ N ( C ) {\displaystyle J\in N(C)} , then any subset of J {\displaystyle J} is also in N ( C ) {\displaystyle N(C)} , making N ( C ) {\displaystyle N(C)} an abstract simplicial complex . Hence N(C) is often called the nerve complex of C {\displaystyle C} . Given an open cover C = { U i : i ∈ I } {\displaystyle C=\{U_{i}:i\in I\}} of a topological space X {\displaystyle X} , or more generally a cover in a site , we can consider the pairwise fibre products U i j = U i × X U j {\displaystyle U_{ij}=U_{i}\times _{X}U_{j}} , which in the case of a topological space are precisely the intersections U i ∩ U j {\displaystyle U_{i}\cap U_{j}} . The collection of all such intersections can be referred to as C × X C {\displaystyle C\times _{X}C} and the triple intersections as C × X C × X C {\displaystyle C\times _{X}C\times _{X}C} . By considering the natural maps U i j → U i {\displaystyle U_{ij}\to U_{i}} and U i → U i i {\displaystyle U_{i}\to U_{ii}} , we can construct a simplicial object S ( C ) ∙ {\displaystyle S(C)_{\bullet }} defined by S ( C ) n = C × X ⋯ × X C {\displaystyle S(C)_{n}=C\times _{X}\cdots \times _{X}C} , n-fold fibre product. This is the Čech nerve. [ 4 ] By taking connected components we get a simplicial set , which we can realise topologically: | S ( π 0 ( C ) ) | {\displaystyle |S(\pi _{0}(C))|} . The nerve complex N ( C ) {\displaystyle N(C)} is a simple combinatorial object. Often, it is much simpler than the underlying topological space (the union of the sets in C {\displaystyle C} ). Therefore, a natural question is whether the topology of N ( C ) {\displaystyle N(C)} is equivalent to the topology of ⋃ C {\displaystyle \bigcup C} . In general, this need not be the case. For example, one can cover any n -sphere with two contractible sets U 1 {\displaystyle U_{1}} and U 2 {\displaystyle U_{2}} that have a non-empty intersection, as in example 1 above. In this case, N ( C ) {\displaystyle N(C)} is an abstract 1-simplex, which is similar to a line but not to a sphere. However, in some cases N ( C ) {\displaystyle N(C)} does reflect the topology of X . 
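The definition is directly computable for finite covers. As a minimal sketch (using a discretized stand-in for open arcs), the following code builds the nerve of a cover of the circle by three pairwise-intersecting arcs, the configuration taken up in the next example; it recovers three vertices and three edges but no triangle, i.e. the boundary of a 2-simplex.

```python
from itertools import combinations

def nerve(cover):
    """Nerve of a finite set family: all index tuples J whose members have
    a non-empty common intersection (an abstract simplicial complex)."""
    idx = list(cover)
    simplices = []
    for k in range(1, len(idx) + 1):
        for J in combinations(idx, k):
            if set.intersection(*(cover[i] for i in J)):
                simplices.append(J)
    return simplices

# Discretized circle: 360 sample points; three "open arcs" of 160 degrees each,
# rotated by 120 degrees, so they intersect pairwise but have no triple overlap.
def arc(start, width=160, n=360):
    return {p for p in range(n) if (p - start) % n < width}

cover = {0: arc(0), 1: arc(120), 2: arc(240)}
print(nerve(cover))
# [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]  -- a hollow triangle, like the circle
```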
For example, if a circle is covered by three open arcs, intersecting in pairs as in Example 2 above, then N ( C ) {\displaystyle N(C)} is a 2-simplex (without its interior) and it is homotopy-equivalent to the original circle. [ 5 ] A nerve theorem (or nerve lemma ) is a theorem that gives sufficient conditions on C guaranteeing that N ( C ) {\displaystyle N(C)} reflects, in some sense, the topology of ⋃ C {\displaystyle \bigcup C} . A functorial nerve theorem is a nerve theorem that is functorial in an appropriate sense, which is, for example, crucial in topological data analysis . [ 6 ] The basic nerve theorem of Jean Leray says that, if any intersection of sets in N ( C ) {\displaystyle N(C)} is contractible (equivalently: for each finite J ⊂ I {\displaystyle J\subset I} the set ⋂ i ∈ J U i {\displaystyle \bigcap _{i\in J}U_{i}} is either empty or contractible; equivalently: C is a good open cover ), then N ( C ) {\displaystyle N(C)} is homotopy-equivalent to ⋃ C {\displaystyle \bigcup C} . There is a discrete version, which is attributed to Borsuk . [ 7 ] [ 3 ] : 81, Thm.4.4.4 Let K 1 ,...,K n be abstract simplicial complexes , and denote their union by K . Let U i = || K i || = the geometric realization of K i , and denote the nerve of { U 1 , ... , U n } by N . If, for each nonempty J ⊂ I {\displaystyle J\subset I} , the intersection ⋂ i ∈ J U i {\displaystyle \bigcap _{i\in J}U_{i}} is either empty or contractible, then N is homotopy-equivalent to K . A stronger theorem was proved by Anders Bjorner . [ 8 ] if, for each nonempty J ⊂ I {\displaystyle J\subset I} , the intersection ⋂ i ∈ J U i {\displaystyle \bigcap _{i\in J}U_{i}} is either empty or (k-|J|+1)-connected , then for every j ≤ k , the j -th homotopy group of N is isomorphic to the j -th homotopy group of K . In particular, N is k -connected if-and-only-if K is k -connected. Another nerve theorem relates to the Čech nerve above: if X {\displaystyle X} is compact and all intersections of sets in C are contractible or empty, then the space | S ( π 0 ( C ) ) | {\displaystyle |S(\pi _{0}(C))|} is homotopy-equivalent to X {\displaystyle X} . [ 9 ] The following nerve theorem uses the homology groups of intersections of sets in the cover. [ 10 ] For each finite J ⊂ I {\displaystyle J\subset I} , denote H J , j := H ~ j ( ⋂ i ∈ J U i ) = {\displaystyle H_{J,j}:={\tilde {H}}_{j}(\bigcap _{i\in J}U_{i})=} the j -th reduced homology group of ⋂ i ∈ J U i {\displaystyle \bigcap _{i\in J}U_{i}} . If H J,j is the trivial group for all J in the k -skeleton of N( C ) and for all j in {0, ..., k -dim( J )}, then N( C ) is "homology-equivalent" to X in the following sense:
https://en.wikipedia.org/wiki/Nerve_complex
Nerve glide , also known as nerve flossing or nerve stretching, is an exercise that stretches nerves. It facilitates the smooth and regular movement of peripheral nerves in the body. It allows the nerve to glide freely along with the movement of the joint and relax the nerve from compression. Nerve gliding cannot proceed with injuries or inflammations as the nerve is trapped by the tissue surrounding the nerve near the joint. Thus, nerve gliding exercise is widely used in rehabilitation programs and during the post-surgical period. Radial , median , sciatic , and ulnar nerves require nerve gliding exercise during the rehabilitation period. The most common conditions that require nerve gliding exercise are carpal tunnel syndrome , cubital tunnel syndrome , radial neuropathy , and so on. Therapists prescribe different nerve gliding exercises in order to maximize the effects by correctly diagnosing the symptoms. Patients feel less pain when there is stretch in nerves, and there should be no aggressive exercise. Without correctly diagnosing symptoms and treatments, it worsens the conditions and nerves. Nerve gliding exercises should be done several times daily, depending on the issue. As patients continuously do nerve gliding exercises, they start to feel less pain after a few weeks. Carpal tunnel syndrome (CTS) is a condition that induces pain when the median nerve passes through the carpal tunnel in the wrist. It occurs when median nerves get irritated, compress, and strengthen. CTS evokes symptoms, including pain, paresthesia , and muscle atrophy . [ 1 ] This further leads to chronic pain and economic difficulties for patients as it requires work absence and surgical treatment. [ citation needed ] Nerve gliding exercise becomes one of the optimal CTS treatments by assisting nerve mobilization. Restoring nerve mobilization would relieve edema and restore adhesion in the carpal tunnel. [ 2 ] According to the research, nerve gliding exercise has reduced the pain, decreased sensitive distal latency, and improved the functions that require force to grab. However, inappropriate nerve gliding exercises would worsen the conditions. Neural mobilization via nerve gliding should avoid excessive median nerve stretching when extending fingers in wrist extensions or when otherwise not advised. [ 3 ] Nerve gliding exercise is not an optimal method for every patient. There is limited evidence on the effectiveness of neural gliding. However, the addition of nerve gliding exercise in conservative care accelerates the rehabilitation process and avoids surgical treatment. Further research is required to study the effectiveness of nerve glide physiotherapy and to determine groups that tend to respond better. [ 4 ] Sciatica is low back pain that can extend to the feet. [ 5 ] This nerve pain is caused by nerve root irritation or constriction. Sciatica is known as an extremely painful symptom. Nerve glides are a common option for sciatica due to their cost-effectiveness. After performing nerve glides, the Numeric Pain Rating Score (NPRS) rated by patients improved, indicating a reduction in the pain. The nerve glide reduces acute sciatica and improves the range of motion of the hip. When nerve glides were performed along with other therapies, it resulted in a greater reduction in pain. [ 6 ] However, the research indicates that there is no statistically significant difference in the results among patients who were treated with nerve glides and other conventional treatments. 
[ 7 ] Neural gliding is used for the rehabilitation of nerve-related neck and arm pain. The pain initiates from the neck, expanding to the arm. Nerve gliding physical therapy is beneficial in reducing pain intensity, bringing short-term improvements. [ 8 ] This treatment was found to manage neural tissue through specific postures and movements of the parts in pain. The stretch reduces nerve mechanosensitive that relieves discomforts, eventually leading to the normal function of the body. However, the long-term effects of nerve gliding exercises still remain unclear. [ 9 ] Cubital tunnel syndrome is a condition that induces pains when ulnar nerves are stretched, pressed, and irritated. This syndrome is also known as " ulnar nerve entrapment ". Similar to carpal tunnel syndrome, cubital tunnel syndrome evokes symptoms, including pain, numbness, tingling, and weakness in the hand. [ 10 ] Patients with cubital tunnel syndrome start to lose the power of their hands, which becomes hard to grip. The irritation occurs near the elbow, where the cubital tunnel is located. The ulnar nerve on the cubital tunnel is susceptible as the cubital tunnel is made up of soft tissue. Therefore, strong pressure leads to numbness. [ 11 ] Ulnar nerve gliding is recommended to reduce symptoms of cubital tunnel syndrome. Patients with ulnar nerve gliding should stay away from the holding position. Rather, patients must repeat nerve gliding with the range of movement. There are various ulnar nerve gliding methods, which include elbow flexion , wrist extension, head tilt, and arm flexion. [ citation needed ] Nerve gliding can reduce and strengthen connective tissues resulting in increased hamstring extensibility and passive stiffness. Passive stiffness refers to the resistance elongation that occurs in the joint, tendon , and connective tissue . The acute increase in hamstring extensibility can be seen right after nerve gliding intervention at the maximum range of motion. Nerve glide intervention is found to be slightly more effective than static stretching. The absolute static nerve extensibility was five times greater than the static stretching. While nerve gliding enhances the ability of the hamstring to stretch, the static stretch is more effective in terms of stress relaxation. Unlike static stretching, dynamic stretching shows similar outcomes to nerve gliding exercise. For hamstring flexibility, dynamic stretching targets low extremity muscles, while nerve gliding exercise targets posterior low extremity muscles and neural structures. Although both nerve gliding exercises and dynamic stretching do not lead to huge changes in exercise performance, both stretching methods are essential for pre-exercise stretches to avoid injuries. [ 12 ] An alternative treatment option for carpal tunnel syndrome is low-level laser therapy (LLLT). The research shows that there is a statistically significant improvement in both LLLT and nerve gliding exercises. However, the difference in the effectiveness of LLLT and nerve gliding is not considered to replace nerve gliding physiotherapy . Considering the feasibility of those two treatments in terms of machine availability and cost-effectiveness, nerve gliding exercise remains to be prescribed widely. [ 13 ] The injured or entrapped nerves are sensitive to external stimuli. Thus, nerve gliding, or nerve flossing, must be stopped, or range of motion (ROM) must be reduced once the patient feels pain. Patients' pain must be checked to avoid further irritation and injuries. 
Continuous nerve gliding enhances joint movement and speeds rehabilitation. If there is no further progress in rehabilitation, patients should see doctors or therapists for a correct diagnosis. Nerve gliding exercise is not recommended for acute symptoms or severe damage, because in those conditions it can pull on nerve roots, worsening the symptoms and nerve irritation. When nerve glides are performed by injured athletes, they have been shown to have no adverse effects on sports performance. The measurements of sports performance include bilateral hamstring flexibility, vertical jump height, shuttle run, and dash sprint. This physiotherapy is safe when performed with caution (not pushing through pain). Athletes can incorporate nerve glides into warm-up sessions. [ 14 ]
https://en.wikipedia.org/wiki/Nerve_glide
A nerve guidance conduit (also referred to as an artificial nerve conduit or artificial nerve graft , as opposed to an autograft ) is an artificial means of guiding axonal regrowth to facilitate nerve regeneration and is one of several clinical treatments for nerve injuries . When direct suturing of the two stumps of a severed nerve cannot be accomplished without tension, the standard clinical treatment for peripheral nerve injuries is autologous nerve grafting . Due to the limited availability of donor tissue and functional recovery in autologous nerve grafting, neural tissue engineering research has focused on the development of bioartificial nerve guidance conduits as an alternative treatment, especially for large defects. Similar techniques are also being explored for nerve repair in the spinal cord but nerve regeneration in the central nervous system poses a greater challenge because its axons do not regenerate appreciably in their native environment. [ 1 ] The creation of artificial conduits is also known as entubulation because the nerve ends and intervening gap are enclosed within a tube composed of biological or synthetic materials. [ 2 ] Whether the conduit is in the form of a biologic tube, synthetic tube or tissue-engineered conduit, it should facilitate neurotropic and neurotrophic communication between the proximal and distal ends of the nerve gap, block external inhibitory factors, and provide a physical guidance for axonal regrowth. [ 3 ] The most basic objective of a nerve guidance conduit is to combine physical, chemical, and biological cues under conditions that will foster tissue formation. [ 4 ] Materials that have been used to make biologic tubes include blood vessels and skeletal muscles, while nonabsorbable and bioabsorbable synthetic tubes have been made from silicone and polyglycolide respectively. [ 5 ] Tissue-engineered nerve guidance conduits are a combination of many elements: scaffold structure, scaffold material, cellular therapies, neurotrophic factors and biomimetic materials. The choice of which physical, chemical and biological cues to use is based on the properties of the nerve environment, which is critical in creating the most desirable environment for axon regeneration. The factors that control material selection include biocompatibility , biodegradability , [ 6 ] mechanical integrity, [ 3 ] controllability during nerve growth, implantation and sterilization. In tissue engineering , the three main levels of scaffold structure are considered to be: The superstructure of a conduit or scaffold is important for simulating in vivo conditions for nerve tissue formation. The extracellular matrix, which is mainly responsible for directing tissue growth and formation, has a complex superstructure created by many interwoven fibrous molecules. Ways of forming artificial superstructure include the use of thermo-responsive hydrogels, longitudinally oriented channels, longitudinally oriented fibers, stretch-grown axons, and nanofibrous scaffolds. In traumatic brain injury (TBI), a series of damaging events is initiated that lead to cell death and overall dysfunction, which cause the formation of an irregularly-shaped lesion cavity. [ 8 ] The resulting cavity causes many problems for tissue-engineered scaffolds because invasive implantation is required, and often the scaffold does not conform to the cavity shape. 
In order to get around these difficulties, thermo-responsive hydrogels have been engineered to undergo solution-gelation (sol-gel) transitions, which are caused by differences in room and physiological temperatures, to facilitate implantation through in situ gelation and conformation to cavity shape caused, allowing them to be injected in a minimally invasively manner. [ 8 ] Methylcellulose (MC) is a material with well-defined sol-gel transitions in the optimal range of temperatures. MC gelation occurs because of an increase in intra- and inter-molecular hydrophobic interactions as the temperature increases. [ 8 ] The sol-gel transition is governed by the lower critical solution temperature (LCST), which is the temperature at which the elastic modulus equals the viscous modulus. The LCST must not exceed physiological temperature (37 °C) if the scaffold is to gel upon implantation, creating a minimally invasive delivery. Following implantation into a TBI lesion cavity or peripheral nerve guidance conduit, MC elicits a minimal inflammatory response. [ 8 ] It is also very important for minimally invasive delivery that the MC solution has a viscosity at temperatures below its LCST, which allows it to be injected through a small gauge needle for implantation in in vivo applications. [ 8 ] MC has been successfully used as a delivery agent for intra-optical and oral pharmaceutical therapies. [ 8 ] Some disadvantages of MC include its limited propensity for protein adsorption and neuronal cellular adhesion making it a non-bioactive hydrogel. Due to these disadvantages, use of MC in neural tissue regeneration requires attaching a biologically active group onto the polymer backbone in order to enhance cell adhesion. Another thermo-responsive gel is one that is formed by combining chitosan with glycerophosphate (GP) salt. [ 9 ] This solution experiences gelation at temperatures above 37 °C. Gelation of chitosan/GP is rather slow, taking half an hour to initially set and 9 more hours to completely stabilize. Gel strength varies from 67 to 1572 Pa depending on the concentration of chitosan; the lower end of this range approaches the stiffness of brain tissue. Chitosan/GP has shown success in vitro , but the addition of polylysine is needed to enhance nerve cell attachment. Polylysine was covalently bonded to chitosan in order to prevent it from diffusing away. Polylysine was selected because of its positive nature and high hydrophilicity, which promotes neurite growth. Neuron survival was doubled, though neurite outgrowth did not change with the added polylysine. [ 9 ] Longitudinally oriented channels are macroscopic structures that can be added to a conduit in order to give the regenerating axons a well-defined guide for growing straight along the scaffold. In a scaffold with microtubular channel architecture, regenerating axons are able to extend through open longitudinal channels as they would normally extend through endoneurial tubes of peripheral nerves. [ 10 ] Additionally, the channels increase the surface area available for cell contact. The channels are usually created by inserting a needle, wire, or second polymer solution within a polymer scaffold; after stabilizing the shape of the main polymer, the needle, wire, or second polymer is removed in order to form the channels. Typically multiple channels are created; however, the scaffold can consist of just one large channel, which is simply one hollow tube. A molding technique was created by Wang et al. 
for forming a nerve guidance conduit with a multi-channel inner matrix and an outer tube wall from chitosan. [ 10 ] In their 2006 study, Wang et al. threaded acupuncture needles through a hollow chitosan tube, where they are held in place by fixing, on either end, patches created using CAD. A chitosan solution is then injected into the tube and solidified, after which the needles are removed, creating longitudinally oriented channels. A representative scaffold was then created for characterization with 21 channels using acupuncture needles of 400 μm in diameter. Upon investigation under a microscope, the channels were found to be approximately circular with slight irregularities; all channels were aligned with the inner diameter of the outer tube wall. It was confirmed by micro-CT imaging that the channels went through the entire length of the scaffold. Under water absorption, the inner and outer diameters of the scaffold became larger, but the channel diameters did not vary significantly, which is necessary for maintaining the scaffold shape that guides neurite extension. The inner structure provides an increase in compressive strength compared to a hollow tube alone, which can prevent collapse of the scaffold onto growing neurites. Neuro-2a cells were able to growth on the inner matrix of the scaffold, and they oriented along the channels. Although this method has only been tested on chitosan, it can be tailored to other materials. [ 10 ] lyophilizing and wire-heating process is another method of creating longitudinally oriented channels, developed by Huang et al. (2005). [ 11 ] A chitosan and acetic acid solution was frozen around nickel-copper (Ni-Cu) wires in a liquid nitrogen trap; subsequently the wires were heated and removed. Ni-Cu wires were chosen because they have a high resistance level. Temperature-controlled lyophilizers were used to sublimate the acetic acid. There was no evidence of the channels merging or splitting. After lyophilizing, scaffold dimensions shrunk causing channels to be a bit smaller than the wire used. The scaffolds were neutralized to a physiological pH value using a base, which had dramatic effects on the porous structure. [ 11 ] Weaker bases kept the porous structure uniform, but stronger base made it uncontrollable. The technique used here can be slightly modified to accommodate other polymers and solvents. [ 11 ] Another way to create longitudinally oriented channels is to create a conduit from one polymer with embedded longitudinally oriented fibers from another polymer; then selectively dissolve the fibers to form longitudinally oriented channels. Polycaprolactone (PCL) fibers were embedded in a (Hydroxyethyl)methacrylate (HEMA) scaffold. PCL was chosen over poly (lactic acid) (PLA) and poly (lactic-co-glycolic acid) (PLGA), because it is insoluble in HEMA but soluble in acetone . This is important because HEMA was used for the main conduit material and acetone was used to selectively dissolve the polymer fibers. Extruded PCL fibers were inserted into a glass tube and the HEMA solution was injected. The number of channels created was consistent from batch to batch and the variations in fiber diameter could be reduced by creating a more controlled PCL fiber extrusion system. [ 12 ] The channels formed were confirmed to be continuous and homogeneous by examination of porosity variations. This process is safe, reproducible and has controllable dimensions. 
[ 12 ] In a similar study conducted by Yu and Shoichet (2005), HEMA was copolymerized with AEMA to create a P(HEMA-co-AMEA) gel. Polycaprolactone (PCL) fibers were embedded in the gel, and then selectively dissolved by acetone with sonication to create channels. It was found that HEMA in mixture with 1% AEMA created the strongest gels. [ 13 ] When compared to scaffolds without channels, the addition of 82–132 channels can provide an approximately 6–9 fold increase in surface area, which may be advantageous for regeneration studies that depend on contact-mediated cues. [ 13 ] Itoh et al. (2003) developed a scaffold consisting of a single large longitudinally oriented channel was created using chitosan tendons from crabs. [ 14 ] Tendons were harvested from crabs (Macrocheira kaempferi) and repeatedly washed with sodium hydroxide solution to remove proteins and to deacetylate the tendon chitin , which subsequently became known as tendon chitosan. A stainless steel bar with triangular-shaped cross-section (each side 2.1 mm long) was inserted into a hollow tendon chitosan tube of circular-shaped cross-section (diameter: 2 mm; length: 15 mm). When comparing the circular-shaped and triangular-shaped tubes, it was found that the triangular tubes had improved mechanical strength, held their shape better, and increased the surface area available. [ 14 ] While this is an effective method for creating a single channel, it does not provide as much surface area for cellular growth as the multi-channel scaffolds. Newman et al. (2006) inserted conductive and non-conductive fibers into a collagen-TERP scaffold (collagen cross-linked with a terpolymer of poly(N-isopropylacrylamide) (PNiPAAm) ). The fibers were embedded by tightly wrapping them on a small glass slide and sandwiching a collagen-TERP solution between it and another glass slide; spacers between the glass slides set the gel thickness to 800 μm. The conductive fibers were carbon fiber and Kevlar , and the nonconductive fibers were nylon-6 and tungsten wire. Neurites extend in all directions in thick bundles on the carbon fiber; however with the other three fibers, neurites extended in fine web-like conformations. The neurites showed no directional growth on the carbon and Kevlar fibers, but they grew along the nylon-6 fibers and to some extent along the tungsten wire. The tungsten wire and nylon-6 fiber scaffolds had neurites grow into the gel near the fiber-gel interface in addition to growing along the surface. All fiber gels except Kevlar showed a significant increase in neurite extension compared to non-fiber gels. There was no difference in the neurite extension between the non-conductive and the conductive fibers. [ 15 ] In their 2005 study, Cai et al. added Poly (L-lactic acid) (PLLA) microfilaments to hollow poly(lactic acid) (PLA) and silicon tubes. The microfiber guidance characteristics were inversely related to the fiber diameter with smaller diameters promoting better longitudinally oriented cell migration and axonal regeneration. The microfibers also promoted myelination during peripheral nerve repair. [ 16 ] Mature axon tracts has been demonstrated to experience growth when mechanically stretched at the central portion of the axon cylinder. [ 17 ] Such mechanical stretch was applied by a custom axon stretch-growth bioreactor composed of four main components: custom-designed axon expansion chamber, linear motion table, stepper motor and controller. 
[ 17 ] The nerve tissue culture is placed within the expansion chamber with a port for gas exchange and a removable stretching frame, which is able to separate two groups of somas (neuron cell bodies) and thus stretch their axons. [ 17 ] Collagen gel was used to promote the growth of larger stretch-grown axon tracts that were visible to the unaided eye. There are two reasons for the growth enhancement due to the collagen coating: 1) the culture became hydrophobic after the collagen dried, which permitted a denser concentration of neurons to grow, and 2) the collagen coating created an unobstructed coating across the two elongation substrates. [ 17 ] Examination by scanning electron microscope and TEM showed no signs of axon thinning due to stretch, and the cytoskeleton appeared to be normal and intact. The stretch-grown axon tracts were cultured on a biocompatible membrane, which could be directly formed into a cylindrical structure for transplantation, eliminating the need to transfer axons to a scaffold after growth was complete. The stretch-grown axons were able to grow at an unprecedented rate of 1 cm/day after only 8 days of acclimation, which is much greater than the 1 mm/day maximal growth rate as measured for growth cone extension. The rate of 1 mm/day is also the average transport speed for structural elements such as neurofilaments. [ 17 ] Research on nanoscale fibers attempts to mimic the in vivo extracellular environment in order to promote directional growth and regeneration. [ 7 ] Three distinct methods for forming nanofibrous scaffolds are self-assembly, phase separation and electrospinning. However, there are many other methods for forming nanofibrous scaffolds. Self-assembly of nanofibrous scaffolds is able to occur only when the fibers themselves are engineered for self-assembly. One common way to drive the self-assembly of scaffold fibers is to use amphiphilic peptides so that in water the hydrophobic moiety drives the self-assembly. [ 7 ] Carefully calculated engineering of the amphiphilic peptides allows for precise control over the self-assembled matrix. Self-assembly is able to create both ordered and unordered topographies. Phillips et al. (2005) developed and tested in vitro and in vivo a self-aligned collagen - Schwann cell matrix, which allowed DRG neurite extension alignment in vitro . Collagen gels have been used extensively as substrates for three-dimensional tissue culture . Cells are able to form integrin-mediated attachments with collagen, which initiates cytoskeleton assembly and cell motility. As cells move along the collagen fibers they generate forces that contract the gel. When the collagen fibers are tethered at both ends, cell-generated forces create uniaxial strain, causing the cells and collagen fibers to align. The advantages of this matrix are its simplicity and speed of preparation. [ 2 ] Soluble plasma fibronectin can also self-assemble into stable insoluble fibers when put under direct mechanical shearing within a viscous solution. Phillips et al. (2004) investigated a new method of shear aggregation that causes an improved aggregation. [ 18 ] The mechanical shearing was created by dragging out a 0.2 ml bolus to 3 cm with forceps; fibronectin aggregates into insoluble fibers at the rapidly moving interface in an ultrafiltration cell. The proposed mechanism for this fiber aggregation is protein extension and elongation under mechanical shear force, which leads to lateral packing and protein aggregation of fibers. Phillips et al. 
showed that mechanical shear produced by stretching a high viscosity fibronectin gel causes substantial changes in its structure and that when applied through uniaxial extension, a viscous fibronectin gel forms oriented fibrous fibronectin aggregates; additionally, the fibrous aggregates have a decreased solubility and can support the various cell types in vitro. [ 18 ] Phase separation allows for three-dimensional sub-micrometre fiber scaffolds to be created without the use of specialized equipment. The five steps involved in phase separation are polymer dissolution, phase separation and gelation, solvent extraction from the gel, freezing and freeze drying in water. [ 7 ] The final product is a continuous fiber network. Phase separation can be modified to fit many different applications, and pore structure can be varied by using different solvents, which can change the entire process from liquid–liquid to solid–liquid. Porosity and fiber diameter can also be modified by varying the initial concentration of the polymer; a higher initial concentration leads to less pores and larger fiber diameters. This technique can be used to create networks of fibers with diameters reaching type I collagen fiber diameters. The fibrous network created is randomly oriented and so far work has not been done to attempt to organize the fibers. Phase separation is a widely used technique for creating highly porous nanofibrous scaffolds with ease. [ 7 ] Electrospinning provides a robust platform for development of synthetic nerve guidance conduits. Electrospinning can serve to create scaffolds at controlled dimensions with varying chemistry and topography. Furthermore, different materials can be encapsulated within fibers including particles, growth factors, and even cells. [ 19 ] Electrospinning creates fibers by electrically charging a droplet of polymer melt or solution and suspending it from a capillary. Then, an electric field is applied at one end of the capillary until the charge exceeds the surface tension, creating a polymer jet that elongates and thins. This polymer jet discharges as a Taylor cone, leaving behind electrically charged polymers, which are collected on a grounded surface as the solvent as the solvent evaporates from the jets. [ 20 ] Fibers have been spun with diameters ranging from less than 3 nm to over 1 μm. The process is affected by system parameters such as polymer type, polymer molecular weight, and solution properties and by process parameters such as flow rate, voltage, capillary diameter, distance between the collector and the capillary, and motion of the collector. [ 21 ] The fibrous network created is unordered and contains a high surface-to-volume ratio as a result of a high porosity; a large network surface area is ideal for growth and transport of wastes and nutrients in neural tissue engineering. [ 7 ] The two features of electrospun scaffolds that are advantageous for neural tissue engineering are the morphology and architecture, which closely mimics the ECM, and the pores, which are the correct range of sizes that allows nutrient exchange but prevents in growth of glial scar tissue (around 10 μm). [ 22 ] Random electrospun PLLA scaffolds have been demonstrated to have increased cell adhesion, which may be due to an increased surface roughness. [ 22 ] Chemically modified electrospun fiber mats have also been shown to influence neural stem cell differentiation and increase cell proliferation. 
[ 20 ] In the past decade, scientists have also developed numerous methods for production of aligned nanofiber scaffolds, which serve to provide additional topographic cues to cells. [ 23 ] This is advantageous because large scale three-dimensional aligned scaffolds cannot be created easily using traditional fabrication techniques. [ 7 ] In a study conducted by Yang et al. (2005), aligned and random electrospun poly (L-lactic acid) (PLLA) microfibrous and nanofibrous scaffolds were created, characterized, and compared. Fiber diameters were directly proportional to the initial polymer concentration used for electrospinning; the average diameter of aligned fibers was smaller than that of random fibers under identical processing conditions. It was shown that neural stem cells elongated parallel to the aligned electrospun fibers. [ 21 ] The aligned nanofibers had a longer average neurite length compared to aligned microfibers, random microfibers, and random nanofibers. In addition, more cells differentiated on aligned nanofibers than aligned microfibers. [ 21 ] Thus, the results of this study demonstrated that aligned nanofibers may be more beneficial than nonaligned fibers or microfibers for promoting nerve regeneration. Microstructure and nanostructure, along with superstructure are three main levels of scaffold structure that deserve consideration when creating scaffold topography. [ 7 ] While the superstructure refers to the overall shape of the scaffold, the microstructure refers to the cellular level structure of the surface and the nanostructure refers to the subcellular level structure of the surface. All three levels of structure are capable of eliciting cell responses; however, there is significant interest in the response of cells to nanoscale topography motivated by the presence of numerous nanoscale structures within the extracellular matrix. [ 7 ] There are a growing number of methods for the manufacture of micro- and nanostructures (many originating from the semiconductor industry) allowing for the creation of various topographies with controlled size, shape, and chemistry. [ 24 ] Physical cues are formed by creating an ordered surface structure at the level of the microstructure and/or nanostructure. Physical cues on the nanoscale have been shown to modulate cell adhesion, migration, orientation, contact inhibition, gene expression, and cytoskeletal formation. This allows for the direction of cell processes such as proliferation, differentiation, and spreading. [ 24 ] There are numerous methods for the manufacture of micro- and nanoscale topographies, which can be divided into those that create ordered topographies and those that create unordered topographies. Ordered topographies are defined as patterns that are organized and geometrically precise. [ 7 ] Though there are many methods for creating ordered topographies, they are usually time-consuming, requiring skill and experience and the use of expensive equipment. [ 7 ] Photolithography involves exposing a light source to a photoresist-coated silicon wafer; a mask with the desired pattern is place between the light source and the wafer, thereby selectively allowing light to filter through and create the pattern on the photoresist . Further development of the wafer brings out the pattern in the photoresist. Photolithography performed in the near-UV is often viewed as the standard for fabricating topographies on the micro-scale. 
[ 7 ] However, because the lower limit for size is a function of the wavelength, this method cannot be used to create nanoscale features. [ 7 ] In their 2005 study, Mahoney et al. created organized arrays of polyimide channels (11 μm in height and 20–60 μm in width) were created on a glass substrate by photolithography. [ 25 ] Polyimide was used because it adheres to glass well, is chemically stable in aqueous solution, and is biocompatible. It is hypothesized that the microchannels limited the range of angles that cytoskeletal elements within the neurite growth cones could accumulate, assemble, and orient. [ 25 ] There was a significant decrease in the number of neurites emerging from the soma; however, there was less decrease as the range of angles over which the neurites emerged was increased. Also, the neurites were on average two times longer when the neurons were cultured on the microchannels versus the controls on a flat surface; this could be due to a more efficient alignment of filaments. [ 25 ] In electron beam lithography (EBL), an electron-sensitive resist is exposed to a beam of high-energy electrons. There is the choice of a positive or negative type resist; however, lower feature resolution can be obtained with negative resists. [ 26 ] Patterns are created by programming the beam of electrons for the exact path to follow along the surface of the material. Resolution is affected by other factors such as electron scattering in the resist and backscattering from the substrate. EBL can create single surface features on the order of 3–5 nm. If multiple features are required over a large surface area, as is the case in tissue engineering, the resolution drops and features can only be created as small as 30–40 nm, and the resist development begins to weigh more heavily on pattern formation. [ 26 ] To prevent dissolution of the resist, ultrasonic agitation can be used to overcome intermolecular forces. In addition, isopropyl alcohol (IPA) helps develop high-density arrays. EBL can become a quicker and less costly process by replicating nanometer patterns in polymeric materials; the replication process has been demonstrated with polycaprolactone (PCL) using hot embossing and solvent casting . [ 7 ] In a study conducted by Gomez et al. (2007), microchannels 1 and 2 μm wide and 400 and 800 nm deep created by EBL on PDMS were shown to enhance axon formation of hippocampal cells in culture more so than immobilized chemical cues. [ 26 ] X-ray lithography is another method for forming ordered patterns that can be used to investigate the role that topography plays in promoting neuritogenesis. The mask parameters determine the pattern periodicity, but ridge width and depth are determined by the etching conditions. In a study, ridges were created with periods ranging from 400 through 4000 nm, widths ranging from 70 through 1900 nm, and a groove depth of 600 nm; developing neurites demonstrated contact guidance with features as small as 70 nm and greater than 90% of the neurites were within 10 degrees of parallel alignment with the ridges and grooves. [ 27 ] There was not a significant difference in orientation with respect to the feature sizes used. The number of neurites per cell was constrained by the ridges and grooves, producing bipolar rather than branching phenotypes. [ 27 ] Unordered topographies are generally created by processes that occur spontaneously during other processing; the patterns are random in orientation and organization with imprecise or no control over feature geometry. 
[ 7 ] The advantage to creating unordered topographies over ordered is that the processes are often less time-consuming, less expensive, and do not require great skill and experience. Unordered topographies can be created by polymer demixing, colloidal lithography and chemical etching. In polymer demixing , polymer blends experience spontaneous phase separation; it often occurs during conditions such as spin casting onto silicon wafers. Features that can be created by this method include nanoscale pits, islands, and ribbons, which can be controlled to an extent by adjusting the polymer ratio and concentration to change the feature shape and size, respectively. [ 7 ] There is not much control in the horizontal direction, though the vertical direction of the features can be precisely controlled. Because the pattern is very unordered horizontally, this method can only be used to study cell interactions with specific height nanotopographies . [ 7 ] Colloidal lithography is inexpensive and can be used to create surfaces with controlled heights and diameters. Nanocolliods are used as an etch mask by spreading them along the material surface, and then ion beam bombardment or film evaporation is used to etch away around the nanocolliods, creating nanocolumns and nanopits, respectively. The final surface structure can be controlled by varying the area covered by colloids and the colloid size. The area covered by the colloids can be changed by modifying the ionic strength of the colloid solution. This technique is able to create large patterned surface areas, which is necessary for tissue engineering applications. [ 7 ] Chemical etching involves soaking the material surface in an etchant such as hydrofluoric acid (HF) or sodium hydroxide (NaOH) until the surface is etched away to a desired roughness as created by pits and protrusions on the nanometer scale. [ 7 ] Longer etch times lead to rougher surfaces (i.e., smaller surface pits and protrusions). Structures with specific geometry or organization cannot be created by this rudimentary method because at best it can be considered a surface treatment for changing the surface roughness. The significant advantages of this method are ease of use and low cost for creating a surface with nanotopographies . Silicon wafers were etched using HF, and it was demonstrated that cell adhesion was enhanced only in a specified range of roughness (20–50 nm). [ 7 ] In addition to creating topography with physical cues, it can be created with chemical cues by selectively depositing polymer solution in patterns on the surface of a substrate. There are different methods for depositing the chemical cues. Two methods for dispensing chemical solutions include stripe patterning and piezoelectric microdispensing. Stripe-patterned polymer films can be formed on solid substrates by casting diluted polymer solution. This method is relatively easy, inexpensive, and has no restriction on the scaffold materials that can be used. The procedure involves horizontally overlapping glass plates while keeping them vertically separated by a narrow gap filled with a polymer solution. The upper plate is moved at a constant velocity between 60 and 100 μm/s. [ 28 ] A thin liquid film of solution is continuously formed at the edge of the sliding glass following evaporation of the solvent. 
Stripe patterns prepared at speeds of 60, 70, and 100 μm/s created width and groove spacings of 2.2 and 6.1 μm, 3.6 and 8.4 μm, and 4.3 and 12.7 μm, respectively; the range of heights for the ridges was 50–100 nm. [ 28 ] Tsuruma, Tanaka et al. demonstrated that embryonic neural cells cultured on film coated with poly-L-lysine attached and elongated parallel to poly(ε-caprolactone)/chloroform solution (1 g/L) stripes with narrow pattern width and spacing (width: 2.2 μm, spacing: 6.1 μm). [ 28 ] However, the neurons grew across the axis of the patterns with wide width and spacing (width: 4.3 μm, spacing: 12.7 μm). On average, the neurons on the stripe-patterned films had fewer neurites per cell and longer neurites compared to the neurons on non-patterned films. Thus, the stripe pattern parameters are able to determine the growth direction, the length of neurites, and the number of neurites per cell. [ 28 ] Microdispensing was used to create micropatterns on polystyrene culture dishes by dispensing droplets of adhesive laminin and non-adhesive bovine serum albumin (BSA) solutions. [ 29 ] The microdispenser is a piezoelectric element attached to a push-bar on top of a channel etched in silicon, which has one inlet at each end and a nozzle in the middle. The piezoelectric element expands when voltage is applied, causing liquid to be dispensed through the nozzle. The microdispenser is moved using a computer-controlled x-y table. The micropattern resolution depends on many factors: dispensed liquid viscosity, drop pitch (the distance between the centre of two adjacent droplets in a line or array), and the substrate. [ 29 ] With increasing viscosity the lines become thinner, but if the liquid viscosity is too high the liquid cannot be expelled. Heating the solution creates more uniform protein lines. Although some droplet overlap is necessary to create continuous lines, uneven evaporation may cause uneven protein concentration along the lines; this can be prevented through smoother evaporation by modifying the dispensed solution properties. For patterns containing 0.5 mg/mL laminin, a higher proportion of neurites grew on the microdispensed lines than between the lines. [ 29 ] On 10 mg/mL and 1 mg/mL BSA protein patterns and fatty-acid-free BSA protein patterns, a significant number of neurites avoided the protein lines and grew between the lines. Thus, the fatty-acid-free BSA lines were just as non-permissive for neurite growth as the lines containing BSA with fatty acids. Because microdispensing does not require direct contact with the substrate surfaces, this technique can utilize surfaces with delicate micro- or nanotopology that could be destroyed by contact. It is possible to vary the amount of protein deposited by dispensing more or fewer droplets. An advantage of microdispensing is that patterns can be created quickly, in 5–10 minutes. Because the piezoelectric microdispenser does not require heating, heat-sensitive proteins and fluids, as well as living cells, can be dispensed. [ 29 ] The selection of the scaffold material is perhaps the most important decision to be made. It must be biocompatible and biodegradable; in addition, it must be able to incorporate any physical, chemical, or biological cues desired, which in the case of some chemical cues means that it must have a site available for chemically linking peptides and other molecules. The scaffold materials chosen for nerve guidance conduits are almost always hydrogels.
The hydrogel may be composed of either biological or synthetic polymers. Both biological and synthetic polymers have their strengths and weaknesses. It is important to note that the conduit material can cause inadequate recovery when (1) degradation and resorption rates do not match the tissue formation rate, (2) the stress-strain properties do not compare well to those of neural tissue, (3) swelling occurs during degradation, causing significant deformation, (4) a large inflammatory response is elicited, or (5) the material has low permeability. [ 30 ] Hydrogels are a class of biomaterials that are chemically or physically cross-linked water-soluble polymers. They can be either degradable or non-degradable as determined by their chemistry, but degradable hydrogels are more desirable whenever possible. There has been great interest in hydrogels for tissue engineering purposes, because they generally possess high biocompatibility, mechanical properties similar to soft tissue, and the ability to be injected as a liquid that gels. [ 4 ] When hydrogels are physically cross-linked, they must rely on phase separation for gelation; the phase separation is temperature-dependent and reversible. [ 4 ] Some other advantages of hydrogels are that they use only non-toxic aqueous solvents, allow infusion of nutrients and exit of waste products, and allow cells to assemble spontaneously. [ 31 ] Hydrogels have low interfacial tension, meaning cells can easily migrate across the tissue-implant boundary. [ 9 ] However, with hydrogels it is difficult to achieve a broad range of mechanical properties or to form structures with controlled pore size. [ 4 ] A synthetic polymer may be non-degradable or degradable. For the purposes of neural tissue engineering, degradable materials are preferred whenever possible, because long-term effects such as inflammation and scarring could severely damage nerve function. The degradation rate is dependent on the molecular weight of the polymer, its crystallinity, and the ratio of glycolic acid to lactic acid subunits. [ 4 ] Because of a methyl group, lactic acid is more hydrophobic than glycolic acid, causing its hydrolysis to be slower. [ 4 ] Synthetic polymers have more manageable mechanical properties and degradation rates that can be controlled over a wide range, and they eliminate the concern for immunogenicity. [ 4 ] There are many different synthetic polymers currently being used in neural tissue engineering. However, the drawbacks of many of these polymers include a lack of biocompatibility and bioactivity, which prevents these polymers from promoting cell attachment, proliferation, and differentiation. [ 32 ] Synthetic conduits have only been clinically successful for the repair of very short nerve lesion gaps of less than 1–2 cm. [ 33 ] Furthermore, nerve regeneration with these conduits has yet to reach the level of functional recovery seen with nerve autografts. [ 30 ] Collagen is a major component of the extracellular matrix, and it is found in the supporting tissues of peripheral nerves. A terpolymer (TERP) was synthesized by free radical copolymerization of its three monomers and cross-linked with collagen, creating a hybrid biological-synthetic hydrogel scaffold. [ 15 ] The terpolymer is based on poly(NIPAAM), which is known to be a cell-friendly polymer. TERP is used both as a cross-linker to increase hydrogel robustness and as a site for grafting of bioactive peptides or growth factors, by reacting some of its acryloxysuccinimide groups with the –NH₂ groups on the peptides or growth factors.
[ 15 ] Because the collagen-terpolymer (collagen-TERP) hydrogel lacks a bioactive component, one study attached a common cell adhesion peptide found in laminin (YIGSR) to it in order to enhance its cell adhesion properties. [ 15 ] The polymers in the PLGA family include poly(lactic acid) (PLA), poly(glycolic acid) (PGA), and their copolymer poly(lactic-co-glycolic acid) (PLGA). All three polymers have been approved by the Food and Drug Administration for use in various devices. These polymers are brittle, and they lack regions that permit chemical modification; in addition, they degrade by bulk rather than by surface, which is not a smooth and ideal degradation process. [ 4 ] In an attempt to overcome the lack of functionalities, free amines have been incorporated into their structures, from which peptides can be tethered to control cell attachment and behavior. [ 4 ] Dextran is a polysaccharide derived from bacteria; it is usually produced by enzymes from certain strains of Leuconostoc or Streptococcus . It consists of α-1,6-linked D-glucopyranose residues. Cross-linked dextran hydrogel beads have been widely used as low protein-binding matrices for column chromatography applications and for microcarrier cell culture technology. [ 34 ] However, it is only recently that dextran hydrogels have been investigated for biomaterials applications, specifically as drug delivery vehicles. An advantage of using dextran in biomaterials applications is its resistance to protein adsorption and cell adhesion, which allows specific cell adhesion to be determined by deliberately attached peptides from ECM components. [ 34 ] Aminoethyl methacrylate (AEMA) was copolymerized with methacrylated dextran (Dex-MA) in order to introduce primary amine groups to provide a site for attachment of ECM-derived peptides to promote cell adhesion. The peptides can be immobilized using sulfo-SMCC coupling chemistry and cysteine-terminated peptides. Copolymerization of Dex-MA with AEMA allowed the macroporous geometry of the scaffolds to be preserved in addition to promoting cellular interactions. [ 34 ] A novel biodegradable, tough elastomer has been developed from poly(glycerol sebacate) (PGS) for use in the creation of a nerve guidance conduit. [ 30 ] PGS was originally developed for soft tissue engineering purposes, specifically to mimic ECM mechanical properties. It is considered an elastomer because it is able to recover from deformation in mechanically dynamic environments and to effectively distribute stress evenly throughout regenerating tissues in the form of microstresses. PGS is synthesized by a polycondensation reaction of glycerol and sebacic acid, and it can be melt processed or solvent processed into the desired shape. PGS has a Young's modulus of 0.28 MPa and an ultimate tensile strength greater than 0.5 MPa. [ 30 ] Peripheral nerve has a Young's modulus of approximately 0.45 MPa, which is very close to that of PGS. Additionally, PGS experiences surface degradation, accompanied by linear losses in mass and strength during resorption. [ 30 ] Following implantation, the degradation half-life was determined to be 21 days; complete degradation occurred at day 60. [ 30 ] PGS experiences minimal water absorption during degradation and does not have detectable swelling; swelling can cause distortion, which narrows the tubular lumen and can impede regeneration. It is advantageous that the degradation time of PGS can be varied by changing the degree of crosslinking and the ratio of sebacic acid to glycerol.
[ 30 ] In a study by Sundback et al. (2005), implanted PGS and PLGA conduits had similar early tissue responses; however, PLGA inflammatory responses spiked later, while PGS inflammatory responses continued to decrease. [ 30 ] Polyethylene glycol (PEG) hydrogels are biocompatible and proven to be tolerated in many tissue types, including the CNS. Mahoney and Anseth formed PEG hydrogels by photopolymerizing methacrylate groups covalently linked to degradable PEG macromers. Hydrogel degradation was monitored over time by measuring mechanical strength (compressive modulus) and average mesh size from swelling ratio data. [ 35 ] Initially, the polymer chains were highly cross-linked, but as degradation proceeded, ester bonds were hydrolyzed, allowing the gel to swell; the compressive modulus decreased as the mesh size increased until the hydrogel was completely dissolved. It was demonstrated that neural precursor cells were able to be photoencapsulated and cultured on the PEG gels with minimal cell death. Because the mesh size is initially small, the hydrogel blocks inflammatory and other inhibitory signals from surrounding tissue. As the mesh size increases, the hydrogel is able to serve as a scaffold for axon regeneration. [ 35 ] There are advantages to using biological polymers over synthetic polymers. They are very likely to have good biocompatibility and be easily degraded, because they are already present in nature in some form. However, there are also several disadvantages. They have unwieldy mechanical properties and degradation rates that cannot be controlled over a wide range. In addition, there is always the possibility that naturally-derived materials may cause an immune response or contain microbes. [ 4 ] In the production of naturally-derived materials there will also be batch-to-batch variation in large-scale isolation procedures that cannot be controlled. [ 16 ] Some other problems plaguing natural polymers are their inability to support growth across long lesion gaps due to the possibility of collapse, scar formation, and early re-absorption. [ 16 ] Despite all these disadvantages, some of which can be overcome, biological polymers still prove to be the optimal choice in many situations. Polysialic acid (PSA) is a relatively new biocompatible and bioresorbable material for artificial nerve conduits. It is a homopolymer of α2,8-linked sialic acid residues and a dynamically regulated posttranslational modification of the neural cell adhesion molecule (NCAM). Recent studies have demonstrated that polysialylated NCAM (polySia-NCAM) promotes regeneration in the motor system. [ 36 ] PSA shows stability under cell culture conditions and allows for induced degradation by enzymes. It has also been discovered recently that PSA is involved in steering processes like neuritogenesis, axonal pathfinding, and neuroblast migration. [ 36 ] Animals in which PSA has been genetically knocked out express a lethal phenotype with unsuccessful pathfinding; nerves connecting the two brain hemispheres were aberrant or missing. [ 36 ] Thus PSA is vital for proper nervous system development. Collagen is the major component of the extracellular matrix and has been widely used in nerve regeneration and repair. Due to their smooth microgeometry and permeability, collagen gels allow diffusion of molecules through them. Collagen resorption rates can be controlled by crosslinking collagen with polyepoxy compounds.
[ 6 ] Additionally, collagen type I/III scaffolds have demonstrated good biocompatibility and are able to promote Schwann cell proliferation. However, collagen conduits filled with Schwann cells used to bridge nerve gaps in rats have shown surprisingly unsuccessful nerve regeneration compared to nerve autografts. [ 6 ] This is because biocompatibility is not the only factor necessary for successful nerve regeneration; other parameters such as inner diameter, inner microtopography, porosity, wall thickness, and Schwann cell seeding density will need to be examined in future studies in order to improve the results obtained by these collagen I/III gels. [ 6 ] Spider silk fibers have been shown to promote cellular adhesion, proliferation, and vitality. Allmeling, Jokuszies et al. showed that Schwann cells attach quickly and firmly to the silk fibers, growing in a bipolar shape; proliferation and survival rates were normal on the silk fibers. [ 37 ] They used spider silk fibers to create a nerve conduit with Schwann cells and acellularized xenogenic veins. The Schwann cells formed columns along the silk fibers in a short amount of time, and the columns were similar to the bands of Bungner that grow in vivo after PNS injury. [ 37 ] Spider silk had not previously been used in tissue engineering because of the predatory nature of spiders and the low yield of silk from individual spiders. It has been discovered that the species Nephila clavipes produces silk that is less immunogenic than silkworm silk; it has a tensile strength of 4 × 10⁹ N/m², which is six times the breaking strength of steel. [ 37 ] Because spider silk is proteolytically degraded, there is not a shift in pH from the physiological pH during degradation. Other advantages of spider silk include its resistance to fungal and bacterial decomposition for weeks and the fact that it does not swell. Also, the silk's structure promotes cell adhesion and migration. However, silk harvest is still a tedious task, and the exact composition varies among species and even among individuals of the same species depending on diet and environment. There have been attempts to synthetically manufacture spider silk. Further studies are needed to test the feasibility of using a spider silk nerve conduit in vitro and in vivo . [ 37 ] In addition to spiders, silkworms are another source of silk. Silk protein from Bombyx mori silkworms consists of a core of fibroin protein surrounded by sericin, a family of glue-like proteins. Fibroin has been characterized as a heavy chain with a repeated hydrophobic and crystallizable sequence: Gly-Ala-Gly-Ala-Gly-X (X stands for Ser or Tyr). The surrounding sericin is more hydrophilic due to its many polar residues, but it does still have some hydrophobic β-sheet portions. Silks have long been used as sutures due to their high mechanical strength and flexibility, as well as their permeability to water and oxygen. In addition, silk fibroin can be easily manipulated and sterilized. However, silk use halted when undesirable immunological reactions were reported. Recently, it has been discovered that the cause of the immunological problems lies solely with the surrounding sericin. [ 38 ] Since this discovery, silk with the sericin removed has been used in many pharmaceutical and biomedical applications. Because it is necessary to remove the sericin from around the fibroin before the silk can be used, an efficient removal procedure, known as degumming, needs to be developed.
One degumming method uses boiling aqueous Na₂CO₃ solution, which removes the sericin without damaging the fibroin. Yang, Chen et al. demonstrated that silk fibroin and silk fibroin extract fluid show good biocompatibility with Schwann cells, with no cytotoxic effects on proliferation. [ 38 ] Chitosan and chitin belong to a family of biopolymers composed of β(1–4)-linked N-acetyl-D-glucosamine and D-glucosamine subunits. [ 39 ] Chitosan is formed by alkaline N-deacetylation of chitin, which is the second most abundant natural polymer after cellulose. [ 14 ] Chitosan is a biodegradable polysaccharide that has been useful in many biomedical applications, such as a chelating agent, drug carrier, membrane, and water treatment additive. [ 11 ] Chitosan is soluble in dilute acidic aqueous solutions, but precipitates into a gel at neutral pH. [ 11 ] It does not support neural cell attachment and proliferation well, but this can be enhanced by attachment of ECM-derived peptides. Chitosan also has weak mechanical properties, which are more challenging to overcome. [ 9 ] The degree of acetylation (DA) for soluble chitosan ranges from 0% to 60%, depending on processing conditions. [ 39 ] A study was conducted to characterize how varying the DA affects the properties of chitosan. Varying DAs were obtained using acetic anhydride or alkaline hydrolysis . It was found that decreasing acetylation produced an increase in compressive strength. [ 39 ] Biodegradation was examined by use of lysozyme, which is known to be mainly responsible for degrading chitosan in vivo by hydrolyzing its glycosidic bonds, and which is released by phagocytic cells after nerve injury. The results revealed that there was an accelerated mass loss with intermediate DAs, compared with high and low DAs, over the time period studied. [ 39 ] When DRG cells were grown on the N-acetylated chitosan, cell viability decreased with increasing DA. Also, chitosan has an increasing charge density with decreasing DA, which is responsible for greater cell adhesion. [ 39 ] Thus, controlling the DA of chitosan is important for regulating the degradation time. This knowledge could help in the development of a nerve guidance conduit from chitosan. Aragonite scaffolds have recently been shown to support the growth of neurons from rat hippocampi. Shany et al. (2006) showed that aragonite matrices can support the growth of astrocytic networks in vitro and in vivo . Thus, aragonite scaffolds may be useful for nerve tissue repair and regeneration. It is hypothesized that aragonite-derived Ca²⁺ is essential for promoting cell adherence and cell–cell contact. This is probably carried out with the help of Ca²⁺-dependent adhesion molecules such as cadherins. [ 40 ] Aragonite crystalline matrices have many advantages over hydrogels. They have larger pores, which allow for better cell growth, and the material is bioactive as a result of releasing Ca²⁺, which promotes cell adhesion and survival. In addition, the aragonite matrices have higher mechanical strength than hydrogels, allowing them to withstand more pressure when pressed into an injured tissue. [ 40 ] Alginate is a polysaccharide that readily forms chains; it can be cross-linked at its carboxylic groups with multivalent cations such as Cu²⁺, Ca²⁺, or Al³⁺ to form a more mechanically stable hydrogel. [ 41 ] Calcium alginates form polymers that are both biocompatible and non-immunogenic and have been used in tissue engineering applications.
However, they are unable to support longitudinally oriented growth, which is necessary for reconnection of the proximal end with its target. In order to overcome this problem, anisotropic capillary hydrogels (ACH) have been developed. They are created by superimposing aqueous solutions of sodium alginate with aqueous solutions of multivalent cations in layers. [ 41 ] After formation, the electrolyte ions diffuse into the polymer solution layers, and a dissipative convective process causes the ions to precipitate, creating capillaries. The dissipative convective process results from the opposition of diffusion gradients and friction between the polyelectrolyte chains. [ 41 ] The capillary walls are lined with the precipitated metal alginate, while the lumen is filled with the extruded water. Prang et al. (2006) assessed the capacity of ACH gels to promote directed axonal regrowth in the injured mammalian CNS. The multivalent ions used to create the alginate-based ACH gels were copper ions, whose diffusion into the sodium alginate layers created hexagonally structured anisotropic capillary gels. [ 41 ] After precipitation, the entire gel was traversed by longitudinally oriented capillaries. The ACH scaffolds promoted adult NPC survival and highly oriented axon regeneration. [ 41 ] This was the first instance of using alginates to produce anisotropically structured capillary gels. Future studies are needed to examine the long-term physical stability of the ACH scaffolds, because CNS axon regeneration can take many months; however, in addition to providing long-term support, the scaffolds must also be degradable. Of all the biological and synthetic biopolymers investigated by Prang et al. (2006), only agarose-based gels were able to compare with the linear regeneration caused by ACH scaffolds. Future studies will also need to investigate whether the ACH scaffolds allow for reinnervation of the target in vivo after a spinal cord injury. [ 41 ] Hyaluronic acid (HA) is a widely used biomaterial as a result of its excellent biocompatibility and the diversity of its physiological functions. It is abundant in the extracellular matrix (ECM), where it binds large glycosaminoglycans (GAGs) and proteoglycans through specific HA-protein interactions. HA also binds cell surface receptors such as CD44, which results in the activation of intracellular signaling cascades that regulate cell adhesion and motility and promote proliferation and differentiation. [ 42 ] HA is also known to support angiogenesis because its degradation products stimulate endothelial cell proliferation and migration. Thus, HA plays a pivotal role in maintaining the normal processes necessary for tissue survival. Unmodified HA has been used in clinical applications such as ocular surgery, wound healing, and plastic surgery. [ 42 ] HA can be crosslinked to form hydrogels. In a study by Hou et al., HA hydrogels that were either unmodified or modified with laminin were implanted into an adult central nervous system lesion and tested for their ability to induce neural tissue formation. They demonstrated the ability to support cell ingrowth and angiogenesis, in addition to inhibiting glial scar formation. Also, the HA hydrogels modified with laminin were able to promote neurite extension. [ 42 ] These results support HA gels as a promising biomaterial for a nerve guidance conduit. In addition to scaffold material and physical cues, biological cues can also be incorporated into a bioartificial nerve conduit in the form of cells.
In the nervous system there are many different cell types that help support the growth and maintenance of neurons. These cells are collectively termed glial cells. Glial cells have been investigated in an attempt to understand the mechanisms behind their abilities to promote axon regeneration. Three types of glial cells are discussed here: Schwann cells, astrocytes, and olfactory ensheathing cells. In addition to glial cells, stem cells also have potential benefit for repair and regeneration, because many are able to differentiate into neurons or glial cells. This article briefly discusses the use of adult, transdifferentiated mesenchymal, ectomesenchymal, neural, and neural progenitor stem cells. Glial cells are necessary for supporting the growth and maintenance of neurons in the peripheral and central nervous system. Most glial cells are specific to either the peripheral or the central nervous system. Schwann cells are located in the peripheral nervous system, where they myelinate the axons of neurons. Astrocytes are specific to the central nervous system; they provide nutrients, physical support, and insulation for neurons. They also form the blood–brain barrier. Olfactory ensheathing cells, however, cross the CNS-PNS boundary, because they guide olfactory receptor neurons from the PNS to the CNS. Schwann cells (SC) are crucial to peripheral nerve regeneration; they play both structural and functional roles. Schwann cells take part both in Wallerian degeneration and in the formation of bands of Bungner. When a peripheral nerve is damaged, Schwann cells alter their morphology, behavior, and proliferation to become involved in Wallerian degeneration and Bungner bands. [ 38 ] In Wallerian degeneration, Schwann cells grow in ordered columns along the endoneurial tube, creating a band of Bungner (boB) that protects and preserves the endoneurial channel. Additionally, they release neurotrophic factors that enhance regrowth in conjunction with macrophages. There are some disadvantages to using Schwann cells in neural tissue engineering; for example, it is difficult to selectively isolate Schwann cells, and they show poor proliferation once isolated. One way to overcome this difficulty is to artificially induce other cells, such as stem cells, into SC-like phenotypes. [ 43 ] Eguchi et al. (2003) investigated the use of magnetic fields in order to align Schwann cells. They used a horizontal-type superconducting magnet, which produces an 8 T field at its center. Within 60 hours of exposure, Schwann cells aligned parallel to the field; during the same interval, Schwann cells that were not exposed oriented in a random fashion. It is hypothesized that differences in the magnetic field susceptibility of membrane components and cytoskeletal elements may cause the magnetic orientation. [ 44 ] Collagen fibers were also exposed to the magnetic field, and within 2 hours they aligned perpendicular to the magnetic field, while collagen fibers not exposed to the field formed a random meshwork pattern. When cultured on the collagen fibers, Schwann cells aligned along the magnetically oriented collagen after two hours of 8 T magnetic field exposure. In contrast, the Schwann cells oriented randomly on collagen fibers that had not been exposed to the magnetic field. Thus, culture on collagen fibers allowed Schwann cells to be oriented perpendicular to the magnetic field and to orient much more quickly.
[ 44 ] These findings may be useful for aligning Schwann cells in a nervous system injury to promote the formation of bands of Bungner, which are crucial for maintaining the endoneurial tube that guides the regrowing axons back to their targets. It is nearly impossible to align Schwann cells by external physical techniques; thus, the discovery of an alternative technique for alignment is significant. However, the technique developed still has its disadvantages, namely that it takes a considerable amount of energy to sustain the magnetic field for extended periods. Studies have been conducted in attempts to enhance the migratory ability of Schwann cells. Schwann cell migration is regulated by interactions of integrins with ECM molecules such as fibronectin and laminin. In addition, neural cell adhesion molecule (NCAM) is known to enhance Schwann cell motility in vitro . [ 45 ] NCAM is a glycoprotein that is expressed on axonal and Schwann cell membranes. Polysialic acid (PSA) is synthesized on NCAM by polysialyltransferase (PST) and sialyltransferase X (STX). [ 45 ] During the development of the CNS, PSA expression on NCAM is upregulated until postnatal stages. However, in the adult brain PSA is found only in regions with high plasticity . PSA expression does not occur on Schwann cells. Lavdas et al. (2006) investigated whether sustained expression of PSA on Schwann cells enhances their migration. Schwann cells were transduced with a retroviral vector encoding STX in order to induce PSA expression. The PSA-expressing Schwann cells did show enhanced motility, as demonstrated in a gap bridging assay and after grafting in postnatal forebrain slice cultures. [ 45 ] PSA expression did not alter molecular and morphological differentiation. The PSA-expressing Schwann cells were able to myelinate CNS axons in cerebellar slices, which is not normally possible in vivo . It is hoped that these PSA-expressing Schwann cells will be able to migrate throughout the CNS without losing their myelinating abilities and may become useful for regeneration and myelination of axons in the central nervous system. [ 45 ] Astrocytes are glial cells that are abundant in the central nervous system. They are crucial for the metabolic and trophic support of neurons; additionally, astrocytes provide ion buffering and neurotransmitter clearance. Growing axons are guided by cues created by astrocytes; thus, astrocytes can regulate neurite pathfinding and, subsequently, patterning in the developing brain. [ 40 ] The glial scar that forms after injury in the central nervous system is formed by astrocytes and fibroblasts ; it is the most significant obstacle for regeneration. The glial scar consists of hypertrophied astrocytes, connective tissue, and ECM. Two goals of neural tissue engineering are to understand astrocyte function and to develop control over astrocytic growth. Studies by Shany et al. (2006) have demonstrated that astrocyte survival rates are increased on 3D aragonite matrices compared to conventional 2D cell cultures. The ability of cell processes to stretch out across curves and pores allows for the formation of multiple cell layers with complex 3D configurations. The cells acquired a 3D shape in three distinct ways: [ 40 ] In conventional cell culture, growth is restricted to one plane, causing monolayer formation with most cells contacting the surface; however, the 3D curvature of the aragonite surface allows multiple layers to develop and allows astrocytes that are far apart to contact each other.
It is important to promote process formation similar to 3D in vivo conditions, because astrocytic process morphology is essential in guiding the directionality of regenerating axons. [ 40 ] The aragonite topography provides a high surface area to volume ratio and lacks edges, which leads to a reduction of the culture edge effect. [ 40 ] Crystalline matrices such as the aragonite discussed here allow the promotion of complex 3D tissue formation that approaches in vivo conditions. The mammalian primary olfactory system has retained the ability to continuously regenerate during adulthood. [ 46 ] Olfactory receptor neurons have an average lifespan of 6–8 weeks and therefore must be replaced by cells differentiated from stem cells that lie within a layer at the base of the nearby epithelium. The new olfactory receptor neurons must project their axons through the CNS to an olfactory bulb in order to be functional. Axonal growth is guided by the glial composition and cytoarchitecture of the olfactory bulb, in addition to the presence of olfactory ensheathing cells (OECs). [ 46 ] It is postulated that OECs originate in the olfactory placode , suggesting a developmental origin different from that of other, similar nervous system glia. Another interesting concept is that OECs are found in both the peripheral and central nervous system portions of the primary olfactory system, that is, the olfactory epithelium and bulb. [ 46 ] OECs are similar to Schwann cells in that they upregulate the low-affinity NGF receptor p75 following injury; unlike Schwann cells, however, they produce lower levels of neurotrophins . Several studies have shown evidence of OECs being able to support regeneration of lesioned axons, but these results have often not been reproducible. [ 46 ] Regardless, OECs have been investigated thoroughly in relation to spinal cord injuries, amyotrophic lateral sclerosis , and other neurodegenerative diseases. Researchers suggest that these cells possess a unique ability to remyelinate injured neurons. [ 47 ] OECs have properties similar to those of astrocytes , [ 48 ] both of which have been identified as being susceptible to viral infection. [ 47 ] [ 48 ] Stem cells are characterized by their ability to self-renew for a prolonged time while still maintaining the ability to differentiate along one or more cell lineages. Stem cells may be unipotent, multipotent, or pluripotent, meaning they can differentiate into one, multiple, or all cell types, respectively. [ 49 ] Pluripotent stem cells can become cells derived from any of the three embryonic germ layers. [ 49 ] Stem cells have an advantage over glial cells in that they are able to proliferate more easily in culture. However, it remains difficult to preferentially differentiate these cells into varied cell types in an ordered manner. [ 4 ] Another difficulty with stem cells is the lack of a well-defined definition of stem cells beyond hematopoietic stem cells (HSCs). Each stem cell 'type' has more than one method for identifying, isolating, and expanding the cells; this has caused much confusion, because all stem cells of a 'type' (neural, mesenchymal, retinal) do not necessarily behave in the same manner under identical conditions. Adult stem cells are not able to proliferate and differentiate as effectively in vitro as they are able to in vivo . Adult stem cells can come from many different tissue locations, but it is difficult to isolate them because they are defined by behavior and not by surface markers.
A method has yet to be developed for clearly distinguishing between stem cells and the differentiated cells surrounding them. However, surface markers can still be used to a certain extent to remove most of the unwanted differentiated cells. Stem cell plasticity is the ability to differentiate across embryonic germ line boundaries. However, the existence of plasticity has been hotly contested. Some claim that plasticity is caused by heterogeneity among the cells or by cell fusion events. Currently, cells can be differentiated across cell lines with yields ranging from 10% to 90%, depending on the techniques used. [ 49 ] More studies need to be done in order to standardize the yield with transdifferentiation. Transdifferentiation of multipotent stem cells is a potential means for obtaining stem cells that are not available, or not easily obtained, in the adult. [ 4 ] Mesenchymal stem cells are adult stem cells that are located in the bone marrow; they are able to differentiate into lineages of mesodermal origin. Some examples of tissue they form are bone , cartilage , fat , and tendon . MSCs are obtained by aspiration of bone marrow. Many factors promote the growth of MSCs, including platelet-derived growth factor , epidermal growth factor β, and insulin-like growth factor-1 . In addition to their normal differentiation paths, MSCs can be transdifferentiated along nonmesenchymal lineages such as astrocytes, neurons, and PNS myelinating cells. MSCs are potentially useful for nerve regeneration strategies for several reasons. [ 50 ] Keilhoff et al. (2006) performed a study comparing the nerve regeneration capacity of non-differentiated and transdifferentiated MSCs to that of Schwann cells in devitalized muscle grafts bridging a 2-cm gap in the rat sciatic nerve. All cells were autologous. The transdifferentiated MSCs were cultured in a mixture of factors in order to promote Schwann cell-like cell formation. The undifferentiated MSCs demonstrated no regenerative capacity, while the transdifferentiated MSCs showed some regenerative capacity, though it did not reach the capacity of the Schwann cells. [ 50 ] The difficulty of isolating Schwann cells and subsequently inducing proliferation is a large obstacle. One solution is to selectively induce cells such as ectomesenchymal stem cells (EMSCs) into Schwann cell-like phenotypes. EMSCs are neural crest cells that migrate from the cranial neural crest into the first branchial arch during early development of the peripheral nervous system. [ 43 ] EMSCs are multipotent and possess a self-renewing capacity. They can be thought of as Schwann progenitor cells because they are associated with dorsal root ganglion and motor nerve development. EMSC differentiation appears to be regulated by intrinsic genetic programs and extracellular signals in the surrounding environment. [ 43 ] Schwann cells are the source of both the neurotropic and neurotrophic factors essential for regenerating nerves and of a scaffold for guiding growth. Nie, Zhang et al. conducted a study investigating the benefits of culturing EMSCs within PLGA conduits. Adding forskolin and bovine pituitary extract (BPE) to an EMSC culture caused the formation of elongated cell processes, which is common to Schwann cells in vitro . [ 43 ] Thus, forskolin and BPE may induce differentiation into Schwann cell-like phenotypes. BPE contains the cytokines GDNF , basic fibroblast growth factor , and platelet-derived growth factor , which cause differentiation and proliferation of glial cells and Schwann cells by activating MAP kinases .
When implanted into the PLGA conduits, the EMSCs maintained long-term survival and promoted peripheral nerve regeneration across a 10 mm gap, a distance over which little to no regeneration usually occurs. Myelinated axons were present within the grafts, and basal laminae were formed within the myelin. These observations suggest that EMSCs may promote myelination of regenerated nerve fibers within the conduit. Inserting neurons into a bioartificial nerve conduit seems like the most obvious method for replacing damaged nerves; however, neurons are unable to proliferate and they are often short-lived in culture. Thus, neural progenitor cells are more promising candidates for replacing damaged and degenerated neurons, because they are self-renewing, which allows for the in vitro production of many cells with minimal donor material. [ 31 ] In order to confirm that the new neurons formed from neural progenitor cells are part of a functional network, the presence of synapse formation is required. A study by Ma, Fitzgerald et al. was the first demonstration of murine neural stem and progenitor cell-derived functional synapse and neuronal network formation on a 3D collagen matrix. The neural progenitor cells expanded and spontaneously differentiated into excitable neurons and formed synapses; furthermore, they retained the ability to differentiate into the three neural tissue lineages. [ 31 ] It was also demonstrated not only that active synaptic vesicle recycling occurred, but also that excitatory and inhibitory connections capable of spontaneously generating action potentials were formed. [ 31 ] Thus, neural progenitor cells are a viable and relatively unlimited source for creating functional neurons. Neural stem cells (NSCs) have the capability to self-renew and to differentiate into neuronal and glial lineages. Many culture methods have been developed for directing NSC differentiation; however, the creation of biomaterials for directing NSC differentiation is seen as a more clinically relevant and usable technology. [ citation needed ] One approach to developing a biomaterial for directing NSC differentiation is to combine extracellular matrix (ECM) components and growth factors. A recent study by Nakajima, Ishimuro et al. examined the effects of different molecular pairs, each consisting of a growth factor and an ECM component, on the differentiation of NSCs into astrocytes and neuronal cells. The ECM components investigated were laminin-1 and fibronectin, which are natural ECM components; ProNectin F plus (Pro-F) and ProNectin L (Pro-L), which are artificial ECM components; and poly(ethyleneimine) (PEI). The neurotrophic factors used were epidermal growth factor (EGF), fibroblast growth factor-2 (FGF-2), nerve growth factor (NGF), neurotrophin-3 (NT-3), and ciliary neurotrophic factor (CNTF). The pair combinations were immobilized onto matrix cell arrays, on which the NSCs were cultured. After 2 days in culture, the cells were stained with antibodies against nestin , β- tubulin III, and GFAP , which are markers for NSCs, neuronal cells, and astrocytes, respectively. [ 51 ] The results provide valuable information on advantageous combinations of ECM components and growth factors as a practical method for developing a biomaterial for directing the differentiation of NSCs. [ 51 ] Currently, neurotrophic factors are being intensely studied for use in bioartificial nerve conduits because they are necessary in vivo for directing axon growth and regeneration.
In studies, neurotrophic factors are normally used in conjunction with other techniques, such as biological and physical cues created by the addition of cells and specific topographies. The neurotrophic factors may or may not be immobilized to the scaffold structure, though immobilization is preferred because it allows for the creation of permanent, controllable gradients. In some cases, such as neural drug delivery systems , they are loosely immobilized such that they can be selectively released at specified times and in specified amounts. Drug delivery is the next step beyond the basic addition of growth factors to nerve guidance conduits. Many biomaterials used for nerve guidance conduits are biomimetic materials . Biomimetic materials are materials that have been designed such that they elicit specified cellular responses mediated by interactions with scaffold-tethered peptides from ECM proteins; essentially, cell-binding peptides are incorporated into the biomaterials via chemical or physical modification. [ 52 ] Synergism often occurs when two elements are combined; it is an interaction between two elements that causes an effect greater than the combined effects of each element separately. Synergism is evident in the combining of scaffold material and topography with cellular therapies, neurotrophic factors, and biomimetic materials. Investigation of synergism is the next step after individual techniques have proven to be successful by themselves. The combinations of these different factors need to be carefully studied in order to optimize synergistic effects. It was hypothesized that interactions between neurotrophic factors could alter the optimal concentrations of each factor. While cell survival and phenotype maintenance are important, the emphasis of evaluation was on neurite extension. A combination of NGF , glial cell line-derived neurotrophic factor ( GDNF ), and ciliary neurotrophic factor ( CNTF ) was presented to dorsal root ganglion cultures in vitro . One factor from each neurotrophic family was used. [ 53 ] It was determined that there was no difference between the individual and the combinatorial optimal concentrations; however, around day 5 or 6 the neurites ceased extension and began to degrade. This was hypothesized to be due to the lack of a critical nutrient or of proper gradients; previous studies have shown that growth factors are able to optimize neurite extension best when presented in gradients. [ 53 ] Future studies on neurotrophic factor combinations will need to include gradients. Cell adhesion molecules (CAMs) and neurotrophic factors embedded together into biocompatible matrices is a relatively new concept being investigated. [ 54 ] CAMs of the immunoglobulin superfamily (IgSF), which includes L1/NgCAM and neurofascin, are particularly promising, because they are expressed in the developing nervous system on neurons or Schwann cells. They are known to serve as guidance cues and to mediate neuronal differentiation. Neurotrophic factors such as NGF and growth differentiation factor 5 (GDF-5), however, are well established as promoters of regeneration in vivo . A recent study by Niere, Brown et al. investigated the synergistic effects of combining L1 and neurofascin with NGF and GDF-5 on DRG neurons in culture; this combination enhanced neurite outgrowth. Further enhancement was demonstrated by combining L1 and neurofascin into an artificial fusion protein, which improves efficiency, since the factors are not delivered individually.
[ 54 ] Not only can different cues be used, but they may even be fused into a single 'new' cue. The effect of presenting multiple stimulus types, such as chemical, physical, and biological cues, on neural progenitor cell differentiation has not been explored. A study was conducted in which three different stimuli were presented to adult rat hippocampal progenitor cells (AHPCs): postnatal rat type-1 astrocytes (biological), laminin (chemical), and a micropatterned substrate (physical). [ 55 ] Over 75% of the AHPCs aligned within 20° of the grooves, compared to random growth on the non-patterned substrates. [ 55 ] When AHPCs were grown on micropatterned substrates with astrocytes, outgrowth was influenced by the astrocytes that had aligned with the grooves; namely, the AHPCs extended processes along the astrocytic cytoskeletal filaments. However, the alignment was not as significant as that seen with AHPCs cultured alone on the micropatterned substrate. In order to assess the different phenotypes expressed as a result of differentiation, the cells were stained with antibodies for class III β-tubulin (TuJ1), receptor interacting protein (RIP), and glial fibrillary acidic protein (GFAP), which are markers for early neurons, oligodendrocytes, and astrocytes, respectively. The greatest amount of differentiation was seen with AHPCs cultured on patterned substrates with astrocytes. [ 55 ]
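Several of the studies described above quantify contact guidance as the fraction of neurites or cells whose orientation falls within a fixed angle of the pattern axis (for example, more than 90% of neurites within 10 degrees of the X-ray-lithography ridges, or over 75% of AHPCs within 20° of the grooves). As a purely illustrative aid, the following minimal Python sketch shows one way such an alignment statistic could be computed from measured orientations; the function name and the sample data are hypothetical and are not taken from any of the cited studies.

import numpy as np

def alignment_fraction(orientations_deg, pattern_angle_deg, tolerance_deg):
    """Fraction of neurites or cells whose orientation lies within
    tolerance_deg of the pattern (groove/ridge) axis.

    Orientations are treated as axial data (0-180 degrees), so an angle of
    175 degrees is only 5 degrees away from a pattern axis at 0 degrees.
    """
    angles = np.asarray(orientations_deg, dtype=float)
    # Smallest angular difference between two undirected axes.
    diff = np.abs(angles - pattern_angle_deg) % 180.0
    diff = np.minimum(diff, 180.0 - diff)
    return float(np.mean(diff <= tolerance_deg))

# Hypothetical measured orientations (degrees) versus grooves at 0 degrees.
measured = [2.0, 175.0, 8.5, 12.0, 91.0, 4.0, 177.5, 6.0]
print(alignment_fraction(measured, pattern_angle_deg=0.0, tolerance_deg=10.0))
# -> 0.75, i.e. 75% of these hypothetical neurites lie within 10 degrees of the groove axis

Reporting the result as a single fraction in this way makes alignment measurements from differently sized feature sets directly comparable, which is how the figures quoted above are usually presented.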
https://en.wikipedia.org/wiki/Nerve_guidance_conduit
In mathematics , Nesbitt's inequality , named after Alfred Nesbitt , [ R 1 ] states that for positive real numbers a , b and c ,

$$\frac{a}{b+c}+\frac{b}{c+a}+\frac{c}{a+b}\geq\frac{3}{2},$$

with equality only when $a=b=c$ (i.e. in an equilateral triangle ). There is no corresponding upper bound, as any of the three fractions in the inequality can be made arbitrarily large. It is the three-variable case of the rather more difficult Shapiro inequality , and was published at least 50 years earlier.

By the AM - HM inequality on $(a+b),(b+c),(c+a)$,

$$\frac{(a+b)+(b+c)+(c+a)}{3}\geq\frac{3}{\dfrac{1}{a+b}+\dfrac{1}{b+c}+\dfrac{1}{c+a}}.$$

Clearing denominators yields

$$\bigl((a+b)+(b+c)+(c+a)\bigr)\left(\frac{1}{a+b}+\frac{1}{b+c}+\frac{1}{c+a}\right)\geq 9,$$

from which we obtain

$$2\,\frac{a+b+c}{b+c}+2\,\frac{a+b+c}{a+c}+2\,\frac{a+b+c}{a+b}\geq 9$$

by expanding the product and collecting like denominators. This then simplifies directly to the final result.

Supposing $a\geq b\geq c$, we have that

$$\frac{1}{b+c}\geq\frac{1}{a+c}\geq\frac{1}{a+b}.$$

Define

$$\vec{x}=(a,b,c),\qquad\vec{y}=\left(\frac{1}{b+c},\frac{1}{a+c},\frac{1}{a+b}\right).$$

By the rearrangement inequality, the dot product of the two sequences is maximized when the terms are arranged to be both increasing or both decreasing. The order here is both decreasing. Let $\vec{y}_1$ and $\vec{y}_2$ be the vector $\vec{y}$ cyclically shifted by one and by two places; then

$$\vec{x}\cdot\vec{y}\geq\vec{x}\cdot\vec{y}_1\quad\text{and}\quad\vec{x}\cdot\vec{y}\geq\vec{x}\cdot\vec{y}_2.$$

Addition then yields Nesbitt's inequality.

The following identity is true for all $a,b,c$:

$$\frac{a}{b+c}+\frac{b}{c+a}+\frac{c}{a+b}=\frac{3}{2}+\frac{1}{2}\left(\frac{(a-b)^2}{(a+c)(b+c)}+\frac{(a-c)^2}{(a+b)(b+c)}+\frac{(b-c)^2}{(a+b)(a+c)}\right).$$

This clearly proves that the left side is no less than $3/2$ for positive a , b and c . Note: every rational inequality can be demonstrated by transforming it to the appropriate sum-of-squares identity; see Hilbert's seventeenth problem .

Invoking the Cauchy–Schwarz inequality on the vectors $\left\langle\sqrt{a+b},\sqrt{b+c},\sqrt{c+a}\right\rangle$ and $\left\langle\frac{1}{\sqrt{a+b}},\frac{1}{\sqrt{b+c}},\frac{1}{\sqrt{c+a}}\right\rangle$ yields

$$\bigl((a+b)+(b+c)+(c+a)\bigr)\left(\frac{1}{a+b}+\frac{1}{b+c}+\frac{1}{c+a}\right)\geq 9,$$

which can be transformed into the final result as we did in the AM-HM proof.

Let $x=a+b$, $y=b+c$, $z=c+a$. We then apply the AM-GM inequality to obtain

$$\frac{x+z}{y}+\frac{y+z}{x}+\frac{x+y}{z}\geq 6,$$

because

$$\frac{x}{y}+\frac{z}{y}+\frac{y}{x}+\frac{z}{x}+\frac{x}{z}+\frac{y}{z}\geq 6\sqrt[6]{\frac{x}{y}\cdot\frac{z}{y}\cdot\frac{y}{x}\cdot\frac{z}{x}\cdot\frac{x}{z}\cdot\frac{y}{z}}=6.$$

Substituting out the $x,y,z$ in favor of $a,b,c$ yields

$$\frac{2a}{b+c}+\frac{2b}{a+c}+\frac{2c}{a+b}+3\geq 6,$$

which then simplifies to the final result.

Titu's lemma , a direct consequence of the Cauchy–Schwarz inequality , states that for any sequence of $n$ real numbers $(x_k)$ and any sequence of $n$ positive numbers $(a_k)$,

$$\sum_{k=1}^{n}\frac{x_k^2}{a_k}\geq\frac{\left(\sum_{k=1}^{n}x_k\right)^2}{\sum_{k=1}^{n}a_k}.$$

We use the lemma on $(x_k)=(1,1,1)$ and $(a_k)=(b+c,a+c,a+b)$. This gives

$$\frac{1}{b+c}+\frac{1}{a+c}+\frac{1}{a+b}\geq\frac{9}{2(a+b+c)},$$

which results in

$$\frac{a+b+c}{b+c}+\frac{a+b+c}{a+c}+\frac{a+b+c}{a+b}\geq\frac{9}{2},$$

i.e. Nesbitt's inequality after subtracting 1 from each fraction on the left.

As the left side of the inequality is homogeneous, we may assume $a+b+c=1$. Now define $x=a+b$, $y=b+c$, and $z=c+a$. The desired inequality turns into

$$\frac{1-x}{x}+\frac{1-y}{y}+\frac{1-z}{z}\geq\frac{3}{2},$$

or, equivalently,

$$\frac{1}{x}+\frac{1}{y}+\frac{1}{z}\geq\frac{9}{2}.$$

This is clearly true by Titu's lemma, since $x+y+z=2$.

Let $S=a+b+c$ and consider the function $f(x)=\frac{x}{S-x}$. This function can be shown to be convex in $[0,S]$ and, invoking Jensen's inequality , we get

$$\frac{f(a)+f(b)+f(c)}{3}\geq f\!\left(\frac{S}{3}\right)=\frac{1}{2}.$$

A straightforward computation then yields

$$\frac{a}{b+c}+\frac{b}{c+a}+\frac{c}{a+b}\geq\frac{3}{2}.$$

By clearing denominators, the inequality is equivalent to

$$2(a^3+b^3+c^3)\geq a^2b+ab^2+b^2c+bc^2+a^2c+ac^2.$$

It therefore suffices to prove that $x^3+y^3\geq xy^2+x^2y$ for $(x,y)\in\mathbb{R}_+^2$, as summing this three times for $(x,y)=(a,b),\ (a,c),$ and $(b,c)$ completes the proof. As $x^3+y^3\geq xy^2+x^2y\iff(x-y)(x^2-y^2)\geq 0$, we are done.
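As a quick numerical sanity check (illustrative only, and not one of the article's proofs), evaluating the left-hand side at the concrete values $a=4$, $b=2$, $c=1$ gives a value strictly above the bound, while equal values recover the equality case:

$$\frac{4}{2+1}+\frac{2}{1+4}+\frac{1}{4+2}=\frac{4}{3}+\frac{2}{5}+\frac{1}{6}=\frac{40+12+5}{30}=\frac{57}{30}=1.9>\frac{3}{2},\qquad\frac{a}{a+a}+\frac{a}{a+a}+\frac{a}{a+a}=\frac{3}{2}\quad\text{when }a=b=c.$$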
https://en.wikipedia.org/wiki/Nesbitt's_inequality
Nesfatin-1 is a neuropeptide produced in the hypothalamus of mammals . It participates in the regulation of hunger and fat storage. [ 1 ] Increased nesfatin-1 in the hypothalamus contributes to diminished hunger, a 'sense of fullness', and a potential loss of body fat and weight. In a study of the metabolic effects of nesfatin-1 in rats, subjects administered nesfatin-1 ate less, used more stored fat, and became more active. Nesfatin-1-induced inhibition of feeding may be mediated through the inhibition of orexigenic neurons. [ 2 ] In addition, the protein stimulated insulin secretion from the pancreatic beta cells of both rats and mice. [ 3 ] Nesfatin-1 is a polypeptide encoded in the N-terminal region of the protein precursor, Nucleobindin-2 ( NUCB2 ). Recombinant human nesfatin-1 is a 9.7 kDa protein containing 82 amino acid residues. [ 4 ] Nesfatin-1 is expressed in the hypothalamus, in other areas of the brain, and in pancreatic islets , gastric endocrine cells and adipocytes . Nesfatin/NUCB2 is expressed in appetite-controlling hypothalamic nuclei such as the paraventricular nucleus (PVN), arcuate nucleus (ARC), supraoptic nucleus (SON), lateral hypothalamic area (LHA), and zona incerta in rats. Nesfatin-1 immunoreactivity has also been found in brainstem nuclei such as the nucleus of the solitary tract (NTS) and the dorsal nucleus of the vagus nerve . Nesfatin-1 can cross the blood–brain barrier without saturation. [ 5 ] The receptors within the brain are in the hypothalamus and the solitary nucleus , where nesfatin-1 is believed to be produced via peroxisome proliferator-activated receptors (PPARs). There also appears to be a relationship between nesfatin-1 and cannabinoid receptors . Nesfatin-1-induced inhibition of feeding may be mediated through the inhibition of orexigenic NPY neurons. Nesfatin/NUCB2 expression has been reported to be modulated by starvation and re-feeding in the paraventricular nucleus (PVN) and supraoptic nucleus (SON) of the brain. Nesfatin-1 influences the excitability of a large proportion of different subpopulations of neurons located in the PVN. It has also been reported that magnocellular oxytocin neurons are activated during feeding, and that ICV infusion of an oxytocin antagonist increases food intake, indicating a possible role of oxytocin in the regulation of feeding behavior. In addition, it is proposed that feeding-activated nesfatin-1 neurons in the PVN and SON could play an important role in the postprandial regulation of feeding behavior and energy homeostasis . [ 6 ] [ 7 ] Nesfatin-1 immunopositive neurons are also located in the arcuate nucleus (ARC). Because nesfatin-1 immunoreactive neurons in the ARC are activated by simultaneous injection of ghrelin and desacyl ghrelin, nesfatin-1 may be involved in the desacyl ghrelin-induced inhibition of the orexigenic effect of peripherally administered ghrelin in freely fed rats. Nesfatin-1 is co-expressed with melanin concentrating hormone ( MCH ) in tuberal hypothalamic neurons. Nesfatin-1 co-expressed in MCH neurons may play a complex role not only in the regulation of food intake, but also in other essential integrative brain functions involving MCH signaling, ranging from autonomic regulation, stress, mood, and cognition to sleep. [ 8 ] There is growing evidence that nesfatin-1 may play an important role in the regulation of food intake and glucose homeostasis . [ 9 ] For instance, continuous infusion of nesfatin-1 into the third brain ventricle significantly decreased food intake and body weight gain in rats.
In previous studies, it was demonstrated that plasma nesfatin-1 levels were elevated in patients with type 2 diabetes mellitus (T2DM) and were associated with BMI , plasma insulin , and the homeostasis model assessment of insulin resistance . [ 10 ] [ 11 ] It was found that central nesfatin-1 resulted in a marked suppression of hepatic PEPCK mRNA and protein levels in both standard diet (SD) and high fat diet (HFD) rats, but failed to alter glucose 6-phosphatase (G-6-Pase) activity and protein expression. Central nesfatin-1 appeared to antagonize the effect of the HFD on increasing PEPCK gene expression in vivo . In agreement with the decreased PEPCK gene expression, central nesfatin-1 also resulted in reduced PEPCK enzyme activity, further confirming that it affected PEPCK rather than G-6-Pase. [ 11 ] Part of the glucose entering the liver is phosphorylated by glucokinase and then dephosphorylated by G-6-Pase. This futile cycle between glucokinase and G-6-Pase is named glucose cycling, and it accounts for the difference between the total flux through G-6-Pase and glucose production. G-6-Pase catalyzes the last step in both gluconeogenesis and glycogenolysis , whereas PEPCK is responsible only for gluconeogenesis. In this study, central nesfatin-1 led to a marked suppression of hepatic PEPCK protein and activity, but failed to alter hepatic G-6-Pase activity, suggesting that PEPCK may be more sensitive to short-term central nesfatin-1 exposure than G-6-Pase. In addition, suppression of HGP by central nesfatin-1 was dependent on an inhibition of the substrate flux through G-6-Pase and not on a decrease in the amount of G-6-Pase enzyme. Thus, in SD and HFD rats, central nesfatin-1 may have decreased glucose production mainly via decreasing gluconeogenesis and PEPCK activity. [ 11 ] Recently, it has been reported that ICV nesfatin-1 produced a dose-dependent delay of gastric emptying . [ 11 ] [ 12 ] To further delineate the mechanism by which central nesfatin-1 modulates glucose homeostasis, the study's authors assessed the effects of central nesfatin-1 on the phosphorylation of several proteins in the INSR → IRS-1 → AMPK → Akt signaling cascade in the liver. They found that central nesfatin-1 significantly augmented InsR and IRS-1 tyrosine phosphorylation. These results demonstrated that central nesfatin-1 in both SD and HFD rats resulted in a stimulation of liver insulin signaling that could account for the increased insulin sensitivity and improved glucose metabolism. [ 11 ] AMPK is a key regulator of both lipid and glucose metabolism. It has been referred to as a metabolic master switch, because its activity is regulated by the energy status of the cell. The same study demonstrated that central nesfatin-1 resulted in increased phosphorylation of AMPK, accompanied by a marked suppression of hepatic PEPCK activity, mRNA, and protein levels in both SD and HFD rats. Notably, central nesfatin-1 appears to prevent the obesity-driven decrease in phospho-AMPK levels in HFD-fed rats. Because hepatic AMPK controls glucose homeostasis mainly through the inhibition of gluconeogenic gene expression and glucose production, the suppressive effect of central nesfatin-1 on HGP (hepatic glucose production) can be attributed partly to its ability to suppress the expression of PEPCK mRNA and protein through AMPK activation. Furthermore, the activation of AMPK has been shown to enhance glucose uptake in skeletal muscle .
Therefore, increased AMPK phosphorylation by central nesfatin-1 may also have been responsible for the improved glucose uptake in muscle. [ 11 ] Akt is a key effector of insulin-induced inhibition of HGP and stimulation of muscle glucose uptake. The researchers therefore examined the effects of central nesfatin-1 on Akt phosphorylation in vivo. They found that central nesfatin-1 produced a pronounced increase in insulin-mediated phosphorylation of Akt in the liver of HFD-fed rats. This increase was paralleled by an increase in muscle glucose uptake and inhibition of HGP, providing correlative evidence that Akt activation may be involved in nesfatin-1 signaling and its effects on glucose homeostasis and insulin sensitivity. [ 11 ] The mTOR pathway has emerged as a molecular mediator of insulin resistance, and can be activated by both insulin and nutrients. It is needed to fully activate Akt and consists of two discrete protein complexes, TORC1 and TORC2, only one of which, TORC1, binds rapamycin. In addition to mTOR, the TORC2 complex contains RICTOR, mLST8, and SIN1, and regulates insulin action and Akt phosphorylation. Thus, mTOR sits at a critical juncture between insulin and nutrient signaling, making it important both for insulin signaling downstream from Akt and for nutrient sensing. Until now, it has not been known whether nesfatin-1 affects activation of mTOR. To gain further insight into the mechanism underlying the insulin-sensitizing effects of ICV nesfatin-1, the researchers assessed mTOR and TORC2 phosphorylation in liver samples of SD- and HFD-fed animals. Both mTOR and TORC2 phosphorylation were increased in livers from these rats, demonstrating activation of mTOR and TORC2 by central nesfatin-1 in vivo. As mTOR kinase activity is required for Akt phosphorylation, the observed increase in Akt phosphorylation may have been caused by the concomitant activation of mTOR/TORC2. Thus, it is postulated that mTOR/TORC2 plays a role as a negative-feedback mechanism in the regulation of metabolism and insulin sensitivity mediated by central nesfatin-1. [ 11 ]
https://en.wikipedia.org/wiki/Nesfatin-1
The A.N. Nesmeyanov Institute of Organoelement Compounds [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] of the Russian Academy of Sciences (INEOS RAS) [ 6 ] ( Russian : Институт элементоорганических соединений Российской Академии Наук им. А.Н. Несмеянова (ИНЭОС РАН) ) is a research centre founded in 1954 by the president of the USSR Academy of Sciences, Alexander Nesmeyanov. [ 7 ] [ 8 ] After his departure, the institute was run by A.V. Fokin from 1980 to 1988, M.E. Vol'pin (1989–1996), Yu.N. Bubnov (1996–2013), A.M. Muzafarov (2013–2018), and A.A. Trifonov (since 2018). In 2019 and 2020 the scientific journal Journal of Organometallic Chemistry decided to publish two special issues, one marking the 120th anniversary of the birth of the famous Russian organometallic chemist Alexander N. Nesmeyanov [ 9 ] and the other the 70th birthday of professor Elena Shubina, [ 10 ] in recognition of their scientific contributions to organometallic chemistry and, in Shubina's case, to the field of non-covalent interactions. [ 10 ]
https://en.wikipedia.org/wiki/Nesmeyanov_Institute_of_Organoelement_Compounds
Potassium tetraiodomercurate(II) is an inorganic compound with the chemical formula K2[HgI4]. It consists of potassium cations and tetraiodomercurate(II) anions. It is the active agent in Nessler's reagent, used for detection of ammonia. [ 2 ] The compound crystallizes from a heated solution of mercuric iodide, potassium iodide, and precisely 2% water in acetone. Attempted synthesis in concentrated aqueous solution will give the pale orange monohydrate K[Hg(H2O)I3] instead. [ 3 ] K2[HgI4] is a precursor to the analogous copper and silver salts M2[HgI4] (M = Cu, Ag). [ 4 ] Nessler's reagent, named after Julius Neßler (Nessler), is a 0.09 mol/L solution of potassium tetraiodomercurate(II) in 2.5 mol/L potassium hydroxide. This pale solution becomes deeper yellow in the presence of ammonia (NH3). At higher concentrations, a brown precipitate derivative of Millon's base (HgO·Hg(NH2)Cl) may form. The sensitivity as a spot test is about 0.3 μg NH3 in 2 μL. [ 5 ] The brown precipitate is not fully characterized and may vary from HgO·Hg(NH2)I to 3HgO·Hg(NH3)2I2. [ 6 ]
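The colour reaction with ammonia is commonly summarized by a net ionic equation of the following form; this is a conventional simplified representation (written with the iodide analogue of Millon's base as the precipitate), since, as noted above, the precipitate's exact composition varies:

```latex
% Simplified net ionic equation for the Nessler reaction.
\[
  2\,[\mathrm{HgI_4}]^{2-} + \mathrm{NH_3} + 3\,\mathrm{OH}^-
  \longrightarrow
  \mathrm{HgO{\cdot}Hg(NH_2)I}\downarrow + 7\,\mathrm{I}^- + 2\,\mathrm{H_2O}
\]
```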
https://en.wikipedia.org/wiki/Nessler's_reagent
The nest protection hypothesis (NPH) is one of multiple hypotheses that seek to explain the behaviour of birds repeatedly introducing green, often aromatic, plant material into the nest after its completion and throughout the incubation and nestling periods. [ 1 ] The hypothesis suggests that this behaviour is an evolutionary strategy to ward off or kill ectoparasites that would otherwise cause higher nestling mortality through blood loss and the spread of pathogens. The aromatic species of greenery are often collected from trees and long-living shrubs containing strong aromatic compounds, [ 2 ] which are expected either to disrupt olfaction in host-seeking parasites or to kill harmful parasites and pathogens. [ 3 ] In more recent reviews of the NPH, the name of the hypothesis has been critiqued for its inaccuracy, with the chick-protection hypothesis suggested as a more suitable alternative, since protection is conveyed not to the nest but to the chicks. [ 4 ] Species that reuse their nest annually are expected to benefit more from using ectoparasite-repellent greenery by warding off overwintering larvae, which pose a greater threat to nestlings in spring. This was supported by a study of a variety of North American and European Falconiformes, which found that the species that made use of greenery were more often those that reuse their nests. [ 1 ] A subsequent study analysed the effectiveness of different plant species, found in and around European starling nests, at inhibiting bacterial growth in a nutrient medium, and found that the plant species preferred by starlings were those with high bacterial inhibitory effectiveness. Furthermore, the behaviour was typically seen more in cavity nesters, which are expected to benefit more from sterilising their nests, than in open-cup nesters. [ 5 ] This study, while providing evidence for the NPH, stresses the use of greenery as a fumigant rather than as an olfactory disruptor. Another study, on blue tits, found that the collection of greenery was performed solely by the females during the egg laying and chick stages, with the frequency of this behaviour increasing over time and peaking when parasite load would be at its highest, providing evidence for the use of greenery specifically against parasitism. [ 6 ] Despite the nest protection hypothesis' prominence in the literature, multiple alternative hypotheses have been suggested over the years. The mate hypothesis suggests there is a courtship element involved in the collection and display of greenery. It has been observed that, in starlings, only males collect greenery and tend to do so during other courtship behaviours. Furthermore, the amount of greenery collected has been found to be a function of the length of courtship, and males have been reported to carry greenery to the nest in an 'eye-catching manner'. [ 5 ] The drug hypothesis suggests that greenery has a direct beneficial health effect on chicks, by potentiation of their immune system or similar mechanisms. [ 7 ] Evidence for this hypothesis has been found in a study where aromatic herb species were artificially placed in starling nests: chicks from nests which had aromatic herbs placed in them were found to be heavier and to have a higher haematocrit, despite there being no noticeable effect on parasite numbers. [ 7 ] However, the inverse has been found in other studies, where the presence of greenery did have an impact on parasite numbers but not on chick weight and leucocyte cell number, providing evidence for the NPH over the drug hypothesis. [ 8 ]
https://en.wikipedia.org/wiki/Nest_protection_hypothesis
Neste MY Renewable Diesel (formerly NExBTL) is a vegetable oil refining fuel production process commercialized by the Finnish oil and refining company Neste. Whether as an admixture or in its pure form, the fuel can supplement or partially replace conventional diesel without problems. Neste guarantees that every gallon sold meets ASTM D975 and EN 15940 specifications in compliance with OEM standards. [ 1 ] Despite the former name BTL, the feedstock is vegetable oil and waste animal fats, not whole plants. However, fuel quality is equal to that of synthetic Fischer-Tropsch BTL and GTL diesel fuels. Neste Renewable Diesel is produced in a patented vegetable oil refining process. Chemically, it entails direct catalytic hydrodeoxygenation (hydrogenolysis) of plant oils, which are triglycerides, into the corresponding alkanes and propane (a simplified overall equation is given at the end of this article). The glycerol chain of the triglyceride is hydrogenated to the corresponding C3 alkane, propane, so there is no glycerol sidestream. This process removes oxygen from the oil; the diesel is not an oxygenate like traditional transesterified FAME biodiesel. Catalytic isomerization into branched alkanes is then performed to adjust the cloud point in order to meet winter operability requirements. As the fuel is chemically identical to ideal conventional diesel, it requires no modification of, or special precautions for, the engine. [ 2 ] [ 3 ] Two refineries in Porvoo, Finland were brought on stream in 2007 and 2009, each with a capacity of 0.2 million tons per year. [ 4 ] [ 5 ] Two larger refineries, each with an annual production of 0.8 million tons, were brought on stream in Singapore and Rotterdam in 2010 and 2011, respectively. [ 6 ] [ 7 ] Neste has estimated that the use of NExBTL diesel cuts greenhouse gas emissions by 40 to 90 percent in comparison to fossil-based diesel. [ 8 ] Due to the chemistry of the process, the renewable diesel is pure alkane and contains no aromatics, oxygen (although oxygen would have promoted cleaner combustion [ 9 ] ) or sulfur. [ 3 ] [ dead link ] The cloud point (or gel point) can be adjusted down to −40 °C (−40 °F) [ 10 ] during the manufacturing process, compared to petrodiesel's cloud point of −30 °C (−22 °F), [ failed verification ] so it could improve the cloud point of diesel when blended. The cloud point is the temperature at which wax precipitates out of the fuel in the form of small wax crystals, making the fuel cloudy and more difficult to move within the fuel lines and systems of vehicles. The lower the cloud point of a particular fuel, the more suitable it is in colder environments. [ 11 ] A mix of palm oil, rapeseed oil, and waste fat from the food industry can be used. Initially, palm oil was the principal (90%) feedstock, although its share was reduced to 53% by 2013 [ 8 ] and to less than 20% by 2017. [ 12 ] However, the EU biofuels industry increased its use of palm oil by 365% during the years 2006–2012, from 0.4 to 1.9 million tonnes per year, and the trend is increasing. [ 13 ] [ 14 ] Categorising palm fatty acid distillate (PFAD) as waste is also controversial, since it can be used to make, for example, soap, candles and animal fodder; in the UK, PFAD is classified as a by-product. [ 15 ] [ 16 ] [ 17 ] PFAD is also omitted from sustainability requirements regarding biodiversity and high carbon stock areas (HCV). [ 18 ] [ 19 ] Palm oil may endanger the carbon neutrality of the fuel if forest is cleared and swamps drained to make way for palm plantations.
In response to this concern, Neste has joined the Round Table on Sustainable Palm Oil (RSPO) to certify that the palm oil is produced in a carbon-neutral, environmentally responsible manner. Neste purchases most of its palm oil from IOI, [ 20 ] but requires a separate production chain for the RSPO-certified palm oil, in order not to create demand for rainforest destruction. [ citation needed ] Deforestation would release carbon to the atmosphere and reduce the overall carbon-binding capacity of the land; it would thus be counterproductive with respect to the carbon balance. In 2007, Greenpeace protested the use of palm oil, concluding that the potential for deforestation remains. According to Greenpeace, increasing the production of palm oil reduces the available land area, and so indirectly generates demand for rainforest destruction, even if the palm oil itself is rainforest-certified. Greenpeace noted that the RSPO is a voluntary organization and claimed that government regulation in palm oil producing countries, such as Indonesia, cannot be relied on because of political corruption. Greenpeace also claimed that palm oil diesel can actually produce three to ten times more carbon dioxide emissions than petrodiesel because of the indirect effects of the clearing of swamps, forest fires and the indirect generation of demand for land area. [ 21 ] Greenpeace demanded that Neste use domestic feedstocks such as rapeseed oil or biogas instead. However, rapeseed is a slower-growing, cold-climate crop with less carbon-binding potential than the oil palm, making emissions from cultivation and transport proportionally more severe. [ citation needed ] By 2017, the share of palm oil in the feedstock had been reduced to less than 20%, [ 12 ] being replaced by reclaimed waste oils such as used frying oil, animal and fish fat, and camelina, jatropha, soy and rapeseed oil. Use of reclaimed waste oil reduces the greenhouse gas impact by 88–91%. [ 8 ] Neste is continuing to look into new feedstocks, including algae, jatropha [ 22 ] and microbial oil. [ 23 ] [ 24 ] [ needs update ] This diesel is blended with petrodiesel. A market was created by the European Union's requirement that 5.75% of transport fuels be biofuels by 2010. The EU further decided, on 18 December 2008, that by 2020 the share of energy from renewable sources in all forms of transportation should be at least 10% of the final consumption of energy. [ 25 ] Systems and regions without an electrical grid will be the long-term market for hydrotreated vegetable oils, as the EU prefers electrical use by a factor of 2.5. In the Helsinki area, the Helsinki Metropolitan Area Council and Helsinki City Transport conducted a three-year experiment, running buses on 25% Neste Renewable Diesel at first and switching to 100% in 2008. [ 26 ] The trial, the largest field test worldwide of a biofuel produced from renewable raw materials, was a success: local emissions decreased significantly, with particle emissions down by 30% and nitrogen oxide emissions by 10%, with excellent winter performance and no problems with catalytic converters. [ 27 ] Since then, Helsinki buses have run on Neste Renewable Diesel. [ citation needed ] As a result of its hydrocarbon nature, Neste Renewable Diesel operates without problems in current diesel vehicles in all climatic conditions.
It does not have any of the drawbacks of traditional ester-type FAME biodiesel, such as poor cold operability, a limited shelf life, engine and fuel system deposit formation, risk of microbial growth and water pick-up, and engine oil dilution and deterioration. [ citation needed ] Neste Renewable Diesel can be blended into diesel fuel in any ratio, whereas the use of traditional FAME biodiesel is limited to a maximum of 7% by the EN 590 standard in order to avoid technical problems in engines and vehicles. [ citation needed ] Following a proposal by VDA, Daimler Trucks and Daimler Buses recommend the biofuel Neste Renewable Diesel as an admixture to petrodiesel. [ 28 ]
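For a concrete picture of the hydrodeoxygenation chemistry described earlier in this article, the conversion of a saturated model triglyceride (tristearin) can be written as a single overall equation. This is a simplified sketch assuming the pure hydrodeoxygenation route, in which all oxygen leaves as water; it is not a statement of Neste's proprietary process conditions:

```latex
% Simplified overall hydrodeoxygenation of tristearin:
% triglyceride + hydrogen -> n-alkanes + propane + water
\[
  \mathrm{C_{57}H_{110}O_6} + 12\,\mathrm{H_2}
  \longrightarrow
  3\,\mathrm{C_{18}H_{38}} + \mathrm{C_3H_8} + 6\,\mathrm{H_2O}
\]
% The three C18 chains become n-octadecane (subsequently isomerized
% to branched alkanes to lower the cloud point), and the glycerol
% backbone leaves as propane, consistent with the absence of a
% glycerol sidestream noted above.
```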
https://en.wikipedia.org/wiki/Neste_Renewable_Diesel
A nested gene is a gene whose entire coding sequence lies within the bounds (between the start codon and the stop codon) of a larger external gene. The coding sequence for a nested gene differs greatly from the coding sequence for its external host gene. Typically, nested genes and their host genes encode functionally unrelated proteins, and have different expression patterns in an organism. There are two categories of nested genes. A nested intronic gene lies within the non-coding intronic region of a larger gene, and occurs relatively frequently, especially in the introns of metazoans and higher eukaryotes. Because only eukaryotic DNA contains intronic regions, this type of gene does not occur in bacteria or archaea. [ 1 ] The human genome contains a relatively high proportion of nested intronic genes. It is predicted to contain at least 158 functional intronic nested genes, with an additional 212 pseudogenes and three snoRNA genes nested in intronic regions. These genes seem to be distributed randomly across all chromosomes, and the majority code for proteins that are functionally unrelated to their host genes. [ 2 ] [ 1 ] Genes nested opposite the coding sequences of their host genes are very rare, and have been observed in prokaryotes and, more recently, in yeast ( S. cerevisiae ) and in Tetrahymena thermophila. These non-intronic nested genes remain to be identified in metazoan genomes. As with intronic nested genes, non-intronic nested genes typically do not share functions or expression patterns with their host genes. [ 1 ]
https://en.wikipedia.org/wiki/Nested_gene
Nested polymerase chain reaction (nested PCR) is a modification of polymerase chain reaction intended to reduce non-specific binding in products due to the amplification of unexpected primer binding sites. [ 1 ] Polymerase chain reaction itself is the process used to amplify DNA samples, via a temperature-mediated DNA polymerase. The products can be used for sequencing or analysis, and this process is a key part of many genetics research laboratories, along with uses in DNA fingerprinting for forensics and other human genetic cases. Conventional PCR requires primers complementary to the termini of the target DNA. The amount of product from the PCR increases with the number of temperature cycles that the reaction is subjected to. A commonly occurring problem is primers binding to incorrect regions of the DNA, giving unexpected products. This problem becomes more likely with an increased number of PCR cycles. Nested polymerase chain reaction involves two sets of primers, used in two successive runs of polymerase chain reaction, the second set intended to amplify a secondary target within the product of the first run. This allows a low number of cycles in the first round, limiting non-specific products. The second, nested primer set should amplify only the intended product from the first round of amplification, and not non-specific products. This allows running more total cycles while minimizing non-specific products, which is useful for rare templates or PCR with high background. The target DNA undergoes the first run of polymerase chain reaction with the first set of primers. Binding at alternative, similar primer sites gives a selection of products, only one containing the intended sequence. The product from the first reaction then undergoes a second run with the second set of primers. It is very unlikely that any of the unwanted PCR products contain binding sites for both of the new primers, ensuring that the product from the second PCR has little contamination from unwanted products.
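The two-round primer logic can be pictured with a short, purely illustrative Python sketch. It models primer binding as substring search on the top strand only (real primers bind the complementary strand as reverse complements, and the sequences and primers below are invented for the example):

```python
# Illustrative model of nested PCR primer selection as substring search.
# All sequences are hypothetical; the reverse primer is represented by
# its top-strand binding site for simplicity.

TEMPLATE = ("ATGCGTACGTTAGCCGATCGAAGCTTGCAGTACGATCGGCTA"
            "GGCTAACGTTGCAAGCTTCGATCGTACGCATGCGT")

def amplicon(template: str, fwd: str, rev_site: str) -> str:
    """Return the region 'amplified' by a primer pair, or '' if either
    primer fails to find its binding site in the template."""
    start = template.find(fwd)
    if start == -1:
        return ""
    end = template.find(rev_site, start + len(fwd))
    if end == -1:
        return ""
    return template[start:end + len(rev_site)]

# Round 1: the outer primer pair defines a long first-round product.
first_product = amplicon(TEMPLATE, "ATGCGTACG", "CGATCGTACG")

# Round 2: the nested (inner) primers bind only *within* the first-round
# product, so carry-over junk lacking both inner sites cannot amplify.
second_product = amplicon(first_product, "GATCGAAGCTT", "GGCTAACGTT")

print(len(TEMPLATE), len(first_product), len(second_product))  # 77 70 37
```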
https://en.wikipedia.org/wiki/Nested_polymerase_chain_reaction
In structural proof theory , the nested sequent calculus is a reformulation of the sequent calculus to allow deep inference . [ 1 ]
https://en.wikipedia.org/wiki/Nested_sequent_calculus
Nesting behavior is an instinct in animals during reproduction, in which they prepare a place with optimal conditions to nurture their offspring. [ 1 ] The nesting place provides protection against predators and competitors that mean to exploit or kill offspring. [ 2 ] It also provides protection against the physical environment. [ 1 ] Nest building is important in family structure [ 3 ] and is therefore influenced by different mating behaviours and social settings. [ 4 ] It is found in a variety of animals such as birds, fish, mammals, amphibians, and reptiles. [ 1 ] Female dogs may show signs of nesting behaviour about one week before they are due, [ 5 ] including pacing and building a nest with items from around the house such as blankets, clothing, and stuffed animals. [ 5 ] (They also sometimes do this in cases of false pregnancy, or pseudocyesis.) Domestic cats often make nests by bringing straw, cloth scraps, and other soft materials to a selected nook or box; they are particularly attracted to haylofts as nest sites. Commercial whelping and queening boxes are available; however, children's wading pools (for dogs) and plastic dishpans (for cats) work just as well. [ 5 ] In birds this behaviour is known as "going broody", and is characterized by an insistence on staying on the nest as much as possible, and by the cessation of laying new eggs. Marsupials do not exhibit a nesting instinct per se, because the mother's pouch fulfills the function of housing the newborns. Nest building is performed in order to provide sufficient shelter and comfort to the arriving offspring. [ 6 ] Threats, such as predators, that decrease the chance of survival will increase care of offspring. [ 7 ] Under natural conditions, sows will leave the herd and travel up to 6.5 km (4.0 mi) [ 6 ] in the day prior to parturition in order to find an appropriate spot for a nest. [ 8 ] Sows use their forelimbs and snouts to excavate depressions in the ground and to gather and transport nesting materials. [ 9 ] Although the nests vary in radius depending on the age of the sow, they are generally round to oval in shape and are usually located near trees, uprooted stumps or logs. [ 9 ] The shelter provided by the sow's nest is of utmost importance to thermoregulation. For the first two weeks of the piglets' lives their physiological thermoregulation is still developing, and due to a lack of brown fat tissue, piglets require an increased ambient temperature. Without the protection of the nest, the piglets would be subjected to climatic influences causing their internal temperature to drop to life-threatening levels. [ 6 ] Farrowing crates have been widely implemented in modern pig husbandry in order to reduce piglet mortality from crushing. However, this type of housing leaves too little space for the sow's natural nest-building instinct. The sows must therefore farrow without performing this natural pre-partum activity, which results in high stress for the animal. In rodents and lagomorphs, the nesting instinct is typically characterized by the urge to seek the lowest sheltered spot available; this is where these mammals give birth. Rats, for example, prefer to burrow amongst dense areas of vegetation or around human settlements, which they come into contact with often. [ 10 ] Some rodent species create burrows that develop microclimates; this is another way that the nesting instinct aids in thermoregulation.
[ 4 ] Alzheimer's disease in rats has been observed to impair nesting ability, especially in females. These impairments become exaggerated with age and progression of the disease. [ 3 ] Among burrowing animals, such as groundhogs and prairie dogs, nesting material is used all across the burrow for purposes such as insulation, bedding, litter chambers, transportation, and comfort. [ 11 ] Marmot species such as groundhogs and alpine marmots line their burrows with thick grasses in advance of winter; this maintains a thermoregulated, insulated, comfortable environment for the marmots as they undergo hibernation. [ 12 ] [ 13 ] [ 14 ] Maternal nest-building is regulated by the hormonal actions of estradiol, progesterone, and prolactin. Given the importance of shelter to offspring survival and reproductive success, it is unsurprising that a set of common hormonal signals has evolved. However, the exact timing and features of nest building vary among species, depending on endocrine and external factors. The initial drive to perform this behavior is stimulated internally via hormones, specifically a rise in prolactin levels. This increase is driven by an increase in prostaglandin and a decrease in progesterone. [ 15 ] The second phase of nest building, also known as the material-oriented phase, is driven by external stimuli: the proper nest-building materials must be present. Both internal and external stimuli must exist in conjunction with one another for nest building to commence. The cessation of nest building is correlated with a rise in oxytocin, the hormone responsible for the contraction of the uterus. Shortly after this, parturition will commence. [ 6 ] In rabbits, nest building occurs towards the last third of pregnancy. The mother digs and builds a nest of straw and grass, which she lines with hair plucked from her body. This sequential motor pattern is produced by changes in estradiol, progesterone, and prolactin levels. Six to eight days pre-partum, high levels of estradiol and progesterone lead to a peak in digging behavior. Both estradiol and progesterone are produced and released by the ovaries. One to three days pre-partum, straw-carrying behavior is expressed as a function of decreasing progesterone levels, maintenance of high estradiol levels, and increasing prolactin levels. This release of prolactin (from the anterior pituitary) is likely caused by the increase in the estrogen-to-progesterone ratio. From one day pre-partum to four days post-partum, hair loosening and plucking occur as a result of low progesterone and high prolactin levels, together with a decrease in testosterone. [ 16 ] In house mice and golden hamsters, nest-building takes place earlier, at the start or middle of pregnancy. For these species, nest-building coincides with high levels of estrogen and progestin. [ 17 ] [ 18 ] External factors also interact with hormones to influence maternal nest-building behavior. Pregnant rabbits that have been shaved will line their straw nest with available alternatives, such as male rabbit hair or synthetic hair. If given both straw and hair, mothers prefer straw during the straw-carrying period, and prefer hair during the nest-lining period. If given hair as the only material, shaved mothers collect the hair even during the straw-carrying period. [ 19 ] Research on avian paternal behavior shows that nest-building is triggered by different stimuli in the two sexes.
Unlike the case for females, male nest-building among ring doves depends on the behavior of the prospective mate rather than on hormonal mechanisms. Males that are castrated and injected daily with testosterone either court females or build nests, depending purely on the behavior of the female. Hence, the male avian transition from courtship to nest-building is prompted by social cues and not by changes in hormone levels. [ 20 ] In the sand goby ( Pomatoschistus minutus ), the males build the nests. Males that exhibit increased paternal care of eggs build nests with smaller entrances than males that provide less parental care. This helps prevent predators from entering the nest and consuming the offspring or developing eggs. [ 7 ] Nesting behavior is also present in many invertebrates. The best-known example of nesting behavior in insects is that of the domestic honey bee. Most bees build nests. Like honey bees, solitary bees make nests; however, solitary bees make individual nests for their larvae and do not live in colonies. [ 21 ] Solitary bees will burrow into the ground, dead wood and plants. [ 21 ] [ 22 ]
https://en.wikipedia.org/wiki/Nesting_instinct
net.art refers to a group of artists who have worked in the medium of Internet art since 1994. Some of the early adopters and main members of this movement include Vuk Ćosić, Jodi.org, Alexei Shulgin, Olia Lialina, Heath Bunting, Daniel García Andújar, [ 1 ] and Rachel Baker. [ 2 ] Although this group was formed as a parody of avant-garde movements by writers such as Tilman Baumgärtel, Josephine Bosma, Hans Dieter Huber and Pit Schultz, their individual works have little in common. The term "net.art" is also used as a synonym for net art or Internet art and covers a much wider range of artistic practices. In this wider definition, net.art means art that uses the Internet as its medium and that cannot be experienced in any other way. Typically net.art has the Internet, and the specific socio-culture that it spawned, as its subject matter, but this is not required. The German critic Tilman Baumgärtel, building on the ideas of the American critic Clement Greenberg, has frequently argued for a "media specificity" of net.art in his writings. According to the introduction to his book net.art. Materialien zur Netzkunst, the specific qualities of net.art are "connectivity, global reach, multimediality, immateriality, interactivity and egality". [ 3 ] The net.art movement arose in the context of the wider development of Internet art. As such, net.art is more of a movement and a critical and political landmark in Internet art history than a specific genre. Early precursors of the net.art movement include the international fluxus (Nam June Paik) and avant-pop (Mark Amerika) movements. The avant-pop movement in particular became widely recognized in Internet circles from 1993, largely via the popular Alt-X site. In 1995, the term "net.art" was used by nettime initiator Pit Schultz as the title for an exhibition in Berlin, in which Vuk Cosic and Alexei Shulgin both showed their work. [ 4 ] It was later used with regard to the "net.art per se" meeting of artists and theorists in Trieste in May 1996, and referred to a group of artists who worked together closely in the first half of the 1990s. These meetings gave birth to the website net.art per se, [ 5 ] a fake CNN website "commemorating" the event. [ 6 ] The term "net.art" has been wrongly attributed to the artist Vuk Cosic in 1997, after Alexei Shulgin wrote about the origin of the term in a prank mail to the nettime mailing list. [ 7 ] According to Shulgin's mail, net.art stemmed from "conjoined phrases in an email bungled by a technical glitch (a morass of alphanumeric junk, its only legible term 'net.art')". [ 6 ] The researcher and artist Ramzi Turki uses the Facebook platform as a space for artistic exchange. [Fanny Drugeon, "Ramzi Turki, Le Net art et l'esthétique du partage : les murs ont aussi des yeux qui nous regardent", Critique d'art. Actualité internationale de la littérature critique sur l'art contemporain, 27 May 2020 (ISSN 1246-8258, DOI 10.4000/critiquedart.47849).] net.artists have built digital art communities through an active practice of web hosting and web art curating. net.artists have defined themselves through an international and networked mode of communication, an interplay of exchanges, collaborative and cooperative work [ citation needed ] . They have a large presence on several mailing lists such as Rhizome, File festival, Electronic Language International Festival, Nettime, Syndicate and Eyebeam.
The identity of the net.artists is defined by both their digital works and their critical involvement in the digital art community, as shown by the polemical discussion led by Olia Lialina on Nettime in early 2006 about the "New Media" Wikipedia entry. [ 8 ] net.artists like Jodi developed a particular form of e-mail art, or spam mail art, through text reprocessing and ASCII art. The term "spam art" was coined [ 9 ] by net critic and net art practitioner [ 10 ] Frederic Madre to describe all such forms of disruptive interventions in mailing lists, where seemingly nonsensical texts were generated by simple scripts, online forms or typed by hand. A connection can be made to the e-mail interventions of "Codeworks" artists such as Mez or mi ga, or robots like Mailia, which analyze emails and reply to them. "Codeworks" is a term coined by the poet Alan Sondheim to define the textual experiments of artists playing with faux-code and non-executable script or mark-up languages. [ citation needed ] net.art developed in a context of cultural crisis in Eastern Europe at the beginning of the 1990s, after the end of the Soviet Union and the fall of the Berlin Wall. The artists involved in net.art experiments are associated with the idea of a "social responsibility" that would answer the idea of democracy as a modern capitalist myth. The Internet, often promoted as the democratic tool par excellence but largely subject to the rules of vested interests, is targeted by the net.artists, who claimed that "a space where you can buy is a space where you can steal, but also where you can distribute". net.artists focus on finding new ways of sharing public space. By questioning structures such as the navigation window and challenging their functionality, net.artists have shown that what is considered to be natural by most Internet users is actually highly constructed, even controlled, by corporations. Commercial browsers like Netscape Navigator or Internet Explorer display user-friendly structures (the "navigation" and the "exploration" are landmarks of social practices) to provide the user with a familiar environment; net.artists try to break this familiarity. Olia Lialina, in My Boyfriend Came Back From The War, [ 11 ] and the duo Jodi, with their series of pop-up interventions and browser-crashing applets, have engaged the materiality of navigation in their work. Their experiments have given birth to what could be called "browser art", which has been expanded by the British collective I/O/D's experimental navigator WebStalker. Alexei Shulgin and Heath Bunting have played with the structure of advertisement portals by establishing lists of keywords unlikely to be searched for but nonetheless existing on the web as URLs or metadata components: they use this relational data to enmesh paths of navigation in order to create new readable texts [ citation needed ] . The user is not exploring one art website that has its own meaning and aesthetic significance within itself, but rather is exposed to the entire network as a collection of socioeconomic forces and political stances that are not always visible. Rachel Greene has associated net.art with tactical media as a form of détournement. Greene writes: "The subversion of corporate websites shares a blurry border with hacking and agitprop practices that would become an important field of net art, often referred to as 'tactical media'."
[ 6 ] The Jodi collective works with the aesthetics of computer errors, which has much in common, on both the aesthetic and pragmatic levels, with hacker culture. By questioning and disturbing the browsing experience with hacks, code tricks, faux-code, and faux-viruses, the collective critically investigates the context in which it operates. In turn, the digital environment becomes concerned with its own internal structure. The collective 0100101110101101.org expands the idea of "art hacktivism" by performing code interventions and perturbations in art festivals such as the Venice Biennale. On the other hand, the collective irational.org expands the idea of "art hacktivism" by performing interventions and perturbations in the real world, acting on it as a possible ground for social re-engineering. "We can point to a superficial difference between most net.art and hacking: hackers have an obsession with getting inside other computer systems and having an agency there, whereas the 404 errors in the JTDDS (for example) only engage other systems in an intentionally wrong manner in order to store a 'secret' message in their error logs. It's nice to think of artists as hackers who endeavour to get inside cultural systems and make them do things they were never intended to do: artists as culture hackers." [ 12 ] A networking expert set up DNS entries so that the Linux traceroute command would reveal the opening text of Star Wars Episode IV (see the sketch at the end of this article). [ 13 ] This deep technical repurposing for the sake of enchantment and fun can be considered a net.art performance. Computer worms can be intentionally benign, even positive, when they are repurposed for large-scale ephemeral art that uses the whole Internet as a canvas. [ 14 ] During the heyday of net.art developments, particularly during the rise of global dot-com capitalism, the first series of critical columns appeared in German and English in the online publication Telepolis. Edited by the writer and artist Armin Medosch, the work published at Telepolis featured the American artist and net theorist Mark Amerika's "Amerika Online" columns. [ 15 ] These columns satirized the way self-effacing net.artists (himself included) took themselves too seriously. In response, European net.artists impersonated Amerika in faux emails to deconstruct his demystification of the marketing schemes most net.artists employed to achieve art world legitimacy. It was suggested that "the duplicitous dispatches were meant to raise US awareness of electronic artists in Europe, and may even contain an element of jealousy." [ 16 ] Many of these net.art interventions also tackled the issue of art as business and investigated mainstream cultural institutions such as the Tate Modern. Harwood, a member of the Mongrel collective, in his work Uncomfortable Proximity [ 17 ] (the first online project commissioned by the Tate), mirrors the Tate's own website and offers new images and ideas, collaged from his own experiences, his readings of Tate works, and publicity materials that inform his interest in the Tate website [ citation needed ] . net.artists have actively participated in the debate over the definition of net.art within the context of the art market. net.art promoted the modernist idea of the work of art as a process, as opposed to a conception of art as object-making [ citation needed ] . Alexander R.
Galloway, in an e-flux article entitled "Jodi's Infrastructure", argues that Jodi's approach to net.art, which involves the very structures that govern coding, is uniquely modernist: the form and content converge in the artwork. [ 18 ] The presentation of this process within the art world, whether it should be sold in the market or shown in the institutional art environment, is problematic for digital works [ citation needed ] created for the Internet. The web, as marketable as it is, cannot be restricted to the ideological dimensions of the legitimate field of art, the institution of legitimation for art value, which is both ideological and economical [ citation needed ] . All for Sale by Aliona is an early net.art experiment addressing such issues. The WWWArt Award competition initiated by Alexei Shulgin in 1995 suggests rewarding found Internet works with what he calls an "art feeling." [ citation needed ] Some projects, such as Joachim Schmid's Archiv, Hybrids, or Copies by Eva & Franco Mattes (under the pseudonym of 0100101110101101.org), are examples of how to store art-related or documentary data on a website. Cloning, plagiarizing, and collective creation are offered as alternative answers, as in the Refresh Project. [ 19 ] Olia Lialina has addressed the issue of digital curating via her web platform Teleportacia.org, an online gallery to promote and sell net.art works. Each piece of net.art has its originality protected by a guarantee constituted by its URL, which acts as a barrier against reproducibility and/or forgery. Lialina claimed that this allowed the buyer of the piece to own it as they wished: controlling the location address as a means of controlling access to the piece. [ citation needed ] This attempt at giving net.art an economic identity and a legitimation within the art world was questioned even within the net.art sphere, though the project was often understood as a satire. [ 20 ] On the other hand, Teo Spiller really did sell a web art project, Megatronix, to the Ljubljana Municipal Museum in May 1999, calling the whole project of selling net.art "net.art.trade". [ 21 ] Teleportacia.org became an ambiguous experiment on the notion of originality in the age of extreme digital reproduction and remix culture. The guarantee of originality protected by the URL was quickly challenged by Eva & Franco Mattes, who, under the pseudonym of 0100101110101101.org, cloned the content and produced an unauthorized mirror-site, showing the net.art works in the same context and the same quality as the original. The Last Real Net Art Museum is another example of Olia Lialina's attempt to deal with the issue. Online social network experiments such as the Poietic Generator, which existed before [ 22 ] the net.art movement, was involved in it, [ 23 ] and still exists after it, [ 24 ] may show that the fashionable framing of net.art may have overlooked some deep theoretical questions. [ 25 ]
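The traceroute piece mentioned earlier relied on reverse DNS: each router hop along a path can be given a PTR record, so the hostnames that traceroute prints, read in order, form consecutive lines of text. The following is a minimal sketch of just the lookup mechanism, with entirely hypothetical hop addresses (it does not perform the actual route discovery, and the IPs are from a documentation range, not the real artwork):

```python
import socket

# Hypothetical hop IPs in the order traceroute might report them. In the
# real piece, the operator controlled the PTR records for a block of
# addresses so each hop's hostname was one line of the story.
hops = ["198.51.100.1", "198.51.100.2", "198.51.100.3"]

for ttl, ip in enumerate(hops, start=1):
    try:
        # Reverse DNS lookup: IP -> PTR hostname (one "line" of text)
        hostname, _, _ = socket.gethostbyaddr(ip)
    except socket.herror:
        hostname = ip  # no PTR record; fall back to the bare address
    print(f"{ttl:2d}  {hostname}")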
https://en.wikipedia.org/wiki/Net.art
net2phone is a cloud communications provider offering cloud-based telephony services to businesses worldwide. The company is a subsidiary of IDT Corporation. net2phone was founded in 1990 by the telecom entrepreneur Howard Jonas, the chairman and chief executive officer of net2phone's parent company, IDT Corporation. The company was an early pioneer in the commercialization of voice-over-Internet-protocol (VoIP) technologies, leveraging the global carrier business and infrastructure of IDT and focusing on transitioning businesses and consumers from traditional PSTN telecom interconnects to voice over IP. On July 30, 1999, during the dot-com bubble, the company became a public company via an initial public offering, raising $81 million. Shares rose 77% on the first day of trading, to $26 per share. [ 1 ] After completion of the IPO, IDT owned 57% of the company. [ 2 ] Within a few weeks, the shares increased another 100% in value, to $53 per share. [ 2 ] In March 2000, in a transaction facilitated by IDT CEO Howard Jonas, a consortium of telecommunications companies led by AT&T announced a $1.4 billion investment for a 32% stake in the company, buying shares for $75 each. The transaction was completed in August 2000. [ 3 ] AOL had expressed an interest in buying all or part of the company but was not agreeable to the price. [ 4 ] In August 2000, Jonathan Fram, president of the company, left to join eVoice. [ 5 ] [ 6 ] In September 2000, the company formed Adir Technologies, a joint venture with Cisco Systems. [ 7 ] [ 8 ] In March 2002, the company sued Cisco for breach of contract. [ 9 ] In February 2002, the company announced 110 layoffs, or 28% of its workforce. [ 10 ] In October 2004, Liore Alroy became chief executive officer of the company. [ 11 ] On March 13, 2006, IDT Corporation acquired the shares of the company that it did not already own for $2.05 per share. [ 12 ] In 2015, net2phone began providing Unified Communications as a Service (UCaaS) targeted at the SMB market. net2phone's UCaaS initiative was developed by the company's management team, led by its president, Jonah Fink. Over the next three years, net2phone expanded its UCaaS offering into Argentina, Brazil, [ 13 ] Colombia, Mexico, and Peru, leveraging its local infrastructure, communication licenses and local staff, all while selling in each market's local currency and language. In 2001, the company acquired iPing. [ 14 ] In 2000, the company acquired Aplio, an internet appliance maker located in San Bruno, California. Because unified communications demands more than just voice over IP, such as messaging, net2phone acquired Live Ninja in January 2017, [ 15 ] [ 16 ] a Miami-based provider of a customer-facing messaging and live chat management service. In 2018, net2phone launched an updated version of its communications platform, incorporating the technology and capabilities from the Live Ninja acquisition. Further expansion came in 2019 with the acquisition of Versature, [ 17 ] a SaaS-based business communications and hosted VoIP provider serving the Canadian market, and in 2020 with the acquisition of RingSouth Europa, [ 18 ] a business communications provider headquartered in Murcia, Spain.
In 2020, with the rise of the COVID-19 pandemic causing a shift in the workplace environment, net2phone launched a native integration with Microsoft Teams, [ 19 ] as well as its own video conferencing platform, net2phone Huddle, [ 20 ] followed by further integrations with CRM tools such as Salesforce and Zoho and collaboration tools such as Slack. In 2022, net2phone acquired Integra CCS, [ 21 ] a Contact Center as a Service (CCaaS) provider operating out of Uruguay. UNITE is net2phone's Unified Communications as a Service (UCaaS) product, which provides businesses with voice, video, chat, text, and integrations. The product offers advanced call features, reporting, analytics, and integrations with popular SaaS tools that can be managed through a web-based interface. [ 22 ] net2phone's uContact is a Contact Center as a Service (CCaaS) platform introduced through the acquisition of Integra in 2022. uContact offers a suite of contact center features, including omnichannel support, social media, chatbots, workflow management, and development tools. [ 23 ] Huddle is net2phone's high-definition video conferencing platform, released in April 2020. Huddle conferences are passcode-protected and encrypted. Huddle includes several features such as screen sharing, YouTube casting, chat messaging, and a raise-hand option. The application is accessible from desktop or mobile devices. [ 24 ] net2phone AI was released in July 2023 as an add-on service designed to optimize agent and client interactions. Key functionalities include sentiment analysis, automatic call transcription, auto-generated follow-up emails, auto-generated call summaries, AI-generated coaching notes, call analytics, and CRM integrations. net2phone AI is available in multiple languages and integrates with communication or voice platforms that support API webhooks. [ 25 ] net2phone offers SIP trunking services, allowing businesses to merge voice and data into a unified communications platform without the need for equipment replacement. The SIP trunking solution includes features such as high-quality voice interactions, international calling, hybrid SIP and hosted support, increased security, codec support, and a stable, fully redundant network.
https://en.wikipedia.org/wiki/Net2Phone
The NetEqualizer is a bandwidth-shaping appliance designed for voice and data networks, created by APconnections in 2003. NetEqualizer traffic-shaping appliances use built-in behavior-based algorithms to automatically shape traffic during peak periods on the network. [ 1 ] When the network is congested, the fairness algorithms favor business-class applications at the expense of large file downloads. The favored applications include VoIP, web browsing, web-based applications, chat and email. [ 2 ] Traffic is prioritized based on the nature of the traffic, so the NetEqualizer remains net neutral. The NetEqualizer also provides quality of service (QoS) through rate limiting, shared limits, and quotas. A DDoS monitor was added in 2015. In addition, the NetEqualizer can be configured to control both encrypted and unencrypted peer-to-peer file sharing (P2P) traffic. Add-on modules include directory integration (NDI), caching (NCO), and a DDoS firewall. The NetEqualizer has been implemented by colleges, universities, libraries, hotels, and businesses around the world. The appliance has also been used in the rebuilding efforts in both Iraq and Afghanistan. [ 3 ] APconnections is a privately held company founded in 2003 and based in Lafayette, Colorado.
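The "fairness during congestion" idea can be pictured with a toy sketch: when link utilization crosses a threshold, penalize only the largest flows, leaving interactive traffic such as VoIP and web browsing untouched. This is a generic illustration of behavior-based shaping, not APconnections' actual algorithm; the thresholds and names below are invented:

```python
# Toy behavior-based shaping heuristic (invented thresholds/names;
# not NetEqualizer's real implementation).

LINK_CAPACITY_KBPS = 10_000
CONGESTION_RATIO = 0.85      # start shaping above 85% utilization
HOG_SHARE = 0.25             # a flow using >25% of capacity is a "hog"

def flows_to_penalize(flows: dict[str, int]) -> list[str]:
    """flows maps flow-id -> current rate in kbps. Returns the flows
    that should have latency added (shaped) during congestion."""
    utilization = sum(flows.values()) / LINK_CAPACITY_KBPS
    if utilization < CONGESTION_RATIO:
        return []  # link not congested: leave every flow alone
    return [fid for fid, rate in flows.items()
            if rate > HOG_SHARE * LINK_CAPACITY_KBPS]

# Example: one large download dominates; VoIP and web stay untouched.
current = {"voip-call": 80, "web-browsing": 300, "big-download": 8_600}
print(flows_to_penalize(current))   # ['big-download']
```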
https://en.wikipedia.org/wiki/NetEqualizer
NetExpert monitors and controls networks and service-impacting resources [ 1 ] using object-oriented and expert-systems technologies. [ 2 ] NetExpert is considered an OSS, used in managing wireline and wireless networks and services. [ 3 ] NetExpert is a scalable and distributable architecture that supports flexible configuration while maintaining individual component independence. Its application packages address many areas of communication services management, including fault, performance, reporting, activation, IP services, and others. These can be further tailored to individual customer environments and management requirements. [ 3 ] The framework consists of a set of integrated software modules and graphical user interface (GUI) development tools that enable the creation and deployment of complex management solutions. The object-oriented architecture of the NetExpert framework provides the building blocks to implement operations support and management systems using high-level tools rather than low-level programming languages. [ 4 ] The NetExpert framework is founded on open systems and object-oriented methodology. NetExpert supports different standards, transmission protocols, and equipment data models. NetExpert is based on the Telecommunications Management Network (TMN) architecture created by the Telecommunication Standardization Sector of the International Telecommunication Union. It supports the development and deployment of applications for the main TMN management areas (fault, configuration, accounting, performance, and security) and the implementation of layered management architectures. In addition, the NetExpert framework employs expert rules and policies that replace complex programming languages and enable network analysts to model desired system behaviors using GUI-based rule editors. [ 4 ]
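The rule-based approach described above follows a general expert-system pattern: events from network elements are matched against analyst-defined rules pairing conditions with actions. The sketch below illustrates that generic pattern only; it is not NetExpert's actual rule language or API, and all names are invented:

```python
# Generic event -> rule -> action pattern used by rule-based network
# managers. Invented names; not NetExpert's real rule syntax.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    source: str      # which network element raised the event
    kind: str        # e.g. "LINK_DOWN", "HIGH_CPU"
    severity: int    # 1 (info) .. 5 (critical)

# A rule pairs a condition with an action, as a GUI rule editor might.
Rule = tuple[Callable[[Event], bool], Callable[[Event], None]]

rules: list[Rule] = [
    (lambda e: e.kind == "LINK_DOWN",
     lambda e: print(f"ALARM: {e.source} link down, opening ticket")),
    (lambda e: e.kind == "HIGH_CPU" and e.severity >= 4,
     lambda e: print(f"PAGE: {e.source} CPU critical")),
]

def handle(event: Event) -> None:
    # Fire every rule whose condition matches the incoming event.
    for condition, action in rules:
        if condition(event):
            action(event)

handle(Event("router-7", "LINK_DOWN", 5))
handle(Event("switch-2", "HIGH_CPU", 4))
```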
https://en.wikipedia.org/wiki/NetExpert
NetGenie is a wireless router [ 1 ] in Cyberoam's product portfolio, launched in 2011. NetGenie provides Internet connectivity across all Internet-access devices such as desktops, laptops, PDAs, smartphones, tablets and other handheld devices. The NetGenie line comprises four basic products: two for home users (NG11VH and NG11EH [ 2 ] ) and two for SOHO users (NG11VO and NG11EO [ 3 ] ). The NG11VH is a wireless VDSL2/ADSL2+ modem router and supports VDSL2, ADSL2+, cable Internet, and 3G USB modem connections. [ 4 ] The NG11EH model has parental control options for blocking unsafe or adult content (under categories such as pornography, spyware, and nudity), customizable Internet access, and reports on online activities that include relevant security information about the user's Internet activity, such as visited websites, online applications used, and attempts to visit blocked websites. [ 5 ] The NG11VO is a wireless VDSL2/ADSL2+ integrated security appliance for small offices. It supports VDSL2, ADSL2+, cable Internet, and 3G USB modem connections. [ 6 ] The NG11VO comes with pre-configured security against unauthorized access and misuse of office Wi-Fi networks. [ citation needed ] The NG11EO [ 7 ] is for small or home offices and can be managed through a web-based GUI, available on any Internet-connected device within the office network. Features include security, VPN, 3G readiness, internal controls, pre-configured Wi-Fi security, reports, and remote management. [ citation needed ]
https://en.wikipedia.org/wiki/NetGenie
NetMotion Software, formerly NetMotion Wireless, is a privately held software company specializing in network security. [ 5 ] [ 6 ] NetMotion Wireless was formed in 2001 as a spin-off for wireless software from WRQ (based in Seattle, Washington and later part of the Attachmate Group). Former WRQ president Craig McKibben was CEO. [ 7 ] In 2021, NetMotion was acquired by Absolute Software for an estimated US$340 million. [ 16 ] NetMotion is headquartered in Seattle, Washington, with offices in Chicago, London, Tokyo and Sydney. The company added a second headquarters in Victoria, British Columbia in 2019. [ 17 ] NetMotion products allow users to transition from traditional secure remote access technologies to a zero-trust approach, without affecting productivity or admin controls. [ 18 ] Fundamentally, the product consists of client software on each mobile device, which communicates with a control server in the cloud or data center that pushes policies and actions to the client for execution. [ 19 ] This architecture gives administrators control of the endpoints, letting them manage application delivery in software based on changing network conditions, regardless of the combination of networks used, [ 20 ] including cellular and Wi-Fi networks that are outside their direct administrative control. [ 19 ] An enhanced filtering feature named Aware was added in 2019. [ 21 ] In July 2020, a new release of the software added the term software-defined perimeter (SDP). [ 22 ] In late 2020, NetMotion's products were marketed with the term secure access service edge. [ 23 ]
https://en.wikipedia.org/wiki/NetMotion_Software
NetPIPE (Network Protocol Independent Performance Evaluator) is a protocol-independent performance tool that visually represents network performance under a variety of conditions. It has modules for PVM , TCGMSG, and the one-sided message-passing standards of MPI-2 and SHMEM .
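NetPIPE's core measurement is a ping-pong test: bounce messages of increasing size between two endpoints and derive throughput from the round-trip time. The following is a minimal sketch of that idea over plain TCP sockets; it is not NetPIPE's actual code or wire protocol, and the host and port are placeholders:

```python
import socket
import time

def pingpong_client(host: str, port: int,
                    sizes=tuple(1 << k for k in range(10, 21))):
    """Bounce messages of increasing size off an echo server and report
    throughput, in the spirit of NetPIPE's ping-pong measurement."""
    with socket.create_connection((host, port)) as s:
        for size in sizes:
            payload = b"x" * size
            start = time.perf_counter()
            s.sendall(payload)
            received = 0
            while received < size:        # read back the echoed bytes
                chunk = s.recv(65536)
                if not chunk:             # server closed the connection
                    return
                received += len(chunk)
            rtt = time.perf_counter() - start
            mbps = (2 * size * 8) / (rtt * 1e6)   # out + back, in Mbit/s
            print(f"{size:>8} B  {rtt * 1e3:8.3f} ms  {mbps:10.2f} Mbit/s")

# Usage (requires an echo server listening at the placeholder address):
# pingpong_client("192.0.2.10", 5001)
```

Plotting throughput against message size in this way exposes the latency-dominated regime for small messages and the bandwidth-limited regime for large ones, which is exactly the curve NetPIPE visualizes.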
https://en.wikipedia.org/wiki/NetPIPE
NetQoS , [ 1 ] which sells network performance management software and services, was co-founded by Joel Trammell in 1999 [ 2 ] and acquired by CA Technologies in 2009. [ 3 ] [ 4 ] [ 5 ] The company's name refers to network quality of service. [ 6 ] Their ReportAnalyzer provides "real-time visibility into network traffic" [ 7 ] and seeks to improve network performance. [ 8 ] NetQoS products were cited by over 100 articles regarding NetQoS patents [ 14 ] and prior art. [ 15 ]
https://en.wikipedia.org/wiki/NetQoS
NetSpot is a software tool for wireless network assessment, scanning, and surveys, analyzing Wi-Fi coverage and performance. [ 1 ] It runs on Mac OS X 10.6+ and Windows 7, 8, 10, and 11. NetSpot supports 802.11n, 802.11a, 802.11b, and 802.11g wireless networks and uses the standard Wi-Fi network adapter and its AirPort interface to map radio signal strength and other wireless network parameters, and to build reports from that data. NetSpot was released in August 2011. [ 2 ] NetSpot provides professional wireless site survey features for Wi-Fi and maps the coverage of a living area, office space, buildings, etc. [ 3 ] It provides visual data to help analyze radio signal leaks, discover noise sources, map channel use, and optimize access point locations. The application can also perform Wi-Fi network planning: the collected data help to select channels and placements for new hotspots. Survey reports can be generated in PDF format.
https://en.wikipedia.org/wiki/NetSpot
NetStumbler (also known as Network Stumbler) was a tool for Windows that facilitates detection of wireless LANs using the 802.11b, 802.11a and 802.11g WLAN standards. It runs on Microsoft Windows operating systems from Windows 2000 to Windows XP. A trimmed-down version called MiniStumbler is available for the handheld Windows CE operating system. NetStumbler became one of the most popular programs for wardriving and wireless reconnaissance, although it has a disadvantage: it can be detected easily by most intrusion detection systems, because it actively probes a network to collect information. NetStumbler has integrated support for a GPS unit. With this support, NetStumbler displays GPS coordinate information next to the information about each discovered network, which can be useful for finding specific networks again after having sorted out collected data. [ 1 ] No updated version has been developed since 2004.
https://en.wikipedia.org/wiki/NetStumbler
In mathematics, more specifically in general topology and related branches, a net or Moore–Smith sequence is a function whose domain is a directed set . The codomain of this function is usually some topological space . Nets directly generalize the concept of a sequence in a metric space . Nets are primarily used in the fields of analysis and topology , where they are used to characterize many important topological properties that (in general), sequences are unable to characterize (this shortcoming of sequences motivated the study of sequential spaces and Fréchet–Urysohn spaces ). Nets are in one-to-one correspondence with filters . The concept of a net was first introduced by E. H. Moore and Herman L. Smith in 1922. [ 1 ] The term "net" was coined by John L. Kelley . [ 2 ] [ 3 ] The related concept of a filter was developed in 1937 by Henri Cartan . A directed set is a non-empty set A {\displaystyle A} together with a preorder , typically automatically assumed to be denoted by ≤ {\displaystyle \,\leq \,} (unless indicated otherwise), with the property that it is also ( upward ) directed , which means that for any a , b ∈ A , {\displaystyle a,b\in A,} there exists some c ∈ A {\displaystyle c\in A} such that a ≤ c {\displaystyle a\leq c} and b ≤ c . {\displaystyle b\leq c.} In words, this property means that given any two elements (of A {\displaystyle A} ), there is always some element that is "above" both of them (greater than or equal to each); in this way, directed sets generalize the notion of "a direction" in a mathematically rigorous way. Importantly though, directed sets are not required to be total orders or even partial orders . A directed set may have the greatest element . In this case, the conditions a ≤ c {\displaystyle a\leq c} and b ≤ c {\displaystyle b\leq c} cannot be replaced by the strict inequalities a < c {\displaystyle a<c} and b < c {\displaystyle b<c} , since the strict inequalities cannot be satisfied if a or b is the greatest element. A net in X {\displaystyle X} , denoted x ∙ = ( x a ) a ∈ A {\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}} , is a function of the form x ∙ : A → X {\displaystyle x_{\bullet }:A\to X} whose domain A {\displaystyle A} is some directed set, and whose values are x ∙ ( a ) = x a {\displaystyle x_{\bullet }(a)=x_{a}} . Elements of a net's domain are called its indices . When the set X {\displaystyle X} is clear from context it is simply called a net , and one assumes A {\displaystyle A} is a directed set with preorder ≤ . {\displaystyle \,\leq .} Notation for nets varies, for example using angled brackets ⟨ x a ⟩ a ∈ A {\displaystyle \left\langle x_{a}\right\rangle _{a\in A}} . As is common in algebraic topology notation, the filled disk or "bullet" stands in place of the input variable or index a ∈ A {\displaystyle a\in A} . A net x ∙ = ( x a ) a ∈ A {\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}} is said to be eventually or residually in a set S {\displaystyle S} if there exists some a ∈ A {\displaystyle a\in A} such that for every b ∈ A {\displaystyle b\in A} with b ≥ a , {\displaystyle b\geq a,} the point x b ∈ S . {\displaystyle x_{b}\in S.} A point x ∈ X {\displaystyle x\in X} is called a limit point or limit of the net x ∙ {\displaystyle x_{\bullet }} in X {\displaystyle X} whenever: expressed equivalently as: the net converges to/towards x {\displaystyle x} or has x {\displaystyle x} as a limit ; and variously denoted as: x ∙ → x in X x a → x in X lim x ∙ → x in X lim a ∈ A x a → x in X lim a x a → x in X . 
{\displaystyle {\begin{alignedat}{4}&x_{\bullet }&&\to \;&&x&&\;\;{\text{ in }}X\\&x_{a}&&\to \;&&x&&\;\;{\text{ in }}X\\\lim \;&x_{\bullet }&&\to \;&&x&&\;\;{\text{ in }}X\\\lim _{a\in A}\;&x_{a}&&\to \;&&x&&\;\;{\text{ in }}X\\\lim _{a}\;&x_{a}&&\to \;&&x&&\;\;{\text{ in }}X.\end{alignedat}}} If X {\displaystyle X} is clear from context, it may be omitted from the notation. If lim x ∙ → x {\displaystyle \lim x_{\bullet }\to x} and this limit is unique (i.e. lim x ∙ → y {\displaystyle \lim x_{\bullet }\to y} only for x = y {\displaystyle x=y} ) then one writes: lim x ∙ = x or lim x a = x or lim a ∈ A x a = x {\displaystyle \lim x_{\bullet }=x\;~~{\text{ or }}~~\;\lim x_{a}=x\;~~{\text{ or }}~~\;\lim _{a\in A}x_{a}=x} using the equal sign in place of the arrow → . {\displaystyle \to .} [ 4 ] In a Hausdorff space , every net has at most one limit, and the limit of a convergent net is always unique. [ 4 ] Some authors do not distinguish between the notations lim x ∙ = x {\displaystyle \lim x_{\bullet }=x} and lim x ∙ → x {\displaystyle \lim x_{\bullet }\to x} , but this can lead to ambiguities if the ambient space X {\displaystyle X} is not Hausdorff. A net x ∙ = ( x a ) a ∈ A {\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}} is said to be frequently or cofinally in S {\displaystyle S} if for every a ∈ A {\displaystyle a\in A} there exists some b ∈ A {\displaystyle b\in A} such that b ≥ a {\displaystyle b\geq a} and x b ∈ S . {\displaystyle x_{b}\in S.} [ 5 ] A point x ∈ X {\displaystyle x\in X} is said to be an accumulation point or cluster point of a net if for every neighborhood U {\displaystyle U} of x , {\displaystyle x,} the net is frequently/cofinally in U . {\displaystyle U.} [ 5 ] In fact, x ∈ X {\displaystyle x\in X} is a cluster point if and only if it has a subnet that converges to x . {\displaystyle x.} [ 6 ] The set cl X ⁡ ( x ∙ ) {\textstyle \operatorname {cl} _{X}\left(x_{\bullet }\right)} of all cluster points of x ∙ {\displaystyle x_{\bullet }} in X {\displaystyle X} is equal to cl X ⁡ ( x ≥ a ) {\textstyle \operatorname {cl} _{X}\left(x_{\geq a}\right)} for each a ∈ A {\displaystyle a\in A} , where x ≥ a := { x b : b ≥ a , b ∈ A } {\displaystyle x_{\geq a}:=\left\{x_{b}:b\geq a,b\in A\right\}} . The analogue of " subsequence " for nets is the notion of a "subnet". There are several different non-equivalent definitions of "subnet" and this article will use the definition introduced in 1970 by Stephen Willard, [ 7 ] which is as follows: If x ∙ = ( x a ) a ∈ A {\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}} and s ∙ = ( s i ) i ∈ I {\displaystyle s_{\bullet }=\left(s_{i}\right)_{i\in I}} are nets then s ∙ {\displaystyle s_{\bullet }} is called a subnet or Willard-subnet [ 7 ] of x ∙ {\displaystyle x_{\bullet }} if there exists an order-preserving map h : I → A {\displaystyle h:I\to A} such that h ( I ) {\displaystyle h(I)} is a cofinal subset of A {\displaystyle A} and s i = x h ( i ) for all i ∈ I . {\displaystyle s_{i}=x_{h(i)}\quad {\text{ for all }}i\in I.} The map h : I → A {\displaystyle h:I\to A} is called order-preserving and an order homomorphism if whenever i ≤ j {\displaystyle i\leq j} then h ( i ) ≤ h ( j ) . {\displaystyle h(i)\leq h(j).} The set h ( I ) {\displaystyle h(I)} being cofinal in A {\displaystyle A} means that for every a ∈ A , {\displaystyle a\in A,} there exists some b ∈ h ( I ) {\displaystyle b\in h(I)} such that b ≥ a . 
{\displaystyle b\geq a.} If x ∈ X {\displaystyle x\in X} is a cluster point of some subnet of x ∙ {\displaystyle x_{\bullet }} then x {\displaystyle x} is also a cluster point of x ∙ . {\displaystyle x_{\bullet }.} [ 6 ] A net x ∙ {\displaystyle x_{\bullet }} in a set X {\displaystyle X} is called a universal net or an ultranet if for every subset S ⊆ X , {\displaystyle S\subseteq X,} x ∙ {\displaystyle x_{\bullet }} is eventually in S {\displaystyle S} or x ∙ {\displaystyle x_{\bullet }} is eventually in the complement X ∖ S . {\displaystyle X\setminus S.} [ 5 ] Every constant net is a (trivial) ultranet. Every subnet of an ultranet is an ultranet. [ 8 ] Assuming the axiom of choice , every net has some subnet that is an ultranet, but no nontrivial ultranet has ever been constructed explicitly. [ 5 ] If x ∙ = ( x a ) a ∈ A {\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}} is an ultranet in X {\displaystyle X} and f : X → Y {\displaystyle f:X\to Y} is a function then f ∘ x ∙ = ( f ( x a ) ) a ∈ A {\displaystyle f\circ x_{\bullet }=\left(f\left(x_{a}\right)\right)_{a\in A}} is an ultranet in Y . {\displaystyle Y.} [ 5 ] Given x ∈ X , {\displaystyle x\in X,} an ultranet clusters at x {\displaystyle x} if and only if it converges to x . {\displaystyle x.} [ 5 ] A Cauchy net generalizes the notion of Cauchy sequence to nets defined on uniform spaces . [ 9 ] A net x ∙ = ( x a ) a ∈ A {\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}} is a Cauchy net if for every entourage V {\displaystyle V} there exists c ∈ A {\displaystyle c\in A} such that for all a , b ≥ c , {\displaystyle a,b\geq c,} ( x a , x b ) {\displaystyle \left(x_{a},x_{b}\right)} is a member of V . {\displaystyle V.} [ 9 ] [ 10 ] More generally, in a Cauchy space , a net x ∙ {\displaystyle x_{\bullet }} is Cauchy if the filter generated by the net is a Cauchy filter . A topological vector space (TVS) is called complete if every Cauchy net converges to some point. A normed space , which is a special type of topological vector space, is a complete TVS (equivalently, a Banach space ) if and only if every Cauchy sequence converges to some point (a property called sequential completeness ). Although Cauchy nets are not needed to describe the completeness of normed spaces, they are needed to describe the completeness of more general (possibly non- normable ) topological vector spaces. Virtually all concepts of topology can be rephrased in the language of nets and limits. This can be useful for guiding intuition, since the notion of the limit of a net is very similar to that of the limit of a sequence . The following theorems and lemmas help cement that similarity: A subset S ⊆ X {\displaystyle S\subseteq X} is closed in X {\displaystyle X} if and only if every limit point in X {\displaystyle X} of a net in S {\displaystyle S} necessarily lies in S {\displaystyle S} . Explicitly, this means that if s ∙ = ( s a ) a ∈ A {\displaystyle s_{\bullet }=\left(s_{a}\right)_{a\in A}} is a net with s a ∈ S {\displaystyle s_{a}\in S} for all a ∈ A {\displaystyle a\in A} , and lim s ∙ → x {\displaystyle \lim {}_{}s_{\bullet }\to x} in X , {\displaystyle X,} then x ∈ S . {\displaystyle x\in S.} More generally, if S ⊆ X {\displaystyle S\subseteq X} is any subset, the closure of S {\displaystyle S} is the set of points x ∈ X {\displaystyle x\in X} with lim a ∈ A s ∙ → x {\displaystyle \lim _{a\in A}s_{\bullet }\to x} for some net ( s a ) a ∈ A {\displaystyle \left(s_{a}\right)_{a\in A}} in S {\displaystyle S} .
[ 6 ] A subset S ⊆ X {\displaystyle S\subseteq X} is open if and only if no net in X ∖ S {\displaystyle X\setminus S} converges to a point of S . {\displaystyle S.} [ 11 ] Also, subset S ⊆ X {\displaystyle S\subseteq X} is open if and only if every net converging to an element of S {\displaystyle S} is eventually contained in S . {\displaystyle S.} It is these characterizations of "open subset" that allow nets to characterize topologies . Topologies can also be characterized by closed subsets since a set is open if and only if its complement is closed. So the characterizations of " closed set " in terms of nets can also be used to characterize topologies. A function f : X → Y {\displaystyle f:X\to Y} between topological spaces is continuous at a point x {\displaystyle x} if and only if for every net x ∙ = ( x a ) a ∈ A {\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}} in the domain, lim x ∙ → x {\displaystyle \lim _{}x_{\bullet }\to x} in X {\displaystyle X} implies lim f ( x ∙ ) → f ( x ) {\displaystyle \lim {}f\left(x_{\bullet }\right)\to f(x)} in Y . {\displaystyle Y.} [ 6 ] Briefly, a function f : X → Y {\displaystyle f:X\to Y} is continuous if and only if x ∙ → x {\displaystyle x_{\bullet }\to x} in X {\displaystyle X} implies f ( x ∙ ) → f ( x ) {\displaystyle f\left(x_{\bullet }\right)\to f(x)} in Y . {\displaystyle Y.} In general, this statement would not be true if the word "net" was replaced by "sequence"; that is, it is necessary to allow for directed sets other than just the natural numbers if X {\displaystyle X} is not a first-countable space (or not a sequential space ). ( ⟹ {\displaystyle \implies } ) Let f {\displaystyle f} be continuous at point x , {\displaystyle x,} and let x ∙ = ( x a ) a ∈ A {\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}} be a net such that lim x ∙ → x . {\displaystyle \lim _{}x_{\bullet }\to x.} Then for every open neighborhood U {\displaystyle U} of f ( x ) , {\displaystyle f(x),} its preimage under f , {\displaystyle f,} V := f − 1 ( U ) , {\displaystyle V:=f^{-1}(U),} is a neighborhood of x {\displaystyle x} (by the continuity of f {\displaystyle f} at x {\displaystyle x} ). Thus the interior of V , {\displaystyle V,} which is denoted by int ⁡ V , {\displaystyle \operatorname {int} V,} is an open neighborhood of x , {\displaystyle x,} and consequently x ∙ {\displaystyle x_{\bullet }} is eventually in int ⁡ V . {\displaystyle \operatorname {int} V.} Therefore ( f ( x a ) ) a ∈ A {\displaystyle \left(f\left(x_{a}\right)\right)_{a\in A}} is eventually in f ( int ⁡ V ) {\displaystyle f(\operatorname {int} V)} and thus also eventually in f ( V ) {\displaystyle f(V)} which is a subset of U . {\displaystyle U.} Thus lim ( f ( x a ) ) a ∈ A → f ( x ) , {\displaystyle \lim _{}\left(f\left(x_{a}\right)\right)_{a\in A}\to f(x),} and this direction is proven. ( ⟸ {\displaystyle \Longleftarrow } ) Let x {\displaystyle x} be a point such that for every net x ∙ = ( x a ) a ∈ A {\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}} such that lim x ∙ → x , {\displaystyle \lim _{}x_{\bullet }\to x,} lim ( f ( x a ) ) a ∈ A → f ( x ) . {\displaystyle \lim _{}\left(f\left(x_{a}\right)\right)_{a\in A}\to f(x).} Now suppose that f {\displaystyle f} is not continuous at x . {\displaystyle x.} Then there is a neighborhood U {\displaystyle U} of f ( x ) {\displaystyle f(x)} whose preimage under f , {\displaystyle f,} V , {\displaystyle V,} is not a neighborhood of x . {\displaystyle x.} Because f ( x ) ∈ U , {\displaystyle f(x)\in U,} necessarily x ∈ V . 
{\displaystyle x\in V.} Now the set of open neighborhoods of x {\displaystyle x} with the containment preorder is a directed set (since the intersection of every two such neighborhoods is an open neighborhood of x {\displaystyle x} as well). We construct a net x ∙ = ( x a ) a ∈ A {\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}} such that for every open neighborhood of x {\displaystyle x} whose index is a , {\displaystyle a,} x a {\displaystyle x_{a}} is a point in this neighborhood that is not in V {\displaystyle V} ; that there is always such a point follows from the fact that no open neighborhood of x {\displaystyle x} is included in V {\displaystyle V} (because by assumption, V {\displaystyle V} is not a neighborhood of x {\displaystyle x} ). It follows that f ( x a ) {\displaystyle f\left(x_{a}\right)} is not in U . {\displaystyle U.} Now, for every open neighborhood W {\displaystyle W} of x , {\displaystyle x,} this neighborhood is a member of the directed set whose index we denote a 0 . {\displaystyle a_{0}.} For every b ≥ a 0 , {\displaystyle b\geq a_{0},} the member of the directed set whose index is b {\displaystyle b} is contained within W {\displaystyle W} ; therefore x b ∈ W . {\displaystyle x_{b}\in W.} Thus lim x ∙ → x . {\displaystyle \lim _{}x_{\bullet }\to x.} and by our assumption lim ( f ( x a ) ) a ∈ A → f ( x ) . {\displaystyle \lim _{}\left(f\left(x_{a}\right)\right)_{a\in A}\to f(x).} But int ⁡ U {\displaystyle \operatorname {int} U} is an open neighborhood of f ( x ) {\displaystyle f(x)} and thus f ( x a ) {\displaystyle f\left(x_{a}\right)} is eventually in int ⁡ U {\displaystyle \operatorname {int} U} and therefore also in U , {\displaystyle U,} in contradiction to f ( x a ) {\displaystyle f\left(x_{a}\right)} not being in U {\displaystyle U} for every a . {\displaystyle a.} This is a contradiction so f {\displaystyle f} must be continuous at x . {\displaystyle x.} This completes the proof. A space X {\displaystyle X} is compact if and only if every net x ∙ = ( x a ) a ∈ A {\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}} in X {\displaystyle X} has a subnet with a limit in X . {\displaystyle X.} This can be seen as a generalization of the Bolzano–Weierstrass theorem and Heine–Borel theorem . ( ⟹ {\displaystyle \implies } ) First, suppose that X {\displaystyle X} is compact. We will need the following observation (see finite intersection property ). Let I {\displaystyle I} be any non-empty set and { C i } i ∈ I {\displaystyle \left\{C_{i}\right\}_{i\in I}} be a collection of closed subsets of X {\displaystyle X} such that ⋂ i ∈ J C i ≠ ∅ {\displaystyle \bigcap _{i\in J}C_{i}\neq \varnothing } for each finite J ⊆ I . {\displaystyle J\subseteq I.} Then ⋂ i ∈ I C i ≠ ∅ {\displaystyle \bigcap _{i\in I}C_{i}\neq \varnothing } as well. Otherwise, { C i c } i ∈ I {\displaystyle \left\{C_{i}^{c}\right\}_{i\in I}} would be an open cover for X {\displaystyle X} with no finite subcover contrary to the compactness of X . {\displaystyle X.} Let x ∙ = ( x a ) a ∈ A {\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}} be a net in X {\displaystyle X} directed by A . {\displaystyle A.} For every a ∈ A {\displaystyle a\in A} define E a ≜ { x b : b ≥ a } . {\displaystyle E_{a}\triangleq \left\{x_{b}:b\geq a\right\}.} The collection { cl ⁡ ( E a ) : a ∈ A } {\displaystyle \{\operatorname {cl} \left(E_{a}\right):a\in A\}} has the property that every finite subcollection has non-empty intersection. 
Thus, by the remark above, we have that ⋂ a ∈ A cl ⁡ E a ≠ ∅ {\displaystyle \bigcap _{a\in A}\operatorname {cl} E_{a}\neq \varnothing } and this is precisely the set of cluster points of x ∙ . {\displaystyle x_{\bullet }.} By the proof given in the next section, it is equal to the set of limits of convergent subnets of x ∙ . {\displaystyle x_{\bullet }.} Thus x ∙ {\displaystyle x_{\bullet }} has a convergent subnet. ( ⟸ {\displaystyle \Longleftarrow } ) Conversely, suppose that every net in X {\displaystyle X} has a convergent subnet. For the sake of contradiction, let { U i : i ∈ I } {\displaystyle \left\{U_{i}:i\in I\right\}} be an open cover of X {\displaystyle X} with no finite subcover. Consider D ≜ { J ⊂ I : | J | < ∞ } . {\displaystyle D\triangleq \{J\subset I:|J|<\infty \}.} Observe that D {\displaystyle D} is a directed set under inclusion and for each C ∈ D , {\displaystyle C\in D,} there exists an x C ∈ X {\displaystyle x_{C}\in X} such that x C ∉ U a {\displaystyle x_{C}\notin U_{a}} for all a ∈ C . {\displaystyle a\in C.} Consider the net ( x C ) C ∈ D . {\displaystyle \left(x_{C}\right)_{C\in D}.} This net cannot have a convergent subnet, because for each x ∈ X {\displaystyle x\in X} there exists c ∈ I {\displaystyle c\in I} such that U c {\displaystyle U_{c}} is a neighbourhood of x {\displaystyle x} ; however, for all B ⊇ { c } , {\displaystyle B\supseteq \{c\},} we have that x B ∉ U c . {\displaystyle x_{B}\notin U_{c}.} This is a contradiction and completes the proof. The set of cluster points of a net is equal to the set of limits of its convergent subnets . Let x ∙ = ( x a ) a ∈ A {\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}} be a net in a topological space X {\displaystyle X} (where as usual A {\displaystyle A} automatically assumed to be a directed set) and also let y ∈ X . {\displaystyle y\in X.} If y {\displaystyle y} is a limit of a subnet of x ∙ {\displaystyle x_{\bullet }} then y {\displaystyle y} is a cluster point of x ∙ . {\displaystyle x_{\bullet }.} Conversely, assume that y {\displaystyle y} is a cluster point of x ∙ . {\displaystyle x_{\bullet }.} Let B {\displaystyle B} be the set of pairs ( U , a ) {\displaystyle (U,a)} where U {\displaystyle U} is an open neighborhood of y {\displaystyle y} in X {\displaystyle X} and a ∈ A {\displaystyle a\in A} is such that x a ∈ U . {\displaystyle x_{a}\in U.} The map h : B → A {\displaystyle h:B\to A} mapping ( U , a ) {\displaystyle (U,a)} to a {\displaystyle a} is then cofinal. Moreover, giving B {\displaystyle B} the product order (the neighborhoods of y {\displaystyle y} are ordered by inclusion) makes it a directed set, and the net ( y b ) b ∈ B {\displaystyle \left(y_{b}\right)_{b\in B}} defined by y b = x h ( b ) {\displaystyle y_{b}=x_{h(b)}} converges to y . {\displaystyle y.} A net has a limit if and only if all of its subnets have limits. In that case, every limit of the net is also a limit of every subnet. In general, a net in a space X {\displaystyle X} can have more than one limit, but if X {\displaystyle X} is a Hausdorff space , the limit of a net, if it exists, is unique. Conversely, if X {\displaystyle X} is not Hausdorff, then there exists a net on X {\displaystyle X} with two distinct limits. Thus the uniqueness of the limit is equivalent to the Hausdorff condition on the space, and indeed this may be taken as the definition. This result depends on the directedness condition; a set indexed by a general preorder or partial order may have distinct limit points even in a Hausdorff space. 
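To make the failure of unique limits concrete, here is a short worked example (added as an illustration; it is the standard two-point indiscrete space, not an example drawn from the surrounding text):

```latex
% A net with two distinct limits in a non-Hausdorff space.
% Let X = {p, q} carry the indiscrete topology, whose only open sets
% are the empty set and X itself, and let (x_a)_{a in A} be ANY net in X.
\[
  \text{The only neighborhood of } p \text{ (or of } q\text{) is } X
  \text{ itself, and every net is trivially eventually in } X .
\]
\[
  \therefore\quad x_\bullet \to p
  \quad\text{and}\quad
  x_\bullet \to q .
\]
% Hence every net in X converges simultaneously to both points: limits
% are not unique, because p and q cannot be separated by disjoint open
% sets, i.e. X is not Hausdorff.
```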
A filter is a related idea in topology that allows for a general definition for convergence in general topological spaces. The two ideas are equivalent in the sense that they give the same concept of convergence. [ 12 ] More specifically, every filter base induces an associated net using the filter's pointed sets, and convergence of the filter base implies convergence of the associated net. Similarly, any net ( x a ) a ∈ A {\displaystyle \left(x_{a}\right)_{a\in A}} in X {\displaystyle X} induces a filter base of tails { { x a : a ∈ A , a 0 ≤ a } : a 0 ∈ A } {\displaystyle \left\{\left\{x_{a}:a\in A,a_{0}\leq a\right\}:a_{0}\in A\right\}} where the filter in X {\displaystyle X} generated by this filter base is called the net's eventuality filter . Convergence of the net implies convergence of the eventuality filter. [ 13 ] This correspondence allows for any theorem that can be proven with one concept to be proven with the other. [ 13 ] For instance, continuity of a function from one topological space to the other can be characterized either by the convergence of a net in the domain implying the convergence of the corresponding net in the codomain, or by the same statement with filter bases. Robert G. Bartle argues that despite their equivalence, it is useful to have both concepts. [ 13 ] He argues that nets are enough like sequences to make natural proofs and definitions in analogy to sequences, especially ones using sequential elements, such as is common in analysis , while filters are most useful in algebraic topology . In any case, he shows how the two can be used in combination to prove various theorems in general topology . The learning curve for using nets is typically much less steep than that for filters, which is why many mathematicians, especially analysts , prefer them over filters. However, filters, and especially ultrafilters , have some important technical advantages over nets that ultimately result in nets being encountered much less often than filters outside of the fields of analysis and topology. Every non-empty totally ordered set is directed. Therefore, every function on such a set is a net. In particular, the natural numbers N {\displaystyle \mathbb {N} } together with the usual integer comparison ≤ {\displaystyle \,\leq \,} preorder form the archetypical example of a directed set. A sequence is a function on the natural numbers, so every sequence a 1 , a 2 , … {\displaystyle a_{1},a_{2},\ldots } in a topological space X {\displaystyle X} can be considered a net in X {\displaystyle X} defined on N . {\displaystyle \mathbb {N} .} Conversely, any net whose domain is the natural numbers is a sequence because by definition, a sequence in X {\displaystyle X} is just a function from N = { 1 , 2 , … } {\displaystyle \mathbb {N} =\{1,2,\ldots \}} into X . {\displaystyle X.} It is in this way that nets are generalizations of sequences: rather than being defined on a countable linearly ordered set ( N {\displaystyle \mathbb {N} } ), a net is defined on an arbitrary directed set . Nets are frequently denoted using notation that is similar to (and inspired by) that used with sequences. For example, the subscript notation x a {\displaystyle x_{a}} is taken from sequences. Similarly, every limit of a sequence and limit of a function can be interpreted as a limit of a net. 
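Before the sequence case is spelled out precisely below, here is a small illustrative sketch (not from the article; the function names and the truncated index set are invented for illustration) that models a net by its index set, its direction, and its value function, and tests the "eventually in" and "frequently in" predicates on a finite truncation of the natural numbers:

```python
# Illustrative toy: the sequence x_n = 1/n viewed as a net over (N, <=).
# Truncating the index set to a finite prefix only approximates the real,
# infinite definitions; this is a teaching sketch, not a proof device.

def eventually_in(net, indices, order, S):
    """True if some index a exists with net(b) in S for every b >= a."""
    return any(all(net(b) in S for b in indices if order(a, b))
               for a in indices)

def frequently_in(net, indices, order, S):
    """True if for every index a there is some b >= a with net(b) in S."""
    return all(any(net(b) in S for b in indices if order(a, b))
               for a in indices)

indices = range(1, 200)                  # finite stand-in for N
order = lambda a, b: a <= b              # the usual direction on N
net = lambda n: 1.0 / n                  # the sequence x_n = 1/n as a net

S = {x for x in map(net, indices) if x < 0.1}   # a "neighborhood" of 0
print(eventually_in(net, indices, order, S))    # True: the tail enters S
print(frequently_in(net, indices, order, S))    # True: eventual implies frequent
```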
Specifically, the net is eventually in a subset S {\displaystyle S} of X {\displaystyle X} if there exists an N ∈ N {\displaystyle N\in \mathbb {N} } such that for every integer n ≥ N , {\displaystyle n\geq N,} the point a n {\displaystyle a_{n}} is in S . {\displaystyle S.} So lim n a n → L {\displaystyle \lim {}_{n}a_{n}\to L} if and only if for every neighborhood V {\displaystyle V} of L , {\displaystyle L,} the net is eventually in V . {\displaystyle V.} The net is frequently in a subset S {\displaystyle S} of X {\displaystyle X} if and only if for every N ∈ N {\displaystyle N\in \mathbb {N} } there exists some integer n ≥ N {\displaystyle n\geq N} such that a n ∈ S , {\displaystyle a_{n}\in S,} that is, if and only if infinitely many elements of the sequence are in S . {\displaystyle S.} Thus a point y ∈ X {\displaystyle y\in X} is a cluster point of the net if and only if every neighborhood V {\displaystyle V} of y {\displaystyle y} contains infinitely many elements of the sequence. In the context of topology, sequences do not fully encode all information about functions between topological spaces. In particular, the following two conditions are, in general, not equivalent for a map f {\displaystyle f} between topological spaces X {\displaystyle X} and Y {\displaystyle Y} : (1) the map f {\displaystyle f} is continuous in the topological sense; (2) given any point x {\displaystyle x} in X , {\displaystyle X,} and any sequence in X {\displaystyle X} converging to x , {\displaystyle x,} the composition of f {\displaystyle f} with this sequence converges to f ( x ) {\displaystyle f(x)} (continuity in the sequential sense). While condition 1 always guarantees condition 2, the converse is not necessarily true. The spaces for which the two conditions are equivalent are called sequential spaces . All first-countable spaces , including metric spaces , are sequential spaces, but not all topological spaces are sequential. Nets generalize the notion of a sequence so that condition 2 reads as follows: given any point x {\displaystyle x} in X , {\displaystyle X,} and any net in X {\displaystyle X} converging to x , {\displaystyle x,} the composition of f {\displaystyle f} with this net converges to f ( x ) . {\displaystyle f(x).} With this change, the conditions become equivalent for all maps of topological spaces, including topological spaces that do not necessarily have a countable or linearly ordered neighbourhood basis around a point. Therefore, while sequences do not encode sufficient information about functions between topological spaces, nets do, because collections of open sets in topological spaces are much like directed sets in behavior. For an example where sequences do not suffice, interpret the set R R {\displaystyle \mathbb {R} ^{\mathbb {R} }} of all functions with prototype f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } as the Cartesian product ∏ x ∈ R R {\displaystyle {\textstyle \prod \limits _{x\in \mathbb {R} }}\mathbb {R} } (by identifying a function f {\displaystyle f} with the tuple ( f ( x ) ) x ∈ R , {\displaystyle (f(x))_{x\in \mathbb {R} },} and conversely) and endow it with the product topology . This (product) topology on R R {\displaystyle \mathbb {R} ^{\mathbb {R} }} is identical to the topology of pointwise convergence . Let E {\displaystyle E} denote the set of all functions f : R → { 0 , 1 } {\displaystyle f:\mathbb {R} \to \{0,1\}} that are equal to 1 {\displaystyle 1} everywhere except for at most finitely many points (that is, such that the set { x : f ( x ) = 0 } {\displaystyle \{x:f(x)=0\}} is finite). Then the constant 0 {\displaystyle 0} function 0 : R → { 0 } {\displaystyle \mathbf {0} :\mathbb {R} \to \{0\}} belongs to the closure of E {\displaystyle E} in R R ; {\displaystyle \mathbb {R} ^{\mathbb {R} };} that is, 0 ∈ cl R R ⁡ E . {\displaystyle \mathbf {0} \in \operatorname {cl} _{\mathbb {R} ^{\mathbb {R} }}E.} [ 8 ] This will be proven by constructing a net in E {\displaystyle E} that converges to 0 .
{\displaystyle \mathbf {0} .} However, there does not exist any sequence in E {\displaystyle E} that converges to 0 , {\displaystyle \mathbf {0} ,} [ 14 ] which makes this one instance where (non-sequence) nets must be used because sequences alone can not reach the desired conclusion. Compare elements of R R {\displaystyle \mathbb {R} ^{\mathbb {R} }} pointwise in the usual way by declaring that f ≥ g {\displaystyle f\geq g} if and only if f ( x ) ≥ g ( x ) {\displaystyle f(x)\geq g(x)} for all x . {\displaystyle x.} This pointwise comparison is a partial order that makes ( E , ≥ ) {\displaystyle (E,\geq )} a directed set since given any f , g ∈ E , {\displaystyle f,g\in E,} their pointwise minimum m := min { f , g } {\displaystyle m:=\min\{f,g\}} belongs to E {\displaystyle E} and satisfies f ≥ m {\displaystyle f\geq m} and g ≥ m . {\displaystyle g\geq m.} This partial order turns the identity map Id : ( E , ≥ ) → E {\displaystyle \operatorname {Id} :(E,\geq )\to E} (defined by f ↦ f {\displaystyle f\mapsto f} ) into an E {\displaystyle E} -valued net. This net converges pointwise to 0 {\displaystyle \mathbf {0} } in R R , {\displaystyle \mathbb {R} ^{\mathbb {R} },} which implies that 0 {\displaystyle \mathbf {0} } belongs to the closure of E {\displaystyle E} in R R . {\displaystyle \mathbb {R} ^{\mathbb {R} }.} More generally, a subnet of a sequence is not necessarily a sequence. [ 5 ] [ a ] Moreso, a subnet of a sequence may be a sequence, but not a subsequence. [ b ] But, in the specific case of a sequential space, every net induces a corresponding sequence, and this relationship maps subnets to subsequences. Specifically, for a first-countable space, the net ( x a ) a ∈ A {\displaystyle \left(x_{a}\right)_{a\in A}} induces the sequence ( x h n ) n ∈ N {\displaystyle \left(x_{h_{n}}\right)_{n\in \mathbb {N} }} where h n {\displaystyle h_{n}} is defined as the n th {\displaystyle n^{\text{th}}} smallest value in A {\displaystyle A} – that is, let h 1 := inf A {\displaystyle h_{1}:=\inf A} and let h n := inf { a ∈ A : a > h n − 1 } {\displaystyle h_{n}:=\inf\{a\in A:a>h_{n-1}\}} for every integer n > 1 {\displaystyle n>1} . If the set S = { x } ∪ { x a : a ∈ A } {\displaystyle S=\{x\}\cup \left\{x_{a}:a\in A\right\}} is endowed with the subspace topology induced on it by X , {\displaystyle X,} then lim x ∙ → x {\displaystyle \lim _{}x_{\bullet }\to x} in X {\displaystyle X} if and only if lim x ∙ → x {\displaystyle \lim _{}x_{\bullet }\to x} in S . {\displaystyle S.} In this way, the question of whether or not the net x ∙ {\displaystyle x_{\bullet }} converges to the given point x {\displaystyle x} depends solely on this topological subspace S {\displaystyle S} consisting of x {\displaystyle x} and the image of (that is, the points of) the net x ∙ . {\displaystyle x_{\bullet }.} Intuitively, convergence of a net ( x a ) a ∈ A {\displaystyle \left(x_{a}\right)_{a\in A}} means that the values x a {\displaystyle x_{a}} come and stay as close as we want to x {\displaystyle x} for large enough a . {\displaystyle a.} Given a point x {\displaystyle x} in a topological space, let N x {\displaystyle N_{x}} denote the set of all neighbourhoods containing x . {\displaystyle x.} Then N x {\displaystyle N_{x}} is a directed set, where the direction is given by reverse inclusion, so that S ≥ T {\displaystyle S\geq T} if and only if S {\displaystyle S} is contained in T . {\displaystyle T.} For S ∈ N x , {\displaystyle S\in N_{x},} let x S {\displaystyle x_{S}} be a point in S . 
{\displaystyle S.} Then ( x S ) {\displaystyle \left(x_{S}\right)} is a net. As S {\displaystyle S} increases with respect to ≥ , {\displaystyle \,\geq ,} the points x S {\displaystyle x_{S}} in the net are constrained to lie in decreasing neighbourhoods of x , {\displaystyle x,} . Therefore, in this neighborhood system of a point x {\displaystyle x} , x S {\displaystyle x_{S}} does indeed converge to x {\displaystyle x} according to the definition of net convergence. Given a subbase B {\displaystyle {\mathcal {B}}} for the topology on X {\displaystyle X} (where note that every base for a topology is also a subbase) and given a point x ∈ X , {\displaystyle x\in X,} a net x ∙ {\displaystyle x_{\bullet }} in X {\displaystyle X} converges to x {\displaystyle x} if and only if it is eventually in every neighborhood U ∈ B {\displaystyle U\in {\mathcal {B}}} of x . {\displaystyle x.} This characterization extends to neighborhood subbases (and so also neighborhood bases ) of the given point x . {\displaystyle x.} A net in the product space has a limit if and only if each projection has a limit. Explicitly, let ( X i ) i ∈ I {\displaystyle \left(X_{i}\right)_{i\in I}} be topological spaces, endow their Cartesian product ∏ X ∙ := ∏ i ∈ I X i {\displaystyle {\textstyle \prod }X_{\bullet }:=\prod _{i\in I}X_{i}} with the product topology , and that for every index l ∈ I , {\displaystyle l\in I,} denote the canonical projection to X l {\displaystyle X_{l}} by π l : ∏ X ∙ → X l ( x i ) i ∈ I ↦ x l {\displaystyle {\begin{alignedat}{4}\pi _{l}:\;&&{\textstyle \prod }X_{\bullet }&&\;\to \;&X_{l}\\[0.3ex]&&\left(x_{i}\right)_{i\in I}&&\;\mapsto \;&x_{l}\\\end{alignedat}}} Let f ∙ = ( f a ) a ∈ A {\displaystyle f_{\bullet }=\left(f_{a}\right)_{a\in A}} be a net in ∏ X ∙ {\displaystyle {\textstyle \prod }X_{\bullet }} directed by A {\displaystyle A} and for every index i ∈ I , {\displaystyle i\in I,} let π i ( f ∙ ) = def ( π i ( f a ) ) a ∈ A {\displaystyle \pi _{i}\left(f_{\bullet }\right)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\left(\pi _{i}\left(f_{a}\right)\right)_{a\in A}} denote the result of "plugging f ∙ {\displaystyle f_{\bullet }} into π i {\displaystyle \pi _{i}} ", which results in the net π i ( f ∙ ) : A → X i . {\displaystyle \pi _{i}\left(f_{\bullet }\right):A\to X_{i}.} It is sometimes useful to think of this definition in terms of function composition : the net π i ( f ∙ ) {\displaystyle \pi _{i}\left(f_{\bullet }\right)} is equal to the composition of the net f ∙ : A → ∏ X ∙ {\displaystyle f_{\bullet }:A\to {\textstyle \prod }X_{\bullet }} with the projection π i : ∏ X ∙ → X i ; {\displaystyle \pi _{i}:{\textstyle \prod }X_{\bullet }\to X_{i};} that is, π i ( f ∙ ) = def π i ∘ f ∙ . {\displaystyle \pi _{i}\left(f_{\bullet }\right)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\pi _{i}\,\circ \,f_{\bullet }.} For any given point L = ( L i ) i ∈ I ∈ ∏ i ∈ I X i , {\displaystyle L=\left(L_{i}\right)_{i\in I}\in {\textstyle \prod \limits _{i\in I}}X_{i},} the net f ∙ {\displaystyle f_{\bullet }} converges to L {\displaystyle L} in the product space ∏ X ∙ {\displaystyle {\textstyle \prod }X_{\bullet }} if and only if for every index i ∈ I , {\displaystyle i\in I,} π i ( f ∙ ) = def ( π i ( f a ) ) a ∈ A {\displaystyle \pi _{i}\left(f_{\bullet }\right)\;{\stackrel {\scriptscriptstyle {\text{def}}}{=}}\;\left(\pi _{i}\left(f_{a}\right)\right)_{a\in A}} converges to L i {\displaystyle L_{i}} in X i . 
{\displaystyle X_{i}.} [ 15 ] And whenever the net f ∙ {\displaystyle f_{\bullet }} clusters at L {\displaystyle L} in ∏ X ∙ {\displaystyle {\textstyle \prod }X_{\bullet }} then π i ( f ∙ ) {\displaystyle \pi _{i}\left(f_{\bullet }\right)} clusters at L i {\displaystyle L_{i}} for every index i ∈ I . {\displaystyle i\in I.} [ 8 ] However, the converse does not hold in general. [ 8 ] For example, suppose X 1 = X 2 = R {\displaystyle X_{1}=X_{2}=\mathbb {R} } and let f ∙ = ( f a ) a ∈ N {\displaystyle f_{\bullet }=\left(f_{a}\right)_{a\in \mathbb {N} }} denote the sequence ( 1 , 1 ) , ( 0 , 0 ) , ( 1 , 1 ) , ( 0 , 0 ) , … {\displaystyle (1,1),(0,0),(1,1),(0,0),\ldots } that alternates between ( 1 , 1 ) {\displaystyle (1,1)} and ( 0 , 0 ) . {\displaystyle (0,0).} Then L 1 := 0 {\displaystyle L_{1}:=0} and L 2 := 1 {\displaystyle L_{2}:=1} are cluster points of both π 1 ( f ∙ ) {\displaystyle \pi _{1}\left(f_{\bullet }\right)} and π 2 ( f ∙ ) {\displaystyle \pi _{2}\left(f_{\bullet }\right)} in X 1 × X 2 = R 2 {\displaystyle X_{1}\times X_{2}=\mathbb {R} ^{2}} but ( L 1 , L 2 ) = ( 0 , 1 ) {\displaystyle \left(L_{1},L_{2}\right)=(0,1)} is not a cluster point of f ∙ {\displaystyle f_{\bullet }} since the open ball of radius 1 {\displaystyle 1} centered at ( 0 , 1 ) {\displaystyle (0,1)} does not contain even a single point of f ∙ . {\displaystyle f_{\bullet }.} If no L {\displaystyle L} is given but for every i ∈ I , {\displaystyle i\in I,} there exists some L i ∈ X i {\displaystyle L_{i}\in X_{i}} such that π i ( f ∙ ) → L i {\displaystyle \pi _{i}\left(f_{\bullet }\right)\to L_{i}} in X i {\displaystyle X_{i}} then the tuple defined by L = ( L i ) i ∈ I {\displaystyle L=\left(L_{i}\right)_{i\in I}} will be a limit of f ∙ {\displaystyle f_{\bullet }} in ∏ X ∙ . {\displaystyle {\textstyle \prod }X_{\bullet }.} However, the axiom of choice might need to be assumed to conclude that this tuple L {\displaystyle L} exists; the axiom of choice is not needed in some situations, such as when I {\displaystyle I} is finite or when every L i ∈ X i {\displaystyle L_{i}\in X_{i}} is the unique limit of the net π i ( f ∙ ) {\displaystyle \pi _{i}\left(f_{\bullet }\right)} (because then there is nothing to choose between), which happens, for example, when every X i {\displaystyle X_{i}} is a Hausdorff space . If I {\displaystyle I} is infinite and ∏ X ∙ = ∏ j ∈ I X j {\displaystyle {\textstyle \prod }X_{\bullet }={\textstyle \prod \limits _{j\in I}}X_{j}} is not empty, then the axiom of choice would (in general) still be needed to conclude that the projections π i : ∏ X ∙ → X i {\displaystyle \pi _{i}:{\textstyle \prod }X_{\bullet }\to X_{i}} are surjective maps . The axiom of choice is equivalent to Tychonoff's theorem , which states that the product of any collection of compact topological spaces is compact. But if the compact spaces are also required to be Hausdorff, then the so-called "Tychonoff's theorem for compact Hausdorff spaces" can be used instead, which is equivalent to the ultrafilter lemma and so strictly weaker than the axiom of choice . Nets can be used to give short proofs of both versions of Tychonoff's theorem by using the characterization of net convergence given above together with the fact that a space is compact if and only if every net has a convergent subnet . Limit superior and limit inferior of a net of real numbers can be defined in a similar manner as for sequences. [ 16 ] [ 17 ] [ 18 ] Some authors work even with more general structures than the real line, like complete lattices.
[ 19 ] For a net ( x a ) a ∈ A , {\displaystyle \left(x_{a}\right)_{a\in A},} put lim sup x a = lim a ∈ A sup b ⪰ a x b = inf a ∈ A sup b ⪰ a x b . {\displaystyle \limsup x_{a}=\lim _{a\in A}\sup _{b\succeq a}x_{b}=\inf _{a\in A}\sup _{b\succeq a}x_{b}.} Limit superior of a net of real numbers has many properties analogous to the case of sequences. For example, lim sup ( x a + y a ) ≤ lim sup x a + lim sup y a , {\displaystyle \limsup(x_{a}+y_{a})\leq \limsup x_{a}+\limsup y_{a},} where equality holds whenever one of the nets is convergent. The definition of the value of a Riemann integral can be interpreted as a limit of a net of Riemann sums where the net's directed set is the set of all partitions of the interval of integration, partially ordered by inclusion. Suppose ( M , d ) {\displaystyle (M,d)} is a metric space (or a pseudometric space ) and M {\displaystyle M} is endowed with the metric topology . If m ∈ M {\displaystyle m\in M} is a point and m ∙ = ( m i ) a ∈ A {\displaystyle m_{\bullet }=\left(m_{i}\right)_{a\in A}} is a net, then m ∙ → m {\displaystyle m_{\bullet }\to m} in ( M , d ) {\displaystyle (M,d)} if and only if d ( m , m ∙ ) → 0 {\displaystyle d\left(m,m_{\bullet }\right)\to 0} in R , {\displaystyle \mathbb {R} ,} where d ( m , m ∙ ) := ( d ( m , m a ) ) a ∈ A {\displaystyle d\left(m,m_{\bullet }\right):=\left(d\left(m,m_{a}\right)\right)_{a\in A}} is a net of real numbers . In plain English , this characterization says that a net converges to a point in a metric space if and only if the distance between the net and the point converges to zero. If ( M , ‖ ⋅ ‖ ) {\displaystyle (M,\|\cdot \|)} is a normed space (or a seminormed space ) then m ∙ → m {\displaystyle m_{\bullet }\to m} in ( M , ‖ ⋅ ‖ ) {\displaystyle (M,\|\cdot \|)} if and only if ‖ m − m ∙ ‖ → 0 {\displaystyle \left\|m-m_{\bullet }\right\|\to 0} in R , {\displaystyle \mathbb {R} ,} where ‖ m − m ∙ ‖ := ( ‖ m − m a ‖ ) a ∈ A . {\displaystyle \left\|m-m_{\bullet }\right\|:=\left(\left\|m-m_{a}\right\|\right)_{a\in A}.} If ( M , d ) {\displaystyle (M,d)} has at least two points, then we can fix a point c ∈ M {\displaystyle c\in M} (such as M := R n {\displaystyle M:=\mathbb {R} ^{n}} with the Euclidean metric with c := 0 {\displaystyle c:=0} being the origin, for example) and direct the set I := M ∖ { c } {\displaystyle I:=M\setminus \{c\}} reversely according to distance from c {\displaystyle c} by declaring that i ≤ j {\displaystyle i\leq j} if and only if d ( j , c ) ≤ d ( i , c ) . {\displaystyle d(j,c)\leq d(i,c).} In other words, the relation is "has at least the same distance to c {\displaystyle c} as", so that "large enough" with respect to this relation means "close enough to c {\displaystyle c} ". Given any function with domain M , {\displaystyle M,} its restriction to I := M ∖ { c } {\displaystyle I:=M\setminus \{c\}} can be canonically interpreted as a net directed by ( I , ≤ ) . {\displaystyle (I,\leq ).} [ 8 ] A net f : M ∖ { c } → X {\displaystyle f:M\setminus \{c\}\to X} is eventually in a subset S {\displaystyle S} of a topological space X {\displaystyle X} if and only if there exists some n ∈ M ∖ { c } {\displaystyle n\in M\setminus \{c\}} such that for every m ∈ M ∖ { c } {\displaystyle m\in M\setminus \{c\}} satisfying d ( m , c ) ≤ d ( n , c ) , {\displaystyle d(m,c)\leq d(n,c),} the point f ( m ) {\displaystyle f(m)} is in S . 
{\displaystyle S.} Such a net f {\displaystyle f} converges in X {\displaystyle X} to a given point L ∈ X {\displaystyle L\in X} if and only if lim m → c f ( m ) → L {\displaystyle \lim _{m\to c}f(m)\to L} in the usual sense (meaning that for every neighborhood V {\displaystyle V} of L , {\displaystyle L,} f {\displaystyle f} is eventually in V {\displaystyle V} ). [ 8 ] The net f : M ∖ { c } → X {\displaystyle f:M\setminus \{c\}\to X} is frequently in a subset S {\displaystyle S} of X {\displaystyle X} if and only if for every n ∈ M ∖ { c } {\displaystyle n\in M\setminus \{c\}} there exists some m ∈ M ∖ { c } {\displaystyle m\in M\setminus \{c\}} with d ( m , c ) ≤ d ( n , c ) {\displaystyle d(m,c)\leq d(n,c)} such that f ( m ) {\displaystyle f(m)} is in S . {\displaystyle S.} Consequently, a point L ∈ X {\displaystyle L\in X} is a cluster point of the net f {\displaystyle f} if and only if for every neighborhood V {\displaystyle V} of L , {\displaystyle L,} the net is frequently in V . {\displaystyle V.} Consider a well-ordered set [ 0 , c ] {\displaystyle [0,c]} with limit point t {\displaystyle t} and a function f {\displaystyle f} from [ 0 , t ) {\displaystyle [0,t)} to a topological space X . {\displaystyle X.} This function is a net on [ 0 , t ) . {\displaystyle [0,t).} It is eventually in a subset V {\displaystyle V} of X {\displaystyle X} if there exists an r ∈ [ 0 , t ) {\displaystyle r\in [0,t)} such that for every s ∈ [ r , t ) {\displaystyle s\in [r,t)} the point f ( s ) {\displaystyle f(s)} is in V . {\displaystyle V.} So lim x → t f ( x ) → L {\displaystyle \lim _{x\to t}f(x)\to L} if and only if for every neighborhood V {\displaystyle V} of L , {\displaystyle L,} f {\displaystyle f} is eventually in V . {\displaystyle V.} The net f {\displaystyle f} is frequently in a subset V {\displaystyle V} of X {\displaystyle X} if and only if for every r ∈ [ 0 , t ) {\displaystyle r\in [0,t)} there exists some s ∈ [ r , t ) {\displaystyle s\in [r,t)} such that f ( s ) ∈ V . {\displaystyle f(s)\in V.} A point y ∈ X {\displaystyle y\in X} is a cluster point of the net f {\displaystyle f} if and only if for every neighborhood V {\displaystyle V} of y , {\displaystyle y,} the net is frequently in V . {\displaystyle V.} The first example is a special case of this with c = ω . {\displaystyle c=\omega .} See also ordinal-indexed sequence .
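To tie the two preceding constructions together, here is a small worked example (added for illustration, with f(y) = y² chosen arbitrarily) that expresses an ordinary function limit as a net limit over a distance-directed index set:

```latex
% The ordinary limit lim_{y -> 0} y^2 = 0, recast as a net limit.
% Take M = R with the Euclidean metric, c = 0, and direct the index set
% I = R \ {0} reversely by distance from 0:
\[
  i \preceq j \iff |j| \le |i| .
\]
% The restriction of f(y) = y^2 to I is then a net f|_I : (I, \preceq) -> R.
% Given any neighborhood (-\varepsilon, \varepsilon) of 0, choose an index n
% with |n| < \sqrt{\varepsilon}; every j \succeq n satisfies |j| \le |n|, so
\[
  f(j) = j^2 \le n^2 < \varepsilon ,
\]
% i.e. the net is eventually in every neighborhood of 0. Hence the net
% converges to 0, which is exactly the classical statement lim_{y -> 0} y^2 = 0.
```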
https://en.wikipedia.org/wiki/Net_(mathematics)
Net ecosystem production (NEP) in ecology , limnology , and oceanography is the difference between gross primary production (GPP) and net ecosystem respiration . [ 1 ] Net ecosystem production represents all the carbon produced by plants in water through photosynthesis that does not get respired by animals , other heterotrophs , or the plants themselves. It describes the total carbon in an ecosystem that can be stored, exported, or oxidized back into carbon dioxide gas. NEP is written in units of mass of carbon per unit area per time, for example, grams carbon per square meter per year (g C m −2 yr −1 ). In a given ecosystem, carbon quantified as net ecosystem production can eventually end up: oxidized by fire or ultraviolet radiation , accumulated as biomass , exported as organic carbon to another system, or accumulated in sediments or soils . Carbon classified as NEP can be in the form of particles in the particulate organic carbon (POC) pool, such as phytoplankton cells (living) and detritus (non-living), or it can be in the form of dissolved substances that have not yet been decomposed, in the dissolved organic carbon (DOC) pool. [ 2 ] In any form, if the carbon gets respired or decomposed by any living organism (plant, animal, bacterium , or other microscopic organism) to release carbon dioxide , that carbon no longer counts as NEP. [ 1 ] Net ecosystem production is thus all the carbon that is not respired, where respiration counts both plants and heterotrophic organisms such as animals and microbes. In contrast, net primary production (NPP) is all the carbon taken up by plants ( autotrophs ) minus the carbon that the plants themselves respire through cellular respiration. [ citation needed ] Net community production (NCP) is the difference between net primary production and respiration by animals and other heterotrophs only. [ 3 ] Net community production is conceptually equal to net ecosystem production; the two differ only in how they are calculated. Annual net community production (ANCP) is this carbon pool estimated per year. [ 3 ] For example, annual net community production in the tropical South Pacific Ocean can be very close to zero, meaning that nearly all carbon produced is respired by heterotrophs. In the rest of the Pacific Ocean, annual net community production can range from 2.0 to 2.4 mol C m −2 yr −1 , meaning that the carbon produced by phytoplankton (minus what the phytoplankton respire themselves) is greater during a given year than what gets respired by heterotrophs. [ 4 ]
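The bookkeeping relations above (NEP = GPP − ecosystem respiration, NCP = NPP − heterotrophic respiration) reduce to simple arithmetic; the following sketch, using made-up illustrative numbers, shows that the two formulations agree:

```python
# Carbon budget sketch with hypothetical values in g C m^-2 yr^-1.
# GPP: gross primary production; r_auto: autotrophic (plant) respiration;
# r_hetero: respiration by animals, microbes, and other heterotrophs.
gpp = 1200.0
r_auto = 700.0
r_hetero = 420.0

npp = gpp - r_auto                 # net primary production
r_ecosystem = r_auto + r_hetero    # total ecosystem respiration
nep = gpp - r_ecosystem            # net ecosystem production
ncp = npp - r_hetero               # net community production

print(nep, ncp)   # 80.0 80.0 -- identical, as the text states
assert nep == ncp
```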
https://en.wikipedia.org/wiki/Net_ecosystem_production
In a hydraulic circuit, net positive suction head ( NPSH ) may refer to one of two quantities in the analysis of cavitation : the available NPSH (NPSH A ), a property of the installation and the fluid, and the required NPSH (NPSH R ), a property of the pump or turbine. NPSH is particularly relevant inside centrifugal pumps and turbines , which are the parts of a hydraulic system most vulnerable to cavitation. If cavitation occurs, the drag coefficient of the impeller vanes will increase drastically—possibly stopping flow altogether—and prolonged exposure will damage the impeller. In a pump, cavitation will first occur at the inlet of the impeller. [ 1 ] Denoting the inlet by i , the NPSH A at this point is defined as: NPSH A = ( p i ρ g + V i 2 2 g ) − p v ρ g {\displaystyle {\text{NPSH}}_{A}=\left({\frac {p_{i}}{\rho g}}+{\frac {V_{i}^{2}}{2g}}\right)-{\frac {p_{v}}{\rho g}}} where p i {\displaystyle p_{i}} is the absolute pressure at the inlet, V i {\displaystyle V_{i}} is the average velocity at the inlet, ρ {\displaystyle \rho } is the fluid density, g {\displaystyle g} is the acceleration of gravity and p v {\displaystyle p_{v}} is the vapor pressure of the fluid. Note that NPSH is equivalent to the sum of the static and dynamic heads – that is, the stagnation head – minus the equilibrium vapor pressure head, hence "net positive suction head". Applying Bernoulli's equation for the control volume enclosing the suction free surface 0 and the pump inlet i , under the assumption that the kinetic energy at 0 is negligible, that the fluid is inviscid , and that the fluid density is constant: p 0 ρ g + z 0 = p i ρ g + V i 2 2 g + z i + h f {\displaystyle {\frac {p_{0}}{\rho g}}+z_{0}={\frac {p_{i}}{\rho g}}+{\frac {V_{i}^{2}}{2g}}+z_{i}+h_{f}} Using the above application of Bernoulli to eliminate the velocity term and local pressure terms in the definition of NPSH A : NPSH A = p 0 ρ g − p v ρ g − ( z i − z 0 ) − h f {\displaystyle {\text{NPSH}}_{A}={\frac {p_{0}}{\rho g}}-{\frac {p_{v}}{\rho g}}-(z_{i}-z_{0})-h_{f}} This is the standard expression for the available NPSH at a point. Cavitation will occur at the point i when the available NPSH is less than the NPSH required to prevent cavitation (NPSH R ). For simple impeller systems, NPSH R can be derived theoretically, [ 2 ] but very often it is determined empirically. [ 1 ] Note that NPSH A and NPSH R are in absolute units and usually expressed in "m" or "ft", not "psia". Experimentally, NPSH R is often defined as the NPSH 3 , the point at which the head output of the pump decreases by 3% at a given flow due to reduced hydraulic performance. On multi-stage pumps this is limited to a 3% drop in the first-stage head. [ 3 ] The calculation of NPSH in a reaction turbine differs from the calculation of NPSH in a pump, because the point at which cavitation will first occur is in a different place. In a reaction turbine, cavitation will first occur at the outlet of the impeller, at the entrance of the draft tube .
[ 4 ] Denoting the entrance of the draft tube by e , the NPSH A is defined in the same way as for pumps: NPSH A = ( p e ρ g + V e 2 2 g ) − p v ρ g {\displaystyle {\text{NPSH}}_{A}=\left({\frac {p_{e}}{\rho g}}+{\frac {V_{e}^{2}}{2g}}\right)-{\frac {p_{v}}{\rho g}}} [ 1 ] Applying Bernoulli's principle from the draft tube entrance e to the lower free surface 0 , under the assumption that the kinetic energy at 0 is negligible, that the fluid is inviscid, and that the fluid density is constant: p e ρ g + V e 2 2 g + z e = p 0 ρ g + z 0 + h f {\displaystyle {\frac {p_{e}}{\rho g}}+{\frac {V_{e}^{2}}{2g}}+z_{e}={\frac {p_{0}}{\rho g}}+z_{0}+h_{f}} Using the above application of Bernoulli to eliminate the velocity term and local pressure terms in the definition of NPSH A : NPSH A = p 0 ρ g − p v ρ g − ( z e − z 0 ) + h f {\displaystyle {\text{NPSH}}_{A}={\frac {p_{0}}{\rho g}}-{\frac {p_{v}}{\rho g}}-(z_{e}-z_{0})+h_{f}} Note that in turbines, minor friction losses ( h f {\displaystyle h_{f}} ) alleviate the effect of cavitation, the opposite of what happens in pumps. Vapour pressure is strongly dependent on temperature, and thus so are both NPSH R and NPSH A . Centrifugal pumps are particularly vulnerable, especially when pumping a heated solution near its vapor pressure. Positive displacement pumps are less affected by cavitation, as they are better able to pump two-phase flow (a mixture of gas and liquid); however, the resultant flow rate of the pump is diminished because the gas volumetrically displaces some of the liquid. Careful design is required to pump high-temperature liquids with a centrifugal pump when the liquid is near its boiling point. The violent collapse of the cavitation bubble creates a shock wave that can carve material from internal pump components (usually the leading edge of the impeller) and creates noise often described as "pumping gravel". Additionally, the inevitable increase in vibration can cause other mechanical faults in the pump and associated equipment. The NPSH appears in a number of other cavitation-relevant parameters. The suction head coefficient is a dimensionless measure of NPSH: C NPSH = g ⋅ NPSH n 2 D 2 {\displaystyle C_{\text{NPSH}}={\frac {g\cdot {\text{NPSH}}}{n^{2}D^{2}}}} where n {\displaystyle n} is the angular velocity (in rad/s) of the turbo-machine shaft, and D {\displaystyle D} is the turbo-machine impeller diameter. Thoma's cavitation number is defined as: σ = NPSH H {\displaystyle \sigma ={\frac {\text{NPSH}}{H}}} where H {\displaystyle H} is the head across the turbo-machine. The worked examples that follow assume sea-level atmospheric pressure, taken as roughly 10 metres of head. Example 1: A tank with a liquid level 2 metres above the pump intake, plus the atmospheric pressure of 10 metres, minus a 2 metre friction loss into the pump (pipe and valve losses), minus the NPSH R of the selected pump (say 2.5 metres, read from the manufacturer's curve at the stated flow duty), gives an available NPSH A margin of 7.5 metres, three times the NPSH required. This pump will operate well so long as all other parameters are correct. Note that a change in flow duty changes the reading on the manufacturer's NPSH R curve: the lower the flow, the lower the NPSH R , and vice versa. Lifting out of a well creates negative suction head; however, atmospheric pressure at sea level contributes 10 metres of head as a "push" into the pump intake, and never more than 10 metres.
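Example 1's arithmetic can be reproduced directly. The sketch below uses the example's own values (in metres of head) and follows its convention of quoting the margin left after subtracting NPSH R :

```python
# NPSH bookkeeping for Example 1 (all values in metres of head).
# Following the example's convention, NPSH_R is subtracted so the result
# is the margin available to the pump at the stated flow duty.
static_head = 2.0        # liquid level above the pump intake
atmospheric_head = 10.0  # sea-level atmospheric pressure as head
friction_loss = 2.0      # pipe and valve losses into the pump
npsh_required = 2.5      # from the manufacturer's curve at this flow

margin = static_head + atmospheric_head - friction_loss - npsh_required
print(margin)                      # 7.5, as in the example
print(margin / npsh_required)      # 3.0 -- "three times the NPSH required"
```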
Example 2: A well or bore with an operating level of 5 metres below the intake, minus a 2 metre friction loss into the pump (pipe loss), minus the NPSH R of the selected pump (say 2.4 metres), gives an NPSH A (available) of negative 9.4 metres. Adding the atmospheric pressure of 10 metres gives a positive NPSH A of 0.6 metres. The minimum requirement is 0.6 metres above NPSH R , so the pump should lift from the well. Using the situation from Example 2 above, but pumping 70 degree Celsius (158 °F) water from a hot spring, yields the following: Example 3: A well or bore running at 70 degrees Celsius with an operating level of 5 metres below the intake, minus a 2 metre friction loss into the pump (pipe loss), minus the NPSH R of the selected pump (say 2.4 metres), minus a temperature loss of 3 metres (10 feet), gives an NPSH A (available) of negative 12.4 metres. Adding the atmospheric pressure of 10 metres leaves a negative NPSH A of −2.4 metres. Since the minimum requirement is 600 mm above NPSH R , this pump will not be able to pump the 70 degree Celsius liquid; it will cavitate, lose performance, and suffer damage. To work properly, the pump must be installed at a depth of 2.4 metres plus the required 600 mm minimum, a total depth of 3 metres into the pit (3.5 metres to be completely safe). A minimum of 600 mm (0.06 bar), and a recommended 1.5 metres (0.15 bar ), of head pressure above the NPSH R value required by the manufacturer is needed to allow the pump to operate properly. Serious damage may occur if a large pump is sited incorrectly with an incorrect NPSH R value, and this may result in a very expensive pump or installation repair. NPSH problems may be solved by changing the NPSH R or by re-siting the pump. If the NPSH A is, say, 10 bar, then the pump will deliver 10 bar more over its entire operational curve than the listed curve. For example, a pump with a maximum pressure head of 8 bar (80 metres) will actually run at 18 bar if the NPSH A is 10 bar: 8 bar (pump curve) plus 10 bar NPSH A = 18 bar. Manufacturers exploit this effect when designing multistage pumps (pumps with more than one impeller): each stacked impeller boosts the succeeding impeller to raise the pressure head. Some pumps have 150 stages or more, boosting heads to hundreds of metres.
https://en.wikipedia.org/wiki/Net_positive_suction_head
The net present value ( NPV ) or net present worth ( NPW ) [ 1 ] is a way of measuring the value of an asset that generates cash flows, by adding up the present values of all the future cash flows that the asset will generate. The present value of a cash flow depends on the interval of time between now and the cash flow, because of the time value of money (which includes the annual effective discount rate ). It provides a method for evaluating and comparing capital projects or financial products with cash flows spread over time, as in loans , investments , payouts from insurance contracts , and many other applications. Time value of money dictates that time affects the value of cash flows. For example, a lender may offer 99 cents for the promise of receiving $1.00 a month from now, but the promise to receive that same dollar 20 years in the future would be worth much less today to that same person (lender), even if the payback in both cases were equally certain. This decrease in the current value of future cash flows is based on a chosen rate of return (or discount rate). If, for example, there exists a time series of identical cash flows, the cash flow in the present is the most valuable, with each future cash flow becoming less valuable than the previous cash flow. A cash flow today is more valuable than an identical cash flow in the future [ 2 ] because a present flow can be invested immediately and begin earning returns, while a future flow cannot. NPV is determined by calculating the costs (negative cash flows) and benefits (positive cash flows) for each period of an investment. After the cash flow for each period is calculated, the present value (PV) of each one is obtained by discounting its future value (see Formula ) at a periodic rate of return (the rate of return dictated by the market). NPV is the sum of all the discounted future cash flows. Because of its simplicity, NPV is a useful tool to determine whether a project or investment will result in a net profit or a loss. A positive NPV results in a profit, while a negative NPV results in a loss. The NPV measures the excess or shortfall of cash flows, in present value terms, above the cost of funds. [ 3 ] In a theoretical situation of unlimited capital budgeting , a company should pursue every investment with a positive NPV. However, in practical terms a company's capital constraints limit investments to projects with the highest NPV whose cost cash flows, or initial cash investment, do not exceed the company's capital. NPV is a central tool in discounted cash flow (DCF) analysis and is a standard method for using the time value of money to appraise long-term projects. It is widely used throughout economics , financial analysis , and financial accounting . In the case when all future cash flows are positive, or incoming (such as the principal and coupon payments of a bond ), and the only outflow of cash is the purchase price, the NPV is simply the PV of the future cash flows minus the purchase price (which is its own PV). NPV can be described as the "difference amount" between the sums of discounted cash inflows and cash outflows. It compares the present value of money today to the present value of money in the future, taking inflation and returns into account. The NPV of a sequence of cash flows takes as input the cash flows and a discount rate or discount curve and outputs a present value, which is the current fair price .
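Since NPV is just a discounted sum, it can be sketched in a few lines of code; the cash-flow values and the discount rates below are invented purely for illustration:

```python
# Minimal NPV sketch: discount each period's net cash flow R_t at rate i.
# By the usual convention, t = 0 is today and is not discounted.

def npv(rate: float, cashflows: list[float]) -> float:
    """Net present value of cashflows[t] received at the end of period t."""
    return sum(r / (1.0 + rate) ** t for t, r in enumerate(cashflows))

# Hypothetical project: invest 1000 today, receive 400 a year for 3 years.
flows = [-1000.0, 400.0, 400.0, 400.0]
print(round(npv(0.10, flows), 2))   # -5.26: slightly loss-making at 10% ...
print(round(npv(0.08, flows), 2))   # 30.84: ... but profitable at 8%
```

The sign flip between the two rates illustrates why the choice of discount rate, discussed later in this article, matters so much.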
The converse process in discounted cash flow (DCF) analysis takes a sequence of cash flows and a price as input and produces as output the discount rate, or internal rate of return (IRR), which would yield the given price as NPV. This rate, called the yield , is widely used in bond trading. Each cash inflow/outflow is discounted back to its present value (PV). Then all are summed such that NPV is the sum of all terms: P V = R t ( 1 + i ) t {\displaystyle \mathrm {PV} ={\frac {R_{t}}{(1+i)^{t}}}} where R t {\displaystyle R_{t}} is the net cash flow during period t , {\displaystyle t,} i {\displaystyle i} is the discount rate per period, and t {\displaystyle t} is the number of periods between now and the cash flow. When the annual net cash inflows are equal in amount, this discount factor can be multiplied by the annual net cash inflow and the initial cash outlay then subtracted to obtain the net present value; in cases where the cash flows are not equal in amount, the previous formula is used to determine the present value of each cash flow separately. Any cash flow within 12 months is not discounted for NPV purposes; nevertheless, the usual initial investments during the first year, R 0 , {\displaystyle R_{0},} are summed up as a negative cash flow. [ 4 ] The NPV can also be thought of as the difference between the discounted benefits and costs over time. Given the (period, cash inflows, cash outflows) shown by ( t , B t {\displaystyle B_{t}} , C t {\displaystyle C_{t}} ), where N is the total number of periods, the net present value N P V {\displaystyle \mathrm {NPV} } is given by: N P V ( i , N ) = ∑ t = 0 N B t − C t ( 1 + i ) t {\displaystyle \mathrm {NPV} (i,N)=\sum _{t=0}^{N}{\frac {B_{t}-C_{t}}{(1+i)^{t}}}} The NPV can be rewritten using the net cash flow ( R t ) {\displaystyle (R_{t})} in each time period as: N P V ( i , N ) = ∑ t = 0 N R t ( 1 + i ) t {\displaystyle \mathrm {NPV} (i,N)=\sum _{t=0}^{N}{\frac {R_{t}}{(1+i)^{t}}}} By convention, the initial period occurs at time t = 0 {\displaystyle t=0} , where cash flows in successive periods are then discounted from t = 1 , 2 , 3... {\displaystyle t=1,2,3...} and so on. Furthermore, all future cash flows during a period are assumed to be at the end of each period. [ 5 ] For constant cash flow R , the net present value N P V {\displaystyle \mathrm {NPV} } is a finite geometric series and is given by: N P V ( i , N , R ) = R ( 1 − ( 1 1 + i ) N + 1 1 − ( 1 1 + i ) ) , i ≠ 0 {\displaystyle \mathrm {NPV} (i,N,R)=R\left({\frac {1-\left({\frac {1}{1+i}}\right)^{N+1}}{1-\left({\frac {1}{1+i}}\right)}}\right),\quad i\neq 0} Inclusion of the R 0 {\displaystyle R_{0}} term is important in the above formulae. A typical capital project involves a large negative R 0 {\displaystyle R_{0}} cashflow (the initial investment) with positive future cashflows (the return on the investment). A key assessment is whether, for a given discount rate, the NPV is positive (profitable) or negative (loss-making). The IRR is the discount rate for which the NPV is exactly 0. The NPV method can be slightly adjusted to calculate how much money is contributed to a project's investment per dollar invested. This is known as the capital efficiency ratio. The formula for the net present value per dollar of investment (NPVI) is: N P V I = N P V C {\displaystyle \mathrm {NPVI} ={\frac {\mathrm {NPV} }{C}}} where C {\displaystyle C} denotes the discounted costs of the project. If the discounted benefits across the life of a project are $100 million and the discounted net costs across the life of the project are $60 million, then the NPVI is ( 100 − 60 ) / 60 ≈ 0.6667. {\displaystyle (100-60)/60\approx 0.6667.} That is, for every dollar invested in the project, a contribution of $0.6667 is made to the project's NPV. [ 6 ] The NPV formula assumes that the benefits and costs occur at the end of each period, resulting in a more conservative NPV. However, it may be that the cash inflows and outflows occur at the beginning of the period or in the middle of the period.
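Before turning to those alternative timing conventions, the constant-cash-flow closed form given above is easy to sanity-check numerically; the values R = 100, i = 5%, and N = 10 in this sketch are arbitrary:

```python
# Check: sum_{t=0}^{N} R/(1+i)^t equals R * (1 - q^(N+1)) / (1 - q), q = 1/(1+i).

def npv_brute(i: float, N: int, R: float) -> float:
    """Direct term-by-term discounted sum over periods t = 0..N."""
    return sum(R / (1.0 + i) ** t for t in range(N + 1))

def npv_closed(i: float, N: int, R: float) -> float:
    """Finite geometric series closed form from the text (i != 0)."""
    q = 1.0 / (1.0 + i)
    return R * (1.0 - q ** (N + 1)) / (1.0 - q)

i, N, R = 0.05, 10, 100.0
print(npv_brute(i, N, R))    # ~872.17
print(npv_closed(i, N, R))   # same value, to floating-point precision
```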
For mid-period discounting, the cash flow of each period t ≥ 1 is discounted by (1 + i)^(t − 0.5) rather than (1 + i)^t. Over a project's lifecycle, cash flows are typically spread across each period (for example, spread across each year), and as such the middle of the year represents the average point in time at which these cash flows occur; hence mid-period discounting typically provides a more accurate, although less conservative, NPV. [ 7 ] [ 8 ] Beginning-of-period discounting instead discounts the cash flow of each period t ≥ 1 by (1 + i)^(t − 1); this results in the least conservative NPV. The rate used to discount future cash flows to the present value is a key variable of this process. A firm's weighted average cost of capital (after tax) is often used, but many people believe that it is appropriate to use higher discount rates to adjust for risk, opportunity cost, or other factors. A variable discount rate, with higher rates applied to cash flows occurring further along the time span, might be used to reflect the yield curve premium for long-term debt. Another approach to choosing the discount rate is to decide the rate which the capital needed for the project could return if invested in an alternative venture. If, for example, the capital required for Project A can earn 5% elsewhere, use this discount rate in the NPV calculation to allow a direct comparison to be made between Project A and the alternative. A related approach is to use the firm's reinvestment rate, which can be defined as the average rate of return on the firm's investments. When analyzing projects in a capital-constrained environment, it may be appropriate to use the reinvestment rate rather than the firm's weighted average cost of capital as the discount factor; it reflects the opportunity cost of investment, rather than the possibly lower cost of capital. An NPV calculated using variable discount rates (if they are known for the duration of the investment) may better reflect the situation than one calculated from a constant discount rate for the entire investment duration. Refer to the tutorial article written by Samuel Baker [ 9 ] for a more detailed treatment of the relationship between the NPV and the discount rate. For some professional investors, their investment funds are committed to target a specified rate of return. In such cases, that rate of return should be selected as the discount rate for the NPV calculation. In this way, a direct comparison can be made between the profitability of the project and the desired rate of return. To some extent, the selection of the discount rate is dependent on the use to which it will be put. If the intent is simply to determine whether a project will add value to the company, using the firm's weighted average cost of capital may be appropriate. If trying to decide between alternative investments in order to maximize the value of the firm, the corporate reinvestment rate would probably be a better choice. Using variable rates over time, or discounting "guaranteed" cash flows differently from "at risk" cash flows, may be a superior methodology but is seldom used in practice. Using the discount rate to adjust for risk is often difficult to do in practice (especially internationally) and is difficult to do well. An alternative to using the discount factor to adjust for risk is to explicitly correct the cash flows for the risk elements using risk-adjusted net present value ( rNPV ) or a similar method, then discount at the firm's rate. NPV is an indicator of how much value an investment or project adds to the firm.
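The three timing conventions just described (end-, mid-, and beginning-of-period) can be compared numerically. The sketch below reflects one common way of applying the shift and is an illustrative assumption rather than a formula taken from the article: the flow at t = 0 is left undiscounted, and the flow of period t ≥ 1 is discounted over t, t − 0.5 or t − 1 periods respectively.

```python
def npv_with_timing(rate, cash_flows, timing="end"):
    """NPV under end-, mid-, or beginning-of-period cash flow timing.

    cash_flows[0] occurs at t = 0 and is never discounted; the flow of
    period t >= 1 is discounted over t, t - 0.5 or t - 1 periods.
    """
    shift = {"end": 0.0, "mid": 0.5, "beginning": 1.0}[timing]
    total = cash_flows[0]
    for t, cf in enumerate(cash_flows[1:], start=1):
        total += cf / (1 + rate) ** (t - shift)
    return total


flows = [-100_000] + [10_000] * 12   # the product-line example used later
for timing in ("end", "mid", "beginning"):
    print(timing, round(npv_with_timing(0.10, flows, timing), 2))
# End-of-period discounting gives the lowest (most conservative) NPV.
```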
For a particular project, if R_t is a positive value, the project has a net cash inflow in period t ; if R_t is a negative value, the project has a net cash outflow in period t . Appropriately risked projects with a positive NPV could be accepted. This does not necessarily mean that they should be undertaken, since NPV at the cost of capital may not account for opportunity cost , i.e., comparison with other available investments. In financial theory , if there is a choice between two mutually exclusive alternatives, the one yielding the higher NPV should be selected. A positive net present value indicates that the projected earnings generated by a project or investment (in present dollars) exceed the anticipated costs (also in present dollars). This concept is the basis for the Net Present Value Rule, which dictates that the only investments that should be made are those with positive NPVs. An investment with a positive NPV is profitable, but one with a negative NPV will not necessarily result in a net loss: it is just that the internal rate of return of the project falls below the required rate of return. NPV is an indicator for project investments, and has several advantages and disadvantages for decision-making. The NPV includes all relevant time and cash flows for the project by considering the time value of money , which is consistent with the goal of wealth maximization by creating the highest wealth for shareholders. The NPV formula accounts for cash flow timing patterns and size differences for each project, and provides an easy, unambiguous dollar value comparison of different investment options. [ 10 ] [ 11 ] The NPV can be easily calculated using modern spreadsheets, under the assumption that the discount rate and future cash flows are known. For a firm considering investing in multiple projects, the NPV has the benefit of being additive. That is, the NPVs of different projects may be aggregated to calculate the highest wealth creation, based on the available capital that can be invested by a firm. [ 12 ] The NPV method has several disadvantages. The NPV approach does not consider hidden costs and project size. Thus, investment decisions on projects with substantial hidden costs may not be accurate. [ 13 ] The NPV is heavily dependent on knowledge of future cash flows, their timing, the length of a project, the initial investment required, and the discount rate. Hence, it can only be accurate if these input parameters are correct, although sensitivity analyses can be undertaken to examine how the NPV changes as the input variables are changed, thus reducing the uncertainty of the NPV. [ 14 ] The accuracy of the NPV method relies heavily on the choice of a discount rate and hence discount factor , representing an investment's true risk premium . [ 15 ] The discount rate is assumed to be constant over the life of an investment; however, discount rates can change over time. For example, discount rates can change as the cost of capital changes. [ 16 ] [ 10 ] There are other drawbacks to the NPV method, such as the fact that it displays a lack of consideration for a project's size and the cost of capital . [ 17 ] [ 11 ] The NPV calculation is purely financial and thus does not consider non-financial metrics that may be relevant to an investment decision. [ 18 ] Comparing mutually exclusive projects with different investment horizons can be difficult.
If unequal projects are assumed to be repeated so that they share a common (duplicated) investment horizon, the NPV approach can still be used to compare the projects over their optimal durations. [ 19 ] The time-discrete formula of the net present value can also be written in a continuous variation, in which the net present value can be regarded as the Laplace-transformed [ 20 ] [ 21 ] (or, in discrete time, Z-transformed) cash flow with the integral operator including the complex number s , which corresponds to the interest rate i from the real number space, or more precisely s = ln(1 + i ). From this follow simplifications known from cybernetics , control theory and system dynamics . Imaginary parts of the complex number s describe the oscillating behaviour (compare with the pork cycle , the cobweb theorem , and the phase shift between commodity price and supply offer) whereas real parts are responsible for representing the effect of compound interest (compare with damping ). A corporation must decide whether to introduce a new product line. The company will have immediate costs of 100,000 at t = 0 . Recall that a cost is an outgoing cash flow and is therefore negative, so this cash flow is represented as −100,000. The company assumes the product will provide equal benefits of 10,000 for each of 12 years beginning at t = 1 . For simplicity, assume the company will have no outgoing cash flows after the initial 100,000 cost. This also makes the simplifying assumption that the net cash received or paid is lumped into a single transaction occurring on the last day of each year. At the end of the 12 years the product no longer provides any cash flow and is discontinued without any additional costs. Assume that the effective annual discount rate is 10%. The present value (value at t = 0 ) can be calculated for each year; the total present value of the incoming cash flows is 68,136.91. The total present value of the outgoing cash flows is simply the 100,000 at time t = 0 . Thus the NPV is 68,136.91 − 100,000 = −31,863.09. In this example, observe that as t increases, the present value of each cash flow at t decreases. For example, the final incoming cash flow has a future value of 10,000 at t = 12 but has a present value (at t = 0 ) of 3,186.31. The opposite of discounting is compounding. Taking the example in reverse, it is the equivalent of investing 3,186.31 at t = 0 (the present value) at an interest rate of 10% compounded for 12 years, which results in a cash flow of 10,000 at t = 12 (the future value). The importance of NPV becomes clear in this instance. Although the incoming cash flows ( 10,000 × 12 = 120,000 ) appear to exceed the outgoing cash flow (100,000), the future cash flows are not adjusted using the discount rate. Thus, the project appears misleadingly profitable. When the cash flows are discounted, however, it indicates the project would result in a net loss of 31,863.09. Thus, the NPV calculation indicates that this project should be disregarded, because investing in this project is the equivalent of a loss of 31,863.09 at t = 0 . The concept of time value of money indicates that cash flows in different periods of time cannot be accurately compared unless they have been adjusted to reflect their value at the same period of time (in this instance, t = 0 ). [ 2 ] It is the present value of each future cash flow that must be determined in order to provide any meaningful comparison between cash flows at different periods of time.
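The figures in this example can be reproduced with a few lines of Python; the short script below is only a check of the article's arithmetic, using the same 10% rate, 100,000 initial cost and twelve 10,000 inflows, with each year's present value rounded to cents before summing.

```python
rate = 0.10
# Present value of each year's 10,000 inflow, rounded to cents.
yearly_pv = [round(10_000 / (1 + rate) ** t, 2) for t in range(1, 13)]

pv_incoming = round(sum(yearly_pv), 2)   # total PV of the incoming cash flows
npv = round(pv_incoming - 100_000, 2)    # subtract the 100,000 outflow at t = 0
print(yearly_pv[-1])                     # 3186.31, PV of the final 10,000
print(pv_incoming, npv)                  # 68136.91 and -31863.09
```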
This type of analysis rests on a few inherent assumptions. More realistic problems would also need to consider other factors, generally including: smaller time buckets, the calculation of taxes (including the cash flow timing), inflation, currency exchange fluctuations, hedged or unhedged commodity costs, risks of technical obsolescence, potential future competitive factors, uneven or unpredictable cash flows , and a more realistic salvage value assumption, as well as many others. A simpler example of the net present value of incoming cash flow over a set period of time would be winning a Powerball lottery of $500 million . If one does not select the "CASH" option, one will be paid $25,000,000 per year for 20 years, a total of $500,000,000 . However, if one does select the "CASH" option, one will receive a one-time lump sum payment of approximately $285 million , the NPV of the $500,000,000 paid over time. See "other factors" above that could affect the payment amount. Both scenarios are before taxes. Many computer-based spreadsheet programs have built-in formulae for PV and NPV. Net present value as a valuation methodology dates at least to the 19th century. Karl Marx refers to NPV as fictitious capital , and the calculation as "capitalising," writing: [ 22 ] The forming of a fictitious capital is called capitalising. Every periodically repeated income is capitalised by calculating it on the average rate of interest, as an income which would be realised by a capital at this rate of interest. In mainstream neo-classical economics , NPV was formalized and popularized by Irving Fisher , in his 1907 The Rate of Interest , and became included in textbooks from the 1950s onwards, starting in finance texts. [ 23 ] [ 24 ] Cost–benefit analysis (CBA), sometimes also called benefit–cost analysis, is a systematic approach to estimating the strengths and weaknesses of alternatives. It is used to determine options which provide the best approach to achieving benefits while preserving savings in, for example, transactions, activities, and functional business requirements. [ 29 ] A CBA may be used to compare completed or potential courses of action, and to estimate or evaluate the value against the cost of a decision, project, or policy. It is commonly used to evaluate business or policy decisions (particularly public policy ), commercial transactions, and project investments. For example, the U.S. Securities and Exchange Commission must conduct cost–benefit analyses before instituting regulations or deregulations. [ 30 ] : 6 In finance, the equivalent annual cost (EAC) is the cost per year of owning and operating an asset over its entire lifespan. It is calculated by dividing the negative NPV of a project by the "present value of annuity factor" {\displaystyle A_{t,r}={\frac {1-{\frac {1}{(1+r)^{t}}}}{r}}} where r is the annual interest rate and t is the number of years. Alternatively, EAC can be obtained by multiplying the NPV of the project by the "loan repayment factor". EAC is often used as a decision-making tool in capital budgeting when comparing investment projects of unequal lifespans. However, the projects being compared must have equal risk: otherwise, EAC must not be used. [ 34 ]
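As a sketch of how EAC might be computed under the annuity-factor definition above, the following Python fragment compares two hypothetical machines; the costs, rate and lifespans are invented for illustration.

```python
def annuity_factor(r, t):
    """Present value of an annuity of 1 per year for t years at rate r."""
    return (1 - (1 + r) ** -t) / r


def equivalent_annual_cost(cost_npv, r, t):
    """Spread a project's cost NPV over its lifespan as a constant annual cost."""
    return cost_npv / annuity_factor(r, t)


# Machine A: 50,000 cost NPV over 3 years; machine B: 70,000 over 5 years; 6% rate.
print(round(equivalent_annual_cost(50_000, 0.06, 3)))  # roughly 18,700 per year
print(round(equivalent_annual_cost(70_000, 0.06, 5)))  # roughly 16,600 per year
# Despite the higher total cost, machine B is cheaper on an annual basis.
```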
https://en.wikipedia.org/wiki/Net_present_value
Netatua (Neta) Pelesikoti (died 11 November 2020), [ 1 ] also known as Netatua Pelesikoti Taufatofua , was an environmental scientist from Tonga . [ 2 ] Pelesikoti studied geography and economics at the University of the South Pacific , followed by a master's degree in coastal management in the Netherlands and a Ph.D. from the University of Wollongong , Australia (2003), with research covering sustainable coastal resource monitoring and assessment, coastal water quality, coral reefs and sea grass. [ 3 ] In 1999, Pelesikoti was named to an elite group of 15 international experts who were part of the World Meteorological Organization Scientific Advisory Panel (WMOSAP). [ 4 ] A coastal ecologist by profession, she began her life's work as an environmental technical officer in Tonga. She then moved on to work on policy and management at the national level. She also served as an advisor at the South Pacific Applied Geoscience Commission (SOPAC), now called the Applied Geoscience Division of the Pacific Community. [ 2 ] Pelesikoti was the director of the Climate Change Division at the Secretariat of the Pacific Regional Environment Programme in Apia, Samoa , for more than seven years. [ 4 ] In 2012, she was the first Pacific island woman to become a lead author of an Intergovernmental Panel on Climate Change report. [ 2 ] [ 5 ] She also worked as a consultant with the World Bank. [ 4 ] She has been described as the 'Queen of Disaster Risk Management' in the Pacific region. [ 6 ] In 2017, at the conclusion of her term at the Pacific Regional Environment Programme, she returned to Tonga. [ 2 ] She ran as a candidate for Tongatapu 1 in the previous general election and narrowly lost a 2019 by-election. [ 3 ] In 2019 she was admitted to an International Scientific Advisory Panel for the World Meteorological Organisation, and was Deputy Chair of the Tonga Cable Ltd. board. [ 3 ] Pelesikoti died suddenly on 11 November 2020 in Tonga's capital, Nuku’alofa. She was survived by her husband, Dr. Pita Taufatofua, and children Siosi’ana and Filimone. [ 4 ] She was buried in Telekava Cemetery in Kolomotu’a. [ 3 ]
https://en.wikipedia.org/wiki/Netatua_Pelesikoti
Netcracker Technology Corporation , a wholly owned subsidiary of NEC Corporation , is an American-based multinational telecommunications technology company headquartered in Waltham, Massachusetts . The company specializes in software products and professional services for communications service providers (CSPs) and cable providers. Products and services include business support system (BSS) and operational support system (OSS) software and services. Founded in 1993, the company provides full-stack OSS/BSS software and services to communications service providers . [ 1 ] Netcracker has since expanded into cloud-native, microservices-based software, [ 4 ] virtualization , automation , generative artificial intelligence (GenAI) and Digital Satellite Service support software. On October 21, 2008, NEC Corporation acquired Netcracker for about US$ 300 million, making it a wholly owned subsidiary of NEC. [ 1 ] [ 5 ] [ 6 ] On December 11, 2017, the United States Department of Justice announced in a press release that a mutual agreement had been reached with Netcracker regarding enhanced security protocols in software development. [ 2 ] One primary finding that led to the agreement was Netcracker's use of a global technical workforce, which created concerns about the security of sensitive individual data , sensitive network data , and domestic communications infrastructure . This concern was considered critical in light of "the cyber threat posed by foreign government agencies and cyber criminals". [ 17 ] The agreement would enhance the security of U.S. telecommunications networks by limiting information sent to, stored in, or accessed from overseas locations. Under the non-prosecution and security agreement, [ 17 ] Netcracker would designate a Security Director and create and implement a security policy detailing specific procedures to comply with the plan. [ 18 ] By September 2024, Netcracker had become a member of the United States Communications Sector Coordinating Council. [ 19 ] As a software engineering company, Netcracker offers network optimization for communications and operations . [ 17 ] Globally, Netcracker provides products and services to more than 280 communications service provider and cable provider customers. [ 20 ] [ 21 ] A search of patents where Netcracker Technology Corporation is the "assignee" reveals that the company holds several patents.
https://en.wikipedia.org/wiki/Netcracker_Technology
Netperf is a software application that provides network bandwidth testing between two hosts on a network. It supports Unix domain sockets , TCP , SCTP , DLPI and UDP via BSD Sockets. [ 1 ] Netperf provides a number of predefined tests e.g. to measure bulk (unidirectional) data transfer or request response performance. A particular feature of Netperf is that it runs each test multiple times and reports not only the results but also reports the Confidence Interval. It can test both TCP and UDP. It was written in C and works on most UNIX variants, including BSD, System V, Linux, and MacOS. Netperf was originally developed by Rick Jones at Hewlett Packard in Cupertino, CA. [ 2 ]
https://en.wikipedia.org/wiki/Netperf
In computer networking , specifically Internet Relay Chat (IRC), a netsplit is a disconnection between two servers. A split between any two servers splits the entire network into two pieces. [ 1 ] As an example, consider a small network of four servers, A, B, C and D, in which each link represents an established connection: server C is connected directly to A , which is also connected to B and D . If a disruption in the connection between C and A occurs, the connection may be terminated as a result. This can occur either by a socket producing an error, or by excessive lag, in which case the far server A anticipates the failure (this is called a timeout). When the connection between A and C is severed, users who were connected to other servers that are no longer reachable on the network appear to quit. For example, if user Sara is connected to server A , user Bob is connected to server B , and user Joe is connected to C , and C splits, or disconnects, from A , it will appear to Joe as if Sara and Bob both quit (disconnected from the network), and it will appear to both Sara and Bob that Joe quit. However, Joe can still talk to anyone who is connected to the same server (in this case server C ). This happens because the servers to which they are connected are informed of the change in the network status, and update their local information accordingly to display the change. Later, server C may relink (reconnect) to a server (or servers) on the network and the users who appeared to have quit will rejoin; the process of sending this updated information to all servers on the network is called a netburst (or sync ). Occasionally, users will attempt to use netsplits to gain access to private channels. A denial-of-service attack can be used to cause a netsplit by overloading an IRC server's network connection or the Internet infrastructure between two servers. If none of a private channel's users were on server C, a user on C could join that channel during the split and retain access when the servers relink. This is commonly known as split riding or riding the split . Another typical netsplit-oriented IRC attack is nickname colliding. In this attack, a user on a split segment of the network would change nicknames to that of a user on the other side of the split network. Upon reconnection, the network would disconnect both users because only one nickname may be in use at one time. Modern IRC server software has largely eliminated this method, but servers using older software may still be vulnerable. A typical netsplit appears to ordinary users as follows: when two servers split, a user sees a large number of users quitting; after the servers are reconnected, the user sees those users rejoining.
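The way a split partitions the server graph, and which users each side can still "see", can be illustrated with a short Python sketch. The server topology and user names follow the A/B/C/D example above; the function and variable names are otherwise arbitrary.

```python
def components(nodes, links):
    """Group servers into sets that can still reach one another over the links."""
    graph = {n: set() for n in nodes}
    for a, b in links:
        graph[a].add(b)
        graph[b].add(a)
    remaining, comps = set(nodes), []
    while remaining:
        stack, comp = [remaining.pop()], set()
        while stack:
            n = stack.pop()
            comp.add(n)
            stack.extend(graph[n] - comp)
        remaining -= comp
        comps.append(comp)
    return comps


servers = {"A", "B", "C", "D"}
links = [("A", "B"), ("A", "C"), ("A", "D")]
users = {"Sara": "A", "Bob": "B", "Joe": "C"}

# The A-C connection is severed, so the network splits into two pieces.
after_split = [l for l in links if l != ("A", "C")]
for comp in components(servers, after_split):
    visible = sorted(u for u, s in users.items() if s in comp)
    vanished = sorted(u for u in users if u not in visible)
    print(f"servers {sorted(comp)}: still see {visible}; {vanished} appear to quit")
```

Running this prints one line per surviving network fragment, matching the article's description: Sara and Bob see Joe quit, while Joe sees Sara and Bob quit.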
https://en.wikipedia.org/wiki/Netsplit
Nettle agents (named after stinging nettles ) or urticants are a variety of chemical warfare agents that produce corrosive skin and tissue injury upon contact, resulting in erythema , urticaria , intense itching , and a hive-like rash. [ 1 ] Most nettle agents, such as the best known and studied nettle agent, phosgene oxime , are often grouped with the vesicant (blister agent) chemical agents. However, because nettle agents do not cause blisters , they are not true vesicants. [ 2 ]
https://en.wikipedia.org/wiki/Nettle_agent
In mathematical analysis , Netto's theorem states that continuous bijections of smooth manifolds preserve dimension . That is, there does not exist a continuous bijection between two smooth manifolds of different dimension. It is named after Eugen Netto . [ 1 ] The case for maps from a higher-dimensional manifold to a one-dimensional manifold was proven by Jacob Lüroth in 1878, using the intermediate value theorem to show that no manifold containing a topological circle can be mapped continuously and bijectively to the real line . Both Netto in 1878, and Georg Cantor in 1879, gave faulty proofs of the general theorem. The faults were later recognized and corrected. [ 2 ] An important special case of this theorem concerns the non-existence of continuous bijections from one-dimensional spaces, such as the real line or unit interval , to two-dimensional spaces, such as the Euclidean plane or unit square . The conditions of the theorem can be relaxed in different ways to obtain interesting classes of functions from one-dimensional spaces to two-dimensional spaces:
https://en.wikipedia.org/wiki/Netto's_theorem
Netvibes is a French company that offers web services. Founded in 2005 by Tariq Krim and Florent Frémont, the company provided software for personalized dashboards for real-time monitoring, social analytics, knowledge sharing, and decision support. [ 1 ] On February 9, 2012, Dassault Systèmes announced the acquisition of Netvibes. As of 2024, the Netvibes brand comprises three French software companies acquired by Dassault Systèmes. In an e-mail dated April 15, Dassault Systèmes announced the definitive closure of the Netvibes service, scheduled for midnight (Paris time) on June 2, 2025. The software is a multi-lingual Ajax -based start page or web portal . It is organized into tabs, with each tab containing user-defined modules. Built-in Netvibes modules include an RSS / Atom feed reader, local weather forecasts, a calendar supporting iCal , bookmarks, notes, to-do lists, multiple searches, support for POP3 , IMAP4 email as well as several webmail providers including Gmail , Yahoo! Mail , Hotmail , and AOL Mail , Box.net web storage, Delicious , Meebo , Flickr photos, podcast support with a built-in audio player, and several others. A page can be personalized further through the use of existing themes or by creating a personal theme. Customized tabs, feeds and modules can be shared with others individually or via the Netvibes Ecosystem. [ 5 ] For privacy reasons, only modules with publicly available content can be shared. [ 6 ]
https://en.wikipedia.org/wiki/Netvibes
The NAI (Network Advertising Initiative) is an industry trade group founded in 2000 that develops self-regulatory standards for online advertising . [ 1 ] Advertising networks created the organization in response to concerns from the Federal Trade Commission and consumer groups that online advertising — particularly targeted or behavioral advertising — harmed user privacy. The NAI seeks to provide self-regulatory guidelines for participating networks and opt-out technologies for consumers in order to maintain the value of online advertising while protecting consumer privacy . Membership in the NAI has fluctuated greatly over time, and both the organization and its self-regulatory system have been criticized for being ineffective in promoting privacy. [ citation needed ] The NAI was formally announced at the Public Workshop on Online Profiling [ 2 ] held by the FTC and the Department of Commerce on November 8, 1999. [ 3 ] Its membership then consisted of 24/7 Media, AdForce , AdKnowledge, Adsmart, DoubleClick , Engage, Flycast, MatchLogic, NetGravity (a division of DoubleClick) and Real Media. In July 2000, the NAI published a set of principles, negotiated with the FTC and endorsed by the FTC, in their report to Congress on online profiling. [ 4 ] In May 2001, the NAI released an accompanying website [ 5 ] allowing users to more quickly download opt-out cookies for all participating ad networks. [ 6 ] In 2002, the NAI released guidelines for the use of web beacons — small images or pieces of code used to track visiting and traffic patterns, and to install cookies on visitors' machines. [ 7 ] These guidelines use a similar model of notice and choice as the NAI Principles; opt-in consent is only required when sensitive information is associated with personally identifiable information and transferred to a third party. [ 8 ] In 2003, the NAI formed the Email Service Provider Coalition (since renamed the Email Sender and Provider Coalition). [ 9 ] The ESPC engages in lobbying, press relations and technical standards development to support "email deliverability" — ensuring that mass email delivery continues despite anti- spam legislation and technologies. [ 10 ] Today the two organizations exist entirely independent from each other. In response to a 2007 FTC staff report ( Self-Regulatory Principles for Online Behavioral Advertising [ 11 ] ), the NAI published an updated set of principles in December 2008 [ 12 ] after providing a draft in April for public comments. [ 13 ] [ 14 ] The new principles incorporated new restrictions on the collection and use of sensitive data and data related to children. In 2009, the NAI launched a consumer education page, which provided a centralized location for a variety of informational articles, videos, and other creative content designed to educate users about online behavioral advertising. In 2010, the NAI joined the Digital Advertising Alliance, a non-profit organization of leading companies and trade associations including the Association of National Advertisers (ANA), the American Association of Advertising Agencies (4As), the Direct Marketing Association (DMA), the Interactive Advertising Bureau (IAB), the American Advertising Federation (AAF) and the NAI. These associations and their members are dedicated to developing effective self-regulatory solutions to consumer choice for web viewing data. In 2012, the NAI issued its third compliance report, which demonstrated that overall, the NAI member companies continue to meet the obligations of the NAI code. 
Ad network membership in the NAI fluctuated between 12 members in 2000, two members in 2002-2003 and five members in 2007, prompting criticism that it did not consistently represent or regulate the industry. [ 15 ] As of July 2017, the NAI lists over 100 members, including Google , Microsoft and Yahoo! . [ 16 ] In 2013, the NAI released its fourth annual compliance report. [ 17 ] The report described the NAI's planned initiatives for 2013, which included the development of a revised NAI Code of Conduct [ 18 ] governing the collection and use of data on mobile devices. Additionally, in 2013, the NAI released its first Mobile Application Code, [ 19 ] which expanded the organization’s self-regulatory program to cover data collected across mobile applications. [ 20 ] In 2014, NAI released its 5th Annual Compliance report, showing that NAI members overwhelmingly met their obligations under the provisions of the code and continued to uphold the NAI's rigorous standards for providing notice and choice around interest-based advertising (IBA). [ 21 ] The NAI compliance team reviewed 88 member companies. The NAI also created a prestigious one-year compliance and technology fellowship for highly qualified graduates with an interest in the intersection of technology, advertising and policy. [ 22 ] In May 2015, the NAI released an update to the Code of Conduct and its Guidance for NAI Members: "Use of Non-Cookie Technologies for Interest-Based Advertising Consistent with the NAI principles and Code of Conduct (Beyond Cookies Guidance)". [ 23 ] In July 2015, the NAI released its Guidance for NAI Members: "Determining Whether Location is Imprecise (Imprecise Location Guidance)", [ 24 ] which provided clarity on the types of location data that may require opt-in consent. In August 2015, NAI released an update to the Mobile Application Code in order to incorporate many of the changes in the 2015 Update to the NAI Code of Conduct and apply them to the mobile advertising ecosystem. [ 25 ] As of January 1, 2016, NAI members engaged in cross-app advertising (CAA) are required to come into compliance with the Mobile App Code. Also in April 2016, NAI welcomed its 100th member, evidence of the continued appeal of the NAI's compliance program. In September 2016, the NAI became one of the founding members of the Coalition for Better Ads, an industry coalition developing new global standards for online advertising. [ 26 ] In 2017, the 2018 NAI Code of Conduct was released. [ 27 ] Also released in 2017 were updates to the non-cookie technology guidance, titled Guidance for NAI Members: Use of Non-Cookie Technologies for Interest-Based Advertising , [ 28 ] and a cross-device linking guidance in May 2017. [ 29 ] In 2019, the most recent version of the NAI Code of Conduct was released. [ 30 ] In 2022, the NAI released Precise Location Information Solution Provider Voluntary Enhanced Standards. [ 31 ] In 2023, the NAI announced that they temporarily paused enforcement of the 2020 NAI Code of Conduct in order to draft new governing guidelines that more accurately reflect state legal requirements. [ 30 ] The NAI's Self-Regulatory Code of Conduct [ 32 ] imposes notice, choice, transparency, education, and data security requirements on members, along with other obligations with respect to the collection and use of data for interest-based advertising (IBA). 
The Code also limits the types of data that member companies can use for advertising purposes and imposes a host of substantive restrictions on member companies' collection, use, and transfer of data used for interest-based advertising. The NAI mandates that member companies provide users a means to opt out of interest-based advertising. The NAI opt-out tool [ 33 ] is a simple web-based utility that allows users to opt out of receiving targeted ads from one, some, or all member companies. The NAI employs a comprehensive compliance and enforcement program [ 34 ] to verify ongoing member compliance with these obligations. The NAI's self-regulatory principles for online behavioral advertising depend on a model of notice and choice . Notice : The NAI principles require "clear, meaningful and prominent" notice on the member’s website that describes its data collection, including what behavioral or multi-site advertising the ad network engages in, what types of data they collect for what purposes and for what length of time, data transfer, and use practices for interest-based advertising and/or ad delivery and reporting. Since ads are commonly shown on websites not controlled by the ad network, members must also require that partnering websites that display their ads also provide "prominent" notice that behavioral advertising is taking place, as well as what data is being collected, for what purposes and with whom it will be shared. Typically, these notices are presented in each website's privacy policy . [ 27 ] "Robust" notice — where the notice is presented before personal information is collected — is required when personally-identifiable information ("name, address, telephone number, email address, financial account number, government-issued identifier and any other data used to identify, contact or precisely locate a person") will be merged with other non-identifiable information (like demographics or interests). Choice : Ad networks which satisfy the NAI principles must provide consumers a choice about whether information collected about them is tracked and used to provide targeted advertising. Whether this choice is "opt-out" or "opt-in" depends on the type and usage of data. For sensitive information (including Social Security Numbers, financial account numbers, real-time location information and precise information about medical conditions), tracking is always "opt-in". Also, when previously collected personally-identifiable information is merged with non-identifiable information (and the consumer wasn't provided "robust notice" of this practice originally), then ad networks must obtain affirmative consent. In all other cases of tracking personally-identifiable and non-identifiable information, choice is provided through an "opt-out" mechanism: the opt-out cookie. Although HTTP cookies are commonly used by advertising networks to track consumers as they access information across different web sites, the opt-out cookie is used to signal that the consumer has chosen not to have their data collected for providing targeted ads. The NAI provides a tool to download opt-out cookies for each of their member networks: member networks who detect the opt-out cookie must not collect data on that user for targeted advertising. [ citation needed ] Additional principles prohibit collecting information of children under age 13 or using collected data for non-marketing purposes. 
Ad networks are required to provide subjects of data collection "reasonable" access to the personally-identifiable information they collect, make "reasonable" efforts to use reliable data, provide "reasonable" security and use "reasonable" efforts (through the NAI) to educate consumers about targeted advertising. Retention of data is limited to "legitimate business needs". [ citation needed ] In 2013, the NAI unveiled new educational resources for consumers covering a variety of topics and concerns related to online behavioral advertising or internet-based advertising. As part of these efforts, the NAI provides current information and tools that are easy to understand and use, and the organization’s members donate billions of ad impressions to raise awareness and point consumers to these and other resources. The NAI also provides a framework to help businesses honor consumer preferences and act responsibly. Every NAI member company is required to provide choices through both the NAI and Digital Advertising Alliance [ 35 ] websites. In addition, NAI requires members to include opt-out tools and comprehensive disclosures on their own websites. Moreover, NAI companies support the Ad Choices icon, just-in-time notice embedded in or around the advertisements consumers see online. [ 20 ] The NAI and its set of self-regulatory principles have been widely criticized by consumer advocacy organizations. The World Privacy Forum has argued that the NAI opt-out cookie has been ineffective because consumers don't understand how cookies work, don't realize that cookies can simultaneously track them and be used to signal that they should not be tracked, don't recognize that changing membership in the NAI requires regularly updating their opt-out cookies, and regularly encounter errors on the NAI web site while trying to opt out. [ 15 ] Before 2008, the NAI principles covered tracking only via HTTP cookies despite additional technologies for uniquely identifying and tracking browsers, [ 15 ] the updated principles explicitly cover Flash cookies and similar technologies. [ 36 ] Since its first review in 2007, however, the World Privacy Forum’s founder has described the NAI improvements “profound,” calling its 2013 Code of Conduct “remarkable” for a number of reasons. The founder went on to say that the “NAI represents a really important step forward for what self-regulation has been.” [ 37 ] Concerns have also been raised about the process for developing and enforcing the NAI principles. The Electronic Privacy Information Center criticized the negotiation of the original set of principles for not substantively including privacy advocates or consumer protection organizations, [ 38 ] a concern echoed by seven senators in a letter to then FTC Chairman Pitofsky. [ 39 ] The NAI used TRUSTe for third-party enforcement of its principles starting in 2002, but over time TRUSTe provided less and less detail in their reports on consumer complaints about the NAI and stopped reporting these complaints altogether in 2006. [ 15 ] When the NAI published updated principles in 2008, it chose to review member compliance itself, which the Center for Democracy and Technology argued would reduce consumer trust in the organization. [ 36 ] The NAI responded to this criticism on its blog. [ 40 ] The NAI initially allowed for "associate members" to join the association; these members were not required to comply with the organization's principles. 
However, this concept was quickly discarded, and all members of the NAI are currently required to comply with the NAI Codes of Conduct and are evaluated regularly. [ 41 ]
https://en.wikipedia.org/wiki/Network_Advertising_Initiative
Network Installation Manager (NIM) is an object-oriented system management framework on the IBM AIX operating system that installs and manages systems over a network . [ 1 ] [ 2 ] [ 3 ] NIM is analogous to Kickstart in the Linux world. [ 4 ] NIM is a client-server system [ 5 ] in which a NIM server provides a boot image to client systems via the BOOTP and TFTP protocols. [ 6 ] In addition to boot images, NIM can manage software updates and third-party applications. [ 7 ] The SUMA command can be integrated with NIM to automate system updates from a central server and subsequent distribution to clients. [ 8 ] NIM data is organized into object classes and object types. [ 9 ] Classes include machines, networks and resources while types refer to the kind of object within a class, e.g., script or image resources.
https://en.wikipedia.org/wiki/Network_Installation_Manager
Network Investigative Technique ( NIT ) is a form of malware (or hacking ) employed by the FBI since at least 2002. It is a drive-by download computer program designed to provide access to a computer. Its usage has raised both Fourth Amendment concerns [ 1 ] and jurisdictional issues. [ 2 ] The FBI has to date, despite a court order, declined to provide the complete code [ 3 ] in a child sex abuse case involving the Tor anonymity network . [ 4 ] On May 12, 2016, Mozilla filed an amicus curiae brief inasmuch as the FBI's exploit against the Mozilla Firefox web browser potentially puts millions of users at risk. It asked that the exploit be disclosed to Mozilla before it is disclosed to the defendant, thus raising Fifth Amendment issues as well. [ 5 ] Also, US District Judge Robert J. Bryan in Tacoma, Washington ruled that while the defendant in United States v. Michaud has the right to review the code, the government also has the right to keep it secret (two other federal judges in related cases have ruled to suppress evidence found as a result of the NIT); [ 6 ] on May 25, 2016, however, he ruled that "For the reasons stated orally on the record, evidence of the NIT., the search warrant issued based on the NIT., and the fruits of that warrant should be excluded and should not be offered in evidence at trial..." [ 7 ] In March 2017 the American Civil Liberties Union , Electronic Frontier Foundation , and the National Association of Criminal Defense Lawyers released a 188-page guide to enable meaningful Fourth Amendment analysis. [ 8 ] In April a Minnesota judge ruled that the warrant was invalid from the moment it was signed, given that the FBI agent knew that it exceeded the jurisdictional requirements of Rule 41 . All evidence gathered after that warrant was served was hence the fruit of the poisonous tree . [ 9 ] The ACLU and Privacy International successfully litigated (see [18-cv-1488]) the release of U.S. sealed court records that revealed details about a NIT deployed in 2016 on 23 separate onion services of the Tor network . The sworn affidavit submitted by a Special Agent of the FBI (based on an affidavit template formerly written by the NAIC ) indicated the NIT had a range of abilities. There is a growing list of government operations that are known to have used NITs.
https://en.wikipedia.org/wiki/Network_Investigative_Technique
Network Performance Monitor (NPM) in Operations Management Suite, a component of Microsoft Azure , monitors network performance between office sites, data centers , clouds and applications in near real time. It helps a network administrator locate and troubleshoot bottlenecks such as network delay , data loss and availability of any network link across on-premises networks, Microsoft Azure VNets, Amazon Web Services VPCs, hybrid networks, VPNs or even public internet links. NPM monitors the availability and quality of connectivity between multiple locations within and across campuses, private clouds and public clouds. It uses synthetic transactions to test for reachability and can be used on any IP network irrespective of the make and model of the network routers or switches deployed; it does not require any access to network devices. The Microsoft Monitoring Agent (MMA), or the OMS extension for virtual machines hosted in Azure, must be installed on the servers in the subnetworks that are to be monitored. NPM uses synthetic transactions to test for reachability and to calculate network performance metrics across the network. Tests are performed using either TCP or ICMP, and users have the option of choosing between these protocols; users must evaluate their environments and weigh the pros and cons of each protocol. The NPM solution was first announced in public preview and later became generally available; [ 1 ] the launch was covered by eWeek. [ 2 ] Supported Windows versions for the agent include Windows 7 SP1 or later, and Network Performance Monitor is available in selected Azure regions. TCP handshakes are performed every 5 seconds and data is sent every 3 minutes. [ 3 ]
https://en.wikipedia.org/wiki/Network_Performance_Monitoring_Solution
A network address is an identifier for a node or host on a telecommunications network . Network addresses are designed to be unique identifiers across the network, although some networks allow for local , private addresses , or locally administered addresses that may not be unique. [ 1 ] Special network addresses are allocated as broadcast or multicast addresses ; these too are not unique. In some cases, network hosts may have more than one network address. For example, each network interface controller may be uniquely identified. Further, because protocols are frequently layered , more than one protocol's network address can occur in any particular network interface or node, and more than one type of network address may be used in any one network. [ 2 ] Network addresses can be flat addresses, which contain no information about the node's location in the network (such as a MAC address ), or may contain structure or hierarchical information used for routing (such as an IP address ). Examples of network addresses include MAC addresses and IP addresses.
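The contrast between a hierarchical address and a flat address can be shown with Python's standard ipaddress module; the specific addresses below are documentation-range examples, not values from the article.

```python
import ipaddress

# Hierarchical: an IPv4 address carries routing structure, so we can ask
# which network prefix it belongs to and what the broadcast address is.
host = ipaddress.ip_address("192.0.2.17")
net = ipaddress.ip_network("192.0.2.0/24")
print(host in net)             # True: the /24 prefix locates the host's network
print(net.broadcast_address)   # 192.0.2.255, a special, non-unique address

# Flat: a MAC address identifies an interface but encodes no location in the
# network, so software can only normalise or compare it as an opaque value.
mac = "00-1B-44-11-3A-B7"
print(mac.lower().replace("-", ":"))
```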
https://en.wikipedia.org/wiki/Network_address
A network administrator is a person designated in an organization whose responsibility includes maintaining computer infrastructures with emphasis on local area networks (LANs) up to wide area networks (WANs). Responsibilities may vary between organizations, but installing new hardware , on-site servers, enforcing licensing agreements, software-network interactions as well as network integrity and resilience are some of the key areas of focus. The role of the network administrator can vary significantly depending on an organization's size, location, and socioeconomic considerations. Some organizations work on a user-to-technical support ratio, [ 1 ] [ 2 ] [ 3 ] Network administrators are often involved in proactive work. This type of work will often include: [ citation needed ] Network administrators are responsible for making sure that computer hardware and network infrastructure related to an organization's data network are effectively maintained. In smaller organizations, they are typically involved in the procurement of new hardware, the rollout of new software, maintaining disk images for new computer installs, making sure that licenses are paid for and up to date for software that needs it, maintaining the standards for server installations and applications, monitoring the performance of the network, checking for security breaches, and poor data management practices. A common question for the small-medium business (SMB) network administrator is, how much bandwidth do I need to run my business? [ 4 ] Typically, within a larger organization, these roles are split into multiple roles or functions across various divisions and are not actioned by the one individual. In other organizations, some of these roles mentioned are carried out by system administrators . As with many technical roles, network administrator positions require a breadth of technical knowledge and the ability to learn the intricacies of new networking and server software packages quickly. Within smaller organizations, the more senior role of network engineer is sometimes attached to the responsibilities of the network administrator. It is common for smaller organizations to outsource this function. [ 5 ]
https://en.wikipedia.org/wiki/Network_administrator
In electrical engineering and electronics , a network is a collection of interconnected components . Network analysis is the process of finding the voltages across, and the currents through, all network components. There are many techniques for calculating these values; however, for the most part, the techniques assume linear components. Except where stated, the methods described in this article are applicable only to linear network analysis. A useful procedure in network analysis is to simplify the network by reducing the number of components. This can be done by replacing physical components with other notional components that have the same effect. A particular technique might directly reduce the number of components, for instance by combining impedances in series. On the other hand, it might merely change the form into one in which the components can be reduced in a later operation. For instance, one might transform a voltage generator into a current generator using Norton's theorem in order to be able to later combine the internal resistance of the generator with a parallel impedance load. A resistive circuit is a circuit containing only resistors , ideal current sources , and ideal voltage sources . If the sources are constant ( DC ) sources, the result is a DC circuit . Analysis of a circuit consists of solving for the voltages and currents present in the circuit. The solution principles outlined here also apply to phasor analysis of AC circuits . Two circuits are said to be equivalent with respect to a pair of terminals if the voltage across the terminals and current through the terminals for one network have the same relationship as the voltage and current at the terminals of the other network. If V 2 = V 1 {\displaystyle V_{2}=V_{1}} implies I 2 = I 1 {\displaystyle I_{2}=I_{1}} for all (real) values of V 1 , then with respect to terminals ab and xy , circuit 1 and circuit 2 are equivalent. The above is a sufficient definition for a one-port network. For more than one port, then it must be defined that the currents and voltages between all pairs of corresponding ports must bear the same relationship. For instance, star and delta networks are effectively three port networks and hence require three simultaneous equations to fully specify their equivalence. Some two terminal network of impedances can eventually be reduced to a single impedance by successive applications of impedances in series or impedances in parallel. A network of impedances with more than two terminals cannot be reduced to a single impedance equivalent circuit. An n -terminal network can, at best, be reduced to n impedances (at worst ( n 2 ) {\displaystyle {\tbinom {n}{2}}} ). For a three terminal network, the three impedances can be expressed as a three node delta (Δ) network or four node star (Y) network. These two networks are equivalent and the transformations between them are given below. A general network with an arbitrary number of nodes cannot be reduced to the minimum number of impedances using only series and parallel combinations. In general, Y-Δ and Δ-Y transformations must also be used. For some networks the extension of Y-Δ to star-polygon transformations may also be required. For equivalence, the impedances between any pair of terminals must be the same for both networks, resulting in a set of three simultaneous equations. The equations below are expressed as resistances but apply equally to the general case with impedances. 
The star-to-delta and series-resistor transformations are special cases of the general resistor network node elimination algorithm. Any node connected by N resistors ( R 1 … R N ) to nodes 1 … N can be replaced by ( N 2 ) {\displaystyle {\tbinom {N}{2}}} resistors interconnecting the remaining N nodes. The resistance between any two nodes x, y is given by: For a star-to-delta ( N = 3 ) this reduces to: For a series reduction ( N = 2 ) this reduces to: For a dangling resistor ( N = 1 ) it results in the elimination of the resistor because ( 1 2 ) = 0 {\displaystyle {\tbinom {1}{2}}=0} . A generator with an internal impedance (i.e. non-ideal generator) can be represented as either an ideal voltage generator or an ideal current generator plus the impedance. These two forms are equivalent and the transformations are given below. If the two networks are equivalent with respect to terminals ab, then V and I must be identical for both networks. Thus, Some very simple networks can be analysed without the need to apply the more systematic approaches. Consider n impedances that are connected in series . The voltage V i {\displaystyle V_{i}} across any impedance Z i {\displaystyle Z_{i}} is Consider n admittances that are connected in parallel . The current I i {\displaystyle I_{i}} through any admittance Y i {\displaystyle Y_{i}} is for i = 1 , 2 , . . . , n . {\displaystyle i=1,2,...,n.} Nodal analysis uses the concept of a node voltage and considers the node voltages to be the unknown variables. [ 2 ] : 2-8 - 2-9 For all nodes, except a chosen reference node, the node voltage is defined as the voltage drop from the node to the reference node. Therefore, there are N-1 node voltages for a circuit with N nodes. [ 2 ] : 2-10 In principle, nodal analysis uses Kirchhoff's current law (KCL) at N-1 nodes to get N-1 independent equations. Since equations generated with KCL are in terms of currents going in and out of nodes, these currents, if their values are not known, need to be represented by the unknown variables (node voltages). For some elements (such as resistors and capacitors) getting the element currents in terms of node voltages is trivial. For some common elements where this is not possible, specialized methods are developed. For example, a concept called supernode is used for circuits with independent voltage sources. [ 2 ] : 2-12 - 2-13 Mesh — a loop that does not contain an inner loop. In this method, the effect of each generator in turn is calculated. All the generators other than the one being considered are removed and either short-circuited in the case of voltage generators or open-circuited in the case of current generators. The total current through or the total voltage across a particular branch is then calculated by summing all the individual currents or voltages. There is an underlying assumption to this method that the total current or voltage is a linear superposition of its parts. Therefore, the method cannot be used if non-linear components are present. [ 2 ] : 6–14 Superposition of powers cannot be used to find total power consumed by elements even in linear circuits. Power varies according to the square of total voltage or current and the square of the sum is not generally equal to the sum of the squares. Total power in an element can be found by applying superposition to the voltages and current independently and then calculating power from the total voltage and current. Choice of method [ 3 ] : 112–113 is to some extent a matter of taste. 
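Returning to the node-elimination rule described above, whose omitted expression is, in its standard star-mesh form, R_xy = R_x · R_y · Σ(1/R_i), the following Python sketch applies it to the two special cases mentioned, series reduction (N = 2) and the star-to-delta transform (N = 3); the resistance values are arbitrary illustrations.

```python
from itertools import combinations


def eliminate_node(star):
    """Star-mesh transform: remove a node joined to its neighbours by the given
    resistances and return the equivalent resistor between each remaining pair,
    using R_xy = R_x * R_y * sum(1 / R_i)."""
    g_sum = sum(1.0 / r for r in star.values())
    return {
        (x, y): star[x] * star[y] * g_sum
        for x, y in combinations(sorted(star), 2)
    }


# N = 2 reduces to the series combination: 100 ohm and 220 ohm give 320 ohm.
print(eliminate_node({"1": 100.0, "2": 220.0}))

# N = 3 is the star-to-delta transform: a 10/20/30 ohm star becomes three
# delta resistors of roughly 36.7, 55.0 and 110.0 ohm.
print(eliminate_node({"a": 10.0, "b": 20.0, "c": 30.0}))
```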
If the network is particularly simple or only a specific current or voltage is required then ad-hoc application of some simple equivalent circuits may yield the answer without recourse to the more systematic methods. A transfer function expresses the relationship between an input and an output of a network. For resistive networks, this will always be a simple real number or an expression which boils down to a real number. Resistive networks are represented by a system of simultaneous algebraic equations. However, in the general case of linear networks, the network is represented by a system of simultaneous linear differential equations. In network analysis, rather than use the differential equations directly, it is usual practice to carry out a Laplace transform on them first and then express the result in terms of the Laplace parameter s, which in general is complex . This is described as working in the s-domain . Working with the equations directly would be described as working in the time (or t) domain because the results would be expressed as time varying quantities. The Laplace transform is the mathematical method of transforming between the s-domain and the t-domain. This approach is standard in control theory and is useful for determining stability of a system, for instance, in an amplifier with feedback. For two terminal components the transfer function, or more generally for non-linear elements, the constitutive equation , is the relationship between the current input to the device and the resulting voltage across it. The transfer function, Z(s), will thus have units of impedance, ohms. For the three passive components found in electrical networks, the transfer functions are; For a network to which only steady ac signals are applied, s is replaced with jω and the more familiar values from ac network theory result. Finally, for a network to which only steady dc is applied, s is replaced with zero and dc network theory applies. Transfer functions, in general, in control theory are given the symbol H(s). Most commonly in electronics, transfer function is defined as the ratio of output voltage to input voltage and given the symbol A(s), or more commonly (because analysis is invariably done in terms of sine wave response), A ( jω ), so that; A ( j ω ) = V o V i {\displaystyle A(j\omega )={\frac {V_{o}}{V_{i}}}} The A standing for attenuation, or amplification, depending on context. In general, this will be a complex function of jω , which can be derived from an analysis of the impedances in the network and their individual transfer functions. Sometimes the analyst is only interested in the magnitude of the gain and not the phase angle. In this case the complex numbers can be eliminated from the transfer function and it might then be written as; A ( ω ) = | V o V i | {\displaystyle A(\omega )=\left|{\frac {V_{o}}{V_{i}}}\right|} The concept of a two-port network can be useful in network analysis as a black box approach to analysis. The behaviour of the two-port network in a larger network can be entirely characterised without necessarily stating anything about the internal structure. However, to do this it is necessary to have more information than just the A(jω) described above. It can be shown that four such parameters are required to fully characterise the two-port network. These could be the forward transfer function, the input impedance, the reverse transfer function (i.e., the voltage appearing at the input when a voltage is applied to the output) and the output impedance. 
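As a small illustration of deriving a transfer function A(jω) from the impedances in a network, the sketch below evaluates the voltage divider formed by a series resistor and a shunt capacitor (a first-order RC low-pass); the component values are arbitrary and chosen only so that the corner frequency falls at 1000 rad/s.

```python
import cmath
import math


def rc_lowpass_gain(omega, r, c):
    """A(jw) = Vo/Vi for a series resistor feeding a shunt capacitor,
    computed directly from the two impedances Z_R = R and Z_C = 1/(jwC)."""
    z_r = r
    z_c = 1.0 / (1j * omega * c)
    return z_c / (z_r + z_c)


R, C = 1_000.0, 1e-6          # 1 kOhm and 1 uF -> corner frequency 1000 rad/s
for omega in (100.0, 1_000.0, 10_000.0):
    a = rc_lowpass_gain(omega, R, C)
    print(f"w={omega:7.0f} rad/s  |A|={abs(a):.3f}  "
          f"phase={math.degrees(cmath.phase(a)):6.1f} deg")
# At the corner frequency the magnitude is 1/sqrt(2), about 0.707, at -45 degrees.
```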
There are many others (see the main article for a full listing), one of these expresses all four parameters as impedances. It is usual to express the four parameters as a matrix; [ V 1 V 0 ] = [ z ( j ω ) 11 z ( j ω ) 12 z ( j ω ) 21 z ( j ω ) 22 ] [ I 1 I 0 ] {\displaystyle {\begin{bmatrix}V_{1}\\V_{0}\end{bmatrix}}={\begin{bmatrix}z(j\omega )_{11}&z(j\omega )_{12}\\z(j\omega )_{21}&z(j\omega )_{22}\end{bmatrix}}{\begin{bmatrix}I_{1}\\I_{0}\end{bmatrix}}} The matrix may be abbreviated to a representative element; [ z ( j ω ) ] {\displaystyle \left[z(j\omega )\right]} or just [ z ] {\displaystyle \left[z\right]} These concepts are capable of being extended to networks of more than two ports. However, this is rarely done in reality because, in many practical cases, ports are considered either purely input or purely output. If reverse direction transfer functions are ignored, a multi-port network can always be decomposed into a number of two-port networks. Where a network is composed of discrete components, analysis using two-port networks is a matter of choice, not essential. The network can always alternatively be analysed in terms of its individual component transfer functions. However, if a network contains distributed components , such as in the case of a transmission line , then it is not possible to analyse in terms of individual components since they do not exist. The most common approach to this is to model the line as a two-port network and characterise it using two-port parameters (or something equivalent to them). Another example of this technique is modelling the carriers crossing the base region in a high frequency transistor. The base region has to be modelled as distributed resistance and capacitance rather than lumped components . Transmission lines and certain types of filter design use the image method to determine their transfer parameters. In this method, the behaviour of an infinitely long cascade connected chain of identical networks is considered. The input and output impedances and the forward and reverse transmission functions are then calculated for this infinitely long chain. Although the theoretical values so obtained can never be exactly realised in practice, in many cases they serve as a very good approximation for the behaviour of a finite chain as long as it is not too short. Most analysis methods calculate the voltage and current values for static networks, which are circuits consisting of memoryless components only but have difficulties with complex dynamic networks. In general, the equations that describe the behaviour of a dynamic circuit are in the form of a differential-algebraic system of equations (DAEs). DAEs are challenging to solve and the methods for doing so are not yet fully understood and developed (as of 2010). Also, there is no general theorem that guarantees solutions to DAEs will exist and be unique. [ 5 ] : 204–205 In special cases, the equations of the dynamic circuit will be in the form of an ordinary differential equations (ODE), which are easier to solve, since numerical methods for solving ODEs have a rich history, dating back to the late 1800s. One strategy for adapting ODE solution methods to DAEs is called direct discretization and is the method of choice in circuit simulation. [ 5 ] : 204-205 Simulation-based methods for time-based network analysis solve a circuit that is posed as an initial value problem (IVP). 
That is, the values of the components with memories (for example, the voltages on capacitors and currents through inductors) are given at an initial point of time t 0 , and the analysis is done for the time t 0 ≤ t ≤ t f {\displaystyle t_{0}\leq t\leq t_{f}} . [ 5 ] : 206-207 Since finding numerical results for the infinite number of time points from t 0 to t f is not possible, this time period is discretized into discrete time instances, and the numerical solution is found for every instance. The time between the time instances is called the time step and can be fixed throughout the whole simulation or may be adaptive . In an IVP, when finding a solution for time t n+1 , the solution for time t n is already known. Then, temporal discretization is used to replace the derivatives with differences, such as x ′ ( t n + 1 ) ≈ x n + 1 − x n h n + 1 {\displaystyle x'(t_{n+1})\approx {\frac {x_{n+1}-x_{n}}{h_{n+1}}}} for the backward Euler method , where h n+1 is the time step. [ 5 ] : 266 If all circuit components were linear or the circuit was linearized beforehand, the equation system at this point is a system of linear equations and is solved with numerical linear algebra methods. Otherwise, it is a nonlinear algebraic equation system and is solved with nonlinear numerical methods such as root-finding algorithms . Simulation methods are much more applicable than Laplace transform based methods, such as transfer functions , which only work for simple dynamic networks with capacitors and inductors. Also, the input signals to the network cannot be arbitrarily defined for Laplace transform based methods. Most electronic designs are, in reality, non-linear. There are very few that do not include some semiconductor devices. These are invariably non-linear; the transfer function of an ideal semiconductor p-n junction is given by the very non-linear relationship: {\displaystyle i=I_{0}\left(e^{v/V_{\mathrm {T} }}-1\right)} where i and v are the instantaneous current through and voltage across the junction, I 0 is the reverse leakage (saturation) current of the device, and V T is the thermal voltage, about 25 mV at room temperature. There are many other ways that non-linearity can appear in a network. All methods utilising linear superposition will fail when non-linear components are present. There are several options for dealing with non-linearity depending on the type of circuit and the information the analyst wishes to obtain. The diode equation above is an example of an element constitutive equation of the general form {\displaystyle f(v,i)=0} . This can be thought of as a non-linear resistor. The corresponding constitutive equations for non-linear inductors and capacitors are respectively {\displaystyle \varphi =f(i)} and {\displaystyle q=f(v)} where f is any arbitrary function, φ is the stored magnetic flux and q is the stored charge. An important consideration in non-linear analysis is the question of uniqueness. For a network composed of linear components there will always be one, and only one, unique solution for a given set of boundary conditions. This is not always the case in non-linear circuits. For instance, a linear resistor with a fixed current applied to it has only one solution for the voltage across it. On the other hand, the non-linear tunnel diode has up to three solutions for the voltage for a given current. That is, a particular solution for the current through the diode is not unique; there may be others, equally valid. In some cases there may not be a solution at all: the question of existence of solutions must be considered. Another important consideration is the question of stability. A particular solution may exist, but it may not be stable, rapidly departing from that point at the slightest stimulation.
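Returning to the simulation-based (direct discretization) approach described earlier in this section, the sketch below applies the backward Euler difference x'(t_{n+1}) ≈ (x_{n+1} − x_n)/h to a single linear RC charging circuit, for which each time step reduces to one scalar linear equation. The component values, step size and names are assumptions chosen for illustration; a real circuit simulator assembles and solves a full (possibly nonlinear) system at every step.

```python
def simulate_rc_backward_euler(v_src=5.0, R=1_000.0, C=1e-6, h=1e-5, t_end=5e-3):
    """Backward Euler solution of C*dv/dt = (v_src - v)/R with v(0) = 0.

    Discretizing gives C*(v_next - v)/h = (v_src - v_next)/R, which is a
    single linear equation in v_next at every time step.
    """
    v, t = 0.0, 0.0
    samples = [(t, v)]
    while t < t_end:
        # Rearranged: v_next * (C/h + 1/R) = C*v/h + v_src/R.
        v = (C * v / h + v_src / R) / (C / h + 1.0 / R)
        t += h
        samples.append((t, v))
    return samples

trace = simulate_rc_backward_euler()
print(trace[-1])   # after 5 time constants v should be close to 5*(1 - e**-5), about 4.97 V
```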
It can be shown that a network that is absolutely stable for all conditions must have one, and only one, solution for each set of conditions. [ 6 ] A switching device is one where the non-linearity is utilised to produce two opposite states. CMOS devices in digital circuits, for instance, have their output connected to either the positive or the negative supply rail and are never found at anything in between except during a transient period when the device is switching. Here the non-linearity is designed to be extreme, and the analyst can take advantage of that fact. These kinds of networks can be analysed using Boolean algebra by assigning the two states ("on"/"off", "positive"/"negative" or whatever states are being used) to the Boolean constants "0" and "1". The transients are ignored in this analysis, along with any slight discrepancy between the state of the device and the nominal state assigned to a Boolean value. For instance, Boolean "1" may be assigned to the state of +5V. The output of the device may be +4.5V but the analyst still considers this to be Boolean "1". Device manufacturers will usually specify a range of values in their data sheets that are to be considered undefined (i.e. the result will be unpredictable). The transients are not entirely uninteresting to the analyst. The maximum rate of switching is determined by the speed of transition from one state to the other. Happily for the analyst, for many devices most of the transition occurs in the linear portion of the devices transfer function and linear analysis can be applied to obtain at least an approximate answer. It is mathematically possible to derive Boolean algebras that have more than two states. There is not too much use found for these in electronics, although three-state devices are passingly common. This technique is used where the operation of the circuit is to be essentially linear, but the devices used to implement it are non-linear. A transistor amplifier is an example of this kind of network. The essence of this technique is to separate the analysis into two parts. Firstly, the dc biases are analysed using some non-linear method. This establishes the quiescent operating point of the circuit. Secondly, the small signal characteristics of the circuit are analysed using linear network analysis. Examples of methods that can be used for both these stages are given below. In a great many circuit designs, the dc bias is fed to a non-linear component via a resistor (or possibly a network of resistors). Since resistors are linear components, it is particularly easy to determine the quiescent operating point of the non-linear device from a graph of its transfer function. The method is as follows: from linear network analysis the output transfer function (that is output voltage against output current) is calculated for the network of resistor(s) and the generator driving them. This will be a straight line (called the load line ) and can readily be superimposed on the transfer function plot of the non-linear device. The point where the lines cross is the quiescent operating point. Perhaps the easiest practical method is to calculate the (linear) network open circuit voltage and short circuit current and plot these on the transfer function of the non-linear device. The straight line joining these two point is the transfer function of the network. In reality, the designer of the circuit would proceed in the reverse direction to that described. 
Starting from a plot provided in the manufacturers data sheet for the non-linear device, the designer would choose the desired operating point and then calculate the linear component values required to achieve it. It is still possible to use this method if the device being biased has its bias fed through another device which is itself non-linear, a diode for instance. In this case however, the plot of the network transfer function onto the device being biased would no longer be a straight line and is consequently more tedious to do. This method can be used where the deviation of the input and output signals in a network stay within a substantially linear portion of the non-linear devices transfer function, or else are so small that the curve of the transfer function can be considered linear. Under a set of these specific conditions, the non-linear device can be represented by an equivalent linear network. It must be remembered that this equivalent circuit is entirely notional and only valid for the small signal deviations. It is entirely inapplicable to the dc biasing of the device. For a simple two-terminal device, the small signal equivalent circuit may be no more than two components. A resistance equal to the slope of the v/i curve at the operating point (called the dynamic resistance), and tangent to the curve. A generator, because this tangent will not, in general, pass through the origin. With more terminals, more complicated equivalent circuits are required. A popular form of specifying the small signal equivalent circuit amongst transistor manufacturers is to use the two-port network parameters known as [h] parameters . These are a matrix of four parameters as with the [z] parameters but in the case of the [h] parameters they are a hybrid mixture of impedances, admittances, current gains and voltage gains. In this model the three terminal transistor is considered to be a two port network, one of its terminals being common to both ports. The [h] parameters are quite different depending on which terminal is chosen as the common one. The most important parameter for transistors is usually the forward current gain, h 21 , in the common emitter configuration. This is designated h fe on data sheets. The small signal equivalent circuit in terms of two-port parameters leads to the concept of dependent generators. That is, the value of a voltage or current generator depends linearly on a voltage or current elsewhere in the circuit. For instance the [z] parameter model leads to dependent voltage generators as shown in this diagram; There will always be dependent generators in a two-port parameter equivalent circuit. This applies to the [h] parameters as well as to the [z] and any other kind. These dependencies must be preserved when developing the equations in a larger linear network analysis. In this method, the transfer function of the non-linear device is broken up into regions. Each of these regions is approximated by a straight line. Thus, the transfer function will be linear up to a particular point where there will be a discontinuity. Past this point the transfer function will again be linear but with a different slope. A well known application of this method is the approximation of the transfer function of a pn junction diode. The transfer function of an ideal diode has been given at the top of this (non-linear) section. However, this formula is rarely used in network analysis, a piecewise approximation being used instead. 
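As a numeric counterpart to the graphical load-line construction described above, the sketch below finds the quiescent operating point of a diode fed from a generator through a resistor: the linear network gives the load line i = (V_th − v)/R, the device is described by a Shockley-type exponential, and the crossing point is located by bisection. The source value, saturation current and thermal voltage are illustrative assumptions, not figures from the text.

```python
import math

def diode_current(v, i_sat=1e-12, v_t=0.025):
    """Shockley-style diode characteristic i(v)."""
    return i_sat * (math.exp(v / v_t) - 1.0)

def q_point(v_th=5.0, r=1_000.0, tol=1e-9):
    """Intersection of the load line i = (v_th - v)/r with the diode curve."""
    lo, hi = 0.0, v_th          # the operating voltage lies between 0 and the open-circuit voltage
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # If the diode draws more current than the load line supplies, the true voltage is lower.
        if diode_current(mid) > (v_th - mid) / r:
            hi = mid
        else:
            lo = mid
    v_q = 0.5 * (lo + hi)
    return v_q, (v_th - v_q) / r

print(q_point())   # roughly (0.55 V, 4.4 mA) for these assumed values
```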
It can be seen that the diode current rapidly diminishes to -I o as the voltage falls. This current, for most purposes, is so small it can be ignored. With increasing voltage, the current increases exponentially. The diode is modelled as an open circuit up to the knee of the exponential curve, then past this point as a resistor equal to the bulk resistance of the semiconducting material. The commonly accepted values for the transition point voltage are 0.7V for silicon devices and 0.3V for germanium devices. An even simpler model of the diode, sometimes used in switching applications, is short circuit for forward voltages and open circuit for reverse voltages. The model of a forward biased pn junction having an approximately constant 0.7V is also a much used approximation for transistor base-emitter junction voltage in amplifier design. The piecewise method is similar to the small signal method in that linear network analysis techniques can only be applied if the signal stays within certain bounds. If the signal crosses a discontinuity point then the model is no longer valid for linear analysis purposes. The model does have the advantage over small signal however, in that it is equally applicable to signal and dc bias. These can therefore both be analysed in the same operations and will be linearly superimposable. In linear analysis, the components of the network are assumed to be unchanging, but in some circuits this does not apply, such as sweep oscillators, voltage controlled amplifiers , and variable equalisers . In many circumstances the change in component value is periodic. A non-linear component excited with a periodic signal, for instance, can be represented as a periodically varying linear component. Sidney Darlington disclosed a method of analysing such periodic time varying circuits. He developed canonical circuit forms which are analogous to the canonical forms of Ronald M. Foster and Wilhelm Cauer used for analysing linear circuits. [ 7 ] Generalization of circuit theory based on scalar quantities to vectorial currents is a necessity for newly evolving circuits such as spin circuits. [ clarification needed ] Generalized circuit variables consist of four components: scalar current and vector spin current in x, y, and z directions. The voltages and currents each become vector quantities with conductance described as a 4x4 spin conductance matrix. [ citation needed ]
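The piecewise diode approximation described above can be expressed directly as code: an open circuit (zero current) below an assumed 0.7 V knee, and a resistor equal to an assumed bulk resistance beyond it. The knee voltage and bulk resistance here are the usual silicon rules of thumb, not values taken from a specific device.

```python
def pwl_diode_current(v, v_knee=0.7, r_bulk=10.0):
    """Piecewise-linear diode model: open circuit below the knee,
    a bulk resistance in series with a constant 0.7 V drop above it."""
    if v < v_knee:
        return 0.0                      # region 1: treated as an open circuit
    return (v - v_knee) / r_bulk        # region 2: straight line with slope 1/r_bulk

# Because each region is linear, dc bias and signal can be superimposed
# as long as the total voltage stays within a single region.
for v in (0.3, 0.7, 0.8, 1.0):
    print(v, pwl_diode_current(v))
```

The even simpler switching model mentioned above would instead return zero current for any reverse voltage and treat the diode as a short circuit when forward biased.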
https://en.wikipedia.org/wiki/Network_analysis_(electrical_circuits)
From 1929 [ 1 ] to the late 1960s, large alternating current power systems were modelled and studied on AC network analyzers (also called alternating current network calculators or AC calculating boards ) or transient network analyzers . These special-purpose analog computers were an outgrowth of the DC calculating boards used in the very earliest power system analysis. By the middle of the 1950s, fifty network analyzers were in operation. [ 2 ] AC network analyzers were much used for power-flow studies , short circuit calculations, and system stability studies, but were ultimately replaced by numerical solutions running on digital computers. While the analyzers could provide real-time simulation of events, with no concerns about numeric stability of algorithms, the analyzers were costly, inflexible, and limited in the number of buses and lines that could be simulated. [ 3 ] Eventually powerful digital computers replaced analog network analyzers for practical calculations, but analog physical models for studying electrical transients are still in use. As AC power systems became larger at the start of the 20th century, with more interconnected devices, the problem of calculating the expected behavior of the systems became more difficult. Manual methods were only practical for systems of a few sources and nodes. The complexity of practical problems made manual calculation techniques too laborious or inaccurate to be useful. Many mechanical aids to calculation were developed to solve problems relating to network power systems. DC calculating boards used resistors and DC sources to represent an AC network. A resistor was used to model the inductive reactance of a circuit, while the actual series resistance of the circuit was neglected. The principle disadvantage was the inability to model complex impedances. However, for short-circuit fault studies, the effect of the resistance component was usually small. DC boards served to produce results accurate to around 20% error, sufficient for some purposes. Artificial lines were used to analyze transmission lines. These carefully constructed replicas of the distributed inductance, capacitance and resistance of a full-size line were used to investigate propagation of impulses in lines and to validate theoretical calculations of transmission line properties. An artificial line was made by winding layers of wire around a glass cylinder, with interleaved sheets of tin foil, to give the model proportionally the same distributed inductance and capacitance as the full-size line. Later, lumped-element approximations of transmission lines were found to give adequate precision for many calculations. Laboratory investigations of the stability of multiple-machine systems were constrained by the use of direct-operated indicating instruments (voltmeters, ammeters, and wattmeters). To ensure that the instruments negligibly loaded the model system, the machine power level used was substantial. Some workers in the 1920s used three-phase model generators rated up to 600 kVA and 2300 volts to represent a power system. General Electric developed model systems using generators rated at 3.75 kVA. [ 4 ] It was difficult to keep multiple generators in synchronism, and the size and cost of the units was a constraint. 
While transmission lines and loads could be accurately scaled down to laboratory representations, rotating machines could not be accurately miniaturized and keep the same dynamic characteristics as full-sized prototypes; the ratio of machine inertia to machine frictional loss did not scale. [ 5 ] A network analyzer system was essentially a scale model of the electrical properties of a specific power system. Generators, transmission lines, and loads were represented by miniature electrical components with scale values in proportion to the modeled system. [ 6 ] Model components were interconnected with flexible cords to represent the schematic diagram of the modeled system. Instead of using miniature rotating machines, accurately calibrated phase-shifting transformers were built to simulate electrical machines. These were all energized by the same source (at local power frequency or from a motor-generator set) and so inherently maintained synchronism. The phase angle and terminal voltage of each simulated generator could be set using rotary scales on each phase-shifting transformer unit. Using the per-unit system allowed values to be conveniently interpreted without additional calculation. To reduce the size of the model components, the network analyzer often was energized at a higher frequency than the 50 Hz or 60 Hz utility frequency . The operating frequency was chosen to be high enough to allow high-quality inductors and capacitors to be made, and to be compatible with the available indicating instruments, but not so high that stray capacitance would affect results. Many systems used either 440 Hz, or 480 Hz, provided by a motor-generator set, to reduce size of model components. Some systems used 10 kHz, using capacitors and inductors similar to those used in radio electronics. Model circuits were energized at relatively low voltages to allow for safe measurement with adequate precision. The model base quantities varied by manufacturer and date of design; as amplified indicating instruments became more common, lower base quantities were feasible. Model voltages and currents started off around 200 volts and 0.5 amperes in the MIT analyzer, which still allowed directly driven (but especially sensitive) instruments to be used to measure model parameters. The later machines used as little as 50 volts and 50 mA, used with amplified indicating instruments. By use of the per-unit system , model quantities could be readily transformed into the actual system quantities of voltage, current, power or impedance. A watt measured in the model might correspond to hundreds of kilowatts or megawatts in the modeled system. One hundred volts measured on the model might correspond to one per-unit, which could represent, say, 230,000 volts on a transmission line or 11,000 volts in a distribution system. Typically, results accurate to around 2% of measurement could be obtained. [ 7 ] Model components were single-phase devices, but using the symmetrical components method, unbalanced three-phase systems could be studied as well. A complete network analyzer was a system that filled a large room; one model was described as four bays of equipment, spanning a U-shaped arrangement 26 feet (8 metres) across. Companies such as General Electric and Westinghouse could provide consulting services based on their analyzers; but some large electrical utilities operated their own analyzers. 
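As a small illustration of the per-unit scaling described above, the sketch below converts between model readings and full-scale system quantities given a chosen voltage and power base. The base values and helper names are illustrative assumptions, not figures tied to any particular analyzer.

```python
def per_unit(value, base):
    """Express an actual quantity as a per-unit value."""
    return value / base

def actual(pu_value, base):
    """Convert a per-unit value back to an actual system quantity."""
    return pu_value * base

# Assumed bases: a 230 kV transmission system with a 100 MVA power base.
V_BASE = 230_000.0               # volts
S_BASE = 100e6                   # volt-amperes
Z_BASE = V_BASE ** 2 / S_BASE    # the impedance base follows from the other two

# If 100 model volts represent 1.0 per unit, a reading of 95 model volts
# corresponds to 0.95 pu, i.e. 218.5 kV on the real system.
reading_pu = per_unit(95.0, 100.0)
print(reading_pu, actual(reading_pu, V_BASE))
print("impedance base:", Z_BASE, "ohms")
```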
The use of network analyzers allowed quick solutions to difficult calculation problems, and allowed problems to be analyzed that would otherwise be uneconomic to compute using manual calculations. Although expensive to build and operate, network analyzers often repaid their costs in reduced calculation time and expedited project schedules. [ 8 ] For example, a stability study might indicate if a transmission line should have larger or differently spaced conductors to preserve stability margin during system faults; potentially saving many miles of cable and thousands of insulators. Network analyzers did not directly simulate the dynamic effects of load application to machine dynamics ( torque angle , and others). Instead, the analyzer would be used to solve dynamic problems in a stepwise fashion, first calculating a load flow, then adjusting the phase angle of the machine in response to its power flow, and re-calculating the power flow. In use, the system to be modelled would be represented as a single line diagram and all the impedances of lines and machines would be scaled to model values on the analyzer. A plugging diagram would be prepared to show the interconnections to be made between the model elements. The circuit elements would be interconnected by patch cables. The model system would be energized, and measurements taken at the points of interest in the model; these could be scaled up to the values in the full-scale system. [ 9 ] The network analyzer installed at Massachusetts Institute of Technology (MIT) grew out of a 1924 thesis project by Hugh H. Spencer and Harold Locke Hazen , investigating a power system modelling concept proposed by Vannevar Bush . Instead of miniature rotating machines, each generator was represented by a transformer with adjustable voltage and phase, all fed from a common source. This eliminated a significant source of the poor accuracy of models with miniature rotating machines. The 1925 publication of this thesis attracted the attention at General Electric, where Robert Doherty was interested in modelling problems of system stability. He asked Hazen to verify that the model could accurately reproduce the behavior of machines during load changes. Design and construction was carried out jointly by General Electric and MIT. When first demonstrated in June 1929, the system had eight phase-shifting transformers to represent synchronous machines. Other elements included 100 variable line resistors, 100 variable reactors, 32 fixed capacitors, and 40 adjustable load units. The analyzer was described in a 1930 paper by H.L Hazen, O.R. Schurig and M.F. Gardner. The base quantities for the analyzer were 200 volts, and 0.5 amperes. Sensitive portable thermocouple-type instruments were used for measurement. [ 10 ] The analyzer occupied four large panels, arranged in a U-shape, with tables in front of each section to hold measuring instruments. While primarily conceived as an educational tool, the analyzer saw considerable use by outside firms, who would pay to use the device. American Gas and Electric Company , the Tennessee Valley Authority , and many other organizations studied problems on the MIT analyzer in its first decade of operation. In 1940 the system was moved and expanded to handle more complex systems. By 1953 the MIT analyzer was beginning to fall behind the state of the art. Digital computers were first used on power system problems as early as " Whirlwind " in 1949. 
Unlike most of the forty other analyzers in service by that point, the MIT instrument was energized at 60 Hz, not 440 or 480 Hz, making its components large, and expansion to new types of problems difficult. Many utility customers had bought their own network analyzers. The MIT system was dismantled and sold to the Puerto Rico Water Resources Authority in 1954. [ 11 ] By 1947, fourteen network analyzers had been built at a total cost of about two million US dollars. General Electric built two full-scale network analyzers for its own work and for services to its clients. Westinghouse built systems for their internal use and provided more than 20 analyzers to utility and university clients. After the Second World War analyzers were known to be in use in France, the UK, Australia, Japan, and the Soviet Union. Later models had improvements such as centralized control of switching, central measurement bays, and chart recorders to automatically provide permanent records of results. General Electric's Model 307 was a miniaturized AC network analyzer with four generator units and a single electronically amplified metering unit. It was targeted at utility companies to solve problems too large for hand computation but not worth the expense of renting time on a full size analyzer. Like the Iowa State College analyzer, it used a system frequency of 10 kHz instead of 60 Hz or 480 Hz, allowing much smaller radio-style capacitor and inductors to be used to model power system components. The 307 was cataloged from 1957 and had a list of about 20 utility, educational and government customers. In 1959 its list price was $8,590. [ 12 ] In 1953, the Metropolitan Edison Company and a group of six other electrical companies purchased a new Westinghouse AC network analyzer for installation at the Franklin Institute in Philadelphia. The system, described as the largest ever built, cost $400,000. [ 13 ] In Japan, network analyzers were installed starting in 1951. The Yokogawa Electric company introduced a model energized at 3980 Hz starting in 1956. [ 14 ] A "transient network analyzer" was an analog model of a transmission system especially adapted to study high-frequency transient surges (such as those due to lightning or switching), instead of AC power frequency currents. Similarly to an AC network analyzer, they represented apparatus and lines with scaled inductances and resistances. A synchronously driven switch repeatedly applied a transient impulse to the model system, and the response at any point could be observed on an oscilloscope or recorded on an oscillograph. Some transient analyzers are still in use for research and education, sometimes combined with digital protective relays or recording instruments. [ 20 ] The Westinghouse Anacom was an AC-energized electrical analog computer system used extensively for problems in mechanical design, structural elements, lubrication oil flow, and various transient problems including those due to lightning surges in electric power transmission systems. The excitation frequency of the computer could be varied. The Westinghouse Anacom constructed in 1948 was used up to the early 1990s for engineering calculations; its original cost was $500,000. The system was periodically updated and expanded; by the 1980s the Anacom could be run through many simulation cases unattended, under the control of a digital computer that automatically set up initial conditions and recorded the results. 
Westinghouse built a replica Anacom for Northwestern University , sold an Anacom to ABB , and twenty or thirty similar computers by other makers were used around the world. [ 9 ] Since the multiple elements of the AC network analyzer formed a powerful analog computer, occasionally problems in physics and chemistry were modeled (by such researchers as Gabriel Kron of General Electric ), in the late 1940s prior to the ready availability of general-purpose digital computers. [ 21 ] Another application was water flow in water distribution systems. The forces and displacements of a mechanical system could be readily modelled with the voltages and currents of a network analyzer, which allowed easy adjustment of properties such as the stiffness of a spring by, for example, changing the value of a capacitor. [ 22 ] The David Taylor Model Basin operated an AC network analyzer from the late 1950s until the mid-1960s. The system was used on problems in ship design. An electrical analog of the structural properties of a proposed ship, shaft, or other structure could be built, and tested for its vibrational modes. Unlike AC analyzers used for power systems work, the exciting frequency was made continuously variable so that mechanical resonance effects could be investigated. Even during the Depression and the Second World War, many network analyzers were constructed because of their great value in solving calculations related to electric power transmission. By the mid 1950s, about thirty analyzers were available in the United States, representing an oversupply. Institutions such as MIT could no longer justify operating analyzers as paying clients barely covered operating expenses. [ 22 ] Once digital computers of adequate performance became available, the solution methods developed on analog network analyzers were migrated to the digital realm, where plugboards, switches and meter pointers were replaced with punch cards and printouts. The same general-purpose digital computer hardware that ran network studies could easily be dual-tasked with business functions such as payroll. Analog network analyzers faded from general use for load-flow and fault studies, although some persisted in transient studies for a while longer. Analog analyzers were dismantled and either sold off to other utilities, donated to engineering schools, or scrapped. The fate of a few analyzers illustrates the trend. The analyzer purchased by American Electric Power was replaced by digital systems in 1961, and donated to Virginia Tech . The Westinghouse network analyzer purchased by the State Electricity Commission of Victoria , Australia in 1950 was taken out of utility service in 1967 and donated to the Engineering department at Monash University ; but by 1985, even instructional use of the analyzer was no longer practical and the system was finally dismantled. [ 23 ] One factor contributing to the obsolescence of analog models was the increasing complexity of interconnected power systems. Even a large analyzer could only represent a few machines, and perhaps a few score lines and busses. Digital computers routinely handled systems with thousands of busses and transmission lines.
https://en.wikipedia.org/wiki/Network_analyzer_(AC_power)
A network analyzer is an instrument that measures the network parameters of electrical networks . Today, network analyzers commonly measure s–parameters because reflection and transmission of electrical networks are easy to measure at high frequencies, but there are other network parameter sets such as y-parameters , z-parameters , and h-parameters . Network analyzers are often used to characterize two-port networks such as amplifiers and filters, but they can be used on networks with an arbitrary number of ports . Network analyzers are used mostly at high frequencies ; operating frequencies can range from 1 Hz to 1.5 THz. [ 1 ] Special types of network analyzers can also cover lower frequency ranges down to 1 Hz. [ 2 ] These network analyzers can be used, for example, for the stability analysis of open loops or for the measurement of audio and ultrasonic components. [ 3 ] The two basic types of network analyzers are the scalar network analyzer (SNA), which measures amplitude properties only, and the vector network analyzer (VNA), which measures both amplitude and phase. A VNA is a form of RF network analyzer widely used for RF design applications. A VNA may also be called a gain–phase meter or an automatic network analyzer . An SNA is functionally identical to a spectrum analyzer in combination with a tracking generator . As of 2007, VNAs are the most common type of network analyzers, and so references to an unqualified "network analyzer" most often mean a VNA. Prominent VNA manufacturers include Keysight , [ 4 ] Anritsu , Advantest , Rohde & Schwarz , Siglent, Copper Mountain Technologies and OMICRON Lab . For some years now, entry-level devices and do-it-yourself projects have also been available, some for less than $100, mainly from the amateur radio sector. Although these have significantly reduced features compared to professional devices and offer only a limited range of functions, they are often sufficient for private users, especially for study and hobby applications up to the single-digit GHz range. [ 5 ] Another category of network analyzer is the microwave transition analyzer (MTA) or large-signal network analyzer (LSNA), which measures both amplitude and phase of the fundamental and harmonics. The MTA was commercialized before the LSNA, but was lacking some of the user-friendly calibration features now available with the LSNA. The basic architecture of a network analyzer involves a signal generator, a test set, one or more receivers and a display. In some setups, these units are distinct instruments. Most VNAs have two test ports, permitting measurement of four S-parameters ( S 11 , S 21 , S 12 , S 22 ) {\displaystyle (S_{11},S_{21},S_{12},S_{22})} , but instruments with more than two ports are available commercially. The network analyzer needs a test signal, and a signal generator or signal source will provide one. Older network analyzers did not have their own signal generator, but had the ability to control a stand-alone signal generator using, for example, a GPIB connection. Nearly all modern network analyzers have a built-in signal generator. High-performance network analyzers have two built-in sources. Two built-in sources are useful for applications such as mixer test, where one source provides the RF signal, another the LO; or amplifier intermodulation testing, where two tones are required for the test. The test set takes the signal generator output and routes it to the device under test, and it routes the signal to be measured to the receivers. It often splits off a reference channel for the incident wave.
In a SNA, the reference channel may go to a diode detector (receiver) whose output is sent to the signal generator's automatic level control. The result is better control of the signal generator's output and better measurement accuracy. In a VNA, the reference channel goes to the receivers; it is needed to serve as a phase reference. Directional couplers or two resistor power dividers are used for signal separation. Some microwave test sets include the front end mixers for the receivers (e.g., test sets for HP 8510). The receivers make the measurements. A network analyzer will have one or more receivers connected to its test ports. The reference test port is usually labeled R , and the primary test ports are A , B , C , ... Some analyzers will dedicate a separate receiver to each test port, but others share one or two receivers among the ports. The R receiver may be less sensitive than the receivers used on the test ports. For the SNA, the receiver only measures the magnitude of the signal. A receiver can be a detector diode that operates at the test frequency. The simplest SNA will have a single test port, but more accurate measurements are made when a reference port is also used. The reference port will compensate for amplitude variations in the test signal at the measurement plane. It is possible to share a single detector and use it for both the reference port and the test port by making two measurement passes. For the VNA, the receiver measures both the magnitude and the phase of the signal. It needs a reference channel ( R ) to determine the phase, so a VNA needs at least two receivers. The usual method down converts the reference and test channels to make the measurements at a lower frequency. The phase may be measured with a quadrature detector . A VNA requires at least two receivers, but some will have three or four receivers to permit simultaneous measurement of different parameters. There are some VNA architectures (six-port) that infer phase and magnitude from just power measurements. With the processed RF signal available from the receiver / detector section it is necessary to display the signal in a format that can be interpreted. With the levels of processing that are available today, some very sophisticated solutions are available in RF network analyzers. Here the reflection and transmission data is formatted to enable the information to be interpreted as easily as possible. Most RF network analyzers incorporate features including linear and logarithmic sweeps, linear and log formats, polar plots, Smith charts, etc. Trace markers, limit lines and pass/fail criteria are also added in many instances. [ 6 ] A VNA is a test system that enables the RF performance of radio frequency and microwave devices to be characterised in terms of network scattering parameters , or S parameters. The diagram shows the essential parts of a typical 2-port vector network analyzer (VNA). The two ports of the device under test (DUT) are denoted port 1 (P1) and port 2 (P2). The test port connectors provided on the VNA itself are precision types which will normally have to be extended and connected to P1 and P2 using precision cables 1 and 2, PC1 and PC2 respectively and suitable connector adaptors A1 and A2 respectively. The test frequency is generated by a variable frequency CW source and its power level is set using a variable attenuator . The position of switch SW1 sets the direction that the test signal passes through the DUT. 
Initially consider that SW1 is at position 1 so that the test signal is incident on the DUT at P1 which is appropriate for measuring S 11 {\displaystyle S_{11}\,} and S 21 {\displaystyle S_{21}\,} . The test signal is fed by SW1 to the common port of splitter 1, one arm (the reference channel) feeding a reference receiver for P1 (RX REF1) and the other (the test channel) connecting to P1 via the directional coupler DC1, PC1 and A1. The third port of DC1 couples off the power reflected from P1 via A1 and PC1, then feeding it to test receiver 1 (RX TEST1). Similarly, signals leaving P2 pass via A2, PC2 and DC2 to RX TEST2. RX REF1, RX TEST1, RX REF2 and RXTEST2 are known as coherent receivers as they share the same reference oscillator, and they are capable of measuring the test signal's amplitude and phase at the test frequency. All of the complex receiver output signals are fed to a processor which does the mathematical processing and displays the chosen parameters and format on the phase and amplitude display. The instantaneous value of phase includes both the temporal and spatial parts, but the former is removed by virtue of using 2 test channels, one as a reference and the other for measurement. When SW1 is set to position 2, the test signals are applied to P2, the reference is measured by RX REF2, reflections from P2 are coupled off by DC2 and measured by RX TEST2 and signals leaving P1 are coupled off by DC1 and measured by RX TEST1. This position is appropriate for measuring S 22 {\displaystyle S_{22}\,} and S 12 {\displaystyle S_{12}\,} . A network analyzer, like most electronic instruments requires periodic calibration ; typically this is performed once per year and is performed by the manufacturer or by a 3rd party in a calibration laboratory. When the instrument is calibrated, a sticker will usually be attached, stating the date it was calibrated and when the next calibration is due. A calibration certificate will be issued. A vector network analyzer achieves highly accurate measurements by correcting for the systematic errors in the instrument, the characteristics of cables, adapters and test fixtures. The process of error correction, although commonly just called calibration, is an entirely different process, and may be performed by an engineer several times in an hour. Sometimes it is called user-calibration, to indicate the difference from periodic calibration by a manufacturer. A network analyzer has connectors on its front panel, but the measurements are seldom made at the front panel. Usually some test cables will connect from the front panel to the device under test (DUT). The length of those cables will introduce a time delay and corresponding phase shift (affecting VNA measurements); the cables will also introduce some attenuation (affecting SNA and VNA measurements). The same is true for cables and couplers inside the network analyzer. All these factors will change with temperature. Calibration usually involves measuring known standards and using those measurements to compensate for systematic errors, but there are methods which do not require known standards. Only systematic errors can be corrected. Random errors , such as connector repeatability cannot be corrected by the user calibration. However, some portable vector network analyzers, designed for lower accuracy measurement outside using batteries, do attempt some correction for temperature by measuring the internal temperature of the network analyzer. 
A few preliminary steps are carried out before starting the user calibration. There are several different methods of calibration. The simplest calibration that can be performed on a network analyzer is a transmission measurement. This gives no phase information, and so gives similar data to a scalar network analyzer. The simplest calibration that can be performed on a network analyzer, whilst providing phase information, is a 1-port calibration (S11 or S22, but not both). This accounts for the three systematic errors which appear in 1-port reflectivity measurements: directivity, source match and reflection tracking. In a typical 1-port reflection calibration, the user measures three known standards, usually an open, a short and a known load. From these three measurements the network analyzer can account for the three errors above. [ 9 ] [ 10 ] A more complex calibration is a full 2-port reflectivity and transmission calibration. For two ports there are 12 possible systematic errors analogous to the three above. The most common method for correcting for these involves measuring a short, load and open standard on each of the two ports, as well as transmission between the two ports. It is impossible to make a perfect short circuit, as there will always be some inductance in the short. It is impossible to make a perfect open circuit, as there will always be some fringing capacitance. A modern network analyzer will have data stored about the devices in a calibration kit. ( Keysight Technologies 2006 ) For the open-circuit, this will be some electrical delay (typically tens of picoseconds), and fringing capacitance which will be frequency dependent. The capacitance is normally specified in terms of a polynomial, with the coefficients specific to each standard. A short will have some delay, and a frequency dependent inductance, although the inductance is normally considered insignificant below about 6 GHz. The definitions for a number of standards used in Keysight calibration kits can be found at http://na.support.keysight.com/pna/caldefs/stddefs.html . The definitions of the standards for a particular calibration kit will often change depending on the frequency range of the network analyzer. If a calibration kit works to 9 GHz, but a particular network analyzer has a maximum frequency of operation of 3 GHz, then the capacitance of the open standard can be approximated more closely up to 3 GHz, using a different set of coefficients than are necessary to work up to 9 GHz. In some calibration kits, the data for the male connectors is different from that for the females, so the user needs to specify the gender of the connector. In other calibration kits (e.g. Keysight 85033E 9 GHz 3.5 mm), the male and female have identical characteristics, so there is no need for the user to specify the gender. For gender-less connectors, like APC-7 , this issue does not arise. Most network analyzers have the ability to have a user defined calibration kit. So if a user has a particular calibration kit, details of which are not in the firmware of the network analyzer, the data about the kit can be loaded into the network analyzer and the kit used. Typically the calibration data can be entered on the instrument front panel or loaded from a medium such as floppy disk or USB stick , or down a bus such as USB or GPIB. The more expensive calibration kits will usually include a torque wrench to tighten connectors properly and a connector gauge to ensure there are no gross errors in the connectors.
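The one-port calibration described above can be written out as a small numeric sketch. With the usual three-term error model, the raw measured reflection coefficient is Γ_m = E_d + E_r·Γ/(1 − E_s·Γ), where E_d, E_s and E_r are the directivity, source-match and reflection-tracking errors; measuring an ideal short (Γ = −1), open (Γ = +1) and load (Γ = 0) lets the three terms be solved for and later measurements corrected. Real calibration kits use imperfect, frequency-dependent standard definitions, so this is only a simplified illustration; the function names are assumptions, not part of any instrument's interface.

```python
def solve_one_port_errors(gamma_m_open, gamma_m_short, gamma_m_load):
    """Solve directivity, source match and reflection tracking from ideal
    open (+1), short (-1) and load (0) measurements at one frequency."""
    e_d = gamma_m_load                       # with an ideal load, only directivity is seen
    a = gamma_m_open - e_d                   # equals e_r / (1 - e_s)
    b = gamma_m_short - e_d                  # equals -e_r / (1 + e_s)
    e_s = (a + b) / (a - b)                  # source match
    e_r = a * (1 - e_s)                      # reflection tracking
    return e_d, e_s, e_r

def correct(gamma_m, e_d, e_s, e_r):
    """Apply the three-term correction to a raw reflection measurement."""
    return (gamma_m - e_d) / (e_r + e_s * (gamma_m - e_d))

# Quick self-check with made-up error terms at a single frequency.
e_d, e_s, e_r = 0.05 + 0.02j, 0.1 - 0.05j, 0.9 + 0.1j
def measure(gamma):                          # forward model of the imperfect test set
    return e_d + e_r * gamma / (1 - e_s * gamma)

found = solve_one_port_errors(measure(1.0), measure(-1.0), measure(0.0))
dut_true = 0.3 + 0.4j
print(correct(measure(dut_true), *found))    # recovers roughly 0.3 + 0.4j
```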
A calibration using a mechanical calibration kit may take a significant amount of time. Not only must the operator sweep through all the frequencies of interest, but the operator must also disconnect and reconnect the various standards. ( Keysight Technologies 2003 , p. 9) To avoid that work, network analyzers can employ automated calibration standards. ( Keysight Technologies 2003 ) The operator connects one box to the network analyzer. The box has a set of standards inside and some switches that have already been characterized. The network analyzer can read the characterization and control the configuration using a digital bus such as USB. Many verification kits are available to verify the network analyzer is performing to specification. These typically consist of transmission lines with an air dielectric and attenuators. The Keysight 85055A verification kit includes a 10 cm airline, stepped impedance airline, 20 dB and 50 dB attenuators with data on the devices measured by the manufacturer and stored on both a floppy disk and USB flash drive. Older versions of the 85055A have the data stored on tape and floppy disks rather than on USB drives. Verification kits are also manufactured for other transmission lines such as waveguide which contain a known through mismatch and attenuations. The Flann verification kit includes 5 mismatches using a decrease in waveguide height to provide a known VSWR and 2 attenuators of differing attenuation levels. The three major manufacturers of VNAs, Keysight , Anritsu , and Rohde & Schwarz , all produce models which permit the use of noise figure measurements. The vector error correction permits higher accuracy than is possible with other forms of commercial noise figure meters.
https://en.wikipedia.org/wiki/Network_analyzer_(electrical)
Network behavior anomaly detection ( NBAD ) is a security technique that provides network security threat detection. It is a complementary technology to systems that detect security threats based on packet signatures . [ 1 ] NBAD is the continuous monitoring of a network for unusual events or trends. NBAD is an integral part of network behavior analysis (NBA), which offers security in addition to that provided by traditional anti-threat applications such as firewalls, intrusion detection systems, antivirus software and spyware -detection software. Most security monitoring systems utilize a signature-based approach to detect threats. They generally monitor packets on the network and look for patterns in the packets which match their database of signatures representing pre-identified known security threats. NBAD-based systems are particularly helpful in detecting security threat vectors in two instances where signature-based systems cannot: (i) new zero-day attacks, and (ii) when the threat traffic is encrypted such as the command and control channel for certain Botnets. An NBAD program tracks critical network characteristics in real time and generates an alarm if a strange event or trend is detected that could indicate the presence of a threat. Large-scale examples of such characteristics include traffic volume, bandwidth use and protocol use. NBAD solutions can also monitor the behavior of individual network subscribers. In order for NBAD to be optimally effective, a baseline of normal network or user behavior must be established over a period of time. Once certain parameters have been defined as normal, any departure from one or more of them is flagged as anomalous. NBAD technology/techniques are applied in a number of network and security monitoring domains including: (i) Log analysis (ii) Packet inspection systems (iii) Flow monitoring systems and (iv) Route analytics . NBAD has also been described as outlier detection, novelty detection, deviation detection and exception mining. [ 2 ]
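A minimal sketch of the baseline-and-deviation idea described above: learn the mean and spread of one traffic metric (here, bytes per minute) from a training window, then flag samples that sit too many standard deviations away. Real NBAD products model many correlated characteristics and adapt the baseline over time; the threshold and data here are illustrative assumptions.

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Baseline of 'normal' behaviour: mean and standard deviation of a metric."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag a sample whose z-score exceeds the chosen threshold."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Bytes-per-minute observed during a quiet training period (made-up numbers).
training = [980, 1_020, 1_050, 990, 1_010, 1_000, 970, 1_030]
baseline = build_baseline(training)

for observed in (1_025, 5_400):
    print(observed, "anomalous" if is_anomalous(observed, baseline) else "normal")
```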
https://en.wikipedia.org/wiki/Network_behavior_anomaly_detection
Network calculus is "a set of mathematical results which give insights into man-made systems such as concurrent programs , digital circuits and communication networks ." [ 1 ] Network calculus gives a theoretical framework for analysing performance guarantees in computer networks . As traffic flows through a network it is subject to constraints imposed by the system components, for example: These constraints can be expressed and analysed with network calculus methods. Constraint curves can be combined using convolution under min-plus algebra . Network calculus can also be used to express traffic arrival and departure functions as well as service curves. The calculus uses "alternate algebras ... to transform complex non-linear network systems into analytically tractable linear systems." [ 2 ] Currently, there exists two branches in network calculus: one handling deterministic bounded, and one handling stochastic bounds. [ 3 ] In network calculus, a flow is modelled as cumulative functions A , where A(t) represents the amount of data (number of bits for example) sent by the flow in the interval [0,t) . Such functions are non-negative and non-decreasing. The time domain is often the set of non negative reals. A : R + → R + {\displaystyle A:\mathbb {R} ^{+}\rightarrow \mathbb {R} ^{+}} ∀ u , t ∈ R + : u < t ⟹ A ( u ) ≤ A ( t ) {\displaystyle \forall u,t\in \mathbb {R} ^{+}:u<t\implies A(u)\leq A(t)} A server can be a link, a scheduler, a traffic shaper, or a whole network. It is simply modelled as a relation between some arrival cumulative curve A and some departure cumulative curve D . It is required that A ≥ D , to model the fact that the departure of some data can not occur before its arrival. Given some arrival and departure curve A and D , the backlog at any instant t , denoted b(A,D,t) can be defined as the difference between A and D . The delay at t , d(A,D,t) is defined as the minimal amount of time such that the departure function reached the arrival function. When considering the whole flows, the supremum of these values is used. b ( A , D , t ) := A ( t ) − D ( t ) {\displaystyle b(A,D,t):=A(t)-D(t)} d ( A , D , t ) := inf { d ∈ R + s . t . D ( t + d ) ≥ A ( t ) } {\displaystyle d(A,D,t):=\inf \left\{d\in \mathbb {R} ^{+}~s.t.~D(t+d)\geq A(t)\right\}} b ( A , D ) := sup t ≥ 0 { A ( t ) − D ( t ) } {\displaystyle b(A,D):=\sup _{t\geq 0}\left\{A(t)-D(t)\right\}} d ( A , D ) := sup t ≥ 0 { inf { d ∈ R + s . t . D ( t + d ) ≥ A ( t ) } } {\displaystyle d(A,D):=\sup _{t\geq 0}\left\{\inf \left\{d\in \mathbb {R} ^{+}~s.t.~D(t+d)\geq A(t)\right\}\right\}} In general, the flows are not exactly known, and only some constraints on flows and servers are known (like the maximal number of packet sent on some period, the maximal size of packets, the minimal link bandwidth). The aim of network calculus is to compute upper bounds on delay and backlog, based on these constraints. To do so, network calculus uses the min-plus algebra. Network calculus makes an intensive use on the min-plus semiring (sometimes called min-plus algebra). In filter theory and linear systems theory the convolution of two functions f {\displaystyle f} and g {\displaystyle g} is defined as ( f ∗ g ) ( t ) := ∫ 0 t f ( τ ) ⋅ g ( t − τ ) d τ {\displaystyle (f\ast g)(t):=\int _{0}^{t}f(\tau )\cdot g(t-\tau )d\tau } In min-plus semiring the sum is replaced by the minimum respectively infimum operator and the product is replaced by the sum . 
So the min-plus convolution of two functions f {\displaystyle f} and g {\displaystyle g} becomes ( f ⊗ g ) ( t ) := inf 0 ≤ τ ≤ t { f ( τ ) + g ( t − τ ) } {\displaystyle (f\otimes g)(t):=\inf _{0\leq \tau \leq t}\left\{f(\tau )+g(t-\tau )\right\}} e.g. see the definition of service curves. Convolution and min-plus convolution share many algebraic properties. In particular both are commutative and associative. A so-called min-plus de-convolution operation is defined as ( f ⊘ g ) ( t ) := sup τ ≥ 0 { f ( t + τ ) − g ( τ ) } {\displaystyle (f\oslash g)(t):=\sup _{\tau \geq 0}\left\{f(t+\tau )-g(\tau )\right\}} e.g. as used in the definition of traffic envelopes. The vertical and horizontal deviations can be expressed in terms of min-plus operators. b ( f , g ) = ( f ⊘ g ) ( 0 ) {\displaystyle b(f,g)=(f\oslash g)(0)} d ( f , g ) = inf { w : ( f ⊘ g ) ( − w ) ≤ 0 } {\displaystyle d(f,g)=\inf\{w:(f\oslash g)(-w)\leq 0\}} Cumulative curves are real behaviours, unknown at design time. What is known is some constraint. Network calculus uses the notion of traffic envelope, also known as arrival curves. A cumulative function A is said to conform to an envelope E (also called arrival curve and denoted α ) , if for all t it holds that E ( t ) ≥ sup τ ≥ 0 { A ( t + τ ) − A ( τ ) } = ( A ⊘ A ) ( t ) . {\displaystyle E(t)\geq \sup _{\tau \geq 0}\{A(t+\tau )-A(\tau )\}=(A\oslash A)(t).} Two equivalent definitions can be given ∀ t , d ∈ R + : A ( t + d ) − A ( t ) ≤ E ( d ) {\displaystyle \forall t,d\in \mathbb {R} ^{+}:A(t+d)-A(t)\leq E(d)} A ≤ A ⊗ E {\displaystyle A\leq A\otimes E} Thus, E places an upper constraint on flow A . Such function E can be seen as an envelope that specifies an upper bound on the number of bits of flow seen in any interval of length d starting at an arbitrary t , cf. eq. ( 1 ). In order to provide performance guarantees to traffic flows it is necessary to specify some minimal performance of the server (depending on reservations in the network, or scheduling policy, etc.). Service curves provide a means of expressing resource availability. Several kinds of service curves exists, like weakly strict, variable capacity node, etc. See [ 4 ] [ 5 ] for an overview. Let A be an arrival flow, arriving at the ingress of a server, and D be the flow departing at the egress. The system is said to provide a simple minimal service curve S to the pair (A,B) , if for all t it holds that D ( t ) ≥ ( A ⊗ S ) ( t ) . {\displaystyle D(t)\geq (A\otimes S)(t).} Let A be an arrival flow, arriving at the ingress of a server, and D be the flow departing at the egress. A backlog period is an interval I such that, on any t ∈ I , A(t)>D(t) . The system is said to provide a strict minimal service curve S to the pair (A,B) iff, ∀ s , t ∈ R + {\displaystyle \forall s,t\in \mathbb {R} ^{+}} , such that s ≤ t {\displaystyle s\leq t} , if ( s , t ] {\displaystyle (s,t]} is a backlog period, then D ( t ) − D ( s ) ≥ S ( t − s ) {\displaystyle D(t)-D(s)\geq S(t-s)} . If a server offers a strict minimal service of curve S , it also offers a simple minimal service of curve S . Depending on the authors, on the purpose of the paper, different notations or even names are used for the same notion. From traffic envelope and service curves, some bounds on the delay and backlog, and an envelope on the departure flow can be computed. Let A be an arrival flow, arriving at the ingress of a server, and D be the flow departing at the egress. 
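To make the min-plus operations above concrete, the sketch below evaluates the min-plus convolution and de-convolution of two curves sampled on a discrete time grid. Working on sampled functions is only an approximation of the continuous definitions, and the curve choices (a token-bucket style envelope and a rate-latency service curve) are illustrative assumptions.

```python
def min_plus_conv(f, g):
    """(f (x) g)(t) = min over 0 <= s <= t of f(s) + g(t - s), on integer time steps."""
    n = min(len(f), len(g))
    return [min(f[s] + g[t - s] for s in range(t + 1)) for t in range(n)]

def min_plus_deconv(f, g):
    """(f (/) g)(t) = sup over s >= 0 of f(t + s) - g(s), truncated to the sampled horizon."""
    n = len(f)
    return [max(f[t + s] - g[s] for s in range(n - t)) for t in range(n)]

HORIZON = 20
token_bucket = [2.0 + 1.0 * t for t in range(HORIZON)]                    # E(t) = b + r*t with b=2, r=1
rate_latency = [0.0 if t < 4 else 3.0 * (t - 4) for t in range(HORIZON)]  # S(t) = R*(t-T)+ with R=3, T=4

print(min_plus_conv(token_bucket, rate_latency)[:8])
print(min_plus_deconv(token_bucket, rate_latency)[:8])   # an envelope of the departures, E (/) S
```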
From the traffic envelope and the service curve, bounds on the delay and backlog, and an envelope on the departure flow, can be computed. Let A be an arrival flow, arriving at the ingress of a server, and D be the flow departing at the egress. If the flow has a traffic envelope E , and the server provides a minimal service of curve S , then the backlog and delay can be bounded: {\displaystyle b(A,D)\leq b(E,S)} {\displaystyle d(A,D)\leq d(E,S)} Moreover, the departure curve has envelope {\displaystyle E'=E\oslash S} . These bounds are also tight, i.e. given some E and S , one may build an arrival and a departure such that b(A,D) = b(E,S) and d(A,D) = d(E,S) . Consider a sequence of two servers, where the output of the first one is the input of the second one. This sequence can be seen as a new server, built as the concatenation of the two others. Then, if the first (resp. second) server offers a simple minimal service {\displaystyle S_{1}} (resp. {\displaystyle S_{2}} ), then the concatenation of both offers a simple minimal service {\displaystyle S_{e2e}=S_{1}\otimes S_{2}} . The proof applies the definition of service curves iteratively, {\displaystyle X\geq A\otimes S_{1}} , {\displaystyle D\geq X\otimes S_{2}} , together with properties of the convolution: isotonicity ( {\displaystyle D\geq (A\otimes S_{1})\otimes S_{2}} ) and associativity ( {\displaystyle D\geq A\otimes (S_{1}\otimes S_{2})} ). The interest of this result is that the end-to-end delay bound is not greater than the sum of the local delay bounds: {\displaystyle d(E,S_{2}\otimes S_{1})\leq d(E,S_{1})+d(E\oslash S_{1},S_{2})} . This result is known as Pay Burst Only Once (PBOO). There are several tools based on network calculus; a comparison can be found in [ 6 ]. There also exist several tools and libraries devoted to the min-plus algebra, all of which are based on the algorithms presented in [ 9 ]. WoNeCa (Workshop on Network Calculus) is organized every two years to bring together researchers with an interest in the theory of network calculus as well as those who want to apply existing results to new applications. The workshop also serves to promote network calculus theory to researchers with an interest in applied queueing models. In 2018, the International Workshop on Network Calculus and Applications (NetCal 2018) was held in Vienna, Austria as a part of the 30th International Teletraffic Congress (ITC 30). In 2024, the network calculus Dagstuhl seminar (24141) was held from 1 April to 4 April in Dagstuhl, Germany.
https://en.wikipedia.org/wiki/Network_calculus
Network congestion in data networking and queueing theory is the reduced quality of service that occurs when a network node or link is carrying more data than it can handle. Typical effects include queueing delay , packet loss or the blocking of new connections. A consequence of congestion is that an incremental increase in offered load leads either only to a small increase or even a decrease in network throughput . [ 1 ] Network protocols that use aggressive retransmissions to compensate for packet loss due to congestion can increase congestion, even after the initial load has been reduced to a level that would not normally have induced network congestion. Such networks exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse . Networks use congestion control and congestion avoidance techniques to try to avoid collapse. These include: exponential backoff in protocols such as CSMA/CA in 802.11 and the similar CSMA/CD in the original Ethernet , window reduction in TCP , and fair queueing in devices such as routers and network switches . Other techniques that address congestion include priority schemes which transmit some packets with higher priority ahead of others and the explicit allocation of network resources to specific flows through the use of admission control . Network resources are limited, including router processing time and link throughput . Resource contention may occur on networks in several common circumstances. A wireless LAN is easily filled by a single personal computer. [ 2 ] Even on fast computer networks, the backbone can easily be congested by a few servers and client PCs. Denial-of-service attacks by botnets are capable of filling even the largest Internet backbone network links, generating large-scale network congestion. In telephone networks, a mass call event can overwhelm digital telephone circuits, in what can otherwise be defined as a denial-of-service attack. Congestive collapse (or congestion collapse) is the condition in which congestion prevents or limits useful communication. Congestion collapse generally occurs at choke points in the network, where incoming traffic exceeds outgoing bandwidth. Connection points between a local area network and a wide area network are common choke points. When a network is in this condition, it settles into a stable state where traffic demand is high but little useful throughput is available, during which packet delay and loss occur and quality of service is extremely poor. Congestive collapse was identified as a possible problem by 1984. [ 3 ] It was first observed on the early Internet in October 1986, [ 4 ] when the NSFNET phase-I backbone dropped three orders of magnitude from its capacity of 32 kbit/s to 40 bit/s, [ 5 ] which continued until end nodes started implementing Van Jacobson and Sally Floyd 's congestion control between 1987 and 1988. [ 6 ] When more packets were sent than could be handled by intermediate routers, the intermediate routers discarded many packets, expecting the endpoints of the network to retransmit the information. However, early TCP implementations had poor retransmission behavior. When this packet loss occurred, the endpoints sent extra packets that repeated the information lost, doubling the incoming rate. Congestion control modulates traffic entry into a telecommunications network in order to avoid congestive collapse resulting from oversubscription. [ 7 ] This is typically accomplished by reducing the rate of packets. 
Whereas congestion control prevents senders from overwhelming the network , flow control prevents the sender from overwhelming the receiver . The theory of congestion control was pioneered by Frank Kelly , who applied microeconomic theory and convex optimization theory to describe how individuals controlling their own rates can interact to achieve an optimal network-wide rate allocation. Examples of optimal rate allocation are max-min fair allocation and Kelly's suggestion of proportionally fair allocation, although many others are possible. Let {\displaystyle x_{i}} be the rate of flow {\displaystyle i} , {\displaystyle c_{l}} be the capacity of link {\displaystyle l} , and {\displaystyle r_{li}} be 1 if flow {\displaystyle i} uses link {\displaystyle l} and 0 otherwise. Let {\displaystyle x} , {\displaystyle c} and {\displaystyle R} be the corresponding vectors and matrix. Let {\displaystyle U(x)} be an increasing, strictly concave function , called the utility , which measures how much benefit a user obtains by transmitting at rate {\displaystyle x} . The optimal rate allocation then satisfies {\displaystyle \max _{x}\sum _{i}U(x_{i})} subject to {\displaystyle Rx\leq c} . The Lagrange dual of this problem decouples so that each flow sets its own rate, based only on a price signaled by the network. Each link capacity imposes a constraint, which gives rise to a Lagrange multiplier , {\displaystyle p_{l}} . The sum of these multipliers, {\displaystyle y_{i}=\sum _{l}p_{l}r_{li},} is the price to which the flow responds. Congestion control then becomes a distributed optimization algorithm. Many current congestion control algorithms can be modeled in this framework, with {\displaystyle p_{l}} being either the loss probability or the queueing delay at link {\displaystyle l} . A major weakness is that it assigns the same price to all flows, while sliding window flow control causes burstiness that causes different flows to observe different loss or delay at a given link.
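A minimal numerical sketch of this dual decomposition, assuming logarithmic utilities U_i(x) = w_i log x and a toy two-link topology (all values below are illustrative assumptions, not from the article):

    import numpy as np

    R = np.array([[1, 1, 0],        # routing matrix: R[l, i] = 1 if flow i crosses link l
                  [0, 1, 1]])
    c = np.array([1.0, 1.0])        # link capacities
    w = np.array([1.0, 1.0, 1.0])   # flow weights
    p = np.ones(2)                  # link prices (Lagrange multipliers)

    for _ in range(5000):
        q = R.T @ p                                   # y_i: total price seen by flow i
        x = w / np.maximum(q, 1e-6)                   # flow's best response: U_i'(x_i) = q_i
        p = np.maximum(p + 0.02 * (R @ x - c), 0.0)   # price rises where demand exceeds capacity

    print(np.round(x, 2))   # approaches the proportionally fair allocation, about [0.67, 0.33, 0.67]

Each link raises its price from its own load only, each flow reacts only to the summed price along its path, and the rates settle near the proportionally fair allocation, which is the distributed-optimization view described above.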
Congestion control algorithms can be classified in several ways. Mechanisms have been invented to prevent network congestion or to deal with a network collapse. The correct endpoint behavior is usually to repeat dropped information, but progressively slow the repetition rate. Provided all endpoints do this, the congestion lifts and the network resumes normal behavior. [ citation needed ] Other strategies such as slow start ensure that new connections do not overwhelm the router before congestion detection initiates. Common router congestion avoidance mechanisms include fair queuing and other scheduling algorithms , and random early detection , where packets are randomly dropped as congestion is detected. This proactively triggers the endpoints to slow transmission before congestion collapse occurs. Some end-to-end protocols are designed to behave well under congested conditions; TCP is a well-known example. The first TCP implementations to handle congestion were described in 1984, [ 8 ] but Van Jacobson's inclusion of an open source solution in the Berkeley Software Distribution UNIX (" BSD ") in 1988 first provided good behavior. UDP does not control congestion. Protocols built atop UDP must handle congestion independently. Protocols that transmit at a fixed rate, independent of congestion, can be problematic. Real-time streaming protocols, including many Voice over IP protocols, have this property. Thus, special measures, such as quality of service, must be taken to keep packets from being dropped in the presence of congestion. Connection-oriented protocols , such as the widely used TCP protocol, watch for packet loss or queuing delay to adjust their transmission rate. Various network congestion avoidance processes support different trade-offs. [ 9 ] The TCP congestion avoidance algorithm is the primary basis for congestion control on the Internet. [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] Problems occur when concurrent TCP flows experience tail drops , especially when bufferbloat is present. This delayed packet loss interferes with TCP's automatic congestion avoidance. All flows that experience this packet loss begin a TCP retrain at the same moment – this is called TCP global synchronization . Active queue management (AQM) is the reordering or dropping of network packets inside a transmit buffer that is associated with a network interface controller (NIC). This task is performed by the network scheduler . One solution is to use random early detection (RED) on the network equipment's egress queue. [ 15 ] [ 16 ] On networking hardware ports with more than one egress queue, weighted random early detection (WRED) can be used. RED indirectly signals the TCP sender and receiver by dropping some packets, e.g. when the average queue length exceeds a threshold (e.g. 50%), and drops linearly or cubically more packets, [ 17 ] up to e.g. 100%, as the queue fills further. The robust random early detection (RRED) algorithm was proposed to improve TCP throughput against denial-of-service (DoS) attacks, particularly low-rate denial-of-service (LDoS) attacks. Experiments confirmed that RED-like algorithms were vulnerable under LDoS attacks due to the oscillating TCP queue size caused by the attacks. [ 18 ] Some network equipment is equipped with ports that can follow and measure each flow and are thereby able to signal a too-large bandwidth flow according to some quality of service policy. A policy could then divide the bandwidth among all flows by some criteria. [ 19 ] Another approach is to use Explicit Congestion Notification (ECN). [ 20 ] ECN is used only when two hosts signal that they want to use it. With this method, a protocol bit is used to signal explicit congestion. This is better than the indirect congestion notification signaled by packet loss by the RED/WRED algorithms, but it requires support by both hosts. [ 21 ] [ 15 ] When a router receives a packet marked as ECN-capable and the router anticipates congestion, it sets the ECN flag, notifying the sender of congestion. The sender should respond by decreasing its transmission bandwidth, e.g., by decreasing its sending rate by reducing the TCP window size or by other means. The L4S protocol is an enhanced version of ECN which allows senders to collaborate with network devices to control congestion. [ 22 ] Congestion avoidance can be achieved efficiently by reducing traffic.
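A minimal sketch of the linear drop-probability ramp used by classical RED, as described above (the threshold and probability values are assumed, and real implementations add refinements such as averaging and count-based correction):

    def red_drop_probability(avg_queue, min_th=5.0, max_th=15.0, max_p=0.1):
        # Below min_th nothing is dropped; between the thresholds the drop probability
        # ramps linearly up to max_p; at or above max_th every arriving packet is dropped.
        if avg_queue < min_th:
            return 0.0
        if avg_queue >= max_th:
            return 1.0
        return max_p * (avg_queue - min_th) / (max_th - min_th)

    print(red_drop_probability(10.0))   # 0.05 for a half-full averaged queue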
When an application requests a large file, graphic or web page, it usually advertises a window of between 32K and 64K. This results in the server sending a full window of data (assuming the file is larger than the window). When many applications simultaneously request downloads, this data can create a congestion point at an upstream provider. By reducing the window advertisement, the remote servers send less data, thus reducing the congestion. [ 23 ] [ 24 ] Backward ECN (BECN) is another proposed congestion notification mechanism. It uses ICMP source quench messages as an IP signaling mechanism to implement a basic ECN mechanism for IP networks, keeping congestion notifications at the IP level and requiring no negotiation between network endpoints. Effective congestion notifications can be propagated to transport layer protocols, such as TCP and UDP, for the appropriate adjustments. [ 25 ] The protocols that avoid congestive collapse generally assume that data loss is caused by congestion. On wired networks, errors during transmission are rare. WiFi , 3G and other networks with a radio layer are susceptible to data loss due to interference and may experience poor throughput in some cases. The TCP connections running over a radio-based physical layer see the data loss and tend to erroneously believe that congestion is occurring. The slow-start protocol performs badly for short connections. Older web browsers created many short-lived connections and opened and closed the connection for each file. This kept most connections in the slow start mode. Initial performance can be poor, and many connections never get out of the slow-start regime, significantly increasing latency. To avoid this problem, modern browsers either open multiple connections simultaneously or reuse one connection for all files requested from a particular server. Admission control is any system that requires devices to receive permission before establishing new network connections. If the new connection risks creating congestion, permission can be denied. Examples include Contention-Free Transmission Opportunities (CFTXOPs) in the ITU-T G.hn standard for home networking over legacy wiring, Resource Reservation Protocol for IP networks and Stream Reservation Protocol for Ethernet .
https://en.wikipedia.org/wiki/Network_congestion
A network solid or covalent network solid (also called atomic crystalline solids or giant covalent structures ) [ 1 ] [ 2 ] is a chemical compound (or element) in which the atoms are bonded by covalent bonds in a continuous network extending throughout the material. In a network solid there are no individual molecules , and the entire crystal or amorphous solid may be considered a macromolecule . Formulas for network solids, like those for ionic compounds , are simple ratios of the component atoms represented by a formula unit . [ 3 ] Examples of network solids include diamond with a continuous network of carbon atoms and silicon dioxide or quartz with a continuous three-dimensional network of SiO 2 units. Graphite and the mica group of silicate minerals structurally consist of continuous two-dimensional sheets covalently bonded within the layer, with other bond types holding the layers together. [ 3 ] Disordered network solids are termed glasses . These are typically formed on rapid cooling of melts so that little time is left for atomic ordering to occur. [ 4 ]
https://en.wikipedia.org/wiki/Network_covalent_bonding
Network detectors or network discovery software [ 1 ] are computer programs that facilitate detection of wireless LANs using the 802.11b, 802.11a and 802.11g WLAN standards. [ 2 ] Discovering networks may be done through active as well as passive scanning. Active scanning is done through sending multiple probe requests and recording the probe responses. The probe response received normally contains BSSID and WLAN SSID . If SSID broadcasting has been turned off, and active scanning is the only type of scanning supported by the software, no networks will show up. An example of an active scanner is NetStumbler . Passive scanning is not done by active probing, but by mere listening to any data sent out by the AP. Once a legitimate user connects to the AP, the AP will eventually send out a SSID in cleartext. By impersonating this AP by automatic altering of the MAC address, the computer running the network discovery scanner will be given this SSID by legitimate users. Passive scanners include Kismet and essid jack (a program under AirJack ). Notable programs include Network Stumbler , Kismet , Lumeta Corporation , Aerosol, AirMagnet , MacStumbler , Ministumbler , Mognet, NetChaser , perlskan , Wireless Security Auditor , Wlandump , PocketWarrior , pocketWinc , Prismstumbler , Sniff-em , AiroPeek , Airscanner , AP Scanner , AP Radar , Apsniff , BSD-Airtools , dstumbler , gtk-scanner , gWireless , iStumbler , KisMAC , Sniffer Wireless , THC-Scan , THC-Wardrive , WarGlue , WarKizniz , Wellenreiter, Wi-Scan and WiStumbler .
https://en.wikipedia.org/wiki/Network_detector
Network engineering may refer to:
https://en.wikipedia.org/wiki/Network_engineering
Network enumeration is a computing activity in which usernames and information on groups, shares, and services of networked computers are retrieved. It should not be confused with network mapping , which only retrieves information about which servers are connected to a specific network and what operating system runs on them. Network enumeration is the discovery of hosts or devices on a network . Network enumeration tends to use overt discovery protocols such as ICMP and SNMP to gather information. It may also scan various ports on remote hosts, looking for well-known services, in an attempt to further identify the function of a remote host. The next stage of enumeration is to fingerprint the operating system of the remote host. A network enumerator (also network scanner ) is a computer program used to retrieve usernames and information on groups, shares, and services of networked computers. This type of program scans networks for vulnerabilities in the security of that network. If there is a vulnerability in the security of the network, it will send a report back to a hacker, who may use this information to exploit that network flaw to gain entry to the network or for other malicious activities. Ethical hackers often also use the information to remove the flaws and strengthen their network. Malicious (or " black-hat ") hackers can, on entry to the network, get to security-sensitive information or corrupt the network, making it useless. If this network belonged to a company which used it on a regular basis, the company would lose the ability to send information internally to other departments. Network enumerators are often used by script kiddies for ease of use, as well as by more experienced hackers in cooperation with other programs/manual lookups. Also, whois queries, zone transfers , ping sweeps , and traceroute can be performed. [ 1 ]
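As a simple illustration of probing for well-known services, the sketch below performs a plain TCP "connect" check against a few ports; it is only a toy example (host and port values are placeholders) and should be run solely against hosts one is authorized to test:

    import socket

    def probe_ports(host, ports=(22, 80, 443), timeout=0.5):
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:   # 0 means the TCP handshake succeeded
                    open_ports.append(port)
        return open_ports

    print(probe_ports("127.0.0.1"))   # list of responding ports on the local machine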
https://en.wikipedia.org/wiki/Network_enumeration
Network for Astronomy School Education (NASE) is an International Astronomical Union (IAU) Working Group that works on training teachers for primary and secondary schools. In 2007, professor George K. Miley , IAU vice-president, invited Rosa M. Ros to begin exploring the idea of setting up an astronomy program to give primary and secondary school teachers a better preparation in this area of knowledge. The NASE Group was born when Rosa Maria Ros and Alexandre Costa were sent by UNESCO and the IAU to give two courses in Peru and Ecuador in July 2009. Shortly after, NASE was officially created in August 2009 during the IAU's General Assembly at Rio de Janeiro. From there on, more than 80 courses have been presented worldwide. The basic NASE course is organized around a fixed set of core astronomy topics. NASE classes were designed for developing countries where teachers do not have many financial resources. NASE Working Group members go to these countries for the first time to prepare a local task group that will disseminate astronomy knowledge and inexpensive didactic materials. The main goal is precisely to set up in each country a local group of NASE members who carry on teaching the essential NASE course [ 1 ] every year and to create new inexpensive didactic experiments , demonstrations and astronomical instruments . This has allowed the building of a very large repository of educational materials for astronomy, with PowerPoint presentations, [ 2 ] animations , articles and lectures , [ 3 ] photos , games , simulation websites , [ 4 ] interactive programs (e.g. Stellarium [ 5 ] ) and videos . NASE has now given more than seventy courses, mainly in South America, Africa and Asia. NASE has also cooperated with other associations to promote teacher training in astronomy, namely with UNESCO and the European Association for Astronomy Education (EAAE).
https://en.wikipedia.org/wiki/Network_for_Astronomy_School_Education
Network motifs are recurrent and statistically significant subgraphs or patterns of a larger graph . All networks, including biological networks , social networks, technological networks (e.g., computer networks and electrical circuits) and more, can be represented as graphs, which include a wide variety of subgraphs. [ citation needed ] Network motifs are sub-graphs that repeat themselves in a specific network or even among various networks. Each of these sub-graphs, defined by a particular pattern of interactions between vertices, may reflect a framework in which particular functions are achieved efficiently. Indeed, motifs are of notable importance largely because they may reflect functional properties. They have recently [ when? ] gathered much attention as a useful concept to uncover structural design principles of complex networks. [ 1 ] Although network motifs may provide a deep insight into the network's functional abilities, their detection is computationally challenging. [ citation needed ] Let G = (V, E) and G′ = (V′, E′) be two graphs. Graph G′ is a sub-graph of graph G (written as G′ ⊆ G ) if V′ ⊆ V and E′ ⊆ E ∩ (V′ × V′) . If G′ ⊆ G and G′ contains all of the edges ⟨u, v⟩ ∈ E with u, v ∈ V′ , then G′ is an induced sub-graph of G . We call G′ and G isomorphic (written as G′ ↔ G ), if there exists a bijection (one-to-one correspondence) f:V′ → V with ⟨u, v⟩ ∈ E′ ⇔ ⟨f(u), f(v)⟩ ∈ E for all u, v ∈ V′ . The mapping f is called an isomorphism between G and G′ . [ 2 ] When G″ ⊂ G and there exists an isomorphism between the sub-graph G″ and a graph G′ , this mapping represents an appearance of G′ in G . The number of appearances of graph G′ in G is called the frequency F G of G′ in G . A graph is called recurrent (or frequent ) in G when its frequency F G (G′) is above a predefined threshold or cut-off value. We use terms pattern and frequent sub-graph in this review interchangeably. There is an ensemble Ω(G) of random graphs corresponding to the null-model associated to G . We should choose N random graphs uniformly from Ω(G) and calculate the frequency for a particular frequent sub-graph G′ in G . If the frequency of G′ in G is higher than its arithmetic mean frequency in N random graphs R i , where 1 ≤ i ≤ N , we call this recurrent pattern significant and hence treat G′ as a network motif for G . For a small graph G′ , the network G , and a set of randomized networks R(G) ⊆ Ω(R) , where R(G) = N , the Z-score of the frequency of G′ is given by Z ( G ′ ) = F G ( G ′ ) − μ R ( G ′ ) σ R ( G ′ ) {\displaystyle Z(G^{\prime })={\frac {F_{G}(G^{\prime })-\mu _{R}(G^{\prime })}{\sigma _{R}(G^{\prime })}}} where μ R (G′) and σ R (G′) stand for the mean and standard deviation of the frequency in set R(G) , respectively. [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] The larger the Z(G′) , the more significant is the sub-graph G′ as a motif. Alternatively, another measurement in statistical hypothesis testing that can be considered in motif detection is the p -value , given as the probability of F R (G′) ≥ F G (G′) (as its null-hypothesis), where F R (G′) indicates the frequency of G' in a randomized network. [ 6 ] A sub-graph with p -value less than a threshold (commonly 0.01 or 0.05) will be treated as a significant pattern. 
The p -value for the frequency of G′ is defined as {\displaystyle P(G^{\prime })={\frac {1}{N}}\sum _{i=1}^{N}\delta (c(i))\quad c(i):F_{R}^{i}(G^{\prime })\geq F_{G}(G^{\prime })} where N indicates the number of randomized networks, i is defined over the ensemble of randomized networks, and the Kronecker delta function δ(c(i)) is one if the condition c(i) holds. The concentration [ 9 ] [ 10 ] of a particular n-size sub-graph G′ in network G refers to the ratio of the sub-graph's appearances in the network to the total frequency of all n -size non-isomorphic sub-graphs, which is formulated by {\displaystyle C_{G}(G^{\prime })={\frac {F_{G}(G^{\prime })}{\sum _{i}F_{G}(G_{i})}}} where the index i is defined over the set of all non-isomorphic n-size graphs. Another statistical measurement has been defined for evaluating network motifs, but it is rarely used in known algorithms. This measurement was introduced by Picard et al. in 2008 and uses the Poisson distribution, rather than the Gaussian normal distribution that is implicitly used above. [ 11 ] In addition, three specific concepts of sub-graph frequency have been proposed. [ 12 ] As the figure illustrates, the first frequency concept F 1 considers all matches of a graph in the original network. This definition is similar to what we have introduced above. The second concept F 2 is defined as the maximum number of edge-disjoint instances of a given graph in the original network. And finally, the frequency concept F 3 entails matches with disjoint edges and nodes. Therefore, the two concepts F 2 and F 3 restrict the usage of elements of the graph, and, as can be inferred, the frequency of a sub-graph declines when restrictions on network element usage are imposed. As a result, a network motif detection algorithm would pass over more candidate sub-graphs if we insist on frequency concepts F 2 and F 3 . [ citation needed ] The study of network motifs was pioneered by Holland and Leinhardt, [ 13 ] [ 14 ] [ 15 ] [ 16 ] who introduced the concept of a triad census of networks. They introduced methods to enumerate various types of subgraph configurations, and to test whether the subgraph counts are statistically different from those expected in random networks. [ citation needed ] This idea was further generalized in 2002 by Uri Alon and his group [ 17 ] when network motifs were discovered in the gene regulation ( transcription ) network of the bacterium E. coli and then in a large set of natural networks. Since then, a considerable number of studies have been conducted on the subject. Some of these studies focus on the biological applications, while others focus on the computational theory of network motifs. [ citation needed ] The biological studies endeavor to interpret the motifs detected for biological networks. For example, in work following, [ 17 ] the network motifs found in E. coli were discovered in the transcription networks of other bacteria [ 18 ] as well as yeast [ 19 ] [ 20 ] and higher organisms. [ 21 ] [ 22 ] [ 23 ] A distinct set of network motifs was identified in other types of biological networks such as neuronal networks and protein interaction networks. [ 5 ] [ 24 ] [ 25 ] The computational research has focused on improving existing motif detection tools to assist the biological investigations and allow larger networks to be analyzed.
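Before turning to the individual algorithms, a minimal sketch of how the significance measures defined above are computed in practice from a randomized-network ensemble; the counts below are made-up illustrative numbers, not data from any study:

    import statistics

    f_original = 120                                          # frequency of G' in the real network
    f_random = [80, 95, 102, 88, 91, 79, 97, 85, 90, 93]      # frequencies of G' in N randomized networks

    mu, sigma = statistics.mean(f_random), statistics.stdev(f_random)
    z_score = (f_original - mu) / sigma                       # Z(G') as defined above
    p_value = sum(f >= f_original for f in f_random) / len(f_random)
    print(round(z_score, 2), p_value)                         # a large Z and small p suggest a motif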
Several different algorithms have been provided so far, which are elaborated in the next section in chronological order. [ citation needed ] Most recently, the acc-MOTIF tool to detect network motifs was released. Various solutions have been proposed for the challenging problem of network motif (NM) discovery. These algorithms can be classified under various paradigms such as exact counting methods, sampling methods, pattern growth methods and so on. However, motif discovery problem comprises two main steps: first, calculating the number of occurrences of a sub-graph and then, evaluating the sub-graph significance. The recurrence is significant if it is detectably far more than expected. Roughly speaking, the expected number of appearances of a sub-graph can be determined by a Null-model, which is defined by an ensemble of random networks with some of the same properties as the original network. [ citation needed ] Until 2004, the only exact counting method for NM detection was the brute-force one proposed by Milo et al. . [ 3 ] This algorithm was successful for discovering small motifs, but using this method for finding even size 5 or 6 motifs was not computationally feasible. Hence, a new approach to this problem was needed. [ citation needed ] Here, a review on computational aspects of major algorithms is given and their related benefits and drawbacks from an algorithmic perspective are discussed. [ citation needed ] The table below lists the motif discovery algorithms that will be described in this section. They can be divided into two general categories: those based on exact counting and those using statistical sampling and estimations instead. Because the second group does not count all the occurrences of a subgraph in the main network, the algorithms belonging to this group are faster, but they might yield in biased and unrealistic results. [ citation needed ] In the next level, the exact counting algorithms can be classified to network-centric and subgraph-centric methods. The algorithms of the first class search the given network for all subgraphs of a given size, while the algorithms falling into the second class first generate different possible non-isomorphic graphs of the given size, and then explore the network for each generated subgraph separately. Each approach has its advantages and disadvantages which are discussed below. [ citation needed ] The table also indicates whether an algorithm can be used for directed or undirected networks as well as induced or non-induced subgraphs. [ citation needed ] Kashtan et al. published mfinder , the first motif-mining tool, in 2004. [ 9 ] It implements two kinds of motif finding algorithms: a full enumeration and the first sampling method. Their sampling discovery algorithm was based on edge sampling throughout the network. This algorithm estimates concentrations of induced sub-graphs and can be utilized for motif discovery in directed or undirected networks. The sampling procedure of the algorithm starts from an arbitrary edge of the network that leads to a sub-graph of size two, and then expands the sub-graph by choosing a random edge that is incident to the current sub-graph. After that, it continues choosing random neighboring edges until a sub-graph of size n is obtained. Finally, the sampled sub-graph is expanded to include all of the edges that exist in the network between these n nodes. When an algorithm uses a sampling approach, taking unbiased samples is the most important issue that the algorithm might address. 
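A minimal Python sketch of the edge-sampling step just described (the adjacency-dict representation, comparable vertex labels and the omission of the final probability/weight calculation are simplifying assumptions):

    import random

    def sample_subgraph(adj, n):
        # start from a uniformly chosen edge of the undirected network
        edges = [(a, b) for a in adj for b in adj[a] if a < b]
        u, v = random.choice(edges)
        nodes = {u, v}
        while len(nodes) < n:
            # candidate edges incident to the current sub-graph, excluding internal edges
            frontier = [(a, b) for a in nodes for b in adj[a] if b not in nodes]
            if not frontier:
                return None                      # dead end: the component is smaller than n
            a, b = random.choice(frontier)
            nodes.add(b)
        # finally keep every edge of the network that runs between the sampled nodes
        induced_edges = {(a, b) for a in nodes for b in adj[a] if b in nodes and a < b}
        return nodes, induced_edges              # step 5 (sampling-probability weighting) omitted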
The sampling procedure, however, does not take samples uniformly, and therefore Kashtan et al. proposed a weighting scheme that assigns different weights to the different sub-graphs within the network. [ 9 ] The underlying principle of weight allocation is to exploit the information of the sampling probability for each sub-graph, i.e. probable sub-graphs obtain comparatively lower weights than improbable sub-graphs; hence, the algorithm must calculate the sampling probability of each sub-graph that has been sampled. This weighting technique assists mfinder to determine sub-graph concentrations impartially. In sharp contrast to exhaustive search, the computational time of the algorithm is, surprisingly, asymptotically independent of the network size. An analysis of the computational time of the algorithm has shown that it takes O(n^n) for each sample of a sub-graph of size n from the network. On the other hand, there is no analysis in [ 9 ] of the classification time of sampled sub-graphs, which requires solving the graph isomorphism problem for each sub-graph sample. Additionally, an extra computational effort is imposed on the algorithm by the sub-graph weight calculation. It is also unavoidable that the algorithm may sample the same sub-graph multiple times – spending time without gathering any information. [ 10 ] In conclusion, by taking advantage of sampling, the algorithm performs more efficiently than an exhaustive search algorithm; however, it only determines sub-graph concentrations approximately. This algorithm can find motifs up to size 6 because of its main implementation, and as a result it gives only the most significant motif, not all the others. Also, it is necessary to mention that this tool has no option of visual presentation. The sampling algorithm is shown briefly:
1. Pick a random edge e1 = (vi, vj). Update Es = {e1}, Vs = {vi, vj}.
2. Make a list L of all neighbor edges of Es. Omit from L all edges between members of Vs.
3. Pick a random edge e = (vk, vl) from L. Update Es = Es ∪ {e}, Vs = Vs ∪ {vk, vl}.
4. Repeat steps 2–3 until completing an n-node subgraph (until |Vs| = n).
5. Calculate the probability of sampling the picked n-node subgraph.
Schreiber and Schwöbbermeyer [ 12 ] proposed an algorithm named flexible pattern finder (FPF) for extracting frequent sub-graphs of an input network and implemented it in a system named Mavisto . [ 26 ] Their algorithm exploits the downward closure property, which is applicable for frequency concepts F2 and F3. The downward closure property asserts that the frequency of sub-graphs decreases monotonically with increasing sub-graph size; however, this property does not necessarily hold for frequency concept F1. FPF is based on a pattern tree (see figure) consisting of nodes that represent different graphs (or patterns), where the parent of each node is a sub-graph of its children nodes; in other words, the corresponding graph of each pattern tree node is expanded by adding a new edge to the graph of its parent node. At first, the FPF algorithm enumerates and maintains the information of all matches of the sub-graph located at the root of the pattern tree. Then, one by one, it builds child nodes of the previous node in the pattern tree by adding one edge supported by a matching edge in the target graph, and tries to extend all of the previous information about matches to the new sub-graph (child node).
In the next step, it decides whether the frequency of the current pattern is lower than a predefined threshold or not. If it is lower and if downward closure holds, FPF can abandon that path and not traverse further in this part of the tree; as a result, unnecessary computation is avoided. This procedure is continued until there is no remaining path to traverse. The advantage of the algorithm is that it does not consider infrequent sub-graphs and tries to finish the enumeration process as soon as possible; therefore, it only spends time on promising nodes in the pattern tree and discards all other nodes. As an added bonus, the pattern tree notion permits FPF to be implemented and executed in a parallel manner, since it is possible to traverse each path of the pattern tree independently. However, FPF is most useful for frequency concepts F2 and F3, because downward closure is not applicable to F1. Nevertheless, the pattern tree is still practical for F1 if the algorithm runs in parallel. Another advantage of the algorithm is that its implementation has no limitation on motif size, which makes it more amenable to improvements. The pseudocode of FPF (Mavisto) is shown below:
Result: set R of patterns of size t with maximum frequency
P ← start pattern p1 of size 1
Mp1 ← all matches of p1 in G
while P ≠ ∅ do
    Pmax ← select all patterns from P with maximum size
    p ← select pattern with maximum frequency from Pmax
    E ← ExtensionLoop(G, p, Mp)
    foreach pattern p ∈ E do
        if F = F1 then f ← size(Mp)
        else f ← MaximumIndependentSet(F, Mp)
        end
        if size(p) = t then
            if f = fmax then R ← R ∪ {p}
            else if f > fmax then R ← {p}; fmax ← f
            end
        else
            if F = F1 or f ≥ fmax then P ← P ∪ {p}
            end
        end
    end
end
The sampling bias of Kashtan et al. [ 9 ] provided great impetus for designing better algorithms for the NM discovery problem. Although Kashtan et al. tried to settle this drawback by means of a weighting scheme, this method imposed an undesired overhead on the running time as well as a more complicated implementation. This tool is one of the most useful ones, as it supports visual options and is also an efficient algorithm with respect to time. But it has a limitation on motif size, as it does not allow searching for motifs of size 9 or higher, because of the way the tool is implemented. Wernicke [ 10 ] introduced an algorithm named RAND-ESU that provides a significant improvement over mfinder . [ 9 ] This algorithm, which is based on the exact enumeration algorithm ESU , has been implemented as an application called FANMOD . [ 10 ] RAND-ESU is a NM discovery algorithm applicable to both directed and undirected networks; it effectively exploits unbiased node sampling throughout the network and prevents counting any sub-graph more than once. Furthermore, RAND-ESU uses a novel analytical approach called DIRECT for determining sub-graph significance instead of using an ensemble of random networks as a null-model. The DIRECT method estimates the sub-graph concentration without explicitly generating random networks. [ 10 ] Empirically, the DIRECT method is more efficient than the random network ensemble in the case of sub-graphs with a very low concentration; however, the classical null-model is faster than the DIRECT method for highly concentrated sub-graphs. [ 3 ] [ 10 ] In the following, we detail the ESU algorithm and then show how this exact algorithm can be modified efficiently into RAND-ESU, which estimates sub-graph concentrations.
The algorithms ESU and RAND-ESU are fairly simple, and hence easy to implement. ESU first finds the set of all induced sub-graphs of size k , let S k be this set. ESU can be implemented as a recursive function; the running of this function can be displayed as a tree-like structure of depth k , called the ESU-Tree (see figure). Each of the ESU-Tree nodes indicate the status of the recursive function that entails two consecutive sets SUB and EXT. SUB refers to nodes in the target network that are adjacent and establish a partial sub-graph of size |SUB| ≤ k . If |SUB| = k , the algorithm has found an induced complete sub-graph, so S k = SUB ∪ S k . However, if |SUB| < k , the algorithm must expand SUB to achieve cardinality k . This is done by the EXT set that contains all the nodes that satisfy two conditions: First, each of the nodes in EXT must be adjacent to at least one of the nodes in SUB; second, their numerical labels must be larger than the label of first element in SUB. The first condition makes sure that the expansion of SUB nodes yields a connected graph and the second condition causes ESU-Tree leaves (see figure) to be distinct; as a result, it prevents overcounting. Note that, the EXT set is not a static set, so in each step it may expand by some new nodes that do not breach the two conditions. The next step of ESU involves classification of sub-graphs placed in the ESU-Tree leaves into non-isomorphic size- k graph classes; consequently, ESU determines sub-graphs frequencies and concentrations. This stage has been implemented simply by employing McKay's nauty algorithm, [ 27 ] [ 28 ] which classifies each sub-graph by performing a graph isomorphism test. Therefore, ESU finds the set of all induced k -size sub-graphs in a target graph by a recursive algorithm and then determines their frequency using an efficient tool. The procedure of implementing RAND-ESU is quite straightforward and is one of the main advantages of FANMOD . One can change the ESU algorithm to explore just a portion of the ESU-Tree leaves by applying a probability value 0 ≤ p d ≤ 1 for each level of the ESU-Tree and oblige ESU to traverse each child node of a node in level d-1 with probability p d . This new algorithm is called RAND-ESU . Evidently, when p d = 1 for all levels, RAND-ESU acts like ESU . For p d = 0 the algorithm finds nothing. Note that, this procedure ensures that the chances of visiting each leaf of the ESU-Tree are the same, resulting in unbiased sampling of sub-graphs through the network. The probability of visiting each leaf is Π d p d and this is identical for all of the ESU-Tree leaves; therefore, this method guarantees unbiased sampling of sub-graphs from the network. Nonetheless, determining the value of p d for 1 ≤ d ≤ k is another issue that must be determined manually by an expert to get precise results of sub-graph concentrations. [ 8 ] While there is no lucid prescript for this matter, the Wernicke provides some general observations that may help in determining p_d values. In summary, RAND-ESU is a very fast algorithm for NM discovery in the case of induced sub-graphs supporting unbiased sampling method. Although, the main ESU algorithm and so the FANMOD tool is known for discovering induced sub-graphs, there is trivial modification to ESU which makes it possible for finding non-induced sub-graphs, too. The pseudo code of ESU (FANMOD) is shown below: Input: A graph G = (V, E) and an integer 1 ≤ k ≤ |V| . Output: All size- k subgraphs in G . 
EnumerateSubgraphs(G, k):
for each vertex v ∈ V do
    VExtension ← {u ∈ N({v}) | u > v}
    call ExtendSubgraph({v}, VExtension, v)
endfor

ExtendSubgraph(VSubgraph, VExtension, v):
if |VSubgraph| = k then output G[VSubgraph] and return
while VExtension ≠ ∅ do
    remove an arbitrarily chosen vertex w from VExtension
    VExtension′ ← VExtension ∪ {u ∈ Nexcl(w, VSubgraph) | u > v}
    call ExtendSubgraph(VSubgraph ∪ {w}, VExtension′, v)
return
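A compact, runnable Python sketch of the same enumeration (the adjacency-dict representation and the toy graph are illustrative assumptions, not from the article):

    def esu(graph, k):
        # graph: dict mapping each vertex to the set of its neighbours (undirected, no self-loops);
        # vertices are assumed to carry comparable labels (e.g. integers)
        found = []

        def extend(sub, extension, v):
            if len(sub) == k:
                found.append(frozenset(sub))
                return
            while extension:
                w = extension.pop()
                # exclusive neighbourhood of w: neighbours that are neither in the current
                # sub-graph nor adjacent to it, and whose label exceeds the root label v
                excl = {u for u in graph[w]
                        if u > v and u not in sub and all(u not in graph[s] for s in sub)}
                extend(sub | {w}, extension | excl, v)

        for v in graph:
            extend({v}, {u for u in graph[v] if u > v}, v)
        return found

    toy = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
    print(len(esu(toy, 3)))   # 3: the connected induced size-3 sub-graphs {1,2,3}, {1,3,4}, {2,3,4}

Each connected induced sub-graph is reported exactly once, thanks to the label restriction and the exclusive-neighbourhood rule in the pseudocode above.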
Chen et al. [ 29 ] introduced a new NM discovery algorithm called NeMoFinder , which adapts the idea in SPIN [ 30 ] to extract frequent trees and afterwards expands them into non-isomorphic graphs. [ 8 ] NeMoFinder utilizes frequent size-n trees to partition the input network into a collection of size-n graphs, afterwards finding frequent size-n sub-graphs by expansion of frequent trees edge by edge until obtaining a complete size-n graph Kn. The algorithm finds NMs in undirected networks and is not limited to extracting only induced sub-graphs. Furthermore, NeMoFinder is an exact enumeration algorithm and is not based on a sampling method. As Chen et al. claim, NeMoFinder is applicable for detecting relatively large NMs, for instance finding NMs up to size 12 in the whole S. cerevisiae (yeast) PPI network. [ 31 ] NeMoFinder consists of three main steps: first, finding frequent size-n trees; then, utilizing the repeated size-n trees to divide the entire network into a collection of size-n graphs; and finally, performing sub-graph join operations to find frequent size-n sub-graphs. [ 29 ] In the first step, the algorithm detects all non-isomorphic size-n trees and mappings from a tree to the network. In the second step, the ranges of these mappings are employed to partition the network into size-n graphs. Up to this step, there is no distinction between NeMoFinder and an exact enumeration method. However, a large portion of non-isomorphic size-n graphs still remains. NeMoFinder exploits a heuristic to enumerate non-tree size-n graphs using the information obtained from the preceding steps. The main advantage of the algorithm is in the third step, which generates candidate sub-graphs from previously enumerated sub-graphs. This generation of new size-n sub-graphs is done by joining each previous sub-graph with derivative sub-graphs from itself, called cousin sub-graphs. These new sub-graphs contain one additional edge in comparison to the previous sub-graphs. However, there exist some problems in generating new sub-graphs: there is no clear method to derive cousins from a graph; joining a sub-graph with its cousins leads to redundancy in generating a particular sub-graph more than once; and cousin determination is done by a canonical representation of the adjacency matrix, which is not closed under the join operation. NeMoFinder is an efficient network motif finding algorithm for motifs up to size 12, but only for protein-protein interaction networks, which are presented as undirected graphs. It is not able to work on directed networks, which are important in the field of complex and biological networks. The pseudocode of NeMoFinder is shown below:
Input:
    G – PPI network
    N – number of randomized networks
    K – maximal network motif size
    F – frequency threshold
    S – uniqueness threshold
Output:
    U – repeated and unique network motif set
D ← ∅
for motif-size k from 3 to K do
    T ← FindRepeatedTrees(k)
    GDk ← GraphPartition(G, T)
    D ← D ∪ T
    D′ ← T
    i ← k
    while D′ ≠ ∅ and i ≤ k × (k − 1) / 2 do
        D′ ← FindRepeatedGraphs(k, i, D′)
        D ← D ∪ D′
        i ← i + 1
    end while
end for
for counter i from 1 to N do
    Grand ← RandomizedNetworkGeneration()
    for each g ∈ D do
        GetRandFrequency(g, Grand)
    end for
end for
U ← ∅
for each g ∈ D do
    s ← GetUniquenessValue(g)
    if s ≥ S then
        U ← U ∪ {g}
    end if
end for
return U
Grochow and Kellis [ 32 ] proposed an exact algorithm for enumerating sub-graph appearances. The algorithm is based on a motif-centric approach, which means that the frequency of a given sub-graph, called the query graph , is exhaustively determined by searching for all possible mappings from the query graph into the larger network. It is claimed [ 32 ] that a motif-centric method, in comparison to network-centric methods, has some beneficial features. First of all, it avoids the increased complexity of sub-graph enumeration. Also, by using mapping instead of enumerating, it enables an improvement in the isomorphism test. To improve the performance of the algorithm, since it is otherwise an inefficient exact enumeration algorithm, the authors introduced a fast method called symmetry-breaking conditions . During straightforward sub-graph isomorphism tests, a sub-graph may be mapped to the same sub-graph of the query graph multiple times. In the Grochow–Kellis (GK) algorithm symmetry-breaking is used to avoid such multiple mappings. Here we introduce the GK algorithm and the symmetry-breaking condition which eliminates redundant isomorphism tests. The GK algorithm discovers the whole set of mappings of a given query graph to the network in two major steps. It starts with the computation of symmetry-breaking conditions of the query graph. Next, by means of a branch-and-bound method, the algorithm tries to find every possible mapping from the query graph to the network that meets the associated symmetry-breaking conditions. An example of the usage of symmetry-breaking conditions in the GK algorithm is demonstrated in the figure. As mentioned above, the symmetry-breaking technique is a simple mechanism that precludes spending time finding a sub-graph more than once due to its symmetries. [ 32 ] [ 33 ] Note that computing the symmetry-breaking conditions requires finding all automorphisms of a given query graph. Even though there is no efficient (or polynomial-time) algorithm for the graph automorphism problem, this problem can be tackled efficiently in practice by McKay's tools. [ 27 ] [ 28 ] As claimed, using symmetry-breaking conditions in NM detection leads to saving a great deal of running time. Moreover, it can be inferred from the results in [ 32 ] [ 33 ] that using the symmetry-breaking conditions results in high efficiency, particularly for directed networks, in comparison to undirected networks. The symmetry-breaking conditions used in the GK algorithm are similar to the restriction which the ESU algorithm applies to the labels in the EXT and SUB sets. In conclusion, the GK algorithm computes the exact number of appearances of a given query graph in a large complex network, and exploiting symmetry-breaking conditions improves the algorithm's performance. Also, the GK algorithm is one of the known algorithms having no limitation for motif size in its implementation, and potentially it can find motifs of any size.
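As an illustration of the motif-centric idea (using the networkx library as a stand-in, not the authors' implementation), the following sketch counts the appearances of one query graph in a larger network by sub-graph isomorphism, grouping the mappings by their node set so that the automorphic mappings of the query graph are not counted as separate occurrences – the effect that symmetry-breaking conditions achieve analytically:

    import networkx as nx
    from networkx.algorithms import isomorphism

    network = nx.erdos_renyi_graph(50, 0.08, seed=1)   # assumed toy network
    query = nx.path_graph(3)                           # query graph: a path on three nodes

    matcher = isomorphism.GraphMatcher(network, query)
    occurrences = {frozenset(m) for m in matcher.subgraph_isomorphisms_iter()}
    print(len(occurrences))                            # distinct induced appearances of the query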
Most algorithms in the field of NM discovery are used to find induced sub-graphs of a network. In 2008, Noga Alon et al. [ 34 ] introduced an approach for finding non-induced sub-graphs as well. Their technique works on undirected networks such as PPI networks. It also counts non-induced trees and bounded-treewidth sub-graphs. This method is applied for sub-graphs of size up to 10. This algorithm counts the number of non-induced occurrences of a tree T with k = O(log n) vertices in a network G with n vertices. As available PPI networks are far from complete and error-free, this approach is suitable for NM discovery in such networks. As the Grochow–Kellis algorithm and this one are the most popular ones for non-induced sub-graphs, it is worth mentioning that the algorithm introduced by Alon et al. is less time-consuming than the Grochow–Kellis algorithm. [ 34 ] Omidi et al. [ 35 ] introduced a new algorithm for motif detection named MODA , which is applicable for induced and non-induced NM discovery in undirected networks. It is based on the motif-centric approach discussed in the Grochow–Kellis algorithm section. It is very important to distinguish motif-centric algorithms such as MODA and the GK algorithm because of their ability to work as query-finding algorithms. This feature allows such algorithms to find a single motif query, or a small number of motif queries (not all possible sub-graphs of a given size), of larger sizes. As the number of possible non-isomorphic sub-graphs increases exponentially with sub-graph size, for large-size motifs (even larger than 10) the network-centric algorithms, those looking for all possible sub-graphs, face a problem. Although motif-centric algorithms also have problems in discovering all possible large-size sub-graphs, their ability to find small numbers of them is sometimes a significant property. Using a hierarchical structure called an expansion tree , the MODA algorithm is able to extract NMs of a given size systematically and, similarly to FPF, avoids enumerating unpromising sub-graphs; MODA takes into consideration potential queries (or candidate sub-graphs) that would result in frequent sub-graphs. Despite the fact that MODA resembles FPF in using a tree-like structure, the expansion tree is applicable merely for computing frequency concept F1. As we will discuss next, the advantage of this algorithm is that it does not carry out the sub-graph isomorphism test for non-tree query graphs. Additionally, it utilizes a sampling method in order to speed up the running time of the algorithm. Here is the main idea: by a simple criterion one can generalize a mapping of a k-size graph into the network to its same-size supergraphs. For example, suppose there is a mapping f(G) of graph G with k nodes into the network and we have a same-size graph G′ with one more edge ⟨u, v⟩; fG will map G′ into the network if there is an edge ⟨fG(u), fG(v)⟩ in the network. As a result, we can exploit the mapping set of a graph to determine the frequencies of its same-order supergraphs simply in O(1) time, without carrying out sub-graph isomorphism testing.
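A minimal sketch of this constant-time check (the function names and the adjacency-set representation are illustrative assumptions):

    def supports_child(f, extra_edge, network_adj):
        # a mapping f of the parent query graph also supports the child query graph
        # (the parent plus one edge (u, v)) exactly when the image edge exists in the network
        u, v = extra_edge
        return f[v] in network_adj[f[u]]          # undirected adjacency assumed

    def child_mappings(parent_mappings, extra_edge, network_adj):
        return [f for f in parent_mappings if supports_child(f, extra_edge, network_adj)]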
The algorithm starts with minimally connected query graphs of size k and finds their mappings in the network via sub-graph isomorphism. After that, preserving the graph size, it expands the previously considered query graphs edge by edge and computes the frequency of these expanded graphs as mentioned above. The expansion process continues until reaching a complete graph Kk (fully connected, with k(k−1)/2 edges). As discussed above, the algorithm starts by computing sub-tree frequencies in the network and then expands sub-trees edge by edge. One way to implement this idea is called the expansion tree Tk for each k. The figure shows the expansion tree for size-4 sub-graphs. Tk organizes the running process and provides query graphs in a hierarchical manner. Strictly speaking, the expansion tree Tk is simply a directed acyclic graph (DAG), with its root number k indicating the graph size existing in the expansion tree, and each of its other nodes containing the adjacency matrix of a distinct k-size query graph. Nodes in the first level of Tk are all distinct k-size trees, and by traversing Tk in depth, query graphs expand with one edge at each level. A query graph in a node is a sub-graph of the query graph in that node's child, with one edge difference. The longest path in Tk consists of (k²−3k+4)/2 edges and is the path from the root to the leaf node holding the complete graph. Generating expansion trees can be done by a simple routine, which is explained in [ 35 ]. MODA traverses Tk and, when it extracts query trees from the first level of Tk, it computes their mapping sets and saves these mappings for the next step. For non-tree queries from Tk, the algorithm extracts the mappings associated with the parent node in Tk and determines which of these mappings can support the current query graphs. The process continues until the algorithm gets to the complete query graph. The query tree mappings are extracted using the Grochow–Kellis algorithm. For computing the frequency of non-tree query graphs, the algorithm employs a simple routine that takes O(1) steps. In addition, MODA exploits a sampling method where the sampling of each node in the network is linearly proportional to the node degree; the probability distribution is exactly similar to the well-known Barabási–Albert preferential attachment model in the field of complex networks. [ 36 ] This approach generates approximations; however, the results are almost stable in different executions, since sub-graphs aggregate around highly connected nodes. [ 37 ] The pseudocode of MODA is shown below:
Output: Frequent Subgraph List – list of all frequent k-size sub-graphs
Note: FG′ – set of mappings of G′ in the input graph G
fetch Tk
do
    G′ ← Get-Next-BFS(Tk)    // G′ is a query graph
    if |E(G′)| = (k − 1) then
        call Mapping-Module(G′, G)
    else
        call Enumerating-Module(G′, G, Tk)
    end if
    save FG′
    if |FG′| > Δ then
        add G′ to Frequent Subgraph List
    end if
until |E(G′)| = k × (k − 1) / 2
return Frequent Subgraph List
A recently introduced algorithm named Kavosh [ 38 ] aims at improved main memory usage. Kavosh can be used to detect NMs in both directed and undirected networks. The main idea of the enumeration is similar to the GK and MODA algorithms: first find all k-size sub-graphs in which a particular node participates, then remove that node, and subsequently repeat this process for the remaining nodes. [ 38 ] For counting the sub-graphs of size k that include a particular node, trees with maximum depth k, rooted at this node and based on neighborhood relationships, are implicitly built. Children of each node include both incoming and outgoing adjacent nodes.
To descend the tree, a child is chosen at each level with the restriction that a particular child can be included only if it has not been included at any upper level. After having descended to the lowest level possible, the tree is again ascended and the process is repeated, with the stipulation that nodes visited in earlier paths of a descendant are now considered unvisited nodes. A final restriction in building trees is that all children in a particular tree must have numerical labels larger than the label of the root of the tree. The restrictions on the labels of the children are similar to the conditions which the GK and ESU algorithms use to avoid overcounting sub-graphs. The protocol for extracting sub-graphs makes use of the compositions of an integer. For the extraction of sub-graphs of size k , all possible compositions of the integer k−1 must be considered. The compositions of k−1 consist of all possible manners of expressing k−1 as a sum of positive integers. Summations in which the order of the summands differs are considered distinct. A composition can be expressed as k2, k3, ..., km where k2 + k3 + ... + km = k−1. To count sub-graphs based on the composition, ki nodes are selected from the i-th level of the tree to be nodes of the sub-graph ( i = 2, 3, ..., m ). The k−1 selected nodes, along with the node at the root, define a sub-graph within the network. After discovering a sub-graph involved as a match in the target network, in order to be able to evaluate the size of each class according to the target network, Kavosh employs the nauty algorithm [ 27 ] [ 28 ] in the same way as FANMOD . The enumeration part of the Kavosh algorithm is shown below:
Enumerate_Vertex(G, u, S, Remainder, i):
Input:
    G – input graph
    u – root vertex
    S – selection (S = {S0, S1, ..., Sk−1} is an array of the sets of all Si)
    Remainder – number of remaining vertices to be selected
    i – current depth of the tree
Output: all k-size sub-graphs of graph G rooted at u
if Remainder = 0 then
    return
else
    ValList ← Validate(G, Si−1, u)
    ni ← Min(|ValList|, Remainder)
    for ki = 1 to ni do
        C ← Initial_Comb(ValList, ki)    (make the first selection of ki vertices from ValList)
        repeat
            Si ← C
            Enumerate_Vertex(G, u, S, Remainder − ki, i + 1)
            Next_Comb(ValList, ki)       (make the next selection of ki vertices from ValList)
        until C = NILL
    end for
    for each v ∈ ValList do
        Visited[v] ← false
    end for
end if
Validate(G, Parents, u):
Input: G – input graph; Parents – selected vertices of the last layer; u – root vertex
Output: valid vertices of the current level
ValList ← NILL
for each v ∈ Parents do
    for each w ∈ Neighbor[v] do
        if label[u] < label[w] AND NOT Visited[w] then
            Visited[w] ← true
            ValList ← ValList + w
        end if
    end for
end for
return ValList
Recently a Cytoscape plugin called CytoKavosh [ 39 ] has been developed for this software. In 2010, Pedro Ribeiro and Fernando Silva proposed a novel data structure for storing a collection of sub-graphs, called a g-trie . [ 40 ] This data structure, which is conceptually akin to a prefix tree, stores sub-graphs according to their structures and finds occurrences of each of these sub-graphs in a larger graph. One of the noticeable aspects of this data structure is that, when it comes to network motif discovery, only the sub-graphs occurring in the main network need to be evaluated; there is no need to search for sub-graphs in the random networks that do not appear in the main network, which is one of the time-consuming parts of algorithms that derive all sub-graphs in random networks. A g-trie is a multiway tree that can store a collection of graphs.
Each tree node contains information about a single graph vertex and its corresponding edges to ancestor nodes. A path from the root to a leaf corresponds to one single graph. Descendants of a g-trie node share a common sub-graph. Constructing a g-trie is well described in [40]. After a g-trie is constructed, the counting takes place. The main idea of the counting process is to backtrack through all possible sub-graphs while performing isomorphism tests at the same time. This backtracking technique is essentially the same technique employed by other motif-centric approaches such as the MODA and GK algorithms; it takes advantage of common substructures in the sense that, at a given time, there is a partial isomorphic match for several different candidate sub-graphs. Among the mentioned algorithms, G-Tries is the fastest, but its excessive use of memory is a drawback that might limit the size of discoverable motifs on a personal computer with average memory. ParaMODA [41] and NemoMap [42] are fast algorithms published in 2017 and 2018, respectively; they are not as scalable as many of the others. [43] The tables and figure below show the results of running the mentioned algorithms on different standard networks. These results are taken from the corresponding sources, [35][38][40] and thus should be treated individually. Much experimental work has been devoted to understanding network motifs in gene regulatory networks. These networks control which genes are expressed in the cell in response to biological signals. The network is defined such that genes are nodes and directed edges represent the control of one gene by a transcription factor (a regulatory protein that binds DNA) encoded by another gene. Thus, network motifs are patterns of genes regulating each other's transcription rate. When analyzing transcription networks, it is seen that the same network motifs appear again and again in diverse organisms from bacteria to humans. The transcription networks of E. coli and yeast, for example, are made of three main motif families that make up almost the entire network. The leading hypothesis is that network motifs were independently selected by evolutionary processes in a converging manner, [44][45] since the creation or elimination of regulatory interactions is fast on evolutionary time scales, relative to the rate at which genes change. [44][45][46] Furthermore, experiments on the dynamics generated by network motifs in living cells indicate that they have characteristic dynamical functions. This suggests that network motifs serve as building blocks in gene regulatory networks that are beneficial to the organism. The functions associated with common network motifs in transcription networks have been explored and demonstrated by several research projects, both theoretically and experimentally. Below are some of the most common network motifs and their associated functions. One of the simplest and most abundant network motifs in E. coli is negative auto-regulation (NAR), in which a transcription factor (TF) represses its own transcription. This motif was shown to perform two important functions. The first function is response acceleration: NAR was shown to speed up the response to signals both theoretically [47] and experimentally. This was first shown in a synthetic transcription network [48] and later in its natural context, the SOS DNA repair system of E. coli. [49]
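The response-acceleration effect can be illustrated with a minimal numerical sketch, using forward Euler integration and a step-function ("logic") approximation of strong self-repression; the rate constants and the repression threshold below are arbitrary illustrative values, not measured parameters:

    import numpy as np

    def simulate(dxdt, dt=0.001, t_max=5.0):
        # Forward-Euler integration of a one-variable rate equation dx/dt = f(x).
        ts = np.arange(0.0, t_max, dt)
        xs = np.empty_like(ts)
        x = 0.0
        for i in range(len(ts)):
            xs[i] = x
            x += dxdt(x) * dt
        return ts, xs

    alpha, beta, K = 1.0, 1.0, 0.5  # degradation rate, maximal production, repression threshold

    # Simple regulation: dx/dt = beta - alpha*x, steady state beta/alpha.
    ts, simple = simulate(lambda x: beta - alpha * x)

    # Strong negative auto-regulation: production switches off once x exceeds K,
    # so the protein level locks in near K while still on the fast initial rise.
    ts, nar = simulate(lambda x: (beta if x < K else 0.0) - alpha * x)

    def t_half(xs, steady_state):
        # First time at which the trajectory reaches half its steady-state level.
        return ts[np.argmax(xs >= 0.5 * steady_state)]

    print("simple regulation t1/2: %.2f" % t_half(simple, beta / alpha))  # ~0.69
    print("NAR t1/2:               %.2f" % t_half(nar, K))                # ~0.29

With these parameters the auto-repressed gene reaches half of its (lower) steady state roughly twice as fast as the simply regulated gene, which is the speed-up described above.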
The second function is increased stability of the auto-regulated gene product concentration against stochastic noise, thus reducing variation in protein levels between different cells. [50][51][52] Positive auto-regulation (PAR) occurs when a transcription factor enhances its own rate of production. In contrast to the NAR motif, this motif slows the response time compared to simple regulation. [53] In the case of strong PAR, the motif may lead to a bimodal distribution of protein levels in cell populations. [54] This motif is commonly found in many gene systems and organisms. The feed-forward loop (FFL) motif consists of three genes and three regulatory interactions. The target gene C is regulated by two TFs, A and B, and in addition TF B is also regulated by TF A. Since each of the regulatory interactions may be either positive or negative, there are eight possible types of FFL motifs. [55] Two of these eight types, the coherent type 1 FFL (C1-FFL), in which all interactions are positive, and the incoherent type 1 FFL (I1-FFL), in which A activates C and also activates B, which represses C, are found much more frequently in the transcription networks of E. coli and yeast than the other six types. [55][56] In addition to the structure of the circuitry, the way in which the signals from A and B are integrated by the C promoter should also be considered. In most cases the FFL is either an AND gate (both A and B are required for C activation) or an OR gate (either A or B alone is sufficient for C activation), but other input functions are also possible. The C1-FFL with an AND gate was shown to function as a 'sign-sensitive delay' element and a persistence detector, both theoretically [55] and experimentally [57] in the arabinose system of E. coli. This means that this motif can provide pulse filtration, in which short pulses of signal will not generate a response but persistent signals will generate a response after a short delay. The shut-off of the output when a persistent pulse ends is fast. The opposite behavior emerges in the case of a sum (OR) gate, with fast response and delayed shut-off, as was demonstrated in the flagella system of E. coli. [58] De novo evolution of C1-FFLs in gene regulatory networks has been demonstrated computationally in response to selection to filter out an idealized short signal pulse, but for non-idealized noise, a dynamics-based system of feed-forward regulation with a different topology was instead favored. [59] The I1-FFL is a pulse generator and response accelerator. The two signal pathways of the I1-FFL act in opposite directions: one pathway activates Z and the other represses it. When the repression is complete, this leads to pulse-like dynamics. It was also demonstrated experimentally that the I1-FFL can serve as a response accelerator in a way similar to the NAR motif, with the difference that the I1-FFL can speed up the response of any gene, not necessarily a transcription factor gene. [60] An additional function was assigned to the I1-FFL network motif: it was shown both theoretically and experimentally that the I1-FFL can generate a non-monotonic input function, in both synthetic [61] and native systems. [62] Finally, expression units that incorporate incoherent feedforward control of the gene product provide adaptation to the amount of DNA template and can be superior to simple combinations of constitutive promoters. [63]
Feedforward regulation displayed better adaptation than negative feedback, and circuits based on RNA interference were the most robust to variation in DNA template amounts. [63] De novo evolution of I1-FFLs in gene regulatory networks has been demonstrated computationally in response to selection to generate a pulse, with I1-FFLs being more evolutionarily accessible, but not superior, relative to an alternative motif in which it is the output rather than the input that activates the repressor. [64] In some cases the same regulators X and Y regulate several Z genes of the same system. By adjusting the strength of the interactions, this motif was shown to determine the temporal order of gene activation. This was demonstrated experimentally in the flagella system of E. coli. [65] The single-input module (SIM) motif occurs when a single regulator regulates a set of genes with no additional regulation. This is useful when the genes cooperatively carry out a specific function and therefore always need to be activated in a synchronized manner. By adjusting the strength of the interactions, it can create a temporal expression program of the genes it regulates. [66] In the literature, multiple-input modules (MIM) arose as a generalization of the SIM. However, the precise definitions of SIM and MIM have been a source of inconsistency. There are attempts to provide orthogonal definitions for canonical motifs in biological networks and algorithms to enumerate them, especially SIM, MIM and Bi-Fan (2x2 MIM). [67] A further motif occurs when several regulators combinatorially control a set of genes with diverse regulatory combinations. This motif was found in E. coli in various systems such as carbon utilization, anaerobic growth, stress response, and others. [17][22] In order to better understand the function of this motif, one has to obtain more information about the way the multiple inputs are integrated by the genes. Kaplan et al. [68] have mapped the input functions of the sugar utilization genes in E. coli, showing diverse shapes. An interesting generalization of network motifs, activity motifs are over-occurring patterns that can be found when nodes and edges in the network are annotated with quantitative features. For instance, when edges in a metabolic pathway are annotated with the magnitude or timing of the corresponding gene expression, some patterns are over-occurring given the underlying network structure. [69] An assumption (sometimes more, sometimes less implicit) behind the preservation of a topological sub-structure is that it is of particular functional importance. This assumption has recently been questioned. Some authors have argued that motifs, like bi-fan motifs, might show varying behavior depending on the network context, and therefore [70] the structure of the motif does not necessarily determine function. Indeed, an analysis of motifs in the C. elegans brain connectome in terms of "uncolored nodes" (nodes without a functional tag) revealed no significant difference in motif abundance compared to chance. [71] When nodes are assigned colors according to their functional role in the network, however (for example, different colors for sensory neurons, motor neurons, or interneurons), particular colored motifs are found to be used significantly more than expected by chance, reflecting the functional role of the motif. [72] Certain bi-fan motifs, for example, appear with significantly enhanced frequency, while other colored bi-fan motifs do not.
[72] Because the number of colored motifs increases exponentially with the number of colors, a search for colored motifs with significant bias can only be carried out for a small number of colors (node types). Network structure certainly does not always indicate function; this is an idea that has been around for some time (for another example, see the Sin operon [73]). Most analyses of motif function are carried out looking at the motif operating in isolation. Recent research [74] provides good evidence that network context, i.e. the connections of the motif to the rest of the network, is too important to draw inferences on function from local structure alone; the cited paper also reviews the criticisms and alternative explanations for the observed data. An analysis of the impact of a single motif module on the global dynamics of a network is studied in [75]. Yet another recent work suggests that certain topological features of biological networks naturally give rise to the common appearance of canonical motifs, thereby questioning whether frequencies of occurrence are reasonable evidence that the structures of motifs are selected for their functional contribution to the operation of networks. [76][77]
https://en.wikipedia.org/wiki/Network_motif
Network on Terminal Architecture (NoTA) is a modular, service-based system architecture for mobile and embedded devices. NoTA enables mobile device makers to speed up their product development by shortening the integration phase. Additionally, NoTA makes it possible to quickly bring third-party innovations into products, thanks to its loosely coupled, functional-driver-less approach. A NoTA device consists of Service Nodes (SN) and Application Nodes (AN) that communicate through a logical Interconnect (IN). The Interconnect provides two basic means of communication, message-based and streaming. The former is bi-directional and used for service messages; the latter is uni-directional and used for large amounts of data such as media content. Service Nodes have a unique Service Identifier (SID). Service Nodes and Application Nodes map onto sub-systems consisting of all the software and hardware resources needed to implement them. In order to maintain system-level modularity, the only way for a node to use SW and HW resources from other sub-systems is through Service Nodes. The Interconnect is divided into two layers, the High Interconnect (H_IN) and the Low Interconnect (L_IN). The former provides means for service activation and deactivation as well as service and stream access. The Low Interconnect provides a network socket interface with a uniform addressing mechanism. Internally, the L_IN can be divided into transport-network-independent and -dependent parts. MIPI Alliance originated solutions are expected to be key enablers for wide use of NoTA. A NoTA sub-system provides the physical implementation for a set of nodes (ANs and/or SNs). A sub-system consists of all the software and hardware resources (including peripherals, memories, controllers, internal buses, etc.) needed to implement the defined nodes. The only means for a sub-system to use other sub-systems' resources is via Service Nodes. Every NoTA sub-system contains the NoTA Interconnect stack. The NoTA concept and the first implementations were the result of internal Nokia Research Center activities started in 2003. The objective of this work was to develop a novel embedded device architecture that could solve existing R&D challenges, as well as prepare the company for the expected horizontalization and digital convergence. The NoTA basic framework was strongly influenced by Network-on-Chip (NoC) and Web Services research. NoTA Interconnect Release 1 was released in December 2005. Release 1 consisted only of service communication, activation/deactivation, discovery and access. Release 2 added efficient data communication means, with a handle-based stream referencing approach. This functionality, called DOA (Direct Object Access), allows direct memory-to-memory streaming between different NoTA sub-systems. Release 2 came out during the second half of 2006. Release 3 became the official public release comprising all the essential functionalities. Fast time-to-market is possible for multiple reasons. Product vendors can purchase already productized NoTA sub-systems, removing the time needed for vendor-specific requirements definition, implementation, and integration phases. In case there are no ready-made products on the market, NoTA-style system-level modularity allows technology vendors to do the implementation and testing without heavy involvement with other sub-system providers (e.g. the application engine). Scalability in integration level allows product companies to do fast cost optimization without major extra R&D effort.
The NoTA core is physical-interconnect agnostic, and hence replacing e.g. an off-chip interconnect with an on-chip interconnect does not break device functionality. A more practical example is to integrate multiple ICs into the same package (e.g. through stacking) and use package-internal interconnect technologies. Cost reduction in product development can be achieved in two dimensions. First, system-level modularity allows free and fair competition between different technology vendors, reducing sub-system costs. Second, in many cases product vendors do not have to bear the costs of sub-system adaptation work specific to their technologies. Performance and features can better meet end-user needs: product companies are more agile in adopting new technology, or technology that better meets users' needs, in digital convergence devices. Because it is agnostic to the transport technology, NoTA can be used for many inter-device use cases (with a wireless-based L_IN). There are currently projects running, e.g. in Finland (SHOK DIEM) and in Japan, to apply NoTA in the ubiquitous world. An example revealed at TronShow 2010 is an intelligent house built in Taiwan utilizing both the T-Kernel and NoTA technologies. In addition, VTT (Technical Research Centre of Finland) has demonstrated a NoTA (and Smart M3) based intelligent greenhouse. Extending NoTA to the Internet is one of the research topics; the so-called NoTA Virtual Device (NVD) is expected to provide a solution here. Through the NVD, one can build combined service platforms where services can run intra-device, inter-device and/or in the Internet.
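The service-access pattern described above, in which Application Nodes reach sub-system resources only through Service Nodes registered on the Interconnect, can be sketched as follows. All names here (Interconnect, CameraService, the "camera.1" SID) are illustrative inventions, not the actual NoTA API:

    # A minimal, hypothetical sketch of NoTA-style message-based service access.
    class Interconnect:
        def __init__(self):
            self._services = {}              # SID -> registered Service Node

        def activate(self, sid, node):       # service activation
            self._services[sid] = node

        def deactivate(self, sid):           # service deactivation
            self._services.pop(sid, None)

        def send(self, sid, message):        # bi-directional service message
            return self._services[sid].handle(message)

    class CameraService:
        # A Service Node wrapping one sub-system's resources; other
        # sub-systems may reach them only through messages like this one.
        def handle(self, message):
            if message == "capture":
                return {"status": "ok", "frame_id": 1}
            return {"status": "unknown request"}

    interconnect = Interconnect()
    interconnect.activate("camera.1", CameraService())
    # An Application Node accesses the service through the Interconnect:
    print(interconnect.send("camera.1", "capture"))

The point of the sketch is the isolation property: the application node never touches the camera sub-system's resources directly, only its service message interface.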
https://en.wikipedia.org/wiki/Network_on_Terminal_Architecture
In telecommunications and computer networking, a network packet is a formatted unit of data carried by a packet-switched network. A packet consists of control information and user data; [1] the latter is also known as the payload. Control information provides data for delivering the payload (e.g., source and destination network addresses, error detection codes, or sequencing information). Typically, control information is found in packet headers and trailers. In packet switching, the bandwidth of the transmission medium is shared between multiple communication sessions, in contrast to circuit switching, in which circuits are preallocated for the duration of one session and data is typically transmitted as a continuous bit stream. In the seven-layer OSI model of computer networking, packet strictly refers to a protocol data unit at layer 3, the network layer. [2] A data unit at layer 2, the data link layer, is a frame. In layer 4, the transport layer, the data units are segments and datagrams. Thus, in the example of TCP/IP communication over Ethernet, a TCP segment is carried in one or more IP packets, which are each carried in one or more Ethernet frames. The basis of the packet concept is the postal letter: the header is like the envelope, the payload is the entire content inside the envelope, and the footer is like the signature at the bottom. [3] Network design can achieve two major results by using packets: error detection and multiple host addressing. [4] Communications protocols use various conventions for distinguishing the elements of a packet and for formatting the user data. For example, in the Point-to-Point Protocol, the packet is formatted in 8-bit bytes, and special characters are used to delimit elements. Other protocols, like Ethernet, establish the start of the header and data elements by their location relative to the start of the packet. Some protocols format the information at a bit level instead of a byte level. [5] A packet may contain any of several such components. IP packets are composed of a header and payload. The header consists of fixed and optional fields. The payload appears immediately after the header. An IP packet has no trailer. However, an IP packet is often carried as the payload inside an Ethernet frame, which has its own header and trailer. Per the end-to-end principle, IP networks do not provide guarantees of delivery, non-duplication, or in-order delivery of packets. However, it is common practice to layer a reliable transport protocol such as the Transmission Control Protocol on top of the packet service to provide such protection. The Consultative Committee for Space Data Systems (CCSDS) packet telemetry standard defines the protocol used for the transmission of spacecraft instrument data over the deep-space channel. Under this standard, an image or other data sent from a spacecraft instrument is transmitted using one or more packets. Packetized elementary stream (PES) is a specification associated with the MPEG-2 standard that allows an elementary stream to be divided into packets. The elementary stream is packetized by encapsulating sequential data bytes from the elementary stream between PES packet headers. A typical method of transmitting elementary stream data from a video or audio encoder is to first create PES packets from the elementary stream data and then to encapsulate these PES packets inside MPEG transport stream (TS) packets or an MPEG program stream (PS).
The TS packets can then be transmitted using broadcasting techniques, such as those used in ATSC and DVB. In order to provide mono compatibility, the NICAM signal is transmitted on a subcarrier alongside the sound carrier. This means that the FM or AM regular mono sound carrier is left alone for reception by monaural receivers. The NICAM packet (except for the header) is scrambled with a nine-bit pseudo-random bit generator before transmission. Making the NICAM bitstream look more like white noise is important because this reduces signal patterning on adjacent TV channels.
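The control-information/payload split described earlier can be illustrated with a toy packet format; this format is invented for illustration and corresponds to no real protocol:

    import struct

    # Toy header: 4-byte source address, 4-byte destination address,
    # 2-byte payload length, followed by the payload itself.
    HEADER = struct.Struct("!IIH")   # network byte order (big-endian)

    def build_packet(src, dst, payload):
        return HEADER.pack(src, dst, len(payload)) + payload

    def parse_packet(packet):
        src, dst, length = HEADER.unpack_from(packet)
        payload = packet[HEADER.size:HEADER.size + length]
        return src, dst, payload

    pkt = build_packet(0x0A000001, 0x0A000002, b"hello")
    print(parse_packet(pkt))   # (167772161, 167772162, b'hello')

Real protocols differ in header layout and may add trailers such as checksums, but the pack/parse symmetry is the same.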
https://en.wikipedia.org/wiki/Network_packet
Network performance refers to measures of service quality of a network as seen by the customer. There are many different ways to measure the performance of a network, as each network is different in nature and design. Performance can also be modeled and simulated instead of measured; one example of this is using state transition diagrams to model queuing performance, or using a network simulator. The following measures are often considered important: The available channel bandwidth and achievable signal-to-noise ratio determine the maximum possible throughput. It is not generally possible to send more data than dictated by the Shannon–Hartley theorem. Throughput is the number of messages successfully delivered per unit time. Throughput is controlled by available bandwidth, as well as the available signal-to-noise ratio and hardware limitations. Throughput for the purpose of this article will be understood to be measured from the arrival of the first bit of data at the receiver, to decouple the concept of throughput from the concept of latency. For discussions of this type, the terms 'throughput' and 'bandwidth' are often used interchangeably. The time window is the period over which the throughput is measured. The choice of an appropriate time window will often dominate calculations of throughput, and whether latency is taken into account or not will determine whether the latency affects the throughput or not. The speed of light imposes a minimum propagation time on all electromagnetic signals. It is not possible to reduce the latency below t = s / c_m, where s is the distance and c_m is the speed of light in the medium (roughly 200,000 km/s for most fiber or electrical media, depending on their velocity factor). This approximately means an additional millisecond of round-trip delay (RTT) per 100 km (or 62 miles) of distance between hosts. Other delays also occur in intermediate nodes. In packet-switched networks, delays can occur due to queueing. Jitter is the undesired deviation from true periodicity of an assumed periodic signal in electronics and telecommunications, often in relation to a reference clock source. Jitter may be observed in characteristics such as the frequency of successive pulses, the signal amplitude, or the phase of periodic signals. Jitter is a significant, and usually undesired, factor in the design of almost all communications links (e.g., USB, PCI-e, SATA, OC-48). In clock recovery applications it is called timing jitter. [1] In digital transmission, the number of bit errors is the number of received bits of a data stream over a communication channel that have been altered due to noise, interference, distortion or bit synchronization errors. The bit error rate or bit error ratio (BER) is the number of bit errors divided by the total number of transferred bits during a studied time interval. BER is a unitless performance measure, often expressed as a percentage. The bit error probability p_e is the expectation value of the BER. The BER can be considered as an approximate estimate of the bit error probability. This estimate is accurate for a long time interval and a high number of bit errors. All of the factors above, coupled with user requirements and user perceptions, play a role in determining the perceived 'fastness', or utility, of a network connection. The relationship between throughput, latency, and user experience is most aptly understood in the context of a shared network medium, and as a scheduling problem.
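A minimal sketch of the two calculations above, the propagation-delay floor s / c_m and the bit error ratio:

    def min_one_way_delay_ms(distance_km, c_m_km_per_s=200_000):
        # Propagation floor s / c_m, with c_m ~ 200,000 km/s for typical
        # fiber or electrical media (a representative, assumed value).
        return distance_km / c_m_km_per_s * 1000

    def bit_error_ratio(bit_errors, bits_transferred):
        # BER: bit errors divided by total transferred bits in the interval.
        return bit_errors / bits_transferred

    print(2 * min_one_way_delay_ms(100))   # ~1.0 ms round trip per 100 km
    print(bit_error_ratio(3, 1_000_000))   # 3e-06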
For some systems, latency and throughput are coupled entities. In TCP/IP, latency can also directly affect throughput. In TCP connections, the large bandwidth-delay product of high-latency connections, combined with relatively small TCP window sizes on many devices, effectively causes the throughput of a high-latency connection to drop sharply with latency. This can be remedied with various techniques, such as increasing the TCP congestion window size, or more drastic solutions, such as packet coalescing, TCP acceleration, and forward error correction, all of which are commonly used for high-latency satellite links. TCP acceleration converts the TCP packets into a stream that is similar to UDP. Because of this, the TCP acceleration software must provide its own mechanisms to ensure the reliability of the link, taking the latency and bandwidth of the link into account, and both ends of the high-latency link must support the method used. In the Media Access Control (MAC) layer, performance issues such as throughput and end-to-end delay are also addressed. Many systems can be characterized as dominated either by throughput limitations or by latency limitations in terms of end-user utility or experience. In some cases, hard limits such as the speed of light present unique problems to such systems and nothing can be done to correct this. Other systems allow for significant balancing and optimization for best user experience. A telecom satellite in geosynchronous orbit imposes a path length of at least 71,000 km between transmitter and receiver, [2] which means a minimum delay between message request and message receipt, or latency, of 473 ms. This delay can be very noticeable and affects satellite phone service regardless of available throughput capacity. These long path length considerations are exacerbated when communicating with space probes and other long-range targets beyond Earth's atmosphere. The Deep Space Network implemented by NASA is one such system that must cope with these problems. Largely because of latency, the GAO has criticized the current architecture. [3] Several different methods have been proposed to handle the intermittent connectivity and long delays between packets, such as delay-tolerant networking. [4] At interstellar distances, the difficulties in designing radio systems that can achieve any throughput at all are massive. In these cases, maintaining communication is a bigger issue than how long that communication takes. Transportation is concerned almost entirely with throughput, which is why physical deliveries of backup tape archives are still largely done by vehicle.
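A sketch of why small windows throttle high-latency links: a sender can have at most one window of unacknowledged data in flight per round trip (the 64 kB window and the 10 Mbit/s link rate below are assumed example values):

    def tcp_window_throughput_limit(window_bytes, rtt_s):
        # At most one window of unacknowledged data per round trip,
        # so throughput <= window / RTT.
        return window_bytes * 8 / rtt_s                    # bit/s

    rtt = 0.473                    # geostationary satellite round trip, seconds
    window = 65_535                # classic TCP window without window scaling
    print(tcp_window_throughput_limit(window, rtt) / 1e6)  # ~1.1 Mbit/s

    # Bandwidth-delay product: data that must be in flight to fill the pipe.
    link_rate_bit_s = 10e6         # assumed 10 Mbit/s satellite link
    print(link_rate_bit_s * rtt / 8)   # ~591,000 bytes, far above the 64 kB window

With a 64 kB window the connection is limited to roughly 1.1 Mbit/s regardless of link capacity, which is why window scaling, TCP acceleration, or forward error correction are used on such links.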
https://en.wikipedia.org/wiki/Network_performance
Network planning and design is an iterative process, encompassing topological design, network-synthesis, and network-realization, and is aimed at ensuring that a new telecommunications network or service meets the needs of the subscriber and operator. [1] The process can be tailored according to each new network or service. [2] A traditional network planning methodology in the context of business decisions involves five layers of planning, namely: Each of these layers incorporates plans for different time horizons, i.e. the business planning layer determines the planning that the operator must perform to ensure that the network will perform as required for its intended life-span. The operations and maintenance layer, however, examines how the network will run on a day-to-day basis. The network planning process begins with the acquisition of external information. This includes: Planning a new network/service involves implementing the new system across the first four layers of the OSI Reference Model. [1] Choices must be made for the protocols and transmission technologies. [1][2] The network planning process involves three main steps: These steps are performed iteratively in parallel with one another. [1][2] During the process of network planning and design, estimates are made of the expected traffic intensity and traffic load that the network must support. [1] If a network of a similar nature already exists, traffic measurements of such a network can be used to calculate the exact traffic load. [2] If there are no similar networks, then the network planner must use telecommunications forecasting methods to estimate the expected traffic intensity. [1] The forecasting process involves several steps: [1] Dimensioning a new network determines the minimum capacity requirements that will still allow the teletraffic Grade of Service (GoS) requirements to be met. [1][2] To do this, dimensioning involves planning for peak-hour traffic, i.e. the hour of the day during which traffic intensity is at its peak. [1] The dimensioning process involves determining the network's topology, routing plan, traffic matrix, and GoS requirements, and using this information to determine the maximum call handling capacity of the switches and the maximum number of channels required between the switches. [1] This process requires a complex model that simulates the behavior of the network equipment and routing protocols. A dimensioning rule is that the planner must ensure that the traffic load never approaches 100 percent of capacity. [1] To calculate the correct dimensioning to comply with the above rule, the planner must take ongoing measurements of the network's traffic and continuously maintain and upgrade resources to meet the changing requirements. [1][2] Another reason for overprovisioning is to make sure that traffic can be rerouted in case a failure occurs in the network. Because of its complexity, network dimensioning is typically done using specialized software tools. Whereas researchers typically develop custom software to study a particular problem, network operators typically make use of commercial network planning software. Compared to network engineering, which adds resources such as links, routers, and switches into the network, traffic engineering targets changing traffic paths on the existing network to alleviate traffic congestion or accommodate more traffic demand.
This technology is critical when the cost of network expansion is prohibitively high and the network load is not optimally balanced. The first part provides the financial motivation for traffic engineering, while the second part grants the possibility of deploying this technology. Network survivability enables the network to maintain maximum network connectivity and quality of service under failure conditions. It has been one of the critical requirements in network planning and design. It involves design requirements on topology, protocol, bandwidth allocation, etc. A topology requirement can be maintaining a minimum two-connected network against any failure of a single link or node. Protocol requirements include using a dynamic routing protocol to reroute traffic against network dynamics during the transition of network dimensioning or equipment failures. Bandwidth allocation requirements proactively allocate extra bandwidth to avoid traffic loss under failure conditions. This topic has been actively studied in conferences such as the International Workshop on Design of Reliable Communication Networks (DRCN). [3] More recently, with the increasing role of artificial intelligence technologies in engineering, the idea of using data to create data-driven models of existing networks has been proposed. [4] By analyzing large network data, the less desired behaviors that may occur in real-world networks can also be understood, worked around, and avoided in future designs. Both the design and management of networked systems can be improved by the data-driven paradigm. [5] Data-driven models can also be used at various phases of the service and network management life cycle, such as service instantiation, service provisioning, optimization, monitoring, and diagnostics. [6]
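A minimal sketch of the dimensioning rule mentioned earlier, that planned load should never approach 100 percent of capacity; the 80 percent utilization ceiling is an assumed planning margin used for illustration, not a standard value:

    def required_capacity(peak_hour_load, max_utilization=0.8):
        # Smallest capacity keeping utilization at or below the ceiling,
        # so the traffic load never approaches 100 percent.
        return peak_hour_load / max_utilization

    # 400 units of forecast peak-hour traffic (Erlangs, Mbit/s, ...) -> 500 units
    print(required_capacity(400))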
https://en.wikipedia.org/wiki/Network_planning_and_design
A network protector is a type of electric protective device used in electricity distribution systems. The network protector automatically disconnects its associated distribution transformer from the secondary network when power starts flowing in the reverse direction. Network protectors are used on both spot networks and grid networks. The secondary grid system improves continuity of service for customers, since multiple sources are available to supply the load; a fault with any one supply is automatically isolated by the network protector and does not interrupt service from the other sources. Secondary grids are often used in downtown areas of cities where there are many customers in a small area. Typically the network protector is set to close when the voltage difference and phase angle are such that the transformer will supply power to the secondary grid, and is set to open when the secondary grid would back-feed through the transformer and supply power to the primary circuit. Network protectors typically have three settings: "automatic", "open", and "close". The top side is fed from multiple protectors and is always energized unless all units on a spot network are in the open position. Grid units will always be energized on the top side from the many other units tied into the grid. A spot network is two or more transformers dedicated to a single customer; the grid feeds multiple customers. A network protector has a circuit-breaker set of contacts and a controlling protective relay. The components are enclosed in a protective housing; some network protectors are installed on transformers below grade and must be in water-resistant enclosures. The mechanism contains electrical and mechanical parts to switch the secondary contacts open and closed. The controlling relay monitors voltage and current in the transformer and opens or closes the contact mechanism through electrical signals. The relay uses a power/time curve so that small, short-term reverse power flows (such as from elevator hoists) are ignored. Spot units are typically 277/480 V, while grid units are 120/208 V. The network protector does not protect the (secondary) network cable from overload. The network protector is installed to protect the stability and reliability of the secondary grid by preventing power flow away from the customers and into the primary feeders. If there is a fault on the primary feeder, the substation circuit breaker is meant to open, disconnecting the primary feeder from one side. The problem is that this primary cable is also connected to a network transformer, which is interconnected to other network transformers on its secondary side. The secondary network will energize the primary feeder through the network transformer. This can be very dangerous, because a fault will continue to be 'fed' from the secondary network side of the transformer. Even without a fault, if the electric utility wants to perform maintenance on that primary cable, it must have a way to fully disconnect that primary cable, without worrying about the cable being energized by the secondary network through the network transformer. Thus, the network protector is designed to open its contacts if the relay senses backward-flowing current. However, if there is a fault on the secondary grid, the network protector is not designed to open its contacts; the secondary fault will continue to be fed from the primary side of the system.
In some cases, networks are designed with cable limiters (like fuses) to melt and disconnect the secondary fault under the right conditions. In other cases, the utility lets the cable 'burn clear', in which case the fault is allowed to remain fed until the cables fuse, and the fault is then isolated. Analysis of the system is required to ensure that the system can, indeed, supply enough current to fuse the cables, wherever the fault is. This method tends to work well at 120 volts, but it is less reliable at higher voltages. The danger in depending on the cable to 'burn clear' is that some conditions will not cause the cable to burn in this manner; instead, the entire section of cable can be damaged by excessive, long-term overloading, causing fires and damage to the secondary network. Typically, network protectors are contained inside a submersible enclosure which is bolted to the throat of the network transformer and placed in underground vaults. IEEE standard C57.12.44 covers network protectors.
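A highly simplified sketch of the reverse-power decision described above; a real protector relay uses a power/time trip curve, which is approximated here by a fixed threshold held for a minimum duration (all numbers are illustrative, not from any standard):

    # Brief reverse flows, such as those from elevator hoists, are ridden
    # through; sustained reverse flow trips the protector open.
    def protector_should_trip(power_samples_w, threshold_w, min_duration_s, dt_s):
        # power_samples_w: per-interval power readings; negative = reverse flow.
        needed = int(min_duration_s / dt_s)
        run = 0
        for p in power_samples_w:
            run = run + 1 if p < -threshold_w else 0
            if run >= needed:
                return True
        return False

    # Two seconds of 1 kW reverse flow, sampled every 0.1 s: trips.
    readings = [-1000.0] * 20
    print(protector_should_trip(readings, threshold_w=500.0,
                                min_duration_s=1.0, dt_s=0.1))   # True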
https://en.wikipedia.org/wiki/Network_protector
Network resource planning is an enhanced process of network planning that incorporates the disciplines of business planning, marketing, and engineering to develop integrated, dynamic master plans for all domains of communications networks. Many communications service providers - from wireline, wireless, and broadband to next-generation carriers - are introducing next-generation services such as interactive video over cell phones and multi-user conference calling. [1] These new services are straining the capacity of existing networks. In a 2006 Reuters interview, John Roese, CTO of Nortel, pointed out that YouTube almost destroyed the Internet, [2] and in a keynote speech at Cisco's C-Scape analyst conference in December 2006, John Chambers, CEO of Cisco Networks, said, "Things like YouTube are just the baby steps of the impact video will have on networks." Since every video transmission requires roughly 150 times the bandwidth of a voice transmission, it is estimated that a one percent adoption of the Verizon Wireless V CAST service required a 400 percent increase in Verizon's corresponding network capacity. The bandwidth-intensive nature of next-generation services has required traditional network planning to evolve. Subscriber growth of legacy services like voice and data had an incremental impact on networks: new subscriptions and their corresponding bandwidth demand followed a relatively linear growth curve. As such, planning methods such as link- and node-specific forecasting, or "trending", were sufficient to ensure networks could support current and planned subscribers. The dramatic swings in bandwidth demand that slight variances in subscription rates bring to bear on networks carrying services such as video can no longer be adequately planned for with these traditional methods. Network resource planning addresses the weaknesses of trending by incorporating business planning and marketing insight into the planning process. The addition of market analysis adds another layer of context and provides a feedback loop that enables more accurate planning. Furthermore, the importance of coordinating infrastructure investment activities across organizations is addressed to ensure that network capacity is provided when and where it is needed, and that human and operational support system resources are appropriately included in the planning process. The bandwidth needs of next-generation services have placed added pressure on carriers to migrate from traditional networks like the PSTN and TDMA to new Internet Protocol (IP)-based, or next-generation, networks that can more adequately support the new services. Planning the transition to IP-based networks is a difficult endeavor in many respects. The capital expenditure (CAPEX) challenge of these new networks is that while it remains expensive to make a mistake and deploy too much equipment (i.e., over-building the network and wasting assets), the non-linear relationship between bandwidth and network requirements means there are also significant costs from deploying too little (i.e., under-building the network and risking poor quality of service and lost market share). From a technical perspective, the new IP-based networks are also far more difficult to plan. The self-routing nature of IP networks requires planners to determine how the network will behave under normal, overloaded, and failure scenarios.
The fact that IP can drop or delay packets during overload conditions introduces new complexity to the system. Interactive services such as voice, two-way video, and gaming are particularly susceptible to the resultant digital jitter and delay. Under these circumstances, the network planners need to know how these services will be affected under varying conditions. In addition, they need to know how the network can be configured to provide the best quality of service at the least cost. [ 3 ] The issue is further compounded by the fact that the simplicity of IP network operations comes from a more uniform, layered approach to network architecture. It’s the interaction between the layers of the network that creates significant complexity. For example, routine services can run on an IP (and/or Ethernet) network, while high-QoS services are assigned to special routes. These services ride on the underlying logical transport network (ring or mesh), which in turn rides on the underlying optical infrastructure. For planning teams, the effect of traffic on each layer must be taken into account in the other layers. This situation is made even more complex when reliability and disaster scenarios come into play, as backup resources must be made available at each layer in the hierarchy. [ 3 ] Traditionally, network planning was performed on a domain-by-domain (i.e., transport, access, etc.) and isolated basis. Network Resource Planning has adapted to address the shared-fabric nature of IP networks by integrating planning across domains. Network planners have a much more powerful tool in Network Resource Planning for leveraging all of the strengths of the various domains in comprehensive master plans. Over the next five years, the vast majority of tier-1 and -2 service providers are expected to shift to convergent network planning systems to handle the complexity of these networks, as well as reduce CAPEX and operational costs. [ 4 ]
https://en.wikipedia.org/wiki/Network_resource_planning
A network scheduler, also called packet scheduler, queueing discipline (qdisc) or queueing algorithm, is an arbiter on a node in a packet switching communication network. It manages the sequence of network packets in the transmit and receive queues of the protocol stack and network interface controller. There are several network schedulers available for different operating systems that implement many of the existing network scheduling algorithms. The network scheduler logic decides which network packet to forward next. The network scheduler is associated with a queuing system, storing the network packets temporarily until they are transmitted. Systems may have a single queue or multiple queues, in which case each may hold the packets of one flow, classification, or priority. In some cases it may not be possible to schedule all transmissions within the constraints of the system. In these cases the network scheduler is responsible for deciding which traffic to forward and which gets dropped. A network scheduler may have responsibility for the implementation of specific network traffic control initiatives. Network traffic control is an umbrella term for all measures aimed at reducing network congestion, latency and packet loss. Specifically, active queue management (AQM) is the selective dropping of queued network packets to achieve the larger goal of preventing excessive network congestion. The scheduler must choose which packets to drop. Traffic shaping smooths the bandwidth requirements of traffic flows by delaying the transmission of packets when they are queued in bursts. The scheduler decides the timing of the transmitted packets. Quality of service (QoS) is the prioritization of traffic based on service class (Differentiated services) or reserved connection (Integrated services). In the course of time, many network queueing disciplines have been developed. Each of these provides specific reordering or dropping of network packets inside various transmit or receive buffers. [1] Queuing disciplines are commonly used as attempts to compensate for various networking conditions, such as reducing the latency for certain classes of network packets, and are generally used as part of QoS measures. [2][3][4] Classful queueing disciplines allow the creation of classes, which work like branches on a tree. Rules can then be set to filter packets into each class. Each class can itself have other classful or classless queueing disciplines assigned to it. Classless queueing disciplines do not allow other queueing disciplines to be added to them. [5] Examples of algorithms suitable for managing network traffic include: Several of the above have been implemented as Linux kernel modules [13][14] and are freely available. Bufferbloat is a phenomenon in packet-switched networks in which excess buffering of packets causes high latency and packet delay variation. Bufferbloat can be addressed by a network scheduler that strategically discards packets to avoid an unnecessarily high buffering backlog. Examples include CoDel, FQ-CoDel and random early detection. The Linux kernel packet scheduler is an integral part of the Linux kernel's network stack and manages the transmit and receive ring buffers of all NICs. The packet scheduler is configured using the utility called tc (short for traffic control). As the default queuing discipline, the packet scheduler uses a FIFO implementation called pfifo_fast, [15] although systemd since its version 217 changes the default queuing discipline to fq_codel. [16]
The ifconfig and ip utilities enable system administrators to configure the buffer sizes txqueuelen and rxqueuelen for each device separately, in terms of the number of Ethernet frames regardless of their size. The Linux kernel's network stack contains several other buffers, which are not managed by the network scheduler. [a] Berkeley Packet Filter filters can be attached to the packet scheduler's classifiers. The eBPF functionality brought by version 4.1 of the Linux kernel in 2015 extends the classic BPF programmable classifiers to eBPF. [17] These can be compiled using the LLVM eBPF backend and loaded into a running kernel using the tc utility. [18] ALTQ is the implementation of a network scheduler for BSDs. As of OpenBSD version 5.5, ALTQ was replaced by the HFSC scheduler. Schedulers in communication networks manage resource allocation, including packet prioritization, timing, and resource distribution. Advanced implementations increasingly leverage artificial intelligence to address the complexities of modern network configurations. For instance, a supervised neural network (NN)-based scheduler has been introduced in cell-free networks to efficiently handle interactions between multiple radio units (RUs) and user equipment (UEs). This approach reduces computational complexity while optimizing latency, throughput, and resource allocation, making it a promising solution for beyond-5G networks. [19]
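As a concrete illustration of the traffic-shaping function described earlier, the sketch below implements the generic token-bucket mechanism; it is a textbook sketch, not the implementation of any particular qdisc:

    import time

    class TokenBucket:
        # Packets may be sent only when enough tokens have accumulated,
        # which smooths bursts down to the configured average rate.
        def __init__(self, rate_bytes_per_s, burst_bytes):
            self.rate = rate_bytes_per_s
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def try_send(self, packet_len_bytes):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_len_bytes <= self.tokens:
                self.tokens -= packet_len_bytes
                return True          # transmit now
            return False             # delay (queue) or drop, per policy

    shaper = TokenBucket(rate_bytes_per_s=125_000, burst_bytes=10_000)  # ~1 Mbit/s
    print(shaper.try_send(1500))     # True while the burst allowance lasts

A packet held back here is delayed rather than dropped, which is exactly the shaping behavior described above.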
https://en.wikipedia.org/wiki/Network_scheduler
In mathematical optimization, the network simplex algorithm is a graph-theoretic specialization of the simplex algorithm. The algorithm is usually formulated in terms of a minimum-cost flow problem. The network simplex method works very well in practice, typically 200 to 300 times faster than the simplex method applied to a general linear program of the same dimensions. [1] For a long time, the existence of a provably efficient network simplex algorithm was one of the major open problems in complexity theory, even though efficient-in-practice versions were available. In 1995, Orlin provided the first polynomial-time algorithm, with a runtime of O(V²E log(VC)) {\displaystyle O(V^{2}E\log(VC))}, where C {\displaystyle C} is the maximum cost of any edge. [2] Later, Tarjan improved this to O(VE log V log(VC)) {\displaystyle O(VE\log V\log(VC))} using dynamic trees in 1997. [3] Strongly polynomial dual network simplex algorithms for the same problem, but with a higher dependence on the numbers of edges and vertices in the graph, have been known for longer. [4] The network simplex method is an adaptation of the bounded-variable primal simplex algorithm. The basis is represented as a rooted spanning tree of the underlying network, in which variables are represented by arcs and the simplex multipliers by node potentials. At each iteration, an entering variable is selected by some pricing strategy, based on the dual multipliers (node potentials), and forms a cycle with the arcs of the tree. The leaving variable is the arc of the cycle with the least augmenting flow. The substitution of the entering arc for the leaving arc, and the reconstruction of the tree, is called a pivot. When no non-basic arc remains eligible to enter, the optimal solution has been reached. The network simplex algorithm can be used to solve many practical problems. [5]
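For illustration, the NetworkX library provides a network simplex routine for minimum-cost flow; the small instance below (node names, costs, and capacities) is invented:

    import networkx as nx

    # Ship 4 units from "s" to "t" at minimum cost; negative demand = supply.
    G = nx.DiGraph()
    G.add_node("s", demand=-4)
    G.add_node("t", demand=4)
    G.add_edge("s", "a", weight=3, capacity=4)
    G.add_edge("a", "t", weight=1, capacity=2)
    G.add_edge("a", "b", weight=1, capacity=2)
    G.add_edge("b", "t", weight=1, capacity=3)

    flow_cost, flow_dict = nx.network_simplex(G)
    print(flow_cost)   # 18: cost of the optimal flow
    print(flow_dict)   # per-arc flows, e.g. {'s': {'a': 4}, 'a': {'t': 2, 'b': 2}, ...}

The routine maintains the spanning-tree basis and performs the pivots described above internally.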
https://en.wikipedia.org/wiki/Network_simplex_algorithm
The network theory of aging supports the idea that multiple connected processes contribute to the biology of aging. Kirkwood and Kowald helped to establish the first model of this kind by connecting theories and predicting specific mechanisms. In a departure from investigating a single mechanistic cause or single molecules that lead to senescence, the network theory of aging takes a systems-biology view, integrating theories in conjunction with computational models and quantitative data related to the biology of aging. The network theory of aging provides a deeper look at the damage and repair processes at the cellular level and the ever-changing balance between those processes. To fully understand the network theory as it is applied to aging, one must look at the different hierarchical elements of the theory as they pertain to aging.
https://en.wikipedia.org/wiki/Network_theory_of_aging
Network throughput (or just throughput, when in context) refers to the rate of message delivery over a communication channel in a communication network, such as Ethernet or packet radio. The data that these messages contain may be delivered over physical or logical links, or through network nodes. Throughput is usually measured in bits per second (bit/s, sometimes abbreviated bps), and sometimes in packets per second (p/s or pps) or data packets per time slot. The system throughput or aggregate throughput is the sum of the data rates that are delivered over all channels in a network. [1] Throughput represents digital bandwidth consumption. The throughput of a communication system may be affected by various factors, including the limitations of the underlying physical medium, the available processing power of the system components, end-user behavior, etc. When various protocol overheads are taken into account, the useful rate of the data transfer can be significantly lower than the maximum achievable throughput; the useful part is usually referred to as goodput. Users of telecommunications devices, systems designers, and researchers into communication theory are often interested in knowing the expected performance of a system. From a user perspective, this is often phrased as either "which device will get my data there most effectively for my needs?" or "which device will deliver the most data per unit cost?". Systems designers often select the most effective architecture or design constraints for a system, which drive its final performance. In most cases, the benchmark of what a system is capable of, or its maximum performance, is what the user or designer is interested in. The term maximum throughput is frequently used when discussing end-user maximum throughput tests. Maximum throughput is essentially synonymous with digital bandwidth capacity. Four different values relevant in the context of maximum throughput are used in comparing the upper-limit conceptual performance of multiple systems: maximum theoretical throughput, maximum achievable throughput, peak measured throughput, and maximum sustained throughput. These values represent different qualities, and care must be taken that the same definitions are used when comparing different maximum throughput values. Each bit must carry the same amount of information if throughput values are to be compared. Data compression can significantly alter throughput calculations, including generating values exceeding 100% in some cases. If the communication is mediated by several links in series with different bit rates, the maximum throughput of the overall link is lower than or equal to the lowest bit rate. The lowest-value link in the series is referred to as the bottleneck. Maximum theoretical throughput is closely related to the channel capacity of the system [2] and is the maximum possible quantity of data that can be transmitted under ideal circumstances. In some cases, this number is reported as equal to the channel capacity, though this can be deceptive, as only non-packetized system technologies can achieve this. Maximum theoretical throughput is more accurately reported taking into account format and specification overhead with best-case assumptions.
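The series-of-links rule above amounts to taking a minimum over the path; a trivial sketch with invented link rates:

    def path_max_throughput(link_rates_bit_s):
        # End-to-end ceiling for links in series: the bottleneck link.
        return min(link_rates_bit_s)

    # 100 Mbit/s Ethernet -> 1 Gbit/s backbone -> 54 Mbit/s wireless hop:
    print(path_max_throughput([100e6, 1e9, 54e6]) / 1e6, "Mbit/s")   # 54.0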
The asymptotic throughput (less formally, asymptotic bandwidth) for a packet-mode communication network is the value of the maximum throughput function as the incoming network load approaches infinity, either due to the message size [3] or the number of data sources. As with other bit rates and data bandwidths, the asymptotic throughput is measured in bits per second (bit/s) or (rarely) bytes per second (B/s), where 1 B/s is 8 bit/s. Decimal prefixes are used, meaning that 1 Mbit/s is 1000000 bit/s. Asymptotic throughput is usually estimated by sending or simulating a very large message (a sequence of data packets) through the network, using a greedy source and no flow control mechanism (i.e., UDP rather than TCP), and measuring the volume of data received at the destination node. Traffic load between other sources may reduce this maximum network path throughput. Alternatively, a large number of sources and sinks may be modeled, with or without flow control, and the aggregate maximum network throughput measured (the sum of traffic reaching its destinations). In a network simulation model with infinitely large packet queues, the asymptotic throughput occurs when the latency (the packet queuing time) goes to infinity, while if the packet queues are limited, or the network is a multi-drop network with many sources and collisions may occur, the packet-dropping rate approaches 100%. A well-known application of asymptotic throughput is in modeling point-to-point communication, where the message latency T(N) {\displaystyle T(N)} is modeled as a function of the message length N {\displaystyle N} as T(N) = (M + N)/A {\displaystyle T(N)=(M+N)/A}, where A {\displaystyle A} is the asymptotic bandwidth and M {\displaystyle M} is the half-peak length. [4] As well as its use in general network modeling, asymptotic throughput is used in modeling performance on massively parallel computer systems, where system operation is highly dependent on communication overhead as well as processor performance. [5] In these applications, asymptotic throughput is used in modeling that includes the number of processors, so that both the latency and the asymptotic throughput are functions of the number of processors. [6] Whereas asymptotic throughput is a theoretical or calculated capacity, peak measured throughput is throughput measured on a real, implemented system, or on a simulated system. The value is the throughput measured over a short period of time; mathematically, this is the limit taken with respect to throughput as time approaches zero. This term is synonymous with instantaneous throughput. This number is useful for systems that rely on burst data transmission; however, for systems with a high duty cycle, it is less likely to be a useful measure of system performance. Maximum sustained throughput is the throughput averaged or integrated over a long time (sometimes considered infinity). For high-duty-cycle networks, this is likely to be the most accurate indicator of system performance. The maximum throughput is defined as the asymptotic throughput when the load (the amount of incoming data) is large. In packet-switched systems where the load and the throughput are always equal (where packet loss does not occur), the maximum throughput may be defined as the minimum load in bit/s that causes the delivery time (the latency) to become unstable and increase towards infinity. This value can also be used deceptively in relation to peak measured throughput to conceal packet shaping.
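A small sketch of the point-to-point model above, showing the effective throughput N/T(N) = A·N/(M + N) approaching the asymptote A as the message length grows; the parameter values are arbitrary:

    def latency(N, M, A):
        # T(N) = (M + N) / A: A = asymptotic bandwidth, M = half-peak length.
        return (M + N) / A

    def effective_throughput(N, M, A):
        return N / latency(N, M, A)          # = A * N / (M + N)

    A, M = 1e9, 10_000                        # 1 Gbit/s asymptote, M in bits
    for N in (1_000, 10_000, 100_000, 10_000_000):
        print(N, effective_throughput(N, M, A) / 1e9, "Gbit/s")
    # throughput equals A/2 at N = M and approaches A for very long messages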
Throughput is sometimes normalized and measured in percentage, but normalization may cause confusion regarding what the percentage is related to. Channel utilization , channel efficiency and packet drop rate in percentage are less ambiguous terms. The channel efficiency, also known as bandwidth utilization efficiency , is the percentage of the net bit rate (in bit/s ) of a digital communication channel that goes to the actually achieved throughput. For example, if the throughput is 70 Mbit/s in a 100 Mbit/s Ethernet connection, the channel efficiency is 70%. In this example, 70 Mbit of data are effectively transmitted every second. Channel utilization is instead a term related to the use of the channel, disregarding the throughput. It accounts not only for the data bits but also for the overhead that makes use of the channel. The transmission overhead consists of preamble sequences, frame headers and acknowledgement packets. The definitions assume a noiseless channel. Otherwise, the throughput would be associated not only with the nature (efficiency) of the protocol but also with retransmissions resulting from the quality of the channel. In a simplistic approach, channel efficiency can be equal to channel utilization, assuming that acknowledgement packets are zero-length and that no bandwidth is consumed by retransmissions or headers. Therefore, certain texts mark a difference between channel utilization and protocol efficiency. In a point-to-point or point-to-multipoint communication link, where only one terminal is transmitting, the maximum throughput is often equivalent to or very near the physical data rate (the channel capacity ), since the channel utilization can be almost 100% in such a network, except for a small inter-frame gap. For example, the maximum frame size in Ethernet is 1526 bytes: up to 1500 bytes for the payload, eight bytes for the preamble, 14 bytes for the header, and 4 bytes for the trailer. An additional minimum interframe gap corresponding to 12 bytes is inserted after each frame. This corresponds to a maximum channel utilization of 1526 / (1526 + 12) × 100% = 99.22%, or a maximum channel use of 99.22 Mbit/s inclusive of Ethernet datalink layer protocol overhead in a 100 Mbit/s Ethernet connection. The maximum throughput or channel efficiency is then 1500 / (1526 + 12) = 97.5%, exclusive of the Ethernet protocol overhead. The throughput of a communication system will be limited by many factors. Some of these are described below: The maximum achievable throughput (the channel capacity) is affected by the bandwidth in hertz and the signal-to-noise ratio of the analog physical medium. Despite the conceptual simplicity of digital information, all electrical signals traveling over wires are analog. The analog limitations of wires or wireless systems inevitably provide an upper bound on the amount of information that can be sent. The dominant equation here is the Shannon–Hartley theorem , and analog limitations of this type can be understood as factors that affect either the analog bandwidth of a signal or factors that affect the signal-to-noise ratio. The bandwidth of wired systems can in fact be surprisingly [ according to whom? ] narrow, with the bandwidth of Ethernet wire limited to approximately 1 GHz, and PCB traces limited by a similar amount. Digital systems refer to the 'knee frequency', [ 7 ] which is related to the rise time of the digital voltage: the time required to rise from 10% to 90% of the swing between a nominal digital '0' and a nominal digital '1'.
The knee frequency is related to the required bandwidth of a channel, and can be related to the 3 dB bandwidth of a system by the equation: [ 8 ] F 3dB ≈ K / T r {\displaystyle F_{3{\text{dB}}}\approx K/T_{r}} where T r is the 10% to 90% rise time, and K is a constant of proportionality related to the pulse shape, equal to 0.35 for an exponential rise and 0.338 for a Gaussian rise. Computational systems have finite processing power and can drive finite current. Limited current-drive capability can limit the effective signal-to-noise ratio for high-capacitance links. Large data loads that require processing impose data processing requirements on hardware (such as routers). For example, a gateway router supporting a populated class B subnet , handling 10 × 100 Mbit/s Ethernet channels, must examine 16 bits of address to determine the destination port for each packet. This translates into 81,913 packets per second (assuming the maximum data payload per packet); with a table of 2^16 addresses, the router must be able to perform 5.368 billion lookup operations per second. In a worst-case scenario, where the payloads of each Ethernet packet are reduced to 100 bytes, this number of operations per second jumps to 520 billion. This router would require a multi-teraflop processing core to be able to handle such a load. Ensuring that multiple users can harmoniously share a single communications link requires some kind of equitable sharing of the link. If a bottleneck communication link offering data rate R is shared by "N" active users (with at least one data packet in queue), every user typically achieves a throughput of approximately R/N , if fair queuing best-effort communication is assumed. The maximum throughput is often an unreliable measurement of perceived bandwidth, for example the file transmission data rate in bits per second. As pointed out above, the achieved throughput is often lower than the maximum throughput. Also, the protocol overhead affects the perceived bandwidth. The throughput is not a well-defined metric when it comes to how to deal with protocol overhead. It is typically measured at a reference point below the network layer and above the physical layer. The simplest definition is the number of bits per second that are physically delivered. A typical example where this definition is practiced is an Ethernet network. In this case, the maximum throughput is the gross bit rate or raw bit rate. However, in schemes that include forward error correction codes (channel coding), the redundant error code is normally excluded from the throughput. An example is modem communication, where the throughput is typically measured in the interface between the Point-to-Point Protocol (PPP) and the circuit-switched modem connection. In this case, the maximum throughput is often called the net bit rate or useful bit rate. To determine the actual data rate of a network or connection, the " goodput " measurement definition may be used. For example, in file transmission, the "goodput" corresponds to the file size (in bits) divided by the file transmission time. The " goodput " is the amount of useful information that is delivered per second to the application layer protocol. Dropped packets or packet retransmissions, as well as protocol overhead, are excluded. Because of that, the "goodput" is lower than the throughput. Technical factors that affect the difference are presented in the " goodput " article.
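The Ethernet frame arithmetic and the knee-frequency relation above are straightforward to reproduce. The sketch below recomputes the channel utilization and efficiency figures for a 100 Mbit/s link, evaluates F_3dB ≈ K/T_r for an assumed 1 ns rise time (the rise time is the only figure not taken from the text), and repeats the gateway-router packet-rate calculation.

```python
# 1) Ethernet channel utilization and efficiency (frame sizes from the example above, in bytes).
payload, preamble, header, trailer, ifg = 1500, 8, 14, 4, 12
frame = payload + preamble + header + trailer        # 1526 bytes on the wire
slot = frame + ifg                                   # 1538 bytes per frame slot
print(f"Channel utilization: {frame / slot:.2%}")    # ~99.22%
print(f"Channel efficiency:  {payload / slot:.2%}")  # ~97.5%

# 2) Knee frequency / 3 dB bandwidth: F_3dB ≈ K / Tr (Tr is an assumed 1 ns rise time).
K = 0.35                                             # exponential-rise constant
Tr = 1e-9
print(f"Approximate 3 dB bandwidth: {K / Tr / 1e6:.0f} MHz")   # ~350 MHz

# 3) Gateway router: ten 100 Mbit/s channels with maximum-size frames.
packets_per_second = (10 * 100e6) / (frame * 8)
lookups_per_second = packets_per_second * 2**16      # naive scan of a 2^16-entry table
print(f"Packets per second: {packets_per_second:,.0f}")               # ~81,913
print(f"Lookups per second: {lookups_per_second / 1e9:.3f} billion")  # ~5.368 billion
```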
Often, a block in a data flow diagram has a single input and a single output, and operates on discrete packets of information. Examples of such blocks are fast Fourier transform modules or binary multipliers . Because the units of throughput are the reciprocal of the unit for propagation delay , which is 'seconds per message' or 'seconds per output', throughput can be used to relate a computational device performing a dedicated function such as an ASIC or embedded processor to a communications channel, simplifying system analysis. In wireless networks or cellular systems , the system spectral efficiency in bit/s/Hz/area unit, bit/s/Hz/site or bit/s/Hz/cell, is the maximum system throughput (aggregate throughput) divided by the analog bandwidth and some measure of the system coverage area. Throughput over analog channels is defined entirely by the modulation scheme, the signal-to-noise ratio, and the available bandwidth. Since throughput is normally defined in terms of quantified digital data, the term 'throughput' is not normally used for analog channels; the term 'bandwidth' is more often used instead.
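System spectral efficiency is simply the aggregate throughput normalized by bandwidth and coverage; a minimal sketch with assumed example numbers:

```python
# System spectral efficiency: aggregate throughput / (bandwidth * number of cells).
aggregate_throughput_bps = 100e6    # assumed aggregate (sum over all users), bit/s
bandwidth_hz = 20e6                 # assumed system bandwidth
cells = 5                           # assumed number of cells sharing that bandwidth

efficiency = aggregate_throughput_bps / (bandwidth_hz * cells)
print(f"System spectral efficiency: {efficiency:.2f} bit/s/Hz/cell")   # 1.00 bit/s/Hz/cell
```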
https://en.wikipedia.org/wiki/Network_throughput
Network tomography is the study of a network 's internal characteristics using information derived from end point data. The word tomography is used to link the field, in concept, to other processes that infer the internal characteristics of an object from external observation, as is done in MRI or PET scanning (even though the term tomography strictly refers to imaging by slicing). The field is a recent development in electrical engineering and computer science , dating from 1996. [ 1 ] Network tomography seeks to map the path data takes through the Internet by examining information from “edge nodes,” the computers where the data originate and where they are requested. The field is useful for engineers attempting to develop more efficient computer networks. Data derived from network tomography studies can be used to increase quality of service by limiting link packet loss and increasing routing optimization. There have been many published papers and tools in the area of network tomography, which aim to monitor the health of various links in a network in real time. These can be classified into loss and delay tomography. [ 2 ] [ 3 ] Loss tomography aims to find “lossy” links in a network by sending active “probes” from various vantage points in the network or the Internet. [ 4 ] [ 5 ] The area of delay tomography has also attracted attention in the recent past. It aims to find link delays using end-to-end probes sent from vantage points. This can potentially help isolate links with large queueing delays caused by congestion . [ 6 ] Network tomography may be able to infer network topology using end-to-end probes. Topology discovery involves a tradeoff between accuracy and overhead. With network tomography, the emphasis is on achieving as accurate a picture of the network as possible with minimal overhead. In comparison, other network topology discovery techniques using SNMP or route analytics aim for greater accuracy with less emphasis on overhead reduction. Network tomography may find links which are shared by multiple paths (and can thus become potential bottlenecks in the future). [ 7 ] Network tomography may also improve the control of a smart grid . [ 8 ]
https://en.wikipedia.org/wiki/Network_tomography
In computer networking , network traffic control is the process of managing, controlling or reducing the network traffic, particularly Internet bandwidth , e.g. by the network scheduler . [ 1 ] It is used by network administrators to reduce congestion , latency and packet loss . This is part of bandwidth management . In order to use these tools effectively, it is necessary to measure the network traffic to determine the causes of network congestion and attack those problems specifically. Network traffic control is an important subject in datacenters, as it is necessary for efficient use of datacenter network bandwidth and for maintaining service level agreements. [ 1 ] Traffic shaping is the retiming (delaying) of packets (or frames ) until they meet specified bandwidth and/or burstiness limits. [ 1 ] Since such delays involve queues that are nearly always finite and, once full, excess traffic is nearly always dropped (discarded), traffic shaping nearly always implies traffic policing as well. Traffic policing is the dropping (discarding) or reduction in priority (demoting) of packets (or frames) that exceed some specified bandwidth and/or burstiness limit.
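The article does not name a specific shaping algorithm, but a token bucket is one common way to delay packets until they conform to a rate and burst limit; the sketch below is a hypothetical illustration of that idea rather than a description of any particular implementation.

```python
# Minimal token-bucket traffic shaper (illustrative only).
# Packets are released when enough tokens (bytes of credit) have accumulated;
# otherwise their transmission is delayed, which is the essence of shaping.

def shape(packets, rate_bps, burst_bytes):
    """packets: list of (arrival_time_s, size_bytes), sorted by arrival; returns release times."""
    rate = rate_bps / 8.0          # token fill rate in bytes per second
    tokens = burst_bytes           # bucket starts full
    last = 0.0                     # time of the last token update
    releases = []
    for arrival, size in packets:
        t = max(arrival, last)
        tokens = min(burst_bytes, tokens + (t - last) * rate)   # refill since the last event
        if tokens < size:                                       # not enough credit yet:
            t += (size - tokens) / rate                         # wait until it accumulates
            tokens = size
        tokens -= size
        last = t
        releases.append(t)
    return releases

# Example: four 1500-byte packets arriving at t=0, shaped to 1 Mbit/s with a 3000-byte burst.
print(shape([(0.0, 1500)] * 4, rate_bps=1e6, burst_bytes=3000))   # [0.0, 0.0, 0.012, 0.024]
```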
https://en.wikipedia.org/wiki/Network_traffic_control
Network utilities are software utilities designed to analyze and configure various aspects of computer networks . The majority of them originated on Unix systems, but several later ports to other operating systems exist. A small set of common tools is found on most operating systems, and some network configuration tools also serve to display and diagnose networks.
https://en.wikipedia.org/wiki/Network_utility
The Networked Readiness Index is an index published annually by the World Economic Forum in collaboration with INSEAD , as part of their annual Global Information Technology Report . [ citation needed ] It aims to measure the degree of readiness of countries to exploit opportunities offered by information and communications technology . [ 1 ] [ 2 ] The Networked Readiness Index was first conceived of and constructed by Geoffrey Kirkman, Jeffrey Sachs and Carlos Osorio in 2002 at Harvard University. [ 3 ] The 2016 edition covers 139 nations. [ 4 ]
https://en.wikipedia.org/wiki/Networked_Readiness_Index
The Networked Transport of RTCM via Internet Protocol ( NTRIP ) is a protocol for streaming differential GPS (DGPS) corrections over the Internet for real-time kinematic positioning . NTRIP is a generic, stateless protocol based on the Hypertext Transfer Protocol HTTP/1.1 and is enhanced for GNSS data streams. [ 1 ] The specification is standardized by the Radio Technical Commission for Maritime Services (RTCM). [ 2 ] NTRIP was developed by the German Federal Agency for Cartography and Geodesy (BKG) [ 3 ] and the Dortmund University Department of Computer Science . [ 4 ] NTRIP was released in September 2004. [ 5 ] The 2011 version of the protocol is version 2.0. [ 6 ] NTRIP was formerly [ 7 ] an open standard protocol, but as of 2020 it is no longer freely available. There is an open source implementation available from software.rtcm-ntrip.org, from which the protocol can be reverse-engineered.
https://en.wikipedia.org/wiki/Networked_Transport_of_RTCM_via_Internet_Protocol
Networked flying platforms (NFPs) are unmanned flying platforms of various types, including unmanned aerial vehicles (UAVs), drones , tethered balloons and high-altitude/medium-altitude/low-altitude platforms (HAPs/MAPs/LAPs), carrying RF / mmWave / FSO payloads ( transceivers ) along with extended battery life capabilities. They float or move [ 1 ] in the air at quasi-stationary positions, with the ability to move horizontally and vertically, to offer 5G and beyond-5G (B5G) cellular networks and network support services. Two NFP deployment configurations are possible. NFPs can be manually (non-autonomously) controlled but are mainly designed for autonomous, pre-determined flights. [ 8 ] NFPs can either operate in a single-NFP mode, in which an NFP does not cooperate with other NFPs in the network (if any exist), or as a swarm of NFPs, in which multiple interconnected NFPs cooperate, collaborate and perform the network mission autonomously, with one of the NFPs designated as the mother-NFP. [ 2 ]
https://en.wikipedia.org/wiki/Networked_flying_platform
The Neugebauer equations are a set of equations used to model color printing systems, developed by Hans E. J. Neugebauer . [ 1 ] [ 2 ] They were intended to predict the color produced by a combination of halftones printed in cyan, magenta, and yellow inks . The equations estimate the reflectance (in CIE XYZ coordinates or as a function of wavelength) as a function of the reflectance of the 8 possible combinations of CMY inks (or the 16 combinations of CMYK inks), weighted by the area they take up on the paper. In wavelength form: [ 1 ] R ( λ ) = ∑ i w i R i ( λ ) {\displaystyle R(\lambda )=\sum _{i}w_{i}R_{i}(\lambda )} where R i ( λ ) is the reflectance of ink combination i , and w i is the relative proportion of each combination (8 for CMY, 16 for CMYK) in a uniformly colored patch. The weights are dependent on the halftone pattern and possibly subject to various forms of dot gain . [ 3 ] Light can interact with the paper and ink in more complex ways. The Yule–Nielsen correction takes into account light entering through blank regions and re-emerging through ink: [ 4 ] R ( λ ) 1 / n = ∑ i w i R i ( λ ) 1 / n {\displaystyle R(\lambda )^{1/n}=\sum _{i}w_{i}R_{i}(\lambda )^{1/n}} The factor n would be 2 for a perfectly diffusing Lambertian paper substrate, but can be adjusted based on empirical measurements. Further considerations of the optics, such as multiple internal reflections, can be added at the price of additional complexity. In order to achieve a desired reflectance, these equations have to be inverted to produce the actual dot areas or digital values sent to the printer, a nontrivial operation that may have multiple solutions. [ 5 ]
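A small numerical sketch of the forward prediction may be helpful. The primary reflectance spectra below are random placeholders standing in for measured data, and the dot-area weights follow the usual Demichel products, an additional assumption (random, independent dot overlap) not spelled out above.

```python
import numpy as np

# Sketch of a Neugebauer prediction for a CMY halftone patch.
# The primary spectra are placeholders; in practice they are measured for the
# 8 Neugebauer primaries (paper, C, M, Y, CM, CY, MY, CMY).
wavelengths = np.linspace(400, 700, 31)            # nm
primaries = {name: np.random.rand(31) for name in
             ["w", "c", "m", "y", "cm", "cy", "my", "cmy"]}   # stand-in spectra

def neugebauer(c, m, y, n=1.0):
    """Predict patch reflectance from fractional dot areas c, m, y.
    Weights follow Demichel products (assumed random dot overlap);
    n is the Yule-Nielsen exponent (n=1 recovers the plain Neugebauer equations)."""
    w = {
        "w":   (1-c)*(1-m)*(1-y), "c":   c*(1-m)*(1-y),
        "m":   (1-c)*m*(1-y),     "y":   (1-c)*(1-m)*y,
        "cm":  c*m*(1-y),         "cy":  c*(1-m)*y,
        "my":  (1-c)*m*y,         "cmy": c*m*y,
    }
    mix = sum(w[k] * primaries[k] ** (1.0 / n) for k in w)
    return mix ** n

R = neugebauer(0.5, 0.2, 0.1, n=2.0)    # n=2 corresponds to a perfectly diffusing substrate
print(R.shape, float(R.mean()))
```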
https://en.wikipedia.org/wiki/Neugebauer_equations
In mathematics, the Neumann polynomials , introduced by Carl Neumann for the special case α = 0 {\displaystyle \alpha =0} , are a sequence of polynomials in 1 / t {\displaystyle 1/t} used to expand functions in terms of Bessel functions . [ 1 ] The first few polynomials are A general form for the polynomial is and they have the "generating function" where J are Bessel functions . To expand a function f in the form for | t | < c {\displaystyle |t|<c} , compute where c ′ < c {\displaystyle c'<c} and c is the distance of the nearest singularity of f(z) from z = 0 {\displaystyle z=0} . An example is the extension or the more general Sonine formula [ 2 ] where C k ( s ) {\displaystyle C_{k}^{(s)}} is Gegenbauer's polynomial . Then, [ citation needed ] [ original research? ] the confluent hypergeometric function and in particular the index shift formula the Taylor expansion (addition formula) (cf. [ 3 ] [ failed verification ] ) and the expansion of the integral of the Bessel function, are of the same type.
https://en.wikipedia.org/wiki/Neumann_polynomial
A Neumann series is a mathematical series that sums k -times repeated applications of an operator T {\displaystyle T} . This has the generator form where T k {\displaystyle T^{k}} is the k -times repeated application of T {\displaystyle T} ; T 0 {\displaystyle T^{0}} is the identity operator I {\displaystyle I} and T k := T k − 1 ∘ T {\displaystyle T^{k}:={}T^{k-1}\circ {T}} for k > 0 {\displaystyle k>0} . This is a special case of the generalization of a geometric series of real or complex numbers to a geometric series of operators. The generalized initial term of the series is the identity operator T 0 = I {\displaystyle T^{0}=I} and the generalized common ratio of the series is the operator T . {\displaystyle T.} The series is named after the mathematician Carl Neumann , who used it in 1877 in the context of potential theory . The Neumann series is used in functional analysis . It is closely connected to the resolvent formalism for studying the spectrum of bounded operators and, applied from the left to a function, it forms the Liouville-Neumann series that formally solves Fredholm integral equations . Suppose that T {\displaystyle T} is a bounded linear operator on the normed vector space X {\displaystyle X} . If the Neumann series converges in the operator norm , then I − T {\displaystyle I-T} is invertible and its inverse is the series: where I {\displaystyle I} is the identity operator in X {\displaystyle X} . To see why, consider the partial sums Then we have This result on operators is analogous to geometric series in R {\displaystyle \mathbb {R} } . One case in which convergence is guaranteed is when X {\displaystyle X} is a Banach space and | T | < 1 {\displaystyle |T|<1} in the operator norm; another compatible case is that ∑ k = 0 ∞ | T k | {\textstyle \sum _{k=0}^{\infty }|T^{k}|} converges. However, there are also results which give weaker conditions under which the series converges. Let C ∈ R 3 × 3 {\displaystyle C\in \mathbb {R} ^{3\times 3}} be given by: For the Neumann series ∑ k = 0 n C k {\textstyle \sum _{k=0}^{n}C^{k}} to converge to ( I − C ) − 1 {\displaystyle (I-C)^{-1}} as n {\displaystyle n} goes to infinity, the matrix norm of C {\displaystyle C} must be smaller than unity. This norm is confirming that the Neumann series converges. A truncated Neumann series can be used for approximate matrix inversion . To approximate the inverse of an invertible matrix A {\displaystyle A} , consider that A − 1 = ( I − I + A ) − 1 = ( I − ( I − A ) ) − 1 = ( I − T ) − 1 {\displaystyle {\begin{aligned}A^{-1}&=(I-I+A)^{-1}\\&=(I-(I-A))^{-1}\\&=(I-T)^{-1}\end{aligned}}} for T = ( I − A ) . {\displaystyle T=(I-A).} Then, using the Neumann series identity that ∑ k = 0 ∞ T k = ( I − T ) − 1 {\textstyle \sum _{k=0}^{\infty }T^{k}=(I-T)^{-1}} if the appropriate norm condition on T = ( I − A ) {\displaystyle T=(I-A)} is satisfied, A − 1 = ( I − ( I − A ) ) − 1 = ∑ k = 0 ∞ ( I − A ) k . {\textstyle A^{-1}=(I-(I-A))^{-1}=\sum _{k=0}^{\infty }(I-A)^{k}.} Since these terms shrink with increasing k , {\displaystyle k,} given the conditions on the norm, then truncating the series at some finite n {\displaystyle n} may give a practical approximation to the inverse matrix: A corollary is that the set of invertible operators between two Banach spaces B {\displaystyle B} and B ′ {\displaystyle B'} is open in the topology induced by the operator norm. Indeed, let S : B → B ′ {\displaystyle S:B\to B'} be an invertible operator and let T : B → B ′ {\displaystyle T:B\to B'} be another operator. 
If | S − T | < | S − 1 | − 1 {\displaystyle |S-T|<|S^{-1}|^{-1}} , then T {\displaystyle T} is also invertible. Since | I − S − 1 T | < 1 {\displaystyle |I-S^{-1}T|<1} , the Neumann series ∑ k = 0 ∞ ( I − S − 1 T ) k {\textstyle \sum _{k=0}^{\infty }(I-S^{-1}T)^{k}} is convergent. Therefore, we have Taking the norms, we get The norm of T − 1 {\displaystyle T^{-1}} can be bounded by The Neumann series has been used for linear data detection in massive multiuser multiple-input multiple-output (MIMO) wireless systems. Using a truncated Neumann series avoids computation of an explicit matrix inverse, which reduces the complexity of linear data detection from cubic to quadratic. [ 1 ] Another application is the theory of propagation graphs, which takes advantage of Neumann series to derive closed-form expressions for transfer functions.
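The truncated-series approximation to a matrix inverse described above is easy to demonstrate numerically; the sketch below uses a small matrix chosen so that the norm condition on I − A holds.

```python
import numpy as np

# Approximate A^{-1} with a truncated Neumann series:
#   A^{-1} = (I - T)^{-1} = sum_{k>=0} T^k,  with T = I - A,
# valid when the spectral norm of T is below 1.
rng = np.random.default_rng(0)
n = 4
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # close to the identity, so ||I - A|| < 1
T = np.eye(n) - A
assert np.linalg.norm(T, 2) < 1                     # convergence condition

approx = np.zeros_like(A)
power = np.eye(n)
for k in range(20):                                 # truncate after 20 terms
    approx += power                                 # add T^k
    power = power @ T                               # prepare T^{k+1}

print("error:", np.linalg.norm(approx - np.linalg.inv(A)))   # small for sufficiently many terms
```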
https://en.wikipedia.org/wiki/Neumann_series
Neumorphism is a design style used in graphical user interfaces . It is commonly identified by a soft and light look (for which it is sometimes referred to as soft UI ) [ 1 ] with elements that appear to protrude from or dent into the background rather than float on top of it. [ 2 ] It is sometimes considered a medium between skeuomorphism and flat design . [ 3 ] The term neumorphism was coined by Jason Kelly in 2019 as a portmanteau of neo and skeuomorphism, emphasizing its role as a semi-revival of skeuomorphism. [ 4 ] Many neumorphic design concepts can be traced to Alexander Plyuto, who created a mockup for a banking app showing various elements of neumorphic design. He posted it to the website Dribbble , where it quickly gained popularity, reaching 3,000 views. [ 1 ] On November 12, 2020, Apple released macOS Big Sur . The update included graphical designs that featured neumorphism prominently, such as the app icons and use of translucency . [ 5 ] Neumorphism is a form of minimalism characterized by a soft and light look, often using pastel colors with low contrast . Elements are usually the same color as the background, and are only distinguished by shadows and highlights surrounding the element. This gives the elements the appearance that they are "protruding" from the background, or that they are dented into it. [ 2 ] [ 3 ] Designers may like the look and feel of neumorphism because it provides a middle ground between skeuomorphism and flat design. Specifically, it aims to look plausibly realistic, while still looking clean and adhering to minimalism. [ 6 ] Neumorphism has received considerable criticism, notably for its lack of accessibility , difficulty in implementation, [ 7 ] low contrast, and incompatibility with certain brands. [ 8 ]
https://en.wikipedia.org/wiki/Neumorphism
The Neupert effect refers to an empirical tendency for high-energy ('hard') X-ray emission to coincide temporally with the rate of rise of lower-energy ('soft') X-ray emission of a solar flare . [ 1 ] Here 'hard' and 'soft' mean above and below an energy of about 10 keV to solar physicists, though in non-solar X-ray astronomy one typically sets this boundary at a lower energy. This effect gets its name from NASA solar physicist and spectroscopist Werner Neupert, who first documented a related correlation (the integral form) between microwave ( gyrosynchrotron ) and soft X-ray emissions in 1968. [ 2 ] The standard interpretation is that the accelerated non-thermal electrons (which produce the hard X-rays via non-thermal bremsstrahlung ) deposit their energy in the lower solar atmosphere (the chromosphere ); this accumulated energy then leads to thermal (soft X-ray) emission as the chromospheric plasma heats and expands into the corona. [ 1 ] The effect is very common, but does not represent an exact relationship and is not observed in all solar flares. [ 3 ]
https://en.wikipedia.org/wiki/Neupert_effect
Neural Audio Corporation was an audio research company based in Kirkland, Washington . The company specialized in high-end audio research. It helped XM Satellite Radio launch their service using the Neural Codec Pre-Conditioner, which was designed to provide higher quality audio at lower bitrates . The company was co-founded by two audio engineers , Paul Hubert and Robert Reams, in 2000. In 2009 the company was acquired by DTS Inc. for $15 million in cash. [ 1 ] Neural was mostly known for its work in the field of audio processing and its "Neural Surround" sound format. ESPN , FOX, NBC, CBS, Sony, Universal, Warner Bros , THX, Yamaha, Pioneer Electronics , Ford, Honda , Nissan , Vivendi and SiriusXM were partners and customers in connection with sound for movies, broadcasting applications, music reproduction and video games . [ citation needed ] "Neural Surround" is a technology similar to MPEG Surround , in which a 5.1 stream is downmixed into stereo and then recovered using cues encoded into the downmixed stereo. NPR participated in a trial of the "Neural Surround" technology in 2004, using the Harris NeuStar 5225. [ 2 ] XM HD Surround was based on the same technology. Neural provided its "Codec Pre-Conditioner" in at least two types of devices: a "NeuStar UltraLink digital radio audio conditioner" built as a physical device [ 2 ] and a "Neustar SW4.0" built as a piece of software on Windows XP . [ 3 ] The manual of the software indicates that the pre-conditioner works by analyzing the noise in each frequency bin and masking it so that it does not exceed predefined limits and overwhelm a codec. [ 4 ] Harris Broadcast acted as a redistributor of Neural technology. [ 5 ]
https://en.wikipedia.org/wiki/Neural_Audio_Corporation
Neural differential equations are a class of models in machine learning that combine neural networks with the mathematical framework of differential equations . [ 1 ] These models provide an alternative approach to neural network design, particularly for systems that evolve over time or through continuous transformations. The most common type, a neural ordinary differential equation (neural ODE) , defines the evolution of a system's state using an ordinary differential equation whose dynamics are governed by a neural network: d h ( t ) d t = f θ ( h ( t ) , t ) . {\displaystyle {\frac {\mathrm {d} \mathbf {h} (t)}{\mathrm {d} t}}=f_{\theta }(\mathbf {h} (t),t).} In this formulation, the neural network parameters θ determine how the state changes at each point in time. [ 1 ] This approach contrasts with conventional neural networks , where information flows through discrete layers indexed by natural numbers . Neural ODEs instead use continuous layers indexed by positive real numbers , where the function h : R ≥ 0 → R {\displaystyle h:\mathbb {R} _{\geq 0}\to \mathbb {R} } represents the network's state at any given layer depth t . Neural ODEs can be understood as continuous-time control systems , where their ability to interpolate data can be interpreted in terms of controllability . [ 2 ] They have found applications in time series analysis , generative modeling, and the study of complex dynamical systems . Neural ODEs can be interpreted as a residual neural network with a continuum of layers rather than a discrete number of layers. [ 3 ] Applying the Euler method with a unit time step to a neural ODE yields the forward propagation equation of a residual neural network: h ℓ + 1 = f θ ( h ℓ , ℓ ) + h ℓ , {\displaystyle \mathbf {h} _{\ell +1}=f_{\theta }(\mathbf {h} _{\ell },\ell )+\mathbf {h} _{\ell },} with ℓ being the ℓ-th layer of this residual neural network. While the forward propagation of a residual neural network is done by applying a sequence of transformations starting at the input layer, the forward propagation computation of a neural ODE is done by solving a differential equation. More precisely, the output h out {\displaystyle \mathbf {h} _{\text{out}}} associated to the input h in {\displaystyle \mathbf {h} _{\text{in}}} of the neural ODE is obtained by solving the initial value problem d h ( t ) d t = f θ ( h ( t ) , t ) , h ( 0 ) = h in , {\displaystyle {\frac {\mathrm {d} \mathbf {h} (t)}{\mathrm {d} t}}=f_{\theta }(\mathbf {h} (t),t),\quad \mathbf {h} (0)=\mathbf {h} _{\text{in}},} and assigning the value h ( T ) {\displaystyle \mathbf {h} (T)} to h out {\displaystyle \mathbf {h} _{\text{out}}} . In physics-informed contexts where additional information is known, neural ODEs can be combined with an existing first-principles model to build a physics-informed neural network model called universal differential equations (UDE). [ 4 ] [ 5 ] [ 6 ] [ 7 ] For instance, an UDE version of the Lotka-Volterra model can be written as [ 8 ] d x d t = α x − β x y + f θ ( x ( t ) , y ( t ) ) , d y d t = − γ y + δ x y + g θ ( x ( t ) , y ( t ) ) , {\displaystyle {\begin{aligned}{\frac {dx}{dt}}&=\alpha x-\beta xy+f_{\theta }(x(t),y(t)),\\{\frac {dy}{dt}}&=-\gamma y+\delta xy+g_{\theta }(x(t),y(t)),\end{aligned}}} where the terms f θ {\displaystyle f_{\theta }} and g θ {\displaystyle g_{\theta }} are correction terms parametrized by neural networks.
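As a concrete illustration of the residual-network connection described above, the following sketch integrates a toy neural ODE with the explicit Euler method, using a small untrained NumPy network as f_θ; with a unit time step each Euler update has exactly the residual form h ← h + f_θ(h, t). The weights, dimensions and step counts are arbitrary placeholders.

```python
import numpy as np

# Toy vector field f_theta(h, t): a tiny two-layer network with random (untrained) weights.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((16, 3)), np.zeros(16)
W2, b2 = rng.standard_normal((2, 16)), np.zeros(2)

def f_theta(h, t):
    x = np.concatenate([h, [t]])          # condition the dynamics on the layer depth t
    return W2 @ np.tanh(W1 @ x + b1) + b2

def neural_ode_forward(h_in, T=1.0, steps=100):
    """Solve dh/dt = f_theta(h, t) from t=0 to t=T with the explicit Euler method."""
    h, dt = np.array(h_in, dtype=float), T / steps
    for i in range(steps):
        h = h + dt * f_theta(h, i * dt)   # with dt = 1 this is one residual-network layer
    return h

print(neural_ode_forward([1.0, -0.5]))
```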
https://en.wikipedia.org/wiki/Neural_differential_equation
The neural efficiency hypothesis proposes that while performing a cognitive task, individuals with higher intelligence levels exhibit lower brain activation in comparison to individuals with lower intelligence levels. [ 1 ] This hypothesis suggests that individual differences in cognitive abilities are due to differences in the efficiency of neural processing. Essentially, individuals with higher cognitive abilities utilize fewer neural resources to perform a given task than those with lower cognitive abilities. [ 2 ] Since the late 19th century, there has been a growing interest among psychologists to understand the influence of individual differences in intelligence [ 3 ] and the underlying neural mechanisms of intelligence. [ 4 ] [ 5 ] The Neural efficiency hypothesis was first introduced by Haier et al. in 1988 through a Positron Emission Tomography (PET) study aimed at investigating the relationship between intelligence and brain activation. [ 6 ] PET is a type of nuclear medicine procedure that measures the metabolic activity of the cells of body tissues. [ 7 ] During the study, participants underwent PET of the head while completing different cognitive tasks such as Raven's Advanced Progressive Matrices (RAPM) and Continuous Performance Tests (CPT). The PET Scans showed that task performance activated specific regions of the participant's brain. Also, a negative correlation was found between brain glucose metabolism levels and intelligence test scores. The results of the study indicated that individuals with higher intelligence levels exhibited lower levels of brain glucose metabolism while solving cognitive tasks. [ 6 ] A few years later, Haier confirmed the results of the study by replicating it while considering learning as a factor. [ 8 ] The early studies mainly focused on certain cognitive tasks such as intelligence tests to test the hypothesis, potentially confounding efficiency during the intelligence-test performance with neural efficiency in general. [ 9 ] To overcome this limitation recent studies have refined and expanded the hypothesis by applying and testing it in various domains. In one study, researchers used a personal decision-making task to test the NEH which included questions about preferences like, “which profession do you prefer?”. Subjective preferences were used to force participants to make decisions, and preference ratings were used to manipulate the level of decisional conflict. The study found that individuals with higher intelligence test scores displayed less brain activity during simple tasks and greater brain activity during complex tasks, compared to individuals with lower intelligence test scores. This suggested that smarter people can use their brains more effectively by turning on only the areas that are required for the activity at hand. Also, more intelligent people displayed quicker reaction times during challenging tasks. These findings offered fresh evidence in support of the NEH and indicated that the neural efficiency of highly intelligent people can be applied to tasks that are different from typical intelligence tests. [ 9 ] Another study focused on understanding the effect of long-term specialized training on an athlete's neural efficiency, using functional neuroimaging while performing a sport-specific task. 
The results of this study showed that athletes with prolonged experience or “experts” in their domains performed better than novices in terms of speed, accuracy, and efficiency, with lower activity levels in the sensory and motor cortex and less energy expenditure. These findings supported the Neural Efficiency Hypothesis (NEH) and indicated that individuals who are highly skilled and experienced have more efficient brain functioning. [ 10 ] Recent studies on the Neural Efficiency Hypothesis have identified several limitations in the earlier research. They have also found several moderating variables, such as task complexity, sex and task type. The difficulty level of the task is one of the key moderating variables that influence the neural efficiency hypothesis. [ 1 ] In one study, it was found that the hypothesis only holds for easy tasks. For difficult tasks, intelligent individuals may show increased brain activation. The study revealed that participants with high IQ showed weaker activation during easy tasks but had a significant increase from easy to difficult tasks. This pattern was not observed in the average-IQ group. The study suggests that the relationship between intelligence and brain activation depends on the difficulty of the task. [ 11 ] Former studies have primarily used uniform tasks and have mainly focused on male participants. [ 12 ] One study found that neural efficiency was influenced by sex and task content. The study tried to examine possible sex differences in human brain functioning. It aimed at investigating the relationship between intelligence and cortical activation during cognitive performance in various versions of a task, using brain imaging techniques. The results of the study suggested that, in the verbal task, the females were more likely to produce cortical activation patterns consistent with the NEH, whereas in the figural task the expected neural activation was found primarily in the males rather than the female participants. This suggested the role of sex and task type as moderating variables. [ 13 ]
https://en.wikipedia.org/wiki/Neural_efficiency_hypothesis
Neural facilitation , also known as paired-pulse facilitation ( PPF ), is a phenomenon in neuroscience in which postsynaptic potentials (PSPs) ( EPPs , EPSPs or IPSPs ) evoked by an impulse are increased when that impulse closely follows a prior impulse. PPF is thus a form of short-term synaptic plasticity . The mechanisms underlying neural facilitation are exclusively pre-synaptic; broadly speaking, PPF arises due to increased presynaptic Ca 2+ concentration leading to a greater release of neurotransmitter-containing synaptic vesicles . [ 1 ] Neural facilitation may be involved in several neuronal tasks, including simple learning, information processing, [ 2 ] and sound-source localization. [ 3 ] Ca 2+ plays a significant role in transmitting signals at chemical synapses . Voltage-gated Ca 2+ channels are located within the presynaptic terminal. When an action potential invades the presynaptic membrane, these channels open and Ca 2+ enters. A higher concentration of Ca 2+ enables synaptic vesicles to fuse to the presynaptic membrane and release their contents ( neurotransmitters ) into the synaptic cleft to ultimately contact receptors in the postsynaptic membrane. The amount of neurotransmitter released is correlated with the amount of Ca 2+ influx. Therefore, short-term facilitation (STF) results from a build-up of Ca 2+ within the presynaptic terminal when action potentials propagate close together in time. [ 4 ] Facilitation of excitatory post-synaptic current (EPSC) can be quantified as a ratio of subsequent EPSC strengths. Each EPSC is triggered by pre-synaptic calcium concentrations and can be approximated by: EPSC = k([Ca 2+ ] presynaptic )^4 = k([Ca 2+ ] rest + [Ca 2+ ] influx + [Ca 2+ ] residual )^4 , where k is a constant. The facilitation of the second response relative to the first is then Facilitation = EPSC 2 / EPSC 1 − 1 = (1 + [Ca 2+ ] residual / [Ca 2+ ] influx )^4 − 1. Early experiments by Del Castillo & Katz in 1954 and Dudel & Kuffler in 1968 showed that facilitation was possible at the neuromuscular junction even if transmitter release does not occur, indicating that facilitation is an exclusively presynaptic phenomenon. [ 5 ] [ 6 ] Katz and Miledi proposed the residual Ca 2+ hypothesis. They attributed the increase in neurotransmitter release to residual or accumulated Ca 2+ ("active calcium") within the axon membrane that remains attached to the membrane's inner surface. [ 7 ] Katz and Miledi manipulated the Ca 2+ concentration within the presynaptic membrane to determine whether or not residual Ca 2+ remaining within the terminal after the first impulse caused an increase in neurotransmitter release following the second stimulus. During the first nerve impulse, Ca 2+ concentration was either significantly below or nearing that of the second impulse. When Ca 2+ concentration was approaching that of the second impulse, facilitation was increased. In this first experiment, stimuli were presented in intervals of 100 ms between the first and second stimuli. An absolute refractory period was reached when intervals were about 10 ms apart. To examine facilitation during shorter intervals, Katz and Miledi directly applied brief depolarizing stimuli to nerve endings. When increasing the depolarizing stimulus from 1 to 2 ms, neurotransmitter release greatly increased due to accumulation of active Ca 2+ . Therefore, the degree of facilitation depends on the amount of active Ca 2+ , which is determined by the reduction in Ca 2+ conductance over time as well as the amount of Ca 2+ removed from axon terminals after the first stimulus.
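The residual-calcium relation above is a one-line calculation; the sketch below evaluates the predicted facilitation for a few illustrative ratios of residual to influx Ca 2+ (the ratios are assumptions chosen only to show the fourth-power sensitivity).

```python
# Paired-pulse facilitation predicted from residual calcium:
#   facilitation = EPSC2/EPSC1 - 1 = (1 + residual/influx)**4 - 1
for ratio in [0.01, 0.05, 0.10, 0.20]:          # assumed residual/influx Ca2+ ratios
    facilitation = (1 + ratio) ** 4 - 1
    print(f"residual/influx = {ratio:.2f}  ->  facilitation = {facilitation:.2f}")
# Even a small residual fraction produces sizeable facilitation because of the 4th-power dependence.
```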
Facilitation is greatest when the impulses are closest together because Ca 2+ conductance would not return to baseline prior to the second stimulus. Therefore, both Ca 2+ conductance and accumulated Ca 2+ would be greater for the second impulse when presented shortly after the first. In the Calyx of Held synapse, short term facilitation (STF) has been shown to result from the binding of residual Ca 2+ to neuronal Ca 2+ sensor 1 (NCS1). Conversely, STF has been shown to decrease when Ca 2+ chelators are added to the synapse (causing chelation ) which reduce residual Ca 2+ . Therefore, "active Ca 2+ " plays a significant role in neural facilitation. [ 8 ] In the synapse between Purkinje cells , short-term facilitation has been shown to be entirely mediated by the facilitation of Ca 2+ currents through the voltage-dependent calcium channels . [ 9 ] Short-term synaptic enhancement is often differentiated into categories of facilitation , augmentation , and potentiation (also referred to as post-tetanic potentiation or PTP ). [ 1 ] [ 10 ] These three processes are often differentiated by their time scales: facilitation usually lasts for tens of milliseconds, while augmentation acts on a time scale on the order of seconds and potentiation has a time course of tens of seconds to minutes. All three effects increase the probability of neurotransmitter release from the presynaptic membrane, but the underlying mechanism is different for each. Paired-pulse facilitation is caused by the presence of residual Ca 2+ , augmentation likely arises due to increased action of the presynaptic protein munc-13, and post-tetanic potentiation is mediated by presynaptic activation of protein kinases. [ 4 ] The type of synaptic enhancement seen in a given cell is also related to variant dynamics of Ca 2+ removal, which is in turn dependent upon the type of stimuli; a single action potential leads to facilitation, while a short tetanus generally causes augmentation and a longer tetanus leads to potentiation. [ 1 ] Short-term depression (STD) operates in the opposite direction of facilitation, decreasing the amplitude of PSPs. STD occurs due to a decrease in the readily releasable pool of vesicles (RRP) as a result of frequent stimulation. The inactivation of presynaptic Ca 2+ channels after repeated action potentials also contributes to STD. [ 8 ] Depression and facilitation interact to create short-term plastic changes within neurons, and this interaction is called the dual-process theory of plasticity . Basic models present these effects as additive, with the sum creating the net plastic change (facilitation - depression = net change). However, it has been shown that depression occurs earlier on in the stimulus-response pathway than facilitation, and therefore plays into the expression of facilitation. [ 11 ] Many synapses exhibit properties of both facilitation and depression. In general, however, synapses with low initial probability of vesicle release are more likely to exhibit facilitation, and synapses with high probability of initial vesicle release are more likely to exhibit depression. [ 3 ] Because the probability of vesicle release is activity-dependent, synapses can act as dynamic filters for information transmission. [ 3 ] Synapses with a low initial probability of vesicle release act as high-pass filters : because the release probability is low, a higher-frequency signal is needed to trigger release, and the synapse thus selectively responds to high-frequency signals. 
Likewise, synapses with high initial release probabilities serve as low-pass filters , responding to lower-frequency signals. Synapses with an intermediate probability of release act as band-pass filters that selectively respond to a specific range of frequencies. These filtering characteristics may be affected by a variety of factors, including both PPD and PPF, as well as chemical neuromodulators . In particular, because synapses with low release probabilities are more likely to experience facilitation than depression, high-pass filters are often converted to band-pass filters. Likewise, because synapses with high initial release probabilities are more likely to undergo depression than facilitation, it is common for low-pass filters to become band-pass filters, as well. Neuromodulators, meanwhile, may affect these short-term plasticities. In synapses with intermediate release probabilities, properties of the individual synapse will determine how the synapse changes in response to stimuli. These changes in filtration affect information transmission and encoding in response to repeated stimuli. [ 3 ] In humans, sound localization is primarily accomplished using information about how the intensity and timing of a sound vary between the two ears. Neuronal computations involving these interaural intensity differences (IIDs) and interaural time differences (ITDs) are typically carried out in different pathways in the brain. [ 12 ] Short-term plasticity likely assists in differentiating between these two pathways: short-term facilitation dominates in intensity pathways, while short-term depression dominates in temporal pathways. These different types of short-term plasticity allow for different kinds of information filtration, thus contributing to the division of the two kinds of information into distinct processing streams. The filtering capabilities of short-term plasticity may also assist with encoding information related to amplitude modulation (AM). [ 12 ] Short-term depression can dynamically adjust the gain on high-frequency inputs, and may thus allow for an expanded high-frequency range for AM. A mixture of facilitation and depression may also assist in AM coding by leading to rate filtering.
https://en.wikipedia.org/wiki/Neural_facilitation
A neural processing unit ( NPU ), also known as AI accelerator or deep learning processor, is a class of specialized hardware accelerator [ 1 ] or computer system [ 2 ] [ 3 ] designed to accelerate artificial intelligence (AI) and machine learning applications, including artificial neural networks and computer vision . Their purpose is either to efficiently execute already trained AI models (inference) or to train AI models. Their applications include algorithms for robotics , Internet of things , and data -intensive or sensor-driven tasks. [ 4 ] They are often manycore designs and focus on low-precision arithmetic, novel dataflow architectures , or in-memory computing capability. As of 2024, a typical AI integrated circuit chip contains tens of billions of MOSFETs . [ 5 ] AI accelerators are used in mobile devices such as Apple iPhones and Huawei cellphones, [ 6 ] and personal computers such as Intel laptops, [ 7 ] AMD laptops [ 8 ] and Apple silicon Macs . [ 9 ] Accelerators are used in cloud computing servers, including tensor processing units (TPU) in Google Cloud Platform [ 10 ] and Trainium and Inferentia chips in Amazon Web Services . [ 11 ] Many vendor-specific terms exist for devices in this category, and it is an emerging technology without a dominant design . Graphics processing units designed by companies such as Nvidia and AMD often include AI-specific hardware, and are commonly used as AI accelerators, both for training and inference . [ 12 ] All models of Intel Meteor Lake processors have a built-in versatile processor unit ( VPU ) for accelerating inference for computer vision and deep learning. [ 13 ]
https://en.wikipedia.org/wiki/Neural_processing_unit
A neural substrate is a term used in neuroscience to indicate the part of the central nervous system (i.e., brain and spinal cord ) that underlies a specific behavior , cognitive process , or psychological state . [ 1 ] [ 2 ] Neural is an adjective relating to "a nerve or the nervous system ", [ 3 ] while a substrate is an "underlying substance or layer". [ 4 ] Some examples are the neural substrates of language acquisition, [ 5 ] memory, [ 6 ] prediction and reward , [ 7 ] pleasure , facial recognition , [ 8 ] envisioning the future, [ 9 ] intentional empathy, [ 10 ] religious experience, [ 11 ] spontaneous musical performance, [ 12 ] and anxiety. [ 13 ]
https://en.wikipedia.org/wiki/Neural_substrate
Neural synchrony is the correlation of brain activity across two or more people over time. In social and affective neuroscience , neural synchrony specifically refers to the degree of similarity between the spatio-temporal neural fluctuations of multiple people. This phenomenon represents the convergence and coupling of different people's neurocognitive systems, and it is thought to be the neural substrate for many forms of interpersonal dynamics and shared experiences. Some research also refers to neural synchrony as inter-brain synchrony, brain-to-brain coupling, inter-subject correlation, between-brain connectivity, or neural coupling. In the current literature, neural synchrony is notably distinct from intra-brain synchrony—sometimes also called neural synchrony—which denotes the coupling of activity across regions of a single individual's brain. Neural synchrony approaches represent an important theoretical and methodological contribution to the field. Since its conception, studies of neural synchrony have helped elucidate the mechanisms underlying social phenomena , including communication, narrative processing, coordination, and cooperation. By emphasizing the social dynamics of the brain, this area of research has played a critical role in making neuroscience more attuned to people's social proclivities—a perspective that is often lost on individual-level approaches to understanding the brain. Driven by the desire to understand the social nature of the human brain , the study of neural synchrony stems from social cognition , a subfield of psychology that explores how we understand and interact with other people through processes like mentalization or theory of mind . [ 1 ] Given that it relies on measuring brain activity , neural synchrony also has its roots in cognitive neuroscience . [ 2 ] Despite the growth of social cognition and cognitive neuroscience prior to the early 2000s, research into the brain neglected interpersonal processes, focusing mostly on the neural mechanisms of individuals' behaviors. [ 2 ] Furthermore, neuroscience research that did probe social questions only investigated how social processes affect neural dynamics in a single brain. [ 3 ] Considering that researchers clearly recognized how interpersonal interaction was fundamental to human cognition, the paucity of social and multi-brain neuroscience research represented a tension in the field. In response to the discrepancy between the complexity of social interaction and the single-brain focus of cognitive neuroscience, researchers called for a multi-person, interaction-oriented approach to understanding the brain. [ 1 ] [ 2 ] [ 4 ] [ 5 ] [ 6 ] In 2002, the American neuroscientist P. Read Montague [ 4 ] articulated the need to examine the neural activity of multiple individuals at one time. To this point, Montague and his colleagues wrote, "Studying social interactions by scanning the brain of just one person is analogous to studying synapses while observing either the presynaptic neuron or the postsynaptic neuron, but never both simultaneously." [ 7 ] They performed the first brain scan of more than one person by using functional magnetic resonance imaging (fMRI) to take simultaneous recordings of two people engaged in a simple deception game. While this study marked the first example of multi-brain neuroimaging, in 2005, King-Casas and others [ 8 ] combined neuroimaging with an economic exchange game to conduct the first study that directly compared neural activity between pairs of subjects. 
[ 3 ] Since then, multi-brain imaging studies have grown in popularity, leading to the formation of preliminary neural synchrony frameworks. [ 2 ] Early conceptualizations of neural synchrony, largely shaped by the work of Uri Hasson at Princeton University , were motivated by models of stimulus-to-brain coupling. In these models, aspects of the physical environment emit mechanical, chemical, and electromagnetic signals, which the brain receives and translates into electrical impulses that guide our actions and allow us to understand the world. [ 2 ] Researchers presumed that the synchronization of neural activity between two brains should leverage the same system that binds one's neural activity to environmental stimuli. If the stimulus is another person, then the perceptual system of one brain may couple with the behaviors or emotions of the other person, causing "vicarious activations" [ 9 ] that manifest as synchronized neural responses across perceiver and agent. [ 2 ] According to the theory, this process also occurs through more complex, synergistic interactions, especially when people communicate and convey meaning. [ 10 ] Over the last two decades, neural synchrony has become an increasingly common topic of study in social and affective neuroscience research, spurring conceptual and methodological development. Along with an emphasis on ecologically valid, naturalistic experimental designs, the focus on multi-brain neuroscience studies has increased researchers' ability to explore neural synchrony in social contexts. As a result, conceptualizations of neural synchrony have been expanded to incorporate a wider range of ideas, though it is often viewed as a neural correlate for two or more people's shared experiences. Studies now involve a variety of social processes, with applications spanning simple motor synchronization to classroom learning. [ 3 ] Notable methodological advancements have come from the evolution of multi-brain imaging techniques beyond fMRI, especially magnetoencephalography / electroencephalography (MEG/EEG) and functional near-infrared spectroscopy (fNIRS)—methods which afford more socially interactive experimental designs. [ 3 ] [ 11 ] These technologies are also complemented by comprehensive data processing techniques that are useful in multi-brain analyses, [ 12 ] [ 13 ] such as Granger causality [ 14 ] or Phase Locking Value (PLV). [ 15 ] As a progressively paradigmatic approach in social and affective neuroscience, neural synchrony undergirds the field's search for the brain basis of social interaction. [ 3 ] A 2022 study by the University of Helsinki measured brain synchronization among players during cooperative online video gaming . [ 16 ] The study of neural synchrony is predicated on advanced neuroimaging methods, particularly hyperscanning. Coined in 2002 by Montague et al., [ 4 ] hyperscanning refers to the method of simultaneously measuring the hemodynamic or neuroelectric responses of two or more brains as they engage with the same task or stimulus. [ 17 ] [ 18 ] [ 19 ] The ability to record time-locked activity from multiple brains makes hyperscanning conducive to exploring the variation in activity across brains. It also allows experimenters to examine various aspects of neural recordings in naturalistic scenarios, from low-level stimulus processing to high-level social cognition. [ 13 ] For these reasons, hyperscanning has helped foster a systematic investigation of interpersonal dynamics at the level of the brain. 
[ 19 ] [ 20 ] Though hyperscanning has become the most common imaging technique for studying neural synchrony, researchers do not necessarily need to scan brains simultaneously. Sometimes referred to as off-line measurement, or "pseudo-hyperscanning"; [ 20 ] this alternative approach follows the same basic premise as hyperscanning, except that participants' brain activity is recorded one at a time. Data from different scans of isolated participants are then analyzed to compare functional similarities during identical tasks or stimuli. [ 18 ] [ 19 ] Hyperscanning and off-line scanning methods can be achieved through common noninvasive hemodynamic or neuroelectric brain imaging techniques. A review of neural synchrony hyperscanning studies showed that the most prevalent methods are EEG, fNIRS, and fMRI, which account for 47%, 35%, and 17% of studies, respectively. [ 3 ] Each technique offers unique contributions to the understanding of neural synchrony given their relative advantages and limitations. [ 18 ] EEG measures the brain's electrical activity through the scalp. It is widely used to study neural synchrony because of its superior millisecond -range temporal resolution . [ 21 ] Though susceptible to head movements, EEG still allows for exploring neural synchrony through naturalistic designs where people can interact socially. [ 11 ] The downside to EEG is its relatively poor spatial resolution , which makes it difficult to elucidate spatial qualities of brain activation in social contexts. [ 18 ] fNIRS uses near infrared waves to measure the blood-oxygen-level-dependent (BOLD) response in the brain. It is an increasingly popular imaging method for neural synchrony studies because of its portability and motion tolerance , which makes it ideal for testing real-world social stimuli. [ 22 ] fNIRS only measures the cortical regions of the brain , and its temporal resolution is not as fine as EEG. However, the balance between spatial and temporal properties, combined with subjects' ability to move around and interact with relative freedom during scanning, qualify fNIRS as a versatile option for exploring neural synchrony. [ 3 ] fMRI uses magnetic resonance to measure the brain's BOLD response. The major advantage of fMRI is the precise spatial resolution. fMRI allows researchers to examine in-depth neurocognitive processes that occur across brains. However, fMRI has low temporal resolution, is highly sensitive to motion, and requires that subjects lie flat in a loud MRI machine while interacting with a screen. These factors pose limitations to the study of neural synchrony, which often calls for naturalistic environments and tasks that are representative of real-world social contexts. [ 3 ] [ 6 ] A standard approach to investigating neural synchrony, especially with data from naturalistic experimental designs, is inter-subject correlation (ISC). [ 23 ] [ 24 ] Often, ISC is the Pearson correlation, or robust regression, of spatio-temporal patterns of neural activity in multiple subjects. In ISC, an individual's brain responses are either correlated across the average of the other subjects in a leave-one-out analysis, or all pairs of subjects are correlated in a pairwise analysis. [ 13 ] This method leverages time-locked stimuli in order to understand how brain activity across participants relates to different parts of the task. 
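The leave-one-out ISC computation described above can be sketched in a few lines; here the data array is a hypothetical (subjects × regions × timepoints) set of response time courses, and the Pearson correlation is computed per region between each subject and the average of the remaining subjects.

```python
import numpy as np

def leave_one_out_isc(data):
    """data: array of shape (n_subjects, n_regions, n_timepoints).
    Returns an (n_subjects, n_regions) array of inter-subject correlations."""
    n_subj, n_reg, _ = data.shape
    isc = np.zeros((n_subj, n_reg))
    for s in range(n_subj):
        others = data[np.arange(n_subj) != s].mean(axis=0)   # average of the other subjects
        for r in range(n_reg):
            isc[s, r] = np.corrcoef(data[s, r], others[r])[0, 1]
    return isc

# Hypothetical example: 30 subjects, 10 regions, 200 timepoints of simulated responses.
rng = np.random.default_rng(0)
shared = rng.standard_normal((10, 200))                      # stimulus-driven component
data = shared + 0.5 * rng.standard_normal((30, 10, 200))     # plus idiosyncratic noise
print(leave_one_out_isc(data).mean(axis=0))                  # mean ISC per region
```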
Rather than focusing on the strength of activation in brain areas, ISC explores the variability in neural activity across subjects, [ 25 ] allowing researchers to probe the level of similarity or idiosyncrasy in people's brain responses. [ 26 ] Shared variance in neural activity is assumed to be indicative of similar processing of identical stimuli or tasks. As with the general linear model , it is important to compare ISC values to a null distribution, which can be derived from recordings of resting states or irrelevant stimuli. Because it depends on extended designs that allow for activity recording over time, ISC is especially conducive to social interaction studies, which makes it a powerful approach for exploring neural synchrony in social contexts. However, ISC depends on stimulus-driven responses, which poses difficulties for researchers interested in resting-state activity. [ 27 ] Recently, inter-subject representational similarity analysis (IS-RSA) has been put forward as a way to detect the individual differences, or “idiosynchrony,” across people experiencing naturalistic experimental stimuli. This analysis relates each subject's neural synchrony with the other subjects to known individual behavioral measures, allowing researchers to compare multi-person-level brain data with individual-level traits and behaviors. [ 13 ] [ 28 ] Neural synchrony is a relatively new area of study that affords a variety of approaches, and no prevailing paradigm exists to collect, analyze, and interpret the data. Many decisions, such as imaging techniques or analysis methods, depend on researchers’ goals. However, there are some generally agreed upon best practices when designing these experiments. For example, sample sizes of about 30 are necessary to acquire reliable and reproducible statistical ISC maps. [ 27 ] Furthermore, when studying shared responses, researchers typically prefer a strong stimulus that generates significant brain responses, making it easier to detect greater levels of neural synchrony across participants. The exception to this preference is when researchers are more interested in the individual differences that drive synchrony. In these cases, researchers should employ stimuli that are strong enough to evoke neural synchrony, yet modest enough to preserve the neural variability that can later be related to variability in behavioral measures. [ 29 ] [ 13 ] One of the biggest considerations for conducting neural synchrony studies concerns the ecological validity of the design. As an inherently social phenomenon, neural synchrony calls for multidimensional stimuli that emulate the richness of the social world. [ 17 ] [ 30 ] Furthermore, by the nature of how it is measured (through computing the variance in multiple brains' responses to a task over time), neural synchrony is particularly amenable to extended social stimuli. Ecological designs are notably difficult in most neuroimaging studies, yet they are especially important for capturing social processes, and they also play to the strengths and affordances of neural synchrony approaches. [ 17 ] Examining neural synchrony through multi-brain studies has offered insight into the shared and idiosyncratic aspects of human communication. As a potential neural mechanism for the effective transfer of information across brains, neural synchrony has shown how brain activity temporally and spatially couples when people communicate.
Synchrony during communication occurs in a number of brain frequencies and regions, notably the alpha and gamma bands, the temporal parietal junction , and inferior frontal areas. [ 18 ] In a seminal study, Stephens et al. [ 31 ] demonstrated this inter-brain link through an fMRI analysis of speakers and listeners. Using the speaker's spatial and temporal neural responses to model the listener's responses during natural verbal communication, they found that brain activity synchronized in dyads in both a delayed and anticipatory manner, but this synchrony failed to occur when subjects did not communicate (e.g., when the speaker spoke a language the listener did not understand). Greater synchrony across brains, especially in the predictive anticipatory responses, was associated with better scores on comprehension measures. Building on this work, other research has sought to pinpoint communicative factors associated with neural synchrony. By manipulating conversation modality and instruction, research has found that neural synchrony is strongest during face-to-face conversations that incorporate turn-taking behavior and multi-sensory verbal and nonverbal interaction. [ 32 ] [ 33 ] Network structure dynamics also play a role in neural synchrony, such that central figures, like conversation leaders, tend to show greater neural synchrony with other discussion partners than non-leaders do. [ 34 ] Neural synchrony is also found in nonverbal communication, such as hand gestures and facial expressions. An early study found synchronization across participants playing a game of charades. Using fMRI to record brain activity as people gestured or watched the gestures, researchers found synchronized temporal variation in brain activity in mirror neuron and mentalizing systems. [ 14 ] Another study showed that communicative behaviors like shared gaze and positive affect expression generated neural synchrony in romantic partners, though not in strangers. [ 35 ] As a whole, neural synchrony studies surrounding verbal, multi-sensory, and nonverbal communication demonstrate the potential of neural synchrony as a tool for exploring the underlying mechanisms of interpersonal communication. [ 2 ] Another focus of neural synchrony studies involves narrative processing. This direction of research has some crossover with neural synchrony studies of communication, but there remains distinct interest in the similarities and differences in how people process multimodal narrative information, such as watching movies, hearing stories, or reading passages. Importantly, narrative processing studies of neural synchrony observe hierarchical levels of processing that unfold over time, [ 36 ] [ 37 ] starting in areas responsible for low-level processing of auditory or visual stimuli. As semantic information becomes more salient in the narrative, synchronized processing moves to more integrative networks, such as the inferior parietal lobe or temporal parietal junction. [ 36 ] Research shows that neural synchrony is indicative of the similarity in people's narrative recall and understanding, even for ambiguous narratives. One study demonstrated this phenomenon using Heider and Simmel's [ 38 ] classic paradigm, where simple shapes move around the screen in a way that causes people to imbue the shapes with stories and social meaning. [ 39 ] Participants who interpreted the movement of shapes in similar ways showed greater neural synchrony in cortical brain regions.
This connection between neural synchrony and similarity in comprehension reliably occurs across other types of narratives, including listening to stories and free viewing of visual content, [ 40 ] [ 41 ] [ 23 ] and it persists throughout different stages of the narrative, such as consuming the story, recalling the story, and listening to another person recall the story. Together, these findings highlight neural synchrony as a reliable neural mechanism for the convergence of people's hierarchical narrative processing, suggesting that synchrony plays a critical role in how, whether, and why we see meaning in the world similarly. [ 42 ] [ 43 ] The pursuit of complex goals for individuals or groups depends on successful coordination, and neural synchrony provides a window into the underlying mechanisms of these processes as well. A review of hyperscanning research shows that neural synchrony approaches have explored coordination through a range of paradigms, including joint attention and coordinated movements, ideas, and tasks. [ 18 ] These findings also demonstrate synchronization across a variety of brain areas associated with sharing actions and mentalizing, namely the inferior and temporal parietal areas, as well as in the alpha band and other frequencies. Furthermore, converging evidence suggests that inter-brain models (i.e., neural synchrony) are more effective than intra-brain models at predicting performance for tasks requiring social coordination. [ 18 ] Understanding how coordination via joint attention relates to neural synchrony, and how this relationship drives performance, is of particular interest to researchers. Research shows that even simple social interactions, like attention convergence, can induce synchrony. For example, in a task where one participant had to direct another to a target location through eye gaze alone, requiring both participants to coordinate their eye movements, researchers found significant neural synchrony in the mentalizing regions of interacting pairs. [ 44 ] Other studies show strong neural synchrony during simple coordinated events like hand and finger movement imitation, [ 45 ] [ 46 ] humming, [ 47 ] and even eye-blinking. [ 48 ] Coordination studies also find neural synchrony in more complex forms of social coordination. A set of studies has demonstrated the prevalence of neural synchrony in music production while people coordinate rhythms and movements. Early studies showed that dyads of guitarists generate greater low-frequency-band neural synchrony when playing together than when playing solo. [ 49 ] Also, people who performed distinct roles in an intricate musical piece showed synchrony between brains during periods of coordination. [ 50 ] Another series of studies examined pilots and copilots in a flight simulator, finding that synchrony was strongest when the situation demanded more social coordination, such as during stressful scenarios or takeoff and landing. [ 51 ] [ 52 ] These findings implicate neural synchrony as a reliable correlate of social coordination, even when interactions call for coordination of various forms and complexities. [ 53 ] As measured through tasks that involve interactive decision-making and games, results from the field suggest a close association between neural synchrony and cooperation. Decision-making contexts and games that demand greater levels of social, high-level, and goal-directed engagement with other people are typically more conducive to neural synchrony.
[ 54 ] In this domain, researchers are particularly interested in how neural synchrony levels vary depending on whether people collaborate, compete, or play alone. [ 3 ] [ 11 ] For example, one study that employed a computer video game found high levels of neural synchrony, and better performance, across subjects when they played on the same team, but this effect disappeared when people played against each other or by themselves. [ 55 ] Similarly, researchers who administered a puzzle-solving task found neural synchrony when people worked as a team, yet synchrony decreased for the same people when they worked separately or watched others solve the puzzle. [ 56 ] Another study using a classic prisoner's dilemma game showed that participants experienced higher neural synchrony with each other in the high-cooperation-context conditions than they did in the low-cooperation-context conditions or when they interacted with the computer. [ 57 ] Subjective measures of perceived cooperativeness mediated this effect. Critically, the idea that neural synchrony is robust during cooperation, that more interactive and demanding cooperative tasks recruit greater neural synchrony, and that better cooperation often links to better performance is corroborated throughout the neural synchrony literature. [ 11 ] [ 17 ] Much of the neural synchrony literature examines how stimuli drive responses across multiple brains. Because these responses are often task-dependent, it becomes hard to disentangle state-level factors from individual-level factors (e.g., traits). However, creative experimental designs, access to certain populations, and advances in analysis methods, like IS-RSA, have offered some recent insight into how individual-level differences affect neural synchrony. [ 13 ] Using an ambiguous social narrative, Finn et al. [ 58 ] report that individuals with high-trait paranoia showed stronger neural synchrony with each other in socially-motivated cortical regions than they did with low-trait-paranoia subjects, a finding that also extends to the semantic and syntactic similarity of their narrative recall. Similarly, research shows that people's cognitive styles affect their level of synchrony with each other. In response to viewing a film, Bacha-Trams et al. demonstrated that holistic thinkers showed greater neural synchrony with each other, and presumably understood the film more similarly, than analytic thinkers did with each other. The two groups also exhibited within-group synchrony in different brain regions. [ 59 ] The idea that individual-level differences affect neural synchrony extends to clinical areas as well. Some research indicates that people with autism spectrum disorder exhibit distinct and diminished patterns of neural synchrony compared to people without the disorder. [ 60 ] [ 61 ] Clinically driven discrepancies in neural synchrony have also been shown to increase along with symptom severity. [ 62 ] Neural synchrony has major implications for the brain-as-predictor approach, which encourages the use of neuroimaging data to predict robust, ecologically valid behavioral outcomes. The brain-as-predictor approach has been effective in predicting outcomes across a variety of domains, including health and consumer choices. Given its social nature, neural synchrony has the potential to build on brain-as-predictor models by allowing for predictions about real-world social processes. Some researchers have started to employ this approach.
[ 63 ] In one study, members of a bounded social network watched a battery of short audiovisual movies in an MRI scanner. Hypothesizing that similarity in neural responses tracks with social closeness, the researchers used the strength of neural synchrony measures across participants to reliably predict real-world social network proximity and friendship. Another example of how neural synchrony can be leveraged to predict outcomes involves the use of neural reference groups, which can predict behaviors like partisan stance on controversial topics at above-chance levels. This approach requires identifying groups of people who perceive and respond to the world in similar ways, measuring their brain activity and dispositional attitudes related to any stimuli of interest, and then using a synchrony-based classification method to predict whether new individuals see the world similarly or differently depending on their synchrony with the reference group. Together, these findings illustrate the power and potential of neural synchrony to contribute to brain-as-predictor models, ultimately framing neural synchrony as a tool for understanding real-world outcomes above and beyond behavioral measures alone. [ 64 ]
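In outline, a synchrony-based classifier of the kind described above might look like the following sketch. This is a hedged illustration of the general idea, assuming a simple nearest-reference-group rule over mean pairwise correlations; the data shapes, group labels, and decision rule are assumptions for the example, not the published procedure of any cited study.

```python
# Sketch of neural-reference-group classification: assign a new individual
# to whichever group's members their time course synchronizes with more.
import numpy as np

def mean_synchrony(individual, group):
    """Average Pearson correlation between one response time course and
    each member of a reference group (group shape: (n_members, n_timepoints))."""
    return np.mean([np.corrcoef(individual, member)[0, 1] for member in group])

def classify_by_reference_group(individual, group_a, group_b):
    """Predict membership by the reference group with higher mean synchrony."""
    if mean_synchrony(individual, group_a) > mean_synchrony(individual, group_b):
        return "A"
    return "B"

# Toy data: two groups whose members share different stimulus interpretations.
rng = np.random.default_rng(1)
signal_a, signal_b = rng.standard_normal((2, 200))
group_a = signal_a + rng.standard_normal((10, 200))
group_b = signal_b + rng.standard_normal((10, 200))
newcomer = signal_a + rng.standard_normal(200)  # responds like group A
print(classify_by_reference_group(newcomer, group_a, group_b))  # -> "A"
```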
https://en.wikipedia.org/wiki/Neural_synchrony
Neural tissue engineering is a specific sub-field of tissue engineering . Neural tissue engineering is primarily a search for strategies to eliminate inflammation and fibrosis upon implantation of foreign substances. Often foreign substances in the form of grafts and scaffolds are implanted to promote nerve regeneration and to repair nerves of both the central nervous system (CNS) and peripheral nervous system (PNS) that have been damaged by injury. There are two parts of the nervous system: the central nervous system (CNS) and the peripheral nervous system (PNS) . General body functions are supervised by the CNS, which includes the brain and spinal cord . The PNS delivers motor signals from the CNS to control body activities and relays sensory data back to the CNS. The PNS is made up of nerve fibers arranged into nerves. The PNS includes the autonomic nervous system (ANS), whose sympathetic and parasympathetic branches preserve homeostasis and regulate involuntary physiological functions. [ 1 ] The "fight-or-flight" reaction is triggered by the sympathetic nervous system (SNS), which is derived from the thoracic and upper lumbar spinal cord . It readies the body for quick reactions under pressure. The parasympathetic nervous system (PSNS), on the other hand, is derived from the brainstem and sacral spinal cord and facilitates normal physiological processes by encouraging rest and energy conservation. One of the main nerves in the PSNS, the vagus nerve , originates in the brainstem and travels throughout the body, affecting different organs. It has sensory and motor fibers . Sensory messages tell the brain what the body is doing, allowing it to maintain homeostasis and control activities. Additionally, the vagus nerve influences emotions and memory through connections to several brain regions. The immune system's role is to identify and protect the body against external chemicals and infections. It is separated into innate and adaptive immunity and consists of immune organs, cells, and active ingredients. Remarkably, under certain circumstances, a variety of non-immune cells can display immunological properties. The immune system and the nervous system, which control body processes, are interdependent. [ 2 ] By controlling humoral chemicals on a systemic level, the central nervous system (CNS) affects the immune system. Sleep and other psychosocial variables can affect immunological responses. [ 3 ] Obesity and sleep deprivation , for example, can impair immunity, and long-term stress can erode immunological responses, making people more vulnerable to infections like COVID-19 . [ 4 ] In diseases like asthma that are made worse by psychological stress or depression , neuroimmune interactions are clearly seen. The immune response can impact brain activity, and neuroendocrine hormones control the release of cytokines . [ 5 ] Fever symptoms like drowsiness and decreased appetite are caused by proinflammatory mediators. Immune system organs receive autonomic innervation from the peripheral nervous system (PNS), which facilitates specialized communication between the two systems. Comprehensive information on bidirectional crosstalk pathways is frequently lacking, despite existing evidence of functional links between the nervous and immune systems. [ 1 ] Lymph nodes are essential components of the immune system because they serve both as collecting places for various immune cells and as filters for dangerous chemicals.
Their well-structured composition promotes efficient immune responses, protecting the body against external chemicals, infections, and malignancies. [ 6 ] Regional innervation of lymph nodes involves complex participation from the sympathetic and parasympathetic branches of the autonomic nervous system (ANS). [ 7 ] Furthermore, there is afferent innervation, which mediates immune responses in particular areas. Through the use of neuropeptides, nociceptors (specialized nerve endings that sense pain) help regulate the immune system. Distinct nerve fibers inside lymph nodes are identified by several markers, such as TH, anti-β2-AR, ChAT, and VAChT. Studies have shown that nerve fibers originate from the hilum, travel along blood vessels, cross medullary areas, and form subscapular plexuses. [ 7 ] Some limitations do, however, remain. These include the sparse identification of neurons and nerve fibers , the lack of a thorough examination of fine nerve fibers, the incomplete knowledge of innervation in particular regions, and the inadequate documentation in certain studies of close interactions between immune and non-immune cells and nerve fibers. [ 8 ] Neuroimmune interplay also suggests possible therapeutic approaches. [ 9 ] Novel approaches focusing on neuroimmune interactions may alter the course of disease or reduce symptoms. Targeting neuroimmune pathways is a holistic approach that seeks to affect both immune responses and brain functioning. The term " acupuncture " refers to the ancient Chinese medical technique of gently stimulating nociceptors and receptors with tiny needles inserted into certain body sites in order to treat various ailments, including pain and inflammation. [ 10 ] The FDA -approved therapy for depression and epilepsy , vagus nerve stimulation (VNS), may also be beneficial for non-neurological conditions such as rheumatoid arthritis and inflammatory bowel disease. Chemical therapies, such as peripheral nervous system (PNS) modulation, are being investigated for the treatment of infectious and inflammatory disorders, such as rheumatoid arthritis and issues associated with diabetes . [ 11 ] Targeting tumor innervation is being explored as a potential new treatment approach. Intratumoral innervation, which involves nerves inside or around tumors, influences the biology of cancer. [ 12 ] Peripheral neuropathy is one of the PNS-associated disorders that can be treated with immunotherapy manipulation. [ 13 ] According to many experimental researchers, extensive clinical studies are necessary to confirm the safety and effectiveness of these experimental techniques, and to secure regulatory approval, before they can become established therapies. [ 14 ] [ 11 ] The need for neural tissue engineering arises from the difficulty nerve cells and neural tissues have in regenerating on their own after neural damage has occurred. The PNS has some, but limited, regeneration of neural cells. Adult stem cell neurogenesis in the CNS has been found to occur in the hippocampus , the subventricular zone (SVZ), and the spinal cord. [ 15 ] CNS injuries can be caused by stroke , neurodegenerative disorders , trauma , or encephalopathy . A few methods currently being investigated to treat CNS injuries are implanting stem cells directly into the injury site, delivering morphogens to the injury site, and growing neural tissue in vitro with neural stem or progenitor cells in a 3D scaffold .
[ 16 ] The proposed use of electrospun polymeric fibrous scaffolds as neural repair substrates dates back to at least 1986, in an NIH SBIR application from Simon. [ 17 ] For the PNS, a severed nerve can be reconnected and reinnervated using grafts or guidance of the existing nerve through a channel. [ 18 ] Recent research into creating miniature cortices, known as corticopoiesis , and brain models, known as cerebral organoids , involves techniques that could further the field of neural tissue regeneration. The native cortical progenitors in corticopoiesis are neural tissues that could be effectively embedded into the brain. [ 19 ] Cerebral organoids are 3D human pluripotent stem cells developed into sections of the brain cortex, showing that there is a potential to isolate and develop certain neural tissues using neural progenitors. [ 20 ] Another situation that calls for the implantation of foreign material is the use of recording electrodes . Chronic electrode implants are a tool used in research applications to record signals from regions of the cerebral cortex . Research into the stimulation of PNS neurons in patients with paralysis and prosthetics could further the knowledge of reinnervation of neural tissue in both the PNS and the CNS. [ 21 ] This research is capable of making one difficult aspect of neural tissue engineering, functional innervation of neural tissue, more manageable. [ 21 ] There are four main causes of CNS injury: stroke , traumatic brain injury (TBI), brain tumors , and developmental complications. Strokes are classified as either hemorrhagic (when a vessel is damaged to the point of bleeding into the brain) or ischemic (when a clot blocks the blood flow through the vessel in the brain). When a hemorrhage occurs, blood seeps into the surrounding tissue, resulting in tissue death, while ischemic strokes result in a lack of blood flow to certain tissues. Traumatic brain injury is caused by external forces impacting the cranium or the spinal cord. Problems with CNS development result in abnormal tissue growth, thus decreasing the function of the CNS. [ 16 ] One method to treat CNS injury involves culturing stem cells in vitro and implanting the non-directed stem cells into the brain injury site. Implanting stem cells directly into the injury site prevents glial scar formation and promotes neurogenesis originating from the patient, but also runs the risk of tumor development, inflammation , and migration of the stem cells out of the injury location. Tumorigenesis can occur due to the uncontrolled nature of stem cell differentiation, inflammation can occur due to rejection of the implanted cells by the host cells, and the highly migratory nature of stem cells results in the cells moving away from the injury site, thus not having the desired effect there. Other concerns of neural tissue engineering include establishing safe sources of stem cells and getting reproducible results from treatment to treatment. [ 16 ] Alternatively, these stem cells can act as carriers for other therapies, though the positive effects of using stem cells as a delivery mechanism have not been confirmed. Direct stem cell delivery has an increased beneficial effect if the cells are directed to become neuronal cells in vitro . This way, the risks associated with undirected stem cells are decreased; additionally, injuries that do not have a specific boundary could be treated efficiently.
[ 16 ] Molecules that promote the regeneration of neural tissue, including pharmaceutical drugs , growth factors known as morphogens , and miRNA, can also be directly introduced to the injury site of the damaged CNS tissue. Neurogenesis has been seen in animals treated with psychotropic drugs that inhibit serotonin reuptake and thereby induce neurogenesis in the brain. When stem cells are differentiating, the cells secrete morphogens such as growth factors to promote healthy development. These morphogens help maintain homeostasis and neural signaling pathways , and they can be delivered into the injury site to promote the growth of the injured tissues. Currently, morphogen delivery has minimal benefits because of the interactions the morphogens have with the injured tissue. Morphogens that are not innate to the body have a limited effect on the injured tissue due to their physical size and limited mobility within CNS tissue. To be an effective treatment, the morphogens must be present at the injury site at a specific and constant concentration. miRNA has also been shown to affect neurogenesis by directing the differentiation of undifferentiated neural cells. [ 16 ] A third method for treating CNS injuries is to artificially create tissue outside of the body to implant into the injury site. This method could treat injuries that consist of large cavities, where larger amounts of neural tissue need to be replaced and regenerated. Neural tissue is grown in vitro with neural stem or progenitor cells in a 3D scaffold , forming embryoid bodies (EBs). These EBs consist of a sphere of stem cells, where the inner cells are undifferentiated neural cells and the surrounding cells are increasingly more differentiated. 3D scaffolds are used to transplant tissue to the injury site and to make the appropriate interface between the artificial and the brain tissue. The scaffolds must be biocompatible and biodegradable , fit the injury site, match existing tissue in elasticity and stiffness, and support growing cells and tissues. The combination of directed stem cells and scaffolds that support the neural cells and tissues increases the survival of the stem cells in the injury site, increasing the efficacy of the treatment. [ 16 ] Six different types of scaffolds are being researched for use in this method of treating neural tissue injury. These 3D scaffolds can be fabricated using particulate leaching , gas foaming , fiber bonding , solvent casting , or electrospinning techniques; each technique creates a scaffold with different properties. [ 22 ] Incorporation success of 3D scaffolds into the CNS has been shown to depend on the stage at which the cells have differentiated. Later stages provide more efficient implantation, while earlier-stage cells need to be exposed to factors that coerce them to differentiate and thus respond appropriately to the signals the cells will receive at the CNS injury site. [ 23 ] Brain-derived neurotrophic factor is a potential co-factor for promoting functional integration of ES-cell-derived neurons into CNS injury sites. [ 24 ] Trauma to the PNS can cause damage as severe as a severance of the nerve, splitting the nerve into a proximal and a distal section. The distal nerve degenerates over time due to inactivity, while the proximal end swells.
The distal end does not degenerate right away, and the swelling of the proximal end does not render it nonfunctional, so methods to reestablish the connection between the two ends of the nerve are being investigated. [ 18 ] One method to treat PNS injury is surgical reconnection of the severed nerve by taking the two ends of the nerve and suturing them together. When suturing the nerves together, the fascicles of the nerve are each reconnected, bridging the nerve back together. Though this method works for severances that create a small gap between the proximal and distal nerve ends, it does not work over gaps of greater distances due to the tension that must be put on the nerve endings. This tension results in nerve degeneration , and therefore the nerve cannot regenerate and form a functional neural connection. [ 18 ] Tissue grafts utilize nerves or other materials to bridge the two ends of the severed nerve. There are three categories of tissue grafts: autologous tissue grafts, nonautologous tissue grafts, and acellular grafts. Autologous tissue grafts transplant nerves from a different part of the patient's body to fill the gap between either end of the injured nerve. These nerves are typically cutaneous nerves , but other nerves have been researched as well with encouraging results. These autologous nerve grafts are the current gold standard for PNS nerve grafting because of their high biocompatibility, but there are issues concerning harvesting the nerve from the patient and storing large amounts of autologous grafts for future use. Nonautologous and acellular grafts (including ECM -based materials) are tissues that do not come from the patient, but instead can be harvested from cadavers (known as allogenic tissue ) or animals (known as xenogeneic tissue ). While these tissues have an advantage over autologous tissue grafts because the tissue does not need to be taken from the patient, difficulty arises from the potential for disease transmission and the resulting immunogenic problems . Methods of eliminating the immunogenic cells, leaving behind only the ECM components of the tissue, are currently being investigated to increase the efficacy of nonautologous tissue grafts. [ 18 ] Guidance methods of PNS regeneration use nerve guide channels to help axons regrow along the correct path, and may direct growth factors secreted by both ends of the nerve to promote growth and reconnection. Guidance methods reduce scarring of the nerves, increasing the ability of the nerves to transmit action potentials after reconnection. Two types of materials are used in guidance methods of PNS regeneration: natural-based materials and synthetic materials. Natural-based materials are modified scaffolds stemming from ECM components and glycosaminoglycans . Laminin , collagen , and fibronectin , which are all ECM components, guide axonal development and promote neural stimulation and activity. Other molecules that have the potential to promote nerve repair are hyaluronic acid , fibrinogen , fibrin gels, self-assembling peptide scaffolds, alginate , agarose , and chitosan . Synthetic materials also provide another method for tissue regeneration, in which the graft's chemical and physical properties can be controlled. Since the properties of a material may be specified for the situation in which it is being used, synthetic materials are an attractive option for PNS regeneration.
The use of synthetic materials comes with certain requirements: the graft material must be easy to form into the necessary dimensions, biodegradable, sterilizable, tear-resistant, and easy to handle surgically, with a low risk of infection and a low inflammatory response. The material must also maintain the channel during nerve regeneration. Currently, the most commonly researched materials are polyesters , but biodegradable polyurethane , other polymers , and biodegradable glass are also being investigated. Other possibilities for synthetic materials are conducting polymers and polymers biologically modified to promote cell axon growth and maintain the axon channel. [ 18 ] Extracellular vesicles (EVs) are bilayer-bound lipid particles that participate in intercellular communication by releasing a variety of substances, including nucleic acids , lipids , and proteins . [ 25 ] Exosomes , microvesicles , and apoptotic bodies are the three primary forms; each has unique properties. EVs have the potential to be used as therapeutic delivery vehicles [ 26 ] and diagnostic biomarkers , [ 27 ] and they play roles in immunological responses, cancer, tissue regeneration, and neurological diseases. Damaged neurons generate neuron-derived exosomes (NDEs), which can influence target cells by transferring a variety of cargos, including the Zika virus. [ 28 ] [ 29 ] Neurodegenerative illnesses are linked to NDEs. Immune cell exosomes (IEEs) have the potential to be used in immunotherapy and vaccine development, since they influence immune responses and interact with other cells. IEEs are produced by immune cells such as dendritic cells, macrophages, B cells, and T cells. EVs have been shown to promote neuroimmune crosstalk, allowing for both local and distant tissue and cell communication. [ 27 ] Because so many factors contribute to the success or failure of neural tissue engineering, many difficulties arise in using it to treat CNS and PNS injuries. First, the therapy needs to be delivered to the site of the injury. This means that the injury site needs to be accessed by surgery or drug delivery. Both of these methods have inherent risks and difficulties in themselves, compounding the problems associated with the treatments. A second concern is keeping the therapy at the site of the injury. Stem cells have a tendency to migrate out of the injury site to other sections of the brain, so the therapy is not as effective as it would be if the cells stayed at the injury site. Additionally, the delivery of stem cells and other morphogens to the site of injury can cause more harm than good if they induce tumorigenesis, inflammation, or other unforeseen effects. Finally, findings in laboratories may not translate to practical clinical treatments: treatments that are successful in a lab, or even in an animal model of the injury, may not be effective in a human patient. [ 30 ] Two models for brain tissue development are cerebral organoids and corticopoiesis . These models provide an in vitro model for normal brain development, [ 20 ] but they can also be manipulated to represent neural defects. Therefore, the mechanisms behind healthy and malfunctioning development can be studied by researchers using these models. [ 20 ] These tissues can be made with either mouse embryonic stem cells (ESCs) or human ESCs. Mouse ESCs are cultured with an inhibitor of the protein Sonic Hedgehog to promote the development of dorsal forebrain tissue and study cortical fate.
[ 19 ] This method has been shown to produce axonal layers that mimic a broad range of cortical layers . [ 31 ] Human ESC-derived tissues use pluripotent stem cells to form tissues on a scaffold, forming human EBs. These human ESC-derived tissues are formed by culturing human pluripotent EBs in a spinning bioreactor . [ 20 ] Targeted reinnervation is a method to reinnervate the neural connections in the CNS and PNS, specifically in paralyzed patients and amputees using prosthetic limbs. Currently, devices are being investigated that take in and record the electrical signals that are propagated through neurons in response to a person's intent to move. This research could shed light on how to reinnervate the neural connections between severed PNS nerves and how to connect transplanted 3D scaffolds to the CNS. [ 21 ]
https://en.wikipedia.org/wiki/Neural_tissue_engineering
Neural top–down control of physiology concerns the direct regulation by the brain of physiological functions (in addition to smooth muscle and glandular ones). Cellular functions include the immune system’s production of T-lymphocytes and antibodies , and nonimmune related homeostatic functions such as liver gluconeogenesis , sodium reabsorption , osmoregulation , and brown adipose tissue nonshivering thermogenesis . This regulation occurs through the sympathetic and parasympathetic systems (the autonomic nervous system ) and their direct innervation of body organs and tissues, which starts in the brainstem . There is also noninnervation hormonal control through the hypothalamus and pituitary ( HPA ). These lower brain areas are under the control of cerebral cortex ones. Such cortical regulation differs between the brain's left and right sides . Pavlovian conditioning shows that brain control over basic cell-level physiological function can be learned. The sympathetic and parasympathetic nervous systems and the hypothalamus are regulated by the higher brain. [ 1 ] [ 2 ] [ 3 ] [ 4 ] Through them, the higher cerebral cortex areas can control the immune system and the body’s homeostatic and stress physiology. Areas doing this include the insular cortex , [ 5 ] [ 6 ] [ 7 ] the orbital , and the medial prefrontal cortices . [ 8 ] [ 9 ] These cerebral areas also control smooth muscle and glandular physiological processes through the sympathetic and parasympathetic nervous systems, including blood circulation , urogenital and gastrointestinal [ 10 ] functions, pancreatic gut secretions, [ 11 ] respiration , coughing , vomiting , piloerection , pupil dilation, lacrimation and salivation . [ 12 ] The sympathetic nervous system is predominantly controlled by the right side of the brain (focused upon the insular cortex), while the left side predominantly controls the parasympathetic nervous system. [ 4 ] The cerebral cortex in rodents shows lateral specialization in its regulation of immunity, with immunosuppression being controlled by the right hemisphere and immunopotentiation by the left one. [ 9 ] [ 13 ] Humans show similar lateralized control of the immune system, from the evidence of strokes , [ 14 ] surgery to control epilepsy , [ 15 ] and the application of TMS . [ 16 ] The higher brain's top–down control of physiology is mediated by the sympathetic and parasympathetic nervous systems in the brainstem [ 1 ] [ 2 ] [ 3 ] [ 4 ] and by the hypothalamus. [ 1 ] [ 17 ] [ 18 ] The sympathetic nervous system arises in brainstem nuclei that project down to the intermediolateral columns of thoracolumbar spinal cord neurons in spinal segments T1–L2. The parasympathetic nervous system arises in the motor nuclei of cranial nerves III, VII, and IX (control over the pupil and salivary glands) and X (the vagus , with many functions including immunity), and in sacral spinal segments (gastrointestinal and urogenital systems). [ 12 ] Another route of control is exerted by the medial areas of the prefrontal cortex [ 1 ] [ 17 ] [ 18 ] upon the hypothalamus, which has a nonnerve control of the body through the hormonal secretions of the pituitary . The brain controls immunity both indirectly, through HPA glucocorticoid secretions from the pituitary, and through various direct innervations. [ 19 ] The liver receives both sympathetic and parasympathetic innervation. [ 32 ] The brains of animals can anticipatorily learn to control cell-level physiology, such as immunity, through Pavlovian conditioning .
In this conditioning, a neutral stimulus, saccharin , is paired in a drink with an agent, cyclophosphamide , that produces an unconditioned response ( immunosuppression ). After this pairing is learned, the taste of saccharin by itself creates immunosuppression through neural top-down control, as a new conditioned response. [ 42 ] This work was originally done on rats; however, the same conditioning can also occur in humans. [ 43 ] The conditioned response happens in the brain, with the ventromedial nucleus of the hypothalamus providing the output pathway to the immune system, the amygdala providing the input of visceral information, and the insular cortex acquiring and creating the conditioned response. [ 5 ] The production of different components of the immune system can be controlled as conditioned responses, and nonimmune functions can also be conditioned.
https://en.wikipedia.org/wiki/Neural_top–down_control_of_physiology
Exo-α-sialidase ( EC 3.2.1.18 , sialidase, neuraminidase ; systematic name acetylneuraminyl hydrolase ) is a glycoside hydrolase that cleaves the glycosidic linkages of neuraminic acids , hydrolysing the α-(2→3)-, α-(2→6)-, and α-(2→8)-linkages of terminal sialic acid residues. Neuraminidase enzymes are a large family, found in a range of organisms. The best-known neuraminidase is the viral neuraminidase , a drug target for the prevention of the spread of influenza infection. Viral neuraminidase was the first neuraminidase to be identified. It was discovered in 1957 by Alfred Gottschalk at the Walter and Eliza Hall Institute in Melbourne . [ 1 ] The viral neuraminidases are frequently used as antigenic determinants on the surface of the influenza virus. Some variants of the influenza neuraminidase confer more virulence to the virus than others. Other homologues are found in mammalian cells, where they have a range of functions. At least four mammalian sialidase homologues have been described in the human genome (see NEU1 , NEU2 , NEU3 , NEU4 ). Sialidases may act as pathogenic factors in microbial infections. [ 2 ] There are two major classes of neuraminidase, which cleave exo- or endo-poly-sialic acids . Sialidases, also called neuraminidases, catalyze the hydrolysis of terminal sialic acid residues from newly formed virions and from host cell receptors. [ 5 ] Sialidase activities include assistance in the mobility of virus particles through the respiratory tract mucus and in the elution of virion progeny from the infected cell. [ 6 ] [ 7 ] Swiss-Prot lists 137 types of neuraminidase from various species as of October 18, 2006. [ 8 ] Nine subtypes of influenza neuraminidase are known; many occur only in various species of duck and chicken. Subtypes N1 and N2 have been positively linked to epidemics in humans, and strains with N3 or N7 subtypes have been identified in a number of isolated deaths. CAZy defines a total of 85 glycosyl hydrolase families, of which families GH34 (viral), GH33 (cellular organisms), GH58 (viral and bacterial), and GH83 (viral) are the major families that contain this enzyme. GH58 is the only endo-acting family. [ 9 ] Influenza neuraminidase is a mushroom-shaped projection on the surface of the influenza virus. It has a head consisting of four co-planar and roughly spherical subunits, and a hydrophobic region that is embedded within the interior of the virus's membrane. It comprises a single polypeptide chain that is oriented in the opposite direction to the hemagglutinin antigen. The composition of the polypeptide is a single chain of six conserved polar amino acids, followed by hydrophilic, variable amino acids. β-Sheets predominate at the secondary level of protein conformation. The structure of trans-sialidase includes a catalytic β-propeller domain, an N -terminal lectin -like domain and an irregular β-stranded domain inserted into the catalytic domain. [ 10 ] The recent emergence of oseltamivir- and zanamivir-resistant human influenza A( H1N1 ) H274Y has emphasized the need for suitable expression systems to obtain large quantities of highly pure, stable, recombinant neuraminidase. Two separate artificial tetramerization domains, derived from yeast and from Staphylothermus marinus , facilitate the formation of catalytically active neuraminidase homotetramers and allow for secretion of FLAG-tagged proteins and further purification.
[ 11 ] The enzymatic mechanism of influenza virus sialidase has been studied by Taylor et al. The catalytic process has four steps. The first step involves the distortion of the α-sialoside from a 2 C 5 chair conformation (the lowest-energy form in solution) to a pseudoboat conformation when the sialoside binds to the sialidase. The second step leads to an oxocarbocation intermediate, the sialosyl cation. The third step is the formation of Neu5Ac, initially as the α-anomer, and the final step is mutarotation and release as the more thermodynamically stable β-Neu5Ac. [ 12 ] Neuraminidase inhibitors are useful for combating influenza infection: zanamivir , administered by inhalation; oseltamivir , administered orally; peramivir , administered parenterally , that is, through intravenous or intramuscular injection; and laninamivir , which is in phase III clinical trials. There are two major proteins on the surface of influenza virus particles. One is the lectin haemagglutinin protein, with three relatively shallow sialic acid-binding sites, and the other is the enzyme sialidase, with its active site in a pocket. Because of this relatively deep active site, in which low-molecular-weight inhibitors can make multiple favorable interactions, and because of tractable methods for designing transition-state analogues of sialoside hydrolysis, the sialidase is a more attractive anti-influenza drug target than the haemagglutinin. [ 13 ] After the X-ray crystal structures of several influenza virus sialidases became available, structure-based inhibitor design was applied to discover potent inhibitors of this enzyme. [ 14 ] The unsaturated sialic acid ( N -acetylneuraminic acid [Neu5Ac]) derivative 2-deoxy-2,3-didehydro- D - N -acetylneuraminic acid (Neu5Ac2en), an analogue of the sialosyl cation transition state, is believed to be the most potent inhibitor core template. Structurally modified Neu5Ac2en derivatives may give more effective inhibitors. [ 15 ] Many Neu5Ac2en-based compounds have been synthesized and tested for their influenza virus sialidase inhibitory potential. For example, the 4-substituted Neu5Ac2en derivatives 4-amino-Neu5Ac2en (compound 1), which showed two orders of magnitude better inhibition of influenza virus sialidase than Neu5Ac2en, and 4-guanidino-Neu5Ac2en (compound 2), known as zanamivir and now marketed as a drug for the treatment of influenza, were designed by von Itzstein and coworkers. [ 16 ] A series of amide-linked C9-modified Neu5Ac2en derivatives have been reported by Magesh and colleagues as NEU1 inhibitors. [ 17 ]
https://en.wikipedia.org/wiki/Neuraminidase
Neuraminidase inhibitors (NAIs) are a class of drugs which block the neuraminidase enzyme. They are a commonly used type of antiviral drug against influenza. Viral neuraminidases are essential for influenza reproduction, facilitating viral budding from the host cell. Oseltamivir (Tamiflu), zanamivir (Relenza), laninamivir (Inavir), and peramivir belong to this class. Unlike the M2 inhibitors, which work only against the influenza A virus, NAIs act against both influenza A and influenza B . [ 1 ] [ 2 ] [ 3 ] [ 4 ] The NAIs oseltamivir and zanamivir were approved in the US and Europe for the treatment and prevention of influenza A and B. Peramivir acts by binding strongly to the neuraminidase of influenza viruses and inhibits the enzyme for much longer than oseltamivir or zanamivir. [ 5 ] Laninamivir, by contrast, is slowly released from cells into the respiratory tract, resulting in long-lasting anti-influenza virus activity; the mechanism of laninamivir's long-lasting activity is thus fundamentally different from that of peramivir. [ 6 ] The efficacy of NAIs was highly debated in recent years. [ 7 ] [ 8 ] [ 9 ] However, after the pandemic caused by H1N1 in 2009, the effectiveness of early treatment with neuraminidase inhibitors in reducing serious cases and deaths was reported in various countries. [ 10 ] [ 11 ] [ 12 ] [ 13 ] In countries where influenza-like illness is treated with NAIs on a national level, statistical reports show a low fatality rate for symptomatic illness because of the universal implementation of early treatment with this class of drugs. [ 14 ] Although oseltamivir is widely used in these countries, there have been no outbreaks caused by oseltamivir-resistant viruses, and no serious illness caused by oseltamivir-resistant viruses has ever been reported. [ 14 ] The United States Centers for Disease Control and Prevention continues to recommend oseltamivir treatment for people at high risk for complications, for the elderly, and for those at lower risk who present within 48 hours of the first symptoms of infection. [ 15 ] Common side effects include nausea and vomiting . The abnormal behaviors reported in children after taking oseltamivir may be an extension of the delirium or hallucinations caused by influenza itself, which occur in the early stages of the illness, within 48 hours of onset. Therefore, children with influenza are advised to be observed by their parents until 48 hours after the onset of the illness, regardless of whether the child is treated with NAIs. [ 14 ]
https://en.wikipedia.org/wiki/Neuraminidase_inhibitor
Neuro-Information-Systems (NeuroIS) is a subfield of the information systems (IS) discipline, which relies on neuroscience and neurophysiological knowledge and tools to better understand the development, use, and impact of information and communication technologies. [ 1 ] [ 2 ] [ 3 ] The field was formally established at the International Conference on Information Systems (ICIS) in 2007. [ 4 ] Research evidence supports the idea that human behavior is influenced by individual factors (e.g., genetic predisposition) and environmental factors. [ 5 ] These influences affect the brain (e.g., its structure and processing mechanisms), which subsequently impacts the way in which information is processed. [ 4 ] By acknowledging this relationship between individual characteristics (e.g., experiences with e-commerce platforms that have led to changes in the brain through learning processes), environmental influences (e.g., characteristics of an IT artifact such as the usability of an e-commerce platform), and human behavior (e.g., purchasing behavior in an e-commerce context), NeuroIS seeks to understand the internal processes that are involved in the formation of human behavior related to information systems. By applying theories and tools from neuroscience and related fields, NeuroIS strives to make a number of important contributions. [ 4 ] In applying theories and tools from neuroscience, NeuroIS also draws from other reference disciplines and shares a close connection with sister disciplines that have likewise added these theories and instruments to their set of methods. [ 3 ] [ 4 ] Two types of neurophysiological data collection methods are commonly applied in NeuroIS research: psychophysiological tools and brain imaging tools. [ 6 ] The most commonly used psychophysiological tools in NeuroIS include the measurement of eye gaze behavior and pupil dilation (eye tracking), the measurement of electrodermal activity (skin conductance response), the measurement of muscular activity (electromyography), and the measurement of heart-related activity (electrocardiogram). [ 7 ] The main brain imaging tools used in NeuroIS are functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). [ 7 ] Since 2009, an annual conference has taken place in Austria to support NeuroIS research. From 2009 to 2017, this conference was called the Gmunden Retreat on NeuroIS and took place in Gmunden, Austria; since 2018, it has been called the NeuroIS Retreat and takes place in Vienna, Austria. [ 8 ] In 2018, the NeuroIS Society was founded in Austria to further support the growth of the field and collaboration among NeuroIS researchers. [ 9 ]
https://en.wikipedia.org/wiki/Neuro-Information-Systems